
If I am expected to make a better prediction of the election outcome by averaging your last 10 predictions than by taking your current prediction, then your current prediction is suboptimal.

This is Taleb's point, and it is solid as a rock.



But he provides no evidence that this is the case. The 2016 election was extremely volatile because of what actually happened during the campaign, not because of the modelling. There were several bombshell press events throughout the campaign that dramatically shifted the polls.


That’s exactly Taleb’s point, both in this instance and in every book Taleb has written. Those bombshells should have been included in the model. Fat-tail risks. If the modelling doesn’t include the potential for dramatic unknowns that DO happen, then the model is no good. In the Clinton/Trump election, with Trump a showman and name-caller focused on ratings and Clinton a career politician facing accusations of dirt, the model should expect that “big things can still happen today, tomorrow, next week,” and the absence of big events yesterday or last election holds little value relative to the probability of a big event happening in the next 24 hours.

Edit: I’m getting downvoted, I’m guessing, because I said Clinton has lots of dirt and called Trump obsessive; I’ve revised the comment to be less politically accusative. I’m not concerned with the politics, just interested in the obsession with these “predictions.”


You’re not getting downvoted for saying mean things about the Clintons; don’t try and be a victim here. People disagree with your take on Taleb vs. Silver.


How are you sure?


Yeah, that was my reaction: maybe, maybe not. I just decided to move along. I know it’s uncouth and against the rules to comment on downvotes on HN, but for the first time I seem to have been downvoted several times here in the last couple of weeks. I can’t shake the feeling that I’m receiving downvotes because people want to suppress different views on certain issues. My comments here are substantive and thoughtful: I read this article, have discussed Nate’s projections at length with friends, and have read all of Taleb’s books except his latest. My initial comment may be wrong and worthy of rebuttal, but it’s not downvote-wrong. (dang, I won’t comment on downvotes again! Sorry!)


> Fat tail risks

The 538 model does have fat tails, and besides, adding even fatter tails wouldn't address Taleb's fundamental criticism.

> Edit- I’m getting downvoted im guessing because I said Clinton has lots of dirt and called Trump obsessive, I’ve revised the comment to be less politically accusative. I’m not concerned with the politics, just interested in the obsession with these “predictions”

The 538 model does have fat tails.


> If the modeling doesn’t include the potential for dramatic unknowns that can DO happen, then the model is no good.

Why do you think it didn't? It took a major news event breaking at exactly the right time (too early, and people would have realized it was meaningless; too late, and it wouldn't have had time to get out). And even with that, Trump barely eked out a win. That seems unlikely-but-not-impossible, which matches 538's estimates.


I disagree that it’s unlikely. Highly paid, brilliant people are pitted against each other with massive stakes; it’s very likely that something bizarre happens. And there are still thousands of other things that COULD have happened but didn’t: a candidate removed by assassination, car wreck, illness, fatigue, an enemy attack on the state, a pandemic, on and on, any of which would have a massive impact on the state of things. The model was misleading, which suggests it didn’t take into account the Trump team’s plan or the voters’ actions.

I really don’t know how to value/process information like “Clinton 90% most likely to win” followed by “Clinton loses in hotly contested election.” How do you get from A to B? The outcome of that election is heavily into “butterfly effect” territory; Taleb says the model should never have been so confident, and I would agree. Was it bad input into the model, or just a bad model because it relies on inputs subject to bias? I don’t know, but the output is certainly less valuable than the attention it’s receiving. It seems largely academic, and worthwhile, but not worthy of broad attention outside of quant circles. (Nate Silver’s 538 is a popular topic around me, deeply non-quant territory, self included.)


> It’s very likely that something bizarre happens. And there are still thousands of other things that COULD happen that didn’t, like a candidate getting removed by assassination, car wreck, illness, fatigue, enemy attack on the State, pandemic, on and on, that will have a massive impact on the state of things.

Yes, there are lots of things that could happen, but they're all pretty unlikely. A huge impact times a low probability doesn't affect the outcome much.
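A rough back-of-the-envelope sketch makes the point (all numbers invented for illustration, not taken from any real model): even a shock that would swing the race by 50 points barely moves the headline number when its probability is one in ten thousand.

```python
# Hypothetical numbers, purely for illustration.
p_event = 1e-4    # chance of a campaign-ending shock (one in 10,000)
swing = 0.50      # how much such a shock would cut the win probability
baseline = 0.70   # headline forecast assuming no shock

# Law of total probability: weight each branch by its likelihood.
adjusted = (1 - p_event) * baseline + p_event * (baseline - swing)
print(round(adjusted, 5))  # 0.69995 -- the headline barely moves
```

The huge-impact branch contributes only `p_event * swing` = 0.00005 to the headline probability, which is why low-likelihood catastrophes widen the tails without shifting the central forecast much.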

> I really don’t know how to value/process information like “Clinton 90% most likely to win,” and “Clinton loses in hotly contested election.” How do you get from A to B.

Well, it helps to not start from "Clinton 90% most likely to win"—if I remember correctly (which I may not), the final odds for Clinton were in the 65%-70% range.

> Taleb says model should never have been so confident. I would agree.

538 was not especially confident.

> the output information is certainly less valuable than the attention it’s receiving. Seems largely academic, and worthwhile, but not worthy of broad attention outside of the quant circles.

Maybe, but it's popular because it's something that people want to know.


I was reading from the article: “Take a look at FiveThirtyEight’s forecast from the 2016 presidential election, where the probability of Clinton winning peaked at 90%“


> I disagree that it’s unlikely. Highly paid, brilliant people are pitted against each other with massive stakes.

By that logic, polls would be off every year in most races. But they're not. There are lots and lots of years where polls are highly predictive, including 2008, 2010, 2012, and 2018.

> And there are still thousands of other things that COULD happen that didn’t, like a candidate getting removed by assassination, car wreck, illness, fatigue, enemy attack on the State, pandemic, on and on, that will have a massive impact on the state of things.

It's unclear to me in which direction any of those things would push voters, to be honest.

If a guy in a MAGA hat assassinated Biden while he sat in Church, maybe one thing would happen. If a black bloc assassinated Trump while he walked down a suburban street then something else might happen. It's unclear to me that either of those scenarios is particularly likely, and it's also unclear to me which of those two scenarios is more likely than the other. Even 0.01% seems high for either? And they seem equally likely? So I guess add fat tails to both sides of the distribution. Which is what 538 does.

At work, in an area much more boring and less high-stakes than election modeling, we do our best to actively track these sorts of "out-of-distribution scenarios" and have a "Conservative Human Oversight Mode" the model gets pushed into whenever something crazy is happening. That mode does get activated! For us, getting rid of the model because it fails spectacularly every year or two would be economically idiotic. IDK what Taleb would do in our case, but I do know his hedge fund failed. WRT election models, I expect 538 would probably put a big warning banner on their forecast -- or even take it down -- if one of the candidates were assassinated. Which is sort of equivalent to "monitor for out-of-distribution and switch to human mode".

> I really don’t know how to value/process information like “Clinton 90% most likely to win,” and “Clinton loses in hotly contested election.” How do you get from A to B.

Silver's model gave Clinton a 70% chance, not a 90% chance.

On the night before the 2016 election, Silver described how you might get from Clinton's 70% odds to a Trump win [1]: a larger-than-average polling error and undecideds breaking heavily for Trump. Which is exactly what happened.

(Read the headline of [1] again.)

Again, divorce yourself from the emotion of politics and personalities, and just treat it as another statistical forecast. It is what it is: not omniscient or genius, but a decent piece of software that does what it's supposed to do.

> I don’t know but the output information is certainly less valuable than the attention it’s receiving.

I tend to agree. I also think Taleb's rank skepticism of these models gets more attention than it deserves. Like I said in my original post, this whole contest is an intellectually boring fight between equally big personalities. It's entertainment for politics junkies.

> but not worthy of broad attention outside of the quant circles.

The one thing I appreciate about 538 is that they do pour a ton of resources into explaining -- in lay terms -- how their model works and what it does and doesn't account for. I'm not aware of any other mass-consumed statistical model whose authors have put so much effort into explaining it for those willing to listen; maybe weather and climate models. I appreciate this because it gives me a touchstone when explaining work stuff to non-technical stakeholders who happen to listen to the 538 podcast.

Anyways, junkies will be junkies and Taleb/Silver are their dealers. Point twitter and news sites at 127.0.0.1 in your /etc/hosts and go buy a nice bottle of scotch. It's going to be a long week.

--

[1] https://fivethirtyeight.com/features/final-election-update-t...


> If I am expected to make a better prediction of the election outcome by averaging your last 10 predictions than by taking your current prediction, then your current prediction is suboptimal.

That's not how poll averaging works. Or, at least, that's not how poll averaging works in the 538 model.


That's not what the OP was saying. The argument is if somebody can take Silver's model's predictions over time and produce a better next estimate than it, then Silver's model is incoherent about its own beliefs.
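That coherence argument can be made concrete with a small simulation (a sketch with invented parameters, not a claim about 538's actual model). A forecaster that reports its true belief each day gains nothing from smoothing its own history, while one that adds noise on top of its belief is beaten, in Brier score, by a trailing average of its own last reports:

```python
import random

random.seed(0)

N_SIMS, N_DAYS, NOISE = 20000, 10, 0.15  # invented parameters

def brier(forecast, outcome):
    """Squared error of a probability forecast against a 0/1 outcome."""
    return (forecast - outcome) ** 2

noisy_current = noisy_smoothed = calm_current = calm_smoothed = 0.0
for _ in range(N_SIMS):
    p = random.random()                        # true win probability
    outcome = 1.0 if random.random() < p else 0.0
    # Calibrated forecaster reports p every day; noisy one adds jitter.
    reports = [min(1.0, max(0.0, p + random.gauss(0.0, NOISE)))
               for _ in range(N_DAYS)]
    noisy_current += brier(reports[-1], outcome)
    noisy_smoothed += brier(sum(reports) / N_DAYS, outcome)
    calm_current += brier(p, outcome)
    calm_smoothed += brier(p, outcome)         # averaging identical reports gives p

# Smoothing beats the noisy forecaster's latest number...
print(noisy_smoothed / N_SIMS < noisy_current / N_SIMS)
# ...but gains nothing against the calibrated one.
print(calm_smoothed == calm_current)
```

The averaging helps only because it cancels removable noise; a forecast that already tracks its best current estimate leaves nothing for the average to cancel, which is the sense in which a smoothable forecast is "incoherent about its own beliefs."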



