> HuffingtonPost had a 98% chance of Hillary winning at the end. I really hope someone holds them accountable for that
What's the evidence for them being wrong?
To be clear, I'm not arguing that they were correct. But I see a trend of attacking any prediction (including those of 538) that rated a Trump victory at less than 50% likelihood as wrong, based on the evidence that he won.
If I predict there's a less than 1% chance of you winning the lottery, you winning the lottery doesn't prove that I was wrong.
I have no idea what the likelihood of a Trump victory was before the election. Maybe it was 1%, maybe it was 99%. But I think discussion of predictions shouldn't be results-oriented for single results.
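One standard way to judge probabilistic forecasts across many events, rather than from a single result, is the Brier score. A minimal sketch (the forecasts below are invented for illustration, not actual 2016 numbers):

```python
def brier_score(forecasts):
    """Mean squared error between predicted probability and the 0/1 outcome.
    Lower is better; always predicting 0.5 scores 0.25."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Hypothetical forecasts: (predicted probability of event, did it happen? 1/0)
confident = [(0.98, 0), (0.90, 1), (0.95, 1)]  # sharp, but one overconfident miss
hedged    = [(0.70, 0), (0.70, 1), (0.70, 1)]  # less sharp, smaller worst-case errors

print(brier_score(confident))
print(brier_score(hedged))
```

Here the single 98% miss dominates the confident forecaster's score, which is the intuition behind not grading a forecaster on one outcome but penalizing overconfidence across a track record.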
Personally I think 538's prediction was fine. In fact they explicitly called the situation where Trump would win despite the popular vote deficit.
As for the rest of the forecasters, the ones that gave Clinton a 99% chance to win: for a US presidential election, the prior probability of a given candidate winning is roughly 50% (historically speaking). So you would need some quite extraordinary evidence to push the posterior above 90% for either candidate.
In this case, the evidence was mostly the polls. To reach a 99% prediction for a candidate based on a poll, the poll would need to be at least 98% accurate (assuming that when it's wrong, it's equally likely to be wrong in either direction). Personally, I don't think polls are anywhere near that level of accuracy in general. Unfortunately I don't have any numbers on poll accuracy on hand - I'd love to see them if anyone has a citation.
There are more factors that affect an election, but by similar reasoning, to get a 98-99% prediction those factors would have to be 90%+ reliable as well. That is quite a tall order.
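The Bayesian arithmetic in the two paragraphs above can be sketched with a simple odds update. The 50% prior and the accuracy figures follow the comment; the key assumption, labeled in the code, is that the signals are independent, which is exactly what failed in 2016:

```python
def posterior(prior, accuracies):
    """Posterior probability that the candidate wins, after observing signals
    that all favor them, each with the given symmetric accuracy.
    ASSUMES the signals are independent of one another."""
    odds = prior / (1 - prior)
    for a in accuracies:
        odds *= a / (1 - a)  # likelihood ratio of one symmetric signal
    return odds / (1 + odds)

# A single 98%-accurate poll on a 50% prior only gets you to 98%:
print(posterior(0.5, [0.98]))      # 0.98
# Two independent 90%-reliable factors already push past 98%:
print(posterior(0.5, [0.9, 0.9]))  # ~0.988
```

So a 98-99% forecast is only defensible if each piece of evidence is both highly reliable and genuinely independent of the others.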
This is a case where the onus is on the forecaster to show that their number (98%, for example) is sound, since they're the ones making the extraordinary claim.
They did a post-mortem and pretty much just blamed the polls[1], but didn't really go into detail about why their model spit out that number. For what it's worth, I have a background in data analytics, and I would never create a model that returns a number in the 90's unless the forecasted results were well outside the estimated range of error. According to 538, based on what they were seeing, a Trump victory was well within the polling errors, and the polls were off by a similar amount in the 2012 election (but in Obama's direction). They had their forecast in the 70's (and the mid 60's just a few days before). Another modeling parameter that may have differed between the two models (HuffPo's and 538's) was the treatment of undecided voters. Silver was very open about how high the number of undecideds was, and how it was pushing up the uncertainty of his model. So it's possible that HuffPo was ignoring them.
Regardless, for something like an election, where so many factors can influence the outcome (turnout, late-breaking news, biased polls) and with so few previous events to base your model on (~12 elections' worth of data), you should heavily discourage your model from outputting such a high number. It is irresponsible considering the impact those kinds of forecasts can have on voter apathy and decision making, and the only reason I can think they did it was so that they could award themselves the 'most accurate forecaster' title after the election. Instead, they now get to take the 'worst forecaster' title, and the rest of us are stuck with Trump for 4 years.
The Economist had a very compelling graphic for how the polling went wrong [0]. Essentially, results were within the margin of error, but the errors (positive and negative) in a given state were strongly correlated with the percent of the white electorate with no college education.
538 made the compelling (and in the end, sadly accurate) statistical point that modeling errors in polls are very likely to be correlated across states, not independent like a lottery.
Nate Silver's reputation went up again for me in this election.
The man who completely called it wrong WRT Trump in the primaries, and I seem to remember seeing a tweet where he called both the final election and the World Series wrong (and the latter domain is where he first made his bones)?
I've gotten the general impression that while he might not have done as badly as others in the last-minute polling, in general he did not cover himself in glory in 2016, and here I rate it by an org's confidence in its own numbers.
But I didn't follow this at all closely, for it was obvious to me all or almost all the public polls were getting it seriously wrong, see e.g. https://news.ycombinator.com/item?id=12930950
Lotteries are picked randomly. Elections are chosen deliberately. It was not a toss of the dice that gave the win to Trump, so any poll that did not predict him as the winner is by definition wrong.
Or to question your logic another way, what would a poll mean in the context of a lottery?