
Like I said, I'm happy for him to get credit for nailing it. But I'm pretty sure that conclusion is based on a misunderstanding of his forecasts. Unless I missed it somewhere, he's never said "VA is going to Obama, IA is going to Obama, NC is going to Romney, etc." If he has, please provide a link.

For analogy, let's say I have a biased coin. I tell you that I'm pretty sure it comes up heads 75% of the time. Then we flip it and it comes up heads. Did I "nail it"? No. The coin coming up heads is stronger evidence in favor of my "model" (P[heads] = 0.75) than if it had come up tails, but it's pretty far from conclusive. The logic doesn't change if I tell you that the probability of heads is 90%, 99.9%, or whatever. (But if I say it's 99.9% and it comes up tails, I'll concede that the model's wrong. If the prediction is extreme enough, a single observation can invalidate the model.)
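To put a rough number on "pretty far from conclusive," here's a likelihood-ratio sketch of my own (not from the thread): one observed heads barely distinguishes the 75% model from a fair coin, while many flips distinguish them overwhelmingly.

```python
# Likelihood ratio for a single observed heads:
# biased model P(heads) = 0.75 vs. a fair-coin alternative P(heads) = 0.5.
p_biased = 0.75
p_fair = 0.5

lr_one_flip = p_biased / p_fair  # evidence from one heads
print(lr_one_flip)  # 1.5 -- weak evidence either way

# Contrast with repetition: 100 flips, 75 of them heads.
lr_many = (p_biased / p_fair) ** 75 * ((1 - p_biased) / (1 - p_fair)) ** 25
print(lr_many)  # hundreds of thousands to one -- repetition discriminates
```

The point of the contrast: the single-flip ratio of 1.5 is the entire evidential content of "the outcome he said was most likely happened."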

To extend the analogy, we only know whether I have a good model for this biased coin if we flip it a bunch of times and roughly 3/4 of the outcomes are heads. If we can't do that, because the coin flip is a one-off event (stretching the analogy quite a bit, but whatever), we could still tell whether I'm good at modeling coin flips: you give me a bunch of different coins, I give you a probability of heads for each one, and then we see whether the proportion of heads after we flip each one is close to the average of the probabilities I gave. (I.e., let B(i) be 1 if the ith coin is heads, 0 if it's tails. Then n^{-1/2} (B(1) - Prob[coin 1 is heads] + B(2) - Prob[coin 2 is heads] + ... + B(n) - Prob[coin n is heads]) obeys a CLT and -- if the probabilities are right -- it becomes normal with mean zero and known variance as n gets large.)
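You can check that CLT statistic empirically. A toy simulation (my own illustration, with made-up probabilities): assign each of 1000 coins a probability, compute n^{-1/2} * sum(B(i) - p(i)) over simulated flips, and verify the statistic has mean zero and variance equal to the average of p(1-p).

```python
import math
import random

random.seed(0)

def calibration_stat(probs):
    """n^{-1/2} * sum(B_i - p_i) for one simulated round of flips."""
    s = sum((1 if random.random() < p else 0) - p for p in probs)
    return s / math.sqrt(len(probs))

# A forecaster assigns a (correct) probability to each of 1000 coins.
probs = [random.uniform(0.5, 0.95) for _ in range(1000)]

# If the probabilities are right, the statistic is approximately
# normal with mean 0 and variance mean(p * (1 - p)).
stats = [calibration_stat(probs) for _ in range(2000)]
mean = sum(stats) / len(stats)
var = sum((x - mean) ** 2 for x in stats) / len(stats)
target_var = sum(p * (1 - p) for p in probs) / len(probs)
print(round(mean, 3), round(var, 3), round(target_var, 3))
```

If the forecaster's probabilities were systematically too low (the "always comes up heads" case below), the mean of the statistic would drift away from zero as n grows, which is exactly how miscalibration shows up.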

If I tell you that each probability is 0.75 or some other number greater than 0.5 but less than 1, and all the coins always come up heads, then I'm not "nailing it." But, I want to emphasize, it's premature to say that that's what's going on with Silver's predictions. And from what I've read of his blog, he completely understands this and explains it well.

edit: minor change for clarity in the first and third paragraphs.



I think there's a minor error in the analogy of "state results" to "coin flip".

In the coin flip scenario, each throw of the biased coins is independent. How one lands does not affect the others. The same is likely not true for how states end up voting.

This doesn't entirely invalidate your point (yes, if you "re-ran" the election repeatedly and he "nailed it" every time, we might conclude his probabilities were incorrect), but it does explain why, if 10 "60% Obama" states vote, we might not necessarily expect 6 of them to come up for Obama...


> In the coin flip scenario, each throw of the biased coins is independent. How one lands does not affect the others. The same is likely not true for how states end up voting.

For sure. If I have a point, it's that there's not enough information in any one election to claim that he "nailed it." Since the outcome he said was most likely seems to have occurred, it definitely supports his model and his approach. But you'd need to see a longer track record than exists to be sure. (Think of each coin flip as being a separate national election.)

Independence is a little bit of a red herring, though. There are LLNs and CLTs that allow for weak dependence between the observations, and any strong/systematic dependence between the states belongs in the model (and I think is in Silver's model, but I could be misremembering). And we would still expect, on average, that 6 of 10 "60% Obama" states come up for Obama. The interdependence is going to affect the variance but not the mean. So we should expect to see (say) 8 or more of the 10 going to Obama more frequently than a naive Binomial(10, 0.6) distribution would predict.
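A toy simulation makes the mean/variance split concrete. This is my own simplified correlation model (a shared national swing that shifts all states together), not Silver's actual one: the expected count stays at 6 either way, but the correlated tail is much fatter.

```python
import random

random.seed(1)

def simulate(trials=100_000):
    """Count Obama states out of 10, each with unconditional P = 0.6,
    under independence vs. a shared national swing."""
    indep, corr = [], []
    for _ in range(trials):
        # Independent: each state is its own 60% coin.
        indep.append(sum(random.random() < 0.6 for _ in range(10)))
        # Correlated: a national swing moves all 10 states at once;
        # p is 0.75 or 0.45 with equal chance, so the mean is still 0.6.
        p = 0.75 if random.random() < 0.5 else 0.45
        corr.append(sum(random.random() < p for _ in range(10)))
    return indep, corr

indep, corr = simulate()
mean_i = sum(indep) / len(indep)
mean_c = sum(corr) / len(corr)
tail_i = sum(x >= 8 for x in indep) / len(indep)
tail_c = sum(x >= 8 for x in corr) / len(corr)
print(mean_i, mean_c)  # both near 6
print(tail_i, tail_c)  # P(8+ states): correlated tail is noticeably fatter
```

Under independence, P(8 or more of 10) is about 0.17 for a Binomial(10, 0.6); under this shared-swing model it rises to roughly 0.28, even though both setups average 6 states.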


No, he never said "VA is going to Obama, ...". He said "the probability of VA going to Obama and IA going to Obama and NC going to Romney ..." is X%.

This used to be at the "Paths to Victory" on the NYT, but they've been collapsed now.

We can say that somebody "nails" a prediction through both specificity and repetition. Nate had a modest amount of both. 50/50 and 49/50 are pretty specific. He did that well in 2008 and 2012, and similarly in 2010.


The OP said, "Nate Silver correctly predicted every single state" which is mistaken. His model is not designed to make predictions like "50/50" and "49/50" and he never claims to make those predictions, so I honestly don't know what you mean by, 'we can say that somebody "nails" a prediction through both specificity and repetition. Nate had a modest amount of both.'


For every state he listed which direction he thought it would go, and an associated probability.

Every state went the direction that he thought it would go.



