Considering the number and intensity of the attacks conservative pundits have launched at New York Times election prognosticator Nate Silver, he's probably beginning to feel a bit like a piñata. I sympathize because -- even though I'm still recovering from injuries received defending John McCain (docs say the physical therapy is going great, by the way, thank you so much for asking) -- I'm now assuming the mantle of Only Conservative to Defend Nate Silver.
Not a whole-hearted defense, mind you, but one has to call ‘em as one sees ‘em, and, in my view, criticisms such as this one by Josh Jordan (nom de Twitter: @numbersmuncher), attacking Silver/Piñata’s methods and the conclusions based thereon really miss the point. But that said, Paul Krugman’s recent defense of Silver is equally off-point. Sorry, guys, but you’re both wrong, and for the same reason: confusing probability with prediction.
To understand the difference – and Jordan and Krugman’s error – let’s look at a proper, real-world application of probabilities: Suppose you are a purchasing manager receiving a shipment of 500,000 ball bearings. Suppose further that 95 percent of the bearings must measure within a tolerance of .001 inch. Should you accept the shipment?
Of course, you are not going to measure each one of 500,000 ball bearings. Instead, you pull a random sample of 20, 50, 100, or whatever (the more, the better) bearings and measure each one. If they are all within tolerance, great. But what if one bearing out of, say, 50, is outside of tolerance?
That’s where probability saves the day: Using a simple formula, it is possible to calculate the probability that exactly one bearing in a random sample of 50 would be out of tolerance, given a shipment of 500,000 of which 95 percent should be within tolerance. If the probability is high, you accept the shipment.
But – and this is the important point – a high probability is not a guarantee. When, ultimately, every bearing is measured before installation, it is still possible that more than five percent of the bearings will be bad. But you – and this is the very important point – would keep your job.
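The “simple formula” the bearing example alludes to is the binomial distribution. As a minimal sketch (the 500,000-bearing shipment and 95 percent spec are the article’s numbers; the function names are my own), here is that calculation: with a shipment this large, sampling 50 bearings is effectively 50 independent draws with a 5 percent defect rate, so we can ask how likely it is that a sample of 50 contains exactly one out-of-tolerance bearing.

```python
# Sketch of the binomial calculation described above. Assumptions: the
# shipment just meets spec (defect rate p = 0.05), and a sample of 50 from
# 500,000 is well approximated by independent draws.
from math import comb

def binom_pmf(k: int, n: int, p: float) -> float:
    """Probability of exactly k defects in n independent draws."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

n, p = 50, 0.05                                      # sample size, defect rate
p_exactly_one = binom_pmf(1, n, p)                   # roughly 0.20
p_at_most_one = binom_pmf(0, n, p) + p_exactly_one   # roughly 0.28

print(f"P(exactly 1 defect in {n}) = {p_exactly_one:.3f}")
print(f"P(at most 1 defect in {n}) = {p_at_most_one:.3f}")
```

In other words, even a shipment that just meets the 95 percent spec produces a single bad bearing in a 50-bearing sample about a fifth of the time – unremarkable, so the purchasing manager accepts.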
Which brings us to Nate Silver and what is important to understand about his election model, which is this: When Silver writes that his model gives Barack Obama, say, a 70 percent chance of winning the 2012 election, what he is saying, in fact, is that if we could hold 100 identical presidential elections with the same candidates under identical circumstances, Obama should win, on average, 70 percent of them.
But of course, that’s impossible, and if you’ve been following my explanation, you understand that one election is a ridiculously small sample. If Obama loses, Silver’s response would simply be that this was one of the three-out-of-ten instances in which his model “predicted” an Obama loss. That’s what is, in this writer’s view, so pointless about the attacks on Silver: No matter who wins the election, there is no definitive way to prove Silver’s model wrong.
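The point about sample size can be made concrete with a toy simulation (this is purely illustrative and is not Silver’s model): treat each election as a coin flip that comes up Obama 70 percent of the time. Run it once and you learn almost nothing about whether 70 percent was the right number; run it many times and the frequency converges to the forecast.

```python
# Illustrative sketch, not Silver's model: a 70% forecast describes the
# long-run frequency over many hypothetical identical elections. A single
# election is a sample of size one.
import random

random.seed(42)  # fixed seed so the sketch is repeatable

def run_elections(p_win: float, trials: int) -> float:
    """Fraction of simulated elections won by a candidate with win probability p_win."""
    wins = sum(random.random() < p_win for _ in range(trials))
    return wins / trials

print(run_elections(0.70, 1))        # one election: the result is 0.0 or 1.0
print(run_elections(0.70, 100_000))  # many elections: close to 0.70
```

The single-trial run returns either 0.0 or 1.0 – and neither outcome tells you whether the 0.70 was well calibrated, which is exactly the problem with judging Silver by one election.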
The above is the long explanation. Here’s the short one: The day before the Supreme Court issued its Obamacare ruling, the political betting website Intrade showed the probability of the Court striking down the law at 75.5 percent.