Wednesday night, Nate Silver’s FiveThirtyEight released their much-anticipated Senate model, forecasting the results of November’s election. According to Nate Silver, it’s basically the same thing as the House model, except it looks at Senate seats.
As with the House model, there are three versions: Lite, Classic, and Deluxe (represented on their website with burger icons). The Lite version is just based on polling. The Classic adds “fundamentals” (historical trends, fundraising, etc.), and the Deluxe adds expert ratings. (The three levels matter more in the Congressional model where there are fewer polls for individual districts.)
So how does it look? Like this:
As of this writing, the Democrats have a 32.6% chance of gaining control of the Senate, or about 1 in 3. That’s the Classic version of the model. The Lite version, which uses just polls, shows a 28.7% chance of Democrats gaining control; the Deluxe shows a 32.4% chance.
It was always going to be difficult for Democrats to gain control of the Senate in 2018. That’s simply a function of the staggering of Senate classes: fewer Republican Senators are up for reelection this year.
However, if there is a blue wave this year, things could be different. There’s no standard definition of a wave (and I declined to define one in my last post on the subject), but you know one when you see it, because close races tend to tip one way. FiveThirtyEight identifies two toss-ups (ND and FL) and two that lean Republican (TX and TN). If three or all four of them tip the same way, that’s a wave, and the Democrats will gain control of the Senate.
By the way, the current FiveThirtyEight forecast for the House looks like this:
As of this writing, the Democrats have an 83.3% chance of gaining control of the House, or about 5 in 6. That, again, is the Classic model. The Lite and Deluxe models show 73.7 percent and 78.5 percent, respectively.
For what it’s worth, that’s up from last Thursday when the same model showed a 77.4% chance of Democrats taking control of the House.
If we’re looking at the Real Clear Politics generic House ballot as compared to a week ago, it’s about the same. Last Thursday it showed D +8.6, it’s currently D +8.2.
There’s also been a lot of noise in press reports this week about declining presidential approval ratings. That may or may not be the case, but it hasn’t shown up in the RCP average, which stands at -13 points. That sounds bad (and it is), but it’s only down a tenth of a point from last week.
Once again, here are a few responses to common objections:
Yeah, yeah, but all the polls said Trump wouldn’t win either.
That criticism doesn’t apply to every forecaster. FiveThirtyEight still has their 2016 election page up. You can see it here. They gave Trump a 28.6% chance of winning. Not zero percent, not 10 percent: 28.6. What they’ll tell you is that they took crap from people before election day for having it that high, but it was what their model predicted.
All these polls are biased against Republicans.
Not all pollsters are created equal. There are polling outfits that lean toward Democrats and there are some that lean toward Republicans. But the incentive structure in polling favors accuracy. If you’re interested in the quality of pollsters, their predictive value, and how they lean, FiveThirtyEight keeps a list of pollster ratings.
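FiveThirtyEight’s actual rating adjustments are much more involved, but purely as an invented sketch of the idea, down-weighting lower-quality pollsters in an average might look like this (the grades, weights, and margins below are all made up for illustration):

```python
# Invented sketch: weight each poll's margin by a hypothetical quality score.
# These letter grades and weights are NOT FiveThirtyEight's; they just show
# how a weighted average lets better pollsters count for more.

WEIGHTS = {"A": 1.0, "B": 0.7, "C": 0.4}  # hypothetical weight per grade

def weighted_margin(polls):
    """polls: list of (margin, grade) tuples -> quality-weighted average margin."""
    total_weight = sum(WEIGHTS[grade] for _, grade in polls)
    return sum(margin * WEIGHTS[grade] for margin, grade in polls) / total_weight

# Made-up D+ generic-ballot margins from pollsters of differing quality:
sample = [(8.6, "A"), (4.0, "C"), (11.0, "B")]
print(round(weighted_margin(sample), 2))  # → 8.52
```

The outlier C-grade poll at D+4 pulls the result down far less than it would in a simple unweighted average (which would be 7.87).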
Isn’t it interesting that it swung from D+4 to D+11 in one poll!
That’s why we look at the rolling average. An individual poll is going to vary. The rolling average smooths that out. Actually, if a firm publishes a poll with a swing like that, or aberrant results that seem to run contrary to other trends, it shows they’re doing good work, because they’re not modifying their methodology or monkeying with the poll to make the results fit better.
This poll is imperfect because of X! I disagree with its methodology.
All polls are imperfect. That’s why we take an average and look at it over time. That mitigates the imperfections of individual polls.
All polling is broken.
It really isn’t, though. When it’s done well and used correctly, it’s a very useful tool. Even in 2016, well-done polls were pretty accurate.