Polling Perplexity


Almost every day, I check the Presidential polls at RealClearPolitics, and then I shake my head. Ordinarily, there is some variation. This year, however, the differential is dramatic. Right now, for example, CNN/ORC has Clinton ahead by five points. Rasmussen Reports has Trump ahead by two. IBD/TIPP has it all tied up. The ABC News Tracking Poll has Hillary ahead by a whopping twelve, and the LA Times Tracking Poll (not listed by RealClearPolitics) has her ahead by one point.

There may be some method to this madness. I can think of two alternative explanations. The first is that the pollsters do not know what they are doing; the second is that some fancy footwork is going on.

It is easy to see why the pollsters might be baffled. When they do a poll, they ordinarily take a sample, and then they make adjustments after comparing their sample with the population (i.e., either the general population or the voting population). They want their sample to be representative of women and men; the various ethnic groups; Catholics, Protestants of various stripes, Jews, Muslims, Hindus, and the like; Republicans, Democrats, and Independents; and so forth and so on. So they weight the sample in light of these categories to make sure that it is representative. In ordinary circumstances, this is tolerably easy to do. When the world is in flux, a lot of guesswork is involved. This year there will be Republicans voting for Hillary Clinton and Democrats voting for Donald Trump. The pollsters all note this, and they try to adjust. Polling is not a science. It is an art. So the differential could be due to the fact that some of the pollsters are — in all honesty — making the wrong assumptions.
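That weighting step can be sketched in a few lines of Python. Everything below is hypothetical: the categories, sample counts, and support rates are invented for illustration, and real polls weight on many crossed demographic cells at once, not a single variable.

```python
# Hypothetical sketch of post-stratification weighting on a single
# variable (party ID). All numbers are invented for illustration.

sample = {"Dem": 450, "Rep": 350, "Ind": 200}              # respondents polled
population = {"Dem": 0.36, "Rep": 0.33, "Ind": 0.31}       # assumed electorate shares
clinton_support = {"Dem": 0.90, "Rep": 0.07, "Ind": 0.42}  # observed in each group

n = sum(sample.values())

# Unweighted estimate: the raw sample average.
raw = sum(sample[g] * clinton_support[g] for g in sample) / n

# Weight each group so the sample matches the assumed population shares.
weights = {g: population[g] / (sample[g] / n) for g in sample}
weighted = sum(sample[g] * weights[g] * clinton_support[g] for g in sample) / n

# With Democrats overrepresented in this sample (45% vs. an assumed 36%),
# weighting pulls Clinton's number down.
print(f"raw: {raw:.1%}  weighted: {weighted:.1%}")
```

Two pollsters running this same arithmetic on identical interviews, but with different assumed population shares, will report different toplines, which is one mundane way the spread in the RealClearPolitics table can arise.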

The other possibility is that we are on the receiving end of a massive con — and that the polling results are designed to encourage or discourage voting on the part of the supporters of one candidate or the other.

Here, Wikileaks may be of use. As Tyler Durden at Zerohedge informs us, the recent dump of emails shows that John Podesta received a message some time ago from Atlas Reports detailing how to produce polls of use to the Clinton campaign by way of oversampling certain groups, and Durden points out that recent polls by Reuters, Pravda-on-the-Potomac, and, yes, ABC News have greatly oversampled Democrats.

The name of the game could be to subvert the Republicans and depress voting for their candidates by conveying to the general public that the election is over, that Trump has already lost, and that there is no point in turning out. Would such respected outfits as CNN, Reuters, Pravda-on-the-Hudson, and ABC News be party to such a maneuver?

Would these and other news outlets work closely in tandem with the Clinton campaign to get the Republicans to nominate Donald Trump, to protect Hillary Clinton from interrogation, and to trash Trump after he got the nomination?

Of course not. That would be un-American. Right?

This year, I have no idea where the corruption stops. We can trust the FBI. Right?

There are 51 comments.

  1. Stina Inactive

    I know only a small subset of the population votes and, even then, a much smaller group pays attention. I always wondered at the argument that polls can affect voting by making people want to be on the winning team.

    I just want to vote for who I think is best. If he wins, great. I especially hate the strategic maneuvering in primaries to get the “most winnable” candidate.

    • #1
    • October 24, 2016, at 4:35 PM PDT
  2. Hammer, The Member

    Wouldn’t that also depress democrat voter turnout if people think it is safely in the bag? Seems a dangerous strategy in a close election.

    • #2
    • October 24, 2016, at 4:37 PM PDT
  3. Bereket Kelile Member
    Ricochet Charter Member

    A poll is like a forecast model, which has a set of built-in assumptions. The problem with evaluating the public polling is that we often don’t have access to that information. The LA Times is more of an outlier (pun intended) in that they make their data completely available to anyone.

    There’s also what’s called the herding effect, where polls that generate results which differ dramatically from other polls are adjusted to fall more in line with others. It’s a kind of confirmation bias. I think having a set of polls with very different results is better because it makes it easier to discriminate. When the herd uniformly moves in one direction then it’s hard to isolate the impacts of various factors, such as sampling, data collection method, or weighting.

    • #3
    • October 24, 2016, at 4:42 PM PDT
  4. Paul A. Rahe Contributor

    RyanM: Wouldn’t that also depress democrat voter turnout if people think it is safely in the bag? Seems a dangerous strategy in a close election.

    You have a point. But people like to jump on the bandwagon. No one much likes to be on the losing side.

    • #4
    • October 24, 2016, at 4:43 PM PDT
  5. Paul A. Rahe Contributor

    Bereket Kelile: A poll is like a forecast model, which has a set of built-in assumptions. The problem with evaluating the public polling is that we often don’t have access to that information. The LA Times is more of an outlier (pun intended) in that they make their data completely available to anyone.

    There’s also what’s called the herding effect, where polls that generate results which differ dramatically from other polls are adjusted to fall more in line with others. It’s a kind of confirmation bias. I think having a set of polls with very different results is better because it makes it easier to discriminate. When the herd uniformly moves in one direction then it’s hard to isolate the impacts of various factors, such as sampling, data collection method, or weighting.

    Thanks for this.

    • #5
    • October 24, 2016, at 4:44 PM PDT
  6. Hammer, The Member

    After reading this article, I went over to five thirty eight (which I have actually never done before). They’re pretty certain about Hillary, and about a democratic senate.

    I would almost be inclined to agree that liberal pollsters (as most seem to be) would skew data, but the problem with that is that their reputations are likely more important than their politics. I’d expect even someone like Nate Silver to care more about how accurate he is than about how he can potentially influence the election.

    • #6
    • October 24, 2016, at 5:20 PM PDT
  7. Member

    How much should this really matter? The strategy should always be “the polls are closer than they look,” even if you are up big. It makes people feel like they are making a difference. Saying you are up big or the election is being stolen before the voting starts makes people think they have no control.

    It’s like every fundraising letter by e-mail says that we are $25 away from nationalizing America’s industries and collectivizing the farms and my donation will make all the difference. Not technically true, but more compelling than “give $25 for a hopeless cause.”

    • #7
    • October 24, 2016, at 5:23 PM PDT
  8. Brian Ward Contributor

    Oversampling is a standard technique in survey research. It simply means, within a random sample, targeting certain groups of interest to get a higher tabulation base than their natural fallout would allow. This allows you to analyze that group’s results with greater statistical precision. However, for the aggregate results (all returns) you weight the oversample back to its natural proportion within the entire sample universe. This way you can get projectable results for the total as well as for your subgroups (which may be very small). This is what the Podesta email appears to be referencing, and it’s exactly what a competent research firm (and a savvy research consumer) would demand.

    Political party preference is not a fixed variable, an attribute, that can be properly modeled on a population. Gender, age, ethnicity, and the like are fixed attributes and your sample better be representative of these proportions. However, political party preference can change in a given individual at any point. Some people (not us Ricochet members) can change their answer every time they are asked the question (or depending on who’s asking). Therefore it cannot be reliably used to model the population, as a standard for weighting. Wide variance in these measures compared to your perceptions of what they should be is a bit of a red flag. It means you better check your fixed variables. But if all those check out, party affiliation alone is not reason enough to declare a sample invalid. It might just mean your pre-existing perceptions were wrong. And the direction of causation might be going the other way. It’s not that these are Democrats so they’re voting for Hillary. It might be they are voting for Hillary, so they consider themselves Democrats (at this moment, in the context of the polling experience).

    • #8
    • October 24, 2016, at 5:32 PM PDT
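Brian Ward’s oversample-then-weight-back procedure can be sketched numerically. The subgroup, interview counts, and support figures below are hypothetical, chosen only to show the mechanics:

```python
import math

# Hypothetical example: a 15% subgroup is oversampled to 300 of 1,000
# interviews for precision, then weighted back for the topline.
natural_share = {"young": 0.15, "rest": 0.85}  # assumed electorate shares
interviews = {"young": 300, "rest": 700}       # "young" is oversampled
support = {"young": 0.55, "rest": 0.48}        # observed candidate support

def moe(p, n):
    """Approximate 95% margin of error for a sample proportion."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

# Doubling the subgroup's interviews (300 vs. the ~150 it would get
# naturally) shrinks its margin of error by a factor of sqrt(2).
moe_oversampled = moe(support["young"], interviews["young"])  # n = 300
moe_natural = moe(support["young"], 150)                      # 15% of 1,000

# For the aggregate result, each group is weighted back to its natural
# proportion, so the oversample does not inflate the topline.
topline = sum(natural_share[g] * support[g] for g in support)
```

Oversampling only distorts the topline if that weight-back step is skipped, which is exactly the distinction Brian draws above.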
  9. The (apathetic) King Prawn Member

    I just assume the assumptions are off this year. They have to work to some degree on actual turnout from previous elections to build their models, and this year seems to be unique, or at least the beginning of a major shift in election participation.

    • #9
    • October 24, 2016, at 5:40 PM PDT
  10. Front Seat Cat Member

    I truly believe that many of the polls are skewed – this is the year that anything goes. Look at the news coverage: where do you see anything fair? I think you are right – people may hear that so-and-so is ahead and figure, who cares if I vote? But I suspect this will be an exception. On another note, I have heard that Homeland Security will be monitoring the election, that some areas are mail-in only, and that there are concerns about people voting who should not be voting – so anything could happen. Pray that we make it through with no violence and continue to make progress – at this point the Republican party can only go up….

    • #10
    • October 24, 2016, at 5:43 PM PDT
  11. Bereket Kelile Member
    Ricochet Charter Member

    Brian Ward: Political party preference is not a fixed variable, an attribute, that can be properly modeled on a population.

    We usually ask people their party affiliation and collect that along with their party registration status. While there is a differential between the two (some registered Dems/Reps vote for the opposite party or are swing voters) it is pretty stable. You can identify who’s a solid base voter and who’s on the fence.

    But I’d say that party registration is probably the most important variable to weight on because it’s one of the best predictors of election outcomes. One of the things I would like to know about every public poll is the proportion of Dems, Reps, and Ind’s in their sample. The other thing I’d like to know, even more, is how they define a likely voter. That actually varies quite a bit across the industry.

    • #11
    • October 24, 2016, at 5:43 PM PDT
  12. Douglas Inactive

    For what it’s worth, the LAT and IBD polls were the most accurate in 2012.

    • #12
    • October 24, 2016, at 5:49 PM PDT
  13. Brian Ward Contributor

    Weighting samples on registration of voters holds more promise. But nearly half the states don’t include party affiliation as a part of registration. Many are explicitly non-partisan in registration. And many random samples wouldn’t include that variable in advance – again you’d have to rely on respondent self-identification – which introduces the issues I mentioned earlier. Certainly, this variable can help you understand your sample and what it represents, but it’s dangerous to overemphasize its predictive power.

    • #13
    • October 24, 2016, at 6:04 PM PDT
  14. Suspira Member

    Bereket Kelile: But I’d say that party registration is probably the most important variable to weight on because it’s one of the best predictors of election outcomes.

    How do you deal with states like mine where we don’t have party registration?

    • #14
    • October 24, 2016, at 6:08 PM PDT
  15. Bereket Kelile Member
    Ricochet Charter Member

    Suspira:

    Bereket Kelile: But I’d say that party registration is probably the most important variable to weight on because it’s one of the best predictors of election outcomes.

    How do you deal with states like mine where we don’t have party registration?

    Good question. I don’t have much experience in those states, but I’m aware some states don’t collect that information. There are companies that maintain databases of voters, and my guess is they have a proxy variable that indicates how you lean based on other characteristics. You can still collect data in your surveys about how people identify on partisan lines, and it will often be very strongly correlated with their voting behavior.

    One other reason that party registration is important is that consultants use that data when they pull their lists for mailing or phone calls. We incorporate that into our polls in anticipation of what the consultants need so that they know where to target.

    • #15
    • October 24, 2016, at 6:28 PM PDT
  16. Bereket Kelile Member
    Ricochet Charter Member

    Brian Ward: Weighting samples on registration of voters holds more promise. But nearly half the states don’t include party affiliation as a part of registration. Many are explicitly non-partisan in registration. And many random samples wouldn’t include that variable in advance – again you’d have to rely on respondent self-identification – which introduces the issues I mentioned earlier. Certainly, this variable can help you understand your sample and what it represents, but it’s dangerous to overemphasize its predictive power.

    Where there’s no party registration you certainly can’t accurately sample based on that, but then again, it also makes that variable less necessary. The practical consideration is what data is available to consultants when they target audiences for advertising.

    The samples used for making phone calls usually contain tens of thousands of names so it’s not likely you’ll miss an entire partisan category.

    Where the data on party registration is available it is very important. Other variables are important, too, like age and ethnicity, particularly Latinos. All of those variables are also interacting with the variable on turnout (likely voters). That’s why I think understanding the likely voter definition being used in a poll is probably the single most important piece of info.

    • #16
    • October 24, 2016, at 6:38 PM PDT
  17. Could Be Anyone Member

    RyanM: After reading this article, I went over to five thirty eight (which I have actually never done before). They’re pretty certain about Hillary, and about a democratic senate.

    I would almost be inclined to agree that liberal pollsters (as most seem to be) would skew data, but the problem with that is that their reputations are likely more important than their politics. I’d expect even someone like Nate Silver to care more about how accurate he is than about how he can potentially influence the election.

    Another important thing to remember about polling is that it focuses on averages. 538 uses far more polls than RCP, and its average is a six-point lead for Clinton over Trump. I don’t deny that there is probably some media influence, but Trump is his own worst enemy and will lose because of himself.

    • #17
    • October 24, 2016, at 6:46 PM PDT
  18. Matt Harris Member

    The question is – is this year like other years? Pollsters make lots of assumptions about the nature and composition of the electorate. In a regular year with 2 standard D & R candidates on the ballot, those are probably pretty good. This year, where nobody seems to like anybody, it is anybody’s guess.

    • #18
    • October 24, 2016, at 6:57 PM PDT
  19. Palaeologus Inactive

    RyanM: After reading this article, I went over to five thirty eight (which I have actually never done before). They’re pretty certain about Hillary, and about a democratic senate.

    They are.

    Here is an interesting piece by Silver on why 538 gives Trump a better chance to win than many other predictive models.

    RyanM

    I would almost be inclined to agree that liberal pollsters (as most seem to be) would skew data, but the problem with that is that their reputations are likely more important than their politics. I’d expect even someone like Nate Silver to care more about how accurate he is than about how he can potentially influence the election.

    Many folks here might not be familiar with the fact that 538 is owned by ESPN and not the NYT these days.

    Silver does all sorts of sports analytics stuff.

    For example, CARMELO is an attempt to predict the future career arcs of NBA players.

    • #19
    • October 24, 2016, at 7:19 PM PDT
  20. Percival Thatcher
    Ricochet Charter Member

    Bereket Kelile:

    Brian Ward: Political party preference is not a fixed variable, an attribute, that can be properly modeled on a population.

    We usually ask people their party affiliation and collect that along with their party registration status. While there is a differential between the two (some registered Dems/Reps vote for the opposite party or are swing voters) it is pretty stable. You can identify who’s a solid base voter and who’s on the fence.

    But I’d say that party registration is probably the most important variable to weight on because it’s one of the best predictors of election outcomes. One of the things I would like to know about every public poll is the proportion of Dems, Reps, and Ind’s in their sample. The other thing I’d like to know, even more, is how they define a likely voter. That actually varies quite a bit across the industry.

    If they ask me which party I’m backing, I’m going to say Partido Carlista.

    Weigh that.

    • #20
    • October 24, 2016, at 7:19 PM PDT
  21. WI Con Member
    Ricochet Charter Member

    RyanM: After reading this article, I went over to five thirty eight (which I have actually never done before). They’re pretty certain about Hillary, and about a democratic senate.

    I would almost be inclined to agree that liberal pollsters (as most seem to be) would skew data, but the problem with that is that their reputations are likely more important than their politics. I’d expect even someone like Nate Silver to care more about how accurate he is than about how he can potentially influence the election.

    I’d argue that the mantle of “most accurate pollster” has consistently shifted. Recall when Zogby, Rasmussen, and the LA Times poll were each “the one that got it right”? Why have an RCP average then? If they all have an interest in being correct, they should all be roughly the same with very little variation.

    What gets people off the couch? Likely voters, new voters, people that haven’t voted in years, young voters (who aren’t on any “likely voter” screens). What makes people stay home? Look at the “How will you vote” poll on the Main Feed: is that consistent with GOP voters in general? Only 44% of GOP-leaning voters intend to vote for Trump?

    I think Trump is going to lose, badly. I believe we’ll hold the House and Senate but they are only guesses. There is a presumption, an illusion of precision. These are guesses – educated guesses but guesses nonetheless.

    • #21
    • October 24, 2016, at 7:43 PM PDT
  22. Member

    Paul A. Rahe: The other possibility is that we are on the receiving end of a massive con — and that the polling results are designed to encourage or discourage voting on the part of the supporters of one candidate or the other.

    This is highly likely although the degree of effect is hard to judge. After all, it worked in Florida in 2000.

    • #22
    • October 24, 2016, at 7:57 PM PDT
  23. Mike-K Member

    RyanM: I’d expect even someone like Nate Silver to care more about how accurate he is than about how he can potentially influence the election.

    How did he do on Brexit? I think he had it wrong, but a quick Google search doesn’t tell me.

    • #23
    • October 24, 2016, at 8:02 PM PDT
  24. Roberto, Crusty Old Timer LLC Member
    Ricochet Charter Member

    RyanM: I would almost be inclined to agree that liberal pollsters (as most seem to be) would skew data, but the problem with that is that their reputations are likely more important than their politics

    I do not see how this is supported by evidence.

    • #24
    • October 24, 2016, at 8:35 PM PDT
  25. Valiuth Inactive
    Ricochet Charter Member

    The King Prawn: I just assume the assumptions are off this year. They have to work to some degree on actual turnout from previous elections to build their models, and this year seems to be unique, or at least the beginning of a major shift in election participation.

    Or not. My understanding is that they predict turnout by asking people how likely they are to vote, dropping people who say not likely, and then only using those who declare themselves likely to vote. The only way for them to be missing a drastic change is if people are not being honest, or if they somehow are not getting a representative sample. People can lie to pollsters, but I can’t imagine that being a systemic issue among a whole class of people. Having some sort of hidden, unreachable class is possible depending on how you do your canvass, but this group would have to be very large to turn things around for Trump. So while you might miss some small population that could be key in a close race, I doubt you can miss any population large enough to make a difference now.

    The polls were right in the primary, even though many thought they surely must be missing some extra factor that would mean a Trump loss in the end. They didn’t. I think this idea that the pollsters are missing something big is just a wishful dream.

    • #25
    • October 24, 2016, at 9:35 PM PDT
  26. Valiuth Inactive
    Ricochet Charter Member

    Mike-K:

    RyanM: I’d expect even someone like Nate Silver to care more about how accurate he is than about how he can potentially influence the election.

    How did he do on Brexit? I think he had it wrong, but a quick Google search doesn’t tell me.

    He did not make any prediction about Brexit, though his website covered the event. In fact, you can listen to the FiveThirtyEight politics podcast from before the Brexit vote discussing the polling done about it. It is quite a fair and even-handed analysis of the polling, its various merits and demerits.

    • #26
    • October 24, 2016, at 10:03 PM PDT
  27. Annefy Member

    RyanM: After reading this article, I went over to five thirty eight (which I have actually never done before). They’re pretty certain about Hillary, and about a democratic senate.

    I would almost be inclined to agree that liberal pollsters (as most seem to be) would skew data, but the problem with that is that their reputations are likely more important than their politics. I’d expect even someone like Nate Silver to care more about how accurate he is than about how he can potentially influence the election.

    Does Nate do his own polls? I thought he was an aggregator of sorts.

    • #27
    • October 24, 2016, at 11:05 PM PDT
  28. The Reticulator Member

    RyanM: Wouldn’t that also depress democrat voter turnout if people think it is safely in the bag? Seems a dangerous strategy in a close election.

    When I learned about propaganda techniques in one of my high school classes, the bandwagon effect was close to the top of the list.

    • #28
    • October 25, 2016, at 12:47 AM PDT
  29. The Reticulator Member

    Roberto:

    RyanM: I would almost be inclined to agree that liberal pollsters (as most seem to be) would skew data, but the problem with that is that their reputations are likely more important than their politics

    I do not see how this is supported by evidence.

    The question is, reputation for what?

    • #29
    • October 25, 2016, at 12:49 AM PDT
  30. Israel P. Inactive

    RyanM: but the problem with that is that their reputations are likely more important than their politics

    But if their predictions move the needle towards the outcome they want, they can have both.

    And their reputations serve to promote their businesses. Whose bidding you do is probably more important than how accurate you are.

    • #30
    • October 25, 2016, at 12:55 AM PDT
