Political Views of Journalists, My Feud with Eric Alterman Part 2

Yesterday, I wrote about my feud with Eric Alterman.  Specifically, I discussed my Quarterly Journal of Economics article and how Eric Alterman, when critiquing that article, accused me of “Rigging the Numbers.”

In the introduction of my book, Left Turn, I document all this—that is, I summarize the article and note Alterman’s response.  I think that anyone who reads the introduction would agree: it does not make Alterman look like a very good scholar or person.

Alterman has responded with this post.

From my reading of the post, I suspect that Alterman has not read any part of Left Turn beyond the introduction.  And I’m almost certain that he did not read Chapter 9.  In that chapter he comes across as an even worse scholar and person. I believe that if he had read it, he would have responded by now.   

I begin Chapter 9 by documenting what I believe to be the most important fact about media bias:  In a typical presidential election, Washington correspondents vote about 93-7 for the Democrat.   This was first documented in a 1996 Freedom Forum report.  In 2008, the New York Times’ John Tierney found similar numbers.  That is, in an “unscientific poll” of his colleagues, he found that Washington correspondents preferred Obama over McCain by a ratio of 92-8.

Lots of people are at least vaguely aware that mainstream journalists strongly vote Democratic.  But often, I believe, they don’t realize just how overwhelming the numbers are. 

For instance, the congressional districts that contain Berkeley, California, and Cambridge, Massachusetts, respectively voted 90-10 and 86-14 for Obama in the last presidential election.  Note that Cambridge has approximately twice as many Republican voters as does a typical group of Washington correspondents.

Further, I think the 93-7 statistic actually understates the true percentage of liberals in a newsroom.  Princeton political science professor Adam Meirowitz has conducted some outstanding research examining the incentives of people to lie when taking surveys.  Consistent with his research, the 93-7 number climbs to something like 96-4 if you examine campaign contributions of journalists.  You get a similar number if you ask journalists to publicly reveal their political ideologies.  For instance, when Slate asked its employees to declare for whom they’d vote in the 2008 presidential election, 55 said Obama, and only one said McCain.  When Bill O’Reilly challenged Andrea Mitchell to “tell me one conservative thinker at NBC News,” she could not, or at least would not.  Mika Brzezinski admitted that when she worked at CBS News, she knew of only one person who was a fan of George W. Bush.

I am aware of only one person who has challenged the accuracy of the above surveys and claimed that they really do not reveal such overwhelmingly liberal attitudes within the newsroom.  That person is Eric Alterman.  Here is the transcript of his 2003 interview with Joe Scarborough.

Joe Scarborough: …I want to look at a 1995 poll that I know that you are aware of.  This is of Washington reporters; 89 percent in that poll said they voted for Bill Clinton in 1992; seven percent voted for Bush.

Alterman:  I’ve got a feeling you haven’t read my book, Joe.  Come out and tell the truth.  Have you read the book?

Scarborough: No, it breaks my heart to say I have not read the entire book.

(crosstalk)

Alterman: If you had read it, gotten as far as chapter two, you would see that I take that poll apart.  It’s not a very good poll.  It doesn’t tell us much of anything.  That poll had such a low response rate that no responsible social scientist would ever use it.  The fact is, is that journalists, by and large, are liberal socially and conservative economically. 

Here is the relevant passage from Alterman’s chapter 2 where he claims to “[take] that poll apart”:

Even with all those caveats, the case is not closed on the Freedom Forum poll.  The study itself turns out to be based on only 139 respondents out of 323 questionnaires mailed, a response rate so low that most social scientists would reject it as inadequately representative (p.20).

In Left Turn, after quoting the above passage, I note the following:

Note that the response rate of the survey that Alterman criticizes is 43% (=139/323).  In the same chapter Alterman discusses two other surveys—one by David Croteau, a sociologist at the Virginia Commonwealth University in Richmond, and another by the Pew Research Center. The latter two surveys support his main conclusions; it is therefore not surprising that he does not criticize their methodology.  He does not mention, however, that their response rates were respectively only 30% and 32%.

 I also write the following in a footnote:

Alterman does not cite his evidence that “most social scientists” would reject a survey with a response rate of 43%, nor, as he told Joe Scarborough, that “no responsible social scientist would ever use” such a survey.  In contrast, I am not aware of any bona fide social scientist—one with a PhD in a real social-science discipline and who, at least occasionally, publishes in top peer-reviewed social-science journals—who would reject, out of hand, a survey just because its response rate was 43%.  Indeed, Alan Gerber, Dean Karlan, and Daniel Bergan—all researchers at Yale University—note, “Response rates of 30 or 40 percent are typical in the public opinion literature.”  See “Does the Media Matter? A Field Experiment Measuring the Effect of Newspapers on Voting Behavior and Political Opinions,” American Economic Journal: Applied Economics, 2009, 1(2), p. 41.

Meanwhile, although actual social scientists—including me—prefer to see high response rates in a survey, what matters more is that the event—whether a person responds or not—be orthogonal to the answer given.  That is, what is important is that a person’s choice to respond be statistically independent of the preferences that determine his or her answer.

I see no reason to believe that this “orthogonality” condition was violated in the Freedom Forum poll (nor for the other two surveys that Alterman cites in his chapter 2).  Nor does Alterman give any reason why he thinks the Freedom Forum poll might violate the orthogonality condition.  Nor does he even discuss the orthogonality condition.
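The orthogonality condition is easy to illustrate with a small simulation (a sketch with made-up numbers, not data from any actual survey): if nonresponse is independent of party, even a response rate like the Freedom Forum poll’s 43% leaves the estimated Democratic share unbiased; but if, say, Republicans were half as likely to respond as Democrats, the estimate would be pushed upward.

```python
import random

random.seed(0)

def survey(n_pop=323, p_dem=0.93, resp_dem=0.43, resp_rep=0.43, trials=2000):
    """Average estimated Democratic share over many simulated surveys."""
    estimates = []
    for _ in range(trials):
        responses = []
        for _ in range(n_pop):
            is_dem = random.random() < p_dem
            resp_rate = resp_dem if is_dem else resp_rep
            if random.random() < resp_rate:  # did this person return the survey?
                responses.append(is_dem)
        if responses:
            estimates.append(sum(responses) / len(responses))
    return sum(estimates) / len(estimates)

# Nonresponse independent of party: low response rate, yet unbiased estimate.
print(round(survey(resp_dem=0.43, resp_rep=0.43), 3))   # ~0.93

# Republicans half as likely to respond: estimate biased upward (~0.96).
print(round(survey(resp_dem=0.43, resp_rep=0.215), 3))
```

The point of the sketch: a low response rate by itself does not bias a survey; only a correlation between responding and the answer does.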

  1. Look Away

     Tim, the ego of the leftists is nothing new or novel, especially as to their grasp on human nature. Unlike Burke, who said “we are midgets standing on the shoulders of giants” (close), they see themselves as the giants standing on the shoulders of the midgets, and therefore their eyes have slipped under the mud.

    What is new and novel is that we have the ability, through Ricochet, to hear your important views and work. We have all known instinctually that the bias exists and makes its mark. Your work is moving this country one inch closer to the left’s “Have You No Shame” moment, aka the Joe McCarthy swan song. Keep up the Good Work, and Thank You!

  2. Mollie Hemingway

    Having spent a bit of time in newsrooms, I agree that the political tilt there is shocking. I’m not even conservative — a libertarian — and I was typically the only non-liberal around. In some cases, I was the only non-liberal some of my colleagues had met.

    I’m not saying that their reporting was bad. Some of the most fair reporters, in fact, are the ones who know precisely how biased they are. They work to correct it. But it was weird to have to explain basic political positions or religious positions or even cultural positions to folks who fancy themselves as very smart and enlightened.

  3. Western Chauvinist

    Not a statistician here, and your discussion of the orthogonality condition for surveys is new to me, so I have a question.  Is sample size significant in determining the accuracy of surveys?  What is the minimum, and under what conditions?

  4. r r

    In a past podcast, John Yoo was discussing how he freely chose to live in academia… and he kind of chuckled to himself.  Almost to say… “when I say it out loud, it seems funny that I freely chose this!”  haha….

    Do you ever feel that way Dr. Groseclose?  I mean, fighting battles for feudal or purely fabricated reasons can be fun at times… but I suspect this is why so many conservative thinkers are found in think tanks rather than in academia…  I know I run up against it from time to time, and it’s always fun to point out the facts — especially the fact that the critical points are based on prejudice rather than actual evidence…

  5. r r
    Western Chauvinist: Not a statistician here and your discussion of the orthogonality condition for surveys is new to me, so I have a question.  Is sample size significant in determining the accuracy of surveys?  What is the minimum and under what conditions?

    Orthogonality basically means two variables are unrelated or uncorrelated.  Here, the fact that someone did not respond is not related to their political views; i.e., conservatives are no more or less likely to respond than liberals.  If political views and responding to this survey are unrelated, Alterman’s point is moot, because a low response rate does not indicate an unrepresentative sample (in this case).

  6. Western Chauvinist

    Thanks, Samwise.  I pretty much understood Dr. Groseclose’s description of orthogonality.  It just seems to my inexpert brain that a certain minimum of the population being studied must be sampled to get an accurate survey.  Obviously, if you’re looking at a population of a million subjects, sampling 2 of them is inadequate.

    BTW, I’m in no way interested in taking up Alterman’s argument.  I’m just trying to figure out if orthogonality eliminates the floor on sample-size and, if so, why.

  7. Cas Balicki

    Western Chauvinist, sample size, as I’m sure you know, is only important in predicting population values accurately. In general, the larger the population, the larger the sample should be to attain, say, 95% certainty of a prediction falling into a specific +/- range. Still, the statistician eventually reaches a point of diminishing returns, in that additional sampling yields a smaller and smaller gain in predictive accuracy. In the case cited by Prof. Groseclose, where 43% of the population is sampled, the proportion sampled far exceeds any point that would limit the predictive accuracy of the sample relative to the population. To go back to your example of 2 out of a million people being inadequate, 430,000 out of a million would certainly not be, given that a sample of 1,000 would likely yield extremely accurate predictions of population tendencies.

  8. Cas Balicki

    Doing some calculations, a sample size of 139 out of a population of 323 would yield, at a 95% confidence level, a confidence interval of +/- 6.3%, leaving us to infer with 95% confidence that between 0.7% and 13.3% of the population voted Republican. At a 99% confidence level, the corresponding result for the same sample size and population would be +/- 8.25%, yielding a Republican voting range from -1.25% (effectively 0) to +15.25%.

  9. Cas Balicki

    Oh! If anyone finds errors in these numbers, don’t jump all over me, because it’s been a long time since I had any truck with this kind of arithmetic.

  10. r r

    Yeah, Cas is right in that as sample sizes increase you are far more likely to find significant results.
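Cas Balicki’s figures in comments 7 and 8 above can be checked directly. A minimal sketch, assuming the conservative p = 0.5 that standard margin-of-error calculators use, plus a finite-population correction for sampling 139 out of 323:

```python
import math

def margin_of_error(n, N=None, z=1.96, p=0.5):
    """Margin of error for a sample proportion; optionally applies a
    finite-population correction for sampling n out of a population N."""
    se = math.sqrt(p * (1 - p) / n)
    if N is not None:
        se *= math.sqrt((N - n) / (N - 1))  # finite-population correction
    return z * se

# Freedom Forum poll: 139 respondents out of a population of 323.
print(round(margin_of_error(139, N=323, z=1.96), 3))   # 0.063  (+/- 6.3% at 95%)
print(round(margin_of_error(139, N=323, z=2.576), 3))  # 0.083  (+/- 8.25% at 99%)

# Diminishing returns for a large population: the margin shrinks like 1/sqrt(n),
# so quadrupling the sample only halves the margin of error.
for n in (100, 1000, 10000):
    print(n, round(margin_of_error(n), 3))  # 0.098, 0.031, 0.01
```

Note that using the observed Republican share p ≈ 0.07 instead of the conservative p = 0.5 would give a narrower margin of roughly +/- 3.2%, so if anything the +/- 6.3% figure overstates the uncertainty.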
