Yesterday, I wrote about my feud with Eric Alterman. Specifically, I discussed my Quarterly Journal of Economics article and how Eric Alterman, when critiquing that article, accused me of “Rigging the Numbers.”
In the introduction of my book, Left Turn, I document all this—that is, I summarize the article and I note the response of Alterman. I think that anyone who reads the introduction would agree: It does not make Alterman look like a very good scholar or person.
Alterman has responded with this post.
From my reading of the post, I suspect that Alterman has not read any part of Left Turn beyond the introduction. And I’m almost certain that he did not read Chapter 9. In that chapter he comes across as an even worse scholar and person. I believe that if he had read it, he would have responded by now.
I begin Chapter 9 by documenting what I believe to be the most important fact about media bias: In a typical presidential election Washington correspondents vote about 93-7 for the Democrat. This was first documented in a 1996 Freedom Forum report. In 2008, the New York Times’ John Tierney found similar numbers. That is, in an “unscientific poll” of his colleagues, he found that Washington correspondents preferred Obama over McCain by a ratio of 92-8.
Lots of people are at least vaguely aware that mainstream journalists strongly vote Democratic. But often, I believe, they don’t realize just how overwhelming the numbers are.
For instance, the congressional districts that contain Berkeley, California, and Cambridge, Massachusetts, voted respectively 90-10 and 86-14 for Obama in the last presidential election. Note that Cambridge has approximately twice as many Republican voters as does a typical group of Washington correspondents.
Further, I think the 93-7 statistic actually understates the true percentage of liberals in a newsroom. Princeton political science professor Adam Meirowitz has conducted some outstanding research examining the incentives of people to lie when taking surveys. Consistent with his research, the 93-7 number climbs to something like 96-4 if you examine campaign contributions of journalists. You get a similar number if you ask journalists to publicly reveal their political ideologies. For instance, when Slate asked its employees to declare for whom they’d vote in the 2008 presidential election, 55 said Obama, and only one said McCain. When Bill O’Reilly challenged Andrea Mitchell to “tell me one conservative thinker at NBC News,” she could not, or at least would not. Mika Brzezinski admitted that when she worked at CBS News, she knew of only one person who was a fan of George W. Bush.
I am aware of only one person who has challenged the accuracy of the above surveys and claimed that they really do not reveal such overwhelmingly liberal attitudes within the newsroom. That person is Eric Alterman. Here is the transcript of his 2003 interview with Joe Scarborough.
Joe Scarborough: …I want to look at a 1995 poll that I know that you are aware of. This is of Washington reporters; 89 percent in that poll said they voted for Bill Clinton in 1992; seven percent voted for Bush.
Alterman: I’ve got a feeling you haven’t read my book, Joe. Come out and tell the truth. Have you read the book?
Scarborough: No, it breaks my heart to say I have not read the entire book.
Alterman: If you had read it, gotten as far as chapter two, you would see that I take that poll apart. It’s not a very good poll. It doesn’t tell us much of anything. That poll had such a low response rate that no responsible social scientist would ever use it. The fact is, is that journalists, by and large, are liberal socially and conservative economically.
Here is the relevant passage from Alterman’s chapter 2 where he claims to “[take] that poll apart”:
Even with all those caveats, the case is not closed on the Freedom Forum poll. The study itself turns out to be based on only 139 respondents out of 323 questionnaires mailed, a response rate so low that most social scientists would reject it as inadequately representative (p.20).
In Left Turn, after quoting the above passage, I note the following:
Note that the response rate of the survey that Alterman criticizes is 43% (=139/323). In the same chapter Alterman discusses two other surveys—one by David Croteau, a sociologist at the Virginia Commonwealth University in Richmond, and another by the Pew Research Center. The latter two surveys support his main conclusions; it is therefore not surprising that he does not criticize their methodology. He does not mention, however, that their response rates were respectively only 30% and 32%.
I also write the following in a footnote:
Alterman does not cite his evidence that “most social scientists” would reject a survey with a response rate of 43%, nor, as he told Joe Scarborough, that “no responsible social scientist would ever use” such a survey. In contrast, I am not aware of any bona fide social scientist—one with a PhD in a real social-science discipline and who, at least occasionally, publishes in top peer-reviewed social-science journals—who would reject, out of hand, a survey just because its response rate was 43%. Indeed, Alan Gerber, Dean Karlan, and Daniel Bergan—all researchers at Yale University—note, “Response rates of 30 or 40 percent are typical in the public opinion literature.” See “Does the Media Matter? A Field Experiment Measuring the Effect of Newspapers on Voting Behavior and Political Opinions,” American Economic Journal: Applied Economics, 2009, 1(2), p. 41.
Meanwhile, although actual social scientists, including me, prefer to see high response rates in a survey, more important is that the event of responding or not be orthogonal to the answer a person gives. That is, what matters is that the choice to respond be statistically independent of the preferences that determine the answer.
I see no reason to believe that this orthogonality condition was violated in the Freedom Forum poll (nor for the other two surveys that Alterman cites in his Chapter Two). Nor does Alterman give any reason why he thinks the Freedom Forum poll might violate the orthogonality condition. Nor does he even discuss the orthogonality condition.
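The point can be illustrated with a short simulation. This is only a sketch: the 93-7 split and the 43% response rate come from the discussion above, but the population size and the differential response rates in the second case are hypothetical numbers chosen purely for illustration. It shows that a 43% response rate by itself does not bias a survey; bias appears only when the decision to respond is correlated with the answer.

```python
import random

random.seed(0)

# Hypothetical population of 100,000 correspondents, 93% of whom
# vote Democratic (the 93-7 split discussed above).
N = 100_000
population = [1 if random.random() < 0.93 else 0 for _ in range(N)]

def survey(pop, p_respond_dem, p_respond_rep):
    """Share of Democratic voters among those who choose to respond."""
    responses = [v for v in pop
                 if random.random() < (p_respond_dem if v else p_respond_rep)]
    return sum(responses) / len(responses)

# Case A: responding is orthogonal to the answer -- everyone responds
# with probability 0.43. The estimate lands near the true 93%.
print(round(survey(population, 0.43, 0.43), 3))

# Case B: responding is correlated with the answer -- here Republicans
# respond at a lower (assumed) rate, so the estimate overstates the
# Democratic share even though more people responded overall.
print(round(survey(population, 0.50, 0.20), 3))
```

In Case A the low response rate only shrinks the sample; it does not tilt it. In Case B the same survey mechanics produce a biased estimate, which is why the orthogonality condition, not the response rate, is the thing a critic would need to attack.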