A Scandal in Political Science – An Update

 

Michael LaCour, a UCLA graduate student in political science, has been accused of faking results for a paper that he published in the journal Science. Two days ago, Science retracted the article.

Four days ago, in my Ricochet post, I explained why I believe LaCour has faked the results of a second paper. Much of my explanation was speculation. For instance, I described how the confidence intervals for his estimates of media slants didn’t follow the pattern that one would have expected. I said in that post that I would have given 10:1 odds that the results were fake; that is, I was about 90% sure that this second paper was also an instance of fraud.

A few days ago, Emory political scientist Gregory Martin posted a note examining the same paper by LaCour. Approximately a year ago, Martin and his coauthor, Ali Yurukoglu, had asked LaCour for the computer code that he wrote for his paper. LaCour gave him the code, but Martin was unable to replicate the results that LaCour reported. Perhaps most shocking, Martin noticed the following problem: In his paper, LaCour wrote that all of his media data came from the UCLA Closed Caption (LACC) archive of news-show transcripts. The paper reported estimates for 60 news shows. Martin noticed, however, that 14 of the shows do not actually exist in the LACC data set.

Although one could imagine innocent explanations for the problems Martin found, the evidence was stunning. After seeing Martin’s results, I became about 99% sure that LaCour faked results in the second paper.

Late yesterday, Martin added to his report two paragraphs explaining additional evidence of fraud by LaCour. The evidence in these new paragraphs makes me approximately 99.99% sure that LaCour faked his results.

I’ll describe the new evidence in a moment. But first, here’s some background. A few years ago, while I was a professor in the UCLA political science department, I had a conversation in a hallway with my colleague Jeff Lewis. Lewis noted a potential problem with the media-bias paper I had written with Jeff Milyo. Although our method assumed that observations are independent (a standard assumption with statistical models), Lewis discussed some reasons why we shouldn’t believe that that is the case. He described a way to alter our method that would correct for the potentially untrue assumption.

I also remember Lewis telling me that he had described his alteration to a PhD class that he was teaching, or possibly to a single student. I think I remember that he also told me that he incorporated his model (i.e. the altered version of my and Milyo’s model) into a homework problem for his class.

I’m now almost certain that the statistical model that LaCour describes on the last page of the appendix is the exact model that Lewis constructed.

Lewis constructed his model to be run on the data that Milyo and I gathered. Those data involve citations to think tanks made by members of Congress and media outlets. In contrast, LaCour’s paper uses data about “loaded political phrases” made by members of Congress and media outlets in the LACC archive.

Here now is some of the new evidence that Martin reports. First, the data file that LaCour’s code references is labeled, “/Users/michaellacour/Dropbox/MediaBias/Groseclose/counts.csv.” Curiously, the label of the file contains my name.

Approximately two or three years ago, I gave LaCour the data from my and Milyo’s project. LaCour’s method appears to use those data. But my and Milyo’s data are based on think-tank citations, whereas LaCour’s purported results are based on loaded political phrases. If LaCour really executed the method that he says he executed, it would make no sense to use my and Milyo’s data.

Second, LaCour writes in his appendix that his method assumes that the slant of each of the news shows that he analyzes has a “prior distribution” with mean equal to 50.06. Meanwhile, in our media-bias paper Milyo and I report that our estimate of the average voter’s ideology (on the “adjusted ADA” scale) is 50.06. LaCour must have gotten that number from our paper. However, it makes no sense for LaCour to use that number. The slants that he reports are on the “DW-Nominate” scale, which runs from -1 to 1. It is impossible for those slants to have a mean of 50.06.
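To make the scale mismatch concrete, here is a minimal sketch (my own illustration, not anyone’s actual research code): a stated prior mean for a bounded quantity has to fall inside the range of the scale the estimates are reported on, and 50.06 cannot fall inside a scale that runs from -1 to 1.

```python
# Illustrative sanity check (not anyone's actual research code):
# a prior mean for a bounded quantity must lie inside the scale's range.

def plausible_prior_mean(prior_mean, scale_min, scale_max):
    """Return True if the stated prior mean can exist on the given scale."""
    return scale_min <= prior_mean <= scale_max

# 50.06 is sensible on the adjusted ADA scale, which runs roughly 0 to 100...
print(plausible_prior_mean(50.06, 0, 100))   # True
# ...but impossible on the DW-Nominate scale, which runs from -1 to 1.
print(plausible_prior_mean(50.06, -1, 1))    # False
```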

Third, as Martin reports, LaCour’s code instructs his computer to compute slants for 20 news outlets. This happens to be the number of outlets that Milyo and I examined. In contrast, LaCour reports slant estimates for 60 outlets.

In sum, the code that LaCour claimed to use – and sent to Martin – does not make sense for the results he publishes in his paper. In contrast, the code makes perfect sense for the method that Jeff Lewis described to me that day in the hallway.

I strongly believe that something like the following happened: LaCour completely faked the media-slant results that he reports in the second paper and did not really write any computer code to estimate those results. When Martin and Yurukoglu asked him for his code, he sent the best substitute he could think of—the code that Jeff Lewis wrote for a different problem.

Published in General

There are 34 comments.

  1. Claire Berlinski (@Claire), Editor

    A question for you, Tim. SSM sets everyone’s hair on fire, so there was a high incentive for other researchers to look closely at his results. I won’t hold you to a number, but what’s your suspicion–just your gut feeling–about the percentage of published papers in your field that would melt on similar scrutiny? I’m thinking about the total meltdown at the APA in recent years, which led to a lot of plausible-sounding claims like this.

    I don’t necessarily mean outright fraud, either, which is what seems to have happened here. But what percentage of published papers in political science, do you suspect, are so sloppy in one way or another that their publication should be a scandal?

    • #1
  2. user_278007 (@RichardFulmer), Inactive

    The “smoking gun” (to use a phrase that is quite popular these days) is the survey firm’s denial that they had participated in LaCour’s study.  At least some of the other evidence that Tim cites could be explained by honest mistakes on LaCour’s part, but his claim that the company had run the surveys cannot.  It’s surprising that an intelligent person bent on deceit would use a lie that is so provably false.  Much better to stay within the gray areas afforded by ambiguities in the meaning of the word “is.”

    • #2
  3. Z in MT (@ZinMT), Member

    I have never figured out the purpose of academic fraud of this type. It seems like more work to fake data than to just analyze real data sources. If one is trying to reach a previously decided conclusion you usually can, particularly in political or social science where the scaling of the data is in part subjective – who decides what is a “loaded political phrase”? Even without changing your statistical model you can pretty much come to any conclusion just by futzing with how you define, enumerate, and classify your data.

    Is it that to get a paper published in a “high impact” journal like Science one has to have dramatic conclusions that support a particular view?

    • #3
  4. drlorentz (@drlorentz), Member

    Related to Ms. Berlinski’s question, how do you think the incidence of fraud in political science publications compares to other disciplines? Medicine seems to have a fair number of these cases, or perhaps they are just better publicized. Also, it’s probably true that there are more medical than polisci research papers published, so one should normalize by that. Professor Groseclose, since you’re in the thick of this I’d appreciate your thoughts.

    Great bit of digging, by the way.

    • #4
  5. kelsurprise (@kelsurprise), Member

    You had me at “Curiously, the label of the file contains my name.”

    • #5
  6. Fake John Galt (@FakeJohnJaneGalt), Coolidge

    I think that most of these political science studies are junk. They are “science” for the sole purpose of pushing political agendas and making careers. The media picks them up and uses them as propaganda to further their agenda. Later, after they have done the damage, the studies are either forgotten or occasionally refuted, but only after the damage is done.

    • #6
  7. Ricochet (@TitusTechera), Contributor

    Fake John Galt: I think that most of these political science studies are junk. They are “science” for the sole purpose of pushing political agendas and making careers. The media picks them up and uses them as propaganda to further their agenda. Later, after they have done the damage, the studies are either forgotten or occasionally refuted, but only after the damage is done.

    I think you’re neglecting one aspect: This is both necessary preparation for administration & the only workable substitute for going to church-

    • #7
  8. Claire Berlinski (@Claire), Editor

    Fake John Galt: I think that most of these political science studies are junk.

    That might be true–I don’t know–but it doesn’t have to be. It’s certainly possible to do sound (and reproducible) studies about political questions, be methodologically rigorous in the way you approach them, not fake your data, and so forth. We’re not talking about a discipline that just shouldn’t exist, full stop, because the whole idea of studying such things using what we’d think of as the scientific method is so basically ridiculous.

    And I don’t know what percentage of published studies are just junk. To have any sense of that, I’d want to see a well-controlled … drumroll … study. A meta-survey. That really is the only way we’d know how much of it is junk, right?

    My gut says a lot of it is junk, though. I’d like to know if I’m right, and if so, why. It doesn’t make sense to me that so many people would go into this field and then be so sloppy in their research. I assume young people go into academic political science research because they’re really curious about these questions. If you think you already know the answers, there are easier ways to push a political agenda or earn a living. So why do it if you don’t really care about the answers to the questions you’re asking?

    Assume I’m right–that a high percentage of published studies are worthless. My first suspicion about why would be that we’ve got an incentive structure that rewards publishing in volume over publishing carefully–you don’t get tenure because you conducted a single really careful study, you get it because you published ten dozen sloppy ones. And I’d guess you don’t get any reward or glory for being scrupulous in reviewing your colleagues’ work–that’s just an unpleasant chore you have to do.

    But I’d really like to hear Tim’s from-the-inside thoughts about the state of the discipline, generally. My thoughts are just hunches based on looking at the literature in psychology (which is scandalously, unbelievably fraudulent and sloppy) and international relations (which varies a lot, but I think it’s become sloppier, for sure). I’ve heard that a lot of this is going on in medicine, too–which is a pretty grim thought–but I don’t know enough to be sure.

    • #8
  9. Freeven (@Freeven), Inactive

    If Tim is up for it, and if such a thing is possible, I’d be very interested in a post that gives some general tips or guidelines on how a layman might spot clues as to whether a given study might be meaningful or not. Are there red flags that the uninitiated might spot to get at least an indication of the quality, or is it impossible to get a sense of things without a great deal of expertise?

    • #9
  10. Ricochet (@TitusTechera), Contributor

    Miss Claire, pol.sci should be brought to a merciful end whether or not the scientific method can be applied in some useful way. Unless someone can figure out how to keep this kind of enthusiasm for application subservient to some serious study of politics…

    Training all the underlings of the state has been conflated on purpose with the thinking required for policy & the learning about how a country works which is supposed to make it easier to become experienced. Things which are mere tasks & could be mastered by untold millions have been brought forward to usurp the political thought that is simply not democratic. The result is the use of quantitative studies done well or ill to wipe out political thinking.

    There is no great difficulty in seeing that people interested in the study of politics are not by nature statisticians, nor that the use of quantitative study is a servant, not a master: No one can measure outcomes of policies before those policies are enacted, so the supposed rationality of measurement is simply unavailable  in the beginning. Then pol.sci serves to hide the truth about policy-making.

    The real devil is predictions, though–the rise of pol.sci for an hundred years has led to magical prediction-making by people who are not capable properly to think about causation in things already done, measured, & studied to death. Prestige in experts & playing political jiujitsu with the consent of the governed combine to rule the public mind. End it-

    • #10
  11. drlorentz (@drlorentz), Member

    Claire Berlinski:
    My gut says a lot of it is junk, though. I’d like to know if I’m right, and if so, why. It doesn’t make sense to me that so many people would go into this field and then be so sloppy in their research. I assume young people go into academic political science research because they’re really curious about these questions. If you think you already know the answers, there are easier ways to push a political agenda or earn a living. So why do it if you don’t really care about the answers to the questions you’re asking?

    People may enter the field because of genuine curiosity but that doesn’t mean curiosity remains their principal motivation as they advance in their careers. Many people, including our own Milt Rosenberg, have pointed out that social science departments are dominated by Leftists. Hence, pushing a political agenda is a given. Those who don’t toe the mark are driven out. Careful, high quality work is secondary; it may even be an impediment to the agenda if that work provides the ‘wrong’ answers.

    Consider the controversy over Herrnstein & Murray’s The Bell Curve. This was an extremely careful and thorough piece of work. That was irrelevant. It told the wrong story. A sloppy work that gave the right answer would have been praised. As a tenure-track assistant professor, which response would you prefer?

    • #11
  12. Reckless Endangerment (@RecklessEndangerment), Inactive

    This is what happens when you let the tail wag the dog.

    • #12
  13. PsychLynne (@PsychLynne), Inactive

    Claire Berlinski:

     I’ve heard that a lot of this is going on in medicine, too–which is a pretty grim thought–but I don’t know enough to be sure.

    As someone who consumes a great deal of medical research: yes, this is going on in medicine.

    Medicine, ok, cancer treatment research, is done in a clinical trials context.  About 6% of adult patients enroll in clinical trials.  They aren’t representative of the general population, so there are already some structural problems.

    Once you branch out beyond direct treatment (and God forbid you get anywhere near tobacco), there are still some significant design flaws that exist before you even get to the bias problems.

    • #13
  14. Claire Berlinski (@Claire), Editor

    PsychLynne:

    Claire Berlinski:

    I’ve heard that a lot of this is going on in medicine, too–which is a pretty grim thought–but I don’t know enough to be sure.

    As someone who consumes a great deal of medical research: yes, this is going on in medicine.

    Medicine, ok, cancer treatment research, is done in a clinical trials context. About 6% of adult patients enroll in clinical trials. They aren’t representative of the general population, so there are already some structural problems.

    Once you branch out beyond direct treatment (and God forbid you get anywhere near tobacco), there are still some significant design flaws that exist before you even get to the bias problems.

    Like I said, I hear this a lot. Enough that I’m wondering why no one is doing a big piece for a major newspaper or magazine about it. I mean–people really care about medical research, unlike PoliSci, and cancer research in particular.

    • #14
  15. The Reticulator (@TheReticulator), Member

    Claire Berlinski: I mean–people really care about medical research, unlike PoliSci, and cancer research in particular.

    Yes, but now medical research = politics. That’s what happens when government gets involved to the extent it now has. You can forget about objective research in such an environment.

    • #15
  16. drlorentz (@drlorentz), Member

    Claire Berlinski: I mean–people really care about medical research, unlike PoliSci, and cancer research in particular.

    Polisci findings get plenty of press and sometimes drive policy. People may not care about the findings per se, but they sure care about the policies they shape.

    • #16
  17. Snirtler (@Snirtler), Inactive

    Z in MT: I have never figured out the purpose of academic fraud of this type. It seems like more work to fake data than to just analyze real data sources …

    Is it that to get a paper published in a “high impact” journal like Science one has to have dramatic conclusions that support a particular view?

    For people in this field, the incentives are more general than you suggest above. It’s not so much that a scholar must have dramatic conclusions that support a particular view as that one must have dramatic results–substantively and statistically significant–in whatever direction to be published at all. No one gets his paper published for statistically null results. And these days to be competitive in the academic job market, a PhD looking to get hired must already have a publication or two under his belt.

    Nonetheless, as drlorentz says in #16, the incentive to fraud in the original LaCour case can only have been heightened by media interest and the policy implications of his topic. LaCour correctly read or anticipated how the media would receive “his work”. People did sit up to take notice of his purported result that a 20-minute chat with a gay activist sufficed to soften voters’ feelings toward gays as a group and toward gay marriage and would have a lasting effect.

    • #17
  18. Fake John Galt (@FakeJohnJaneGalt), Coolidge

    Snirtler:

    People did sit up to take notice of his purported result that a 20-minute chat with a gay activist sufficed to soften voters’ feelings toward gays as a group and toward gay marriage and would have a lasting effect.

    I was always skeptical of this part. Gays and their supporters act like the average citizen has never known or talked to any homosexuals before, whereas I think it would be a very unusual person indeed who had never known a homosexual person. Everybody knows, works with, and is even related to homosexuals. That was why I was pretty sure the study was BS when I first heard it. An old professor of mine told me that the first thing you should do when you come across a study or a statistic is see if it passes the smell test. This one did not.

    • #18
  19. drlorentz (@drlorentz), Member

    Snirtler: LaCour correctly read or anticipated how the media would receive “his work”. People did sit up to take notice of his purported result that a 20-minute chat with a gay activist sufficed to soften voters’ feelings toward gays as a group and toward gay marriage and would have a lasting effect.

    The fact that we’re having this discussion is testament to public interest in such findings. All politicized areas of inquiry (e.g., medicine, climate science) produce powerful incentives to get the ‘right’ answer. It even extends to deciding which questions to ask. If you want to get funding for your work, keep your job, or get tenure, you’d better ask acceptable questions and get acceptable answers.

    This is why contrarian opinions often come from older researchers. They already have tenure or are close enough to retirement that they’re outside the reach of those who would punish them for unorthodox views. In a sense, this is counterintuitive, since you’d expect contrarianism to come from the young turks, not the old codgers. It is, until you understand the incentives.

    In the area of climate science, Hal Lewis, Philip Stott, and Henk Tennekes spoke out because they felt they could get away with it. In Tennekes’s case, it didn’t work out so well:

    After publishing a column critical of climate model accuracy, Tennekes says he was told “within two years, you’ll be out on the street”.

    • #19
  20. Ricochet (@Pelicano), Inactive

    I’m a historian, and I wonder about this in my field, too.

    In poli sci it sounds like you can share the data sets and publish the equations and methodology used.

    In history we depend on archival sources – one-of-a-kind documents that only exist in one place (possibly not in English or even legible). Sure you footnote, but who’s going to check them?

    My own work uses 19th century federal court records, available at the National Archives. Part of my claim to originality, though, is that no one has really used these particular case files. So practically speaking I’m the only one who knows what’s there, and no one’s going to bother to double check. Everyone has their own research to produce.

    Of course, there’s an incentive not to cheat. Getting caught would be devastating, ending a career.

    But still I wonder. It’s happened too often that when I’m reading someone using sources I know well, they’re a little off. Often not significantly or in vital ways but not quite right either.

    • #20
  21. The Reticulator (@TheReticulator), Member

    drlorentz: This is why contrarian opinions often come from older researchers. They already have tenure or are close enough to retirement that they’re outside the reach of those who would punish them for unorthodox views. In a sense, this is counterintuitive, since you’d expect contrarianism to come from the young turks, not the old codgers. It is, until you understand the incentives.

    Interesting observation. Thank you.

    • #21
  22. The Reticulator (@TheReticulator), Member

    Pelicano:

    So practically speaking I’m the only one who knows what’s there, and no one’s going to bother to double check. Everyone has their own research to produce. Of course, there’s an incentive not to cheat. Getting caught would be devastating, ending a career.

    I presume you remember the Michael Bellesiles case.

    One disturbing thing to me was how he continued to be defended by colleagues whose work I respect.

    If he had wanted, I could have provided him with a couple of sources from the 1830s that would have helped him make his overall case about guns in early America, if he hadn’t been intent on going overboard with it.  But instead he had to invent data and ruin his career.

    But if it hadn’t cost him his career, it would have been bad for the credibility of the  history profession.

    • #22
  23. Freesmith (@Freesmith), Inactive

    American social science is a prisoner of the interests and biases of those who control the debate, the folks Joel Kotkin calls the New Oligarchy, and their willing accomplices, the New Clerisy. They set the terms and the boundaries of “responsible” science.

    Only certain topics are studied. Only certain solutions are broached. Data are either trimmed to fit the desired result, as in the Green-LaCour Case, or ignored and replaced by acceptable interpretations, as in last year’s “On the Run” by Alice Goffman and this year’s “Our Kids” by Robert Putnam.

    Since America’s decline and the Civil Rights era each began with Brown v Board of Education, social science has been a tool of those who want to transform America, not describe it. Today, their usurpation is so complete we barely notice its tendentiousness.

    Once the allies of reason and truth, social scientists have been medized. Green-LaCour may represent a small victory for free scientific inquiry, but it is only a skirmish, not Salamis. The Barbarians will keep coming, armed with their studies, surveys, models and analyses.

    • #23
  24. user_428379 (@AlSparks), Thatcher

    I have a brother who is a tenured business professor, and he publishes.

    But I’ve never heard of members of the academy influencing business practices through their published research.  I’ve heard of The Peter Principle by Laurence Peter, who was an academic, but that wasn’t an academic publication.

    The satirical The Dilbert Principle may have had equivalent or even more influence, really.

    What CEO, or the CxOs working for him, reads academic journals?

    Most of these peer reviewed articles (and I’m not limiting myself to business) are junk, not necessarily because they’re untrue, but because they’re irrelevant.

    • #24
  25. Snirtler (@Snirtler), Inactive

    The Reticulator:

    drlorentz: This is why contrarian opinions often come from older researchers. They already have tenure or are close enough to retirement that they’re outside the reach of those who would punish them for unorthodox views. In a sense, this is counterintuitive, since you’d expect contrarianism to come from the young turks, not the old codgers. It is, until you understand the incentives.

    Interesting observation. Thank you.

    Agreed. This is spot on.

    • #25
  26. Ball Diamond Ball (@BallDiamondBall), Inactive

    Fascinating.  While this is an outright fraud, much of research is tainted by a faulty application of statistical methods.  It is a scandal in its own right which is very slowly breaking.

    Using a method you do not truly understand is akin to using a computer you know nothing about.  You type stuff in, answers come out.  You might as well be asking a “psychic”.  The answers fluctuate from test to test, because the *statistical* sensitivities are not calibrated appropriately for the intended purpose.

    One of the most promising efforts to combat this trend is some refereed journals’ requirements that all studies to be published must have been registered with the journal *before* the study, to preclude “Baltimore stockbroker” cases.  Another, the difficult road, is plain old statistical education.  Most people who should know better, and perhaps most scientists, cannot correctly describe a p-value.
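    To give the flavor of the “Baltimore stockbroker” problem, here is a small, self-contained simulation (an illustrative sketch of my own, not any journal’s procedure): when no real effect exists anywhere, roughly 5 percent of studies will still clear the conventional p < 0.05 bar, so a literature that reports only its “significant” results will look full of discoveries.

```python
import math
import random

random.seed(0)

def two_sided_p(sample):
    """Two-sided p-value for H0: mean = 0, with known sigma = 1 (z-test)."""
    n = len(sample)
    z = (sum(sample) / n) * math.sqrt(n)
    # Phi(x) = 0.5 * (1 + erf(x / sqrt(2))) is the standard normal CDF.
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

# 1,000 simulated "studies" of pure noise: the null hypothesis is true in all.
studies = [[random.gauss(0.0, 1.0) for _ in range(30)] for _ in range(1000)]
hits = sum(two_sided_p(s) < 0.05 for s in studies)
print(hits)  # roughly 50: about 5% of null studies look "significant"
```

    Pre-registration attacks exactly this: if every planned study is on record, the “misses” cannot quietly disappear.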

    • #26
  27. The Reticulator (@TheReticulator), Member

    Ball Diamond Ball: One of the most promising efforts to combat this trend is some refereed journals’ requirements that all studies to be published must have been registered with the journal *before* the study, to preclude “Baltimore stockbroker” cases.

    I’d be interested to know about this, particularly about the disciplines and journals in which this is done.  Medical research, perhaps?  I can think of some fields in which it couldn’t be expected to work very well.

    • #27
  28. Tim Groseclose (@TimGroseclose), Contributor

    Claire Berlinski: I won’t hold you to a number, but what’s your suspicion–just your gut feeling–about the percentage of published papers in your field that would melt on similar scrutiny?

    I don’t necessarily mean outright fraud, either, which is what seems to have happened here. But what percentage of published papers in political science, do you suspect, are so sloppy in one way or another that their publication should be a scandal?

    I think it’s small, like only 2 or 3 percent – at least if the paper is published in one of the top three or so journals. Two or three times I have asked another scholar for data that he or she used in a published paper. In each case the author was very cooperative, and each paper seemed perfectly sound to me. Although it’s a small sample, I suspect that it’s pretty representative – the vast majority of papers have replicable results.

    I once asked a senior scholar (David Austen-Smith, a top game theorist) if he was ever tempted not to check his proofs.  My point was that he’s so smart that everyone will just assume his proofs are correct and won’t even read them.  His reply was something like: “Oh no.  Whatever paper you write, some grad student has got his or her name on it.”  He meant that, at some point, some grad student is going to read your paper extremely carefully, including checking the proofs, and probably write some sort of extension of it as a chapter for a dissertation.  That advice is probably more true for Austen-Smith than the rest of us – most of his papers end up on a syllabus somewhere for a PhD class.  However, I think most scholars who publish regularly think the way Austen-Smith advises.

    There are other papers that make minor errors – e.g., not checking whether you’re dividing by zero in a proof, or not checking whether you’re at a “corner solution” in a proof. I’ve been guilty of that in a paper. I think a similar type of example might be Steve Levitt’s paper, “The Effect of Police on Crime.” He wanted to weight all of his observations by a variable, say N_t, but he accidentally used the wrong command in the statistics package and ended up weighting his observations by 1/N_t. (I’ve used the same stat package and know that it’s a very easy mistake to make.) This caused his estimates to overstate the true effect of police on crime by a factor of approximately two. Another scholar found the mistake, Levitt owned up to it as soon as he was made aware of it, and the other scholar published the corrected results in the same journal. I’d say that something like 10 or 20 percent of the papers published at top journals fall into this category – that is, if one looks hard enough, he or she can find a minor error in the paper such as the ones I’ve described.
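    To see why such a weighting slip matters, here is a toy illustration (hypothetical data I made up, not Levitt’s, using a generic no-intercept weighted least-squares formula rather than any stat package’s command): when the effect being estimated differs across units, weighting by 1/N_t instead of N_t tilts the estimate toward the small-N_t units and can change it substantially.

```python
import random

random.seed(1)

def wls_slope(x, y, w):
    """No-intercept weighted least-squares slope: argmin_b sum w*(y - b*x)^2."""
    return sum(wi * xi * yi for wi, xi, yi in zip(w, x, y)) / \
           sum(wi * xi * xi for wi, xi in zip(w, x))

# Hypothetical units (say, cities) whose true effect grows with size N_t.
N = [random.randint(10, 1000) for _ in range(500)]
x = [random.gauss(0.0, 1.0) for _ in range(500)]
y = [(0.2 + 0.002 * n) * xi + random.gauss(0.0, 1.0) for n, xi in zip(N, x)]

b_intended = wls_slope(x, y, N)                    # weights N_t, as intended
b_mistake = wls_slope(x, y, [1.0 / n for n in N])  # typo: weights 1/N_t
print(b_intended, b_mistake)  # the inverted weights give a noticeably smaller slope
```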

    There’s another case that I briefly mentioned in an earlier post.  Over the past 5 or 10 years a lot of political science papers have begun to use “Bayesian” analysis in their statistical techniques.  The above paper by LaCour is one such paper.  With about half of these papers the author describes some very complex mathematical notions, but his or her explanation is very opaque, and I’m very suspicious that the author really doesn’t understand what he or she is writing.  Instead, I suspect the author just learned the appropriate command in a stat package, and basically just punched a button and watched what results appeared.  I think that the top political science journals probably contain about one article in every other issue that is guilty of such a problem.

    One problem is that statistical “methodologists” tend to be among the biggest fashion followers in the social sciences. About a decade and a half ago, instead of “Bayesian analysis,” the trend among the methodologists was “hazard models.” At the time, the Macarena dance was also very fashionable. Some of us, in order to make fun of the methodologists, would sing (to the tune of the Macarena) “Hey, hazard models.” I wish a similar dance craze with a similarly awful song would emerge, so that I could sing something similar about “Bayesian analysis.”

    Maybe the LaCour case will cause reviewers to be more skeptical of papers following the latest “methodology” fashion, and accordingly, maybe we’ll see fewer cases of researchers using methods they don’t really understand. That’s my hope at least. Maybe something good can come out of the LaCour scandal.

    • #28
  29. Tim Groseclose (@TimGroseclose), Contributor

    drlorentz: Related to Ms. Berlinski’s question, how do you think the incidence of fraud in political science publications compares to other disciplines? Medicine seems to have a fair number of these cases, or perhaps they are just better publicized. Also, it’s probably true that there are more medical than polisci research papers published, so one should normalize by that. Professor Groseclose, since you’re in the thick of this I’d appreciate your thoughts.

    Great bit of digging, by the way.

    I really don’t know that much about what goes on in the hard sciences.  In the social sciences, however, I’m very confident that there’s less fraud and other problems in economics than political science and the other social science fields.  In economics the editors of the top journals are usually extremely reputable scholars – like winners and probable future winners of Nobel prizes.  That’s not the case in political science.  Meanwhile, probably the worst journals are in communications and education.  The problem with these journals is that the editors, reviewers, and authors don’t have very good technical training.  E.g., I bet that more than half of the editors and reviewers of the journals in these fields don’t fully understand what an “endogeneity problem” is.

    Also, economists don’t let ideology cloud their judgment as much as other social scientists do.  This is only an anecdote and it’s more like speculation than evidence, but still I think it is very telling.  Once, while retrieving my mail in the political science mail room at UCLA, I saw an economics journal that published an article entitled something like “Was Americorp a Success?”   I showed the journal to a colleague and said “The great thing about economics journals is that I actually need to read the article to learn the answer to the question.  If this were a sociology or education or political science journal, we’d already know the answer–of course, it was a success.”  My point was, and my colleague agreed with me, that if the journal had been from one of the latter three disciplines, we could be sure that it was run by ardent progressives.  If the author had found that Americorp wasn’t a success, then there’s no way it would have been published.

    • #29
  30. Claire Berlinski (@Claire), Editor

    Ball Diamond Ball: Most people who should know better, and perhaps most scientists, cannot correctly describe a p-value.

    I don’t think I’d be the only one here who would be interested in reading a definition of the concept of the p-value and a discussion of the uses and abuses of the idea.

    Might I be able to tempt you to write a post explaining this?

    • #30