Watching the Education Policy Watchmen

Earlier this week, respected researchers from two universities released a four-part study on the effects of Louisiana’s school voucher program. Yet even though the researchers provided a layman’s summary of their findings, media coverage of their study varied significantly.

What makes for better or worse coverage of new research? Well, first the reporter needs to tell us what the study found and why it’s important. She should also provide context for those findings. Are they consistent with or divergent from the findings of previous research? Particularly in the latter case, good reporting will also explore the underlying causes of the findings, especially as the study’s authors understand them. And since reporters rarely have a background in policy research, they should consult with multiple experts who have different views about how to interpret the study’s findings or what their implications are. This being the 21st century, online reporting should contain a direct link to the study so that readers can easily access it to learn more. Finally, because the “tl;dr” crowd often sees only the headline, the headline should be accurate. (Note: editors usually choose the headline, not the reporters.)

Based on those criteria, I came up with the following quick-and-dirty rating system to determine the quality of reporting on new research. As with other rating systems, results will vary depending on the weight given to each criterion. But as with speed limits, although the precise point values assigned are ultimately arbitrary (why not 67.5 miles per hour?), I nevertheless believe they reasonably reflect the relative importance of each criterion.

  • Reporting on findings (25 points): Accurately describing all the study’s major findings, positive and negative.
  • Reporting on previous research (25 points): Accurately describing the previous research to provide proper context for understanding these findings.
  • Reporting on causes of results (20 points): Accurately describing the authors’ understanding of what caused the results they found.
  • Consulting experts (15 points): Quoting at least two different education policy experts with differing views.
  • Linked to study (10 points): Linking to the study so readers can easily access it.
  • Headline (5 points): Using a headline that accurately captures the study’s main finding or importance.
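For the arithmetically inclined, the rubric above amounts to a simple capped weighted sum. Here is a minimal Python sketch of it; the point caps come straight from the list above, while the function and variable names are my own illustration:

```python
# Maximum points per criterion, as defined in the rubric above.
MAX_POINTS = {
    "findings": 25,          # reporting on findings
    "previous_research": 25, # reporting on previous research
    "causes": 20,            # reporting on causes of results
    "experts": 15,           # consulting experts
    "link": 10,              # linked to study
    "headline": 5,           # headline accuracy
}

def total_score(awarded: dict) -> int:
    """Sum the awarded points, capping each criterion at its maximum."""
    return sum(min(awarded.get(name, 0), cap) for name, cap in MAX_POINTS.items())

# Example: the Times-Picayune scores graded below.
times_picayune = {"findings": 15, "previous_research": 15, "causes": 10,
                  "experts": 15, "link": 10, "headline": 5}
print(total_score(times_picayune))  # 70
```

Readers who prefer different weights need only edit `MAX_POINTS` and re-run the grading.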

If you like my rating system: great! You are clearly a wise and discriminating reader, and probably quite good looking too. If not, you’re encouraged to come up with your own weights and/or criteria in the comments section. While you work on that, the rest of us will get to the grading.

What’s needed to earn full credit on the last three criteria is obvious, but here’s what I’m looking for on the first three:

  • The study had four main findings: the vouchers had a negative impact on students’ performance on tests (particularly math) although there was some improvement in the second year of the program; there’s suggestive evidence that the competitive pressure from the voucher program improved the performance of public schools; the voucher program reduced racial segregation; and there was no difference between voucher and non-voucher students on several measures of non-cognitive skills. Since the last one was a null finding, I won’t deduct points for its omission.
  • Previous research has overwhelmingly found small but statistically significant positive effects on both voucher student performance and on the performance of students remaining at their district schools. Some studies have found no discernible difference, but no voucher study found any harm until a study on the first year of Louisiana’s voucher program was released last month.
  • The study’s authors provide four possible causes for the negative impact on test scores: 1) private schools’ curricula were not aligned to the state test, so they are still undergoing a period of adjustment; 2) private schools were not prepared to take in so many students switching from low-performing district schools who were already far behind; 3) the success of recent reforms of the public schools made private schools look relatively worse; 4) due to the high level of regulations, most private schools opted not to participate in the program, leaving only those lower-performing schools that were the most desperate for funding.

For comparison, see the tone and substance of the two universities’ press releases:

The Times-Picayune of Greater New Orleans (Danielle Dreilinger): 70 / 100 points, Grade C-

  • Reporting on findings (15/25): Covers all major findings, but with some errors. For example, the article says “the private schools that took vouchers became more racially segregated.” Actually, that’s true for only slightly more than half of the private school transfers. Moreover, the article does not make it clear that there was improvement in the second year, stating only “the scholarship students hadn’t recovered even to where they began in mathematics.”
  • Reporting on previous research (15/25): The article did note that “most studies have found mildly positive results for small voucher programs” but also claimed that “Louisiana’s program is much larger.” That is not accurate. Louisiana has about 7,000 voucher students today, but during the study there were fewer than 1,200. Several previous studies examined the impact on students participating in comparable or larger voucher programs.
  • Reporting on causes of results (10/20): The article cites the authors’ explanations for the causes of their findings, but with some errors. For example, the article says, “The more prestigious and expensive private schools generally do not take vouchers.” Actually, the study says nothing about the relative tuition at participating or non-participating schools. Moreover, the article omits the possibility raised in the study that “extensive regulations placed on the program by the state” drove away private schools.
  • Consulting experts (15/15): Numerous experts cited.
  • Linked to study (10/10): Check.
  • Headline (5/5): Accurate.

U.S. News & World Report (Lauren Camera): 33 / 100 points, Grade F-

  • Reporting on findings (15/25): The article noted two of the three significant findings (no mention was made of segregation), but it did not clearly explain that the test scores improved in the second year, stating only: “During their second year in private school, the downward trend continued in math, but rebounded some in reading.” That’s not quite accurate. The trend was upward in both math and reading, albeit still negative relative to the control group (and not statistically significant for reading).
  • Reporting on previous research (10/25): Besides the other study on Louisiana’s voucher program, the article made no mention of the prior research on the impact of school vouchers on participants. The article did note that “existing research generally has found modestly positive or insignificant competitive effects of school voucher programs on student achievement in public schools,” though it was quick to note that opponents “counter that they harm public education by diverting funds from public to private schools.” In other words, the article countered evidence of a positive impact with a mere assertion of a negative impact. In reality, it is the evidence that contradicts the assertion.
  • Reporting on causes of results (0/20): There was no discussion at all concerning the causes of the negative impact on voucher students’ test scores.
  • Consulting experts (0/15): No outside experts cited.
  • Linked to study (8/10): Linked to researcher’s website, but not the study itself.
  • Headline (0/5): The headline “Evidence Mounts Against Louisiana Voucher Program” is editorializing masquerading as reporting. The studies showed a positive impact on racial integration and public school performance, and although still negative, the results improved in year two. This is hardly evidence “mounting against” the program, though certainly school choice supporters should be troubled by the results.

The Associated Press (Kevin McGill): 30 / 100 points, Grade F-

  • Reporting on findings (25/25): The article accurately discussed the three main findings, including the improved performance in the second year.
  • Reporting on previous research (0/25): The article made no mention of the prior research on the impact of school vouchers on participants or public schools.
  • Reporting on causes of results (0/20): There was no discussion at all concerning the causes of the negative impact on voucher students’ test scores.
  • Consulting experts (0/15): No outside experts cited.
  • Linked to study (0/10): No links to the original study. This may not be totally fair to the AP, which does not appear to include links in its articles. It is a wire service after all. But again, this being the 21st century, there’s really no excuse. No points for you!
  • Headline (5/5): The media outlets that run AP stories generally pick their own headlines, so it’s not totally fair to grade them on this, but a quick survey of headlines found on Google shows the media outlets tended to summarize what was written in the first paragraph, which was accurate.

As you can see from the above, several major media outlets covering the study left much to be desired. Without the proper context, readers will have no idea that the Louisiana voucher program had both positive and negative effects, no idea that the negative effects were an anomaly among the large number of voucher studies, and no idea what might have caused that anomaly. In other words, when journalists fail to provide the proper context, their readers come away misinformed.

And thanks to the Gell-Mann Amnesia effect, most of us don’t even realize it.


Note: For those in the D.C. area interested in learning more about the impact of regulations on the effectiveness of school choice programs, join us at the Cato Institute next Friday, March 4th at noon. You can RSVP here. Come for the policy talk, stay for the sponsored lunch!

There are 4 comments.

Become a member to join the conversation. Or sign in if you're already a member.
  1. Brian Clendinen (@BrianClendinen)

    What I am interested in is whether the control group compares the marital status of the kids’ parents. As far as I am concerned, the marital status of the kids’ biological parents or long-term guardians (any dissolution is also important) is the most important control variable. From the research I have seen, I really don’t take much of the education research very seriously unless there are a lot of studies with the same conclusion or they have properly controlled for parental status and possible parental involvement. In my mind, based on studies and experience, this is the single most important variable for all educational outcomes.

    The lack of scientific rigor that education researchers get away with would not fly in any of the non-social sciences, or in most of the social sciences. I would also be interested in outside tutoring hours (since they can make up for poor classroom education; you are basically engaging, in part, in private education via what is really homeschooling or sending a kid to private school), but those are usually correlated with the parents.

  2. Jason Bedrick (@JasonBedrick)

    Brian Clendinen: What I am interested in is whether the control group compares the marital status of the kids’ parents.

    Good question. Although the data set did not include information about the marital status of the students’ parents, this was a random-assignment study, so there is no reason to believe that the students who were randomly selected to receive a voucher were more or less likely to have a particular family situation than those that were randomly selected not to receive a voucher. The researchers did compare the treatment and control groups using the data they did have (race, gender, income, etc.) and found no discernible difference between the two groups.

  3. The Reticulator (@TheReticulator)

    Good work, though I would give more weight to the headline. A lot of news reporting could be improved if headline writers actually read the articles before writing the headlines.

  4. J. D. Fitzpatrick (@JDFitzpatrick)

    The study’s authors provide four possible causes for the negative impact on test scores: 1) private schools’ curricula were not aligned to the state test, so they are still undergoing a period of adjustment; 2) private schools were not prepared to take in so many students switching from low-performing district schools who were already far behind; 3) the success of recent reforms of the public schools made private schools look relatively worse; 4) due to the high level of regulations, most private schools opted not to participate in the program, leaving only those lower-performing schools that were the most desperate for funding.

    What about option 5: public schools were cheating on test results before students with vouchers moved to other schools? See Atlanta.

