Mismatch Theory: Why a Movie Should Be Made about Prof. Richard Sander (Part 2)


This post is the second in a series on Prof. Richard Sander and the reaction to his Mismatch theory. You can read Part 1, Part 3, and Part 4 of this series at the links.

As I noted in Part 1, Sander noticed an overlap with what economist Thomas Sowell called the “mismatch” effect. If students are less prepared for a particular level of instruction—which occurs almost by design with affirmative action—then not only do they earn worse grades than their peers, they actually learn less than they would have learned if they had attended a less challenging school.

A Clever Way to Test the Effects of Mismatch

Although Sowell’s concept made sense theoretically, before Sander, there was little empirical support for it. The problem is that it is very difficult to test.

To see why, consider this hypothetical example. Suppose that you, as a researcher, gather two groups of high school students who plan to attend college. You’ve carefully selected them so that the two groups are identical in terms of their preparation for college.

Now suppose, as an experiment, you send the first group to College A, a school that matches their level of preparation, and you send the second group to College B, a more difficult school, which does not match their preparation.

Suppose four years later you observe that the average student in the first group (at College A) achieves a GPA of 3.0, while the average student in the second group (at College B) achieves a GPA of only 2.5. At first glance, you might conclude that this is evidence of mismatch: the second group of students, because of their lower GPA, appear to have learned less than the first group.

However, there’s a problem with that conclusion. Because the second group of students attended a more difficult college, their 2.5 GPA might actually indicate more learning than the first group’s 3.0. This problem is endemic to almost any study of mismatch: to test the effects of mismatch, you need students to attend different schools. But if they do, they will necessarily take different classes, which means that their grades will reflect different standards for measuring their learning. Stated differently, the easier questions that College A asks on exams will be like apples, while the more difficult questions that College B asks will be like oranges.
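To make the apples-and-oranges problem concrete, here is a minimal Python sketch. Every number in it is invented purely for illustration:

```python
# A minimal simulation of the grading-standards confound described above.
import random

random.seed(0)

def observed_gpa(true_learning, grading_harshness):
    """Observed GPA is learning minus the school's grading harshness."""
    noise = random.gauss(0, 0.2)
    return max(0.0, min(4.0, true_learning - grading_harshness + noise))

# Suppose students at the tougher College B actually learn MORE (3.4 vs. 3.0
# on some common scale), but College B also grades 0.9 points harder.
college_a = [observed_gpa(3.0, 0.0) for _ in range(10_000)]
college_b = [observed_gpa(3.4, 0.9) for _ in range(10_000)]

print(f"College A mean GPA: {sum(college_a) / len(college_a):.2f}")  # about 3.0
print(f"College B mean GPA: {sum(college_b) / len(college_b):.2f}")  # about 2.5
# College B's lower GPA coexists with greater learning, so GPA alone cannot
# say which group learned more. Only a common yardstick can.
```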

Sander, however, offered one of the great insights into this problem. He recognized that law schools have a feature that makes them especially useful for testing the mismatch effect: nearly all law students take the bar exam. Since the bar exam is identical for all students taking it in a given state, and nearly identical across states, it provides a common measuring stick for comparing students from different schools.

Sander’s decision to conduct research on mismatch theory was due more to happenstance than to any plan. During graduate school, his primary research focus was discrimination in housing. After he graduated, his plan was to continue research on that topic.

In 1998, however, he was invited to a conference, organized by the American Bar Foundation, that was designing a research project called “After the JD” (AJD). One of Sander’s colleagues on the AJD project helped him gain access to a massive data set developed by the Law School Admission Council (LSAC), the group that administers the LSAT. As Sander notes in his book Mismatch:

In the late 1980s LSAC had commissioned a major investigation of bar passage rates, primarily aimed at understanding whether (as was rumored) blacks and Hispanics nationally had poor bar passage rates, and if so, whether bar exams were somehow biased against minorities. LSAC was able to enlist nearly 90 percent of all accredited law schools in the BPS [Bar Passage Study], and those schools, in turn, persuaded some 80 percent of their students to participate. A total of more than twenty-seven thousand students starting law school in the fall of 1991 completed lengthy questionnaires and gave LSAC permission to track their performance in school and later on the bar exam. A subsample of several thousand students also completed follow-up questionnaires in 1992, 1993, and 1994. The BPS itself continued to collect data until 1997.[v]

Four years later, Sander noticed something curious. As with the “independent study” UCLA commissioned in 2008, nothing seemed to come of the LSAC study: Sander saw no mention of it in the press, nor any academic papers using its data. It was as if the study had never been commissioned at all. As Sander notes in Mismatch:

Despite this broad involvement and the massive cost of the BPS, by 2001 almost nothing had been heard of its results. I attended a presentation at which the study’s leader, Dr. Linda Wightman, flashed a series of slides with not-very-revealing information on them. Yes, she announced, there was a racial gap in bar passage rates, and it was worrisome, but it was not as large as some had feared. LSAC issued a follow-up report, which was also remarkably bland and opaque.

Surely there had to be much more in a dataset reported to have cost $5 million that spanned all of legal education! I dove into it, sorting through the hundreds of variables on tens of thousands of law students, with a growing sense of disappointment. Wightman and the other LSAC administrators had suppressed [or omitted] much of the data they had collected. I felt like an art student who journeys to Florence to see Michelangelo’s David only to find a fuzzy two-dimensional photo in its place.[vi]

Sander’s Mismatch Evidence

While the LSAC had mysteriously suppressed and omitted data, Sander was nevertheless intrigued by what the LSAC did give to the AJD scholars. Once he examined it, he unearthed an eye-opening puzzle.

Using variables from the students’ undergraduate records and LSAT scores, Sander created an index expressing the degree to which each student was prepared for law school. Next, using a statistical technique called “regression analysis,” he realized that he could construct a pool of black students and a pool of white students who were approximately equal on this index. From this, Sander discovered an empirical regularity, which I call the first fundamental regularity of race and the bar exam: in such equally matched pools, black students consistently failed the bar exam at higher rates than white students.
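To make the method concrete, here is a toy simulation in Python. It is only a sketch of the matching-plus-regression idea, not Sander’s actual specification; every variable, coefficient, and data point below is invented for illustration.

```python
# A hypothetical data-generating process in which mismatch reduces learning
# for one group even though entering credentials are equal. Synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 20_000
ugpa = rng.normal(3.2, 0.4, n)                 # undergraduate GPA
lsat = rng.normal(155, 8, n)                   # LSAT score
index = (ugpa - 3.2) / 0.4 + (lsat - 155) / 8  # entering-credentials index

black = (rng.random(n) < 0.1).astype(float)    # group indicator

# Learning in law school is depressed for the mismatched group; bar passage
# depends on learning, not on group membership directly.
learning = index - 0.8 * black + rng.normal(0, 0.5, n)
passed = (learning + rng.logistic(0, 0.5, n) > 0).astype(int)

# Logistic regression of bar passage on the credentials index and the group
# indicator. The negative coefficient on `black` at equal index values is
# the pattern behind the first regularity.
X = sm.add_constant(np.column_stack([index, black]))
print(sm.Logit(passed, X).fit(disp=0).params)
```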

Sander knew that the education literature was filled with studies showing black students performing worse than white students. However, this was different. These were black students who had been selected to be just as strong as the white students, at least when they entered law school. Yet by the time they finished law school and took the bar exam, the black students, for some reason, performed worse than the white students.

One potential explanation was that standardized tests such as the bar exam discriminate against black students. But that explanation is inconsistent with another regularity that Sander documented, which I call the second fundamental regularity of race and the bar exam: if he compared white and black students who had similar undergraduate records, LSAT scores, and law-school grades, then they performed equally well on the bar exam. If the bar exam really were racially biased, why would black students in the latter comparison do just as well as white students?
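Continuing the toy simulation above, the second regularity corresponds to the group coefficient shrinking toward zero once a (synthetic) law-school GPA is added as a control:

```python
# Law-school grades track what was actually learned, so controlling for them
# should absorb the learning gap. Continues the synthetic example above.
lgpa = 2.0 + 0.6 * learning + rng.normal(0, 0.05, n)

X2 = sm.add_constant(np.column_stack([index, lgpa, black]))
print(sm.Logit(passed, X2).fit(disp=0).params)
# The coefficient on `black` is now near zero: students with the same
# credentials AND the same law-school grades pass at similar rates, which is
# hard to square with a biased exam and easy to square with differences in
# how much was learned during law school.
```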

The data suggested that something was causing the black students to learn less in law school than their equally matched white peers. That is, the black students in these matched pools showed up at law school just as strong academically as the white students, but they graduated academically weaker.

Sander hypothesized that racial preferences (and the mismatch that they caused) were the culprit. All of the data were consistent with his hypothesis. Of course, other hypotheses might also be consistent with the above two regularities. But Sander couldn’t think of any. Nor could anyone else.

As I’ll discuss in my next post, a slew of pro-affirmative-action law scholars wrote critiques of Sander’s work. All of them, I believe, realized that the first and second regularities that Sander documented were solid. None even attempted to show contradictory data that could overturn them.

This series will continue tomorrow on Ricochet.


Notes

[v] Richard Sander and Stuart Taylor, Jr., Mismatch: How Affirmative Action Hurts Students It’s Intended to Help, and Why Universities Won’t Admit It (Basic Books, 2012), p. 56.

[vi] Sander and Taylor, Mismatch, pp. 56–57.

Published in Education, Law

There are 2 comments.

  1. TG (Thatcher)

    Thank you for summarizing this work here, Dr. Groseclose.

    • #1
  2. mildlyo (Member)

    I can’t help but wonder if the objections to the “mismatch” theory come from people who consider the value of credentials to be greater than learning. If the credential is the only thing that matters, then it doesn’t matter if one group gets less out of the educational opportunities of elite schools. They are still in the elite.

    Mismatch theory would then be a direct threat to both desired outcomes and self-worth. That would explain why the results in the OP are being ignored by the “soulless minions of orthodoxy.”

    • #2