Mismatch Theory: Why a Movie Should Be Made about Prof. Richard Sander (Part 3)

 

This post is the third in a series on Prof. Richard Sander and the reaction to his Mismatch theory. You can read Part 1, Part 2, and Part 4 of this series at the links.

As I noted in Part 2 of this series, a slew of pro-affirmative-action law scholars wrote critiques of Sander’s work on Mismatch, the theory that if students are less prepared for a particular level of instruction (which occurs almost by design with affirmative action), then they not only make worse grades than their peers, they actually learn less than they would have learned at a less challenging school. The authors of these critiques, I believe, all recognized that the first and second regularities that Sander documented were solid. None even attempted to present contradictory data that could overturn them.

Second-Choice Schools

Some critics noted that Sander did not actually measure the racial preferences granted by the law schools. This was true. But it was impossible for him to do so, because the LSAC suppressed key fields in the data it gave him. Specifically, the LSAC’s data set did not identify the law schools that students attended. Accordingly, Sander could not identify cases of mismatch—i.e. cases where a student attended a school that was more prestigious or rigorous than the schools for which he or she was prepared.

Nevertheless, the data did provide a potential proxy for mismatch. When gathering the data, the LSAC asked students if they attended their top choice of the law schools to which they were admitted. Many said they did not. To an economist, such an answer is baffling. According to the notion of “revealed preference,” economists define “top choice” to be the school that the student chooses to attend.

In practice, however, people don’t seem to use the term the way economists do. For instance, one might say, “Harvard was my top choice, but I went to the University of Virginia because it was cheaper.” When an economist says “top choice,” he means “most preferred” given all factors, including the quality of the product and its price.

Non-economists, by contrast, often use “top choice” to refer only to the quality of the product, not the price. Further, factors besides price might have kept students from attending their “top choice.” A girlfriend or boyfriend, an ill parent, or a fear of being homesick might have caused them to choose a school that they did not consider their “top choice.”

All these reasons suggest the following: If a student marked on the LSAC survey that he did not attend his top choice, then he likely attended a school that was less prestigious and less difficult than the top school to which he was admitted. If so, then that student was likely less mismatched than students who did attend their top choice.

Sander analyzed the data and found that, among black students who were equally matched on the indexes that he created, the ones who did not attend their top choice did better on the bar exam than those who did. At first glance, this seems counterintuitive: it says that if you want to maximize your chances of passing the bar, you don’t necessarily want to attend the best school to which you were admitted. But this is exactly what mismatch theory predicts, while all the competing hypotheses predicted the opposite. The evidence stacked up, overwhelmingly, in favor of mismatch theory.

The above results show that mismatch is a real phenomenon; unlike the competing hypotheses, it can explain many puzzles in the data. But Sander’s work went further. It showed that the mismatch effect is substantively very large. For instance, 53% of black students in the LSAC data never passed the bar exam and thus failed to become lawyers. By contrast, only 17% of white students failed to become lawyers. The black-white gap, 53 minus 17, is thus a sizable 36 percentage points.

Sander estimated, however, that if law schools eliminated their racial preferences, thus eliminating the mismatch effect, the failure rate of black students could fall to as low as 30%. That would shrink the black-white gap to 13 percentage points (30 minus 17). That is, about two thirds of the current gap would disappear.[vii]
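For readers who want to check the arithmetic, here is the computation in a few lines of Python, using only the percentages quoted above:

```python
# Bar-passage figures quoted above (LSAC data, per Sander's estimates).
black_fail, white_fail = 0.53, 0.17
gap_now = black_fail - white_fail        # 0.36, i.e. 36 percentage points

# Sander's counterfactual: the black failure rate falls to perhaps 30%.
gap_then = 0.30 - white_fail             # 0.13, i.e. 13 percentage points

print(f"{(gap_now - gap_then) / gap_now:.0%} of the gap disappears")  # ~64%, about two thirds
```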

Sander Publishes his Results

Sander’s results hit a nerve. As he would soon witness, proponents of racial preferences would launch a feverish attack against his findings and him personally.

One of the first signs of the coming attack came in March 2004. Sander had written up his mismatch results and submitted the paper to the University of Pennsylvania Law Review, a highly respected journal devoted to legal scholarship. The Review notified him that his paper had been accepted for publication. A few days later, however, an embarrassed editor called Sander to retract the acceptance. As Sander learned later, the membership of the Review had gotten word of his controversial results and voted to overturn the decision of its editors.[viii]

“This was only a small setback,” wrote Sander. “Almost immediately I received two offers from journals better known than the UPLR, and I accepted the one from Stanford Law Review.”[ix]

Sander soon learned, however, that the acceptance came with a condition: the editors insisted that other law scholars be invited to write critiques of Sander’s essay in a later issue.

This quickly became a pattern. At several law schools, Sander was invited to present his results. However, unlike the usual speech or seminar, Sander’s presentation would be followed by another speaker, who would explain why Sander was wrong. As Sander wrote:

Often these rebuttals were absurd. One law school in New York held a well-publicized event that drew an audience of some two hundred faculty and students. After I spoke about “Systemic Analysis,” the school’s admissions director rose and said that none of my findings applied to this law school. At this law school, he said, students of all races earned the same grades and had the same rate of success on the bar. I, of course, had no way to respond to these claims; my data came from databases that did not identify individual schools. But at dinner afterward another administrator leaned over with a confidential smile and said, “I hope [the admissions director] didn’t nettle you too much. He just made all that stuff up to placate our students.”[x]

For another event, at Harvard University, Sander spent the day traveling from Los Angeles to Boston. Once he arrived, however, he learned that the person scheduled to critique his work would not be able to attend. Sander suggested that her time be filled with a longer question-and-answer session. However, the event organizer smiled apologetically and explained that the entire event would have to be canceled. “The general idea,” lamented Sander, “seemed to be that [my results] were too explosive or too dangerous to be presented without some filtering, some sanitizing process.”[xi]

Law Professors React

Three months after the Stanford Law Review accepted Sander’s paper, but still several months before its date of publication, the Review received an ominous letter from two scholars, Richard Lempert and David Chambers, which read in part:

[We] have had a chance to read [a draft of Sander’s paper] and believe that at crucial points Professor Sander’s analysis and conclusions are seriously flawed. We have also consulted with Timothy Clydesdale, who has been working with the same dataset that is at the core of the Sander article and who has reached results that are in some respects quite different.[xii]

Lempert and Chambers were very vague about exactly how Sander’s analysis was “seriously flawed.” However, they eventually joined Clydesdale and another author to publish a critique of Sander’s essay.[xiii] The critique, however, did not find any “serious flaws” in Sander’s analysis. For instance, the authors could not overturn any of Sander’s factual claims—in fact, they did not even challenge any of those claims. Nor could they overturn Sander’s two main findings—what I’ve called above the first and second fundamental regularities of race and the bar exam. Indeed, one part of their analysis actually bolstered Sander’s results, although they failed to mention that.

The actual words of the letter by Lempert and Chambers, I believe, were less important than who wrote them. Chambers had been the president of the Society of American Law Teachers, perhaps the leading voluntary association of law professors. Lempert was the head of the Directorate for Social, Behavioral, and Economic Sciences at the National Science Foundation. The intention of their letter was clear—to scare the Review editors away from publishing Sander’s essay.

In the end, the Review editors did not concede to Lempert and Chambers’ request.

But then something curious happened. The Stanford administration intervened. As Sander notes:

[The Review’s editors] did, however, agree to a request from Stanford’s administration that they publish multiple responses to [Sander’s] “Systemic Analysis.” A national competition was announced, and they received dozens of proposals. What neither the entrants nor I knew at the time was that the Review editors, again under pressure from their school administrators, would only publish critiques of “Systemic Analysis.” Commentators who found my analysis persuasive and important were effectively excluded from the competition.

This meant that regardless of actual content, the Stanford Law Review’s follow-up issue would, by design, feature four articles (with a total of eight well-known authors, as it would turn out) all arguing that “Systemic Analysis” was wrong. The idea of giving equal time to my opponents had reached, in this most crucial instance, the level of caricature.[xiv]

With perhaps only one exception, the critiques published in the Stanford Law Review had an odd quality. Although the authors were, at least nominally, highly respected professional scholars, they adopted a tone more akin to that of the reader-comments section of a blog than of a scholarly journal.

One tactic they adopted was never to mention any positive aspect of Sander’s research. Almost none, for instance, mentioned the first or second fundamental empirical regularities that I note above. Most readers of Sander’s piece consider these his most surprising, important, and solidly documented results. Put aside, if you like, Sander’s claim that the regularities can be explained by racial preferences. Even if you’re not convinced of that claim, you still have to agree that the regularities are interesting and strongly supported. The reason the authors did not mention the regularities, I believe, is that they are so solidly documented. The authors could not think of a way to criticize them, so they decided to ignore them.

Instead of attacking Sander’s main results, they adopted a second tactic: picking around the edges of Sander’s research, that is, finding some ancillary question he addressed and identifying a minor flaw in it. The authors would then either (i) not address whether the flaw actually affected Sander’s main conclusions or, even more dishonestly, (ii) try to deceive the reader into thinking it had major consequences for those conclusions when really it did not.

One example of this tactic involves what is, in my judgment, the closest Lempert and his colleagues came to finding any “flawed” aspect of Sander’s analysis. It concerns a case where Sander examined the relationship between a student’s race and his first-year grades at law school. Sander found that black law students tend to have a large underperformance gap compared to white students. However, Sander also found that, if you control for college grades and LSAT scores, the underperformance gap shrinks to a tiny amount and, for all intents and purposes, is zero.

For this case Sander used a data set where student subjects reported their own race. However, about a quarter of the subjects declined to state their race. Instead of omitting such subjects, Sander treated them as white. He reasoned that since white students didn’t receive racial preferences, and since many of them believed that they were discriminated against, they would have the most incentive not to report their race.

Lempert and his colleagues, however, showed that Sander would have found different results had he (i) omitted such subjects or (ii) treated them as if they were a separate race (that is, treated them as if they were members of a fictional “decline to report” race).
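To make the coding issue concrete, here is a toy simulation in Python. Every number in it is invented for illustration (the group shares, the GPA shifts, the noise); it is not Sander’s or Lempert’s actual model. It merely shows how lumping “decline to state” students with whites can shrink the estimated black coefficient, relative to treating them as a separate group or dropping them:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Invented shares: 60% white, 15% black, 25% decline to state.
race = rng.choice(["white", "black", "declined"], size=n, p=[0.60, 0.15, 0.25])
creds = rng.normal(0.0, 1.0, n)  # hypothetical entering-credentials index

# Invented residual GPA shifts: decliners sit a bit below otherwise-similar whites.
shift = {"white": 0.00, "black": -0.03, "declined": -0.10}
gpa = creds + np.array([shift[r] for r in race]) + rng.normal(0.0, 0.5, n)

def black_coef(mask, separate_declined):
    """OLS of GPA on credentials plus race dummies; returns the Black coefficient."""
    r, c, g = race[mask], creds[mask], gpa[mask]
    cols = [np.ones(r.size), c, (r == "black").astype(float)]
    if separate_declined:
        cols.append((r == "declined").astype(float))
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, g, rcond=None)
    return beta[2]  # coefficient on the Black dummy

everyone = np.ones(n, dtype=bool)
print("decliners lumped with whites:", round(black_coef(everyone, False), 3))  # near zero
print("decliners a separate group: ", round(black_coef(everyone, True), 3))    # near -0.03
print("decliners dropped entirely: ", round(black_coef(race != "declined", False), 3))
```

Under these invented numbers, the lumped specification pulls the white baseline down and makes the black coefficient look smaller, which is the direction of the dispute described above.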

But the latter two methods hardly make a difference substantively. Recall that Sander’s goal was to measure the extent to which first-year black law students underperform relative to white students, once you control for the students’ college grades and LSAT scores. That is, Sander conducted the following thought experiment: Suppose you take a white and a black student who have the same college grades and LSAT scores. To what extent should we expect the black student to have a lower GPA than the white student during their first year of law school? According to Sander’s estimates, the underperformance in such a case is minuscule.

Here, precisely, is how small the underperformance is. Suppose the white and black student have the same college grades and LSAT scores, and suppose, for example, that the white student has a first-year-law GPA that ranks him at the 50th percentile in his class. Given that, what should we expect the black student’s GPA percentile to be? According to Sander’s results, the answer is 49.7. That is, the black student’s underperformance is a tiny 0.3 percent.[xv]

By contrast, if you don’t control for college grades and LSAT scores, black underperformance is huge. That is, at actual law schools, where black and white students do not enter with the same college grades and LSAT scores, the first-year-law GPAs of black students are much lower. Specifically, the GPA of the median black student (i.e. the student who ranks at the 50th percentile among black students) approximately equals the GPA of the 7th-percentile white student. That is, at actual law schools black underperformance is approximately 43 percent.[xvi]
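The conversion from a regression coefficient to a percentile gap is just the standard normal CDF, as footnote [xv] spells out. Here is a minimal check in Python, using only the standard library:

```python
from math import erf, sqrt

def std_normal_cdf(z):
    """Phi(z), the cumulative distribution function of a standard normal."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Controlled comparison: Sander's Black coefficient of -.007 standard deviations.
print(round(100 * std_normal_cdf(-0.007), 1))  # ~49.7, a gap of 0.3 percentile points

# Uncontrolled comparison: the median black student's GPA sits near the
# 7th percentile of the white distribution, a gap of 50 - 7 = 43 points.
print(50 - 7)
```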

Lempert and his colleagues re-conducted Sander’s analysis but treated the “decline to state” subjects in the two ways that I mention above. When they did this, they indeed estimated black underperformance to be higher than Sander had. In fact, whereas Sander found black underperformance to be statistically insignificant, Lempert and his colleagues found it to be statistically significant, albeit just barely.

However, with both sets of analyses, black underperformance was substantively trivial: Whereas Sander estimated the underperformance to be less than one percent, Lempert and his colleagues estimated the underperformance to be about one and a half percent.[xvii] But the latter estimate is still tiny compared to the 43-percent underperformance that we see at actual law schools.

That is, even if you think that Lempert et al.’s methods are superior to Sander’s, Sander’s main point still stands: if law schools simply matched black and white students more equally (which would require eliminating affirmative action), then black underperformance would shrink to a tiny fraction of what we currently observe.

A third tactic that the authors of the critiques adopted is similar to a concept that baseball umpires call “selling the call.” I first heard of the concept after my cousin attended a camp that trained people to be college-level and minor-league umpires.

“Did they teach you,” I playfully asked him, “how to say ‘Yeeeerrrrrr Oouut’,” as I made a hugely exaggerated signal.

“In fact, they teach you the opposite,” my cousin replied. “They tell you to make barely any motion when you make a call. You’re supposed to draw as little attention to yourself as possible.”

“The one exception,” he continued, “is when it’s a really close call. When that happens, they tell you that you can ‘sell the call.’ Which means you do the exaggerated motion like you just did. The idea is to make the managers and fans think you’re completely sure of yourself. That way you keep them from arguing with you and keep the game from getting out of hand.”

As an example of “selling the call,” consider the issue I discuss above, where Sander lumped “decline to state” subjects with white subjects. Recall that Lempert and his colleagues showed that this choice indeed affects Sander’s results. However, as I explained, it only trivially affects the broader point that Sander was making. Whereas Sander’s results show that approximately 99% of black underperformance would disappear if law schools eliminated mismatch, according to the results of Lempert and his colleagues the true number is more like 96% or 97%.
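A quick check of those percentages, using the figures discussed above:

```python
observed_gap = 43.0  # percentile points, with no controls
for label, residual in [("Sander", 0.3),
                        ("Lempert et al., separate category", 1.2),
                        ("Lempert et al., decliners dropped", 1.7)]:
    print(f"{label}: {(observed_gap - residual) / observed_gap:.0%} of the gap disappears")
# -> 99%, 97%, and 96%, respectively
```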

Despite this trivial difference, here’s how Lempert and his colleagues described Sander’s analysis [my emphasis]:

Even more troubling, in performing his analysis of the NSLSP [data], Sander handled students’ race in a puzzling and distorting manner. The NSLSP has an abnormally high rate of missing data about race, with 24.6% (1176 of 4774) of respondents failing to indicate their race. … Sander compounded this missing data problem by lumping those who did not report their race with white respondents… Accordingly, in the analysis of the NSLSP, race appears irrelevant only when the data are mishandled. … Our analyses of both the NSLSP and BPS thus reveal that Sander is wrong when he concludes that the current lower performance by African Americans in law school is “a simple and direct consequence of the disparity in entering credentials between blacks and whites.” (p. 1879)

This series will continue tomorrow on Ricochet.


Notes

[vii] Richard Sander and Stuart Taylor, Jr., Mismatch: How Affirmative Action Hurts Students It’s Intended to Help, and Why Universities Won’t Admit It (Basic Books, 2012), p. 61.

[viii] Sander and Taylor, Mismatch, p. 70.

[ix] Sander and Taylor, Mismatch, p. 70.

[x] Sander and Taylor, Mismatch, p. 70.

[xi] Sander and Taylor, Mismatch, p. 70.

[xii] Sander and Taylor, Mismatch, p. 71.

[xiii] See David L. Chambers, Timothy T. Clydesdale, William C. Kidder, and Richard Lempert, “The Real Impact of Eliminating Affirmative Action in American Law Schools: An Empirical Critique of Richard Sander’s Study,” Stanford Law Review, May 2005, Vol. 57, pp. 1855-1898.

[xiv] Sander and Taylor, Mismatch, p. 71.

[xv] These figures are based on estimates that Sander lists in his paper, “A Systemic Analysis of Affirmative Action in American Law Schools,” Stanford Law Review, November 2004, Volume 57, Number 2, pp. 367-483. In Table 5.2 Sander reports the results of a regression in which the dependent variable is the standardized first-year GPA of a law student. The coefficient on his “Black” variable is -.007. This means that if a black and a white student are otherwise equal, then the black student is expected to have a GPA that is .007 of a standard deviation lower than the white student’s. If the white student has a GPA that ranks at the median of the sample (i.e. he has a 50th-percentile GPA), then his standardized GPA is 0, and the expected standardized GPA of the black student is -.007. If one consults a table of p- and z-values of a standard normal distribution, one finds that when the z-value is -.007, the corresponding p-value is .497. Hence, the expected GPA percentile of the black student is 49.7%.

[xvi] See, for instance, Table 5.3 in Richard Sander’s “A Systemic Analysis of Affirmative Action in American Law Schools,” Stanford Law Review, November 2004, Volume 57, Number 2, p. 431. As the table shows, not counting historically minority schools, the percentile varies between a low of 5 (at midrange public schools) and a high of 8 (at midrange private schools). At historically minority schools the percentile is 24.

[xvii] In the fourth and fifth columns of Lempert et al.’s Table 2, the researchers re-analyze Sander’s model when the “race not reported” students are treated as a separate ethnic group. For this model the “Black” coefficient is -.030. For the standard normal distribution, the corresponding p-value for a z-value of -.030 is .488. Thus, this model estimates black underperformance to be approximately 1.2% (i.e. 50% minus 48.8%). The sixth and seventh columns of the table re-analyze Sander’s model when the “race not reported” students are omitted. For this model the “Black” coefficient is -.042. Similar calculations show that this implies a black underperformance of about 1.7%.
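Those figures are easy to verify with the standard normal CDF; a quick check in Python:

```python
from math import erf, sqrt

def std_normal_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

print(round(50 - 100 * std_normal_cdf(-0.030), 1))  # ~1.2 (separate-category model)
print(round(50 - 100 * std_normal_cdf(-0.042), 1))  # ~1.7 (decliners-omitted model)
```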

Published in Education, Law

There are 7 comments.
1. Arizona Patriot (Member, @ArizonaPatriot)

    My impression is that Charles Murray faced the same type of slanderous and unsubstantiated attacks after publishing The Bell Curve.

    No one wants to face facts that undermine an important component of one’s world view, but this type of evasion seems far more common among leftists.  They don’t look for the truth.  Rather, they look for some excuse or evasion that will allow them to ignore the facts.

2. TG (Thatcher, @TG)

    “Figures don’t lie, but liars sure can figure.”

3. Duane Oyen (Member, @DuaneOyen)

    Let me quote WFB on a similar topic, which perfectly describes this issue.

    “White people are responsible for Situation X.  Also, Situation X does not exist.”

    Uh huh.

4. MJBubba (Member)

Professor Groseclose, many thanks. Your series is good work.

    Sander and Taylor have done a great service.

5. Pilgrim (Coolidge, @Pilgrim)

    Thanks for this series, looking forward to the next installment. Ricochet at its best.

6. Instugator (Thatcher, @Instugator)

    Tim Groseclose:

    Here, precisely, is how small the underperformance is. Suppose the white and black student have the same college grades and LSAT scores, and suppose, for example, that the white student has a first-year-law GPA that ranks him at the 50th percentile in his class. Given that, what should we expect the black student’s GPA percentile to be? According to Sander’s results, the answer is 49.7. That is, the black student’s underperformance is a tiny 0.3 percent. [xv]

    Prof Groseclose,

Looking at the footnote you cite here, the corresponding p-value for this analysis is .497. It doesn’t mean that the underperformance is a tiny .3%. The high p-value means that we are not to reject the H0 hypothesis that the two are equal. This would hold true at the 95% confidence level (the standard for government statistical work) or even the more stringent 99% confidence level.

Most importantly (and relevant to this conversation), it does not in itself support reasoning about the probabilities of the hypotheses; it is only a tool for deciding whether to reject the null hypothesis (H0). So your use of it to attempt to predict the GPA of any student is incorrect.

It promotes a stronger case that affirmative action actually hurts black students, in this case by sending them to schools for which they are unprepared as opposed to schools for which they are prepared.

    Another question to ask of the data is, “Do Black students get into their first choice school more frequently than Non-Black students?”

    Otherwise, an excellent topic and one for which I am actively looking forward to seeing the next installment.

7. Tim Groseclose (Member, @TimGroseclose)

    Instugator: Looking at the footnote you cite here, the corresponding p-value for this analysis is .497. It doesn’t mean that the underperformance is a tiny .3%. The high p-value means that we are not to reject the H0 hypothesis that the two are equal. This would hold true a the 95% confidence interval (the standard for government statistical work) or even the more stringent 99% confidence interval.

    When I wrote p-value, I didn’t mean a statistical test.  What I meant was Phi(-.007), where Phi() is the cumulative distribution function of a standard normal random variable.  Maybe I should’ve used that language instead.
