Do We Overvalue Statistics? — Judith Levy


The fascinating website Edge.org, which publishes well-considered reflections by smart people on very big questions, has made this its 2014 inquiry: What scientific idea is ready for retirement?

The respondents are well worth listening to: they include scientists of many stripes, mathematicians, philosophers, and economists, as well as several knowledgeable science writers and editors (and also, for reasons that are obscure, Alan Alda). The responses are posted one after another in a gigantic stream, forming a kind of alpha-listicle. It’s Buzzfeed for boffins, essentially.

A ton of big scientific ideas get voted off the island on the respondents’ page (see sampling below), but there is one response in particular that I thought might be of interest to our little subset, concerned as we are with things like elections. Emanuel Derman, professor of financial engineering at Columbia, wrote that the power of statistics is an idea worth retiring:

…nowadays the world, and especially the world of the social sciences, is increasingly in love with statistics and data science as a source of knowledge and truth itself. Some people have even claimed that computer-aided statistical analysis of patterns will replace our traditional methods of discovering the truth, not only in the social sciences and medicine, but in the natural sciences too.

…  Statistics—the field itself—is a kind of Caliban, sired somewhere on an island in the region between mathematics and the natural sciences. It is neither purely a language nor purely a science of the natural world, but rather a collection of techniques to be applied, I believe, to test hypotheses. Statistics in isolation can seek only to find past tendencies and correlations, and assume that they will persist. But in a famous unattributed phrase, correlation is not causation.

Science is a battle to find causes and explanations amidst the confusion of data. Let us not get too enamored of data science, whose great triumphs so far are mainly in advertising and persuasion. Data alone has no voice. There is no “raw” data, as Kepler’s saga shows. Choosing what data to collect and how to think about it takes insight into the invisible; making good sense of the data collected requires the classic conservative methods: intuition, modeling, theorizing, and then, finally, statistics.
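Derman's point about correlation is easy to demonstrate: two variables driven by a common hidden cause will correlate strongly even though neither causes the other. A minimal sketch in Python (NumPy assumed; the numbers are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# A hidden common driver -- say, a shared underlying trend -- never observed directly.
trend = np.linspace(0.0, 10.0, n)

# Two variables that each track the trend plus their own independent noise.
x = trend + rng.normal(0.0, 1.0, n)
y = trend + rng.normal(0.0, 1.0, n)

# A strong correlation appears, though x never influences y (or vice versa).
r = np.corrcoef(x, y)[0, 1]
print(f"corr(x, y) = {r:.3f}")  # close to 0.9, yet entirely non-causal
```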

Bart Kosko, an information scientist and professor of electrical engineering and law at USC, responded similarly that statistical independence is an illusion:

It is time for science to retire the fiction of statistical independence. 

The world is massively interconnected through causal chains. Gravity alone causally connects all objects with mass. The world is even more massively correlated with itself. It is a truism that statistical correlation does not imply causality. But it is a mathematical fact that statistical independence implies no correlation at all. None. Yet events routinely correlate with one another. The whole focus of most big-data algorithms is to uncover just such correlations in ever larger data sets. 

Statistical independence also underlies most modern statistical sampling techniques. It is often part of the very definition of a random sample. It underlies the old-school confidence intervals used in political polls and in some medical studies. It even underlies the distribution-free bootstraps or simulated data sets that increasingly replace those old-school techniques. 

White noise is what statistical independence should sound like…  Real noise samples are not independent. They correlate to some degree.
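Kosko's closing observation can be checked numerically: even draws that are generated independently show a small but nonzero sample correlation with themselves. A quick sketch, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(42)
noise = rng.normal(size=100_000)  # "white" noise: independently generated draws

# Lag-1 sample autocorrelation: correlate the series with itself shifted by one step.
ac1 = np.corrcoef(noise[:-1], noise[1:])[0, 1]

# Exactly zero correlation essentially never happens in a finite sample.
print(f"lag-1 sample autocorrelation = {ac1:+.5f}")
```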

Science journalist Charles Seife wrote that “statistical significance” is almost invariably misused, to the point that it has become

a boon for the mediocre and for the credulous, for the dishonest and for the merely incompetent. It turns a meaningless result into something publishable, transforms a waste of time and effort into the raw fuel of scientific careers. It was designed to help researchers distinguish a real effect from a statistical fluke, but it has become a quantitative justification for dressing nonsense up in the mantle of respectability. And it’s the single biggest reason that most of the scientific and medical literature isn’t worth the paper it’s written on.
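Seife's complaint can be reproduced in a few lines: test pure noise often enough and "significant" results appear on schedule, at roughly the advertised 5% rate. A sketch, assuming NumPy; the experiments are entirely synthetic:

```python
import numpy as np

rng = np.random.default_rng(7)
n_tests, n_obs = 1_000, 50

false_positives = 0
for _ in range(n_tests):
    sample = rng.normal(size=n_obs)     # the null is true: the mean really is 0
    z = sample.mean() * np.sqrt(n_obs)  # z-statistic for H0: mean = 0, known sd = 1
    if abs(z) > 1.96:                   # the conventional 5% threshold
        false_positives += 1

# Roughly 5% of these null experiments come out "statistically significant".
print(f"{false_positives} of {n_tests} experiments on pure noise were 'significant'")
```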

What say you, Ricochetti? First, what’s your take on the ever-growing popular reverence for statistics? And second, what scientific idea do you believe ought to be retired? To get your gears turning, here’s the promised selection of ideas the respondents came up with (and check out the site; there are lots more where these came from):

  • Mental illness is nothing but brain illness (Joel Gold and Ian Gold)
  • Animal mindlessness (Kate Jeffery)
  • Altruism (Jamil Zaki and Tor Norretranders)
  • Entropy (Bruce Parker)
  • Moral “blank slate-ism” (Kiley Hamlin)
  • Natural selection is the only engine of evolution (Athena Vouloumanos)
  • Opposites can’t both be right (Eldar Shafir)
  • Left brain/right brain (Sarah-Jayne Blakemore and Stephen M. Kosslyn)
  • The self (Bruce Hood)
  • Beauty is in the eyes of the beholder (David M. Buss)
  • String theory (Frank Tipler)
  • Emotion is peripheral (Brian Knutson)
  • Unification, or a theory of everything (Marcelo Gleiser, Geoffrey West)
  • The intrinsic beauty and elegance of mathematics allows it to describe nature (Gregory Benford)
  • Cause and effect (W. Daniel Hillis)
  • Evidence-based medicine (Gary Klein)
  • There can be no science of art (Jonathan Gottschall)
  • Mouse models (Azra Raza, MD)
  • The clinician’s law of parsimony (aka Occam’s Razor) (Gerald Smallberg, Jonathan Haidt)
  • Standard deviation (Nassim Nicholas Taleb)
  • Artificial intelligence (Roger Schank)
  • Human nature (Peter Richerson)
  • Programmers must have a background in calculus (Andrew Lih)
  • Free will (Jerry Coyne)
  • The Big Bang was the first moment of time (Lee Smolin)
  • One genome per individual (Eric J. Topol, MD)
  • Languages condition worldviews (John McWhorter)
  • Infinity (Max Tegmark)
Published in General

There are 101 comments.

  1. Mike H Coolidge

    Seeing as my wife is a biostatistician with a very keen sense of when statistics is done “right,” I hope people won’t extrapolate whatever “independent statistics” is to all statistics, which is easy to infer from the title.

    Do we overvalue statistics? No, not when it’s actually statistics and not some big data magic meant to invent the illusion of causation.

    • #1
  2. Son of Spengler Member

    Mike H:
    …my wife is a biostatistician with a very keen sense of when statistics is done “right”….

    My day job requires intensive use of statistics, and it frustrates me to no end when people’s ignorance leads them to believe that statistics is all about cherry-picking data to present a prepackaged conclusion. “Damned lies” and all that.

    But I’m pretty sure that’s not what Prof. Derman is talking about. I’ve actually had to rely on his work from time to time, and he is the real deal when it comes to statistics.

    Rather, I think he is addressing a phenomenon Richard Feynman illustrated with the parable of the Emperor’s Nose. Survey the entire population of China, and you can get a robust statistical estimate of the length of the Emperor’s nose. But the estimate would be bogus, because the empirical model is flawed.

    Too much research today relies on its statistical significance for the patina of truth. How many behavioral economics papers are derived from a sample of 24 undergraduates? The research may be statistically unassailable, but the conclusions are often overbroad. Statistical significance is a necessary — but not sufficient — condition.
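    Feynman’s parable is easy to simulate: average enough uninformed guesses and the standard error shrinks toward zero, yet the estimate never gets any closer to the truth. A sketch with made-up numbers (NumPy assumed; every figure here is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

true_nose_cm = 4.2    # the real answer, which nobody surveyed has ever seen
folk_belief_cm = 7.0  # the folk belief the population guesses around

# A million respondents, none of whom has laid eyes on the Emperor.
guesses = rng.normal(loc=folk_belief_cm, scale=2.0, size=1_000_000)

estimate = guesses.mean()
std_error = guesses.std(ddof=1) / np.sqrt(len(guesses))

# The interval is razor-thin -- and centered on the folk belief, not the nose.
print(f"estimate = {estimate:.3f} cm, standard error = {std_error:.4f} cm")
```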

    • #2
  3. Son of Spengler Member

    Separately, I highly recommend this book by Prof. Derman.

    • #3
  4. Son of Spengler Member

    A further thought: In Prof. Derman’s field (and mine), quantitative finance, it is often necessary to build probabilistic models of future market levels. E.g., what is the probability distribution of stock prices one year from now? Standard industry tools make heavy use of past performance. That is, they apply statistical techniques to past behavior. That approach is inadequate in many cases — and widely known to be so — because extreme events are the ones we usually care about, and by definition rare events are rarely observed. It’s a real challenge for the field to develop new probabilistic models that do not rely as heavily on statistical analysis of the past.
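    One way to see why statistics of the past understate extreme events: under a thin-tailed Gaussian model, “5-sigma” moves are vanishingly rare, while a fat-tailed distribution produces them routinely. A sketch, assuming NumPy; the Student-t here is illustrative, not a market model:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

gaussian = rng.normal(size=n)              # thin-tailed model
fat_tailed = rng.standard_t(df=3, size=n)  # Student-t, 3 degrees of freedom: heavy tails

big_moves_gauss = int(np.sum(np.abs(gaussian) > 5))
big_moves_fat = int(np.sum(np.abs(fat_tailed) > 5))

# The fat-tailed series produces thousands of moves the Gaussian calls "impossible".
print(f"|x| > 5: Gaussian {big_moves_gauss} times, fat-tailed {big_moves_fat} times")
```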

    • #4
  5. Franco Member

    91% of ordinary people do not understand statistics but think they do. 82% of the time statistics are used to mislead. 

    There is a story about the statistician who travelled to give lectures and became fixated on his fear that there could be a bomb on his plane. This was pre-9/11 so there weren’t special security measures. To help calm his fears, he calculated the probabilities of a bomb being on his plane. They were very low, but somehow his fears remained. After several angst-ridden flights the statistician got an idea. What were the odds of there being TWO bombs on the same plane? They were infinitesimal. Thereafter the statistician travelled with his own bomb.
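    The joke turns on a conditional-probability error: under independence, carrying your own bomb changes nothing about the chance that someone else brought one. In Python, with hypothetical numbers:

```python
# Hypothetical figure: suppose any given flight carries a 1-in-a-million bomb risk.
p_bomb = 1e-6

# The probability of TWO independent bombs aboard really is tiny...
p_two_bombs = p_bomb * p_bomb

# ...but the statistician conditions on the wrong event. Given that he carries
# his own bomb, the chance of an *additional* bomb is unchanged, by independence:
p_other_given_mine = p_bomb  # P(other | mine) = P(other)

print(p_two_bombs, p_other_given_mine)
```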

    Every poll has a built-in sampling error. What is it? No one who declines to participate in polls is ever polled. We will never know what these people think or how their participation would have affected the poll. It could be that there is little effect, but we don’t know, do we? On various issues I would have to believe the refuseniks would be essential for an accurate (such as it is) poll.

    • #5
  6. Manfred Arcane Inactive

    “professor of financial engineering at Columbia”?

    How about we retire professional titles that inflate the creds of the practicing professionals?  Does anyone agree that any branch of finance deserves the honorific of ‘engineering’?

    • #6
  7. Manfred Arcane Inactive

    – “It [Statistics] is neither purely a language nor purely a science of the natural world, but rather a collection of techniques to be applied, I believe, to test hypotheses.”

    This characterization is too limiting a definition by far.  There are many instances that I am familiar with that involve characterizing the statistics on (= probability distribution for) a range of outcomes in the world for some subsystem of interest.  The frequency with which those outcomes are within some preferred sub-range is then of paramount importance.

    For example, in Ms. Levy’s part of the world, statistics might characterize the dispersion of rocket impacts around a certain high value aim point (say the Dimona nuclear reactor complex) so as to estimate how many Iron Dome interceptors might need to be expended in a raid to protect that complex, assuming errantly aimed rockets fall harmlessly outside the complex.  This estimate then informs the interceptor inventory sizing in its defense.

    This kind of statistics is invaluable.  Examples of this nature abound in the world of engineering.

    • #7
  8. Son of Spengler Member

    Manfred Arcane:
    “professor of financial engineering at Columbia”?
    How about we retire professional titles that inflate the creds of the practicing professionals? Does anyone agree that any branch of finance deserves the honorific of ‘engineering’?

     When universities started adding quantitative finance departments in the ’90s, they all had different takes on where to put them. Depending on the specific university’s culture and strengths, and the research interests of the particular professors, they were variously housed in math departments, economics departments, or engineering departments. In Columbia’s case, it was the latter. Within the field, “financial engineering” has come to describe a particular kind of financial product design, and is no longer broadly used. But names of university departments are not easy to change. 

    • #8
  9. Manfred Arcane Inactive

    This kind of post really cements my attraction to Ricochet, because it has a high intelligence quotient, and stimulates us to look at the world afresh.

    Each of the mentions in the list at the end of the post might be worthy of its own thread, demonstrating how fecund is this posting.

    In the spirit of that list, let me offer up a few related pet ‘political’ attitude bugaboos that I would like to see ‘retired’:

    “Democracy embodies the ultimate degree of progress in political arrangements for any nation.”

    “A political state must needs empower the majority over the minority of citizenry.”

    “Multi-governments within a single state is not a feasible/effective/palatable/viable/stable/felicitous/durable arrangement…”

    • #9
  10. Geronimo Member

    It would be a nice problem to have if societal reliance on statistics only suffered from flaws like assumed  statistical independence.  Franco’s 91% innumeracy statistic seems too low.  An example: the AAUW publishes an annual comparison of median earnings of men and women and the David McCumbers of the world cite it smugly as evidence that women are paid less than men for the same work.   No amount of debunking can pierce the fog.

    • #10
  11. PracticalMary Member

    Franco:
    Every poll has a built-in sampling error. What is it? No one who declines to participate in polls is ever polled. We will never know what these people think or how their participation would have affected the poll. It could be that there is little effect, but we don’t know, do we? On various issues I would have to believe the refuseniks would be essential for an accurate (such as it is) poll.

    I was wondering if the polling is conducted only with people who still have house phones and haven’t gone totally cellular, which would make for an interesting sub-group…

    • #11
  12. Michael Collins Member

    Judith Levy, Ed.:

    • Artificial intelligence (Roger Schank)

    Artificial intelligence will never defeat natural stupidity.

    • #12
  13. user_278007 Inactive

    How to Lie with Statistics is a great book for people who would like to arm themselves against misleading news reports.  (19 out of 10 readers agree.)

    • #13
  14. Michael Collins Member

    This post reminds me of a debate about WalMart that I heard once on the Michael Medved show.   WalMart’s advocate on that program claimed that 80% of the items purchased at Wal-Mart were groceries.   No doubt this statistic is true. If you purchased milk, orange juice, grapes, cereal, and a laptop computer on one visit to WalMart, groceries comprised 80% of the items that you purchased on that trip.   Whatever you think of WalMart, this is a good illustration of why people need to think critically whenever statistics are thrown at them.

    • #14
  15. Carver Inactive

    Unquestionably we overvalue statistics. I am a real estate appraiser and deal in them daily. There are obviously things that can be proven with them and they are useful. However, almost everything I’ve ever heard on the news concerning any sort of real estate statistic has been grossly misleading. Especially in terms of an individual’s being able to tease out something useful for a particular decision. I am sure that applies across other disciplines.

    • #15
  16. Schrodinger's Cat Inactive

    “Today’s scientists have substituted mathematics for experiments, and they wander off through equation after equation, and eventually build a structure which has no relation to reality.” – Nikola Tesla

    “As far as the laws of mathematics refer to reality, they are not certain, and as far as they are certain, they do not refer to reality.” – Albert Einstein

    • #16
  17. PracticalMary Member

    Here’s one: how many statisticians (and other scientists) adhere to macro-evolutionary theory in the face of the statistics and probabilities? 

    I believe statistics has a place however this one has shown me how Evolution polluted all sciences. The big one on this list should be Evolutionary _____ , with a side bar of ‘biology’.  It’s become impossible to separate myth from the science.

    • #17
  18. tabula rasa Inactive

    I’m all for rigorous statistical studies run by honest scientists/scholars attempting to understand causes and correlations.  When done right, they add to the store of human knowledge, and help produce products and services that benefit human beings (pharmaceutical development relies on good statistics).

    Even the social sciences produce valuable studies.  For example, it is beyond dispute that, on average, children raised in two-parent families are more successful in avoiding negative social outcomes (crime, teen pregnancy, dropping out of school, etc.).  Of course, averages being averages, some children in single-parent families thrive and some children in two-parent families become big messes.  But, all things considered, kids do better when they have two parents helping them grow up, a proposition that shouldn’t be all that astounding (but which the left chooses to either ignore or marginalize).

    • #18
  19. civil westman Inactive

    Not all statistics are equal. They are surely useful to characterize, describe and understand systematic observations and correlations with many data points, particularly in controlled experiments upon mechanical things.

    Statistical analysis of the mechanics of biological systems is more problematic, likely due to any number of unknown unknown variables. Population studies seem to result in some “slippage” as to the actual truth of purported results.

    Then, when we get to statistical analysis of political views, social behavior and opinions, it seems that anything goes when it comes to meaning. After all, the most rigorous statistical manipulation of such ephemeral and changeable “data” becomes akin to quantum mechanics, where the uncertainty principle suggests great humility is in order when stating results and making predictions as to future events. What is the standard deviation of an opinion?

    Application of statistical analysis to the social “sciences,” in my mind, is one of the major intellectual errors of our time. Regrettably, statists are undeterred and hardly a day goes by when one cannot read the latest “studies show….,” something or other which supports our betters in their relentless pursuit of power over every aspect of our lives. They can count on the mindless acceptance of “statistics” which yield the desired “result.”

    • #19
  20. user_82762 Inactive


    I was an analytical instrument sales engineer.  We not only sold state of the art laboratory & process instruments but I got involved in big time custom jobs that moved the state of the art to a new state.  My dear father did pure research in a hard science with NSF grants and NIH grants.

    Given all of the above I claim no special knowledge in this field.  However, due to the stress of my instrument job, I created what I call Gawron’s First Law.  It goes as such:

    The Whole Physical World is Just So Many Shades of Grey.  However, If You Lose the Ability to Tell the Difference Between a Shade of Grey That is 99.999% Black and a Shade of Grey That is 99.999% White, You are in Very Big Trouble.

    I would only add that any statistical methodology employed should be peer reviewed by non-ideologues (if there are any left).



    • #20
  21. Gödel's Ghost Inactive

    It’d be good if we could eliminate the nonsensical discipline of “statistics” and just return to the necessary discipline of probability, but only after we line all the frequentists up and shoot them. Or at least get them to understand Probability Theory: The Logic of Science.

    • #21
  22. Z in MT Member

    One of the best things to come out of the financial crisis is that it taught the financial industry this lesson that Dr. Derman is stressing, and the lesson about the limited utility of the standard deviation (stressed by Nassim Nicholas Taleb).  In both cases they go to the limitations of applying Gaussian statistical models (white noise) to economic systems.

    There is no reason to drop statistics altogether, however. What statistics departments should do is stress Bayesian methods, which place the emphasis on transparently including and testing assumptions, acknowledge uncertainty, and bring conclusions back to a very human characteristic: “degree of belief.”
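    A minimal sketch of that Bayesian workflow, in a coin-flip setting with a conjugate Beta prior (all numbers hypothetical):

```python
# Conjugate Beta-Binomial update: start from a Beta(a, b) prior,
# observe k successes in n trials, and the posterior is Beta(a + k, b + n - k).
a_prior, b_prior = 1, 1  # uniform prior: no strong initial degree of belief
k, n = 7, 10             # observed data: 7 successes in 10 trials

a_post = a_prior + k        # 8
b_post = b_prior + (n - k)  # 4

# The posterior mean is the updated "degree of belief" in a success.
posterior_mean = a_post / (a_post + b_post)
print(f"posterior mean = {posterior_mean:.3f}")  # 8/12 = 0.667
```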

    • #22
  23. Z in MT Member


    I don’t think we need to line up and shoot the frequentists, we need to convert them.

    • #23
  24. Gödel's Ghost Inactive

    Oh, and good for Dr. Tipler for taking the experimentalist scissors to string theory.

    • #24
  25. Gödel's Ghost Inactive

    Z, you’re right, of course. I just take it you’re more optimistic than I am. :-)

    • #25
  26. user_82762 Inactive


    What do you make of the left’s tendency to implausible deniability and incredulous credence?



    • #26
  27. Byron Horatio Inactive

    I prefer Cowperthwaite’s old line about why he banned statistics from Hong Kong.  “Because then people will want to plan!”

    • #27
  28. Midget Faded Rattlesnake Member

    Z in MT: What statistics departments should do is stress Bayesian methods, which place the emphasis on transparently including and testing assumptions, acknowledge uncertainty, and bring conclusions back to a very human characteristic: “degree of belief.”

    100% in agreement!!! Or, as a good classical statistician might say, 1,000% in agreement with a 900% margin of error.

    As someone who switched to pure math from science, and then only later tortured myself with the standard statistical curriculum (which is still classical rather than Bayesian), I can report that classical statisticians apparently find it perfectly normal to make assumptions so deeply silly that they’d have even RA Fisher (oft called the father of classical statistics) tearing his hair out in despair. It’s ridiculous to assume away prior information – or to pretend you’re assuming it away when you’re actually not.

    Adding insult to injury (or injury to insult), classical statistics is especially vulnerable to poor experimental design, yet teaching statistics and experimental design separately is apparently standard. And as RA Fisher said, you cannot make an analysis of a poorly-designed experiment (at least, not a classical statistical analysis) – you can only do a post-mortem to find what it died of.

    • #28
  29. Midget Faded Rattlesnake Member

    @Gödel’s Ghost:

    I would have quoted you, but Rico 2.0 renders your quotes null for me. Perhaps you really are achieving ghostly status…

    • #29
  30. Manfred Arcane Inactive

    Ironic, though, that the inclusion of prior beliefs that is part and parcel of Bayesian analysis is what causes it to lose credence with outsiders who want their statistics to be entirely ‘objective’, that is independent from, and inoculated against, prior belief.

    The only cure for this seems to be to disclose that classical treatments result in different answers depending on the set-up (the design of the experiment), which injects its own subjective element notwithstanding appearances.
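    There is a classic concrete instance of this: the same observed data yield different classical p-values depending on how the experimenter intended to stop. A sketch using only the standard library (the numbers follow the textbook binomial-versus-negative-binomial example):

```python
from math import comb

# Same data either way: 9 successes and 3 failures in 12 trials, testing H0: p = 0.5.

# Design A: the number of trials, n = 12, was fixed in advance (binomial sampling).
p_fixed_n = sum(comb(12, k) for k in range(9, 13)) / 2**12

# Design B: the experimenter sampled until the 3rd failure (negative binomial).
# P(exactly k successes before the 3rd failure) = C(k+2, 2) * 0.5**(k+3)
p_stop_rule = 1 - sum(comb(k + 2, 2) * 0.5 ** (k + 3) for k in range(9))

# Same data, different p-values: the design injects its own subjective element.
print(f"fixed-n p-value = {p_fixed_n:.4f}, stop-at-3rd-failure p-value = {p_stop_rule:.4f}")
```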

    There is room for Philosophy to step in here and tease out some interesting perspectives on statistics and its handmaiden, probability, but I doubt if that discipline (if it still exists in the modern world – does anyone know?) has risen to the challenge/opportunity.

    • #30