Do We Overvalue Statistics? — Judith Levy

 

The fascinating website edge.org, which publishes well-considered reflections by smart people on very big questions, has made this its 2014 inquiry: What scientific idea is ready for retirement?

The respondents are well worth listening to: they include scientists of many stripes, mathematicians, philosophers, and economists, as well as several knowledgeable science writers and editors (and also, for reasons that are obscure, Alan Alda). The responses are posted one after another in a gigantic stream, forming a kind of alpha-listicle. It’s Buzzfeed for boffins, essentially.

A ton of big scientific ideas get voted off the island on the respondents’ page (see sampling below), but there is one response in particular that I thought might be of interest to our little subset, concerned as we are with things like elections. Emanuel Derman, professor of financial engineering at Columbia, wrote that the power of statistics is an idea worth retiring:

…nowadays the world, and especially the world of the social sciences, is increasingly in love with statistics and data science as a source of knowledge and truth itself. Some people have even claimed that computer-aided statistical analysis of patterns will replace our traditional methods of discovering the truth, not only in the social sciences and medicine, but in the natural sciences too.

…  Statistics—the field itself—is a kind of Caliban, sired somewhere on an island in the region between mathematics and the natural sciences. It is neither purely a language nor purely a science of the natural world, but rather a collection of techniques to be applied, I believe, to test hypotheses. Statistics in isolation can seek only to find past tendencies and correlations, and assume that they will persist. But in a famous unattributed phrase, correlation is not causation.

Science is a battle to find causes and explanations amidst the confusion of data. Let us not get too enamored of data science, whose great triumphs so far are mainly in advertising and persuasion. Data alone has no voice. There is no “raw” data, as Kepler’s saga shows. Choosing what data to collect and how to think about it takes insight into the invisible; making good sense of the data collected requires the classic conservative methods: intuition, modeling, theorizing, and then, finally, statistics.

Bart Kosko, an information scientist and professor of electrical engineering and law at USC, responded similarly that statistical independence is an illusion:

It is time for science to retire the fiction of statistical independence. 

The world is massively interconnected through causal chains. Gravity alone causally connects all objects with mass. The world is even more massively correlated with itself. It is a truism that statistical correlation does not imply causality. But it is a mathematical fact that statistical independence implies no correlation at all. None. Yet events routinely correlate with one another. The whole focus of most big-data algorithms is to uncover just such correlations in ever larger data sets. 

Statistical independence also underlies most modern statistical sampling techniques. It is often part of the very definition of a random sample. It underlies the old-school confidence intervals used in political polls and in some medical studies. It even underlies the distribution-free bootstraps or simulated data sets that increasingly replace those old-school techniques. 

White noise is what statistical independence should sound like…  Real noise samples are not independent. They correlate to some degree.
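For readers who want the one-line math behind Kosko's claim (standard textbook material, not from his essay): independence forces the covariance to zero, but zero covariance does not force independence.

```latex
% Independence implies zero correlation:
\[
X \perp Y \;\Longrightarrow\; \mathbb{E}[XY] = \mathbb{E}[X]\,\mathbb{E}[Y]
\;\Longrightarrow\; \operatorname{Cov}(X,Y) = \mathbb{E}[XY] - \mathbb{E}[X]\,\mathbb{E}[Y] = 0.
\]
% The converse fails. The standard counterexample:
\[
X \sim \mathcal{N}(0,1),\quad Y = X^{2}:\qquad
\operatorname{Cov}(X,Y) = \mathbb{E}[X^{3}] - \mathbb{E}[X]\,\mathbb{E}[X^{2}] = 0,
\]
% yet $Y$ is completely determined by $X$.
```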

Science journalist Charles Seife wrote that “statistical significance” is almost invariably misused, to the point that it has become

a boon for the mediocre and for the credulous, for the dishonest and for the merely incompetent. It turns a meaningless result into something publishable, transforms a waste of time and effort into the raw fuel of scientific careers. It was designed to help researchers distinguish a real effect from a statistical fluke, but it has become a quantitative justification for dressing nonsense up in the mantle of respectability. And it’s the single biggest reason that most of the scientific and medical literature isn’t worth the paper it’s written on.
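To see concretely what Seife means by a "statistical fluke" (an illustration of the point, not something from his essay), run a pile of studies on pure noise: roughly five percent of them will clear the p < 0.05 bar even though there is nothing to find.

```python
# Sketch of Seife's "statistical fluke" point: test pure noise many times and a
# predictable fraction of the results will look "statistically significant".
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies, n_per_group = 1000, 30

false_positives = 0
for _ in range(n_studies):
    # Two groups drawn from the same distribution: there is no real effect.
    a = rng.normal(size=n_per_group)
    b = rng.normal(size=n_per_group)
    _, p_value = stats.ttest_ind(a, b)
    if p_value < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_studies} null 'studies' were significant at p < 0.05")
# Expect roughly 50. Publish only those, and the literature fills up with flukes.
```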

What say you, Ricochetti? First, what’s your take on the ever-growing popular reverence for statistics? And second, what scientific idea do you believe ought to be retired? To get your gears turning, here’s the promised selection of ideas the respondents came up with at edge.org (and check out the site; there are lots more where these came from):

  • Mental illness is nothing but brain illness (Joel Gold and Ian Gold)
  • Animal mindlessness (Kate Jeffery)
  • Altruism (Jamil Zaki and Tor Norretranders)
  • Entropy (Bruce Parker)
  • Moral “blank slate-ism” (Kiley Hamlin)
  • Natural selection is the only engine of evolution (Athena Vouloumanos)
  • Opposites can’t both be right (Eldar Shafir)
  • Left brain/right brain (Sarah-Jayne Blakemore and Stephen M. Kosslyn)
  • The self (Bruce Hood)
  • Beauty is in the eyes of the beholder (David M. Buss)
  • String theory (Frank Tipler)
  • Emotion is peripheral (Brian Knutson)
  • Unification, or a theory of everything (Marcelo Gleiser, Geoffrey West)
  • The intrinsic beauty and elegance of mathematics allows it to describe nature (Gregory Benford)
  • Cause and effect (W. Daniel Hillis)
  • Evidence-based medicine (Gary Klein)
  • There can be no science of art (Jonathan Gottschall)
  • Mouse models (Azra Raza, MD)
  • The clinician’s law of parsimony (aka Occam’s Razor) (Gerald Smallberg, Jonathan Haidt)
  • Standard deviation (Nassim Nicholas Taleb)
  • Artificial intelligence (Roger Schank)
  • Human nature (Peter Richerson)
  • Programmers must have a background in calculus (Andrew Lih)
  • Free will (Jerry Coyne)
  • The Big Bang was the first moment of time (Lee Smolin)
  • One genome per individual (Eric J. Topol, MD)
  • Languages condition worldviews (John McWhorter)
  • Infinity (Max Tegmark)
There are 101 comments.
  1. AIG Inactive
    AIG
    @AIG

    I disagree.

    1) Correlation sometimes is causation ;) (such as in structural equation modelling, for example)

2) While the independence assumption almost never holds, that doesn’t mean there aren’t statistical tools that relax that assumption or explicitly model the dependence. There are many very sophisticated models out there that account for all sorts of criticisms that can be thrown at them.

3) Getting statistically significant results is not, by itself, enough to get published. It is simply a minimum threshold telling you that there is…something…going on there. Getting published in most social science journals requires that you also…theorize…about why you got those results (which is the whole point of hypothesis testing, i.e. those variables were not tested at random).

4) These same criticisms can be thrown at every methodology used in the social sciences. In fact, the alternatives to statistics would have far more problems and drawbacks. Statistics is the best tool available. It does not claim to be perfect; neither does any field in the social sciences.

    5) Without statistics in social sciences, I wouldn’t have a job! :)

    • #31
  2. AIG Inactive
    AIG
    @AIG

    Z in MT:

    In both cases they go to the limitations of applying Gaussian statistical models (white noise) to economic systems.

What statistics departments should do, however, is stress Bayesian methods.

     
Sorry, but I don’t think so. The social sciences, of which economics is a part, don’t do “predictions”. They study the past and have to make generalizable assumptions. Saying that there is a possibility of an outcome outside of the “standard deviation” assumptions…isn’t saying much at all. That is known. It is not, however, what social scientists study. 

That being said, there are of course plenty of statistical methods that do not make or need Gaussian assumptions. And they are, of course, widely used in the social sciences. So it is not fair to say that somehow people in the social sciences didn’t “understand” the shortcomings of their methods.

Second, Bayesian methods have even more shortcomings and limitations than the “standard” statistical tools.

    • #32
  3. user_423975 Coolidge
    user_423975
    @BrandonShafer

The problem with statistics, as with any scientific claim, is that “figures never lie, but liars figure.”  Meaning here that statistics are only as reliable as the people producing them.  If the meaning of “retiring statistics” is that a layperson shouldn’t rely on a statistic just because it is a statistic, then I agree.  Too often we see polls come out that purport to mean things that are unsupported by the methods of the polling.   Statistics are a tool, and like any tool they can be used for good or ill, or anything in between.  In the case of science, or the humanities, they can be used to better understand or as a layer of abstraction.  I think one of the most glaring recent examples of the negative use is the claim that “1 in 5 children in America struggle with hunger.” It’s a good example of how a liar figures.

    • #33
  4. Sabrdance Member
    Sabrdance
    @Sabrdance

    Given this is my professional bailiwick, I agree.  Statistics has provoked a somewhat Pavlovian response in the social sciences.  See the .05, and DING!  In order to know whether the result is real we need to read the lit review to see if it is consistent with what has come before, and then read the theory to see how complicated and likely it is.  And we need to check the design to see if it would actually find what we are expecting to find.  Only when the results are consistent and fairly likely does a 95% significance actually mean anything.  If the theory is over-complicated or unconvincing, it should require a vastly higher significance to make us think it anything but the one time we got a weird sample (all the others having been dropped, unpublished, because they were null-results).  If the lit review has contradictory answers, or has very little earlier research, we should take all results with a grain of salt.

As I warn my students: consider the possibility that everything you read is wrong and act accordingly.  Statistics is helpful, but not omnipotent.
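The back-of-envelope arithmetic behind that warning (with made-up rates, purely for illustration): when most of the hypotheses brought to the data are false to begin with, a sizable share of the p < .05 results that do appear are exactly those weird samples.

```python
# Back-of-envelope illustration (invented rates): why an implausible theory
# needs more than p < .05. Suppose only 10% of hypotheses tested are true,
# tests have 80% power, and the false-positive rate is 5%.
prior_true = 0.10
power, alpha = 0.80, 0.05

true_hits  = prior_true * power          # real effects that reach significance
false_hits = (1 - prior_true) * alpha    # flukes that reach significance

share_real = true_hits / (true_hits + false_hits)
print(f"Share of 'significant' findings that reflect a real effect: {share_real:.0%}")
# About 64% with these numbers. Drop the prior plausibility to 2% and the
# share of real findings among the "significant" ones falls below 25%.
```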

    • #34
  5. Yeah...ok. Inactive
    Yeah...ok.
    @Yeahok

    James Gawron:

    implausible deniability and incredulous credence.

    That is how I get through the day.

    I don’t think we overvalue statistics any more than we overvalue surgery.

I do think polls are way overvalued, and I think social scientists often apply statistics to suspect data.

    • #35
  6. AIG Inactive
    AIG
    @AIG

    Sabrdance:
    See the .05, and DING! In order to know whether the result is real we need to read the lit review to see if it is consistent with what has come before, and then read the theory to see how complicated and likely it is. And we need to check the design to see if it would actually find what we are expecting to find. 

     But that’s exactly what every…respectable…social science journal does. So the argument that simply getting a “statistically significant result” is sufficient, is not true. This is why it is hypothesis testing: you first have to make sure that your hypotheses make sense, before testing them. And of course, no…respectable…social science journal is going to let you get away with simply reporting statistically significant results if your sample, constructs, methods etc are not rigorous.

So the critique is moot. What is a better alternative? Even if you are doing experiments, you are still relying on the same methods and assumptions. 

    • #36
  7. user_18586 Thatcher
    user_18586
    @DanHanson

    There is nothing wrong with statistics, properly applied and in the right domain.  Statistical Process Control in engineering has done wonders for improving efficiency.   Many physical laws and principles have a statistical derivation. 

Reading the list above, there is a common thread: the attempt to apply statistics and tools like computer modeling to complex systems with many interactions and feedback loops. That includes the social sciences, economics, and biology. This is the real problem: using scientific tools in domains where they are not appropriate.

    Friedrich Hayek called this ‘scientism’.  In the 20th century the ‘hard’ sciences made huge progress in improving our lives, and this led a lot of soft scientists to decide that their fields needed to be more ‘scientific’ by applying mathematical tools to things that are by their nature fuzzy and not amenable to scientific reductionism.

    Two of the biggest issues of our day are perfect examples of this:  Macroeconomics and climate science.  Both of these issues are driven by people attempting to apply standard modeling and scientific techniques to complex adaptive systems in an attempt to predict their future behavior.  So far, the experts have failed mightily in this, and for good reason.

    • #37
  8. AIG Inactive
    AIG
    @AIG

    Now I would argue that there is a lot of “non-rigorous” theory or statistical analysis that is published in a lot of low-tier journals. That’s certainly true. But that’s why the journals matter. 

Unfortunately, if people only rely on the popular press to hear about “studies” and “results”, the likelihood is that they will hear a lot more about results published in low-tier journals (or in fact in non-peer-reviewed venues, such as NBER or think-tank publications), because they are both more “interesting” (i.e. unlikely to be true) and more understandable to the layperson (i.e. not very complicated analysis). 

But that’s like forming an opinion on the intricate details of astrophysics based on Neil deGrasse Tyson’s or Bill Nye the Science Guy’s show.

    • #38
  9. Midget Faded Rattlesnake Member
    Midget Faded Rattlesnake
    @Midge

    POSTMORTEM ON AN EXPERIMENT

    (From “Statistics for Experimenters”, Box, Hunter, Hunter)

    The scene opens with seven people sitting around a table at a meeting to discuss the results. They are the plant manager, the process superintendent responsible for making the runs on  the pilot plant, a design engineer who proposed modifications B and C, a chemical engineer who suggested modification D, a plant operator who took the samples of product for analysis, an analytical chemist who was responsible for the tests made on the samples, and a part-time data analyst who made the statistical calculations. After some preliminaries the dialogue might go something like this:

    Plant manager (who would be happy if no changes were shown to be necessary):

    I am not convinced that the modifications B and C are any better than the present plant process A. I accept that the differences are highly statistically significant and that,  almost certainly,  genuine differences did occur – but I believe the differences were not due to the process changes that we instituted. Have you considered when the runs were made? I find that all the runs with process A were made on a weekend and that the people responsible for operating the pilot plant at that time were new to the job. During the week, when modifications B, C, and D were made, I see that  different operators  were involved in making the runs.

    Design engineer:

    There may have been some effects of that kind but I am  almost certain  they could not have produced differences as large as we see here.

    Pilot plant superintendent:

Also you should know that I went to some considerable trouble to supervise every one of these treatment runs. Although there were different operators, I’m fairly sure that correct operating procedures were used for all the runs. I am, however, somewhat doubtful as to the reliability of the method of the chemical testing, which I understand has recently been changed. Furthermore I believe that not all the testing was done by the same person.

    Analytical chemist:

    It is true that we recently switched to a new method of testing, but only after very careful calibration trials. Yes, the treatment samples came in at different times and consequently different people were responsible for the testing, but they are all excellent technicians and I am  fully confident  there could be no problem there. However, I think there is a question about the validity of the samples. As we know, getting a representative sample of this product is not easy.

    Plant operator (sampler):

    It used to be difficult to get a representative sample of the product, but you will remember that because of such difficulties a new set of stringent rules for taking samples was adopted some time ago. I think we can accept  that during these trials these rules were exactly followed by the various operators who took the samples.

    Chemical engineer (proposer of method D):

    Before we go any further, are we sure that the statistical analysis is right? Does anyone here  really understand  the Analysis of Variance? Shouldn’t the experiment have been randomized in some way?

    Data analyst:

    I attended a special two-day short course on statistics and can assure the group that  the correct software program  was used for analyzing the data.

    • #39
  10. Z in MT Member
    Z in MT
    @ZinMT

Z in MT: In both cases they go to the limitations of applying Gaussian statistical models (white noise) to economic systems. What statistics departments should do, however, is stress Bayesian methods.

AIG: Sorry, but I don’t think so. The social sciences, of which economics is a part, don’t do “predictions”.

    Z in MT: To the contrary, I was speaking of the financial engineering end of economics where statistical models are not only used to make predictions they are used to bet huge sums of money on those predictions.

    • #40
  11. Z in MT Member
    Z in MT
    @ZinMT

Again, the main benefit of the Bayesian mindset is that it encourages one to include assumptions explicitly in one’s analysis.  This way one can understand how assumptions affect the outcome.  Both frequentists and Bayesians use all the same tools of statistics, except Bayesians like to start with the model, whereas frequentists like to start with the data.
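A minimal sketch of what "starting with the model" looks like in practice (hypothetical poll numbers, a conjugate Beta-Binomial setup): the prior assumption sits explicitly in the calculation, so you can see exactly how changing it moves the answer.

```python
# Sketch of the "assumptions made explicit" point: a conjugate Beta-Binomial
# analysis of a hypothetical poll. The prior is an explicit, inspectable input.
from scipy import stats

successes, trials = 27, 50   # hypothetical: 27 of 50 respondents favor a candidate

priors = {
    "flat prior Beta(1, 1)":        (1, 1),
    "skeptical prior Beta(20, 20)": (20, 20),   # assumes the race is roughly even
}

for name, (a, b) in priors.items():
    posterior = stats.beta(a + successes, b + trials - successes)
    lo, hi = posterior.ppf([0.025, 0.975])
    print(f"{name}: posterior mean {posterior.mean():.3f}, "
          f"95% credible interval ({lo:.3f}, {hi:.3f})")
# Changing the prior visibly changes the interval; the assumption is on the
# table rather than buried in the choice of procedure.
```

A frequentist analysis of the same data starts instead from the sampling distribution of the estimate; both camps use the same machinery, they just write their assumptions down in different places.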

    • #41
  12. user_129539 Inactive
    user_129539
    @BrianClendinen

    Carver:
    Unquestionably we overvalue statistics. I am a real estate appraiser and deal in them daily. There are obviously things that can be proven with them and they are useful. However, almost everything I’ve ever heard on the news concerning any sort of real estate statistic has been grossly misleading. Especially in terms of an individual’s being able to tease out something useful for a particular decision. I am sure that applies across other disciplines.

That is largely because real estate is an inefficient market with very little homogeneity of product types. For averages to have any use or predictive value, you need a reasonable amount of homogeneity in what you are measuring. For example, if my property tax appraiser does not reduce the tax value on my house, I can appeal using the significantly lower appraisals of the many houses around me that are the same size, built within a year of mine by (I think) the same builder, with virtually the same design. Now that is an average in real estate that is meaningful and usable.

    • #42
  13. doc molloy Inactive
    doc molloy
    @docmolloy

    It has been said in many ways but an unsophisticated forecaster uses statistics as a drunken man uses lamp-posts – for support rather than for illumination.

The climate change hype is a good example of bad modelling.

    • #43
  14. Midget Faded Rattlesnake Member
    Midget Faded Rattlesnake
    @Midge

    AIG:

    Sabrdance: See the .05, and DING!

    In order to know whether the result is real we need to read the lit review to see if it is consistent with what has come before, and then read the theory to see how complicated and likely it is. And we need to check the design to see if it would actually find what we are expecting to find.

    But that’s exactly what every…respectable…social science journal does.

    That may be the case for social science. There are parts of biology (I have some specific zoology papers in mind) where I’ve seen the following statistical analysis: “We used the recommended statistical software and bing! this answer popped out!”

    Now, zoologists in particular are often zoologists because they love animals and are good at observing them. They’d rather be out in the pouring rain doing field studies than cooped up inside learning how to do statistical analysis, which is perfectly understandable!
    But I sometimes get a funny feeling about the “this is what the statistical software package told us” approach… If you have no idea how a tool works, how do you know when you’re using it wrong?

    • #44
  15. AIG Inactive
    AIG
    @AIG

    Z in MT:
    To the contrary, I was speaking of the financial engineering end of economics where statistical models are not only used to make predictions they are used to bet huge sums of money on those predictions.

     Sure, but those are not “academic” studies. If people want to apply these tools for their own ends, and suffer the consequences, I have no problems. 

Then again, I think those people are even smarter than academics at figuring out what works and what doesn’t, because they get to see the results of their predictions on a regular basis. If a “prediction” tool works 99% of the time, it’s still better than no such tool, even if the possibility of it failing in a rare occurrence is present. Marketing companies like Nielsen, for example, do this stuff all the time, with great success, and have far more sophisticated tools than academics use. 

    Which brings me to my other point: even if we assume all the critiques are valid, the question is, what is the alternative? Going back to “qualitative” studies and “theorizing” isn’t an answer. 

    • #45
  16. AIG Inactive
    AIG
    @AIG

    Midget Faded Rattlesnake:

    But I sometimes get a funny feeling about the “this is what the statistical software package told us” approach… If you have no idea how a tool works, how do you know when you’re using it wrong?

     Sure. This seems to be more of a problem with the “discipline”. I have noticed this sort of thing in some social sciences too (like sociology) where the overall understanding in the field is very limited. The reviewers aren’t capable of understanding more sophisticated tools. 

That seems to be driven by the lack of proper statistical training at the PhD level in some fields (like sociology). I don’t think the same critique applies to the econ/finance/marketing/strategy disciplines, nor to political science. It might be that in these fields the “academic” studies have to be reconciled with real-world outcomes which are easily observed. Bad theories and bad tools that don’t conform to real-world outcomes are weeded out much faster than in other social sciences. 

    • #46
  17. user_129539 Inactive
    user_129539
    @BrianClendinen


    Z in MT: One of the best things to come out of the financial crisis is that it taught the financial industry this lesson that Dr. Derman is stressing, and the lesson about the limited utility of the standard deviation (stressed by Nassim Nicholas Taleb). In both cases they go to the limitations of applying Gaussian statistical models (white noise) to economic systems.    

No, the problem is even more fundamental. It is a complete failure to understand that statistics does not work with unique events. Yes, you can have similar events/data points and use them as an approximation, as one helpful general guideline to help manage risk, but you can forget about getting a real, statistically meaningful confidence interval. Also, the less homogeneous the events are, the worse your metric will be. There is no way I could ever come up with a data collection method that would allow me to run any real and meaningful regression analysis. Most people don’t realize the approximation becomes worthless even for fairly similar events that happen every few decades. Long-Term Capital was the poster child for this before the recent crash. However, Wall Street never took that case study to heart.

    • #47
  18. Sabrdance Member
    Sabrdance
    @Sabrdance

    AIG:
    But that’s exactly what every…respectable…social science journal does. So the argument that simply getting a “statistically significant result” is sufficient, is not true. This is why it is hypothesis testing: you first have to make sure that your hypotheses make sense, before testing them. And of course, no…respectable…social science journal is going to let you get away with simply reporting statistically significant results if your sample, constructs, methods etc are not rigorous.
So the critique is moot. What is a better alternative? Even if you are doing experiments, you are still relying on the same methods and assumptions.

I haven’t had that many articles reviewed, but I can assure you, even in good journals, this is not what they do.  Publication bias is known and discussed widely in social science circles.  Reviewers look for plausible theories and significant findings.  Most of them consider an empirical finding the validation of the theory, even if the theory is sketchy.  They pass through things which are sketchy so long as the empirics are not obviously wrong.

    We have it worse than medicine, and the famous number there was “60% of published findings are wrong.”

    • #48
  19. user_1184 Inactive
    user_1184
    @MarkWilson

    PracticalMary:
    Here’s one: how many statisticians (and other scientists) adhere to macro-evolutionary theory in the face of the statistics and probabilities?
I believe statistics has a place; however, this one has shown me how Evolution has polluted all the sciences. The big one on this list should be Evolutionary _____ , with a sidebar of ‘biology’. It’s become impossible to separate the myth from the science.

    Mary, I read the article you linked to.  It is a good example of the tendency of humans to abuse probability theory to generate bogus numbers for their favored hypothesis.

    • #49
  20. user_129539 Inactive
    user_129539
    @BrianClendinen

Statistics is not what should be thrown out; on this we all agree. What we need to throw out is the idea that anything can be quantifiably measured, let alone statistically analyzed. The public needs to be taught that it is not a universal tool; knowing its limitations, what it does not do, and which applications it works well on is the key.

    • #50
  21. Midget Faded Rattlesnake Member
    Midget Faded Rattlesnake
    @Midge

    AIG:

    Midget Faded Rattlesnake:

    But I sometimes get a funny feeling about the “this is what the statistical software package told us” approach… If you have no idea how a tool works, how do you know when you’re using it wrong?

    Sure. This seems to be more of a problem with the “discipline”. I have noticed this sort of thing in some social sciences too (like sociology) where the overall understanding in the field is very limited. The reviewers aren’t capable of understanding more sophisticated tools.
    That seems to be driven by the lack of proper statistical training at the PhD level in some fields (like sociology).  I don’t think the same critique applies to econ/finance/marketing/strategy disciplines…

    I won’t disagree with you, as Mr Rattler works in finance, and, as you pointed out, he has a pretty big incentive not to screw the statistics up! (Mr Rattler has become a hardcore Bayesian, incidentally, thanks to Gödel’s Ghost and ET Jaynes’s book.)

    • #51
  22. AIG Inactive
    AIG
    @AIG

    Sabrdance:

Reviewers look for plausible theories and significant findings. Most of them consider an empirical finding the validation of the theory, even if the theory is sketchy. They pass through things which are sketchy so long as the empirics are not obviously wrong.

Sorry, but I have to disagree again. It is far more important that your theory make sense than that your results support it. Having sketchy theory and good results will almost always mean your paper will be…rejected. Good theory is by far the single most important criterion. 

    • #52
  23. user_1184 Inactive
    user_1184
    @MarkWilson

    <Comment deleted by the author because of hopeless formatting issues.>

    • #53
  24. AIG Inactive
    AIG
    @AIG

    Brian Clendinen:
Yes, you can have similar events/data points and use them as an approximation, as one helpful general guideline to help manage risk, but you can forget about getting a real, statistically meaningful confidence interval. Also, the less homogeneous the events are, the worse your metric will be. There is no way I could ever come up with a data collection method that would allow me to run any real and meaningful regression analysis. Most people don’t realize the approximation becomes worthless even for fairly similar events that happen every few decades. 

There are, of course, rare-event methodologies used in such cases. However, you’re right that the events in these cases are so rare, and things like case-design methodologies so impractical, that the social sciences simply don’t look at these things. Hence, it is not a “critique” of the social sciences, as the original article suggests. It’s just not what social science does. 

    They can’t be criticized on what they don’t do. 

    • #54
  25. AIG Inactive
    AIG
    @AIG

As for people trying to push Bayesian methods on us, the burden of proof is on you to demonstrate why those methods work…better…in the majority of cases…than the current tools. Theory of science aside, practically, what is the superiority? I can certainly see the superiority in a few rare applications, but for most applications, it’s a waste of time. 

I have my theory as to why this is the new “thing” that some people are pushing. In essence, it’s the need for a “new thing” to circumvent reviewers who throw lots of roadblocks to well-understood “traditional” methods. I.e., it’s a way to stand out from the crowd. 

    • #55
  26. user_1184 Inactive
    user_1184
    @MarkWilson

    Brian Clendinen:
Statistics is not what should be thrown out; on this we all agree. What we need to throw out is the idea that anything can be quantifiably measured, let alone statistically analyzed. The public needs to be taught that it is not a universal tool; knowing its limitations, what it does not do, and which applications it works well on is the key.

     I agree, and would extend your comment to the scientific method as a whole.  There is an unfortunate trend lately where people are becoming, for lack of a better term, science fanboys.  It becomes a modern style of faith which is expressed as an attitude or posture, whereas it should be a method of approach which is expressed in skepticism and acknowledgement of one’s own fallibility and limited knowledge.  Science Fanboy Exhibit A, whoever made this meme:

    • #56
  27. Midget Faded Rattlesnake Member
    Midget Faded Rattlesnake
    @Midge

    AIG:
    I have my theory as to why this is the new “thing” that some people are pushing. In essence, its the need for a “new thing” to circumvent reviewers which throw lots of roadblocks to well understood “traditional” methods. I.e., it’s a way to stand out from the crowd.

    In Mr Rattler’s and my case, it’s not about circumventing the reviewers. Most of what he writes is a trade secret for his company, and the math I write is fortunately  not  statistics. I notice a lot of physicists tend to be Bayesian…

    For some of us, the Bayesian approach does seem conceptually easier…

    • #57
  28. Son of Spengler Member
    Son of Spengler
    @SonofSpengler

    I go out for the day, and come back to 50 comments…. Too much to respond to. But regarding the financial use of statistics, I’d like to counter some of what has been said.

First, use of statistics varies, primarily between the risk management side of things (more model-driven) and the profit-making side of things (more data-driven). Different models are used for different purposes, with the understanding that all models are approximations of reality and none is a complete representation. The key is understanding which shortcuts are acceptable in which places. Sometimes Gaussian distributions are a reasonable assumption; sometimes they aren’t.

    Second, examples abound of quantitative risk managers who did raise concerns to management in advance of the crisis, only to have management overrule and/or dismiss them. Risk managers were marginalized, sometimes even fired. It’s true that standard risk management approaches were insufficient in certain areas, for example credit risk analysis. The industry has taken those lessons to heart. Also, risk management has become more deeply embedded in the culture of financial organizations since 2008. But there wasn’t, in my opinion, a wholesale problem with the analytical approaches risk managers were using.
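A toy illustration of the "sometimes Gaussian, sometimes not" point (a sketch with invented numbers, not anything from the risk models discussed above): scale a fat-tailed Student-t to the same volatility as a normal distribution and compare how often each expects a five-sigma move.

```python
# Toy sketch: a Gaussian model and a fat-tailed Student-t model, scaled to the
# same volatility, assign wildly different probabilities to extreme moves.
import numpy as np
from scipy import stats

threshold = 5.0                      # a "five-sigma" move
df = 3                               # heavy-tailed Student-t
t_scale = np.sqrt((df - 2) / df)     # rescale so the t also has unit variance

p_gaussian = stats.norm.sf(threshold)
p_fat_tail = stats.t.sf(threshold, df=df, scale=t_scale)

print(f"P(move > 5 sigma), Gaussian : {p_gaussian:.2e}")
print(f"P(move > 5 sigma), Student-t: {p_fat_tail:.2e}")
print(f"The fat-tailed model expects such moves roughly "
      f"{p_fat_tail / p_gaussian:,.0f} times as often.")
```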

    • #58
  29. Son of Spengler Member
    Son of Spengler
    @SonofSpengler

(cont.) Finally, LTCM, and other past examples as well, have long been prominent in risk managers’ minds. The lessons and case studies are standard in certification exams and academic curricula. The problem is that each crisis teaches a different lesson. LTCM was a matter of insufficient capital to conduct a long-term strategy through short-term markets. Risk approaches adapted. The 2008 crisis was driven by a liquidity crunch, poor credit underwriting and analysis, and systemic linkages. Risk approaches have now adapted further.

    There will be another crisis at some point. And then there will be handwringing over how nobody saw it coming. But jettisoning statistical analysis in favor of — what? less empirical, less substantiated hunches? — will not make the financial system or any of its participants any better off.

    • #59
  30. Clavius Thatcher
    Clavius
    @Clavius

There are three kinds of lies: lies, damn lies, and statistics.  Sorry if I repeated something in the thread.

    • #60