Do We Overvalue Statistics? — Judith Levy

 

The fascinating website edge.org, which publishes well-considered reflections by smart people on very big questions, has made this its 2014 inquiry: What scientific idea is ready for retirement?

The respondents are well worth listening to: they include scientists of many stripes, mathematicians, philosophers, and economists, as well as several knowledgeable science writers and editors (and also, for reasons that are obscure, Alan Alda). The responses are posted one after another in a gigantic stream, forming a kind of alpha-listicle. It’s Buzzfeed for boffins, essentially.

A ton of big scientific ideas get voted off the island on the respondents’ page (see sampling below), but there is one response in particular that I thought might be of interest to our little subset, concerned as we are with things like elections. Emanuel Derman, professor of financial engineering at Columbia, wrote that the power of statistics is an idea worth retiring:

…nowadays the world, and especially the world of the social sciences, is increasingly in love with statistics and data science as a source of knowledge and truth itself. Some people have even claimed that computer-aided statistical analysis of patterns will replace our traditional methods of discovering the truth, not only in the social sciences and medicine, but in the natural sciences too.

…  Statistics—the field itself—is a kind of Caliban, sired somewhere on an island in the region between mathematics and the natural sciences. It is neither purely a language nor purely a science of the natural world, but rather a collection of techniques to be applied, I believe, to test hypotheses. Statistics in isolation can seek only to find past tendencies and correlations, and assume that they will persist. But in a famous unattributed phrase, correlation is not causation.

Science is a battle to find causes and explanations amidst the confusion of data. Let us not get too enamored of data science, whose great triumphs so far are mainly in advertising and persuasion. Data alone has no voice. There is no “raw” data, as Kepler’s saga shows. Choosing what data to collect and how to think about it takes insight into the invisible; making good sense of the data collected requires the classic conservative methods: intuition, modeling, theorizing, and then, finally, statistics.

Bart Kosko, an information scientist and EE and law professor at USC, responded similarly that statistical independence is an illusion:

It is time for science to retire the fiction of statistical independence. 

The world is massively interconnected through causal chains. Gravity alone causally connects all objects with mass. The world is even more massively correlated with itself. It is a truism that statistical correlation does not imply causality. But it is a mathematical fact that statistical independence implies no correlation at all. None. Yet events routinely correlate with one another. The whole focus of most big-data algorithms is to uncover just such correlations in ever larger data sets. 

Statistical independence also underlies most modern statistical sampling techniques. It is often part of the very definition of a random sample. It underlies the old-school confidence intervals used in political polls and in some medical studies. It even underlies the distribution-free bootstraps or simulated data sets that increasingly replace those old-school techniques. 

White noise is what statistical independence should sound like…  Real noise samples are not independent. They correlate to some degree.
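
A quick toy simulation, not from Kosko’s essay, makes the contrast concrete: statistically independent draws show essentially zero sample correlation, while even a mildly structured stand-in for “real” noise (an AR(1) process, chosen here purely for illustration) correlates strongly with itself.

```python
# Illustrative sketch only: independent white noise vs. correlated "real" noise.
import numpy as np

rng = np.random.default_rng(0)

# Independent draws: the correlation between the series and its own lagged copy
# hovers near zero, as independence requires.
white = rng.normal(size=100_000)
print(np.corrcoef(white[:-1], white[1:])[0, 1])   # ~0.00

# A crude stand-in for real-world noise: an AR(1) process in which each sample
# leans on the previous one. Adjacent samples now correlate strongly.
ar1 = np.zeros(100_000)
for t in range(1, len(ar1)):
    ar1[t] = 0.7 * ar1[t - 1] + rng.normal()
print(np.corrcoef(ar1[:-1], ar1[1:])[0, 1])       # ~0.70
```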

Science journalist Charles Seife wrote that “statistical significance” is almost invariably misused, to the point that it has become

a boon for the mediocre and for the credulous, for the dishonest and for the merely incompetent. It turns a meaningless result into something publishable, transforms a waste of time and effort into the raw fuel of scientific careers. It was designed to help researchers distinguish a real effect from a statistical fluke, but it has become a quantitative justification for dressing nonsense up in the mantle of respectability. And it’s the single biggest reason that most of the scientific and medical literature isn’t worth the paper it’s written on.
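
Seife’s complaint is easy to reproduce in miniature. In the hedged sketch below (illustrative numbers only), there is no real effect at all, yet roughly five percent of simulated “studies” still clear the p < .05 bar by chance; those are the flukes that significance testing dresses up as findings.

```python
# Illustrative only: with no true effect, ~5% of simulated "studies" still come
# out "statistically significant" at p < .05 purely by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_studies, n_per_group = 10_000, 30
significant = 0

for _ in range(n_studies):
    treatment = rng.normal(size=n_per_group)   # no real difference: both groups
    control = rng.normal(size=n_per_group)     # are drawn from the same distribution
    _, p = stats.ttest_ind(treatment, control)
    significant += p < 0.05

print(significant / n_studies)   # ~0.05 -- statistical flukes, ready for publication
```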

What say you, Ricochetti? First, what’s your take on the ever-growing popular reverence for statistics? And second, what scientific idea do you believe ought to be retired? To get your gears turning, here’s the promised selection of ideas the respondents came up with at edge.org (and check out the site; there are lots more where these came from):

  • Mental illness is nothing but brain illness (Joel Gold and Ian Gold)
  • Animal mindlessness (Kate Jeffery)
  • Altruism (Jamil Zaki and Tor Norretranders)
  • Entropy (Bruce Parker)
  • Moral “blank slate-ism” (Kiley Hamlin)
  • Natural selection is the only engine of evolution (Athena Vouloumanos)
  • Opposites can’t both be right (Eldar Shafir)
  • Left brain/right brain (Sarah-Jayne Blakemore and Stephen M. Kosslyn)
  • The self (Bruce Hood)
  • Beauty is in the eyes of the beholder (David M. Buss)
  • String theory (Frank Tipler)
  • Emotion is peripheral (Brian Knutson)
  • Unification, or a theory of everything (Marcelo Gleiser, Geoffrey West)
  • The intrinsic beauty and elegance of mathematics allows it to describe nature (Gregory Benford)
  • Cause and effect (W. Daniel Hillis)
  • Evidence-based medicine (Gary Klein)
  • There can be no science of art (Jonathan Gottschall)
  • Mouse models (Azra Raza, MD)
  • The clinician’s law of parsimony (aka Occam’s Razor) (Gerald Smallberg, Jonathan Haidt)
  • Standard deviation (Nassim Nicholas Taleb)
  • Artificial intelligence (Roger Schank)
  • Human nature (Peter Richerson)
  • Programmers must have a background in calculus (Andrew Lih)
  • Free will (Jerry Coyne)
  • The Big Bang was the first moment of time (Lee Smolin)
  • One genome per individual (Eric J. Topol, MD)
  • Languages condition worldviews (John McWhorter)
  • Infinity (Max Tegmark)
There are 101 comments.
  1. Sabrdance Member
    Sabrdance
    @Sabrdance

    AIG:

I have to disagree again. It is far more important that your theory make sense than that your results support it. Having sketchy theory and good results will almost always mean your paper will be…rejected. Good theory is by far the single most important criterion.

There’s a difference between “makes sense” and “is likely.” Reviewers check for the first, not the second. In fact, “is likely” is a black mark against a paper because “oh, this is obvious, why do we need to publish it?” Which leads to the selection problem in the literature -non-obvious theories with no empirical support never get published, because everyone is looking for .05 p-values. Then, with plausible but sketchy methods or theory, or an unusual sample, it gets a .05, and boom -publication. This isn’t news -we talk about it openly. And other problems, too.

By and large, I’m fine with the review process -but it doesn’t review for truth, it reviews for competence. Consumers of the science have to read the articles and decide whether they think the results stand up -not just say “oh, .05, must be true,” and outsource that judgment to the reviewers.
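
The selection problem is simple enough to simulate (toy numbers of my own, just to make the mechanism visible): give a small true effect to underpowered studies, “publish” only the ones that cross p < .05, and the published literature’s average effect size comes out several times larger than the truth.

```python
# Toy sketch of the selection problem: publishing only p < .05 results from
# underpowered studies inflates the apparent effect size.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
true_effect, n = 0.2, 25        # small real effect, small samples
published = []

for _ in range(20_000):
    treatment = rng.normal(loc=true_effect, size=n)
    control = rng.normal(loc=0.0, size=n)
    _, p = stats.ttest_ind(treatment, control)
    if p < 0.05:                 # only the "significant" studies get written up
        published.append(treatment.mean() - control.mean())

print(np.mean(published))        # roughly 0.6-0.7: several times the true 0.2
```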

    • #61
  2. AIG Inactive
    AIG
    @AIG

    Sabrdance:

    There’s a difference between “makes sense” and “is likely.” Reviewers check for the first, not the second…Then, with plausible but sketchy methods or theory, or an unusual sample, it gets a .05, and boom -publication.

Sorry, but gonna have to disagree, again. You say they check for “makes sense”, but then say “with sketchy theory”. These two are mutually exclusive. You can’t have sketchy theory if it “makes sense”. “Makes sense” and “is likely” are the same thing. That’s why hypotheses are worded in terms of…how the likelihood of something is affected. If your theory makes sense, then it should be likely within the constraints of the theory.

    If it weren’t likely, you couldn’t test it.

    An “unusual sample” is especially problematic because it reduces generalizability, which makes the likelihood of publication (in a top journal) even lower. 

Top priority is that the theory “makes sense” (i.e., is likely). 

The blog you linked to doesn’t say anything particularly “worrying”. Academia is getting more specialized, and of higher quality. Yep. That’s my interpretation of that ;)

    • #62
  3. AIG Inactive
    AIG
    @AIG

    Sabrdance:

By and large, I’m fine with the review process -but it doesn’t review for truth, it reviews for competence. Consumers of the science have to read the articles and decide whether they think the results stand up -not just say “oh, .05, must be true,” and outsource that judgment to the reviewers.

That’s not how science works. The way science works is that if the “consumers of science” think that the results of a paper are problematic, or the theory is “sketchy”, they are free to write their own paper contradicting the published results. 

    Of course, the trick is, if you think the theory is “sketchy”, you’re going to have to come up with a way of testing that their theory is sketchy, but not yours. Easy to do in words, not so much in empirics. 

    And yes, reviewers don’t judge based on “truth”, because that’s not their job. 

    • #63
  4. Sabrdance Member
    Sabrdance
    @Sabrdance

    AIG, the theory that giving money to poor people will make their lives better “makes sense.”

Have you read the social science literature?  It is so filled with goofy samples that we have a name for the phenomenon: “the college sophomore problem.”  Almost no social science has a truly random sample because random samples are expensive to collect and, after spending all the money, some random event happens and contaminates your sample.  Even in top journals we use small samples, samples that are geographically compact (and therefore probably strange in some way), and we use exotic methods to tease out tiny variations.

    I’m very gently trying not to pull rank, but I’m a working -albeit newly minted -social scientist, and I can tell you that the idealized world you are describing is not the world I work in.  Not only does it not exist, almost everything I’ve mentioned is taught in PhD programs precisely because the problems are known to exist and be widespread.  There are entire literatures on publication bias, widespread methodological errors that reviewers don’t catch, the perennial “no one bothers theorizing anymore because they can just datamine,” and “no one ever checks the math/numbers.”

    • #64
  5. Sabrdance Member
    Sabrdance
    @Sabrdance

    Just to look at some samples published in the last quarter (selected from ISI Web of Knowledge citations for the Ioannidis, 2005 JAMA article everyone uses):

    Ioannidis, Journal of Economic Surveys – “What’s to know about the credibility of Empirical Economics.”

    Camfield and Palmer-Jones, Journal of Development Studies – “The 3 R’s of Econometrics: Repetition, Reproduction, and Replication.”

Ioannidis, Biostatistics -“Why ‘An Estimate of the science-wise false discovery rate and application to the top medical literature’ is false.”

    Gelman and O’Rourke, Biostatistics – “Difficulties in making inferences about scientific truth from distributions of published p-values.”

    Cumming, Psychological Science – “The New Statistics -Why and How” (a discussion of methods for avoiding the flaws in null-hypothesis testing)

    Marino, Biochemical Pharmacology – “The Use and Misuse of Statistical Methodologies in Pharmacology Research.”

    Van Assen, PLOS One – “Why publishing everything is more effective than selective publishing of statistically significant results”

Funder et al, Personality and Social Psychology Review – “Improving the Dependability of Research in Personality and Social Psychology: Recommendations for Research and Educational Practice.”

     

    And 1100 more citations over the past decade.

     

    These problems aren’t a secret -they are active issues in social science now.

    • #65
  6. AIG Inactive
    AIG
    @AIG

    Sabrdance, again, I have to disagree (this is becoming a habit of mine)

1) You don’t need to pull rank on me. My rank might be higher than, or the same as, yours ;) But we don’t need to get into that.

    2) It all depends on the particular discipline. Again, I have no doubt that several social science disciplines lack methodological rigor. You are applying this, however, broadly to other disciplines which have extremely high rigor (such as economics or business disciplines or psychology and political science).

    3) You don’t always need random samples. In some fields, you can capture the entire universe of applicable cases rather easily (think of finance, for example). In others, there’s experimentation. In others still, there’s nothing wrong with looking at a subset, as long as you can show how it is, or isn’t biased. 

4) There are always going to be problems with whatever method you use. This is not an argument for not using statistics. 

    5) Sorry, but theory still trumps everything. In my field at least, you cannot get away without a good theory. You can “datamine”, but you still can’t publish without a good theory.

    • #66
  7. AIG Inactive
    AIG
    @AIG

    6) The “publication bias” argument is flawed. It is flawed for two reasons, I think:

a) Null hypothesis results…can…and are published. However, they have to be theoretically interesting to be publishable, and actually require a greater level of scrutiny because of the nature of a null result vs. a significant result. The fact is that in 99% of cases where papers are not published, the null results are not theoretically interesting, and hence don’t get published. It is not…theoretically…interesting or rigorous to spend 10 pages theorizing about why there should be a result, only to find nothing. I.e., again it’s the…theory…that is driving the selection. Those that theorize a null result, however, have an uphill battle.

    b) Most disciplines have journals dedicated to theory only. If you want to theorize without empirical support, go for it there. Hence, “no results” is not an impediment to publishing your theory. The problem is, most of that stuff isn’t theoretically interesting, or rigorous, to begin with.

    • #67
  8. Manfred Arcane Inactive
    Manfred Arcane
    @ManfredArcane

    Here’s proof of the power of some varieties of statistical inference:

    http://www.spacewar.com/reports/New_infrared_technique_aims_to_remotely_detect_dangerous_materials_999.html

    • #68
  9. Midget Faded Rattlesnake Member
    Midget Faded Rattlesnake
    @Midge

    Manfred Arcane:
    Ironic, though, that the inclusion of prior beliefs that is part and parcel of Bayesian analysis is what causes it to lose credence with outsiders who want their statistics to be entirely ‘objective’, that is independent from, and inoculated against, prior belief.
The only cure for this seems to be to disclose that the classical treatments (such as in the design of experiments) result in different answers depending on the set up (design of the experiment), which injects its own subjective element notwithstanding appearances.
    There is room for Philosophy to step in here and tease out some interesting perspectives on statistics and its handmaiden, probability, but I doubt if that discipline… has risen to the challenge/opportunity.

    Michael Polanyi was by training a physical chemist, not a statistician. But he was also a polymath who wrote extensively about the epistemology of science, as well as about economics and freedom.

    As someone who, you know, actually  did  science before writing about the philosophy of science, he had acute insight into how scientists use prior knowledge to investigate the world. You might like his stuff.

Even scientific knowledge is personal knowledge, and scientific curiosity is driven by passion.

    • #69
  10. Caryn Thatcher
    Caryn
    @Caryn

    So much to add…all these years and finally, something firmly in my field! Now I don’t know where to start.

    Yes, statistics are widely misused and abused and above all are misunderstood by a good proportion of the public.  This is also, interestingly, the case for many scientists.

I did my doctoral level biostatistics training at a very well known school in the department of epidemiology, subspecies infectious disease.  I was horrified, BTW, to discover that epidemiology was actually a social science in its lack of rigor (my undergrad was Chemistry & Biology).  Yes, they used statistics widely and with great confidence in their findings (literally and figuratively).  Unfortunately, as is often (and necessarily) the case with social sciences, they were applying 3 or 4 significant figure level statistical testing to one significant figure level data.  That’s largely the root of the problem, I think.

    One of the best professors I ever had in many, many years of higher ed, was Ron Brookmeyer, now at UCLA.  He would frequently discuss results being “statistically significant,” but would never define a p-value.  Of course, that made us a little crazy, being note-taking nerds, er…students.  He said…(TBC)

    • #70
  11. CandE Inactive
    CandE
    @CandE

Franco: There is a story about the statistician who travelled to give lectures and became fixated on his fear that there could be a bomb on his plane. This was pre-9/11 so there weren’t special security measures. To help calm his fears, he calculated the probabilities of a bomb being on his plane. They were very low, but somehow his fears remained. After several angst-ridden flights the statistician got an idea. What were the odds of there being TWO bombs on the same plane? They were infinitesimal. Thereafter the statistician travelled with his own bomb. 

     I lol’ed.  Thanks.

    -E

    • #71
  12. AIG Inactive
    AIG
    @AIG

Caryn: I did my doctoral level biostatistics training at a very well known school in the department of epidemiology, subspecies infectious disease.  I was horrified, BTW, to discover that epidemiology was actually a social science in its lack of rigor (my undergrad was Chemistry & Biology).

    “Social science” varies quite a lot in its methodological rigor. Epidemiology is certainly not as “rigorous” when it comes to statistics as many social sciences, such as econ, business or political science (it’s actually very far behind, I would say).  Certainly the level of biostatistical training one gets in an epi department isn’t representative of biostatistics in general. (having known some epi PhDs who would only qualify as…introductory…on their statistical knowledge)

That being said, biostatistics is another one of those “applied” tools which is constantly reinforced through practice (like marketing or political science or finance). I.e., criticism can be leveled at anything, but people who bet money on the outcomes of these applied tools, I’m quite certain, aren’t fools for using suspect tools. 

    • #72
  13. AIG Inactive
    AIG
    @AIG

    Manfred Arcane:
    Here’s proof of the power of some varieties of statistical inference:
    http://www.spacewar.com/reports/New_infrared_technique_aims_to_remotely_detect_dangerous_materials_999.html

     Yep. “Statistics” does a heck of a lot of things that most people simply don’t pay attention to…all with the same “suspect” assumptions that everyone seems to be criticizing.

All the pattern recognition imaging software that runs everything from real-time weapons targeting to facial recognition on your phone camera; even information compression in most software used at home…is done through partial least squares regression. Somehow, that’s “good enough” for the hundreds of times a day we use them, but is “suspect assumptions” when used for hypothesis testing in an academic paper ;)

    Of course, to criticize the assumptions of any particular method is easy enough. Anyone can do it. Figuring out a better alternative, now that’s the trick! 

    • #73
  14. Gödel's Ghost Inactive
    Gödel's Ghost
    @GreatGhostofGodel

AIG: “As for people trying to push Bayesian on us, the burden of proof is on you to demonstrate why those methods work…better…in the majority of cases…than the current tools. Theory of science aside, practically, what is the superiority? I can certainly see the superiority in a few rare applications. But for most applications, it’s a waste of time.”

    Actually, it’s exactly the other way around: it’s frequentism that has no means of dealing with situations that can’t be modeled as a series of “random experiments,” and Bayesian probability has now been proven overwhelmingly successful in the field. But there’s no need to debate this; all the i’s are dotted and t’s crossed in Probability Theory: the Logic of Science, The Theory That Would Not Die, and Machine Learning: A Probabilistic Perspective. “The Theory That Would Not Die” is particularly eye-opening/infuriating: it documents several cases where users of Bayes’ theorem had to lie about it to overcome exactly the uninformed insistence on orthodox statistics on display here.

    Update: how could I forget Data Analysis: A Bayesian Tutorial? This is the fastest read, and also the most immediately practical, with code in C.
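
If you want the flavor of “including prior beliefs” without opening the books, here is a minimal, self-contained sketch (my own toy numbers, and the simplest conjugate case) of a Bayesian update: a Beta prior over a coin’s bias combined with observed flips gives a closed-form posterior, no series of imagined “random experiments” required.

```python
# Minimal beta-binomial example of Bayesian updating (toy numbers, illustration only).
# A Beta(a, b) prior over a coin's probability of heads is combined with observed
# flips to give a Beta posterior in closed form.
from scipy import stats

a_prior, b_prior = 2, 2          # mild prior belief that the coin is roughly fair
heads, tails = 7, 3              # observed data

a_post, b_post = a_prior + heads, b_prior + tails
posterior = stats.beta(a_post, b_post)

print(posterior.mean())          # ~0.64: the data's 0.7, pulled toward 0.5 by the prior
print(posterior.interval(0.95))  # a 95% credible interval for the coin's bias
```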

    • #74
  15. Gödel's Ghost Inactive
    Gödel's Ghost
    @GreatGhostofGodel

AIG: All the pattern recognition imaging software that runs everything from real-time weapons targeting to facial recognition on your phone camera; even information compression in most software used at home…is done through partial least squares regression.

    This is false, which makes it hard to know what to make of the rest of your post.

    • #75
  16. Manfred Arcane Inactive
    Manfred Arcane
    @ManfredArcane

    I luv least squares.  Just luv it.  The idea that you can find the ‘best’ representative in some category of useful things in mathematics is truly what makes mathematics so cool.  If Reality deviates from the assumptions needed to make the LS solution truly optimal, well then Reality is just going to have to change, sorry.

    • #76
  17. AIG Inactive
    AIG
    @AIG

    “Actually, it’s exactly the other way around: it’s frequentism that has no means of dealing with situations that can’t be modeled as a series of “random experiments,” and Bayesian probability has now been proven overwhelmingly successful in the field. ”

    Right, so as I said, with some exceptions. You just gave me an exception, and declared victory.

    “This is false, which makes it hard to know what to make of the rest of your post.”

Someone ought to tell this to Google real fast. They’re using PLS for pattern recognition, without knowing that that’s not how it’s done. 

    • #77
  18. user_18586 Thatcher
    user_18586
    @DanHanson

    Caryn:

    …they were applying 3 or 4 significant figure level statistical testing to one significant figure level data. That’s largely the root of the problem, I think.

    A perfect summation of one of the big problems I see with statistical analysis in the social sciences.

I’m used to applying statistical methods in engineering, where we spend as much time quantifying the errors in the data as we do in analyzing the results.  We do experiments to measure the standard error of measuring systems, etc.  We totally understand that finding small effects through statistical analysis requires a deep understanding of the nature of the raw data and tight control over measurement accuracy.

    Then I look at the social sciences, and see studies where poorly controlled survey data is collected into huge sample sets in ‘overpowered’ studies, then the data is sliced and diced repeatedly until correlations are found.  Because the sample sizes are large, you can find very small effects that generate impressive P values,  but which don’t necessarily mean anything.  In the meantime, the raw data is extracted from fundamentally error-prone processes – with the nature of the errors poorly understood.
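
A toy simulation (made-up numbers, purely to illustrate the point) shows how an effect far too small to matter still produces an impressive-looking p-value once the sample gets big enough:

```python
# Toy demonstration: a negligible effect becomes "highly significant" once the
# sample is large enough. Numbers are made up for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
tiny_effect, n = 0.03, 200_000   # an effect of 0.03 standard deviations

group_a = rng.normal(loc=tiny_effect, size=n)
group_b = rng.normal(loc=0.0, size=n)

t, p = stats.ttest_ind(group_a, group_b)
print(p)                                  # typically p << 0.001
print(group_a.mean() - group_b.mean())    # ~0.03 -- practically meaningless
```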

    • #78
  19. AIG Inactive
    AIG
    @AIG

    Dan Hanson: Then I look at the social sciences, and see studies where poorly controlled survey data is collected into huge sample sets in ‘overpowered’ studies, then the data is sliced and diced repeatedly until correlations are found.  Because the sample sizes are large, you can find very small effects that generate impressive P values,  but which don’t necessarily mean anything.  In the meantime, the raw data is extracted from fundamentally error-prone processes – with the nature of the errors poorly understood.

Sorry, but nope. First you say “survey data”, then you talk about “large sample size”. The two almost never go together. Second, survey data is some of the most difficult sort to get published (in top journals) precisely because of the mountain of validity and sample-selection tests, etc., that are needed. So your criticism that the “error” is poorly understood and the surveys are “poorly controlled” simply doesn’t hold up to the reality of what goes into top-tier journals in…most…social science disciplines; those issues are not only well understood, they have to be addressed before publication. 

    Not sure what some here are reading to get their opinions. Certainly it isn’t top tier journals, in rigorous social science disciplines.

    • #79
  20. AIG Inactive
    AIG
    @AIG

    PS: Having been an engineer, trained in that sort of statistics, and now a “social scientist” trained in this sort of statistics, I can say with a high degree of confidence that no engineer is even equipped to “understand” the methods used in most social sciences. If you want to be able to understand them, correctly, you’re going to need to read one of “their” statistical books. I’d recommend Gary King at Harvard. 

    I don’t mean to be “combative” or anything, but this “bashing social science” based on anecdotal evidence or having read an NBER paper, isn’t exactly the right approach.

    • #80
  21. Gödel's Ghost Inactive
    Gödel's Ghost
    @GreatGhostofGodel

AIG: “I can say with a high degree of confidence that no engineer is even equipped to ‘understand’ the methods used in most social sciences…”

    Based on what you’ve written here: no, you can’t.

    “I don’t mean to be “combative” or anything…”

    Then stop.

    “…this ‘bashing social science’ based on anecdotal evidence or having read an NBER paper, isn’t exactly the right approach.”

    You’re dealing with at least one expert in probability theory and machine learning who has pointed you to entire books and software on the state of the art on the subject. When you’re in a hole, it’s really best to quit digging.

    • #81
  22. AIG Inactive
    AIG
    @AIG

    “You’re dealing with at least one expert in probability theory and machine learning who has pointed you to entire books and software on the state of the art on the subject. When you’re in a hole, it’s really best to quit digging.”

    That’s great. Then perhaps you can address the issues without trying to deflect. As far as I can see, you’ve only argued here for Bayesian, and not necessarily on the issue at hand. 

My comment on that was that despite the theoretical statistical disputes, and some specific circumstances, the issue still remains about the…practical…advantages. If there are none, in most cases, then it’s not a particularly relevant issue. If not a totally tangential issue to the OP. I said precisely 2 sentences on it, but you seem to have taken great offense at it. Fine, I’ll take it back, as long as you’re willing to answer without deflection :) 

    Now, the criticism in the OP was that statistics in social sciences has problems. So far, I haven’t seen any serious arguments or evidence for this, despite 9 pages of conversation. And yes, I am also someone who makes a living on…applying…these tools…in academia

    • #82
  23. Midget Faded Rattlesnake Member
    Midget Faded Rattlesnake
    @Midge

    AIG:
    Now, the criticism in the OP was that statistics in social sciences has problems. So far, I haven’t seen any serious arguments or evidence for this, despite 9 pages of conversation. And yes, I am also someone who makes a living on…applying…these tools…in academia

    Have you  never  met people over-eager to give statistical massages to data gleaned from poorly-designed experiments? You are perfectly right to say, “If that is the case, then the fault is not the statistics themselves, but in the people abusing statistics.” But it’s not wrong to note that a tool amenable to widespread abuse has some problems, even when it’s a very useful tool.

    Obviously, you have knowledge of the social sciences that I lack. But when social scientists are asking each other, “Maybe Mechanical Turk surveys don’t tell us as much about people (as opposed to MT users) as we’d like to assume,” or “Is there maybe too much of a tendency to draw conclusions about humanity as a whole from a very WEIRD demographic?”  I’d say they themselves are showing concern about the abuse of statistics in the social sciences.

    • #83
  24. AIG Inactive
    AIG
    @AIG

    Midget Faded Rattlesnake:

     Yes, but…

1) Providing evidence from people in these very same disciplines, criticizing their measures, implies that they do in fact understand the limitations and drawbacks and are not, in fact, simply taking results at face value. I.e., the criticism that “all you need is statistical significance”, or “these social science guys just don’t understand the limitations or the assumptions”, isn’t true. 

    2) Venue matters. The rigor required for a top journal, depending on the discipline, is widely different from lower journals. The difference between social science disciplines is even greater.

    3) There are huge bodies of literature in all these disciplines dealing with all the shortcomings you point to, and in top tier journals you’re expected to address all these concerns. I.e., simply pointing out the limitations of the methods doesn’t mean that people don’t take them into account, or address them. 

All this indicates to me that the social sciences are working exactly as they are supposed to. People discuss methods, criticize, and revise. Bad methods get weeded out; better methods get developed. What’s the alternative? None seems to be presented so far. 

    • #84
  25. AIG Inactive
    AIG
    @AIG

    I.e., one can’t argue that these “social science guys” don’t know what they’re doing, by then linking to articles from those same “social science guys” where they talk about how they know what they’re doing ;) 

    And academic articles aren’t intended to “prove” something, or to be taken at face value. They are intended for others to read, and improve upon. If indeed this “problem” is prevalent, then this would be a great opportunity for someone to write some papers improving on them :) Apparently, that seems easier said…than done. What’s the reason for that?

    I.e., can one criticize “social science” because it develops better methods over time? That’s not a critique! 

PS: Remember how everyone was saying that Nate Silver just HAD to be wrong because all these “surveys and polls” etc. were all flawed? Turns out, they know what they’re doing. Gallup and Nielsen don’t get paid to do a poor job.

    • #85
  26. captainpower Inactive
    captainpower
    @captainpower

    On another thread, I posted a video called “The Joy of Stats.”

    While I can sympathize with the idea that we overvalue statistics, let’s keep what’s good without overrelying on them.

    For example, through statistics we can see the increase in the standard of living over time.

    captainpower:

    Hans Rosling’s 200 Countries, 200 Years, 4 Minutes – The Joy of Stats – BBC Four

    Uploaded on Nov 26, 2010

    More about this programme: 

    http://www.bbc.co.uk/programmes/b00wgq0l 

    … In this spectacular section of ‘The Joy of Stats’ he [Hans Rosling] tells the story of the world in 200 countries over 200 years using 120,000 numbers – in just four minutes. Plotting life expectancy against income for every country since 1810, Hans shows how the world we live in is radically different from the world most of us imagine.
    … watch 30 seconds starting at the 4 minute mark where you can see that the poorest countries today are better off than the richest countries a hundred years ago.
    http://youtu.be/jbkSRLYSojo?t=4m

    • #86
  27. user_740328 Inactive
    user_740328
    @SEnkey

I would love to see a post on each or any of those ideas. I tire of this “democracy is perfect” tripe.

    • #87
  28. user_740328 Inactive
    user_740328
    @SEnkey

    Manfred Arcane:
    This kind of post really cements my attraction to Ricochet, because it has a high intelligence quotient, and stimulates us to look at the world afresh.
    Each of the mentions in the list at the end of the post might be worthy of its own thread, demonstrating how fecund is this posting.
    In the spirit of that list, let me offer up a few related pet ‘political’ attitude bugaboos that I would like to see ‘retired’:
    “Democracy embodies the ultimate degree of progress in political arrangements for any nation.”
    “A political state must needs empower the majority over the minority of citizenry.”
    “Multi-governments within a single state is not a feasible/effective/palatable/viable/stable/felicitous/durable arrangement…”

These are the ideas to which I was referring.

    • #88
  29. user_1184 Inactive
    user_1184
    @MarkWilson

    AIG: And academic articles aren’t intended to “prove” something, or to be taken at face value.

    But journalists and the general public think that is exactly what they are intended for.  Because Science!

    • #89
  30. Gödel's Ghost Inactive
    Gödel's Ghost
    @GreatGhostofGodel

    AIG #32: “Bayesian has even more shortcomings and limitations than the ‘standard’ statistical tools.”
     
AIG #55: “As for people trying to push Bayesian on us, the burden of proof is on you to demonstrate why those methods work…better…in the majority of cases…than the current tools. Theory of science aside, practically, what is the superiority? I can certainly see the superiority in a few rare applications. But for most applications, it’s a waste of time.”
     
Just a quick reminder, AIG, of what I’ve been responding to: two false claims, the counterarguments to which don’t come from anecdotes or amateurs reading some second-tier journal, but from physicists, econometricians, and yes, engineers at Google, the largest user of Bayesian data analysis on the planet. You asked for proof that Bayesian methods are better in the majority of cases. You now have books and their bibliographies to pursue at your leisure, and software to try on whatever data you like.
     
    Final note: I wouldn’t, in fact, make such a big deal of this if the abject ignorance nearly everyone stuck in the swamp of orthodox statistics displays on the subject hadn’t set science back approximately a century. If working scientists and engineers hadn’t had to lie about using Bayes’ theorem, knowing it works, because the Fisher/Pearson/Neyman mafia insisted it didn’t (without bothering to check actual results). And, finally, if your own dogmatism on the subject weren’t so extreme.
     
    So if I were you, I’d start reading. Because you have a lot to learn.
     

    • #90