What kind of evidence would it take to persuade you that ESP exists? We skeptics say it would take extraordinary evidence. And yet, were we presented with extraordinary evidence, chances are good we’d disbelieve it. That’s irrational, right?
We intuitively form initial estimates of how plausible a claim might be, estimates quantifiable as prior probabilities. When we’re reasoning correctly in a Bayesian fashion, we assign extraordinary claims very low prior probabilities. Not exactly zero, since a prior probability of exactly zero implies that no evidence, however great, could change our mind, and extraordinary shouldn’t mean impossible. But close enough to zero to count as zero for most purposes – although not when we’re asked to re-evaluate the claims themselves.
Classical statistics typically employs a null hypothesis and one alternative hypothesis to evaluate data. The human brain, though, can juggle multiple alternative hypotheses, intuiting from experience each alternative’s prior probability – a measure of its plausibility even before it’s tested against the collected data. Drawing prior probabilities from experience and correctly updating them in light of new evidence is the essence of Bayesian rationality.
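The update itself is just Bayes’ rule applied across all the rival hypotheses at once. Here is a toy sketch – every hypothesis name and every number below is invented purely for illustration:

```python
# A minimal sketch of Bayesian updating over several rival hypotheses.
# Priors and likelihoods are illustrative numbers, not real data.

def update(priors, likelihoods):
    """Apply Bayes' rule: posterior is proportional to prior times
    likelihood, then normalized so the posteriors sum to one."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Three rival explanations for a surprising observation, with priors
# drawn (as the essay says) from experience:
priors = {"mundane": 0.90, "rare_but_known": 0.0999, "extraordinary": 0.0001}

# How probable the observed data would be under each hypothesis:
likelihoods = {"mundane": 0.01, "rare_but_known": 0.30, "extraordinary": 0.90}

posterior = update(priors, likelihoods)
```

Note what happens: the extraordinary hypothesis fits the data best, yet the rare-but-known hypothesis ends up with most of the posterior probability, because its prior was orders of magnitude higher.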
When claims already comport with our experience, we naturally – and rationally – won’t disdain evidence supporting them. When a claim seems extraordinary to us, though, we trot out the demand “extraordinary claims require extraordinary evidence.”
The seeming paradox – and evidence of our gross irrationality to those trying to convince us – is that we may persist in our disbelief even when given the extraordinary evidence we requested! Life teaches the sad lesson that people’s beliefs won’t necessarily converge when presented with identical evidence, but may, confoundingly, diverge further. Irrational! Identity-protective cognition! Motivated reasoning! Human perversity!
Evidence, or Reporting Errors?
Jaynes notes we rarely experience evidence directly. Instead, we rely on others’ reports of evidence. One possibility lurking in the back of our minds is that those reports contain reporting errors. What if they’re biased, perhaps through cognitive or publication bias? What if their data was (however inadvertently) cherry-picked? Might we suspect extraordinary evidence is only extraordinary because of experimental error? Might we even suspect deliberate deception?
Not only might we, but the more extraordinary reported evidence seems, the more we should suspect reporting error, and perhaps outright chicanery. It’s reasonable to suspect reports that “seem too good to be true.” Even in high-trust environments where suspicion of reporting error is low, when the likelihood of an extraordinary claim strikes us as even lower than the likelihood of reporting error, all that extraordinary evidence supporting the claim does is bolster our suspicion of reporting error, rather than persuading us of the claim.
Jaynes calls reporting error “deception,” even when it’s unintentional. In “Queer uses for probability theory,” a rollicking chapter in applied mathematics (fellow nerds may begin on page 149 of this PDF), Jaynes discusses the famous Soal experiment in ESP and why “this kind of experiment can never convince” him of a person’s telepathic powers
…not because I assert [the probability of telepathic powers] = 0 dogmatically at the start, but because the verifiable facts can be accounted for by many alternative hypotheses, every one of which I consider inherently more plausible… and none of which is ruled out by the information available to me.
Indeed the very evidence which the ESP’ers throw at us to convince us, has the opposite effect on our state of belief; issuing reports of sensational data defeats its own purpose. For if the prior probability of deception is greater than that of ESP, then the more improbable the data are on the null hypothesis of no deception and no ESP, the more strongly we are led to believe, not in ESP, but in deception. For this reason, the advocates of ESP (or any other marvel) will never succeed in persuading scientists that their phenomenon is real, until they learn how to eliminate the possibility of deception in the mind of the reader.
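Jaynes’ point can be put in numbers. In the toy calculation below, every prior and likelihood is invented for illustration: a near-certain “chance” hypothesis, a rare-but-real “deception” hypothesis, and an ESP hypothesis with a vanishingly small prior. The reported hit rate is wildly improbable under chance but fits either ESP or deception equally well:

```python
# A numeric sketch of Jaynes' argument about the Soal experiment.
# All numbers are invented for illustration only.

priors = {
    "chance":    1 - 1e-3 - 1e-12,  # no ESP, no deception
    "deception": 1e-3,              # reporting error or fraud: rare, but real
    "esp":       1e-12,             # extraordinary claim, near-zero prior
}

# The reported hit rate is wildly improbable under pure chance,
# but exactly what either ESP or deception would produce:
likelihoods = {"chance": 1e-10, "deception": 0.5, "esp": 0.5}

unnorm = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnorm.values())
posterior = {h: p / total for h, p in unnorm.items()}
```

The sensational data demolishes the chance hypothesis, but nearly all the freed-up probability flows to deception, not ESP: the two fit the data equally well, so their posterior ratio stays pinned at their prior ratio.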
Brains! Brains! (Zombie Hypotheses)
When extraordinary evidence is cited to support an extraordinary claim, the evidence may inadvertently resurrect a skeptical brain’s “dead hypotheses” instead, “dead” because the brain estimates their likelihood at near zero – but still not as close to zero as the estimate that brain assigns to the extraordinary claim. I call these dead hypotheses “zombie hypotheses,” since they spring back to life in the face of the extraordinary to feast on skeptical brains.
Jaynes observes zombie hypotheses attack even in high-trust environments, and even when the extraordinary claim is true and the evidence supporting it valid. Such zombie attacks have
…made us aware of an important general phenomenon, which has nothing to do with ESP; a person may tell the truth and not be believed, even though the disbelievers are reasoning in a rational, consistent way.
If zombie attacks occur even in high-trust environments among people of similar backgrounds, how much more likely are they in politics, where trust is lower, people’s backgrounds differ, and people routinely suspect the “deception” of not only innocent reporting error, but also of subterfuge?
Perhaps it’s no accident that political discourse often devolves into prompting the other guy to resurrect an army of zombie hypotheses, then concluding from the sheer number of zombies he summons that he must be crazy, flagrantly rationalizing, or both. Else why would he attack our reasoning with so many mythical monsters? That he may also be reasoning correctly, given his experience, and his zombie army might be evidence of this, is almost too horrible to contemplate.
The Ungrateful Undead
“You and what army?” we’re sometimes tempted to demand of opponents. Their zombie army – the army of hypotheses they find more plausible than our claim, no matter how extraordinary our evidence – that’s who. Evidence cannot be interpreted except in light of prior beliefs. And because two people’s prior beliefs may differ
…probability theory appears to allow, in principle, that a single piece of new information D [D for “data”] could have every conceivable effect on their relative states of belief.
Data never absolutely supports or refutes any claim, but only supports or refutes it relative to all the other (“prior”) information we have. When our prior knowledge differs, the same data that supports a claim for one of us may refute it for another – maddeningly, without logical error on either side.
[D]ivergence of opinions is readily explained by probability theory as logic, and that it is to be expected when persons have widely different prior information.
Although we hope – and often find – that the more data we share, the more our beliefs converge, it’s logically possible for data sharing to drive two reasoners’ beliefs farther apart without either erring logically. Now, possible isn’t the same as likely. Many of us suspect this possibility is nonetheless extremely implausible. There’s something too morally lazy – or simply too horrifying – about supposing this possibility manifests often enough in real life to justify much human disagreement.
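The divergence Jaynes describes can be sketched numerically. In this toy model (all likelihood numbers are invented), two reasoners hear the same source assert the same claim; they agree on everything except their prior trust in the source:

```python
# Two reasoners update on the same report; their beliefs diverge.
# Likelihood numbers are invented for illustration.

def posterior_claim(p_claim, p_honest):
    """P(claim | source asserts claim), given independent priors on
    the claim being true and the source being honest."""
    # Probability of hearing the assertion under each (claim, honesty) pair:
    like = {
        (True, True):   0.90,  # honest source asserts a true claim
        (False, True):  0.05,  # honest source rarely asserts a false one
        (True, False):  0.50,  # a deceiver asserts it either way...
        (False, False): 0.90,  # ...but especially when it is false
    }
    num = p_claim * (like[(True, True)] * p_honest
                     + like[(True, False)] * (1 - p_honest))
    den = num + (1 - p_claim) * (like[(False, True)] * p_honest
                                 + like[(False, False)] * (1 - p_honest))
    return num / den

trusting  = posterior_claim(p_claim=0.5, p_honest=0.9)  # belief rises
skeptical = posterior_claim(p_claim=0.5, p_honest=0.1)  # belief falls
```

Both start at even odds on the claim; after the identical report, the trusting reasoner’s belief rises while the skeptical reasoner’s falls, with no logical error on either side.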
Zombie hypotheses would be far less terrifying if they were just bad-faith hypotheses resurrected in order to deny reason. The real horror of zombie hypotheses, especially for political consensus, is not that they’re a defense mechanism against reason, but that they’re baked into what reasoning is.
Is There Hope?
Carl Sagan famously described the world of insufficiently-skeptical brains as demon-haunted. ET Jaynes suggests that skeptical brains, while perhaps not haunted by demons (though I suspect all brains are, more or less), are at least prone to zombie infestations. When mutually-skeptical minds are busy attacking one another with hordes of ungrateful undead, is there any hope? Any way to stop the zombies? Yes, at least sometimes. It was alluded to earlier:
For this reason, the advocates of ESP (or any other marvel) will never succeed in persuading scientists that their phenomenon is real, until they learn how to eliminate the possibility of deception in the mind of the reader.
Jaynes continues, citing a diagram illustrating that
the reader’s total prior probability for deception by all mechanisms must be pushed down below that of ESP.
Pushing a skeptic’s estimate of the total likelihood of “deception by all mechanisms” below his estimate of the likelihood of your claim means establishing trust. Many effective techniques for establishing trust rely on something other than “cold reason.” Some techniques are not even honest (the con in con-man is short for confidence, after all). Rhetoric, for example, need not be used honestly. Rhetoric aims to persuade, and persuasion requires trust; but the very fact that rhetoric builds trust well enough to be useful for building unwarranted trust puts its trust-building power under suspicion. Few humans are immune to the blandishments of rhetoric from someone, but when someone strikes us as untrustworthy enough to begin with, the hypothesis that their rhetoric is a confidence trick is often very undead indeed.
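The threshold Jaynes describes is easy to see numerically. Reusing the invented numbers from the ESP sketch above (all of them illustrative, not real data), hold the evidence fixed and sweep the reader’s prior on deception downward:

```python
# Sketch: hold the evidence fixed and sweep the reader's prior on
# deception downward; watch the posterior on the marvel flip.
# All numbers are invented for illustration.

def posterior_esp(p_deception, p_esp=1e-6):
    """Posterior probability of ESP given data that is wildly improbable
    under chance but fits ESP and deception equally well."""
    priors = {"chance": 1 - p_deception - p_esp,
              "deception": p_deception,
              "esp": p_esp}
    likelihoods = {"chance": 1e-10, "deception": 0.5, "esp": 0.5}
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    return unnorm["esp"] / sum(unnorm.values())

before = posterior_esp(p_deception=1e-3)  # deception still more plausible than ESP
after  = posterior_esp(p_deception=1e-9)  # trust established: deception prior below ESP's
```

The same extraordinary evidence that went nowhere before the trust-building now carries nearly all its weight: once the total deception prior drops below the claim’s prior, the posterior flips decisively toward the claim.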
In today’s political climate, it’s easy to believe that establishing trust often isn’t feasible. And there’s no guarantee that it is – indeed there’s a possibility, however slight, that in some cases it’s logically impossible.
Jaynes is not the first to observe that high trust among scientists is what enables scientists to keep the zombie hordes at bay long enough for sharing data in common to forge knowledge in common. This process goes by a simpler name: learning. Without trust, there’s little hope for even the most rational of arguments to produce learning.
This essay is based on an earlier draft, published just after Halloween last year.