# Halloween Horror: Rhetoric, ESP, and the Other Guy’s Zombie Army

What kind of evidence would it take to persuade you that ESP exists? We skeptics say it would take extraordinary evidence. And yet, were we presented with extraordinary evidence, chances are good we’d disbelieve it. That’s irrational, right?

Not necessarily.

Bayesian Prior-ities

We intuitively form initial estimates of how plausible a claim might be, estimates quantifiable as prior probabilities. When we’re reasoning correctly in a Bayesian fashion, we assign extraordinary claims very low prior probabilities. Not exactly zero, since a prior probability of exactly zero implies that no evidence, however great, could change our mind, and extraordinary shouldn’t mean impossible. But close enough to zero to count as zero for most purposes – although not when we’re asked to re-evaluate the claims themselves.

Classical statistics typically employs a null hypothesis and one alternative hypothesis to evaluate data. The human brain, though, can juggle multiple alternative hypotheses, with experience intuiting each alternative’s prior probability – a measure of its plausibility even before it’s tested against the data collected. Drawing prior probabilities from experience and correctly updating them in light of new evidence is the essence of Bayesian rationality.
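The updating step described above can be sketched in a few lines of Python. This is a toy illustration, not anything from Jaynes: the hypotheses and numbers are invented, and the rule is simply posterior ∝ prior × likelihood, normalized over all competing hypotheses.

```python
# Minimal sketch of Bayesian updating over competing hypotheses.
# All hypothesis names and probabilities are illustrative inventions.

def bayes_update(priors, likelihoods):
    """Apply Bayes' rule: P(H|D) is proportional to P(D|H) * P(H),
    normalized so the posteriors sum to 1."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Two mundane hypotheses for a wet sidewalk:
priors = {"rain": 0.3, "sprinkler": 0.7}
# How probable the observed data is under each hypothesis:
likelihoods = {"rain": 0.9, "sprinkler": 0.2}

posterior = bayes_update(priors, likelihoods)
# rain: 0.27/0.41 ~ 0.659; sprinkler: 0.14/0.41 ~ 0.341
```

The data favors rain, so the posterior shifts toward rain even though the prior favored the sprinkler; with several alternatives instead of two, the same function juggles them all at once.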

When claims already comport with our experience, we naturally – and rationally – won’t disdain evidence supporting them. When a claim seems extraordinary to us, though, we trot out the demand “extraordinary claims require extraordinary evidence.”

The seeming paradox – and evidence of our gross irrationality to those trying to convince us – is that we may persist in our disbelief even when given the extraordinary evidence we requested! Life teaches the sad lesson that people’s beliefs won’t necessarily converge when presented with identical evidence, but may, confoundingly, diverge further. Irrational! Identity-protective cognition! Motivated reasoning! Human perversity!

Not so fast. As physicist and Bayesian scholar ET Jaynes observes, this divergence may be entirely consistent with correct Bayesian reasoning on differing priors.

Evidence, or Reporting Errors?

Jaynes notes we rarely experience evidence directly. Instead, we rely on others’ reports of evidence. One possibility lurking in the back of our minds is that those reports contain reporting errors. What if they’re biased, perhaps through cognitive or publication bias? What if their data was (however inadvertently) cherry-picked? Might we suspect extraordinary evidence is only extraordinary because of experimental error? Might we even suspect deliberate deception?

Not only might we, but the more extraordinary reported evidence seems, the more we should suspect reporting error, and perhaps outright chicanery. It’s reasonable to suspect reports that “seem too good to be true.” Even in high-trust environments where suspicion of reporting error is low, when the likelihood of an extraordinary claim strikes us as even lower than the likelihood of reporting error, all that extraordinary evidence supporting the claim does is bolster our suspicion of reporting error, rather than persuading us of the claim.
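This "extraordinary evidence backfires" effect can be made concrete with invented numbers. In the sketch below (every probability is an illustrative assumption, not data from any experiment), the prior for ESP sits below the prior for reporting error, so data that is astronomically improbable under the null ends up confirming error, not ESP:

```python
# Illustrative priors: ESP is judged even less plausible than
# reporting error, which is itself judged unlikely.
priors = {"null (no ESP, no error)": 0.95,
          "reporting error":         0.0499,
          "ESP":                     0.0001}

# How probable the sensational data is under each hypothesis:
likelihoods = {"null (no ESP, no error)": 1e-6,
               "reporting error":         0.5,
               "ESP":                     0.9}

unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}
# reporting error ~ 0.996, ESP ~ 0.004: the stunning data lands
# almost entirely on the error hypothesis, not on ESP.
```

Make the data even more stunning and the null's share shrinks further, but the spoils still go to whichever surviving hypothesis had the larger prior: reporting error.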

Jaynes calls reporting error “deception,” even when it’s unintentional. In “Queer uses for probability theory,” a rollicking chapter in applied mathematics (fellow nerds may begin on page 149 of this PDF), Jaynes discusses the famous Soal experiment in ESP and why “this kind of experiment can never convince” him of a person’s telepathic powers:

…not because I assert [the probability of telepathic powers] = 0 dogmatically at the start, but because the verifiable facts can be accounted for by many alternative hypotheses, every one of which I consider inherently more plausible… and none of which is ruled out by the information available to me.

Indeed the very evidence which the ESP’ers throw at us to convince us, has the opposite effect on our state of belief; issuing reports of sensational data defeats its own purpose. For if the prior probability of deception is greater than that of ESP, then the more improbable the data are on the null hypothesis of no deception and no ESP, the more strongly we are led to believe, not in ESP, but in deception. For this reason, the advocates of ESP (or any other marvel) will never succeed in persuading scientists that their phenomenon is real, until they learn how to eliminate the possibility of deception in the mind of the reader.

Brains! Brains! (Zombie Hypotheses)

When extraordinary evidence is cited to support an extraordinary claim, the evidence may inadvertently resurrect a skeptical brain’s “dead hypotheses” instead, “dead” because the brain estimates their likelihood at near zero – but still not as close to zero as the estimate that brain assigns to the extraordinary claim. I call these dead hypotheses “zombie hypotheses,” since they spring back to life in the face of the extraordinary to feast on skeptical brains.

Jaynes observes zombie hypotheses attack even in high-trust environments, and even when the extraordinary claim is true and the evidence supporting it valid. Such zombie attacks have

…made us aware of an important general phenomenon, which has nothing to do with ESP; a person may tell the truth and not be believed, even though the disbelievers are reasoning in a rational, consistent way.

If zombie attacks occur even in high-trust environments among people of similar backgrounds, how much more likely are they in politics, where trust is lower, people’s backgrounds differ, and people routinely suspect the “deception” of not only innocent reporting error, but also of subterfuge?

Perhaps it’s no accident that political discourse often devolves into prompting the other guy to resurrect an army of zombie hypotheses, then concluding from the sheer number of zombies he summons that he must be crazy, flagrantly rationalizing, or both. Else why would he attack our reasoning with so many mythical monsters? That he may also be reasoning correctly, given his experience, and his zombie army might be evidence of this, is almost too horrible to contemplate.

“You and what army?” we’re sometimes tempted to demand of opponents. Their zombie army – the army of hypotheses they find more plausible than our claim, no matter how extraordinary our evidence – that’s who. Evidence cannot be interpreted except in light of prior beliefs. And because two people’s prior beliefs may differ

…probability theory appears to allow, in principle, that a single piece of new information D [D for “data”] could have every conceivable effect on their relative states of belief.

Data never absolutely supports or refutes any claim, but only supports or refutes it relative to all the other (“prior”) information we have. When our prior knowledge differs, the same data that supports a claim for one of us may refute it for another – maddeningly, without logical error on either side.

[D]ivergence of opinions is readily explained by probability theory as logic, and that it is to be expected when persons have widely different prior information.

Although we hope – and often find – that the more data we share, the more our beliefs converge, it’s logically possible for data sharing to drive two reasoners’ beliefs farther apart without either erring logically. Now, possible isn’t the same as likely. Many of us suspect this possibility is nonetheless extremely implausible. There’s something too morally lazy – or simply too horrifying – about supposing this possibility manifests often enough in real life to explain much human disagreement.
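One way this divergence can happen without logical error on either side: two people share the same prior on a claim but differ on whether the reporter is honest, and the likelihood of the report depends on both. The following is a hedged sketch with invented numbers, in the spirit of Jaynes’ example but not his actual figures:

```python
# Sketch of rational divergence: the same report moves a trusting
# reader and a skeptical reader in opposite directions, because
# their priors about the reporter differ. All numbers are invented.

def p_claim_given_report(p_honest):
    """Posterior P(claim | a report asserting the claim), starting
    from P(claim) = 0.5. The report's likelihood depends jointly on
    whether the claim is true and whether the reporter is honest."""
    # P(report | claim, honest?): honest reporters track the truth;
    # deceptive ones push the claim hardest when it is false.
    like = {
        (True,  True):  0.90,  # claim true,  reporter honest
        (False, True):  0.10,  # claim false, reporter honest
        (True,  False): 0.50,  # claim true,  reporter deceptive
        (False, False): 0.95,  # claim false, reporter deceptive
    }
    joint = {}
    for claim in (True, False):
        for honest in (True, False):
            prior = 0.5 * (p_honest if honest else 1 - p_honest)
            joint[(claim, honest)] = prior * like[(claim, honest)]
    total = sum(joint.values())
    return (joint[(True, True)] + joint[(True, False)]) / total

trusting = p_claim_given_report(p_honest=0.9)   # ~ 0.823: belief rises
skeptical = p_claim_given_report(p_honest=0.1)  # ~ 0.384: belief falls
```

Both readers start at 0.5 and apply the same Bayes’ rule to the same report; the trusting reader ends up more convinced of the claim, the skeptical reader less, and the gap between them widens.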

Zombie hypotheses would be far less terrifying if they were just bad-faith hypotheses resurrected in order to deny reason. The real horror of zombie hypotheses, especially for political consensus, is not that they’re a defense mechanism against reason, but that they’re baked into what reasoning is.

Is There Hope?

Carl Sagan famously described the world of insufficiently skeptical brains as demon-haunted. ET Jaynes suggests that skeptical brains, while perhaps not haunted by demons (though I suspect all brains are, more or less), are at least prone to zombie infestations. When mutually skeptical minds are busy attacking one another with hordes of ungrateful undead, is there any hope? Any way to stop the zombies? Yes, at least sometimes. It was alluded to earlier:

For this reason, the advocates of ESP (or any other marvel) will never succeed in persuading scientists that their phenomenon is real, until they learn how to eliminate the possibility of deception in the mind of the reader.

Jaynes continues, citing a diagram illustrating that

the reader’s total prior probability for deception by all mechanisms must be pushed down below that of ESP.

Pushing a skeptic’s estimate of the total likelihood of “deception by all mechanisms” below his estimate of the likelihood of your claim means establishing trust. Many effective trust-building techniques rely on something other than “cold reason.” Some are not even honest (the con in con man is short for confidence, after all). Rhetoric, for example, need not be used honestly. Rhetoric aims to persuade, and persuasion requires establishing trust; but the very fact that rhetoric builds trust well enough to build unwarranted trust puts its trust-building power under suspicion. Few humans are immune to the blandishments of rhetoric from someone, but when a speaker strikes us as untrustworthy enough to begin with, the hypothesis that his rhetoric is a confidence trick is often very undead indeed.

In today’s political climate, it’s easy to believe that establishing trust often isn’t feasible. And there’s no guarantee that it is – indeed there’s a possibility, however slight, that it might be logically impossible.

Jaynes is not the first to observe that high trust among scientists is what enables scientists to keep the zombie hordes at bay long enough for sharing data in common to forge knowledge in common. This process goes by a simpler name: learning. Without trust, there’s little hope for even the most rational of arguments to produce learning.

This essay is based on an earlier draft, published just after Halloween last year.

Published in Science & Technology
This post was promoted to the Main Feed by a Ricochet Editor at the recommendation of Ricochet members.

1. Contributor
@Midge

What are the right words to say, then, to somebody who speaks “philosophy of mind” but not cognitive science to differentiate “mind” from “brain”?

Mind and brain are the correct words to use to differentiate mind and brain.

That is about as helpful as, when someone asks you, “When you say ‘cats and dogs’, what do you mean? Please describe to me how folks of your tribe differentiate cats from dogs. It sounds like it might be different from how folks of my tribe differentiate between cats and dogs,” answering with, “Cats and dogs are the correct words to differentiate cats and dogs.” It does not describe the difference.

To someone who either does not know – or may simply disagree with – your idea of the difference between “mind” and “brain”, you might as well be using nonsense words, like “wug” and “zib”:

“When you say wug and zib, what do you mean? How do you distinguish between wug and zib?”
“The correct words distinguishing wug and zib are wug and zib.”
That says nothing about what you mean by using wug and zib. Even something like, “Wugs are yellow, and zibs are purple,” would be more helpful than that.

So far, you have told me you’re pretty confident I’m equivocating my wugs and zibs. Yet you cannot describe to me, in words that I can understand, why you think I perform this equivocation. Since, to me, it looks like I’m not equivocating wug and zib, the most reasonable explanation for your complaint that I can come up with is that your idea of the distinction between wug and zib might simply be different from mine.

2. Contributor
@Midge

A.C. Gleason:
I think there’s something missing in your reply. Are you trying to say that, if the mind is physically an epiphenomenon, then the mind cannot morally be a creature in its own right, and that therefore other moral creatures like free will, responsibility, rationality, knowledge, and thought, cannot really exist?

Epiphenomenalism is the view that mind supervenes on matter in a one-way relation. If this is true, then the best account we can have of free will is so soft that it seems to be the exact same as materialism…

https://plato.stanford.edu/entries/epiphenomenalism/

Gar!

Apparently, epiphenomenon has its plain-English meaning (that is, its Latin root meaning), as a phenomenon “above” another phenomenon, as if arising from it. And that is how I mean the term.

Philosophers, on the other hand, make “epiphenomenalism” mean something very specific – indeed make it mean something so specific that it stops interesting me: A is an epiphenomenon of B if A arises from B, but cannot influence B – except in some rare exceptions where A is permitted, for some reason, to influence B anyhow, but these are supposed to be exceptions, not the rule, and the rule is A cannot influence B.

All right. Well? I don’t care about that. I don’t claim that “mind” cannot influence “brain”, because it seems a pointless thing to say. I mean that the physical manifestation of the mind is in brain activity – that is, physically, the mind arises from brain activity (whatever we might want to say about the mind non-physically is neither here nor there). We can use the word “emergent”, since it seems philosophers have trashed “epiphenomenal”.

Will “emergent” do?

3. Inactive
A.C. Gleason
@aarong3eason

You appear to be the one making the moral judgment that something made manifest by a physical process cannot also have moral import.

Yes I agree. That is exactly what I’m doing. As long as you mean purely or merely physical.

4. Inactive
A.C. Gleason
@aarong3eason

When you keep making this assertion, what are you trying to say? How are you measuring semantic content in order to say that software (for example) has none? For example, the following statement

Semantic content can’t be measured. Semantic content is by definition abstract and immaterial. Software can only contain syntax. That statement only conveys semantic content to a mind. In itself it has merely syntax. Watch this talk by John Searle: https://www.youtube.com/watch?v=rHKwIYsPXLg

Artificial Intelligence simply isn’t intelligent. That’s why it’s called artificial.

5. Inactive
A.C. Gleason
@aarong3eason

suggests that meaning and knowledge could not then be part of the algorithms that make up software, even though algorithms can act as if they know X, or as if Y has meaning. Yet machine learning exists.

Do you have an argument for why machine learning cannot be real learning? Why the meaning a machine learns to assign to Y, or the knowledge it learns of X, cannot be “real”? (And if so, does your argument apply to all biological life except for humans – that is, are insects and dogs permitted to be viewed as performing biological “machine learning” which is nonetheless not “real learning”; while humans are the only creatures capable of “real learning”? – and are therefore somehow not permitted to perform biological machine learning?)

Yes. Searle’s Chinese Room argument. In order to learn something in the relevant sense, comprehension must be involved, and computers cannot comprehend anything because they are not conscious. A computer can learn in an artificial sense – that is, the molecules, wiring, etc. can all change and do different things – but at no point is there anything resembling learning. Okay, it resembles learning, but that’s the problem: it merely resembles learning.

6. Inactive
A.C. Gleason
@aarong3eason

What are the right words to say, then, to somebody who speaks “philosophy of mind” but not cognitive science to differentiate “mind” from “brain”?

Mind and brain are the correct words to use to differentiate mind and brain.

That is about as helpful as, when someone asks you, “When you say ‘cats and dogs’, what do you mean? Please describe to me how folks of your tribe differentiate cats from dogs. It sounds like it might be different from how folks of my tribe differentiate between cats and dogs,” answering with, “Cats and dogs are the correct words to differentiate cats and dogs.” It does not describe the difference.

To someone who either does not know – or may simply disagree with – your idea of the difference between “mind” and “brain”, you might as well be using nonsense words, like “wug” and “zib”:

“When you say wug and zib, what do you mean? How do you distinguish between wug and zib?”
“The correct words distinguishing wug and zib are wug and zib.”
That says nothing about what you mean by using wug and zib. Even something like, “Wugs are yellow, and zibs are purple,” would be more helpful than that.

So far, you have told me you’re pretty confident I’m equivocating my wugs and zibs. Yet you cannot describe to me, in words that I can understand, why you think I perform this equivocation. Since, to me, it looks like I’m not equivocating wug and zib, the most reasonable explanation for your complaint that I can come up with is that your idea of the distinction between wug and zib might simply be different from mine.

So you are telling me that you use the words brain and mind without knowing what they mean? Why don’t you tell me what you think they mean? They refer to very specific things.

7. Inactive
A.C. Gleason
@aarong3eason

All right. Well? I don’t care about that. I don’t claim that “mind” cannot influence “brain”, because it seems a pointless thing to say. I mean that the physical manifestation of the mind is in brain activity – that is, physically, the mind arises from brain activity (whatever we might want to say about the mind non-physically is neither here nor there). We can use the word “emergent”, since it seems philosophers have trashed “epiphenomenal”.

Will “emergent” do?

It’s not pointless at all. It’s very pointed and highly significant. It’s also false and incoherent.

The mind isn’t physically manifested. Mind is non-physical. Brain activity is physical. The mind cannot arise from brain activity.

“Whatever we might want to say about the mind non-physically is neither here nor there” is one helluva parenthetical, because it means you are a materialist. This is exactly why I knew you were equivocating mind and brain: you think they’re identical. And that means you don’t believe in the existence of mind. So you should probably stop using the word and instead refer only to brains. So no, you aren’t an epiphenomenalist. You’re some kind of reductionist functionalist, which just means you believe mind is an illusion and in reality is merely a function. If that’s Bayesian, then I can tell you the moral value of Bayesian is equal to 0. Let me know if you want to do a podcast episode. I think it would be a good discussion, and hopefully we could build a rapport for future discussions.