Uncommon Knowledge: Peter Thiel on the Limits of Artificial Intelligence
In our recent sit-down for Uncommon Knowledge, I asked Peter Thiel — with, you’ll note, an assist from Ricochet’s own anonymous — about the prospects for artificial intelligence. What say you, Ricochet? Is Thiel right about the (perhaps inherent) limitations?
I’m more concerned about the limits of biological intelligence, ackshully.
;-)
I think that AI is way over-hyped. The atheist left needs a messiah. The “Singularity” is their second coming.
The claim that there are inherent limitations to computation not shared by human brains is a statement of faith.
The claim that there are no inherent limitations to computation not shared by human brains is a statement of faith.
Thiel is right about the inherent limitations of binary computers, but not of the quantum computers that are coming.
The singularity is near (2045). Probably before that, actually, given the intelligence of those who voted for Mr Obama.
He’s right that the computers he’s thinking of (the ones you’re viewing this on) will probably not make for a very advanced AI, but that is a limitation of his thinking. Digital, integrated-circuit computers are designed for speed and accuracy, but they are not very good at multitasking, which seems to be integral to an adaptable intelligence.
But we can make other types of computers: more have been theorized, and many more will be devised in the future. Each has its own strengths and weaknesses, and it is theoretically possible that one could be devised that is clearly superior to our neuron-based brains. And then there is the fact that we could simply build large artificial brains out of artificial neurons (which is already being researched).
He’s right that AI has proven to be an infuriatingly deceptive problem (40 years later and we still can’t get robots to see the world as anything but blobs of color) but there are no theoretical limits on the capabilities of AI that we are aware of.
Another thing: My personal opinion is that he is probably right about humans and machines working together, though possibly not quite the way he thinks. It seems to me that before we create an individual artificial intelligence, we will have augmented our own brains through chemical and cybernetic enhancements. Moreover, the individual AI will probably be so similar to what our brains have become that it is more or less just an artificial human (whatever that word may come to mean), at which point we’ve just become an advanced race able to manufacture more of ourselves.
Computers do whatever we program them to do. To expect computers to go beyond what we program them to do is science fiction. Show me a computer/program that has gone beyond human programming.
I think that there is most likely an inherent limitation for computers and artificial intelligence. Human intelligence arises from our ability to manipulate signs through language in a way that is beyond any physical explanation. The semiotic relationship between an object, its sign, and an interpreter is irreducible, and cannot be copied by any number of two-way stimulus/response or energy transfers without a loss of meaning.
Walker Percy explains this in his 1989 National Endowment for the Humanities Jefferson Lecture, “The Fateful Rift: The San Andreas Fault in the Modern Mind.” (A worthwhile 50 minutes, about an hour with Lynne Cheney’s introduction.)
There is something in the ability to name things, the coupling of signs and objects by an interpreter, that humans do easily but that seems beyond the ability of a machine.
That’s certainly my non-expert take. I haven’t heard anyone give a cogent answer to why they think so that doesn’t rely solely on first-year philosophy or theology.
Peter,
Mark Blitz of Claremont McKenna College was recently video interviewed by William Kristol and Kristol put to him essentially the same question you put to Thiel. Kristol’s question/comment was more general: “Brain science is fundamentally going to change things so much that we will all look back a few years from now and think all those people were cute when they studied Plato and Aristotle; of course everything serious now is being determined by modern science — both determined in the sense of intellectually explained, and also determined in the sense of this should be running our lives.”
I admire Thiel a lot. Blitz’s response (below) is far more insightful.
A little about Blitz: he’s one of the most renowned teachers of philosophy at Claremont McKenna; his seminar on Aristotle and Kant was a pivotal intellectual event for me. An authority on Heidegger, his recent essay “Understanding Heidegger on Technology” in The New Atlantis won high praise from Peter Lawler (when are you getting Lawler on UK?).
Horrible man that Heidegger was, he’s nonetheless unparalleled in showing the futility of strong AI. Pace Tom Meyer, propounding that futility is hardly first year philosophy.
(In that regard, people may want to check out Hubert Dreyfus’s famous Berkeley undergraduate seminar on Heidegger. The last 30 minutes of his opening lecture on 08/27/07 — listenable at iTunes here — gives a very clear synopsis of the futility of strong AI. There’s also this book.)
I’ll plug also Blitz’s recently published Hoover Institution book, Conserving Liberty. It’s excellent.
From the 1hr, 14min mark:
Modern natural science is necessarily limited by what it can produce.
It can’t make 2+2 equal 5. It can’t make the just unjust. It can’t make the free unfree. It can’t make the true false. It can’t make the faithless faithful. The limit to what can be produced either by modern science or anything else is what exists at the realm of the ends and the goals and the forms that can be produced.
So however powerful modern science is, it’s limited by the kinds of things that one thinks about primarily not from the mode and mechanism of modern science.
That’s I think the central fact.
It does of course have to be the case that the concrete things modern natural science will do will change many things. It will change to some degree our understanding of equality, as people become more and more alike conceivably at a higher level — or one doesn’t know how this will go. It will change lifespans; it will change, as it’s continuing to change, birth as well as death. It runs a risk, of course, of forgetting the deep inviolability of every human being. One of the beauties of individual natural rights is to try to recapture that again, so that your own freedom is never completely subject to someone else’s judgment about better use for you and others. You run those risks. But nonetheless it’s also the fact that modern science can’t change what is the case ultimately about the better and the worse and the meaning of those kinds of possibilities and choices.
So with that at one end, and the power of modern science at the other end, we’ll have a world of some sort, fifty or a hundred years from now, which will look I’m sure in many ways surprisingly like our own and in some important ways different.
[You think ultimately the human things don’t fundamentally get transformed by the scientific enterprise?]
They can’t be transformed. They can be ignored. They can be downplayed. They can be forgotten. The energy and the effort and the discipline required to understand can also be ignored, downplayed or forgotten. That can happen. But that there is in those cases a lowering and a forgetting of something good — that is also the case. And then there’s certain fundamental facts that we deal with all the time that can’t be changed and will always be among us — some of the basic mathematical and scientific facts among them. But also I would say some of the basic facts of human freedom and happiness and excellence.
Again, the limits of things that human beings can do and the limits of things in the world are not limits that any material change can simply overcome. The fact that we are in an age of extraordinarily powerful possibilities and material change nonetheless will necessarily run up against those limits.
Tom, as I alluded to in my previous comment, it was Heidegger, writing some eighty years ago, who essentially predicted — without a moment’s contemplation of AI — that strong AI was impossible. It was always illusory because you could never technologically create the mystery of human beings’ engagement with being — to use the Heideggerian language. For “being” one might substitute the words “meaningfulness” or “significance.”
Heidegger saw that man can never understand objectively, which is to say scientifically, dasein (or “being there”) or what it means to be in this space experiencing the world. Why?
Because any attempt at objectivity always presupposes the world.
That’s why Heidegger concocts a whole new language (“being there,” “ready-to-hand,” etc.): the point being, you’re always living in the world you’re in, so I’m using this computer without ever thinking about the keyboard because what is primary is the world in which the computer exists. The “being” of the computer therefore retreats, withdraws. Heidegger’s classic example: you use the hammer, you don’t pay attention till the hammer breaks — then you’re aware of the hammer’s being.
Edward Feser nicely zeroes in on this problem — the attempt at objectivity always presupposes the world — in a couple of blog posts:
From “Computers, minds and Aristotle:”
And here, from Logorrhea in the Cell, drawing a certain parallel between AI and intelligent design:
Wait, you’re a Stag, too? If so, crescit cum commercio civitas, my friend.
Robert, I confess I don’t fully follow this, but I also don’t find it terribly persuasive. Nature found a way to make our minds capable of doing all this through — so far as we have evidence — natural processes. There’s little reason I see to suppose it can’t happen electronically.
Perhaps we’ll find out.
Not a Stag. But did take a few graduate courses out there at CGU.
If you don’t find that terribly persuasive, may I again suggest the last 30 minutes or so of Dreyfus’s course (linked above), at least as a hook; it gives one a better lay of the land and a handle on some of the terminology.
I don’t recall who said it or where, but I read an argument that real intelligence requires emotion. Without emotion, a processor has no reason to actively do anything it hasn’t been programmed to do. Star Trek illustrated this with the non sequitur of Data, a robot devoid of emotion who really wants to feel emotion. How does that make sense?
A corollary to this is that I think machine intelligence (that is, non-living intelligence) is impossible. An entity that is actively processing information in ways that it was not programmed to would have to be considered alive. That’s not to say that artificial intelligence is impossible, only that artificial intelligence would have to be alive. There’s no reason to assume we can’t create life using technology. Living things create other living things all the time.
Frankly, trivially: Data has a concrete goal to be able to interact effectively with humans (and other species), and is able to learn what he needs in order to do so. Of course, it’s postulated that emotions can’t be learned, otherwise the fictional dramatic plot device doesn’t work (cf. Spock being half human, half Vulcan and having homicidal reproductive urges every seven years at “Amok time,” the sixties version of this conundrum). But we know from sociopaths that emotions can be learned—well enough to pass for long periods of time. If you want fiction, look no further than the faceless, emotionless killers of the slasher flicks. It’s not Michael Myers you need to be afraid of; it’s John List.
Well, we have “entities actively processing information in ways they were not programmed to,” and have for decades. For what it’s worth, you aren’t alone in defining life as information processing. Otherwise opposing figures such as Richard Dawkins and Frank Tipler do, too. Now what?
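To make that last point concrete, here is a minimal sketch, in plain Python (the toy task and all names are mine, not from the thread), of a program whose final decision rule is induced from examples rather than written out by the programmer — the textbook perceptron learning rule applied to the logical OR function:

```python
# A perceptron learns the OR function from labeled examples.
# The programmer writes the learning procedure; the decision
# rule itself emerges from the data.

examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]  # weights: the program starts knowing nothing about OR
b = 0.0         # bias term

def predict(x):
    """Classify an input with the current learned weights."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Repeatedly nudge the weights toward whatever reduces the error.
for _ in range(20):  # a few passes over the data are enough here
    for x, target in examples:
        error = target - predict(x)
        w[0] += error * x[0]
        w[1] += error * x[1]
        b += error

print([predict(x) for x, _ in examples])  # matches the targets: [0, 1, 1, 1]
```

Nowhere in the source is the OR rule spelled out; the loop arrives at weights that implement it. Whether that counts as "going beyond programming" or is merely programming at one remove is, of course, exactly what this thread is arguing about.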