  1. Misthiocracy Member
    Misthiocracy
    @Misthiocracy

    I’m more concerned about the limits of biological intelligence, ackshully.

    ;-)

    • #1
  2. Bryan G. Stephens Thatcher
    Bryan G. Stephens
    @BryanGStephens

    I think that AI is way over-hyped. The atheist left needs a messiah. The “Singularity” is their second coming.

    • #2
  3. Gödel's Ghost Inactive
    Gödel's Ghost
    @GreatGhostofGodel

    The claim that there are inherent limitations to computation not shared by human brains is a statement of faith.

    The claim that there are no inherent limitations to computation not shared by human brains is a statement of faith.
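
    For concreteness: the best-known provably inherent limit on computation is Turing's halting problem, and whether human brains share it is exactly the part that rests on faith. A minimal Python sketch of the diagonal argument (the function names are illustrative, not any library's API):

        # Sketch of Turing's diagonal argument: no total, correct halts() can exist.

        def halts(program, argument):
            """Hypothetical oracle: True iff program(argument) eventually halts.
            Assumed to exist only for the sake of contradiction."""
            raise NotImplementedError

        def diagonal(program):
            # Do the opposite of whatever the oracle predicts about program(program).
            if halts(program, program):
                while True:          # loop forever if the oracle says "halts"
                    pass
            return "halted"          # halt at once if the oracle says "loops"

        # diagonal(diagonal) contradicts the oracle either way:
        # if halts(diagonal, diagonal) were True, diagonal(diagonal) would loop forever;
        # if it were False, diagonal(diagonal) would halt. So halts() cannot be written.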

    • #3
  4. user_124695 Inactive
    user_124695
    @DavidWilliamson

    Thiel is right about the inherent limitations of binary computers, but not of the quantum computers that are coming.

    The singularity is near (2045). Probably before that, actually, given the intelligence of those who voted for Mr Obama.

    • #4
  5. Guy Incognito Member
    Guy Incognito
    @

    He’s right that the computers he’s thinking of (the ones you’re viewing this on) will probably not make for a very advanced AI, but that is a limitation of his thinking. Digital, integrated-circuit computers are designed for speed and accuracy, but they are not very good at multitasking, which seems to be integral to an adaptable intelligence.

    But we can make other types of computers; more have been theorized, and many more will be devised in the future. Each has its own strengths and weaknesses, and it is theoretically possible one could be devised that is clearly superior to our neuron-based brains. And then there is the fact that we could simply build large, artificial brains out of artificial neurons (which is already being researched).
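
    To make the “artificial neurons” idea concrete, here is a minimal Python sketch of a single one; the weights and numbers are purely illustrative and not drawn from any actual research project:

        import math

        def neuron(inputs, weights, bias):
            """One artificial neuron: a weighted sum of inputs pushed through a sigmoid."""
            total = sum(x * w for x, w in zip(inputs, weights)) + bias
            return 1.0 / (1.0 + math.exp(-total))   # output squashed into (0, 1)

        # With hand-picked weights this neuron behaves like a soft AND gate.
        print(neuron([1.0, 1.0], weights=[4.0, 4.0], bias=-6.0))  # roughly 0.88
        print(neuron([1.0, 0.0], weights=[4.0, 4.0], bias=-6.0))  # roughly 0.12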

    He’s right that AI has proven to be an infuriatingly deceptive problem (40 years later and we still can’t get robots to see the world as anything but blobs of color) but there are no theoretical limits on the capabilities of AI that we are aware of.
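
    As a rough illustration of the “blobs of color” point: a naive vision pass does little more than group neighboring pixels of similar color, with no notion of what the groups are. A toy Python sketch, where the tiny 4x4 “image” is made up for the example:

        def color_blobs(image, tol=30):
            """Group pixels into connected blobs of similar RGB color (simple flood fill)."""
            h, w = len(image), len(image[0])
            labels = [[None] * w for _ in range(h)]
            blob = 0
            for sy in range(h):
                for sx in range(w):
                    if labels[sy][sx] is not None:
                        continue
                    blob += 1
                    stack = [(sy, sx)]
                    while stack:
                        y, x = stack.pop()
                        if labels[y][x] is not None:
                            continue
                        labels[y][x] = blob
                        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                            if 0 <= ny < h and 0 <= nx < w and labels[ny][nx] is None:
                                if all(abs(a - b) <= tol
                                       for a, b in zip(image[y][x], image[ny][nx])):
                                    stack.append((ny, nx))
            return labels

        red, blue = (250, 20, 20), (20, 20, 250)
        image = [[red, red, blue, blue]] * 2 + [[blue, blue, blue, blue]] * 2
        for row in color_blobs(image):
            print(row)   # two blobs: the red patch and the blue background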

    • #5
  6. Guy Incognito Member
    Guy Incognito
    @

    Another thing: My personal opinion is that he is probably right about humans and machines working together, though possibly not quite the way he thinks.  It seems to me that before we create an individual artificial intelligence, we will have augmented our own brains through chemical and cybernetic enhancements.  Moreover, the individual AI will probably be so similar to what our brains have become that it is more or less just an artificial human (whatever that word may come to mean), at which point we’ve just become an advanced race able to manufacture more of ourselves.

    • #6
  7. 3rd angle projection Member
    3rd angle projection
    @

    Computers do whatever we program them to do. To expect computers to go beyond what we program them to do is science fiction. Show me a computer or program that has gone beyond its human programming.
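
    One concrete way to frame the question, as a hedged Python sketch: in the program below, only the learning rule is written by hand; the weights that end up implementing logical OR are found by the program itself. Whether that counts as going beyond its programming is exactly the point in dispute (a point taken up in #17 below).

        import random

        examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # logical OR

        # Start from random weights; nobody tells the program the right ones.
        w = [random.uniform(-1, 1), random.uniform(-1, 1)]
        b = random.uniform(-1, 1)

        for _ in range(100):                      # perceptron learning rule
            for (x1, x2), target in examples:
                out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
                error = target - out
                w[0] += 0.1 * error * x1
                w[1] += 0.1 * error * x2
                b += 0.1 * error

        for (x1, x2), target in examples:
            print((x1, x2), "->", 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0)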

    • #7
  8. Muleskinner Member
    Muleskinner
    @Muleskinner

    I think that there is most likely an inherent limitation for computers and artificial intelligence. Human intelligence arises from our ability to manipulate signs through language in a way that is beyond any physical explanation. The semiotic relationship between an object, its sign, and an interpreter is irreducible, and cannot be copied by any number of two-way stimulus/response or energy transfers without a loss of meaning.

    Walker Percy explains this in his 1989 National Endowment for the Humanities Jefferson Lecture, “The Fateful Rift: The San Andreas Fault in the Modern Mind.” (A worthwhile 50 minutes, about an hour with Lynne Cheney’s introduction.)

    There is something in the ability to name things, coupling signs and objects by an interpreter, that humans do easily, that seems beyond the ability of a machine.

    • #8
  9. Tom Meyer Member
    Tom Meyer
    @tommeyer

    Gödel's Ghost: The claim that there are inherent limitations to computation not shared by human brains is a statement of faith.

    That’s certainly my non-expert take. I haven’t heard anyone give a cogent answer to why they think so that doesn’t rely solely on first-year philosophy or theology.

    • #9
  10. Robert Lux Inactive
    Robert Lux
    @RobertLux

    Peter,

    Mark Blitz of Claremont McKenna College was recently interviewed on video by William Kristol, who put to him essentially the same question you put to Thiel. Kristol’s question/comment was more general: “Brain science is fundamentally going to change things so much that we will all look back a few years from now and think all those people were cute when they studied Plato and Aristotle; of course everything serious now is being determined by modern science — both determined in the sense of intellectually explained, and also determined in the sense of this should be running our lives.”

    I admire Thiel a lot. Blitz’s response (below) is far more insightful.

    A little about Blitz: he’s one of the most renowned teachers of philosophy at Claremont McKenna; his seminar on Aristotle and Kant was a pivotal intellectual event for me. An authority on Heidegger, his recent essay “Understanding Heidegger on Technology” in The New Atlantis won high praise from Peter Lawler (when are you getting Lawler on UK?).

    Horrible man that Heidegger was, he’s nonetheless unparalleled in showing the futility of strong AI. Pace Tom Meyer, propounding that futility is hardly first-year philosophy.

    (In that regard, people may want to check out Hubert Dreyfus’s famous Berkeley undergraduate seminar on Heidegger. The last 30 minutes of his opening lecture on 08/27/07 — listenable at iTunes here — gives a very clear synopsis of the futility of strong AI. There’s also this book.)

    I’ll plug also Blitz’s recently published Hoover Institution book, Conserving Liberty. It’s excellent. 

    From the 1hr, 14min mark:

    Modern natural science is necessarily limited by what it can produce.

    It can’t make 2+2 equal 5. It can’t make the just unjust. It can’t make the free unfree. It can’t make the true false. It can’t make the faithless faithful. The limit to what can be produced either by modern science or anything else is what exists at the realm of the ends and the goals and the forms that can be produced.

    So however powerful modern science is, it’s limited by the kinds of things that one thinks about primarily not from the mode and mechanism of modern science.

    That’s I think the central fact.

    It does of course have to be the case that the concrete things modern natural science will do will change many things. It will change to some degree our understanding of equality, as people become more and more alike conceivably at a higher level — or one doesn’t know how this will go. It will change lifespans; it will change, as it’s continuing to change, birth as well as death. It runs a risk, of course, of forgetting the deep inviolability of every human being. One of the beauties of individual natural rights is to try to recapture that again, so that your own freedom is never completely subject to someone else’s judgment about better use for you and others. You run those risks. But nonetheless it’s also the fact that modern science can’t change what is the case ultimately about the better and the worse and the meaning of those kinds of possibilities and choices.

    So with that at one end, and the power of modern science at the other end, we’ll have a world of some sort, fifty or a hundred years from now, which will look I’m sure in many ways surprisingly like our own and in some important ways different.

    [You think ultimately the human things don’t fundamentally get transformed by the scientific enterprise?]

    They can’t be transformed. They can be ignored. They can be downplayed. They can be forgotten. The energy and the effort and the discipline required to understand can also be ignored, downplayed or forgotten. That can happen. But that there is in those cases a lowering and a forgetting of something good — that is also the case. And then there’s certain fundamental facts that we deal with all the time that can’t be changed and will always be among us — some of the basic mathematical and scientific facts among them. But also I would say some of the basic facts of human freedom and happiness and excellence.

    Again, the limits of things that human beings can do and the limits of things in the world are not limits that any material change can simply overcome. The fact that we are in an age of extraordinarily powerful possibilities and material change nonetheless will necessarily run up against those limits.

    • #10
  12. Robert Lux Inactive
    Robert Lux
    @RobertLux

    Tom Meyer, Ed.:

    The claim that there are inherent limitations to computation not shared by human brains is a statement of faith.

    That’s certainly my non-expert take. I haven’t heard anyone give a cogent answer to why they think so that doesn’t rely solely on first-year philosophy or theology.

    Tom- as I alluded to in my previous statement, it was Heidegger writing some eighty years ago who essentially predicted — without a moment’s contemplation of AI — that strong AI was impossible. It was always illusory because you could never technologically create the mystery of human beings’ engagement with being — to use the Heideggerian language. For “being” one might substitute the words “meaningfulness” or “significance.”

    Heidegger saw that man can never understand objectively, which is to say scientifically, Dasein (or “being there”), or what it means to be in this space experiencing the world. Why?

    Because any attempt at objectivity always presupposes the world.

    That’s why Heidegger concocts a whole new language (“being there,” “ready-to-hand,” etc.): the point being, you’re always living in the world you’re in, so I’m using this computer without ever thinking about the keyboard because what is primary is the world in which the computer exists. The “being” of the computer therefore retreats, withdraws. Heidegger’s classic example: you use the hammer, you don’t pay attention till the hammer breaks — then you’re aware of the hammer’s being.

    Edward Feser nicely zeroes in on this problem — the attempt at objectivity always presupposes the world — in a couple of blog posts:

    From “Computers, minds and Aristotle”:

    Hubert Dreyfus, summarizing themes that have long characterized his work, also criticizes “Descartes’ understanding of the world as a set of meaningless facts to which the mind assigned what Descartes called values” (p. 80). Attempts to find some computational mechanism by means of which the brain assigns significance or meaning to the world always end up surreptitiously presupposing significance or meaning, and attempts to avoid this result tend to lead to a vicious regress. (This, as Dreyfus argues, is what ultimately underlies the well-known “frame problem” in Artificial Intelligence research and the “binding problem” in neuroscience.) As is well-known, Dreyfus makes good use of the work of writers like Heidegger and Merleau-Ponty in criticizing AI, and in particular the notion that we can make sense of the idea of a world inherently devoid of significance for us. But this phenomenological point does not answer the metaphysical question of how and why the world, and ourselves as part of the world, have significance or meaning in the first place. For that – as I argue in The Last Superstition – we need to turn to the Aristotelian tradition, to the concepts of formal and final causation rejection of which set modern thought, and modern civilization, on its long intellectual and moral downward slide.”

    And here, from “Logorrhea in the Cell,” drawing a certain parallel between AI and intelligent design:

    Vincent’s attempt to wriggle out of the problem context poses for his position is like certain point-missing attempts to solve the “commonsense knowledge problem” in AI. As Hubert Dreyfus argues, it makes no sense to think that intelligence can be reduced to a set of explicitly formulated rules and representations, because there are always various context-dependent [i.e., world dependent] ways to interpret the rules and representations. To say “Oh, we’ll just put the ‘right’ interpretation into the rules and representations” completely misses the point, since it just adds further rules and representations that are themselves subject to alternative context-dependent interpretations.

    Vincent is doing something similar when he tries to come up with these goofy examples of really long messages written in the cell. It completely misses the point, because that’s just further stuff the import of which depends on a larger context. It also completely misses the point to shout “Skepticism!”, just as an AI defender would be completely missing the point if he accused Dreyfus of being a skeptic. There’s nothing skeptical about it. We can know what the context is and thus we can know what the right interpretation is; we just can’t know the right interpretation apart from all context. 

    • #12
  13. Tom Meyer Member
    Tom Meyer
    @tommeyer

    Robert Lux: A little about Blitz: he’s one of the most renowned teachers of philosophy at Claremont McKenna; his seminar on Aristotle and Kant was a pivotal intellectual event for me. An authority on Heidegger, his recent essay “Understanding Heidegger on Technology” in The New Atlantis won high praise from Peter Lawler (when are you getting Lawler on UK?).

    Wait, you’re a Stag, too? If so, crescit cum commercio civitas, my friend.

    • #13
  14. Tom Meyer Member
    Tom Meyer
    @tommeyer

    Robert, I confess I don’t fully follow this, but I also don’t find it terribly persuasive. Nature found a way to make our minds capable of doing all this through — so far as we have evidence — natural processes. There’s little reason I see to suppose it can’t happen electronically.

    Perhaps we’ll find out.

    • #14
  15. Robert Lux Inactive
    Robert Lux
    @RobertLux

    Tom Meyer, Ed.:

    Robert Lux: A little about Blitz: he’s one of the most renowned teachers of philosophy at Claremont McKenna; his seminar on Aristotle and Kant was a pivotal intellectual event for me. An authority on Heidegger, his recent essay “Understanding Heidegger on Technology” in The New Atlantis won high praise from Peter Lawler (when are you getting Lawler on UK?).

    Wait, you’re a Stag, too? If so, crescit cum commercio civitas, my friend.

    Not a Stag. But did take a few graduate courses out there at CGU.

    If you don’t find that terribly persuasive, may I again suggest the last 30 minutes or so of Dreyfus’s course (linked above), at least as a fish-hook; one gets a better lay of the land and a handle on some of the terminology.

    • #15
  16. user_1030767 Inactive
    user_1030767
    @TheQuestion

    I don’t recall who said it or where, but I read an argument that real intelligence requires emotion. Without emotion, a processor has no reason to actively do anything it hasn’t been programmed to do. Star Trek illustrated this with the non sequitur of Data, a robot devoid of emotion, who really wants to feel emotion. How does that make sense?

    A corollary to this is that I think machine intelligence, that is, non-living intelligence, is impossible. An entity that is actively processing information in ways that it was not programmed to would have to be considered alive. That’s not to say that artificial intelligence is impossible, only that artificial intelligence would have to be alive. There’s no reason to assume we can’t create life using technology. Living things create other living things all the time.

    • #16
  17. Gödel's Ghost Inactive
    Gödel's Ghost
    @GreatGhostofGodel

    Michael Sanregret: I don’t recall who said it or where, but I read an argument that real intelligence requires emotion. Without emotion, a processor has no reason to actively do anything it hasn’t been programmed to do. Star Trek illustrated this with the non sequitur of Data, a robot devoid of emotion, who really wants to feel emotion. How does that make sense?

    Frankly, trivially: Data has a concrete goal to be able to interact effectively with humans (and other species), and is able to learn what he needs in order to do so. Of course, it’s postulated that emotions can’t be learned, otherwise the fictional dramatic plot device doesn’t work (cf. Spock being half human, half Vulcan and having homicidal reproductive urges every seven years at “Amok Time,” the sixties version of this conundrum). But we know from sociopaths that emotions can be learned — well enough to pass for long periods of time. If you want fiction, look no further than the faceless, emotionless killers of the slasher flicks. It’s not Michael Myers you need to be afraid of; it’s John List.

    A corollary to this is that I think machine intelligence, that is, non-living intelligence, is impossible. An entity that is actively processing information in ways that it was not programmed to would have to be considered alive. That’s not to say that artificial intelligence is impossible, only that artificial intelligence would have to be alive.

    Well, we have “entities actively processing information in ways they were not programmed to,” and have for decades. For what it’s worth, you aren’t alone in defining life as information processing. Otherwise opposing figures such as Richard Dawkins and Frank Tipler do, too. Now what?
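
    Since Dawkins comes up: his well-known “weasel” demonstration is one decades-old example often cited in this connection. Only the scoring rule is programmed; the sequence of intermediate strings the program passes through was not written by anyone. A minimal sketch in Python, with illustrative parameters:

        import random

        TARGET = "METHINKS IT IS LIKE A WEASEL"
        ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

        def score(candidate):
            """Number of characters already matching the target."""
            return sum(a == b for a, b in zip(candidate, TARGET))

        current = "".join(random.choice(ALPHABET) for _ in TARGET)
        generation = 0
        while current != TARGET:
            generation += 1
            # Copy the current string with occasional random mutations,
            # then keep the best of the copies (or the parent itself).
            children = ["".join(c if random.random() > 0.05 else random.choice(ALPHABET)
                                for c in current)
                        for _ in range(100)]
            current = max(children + [current], key=score)

        print("reached the target in", generation, "generations")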

    • #17