More Fuel for the Self-Driving Car Fire

 

Just came across this article this morning. I’ll highlight one paragraph and add emphasis:

The linked report suggests that the artificial intelligence may never be “intelligent” enough to do what human beings are generally capable of doing. (Well, not all of us, of course. A couple of days driving in Florida will tell you that.) That may be true in some ways, but more than raw “intelligence,” the AI systems do not have human intuition. They aren’t as intuitive as humans in terms of trying to guess what the rest of the unpredictable humans will do at any given moment. In some of those cases, it’s not a question of the car not realizing it needs to do something, but rather making a correct guess about what specific action is required.

I’ve made this argument before, that humans are better at winging it than AI — so far.

Admiral Rickover was pretty much against using computers to run the engine room, with a couple of exceptions: any task deemed too monotonous for a human, and any task a computer could perform more quickly. Even so, these weren’t really computers in the AI sense, but rather electronic sensors with programming to handle the task at hand. I’m sure modern submarine engine rooms have more computerization nowadays, but I’ll bet the crew can easily take over if the machines fail . . .

Published in Technology
  1. Barfly Member
    Barfly
    @Barfly

    Henry Racette (View Comment):

    Barfly (View Comment):
    I think the ability to learn is distinguishing.

    Machines learn now, don’t they?

    No, not like intelligences. They simulate it.

    Also, synthetic intelligence will be real intelligence. We’ve been over that one several times. Henry, you are being selectively pedantic again.

    • #121
  2. Percival Thatcher
    Percival
    @Percival

    Barfly (View Comment):

    Django (View Comment):

    Can anyone make sense of this? I’m not sure I can.

    Synthetic intelligence (SI) is an alternative/opposite term for artificial intelligence emphasizing that the intelligence of machines need not be an imitation or in any way artificial; it can be a genuine form of intelligence.[1][2] John Haugeland proposes an analogy with simulated diamonds and synthetic diamonds—only the synthetic diamond is truly a diamond.[1] Synthetic means that which is produced by synthesis, combining parts to form a whole; colloquially, a human-made version of that which has arisen naturally. A “synthetic intelligence” would therefore be or appear human-made, but not a simulation.

    Haugeland and I are using the term identically. An artificial intelligence is something that looks like an intelligence but isn’t. A real intelligence is real regardless of what it’s made of.

    Even if it’s only Keith Olbermann?

    • #122
  3. Django Member
    Django
    @Django

    Barfly (View Comment):

    Henry Racette (View Comment):

    Barfly (View Comment):
    I think the ability to learn is distinguishing.

    Machines learn now, don’t they?

    No, not like intelligences. They simulate it.

    Also, synthetic intelligence will be real intelligence. We’ve been over that one several times. Henry, you are being selectively pedantic again.

    If you put a gun to my head and forced me to come up with a definition of real intelligence, I’d say the ability to deal with abstract concepts. Take numbers, arguably the most abstract of all abstractions. If you’ve read or seen demonstrations of how NAND gates can be cascaded to perform arithmetic manipulations, you probably suspect the machine has no idea what it’s doing and has no self-awareness. It can be much faster and more precise than a human at performing those manipulations, but that’s no different conceptually from a self-powered high-speed abacus doing the same thing. Now if humans develop, or an advanced AI develops, a machine that appears to manipulate abstract concepts and can use human language to communicate its ideas, we almost have to ask, “Is this real intelligence, or just an unbelievably advanced simulation?”
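
    For concreteness, a minimal Python sketch of that cascade (the gate helpers and the four-bit width are just for illustration, not any particular hardware):

        # Everything below is built from a single NAND primitive. The "machine"
        # adds numbers correctly while having no notion that numbers exist.
        def nand(a, b):
            return 1 - (a & b)

        def not_(a):
            return nand(a, a)

        def and_(a, b):
            return not_(nand(a, b))

        def or_(a, b):
            return nand(not_(a), not_(b))

        def xor(a, b):
            return and_(or_(a, b), nand(a, b))

        def full_adder(a, b, carry):
            s = xor(xor(a, b), carry)
            c = or_(and_(a, b), and_(carry, xor(a, b)))
            return s, c

        def add4(x, y):
            # x and y are four bits each, least significant bit first
            carry, out = 0, []
            for a, b in zip(x, y):
                s, carry = full_adder(a, b, carry)
                out.append(s)
            return out, carry

        print(add4([1, 0, 1, 0], [1, 1, 0, 0]))  # 5 + 3 -> ([0, 0, 0, 1], 0), i.e. 8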

    That’s where we will get the philosophers involved, and contrary to what Feynman said, I don’t believe they will be “on the outside asking stupid questions”. 

    • #123
  4. Barfly Member
    Barfly
    @Barfly

    Django (View Comment):
    If you put a gun to my head and forced me to come up with a definition of real intelligence, I’d say the ability to deal with abstract concepts

    Way too restrictive. It’s prediction, all the way down at every level. Even movement is prediction, for the nervous system. Not even all humans can do abstraction.

    • #124
  5. Django Member
    Django
    @Django

    Barfly (View Comment):

    Django (View Comment):
    If you put a gun to my head and forced me to come up with a definition of real intelligence, I’d say the ability to deal with abstract concepts

    Way too restrictive. It’s prediction, all the way down at every level. Even movement is prediction, for the nervous system. Not even all humans can do abstraction.

    What is a Kalman filter except a predictive function? In fact, the German physicist Sabine Hossenfelder said everything is differential equations. Take out the ability to deal with abstract concepts and all that is left is a giant calculator.
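
    For anyone who hasn’t met one, a one-dimensional Kalman filter really is just predict-then-correct in a loop. A toy sketch, with made-up noise values:

        # Toy 1-D Kalman filter: predict, then correct toward the measurement.
        # q and r (process and measurement noise variances) are invented here.
        def kalman_1d(measurements, q=1e-3, r=0.25):
            x, p = 0.0, 1.0  # state estimate and its variance
            estimates = []
            for z in measurements:
                p = p + q                # predict: uncertainty grows
                k = p / (p + r)          # Kalman gain
                x = x + k * (z - x)      # correct toward the measurement
                p = (1 - k) * p
                estimates.append(x)
            return estimates

        print(kalman_1d([1.2, 0.9, 1.1, 1.0, 0.95]))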

    • #125
  6. RufusRJones Member
    RufusRJones
    @RufusRJones

    You guys are really smart.

    • #126
  7. DaveSchmidt Coolidge
    DaveSchmidt
    @DaveSchmidt

    RufusRJones (View Comment):

    You guys are really smart.

    Finally, a sentence I understand.

    • #127
  8. Django Member
    Django
    @Django

    RufusRJones (View Comment):

    You guys are really smart.

    Fooled another one!  

    • #128
  9. Z in MT Member
    Z in MT
    @ZinMT

    I work for a self-driving car company. It is happening: both Waymo and Cruise have commercial services now, and we’ll see commercial driverless 18-wheelers in the next two years in limited areas. It will take longer to expand than people expect, starting in TX and AZ and expanding out from there.

    • #129
  10. Brian Clendinen Inactive
    Brian Clendinen
    @BrianClendinen

    Z in MT (View Comment):

    I work for a self-driving car company. It is happening: both Waymo and Cruise have commercial services now, and we’ll see commercial driverless 18-wheelers in the next two years in limited areas. It will take longer to expand than people expect, starting in TX and AZ and expanding out from there.

    It’s just going to affect long haulers, which is not a bad thing; most truck drivers try to get away from those. We’re still going to need loading and unloading, and it will not affect flatbeds for a while. There are too many safety issues with all the different types of loads the driver is responsible for on flatbeds.

    My question is when AI-driven trains will happen. To me, that should have been the first thing to be automated. But when you have monopolies, they don’t have a lot of incentive to reduce cost.

    • #130
  11. Phil Turmel Inactive
    Phil Turmel
    @PhilTurmel

    Barfly (View Comment):

    Henry Racette (View Comment):

    Barfly (View Comment):
    I think the ability to learn is distinguishing.

    Machines learn now, don’t they?

    No, not like intelligences. They simulate it.

    Also, synthetic intelligence will be real intelligence. We’ve been over that one several times. Henry, you are being selectively pedantic again.

    No, he’s calling you out for presenting a tautology.  No better than saying we’ll know real intelligence when we see it.

    If you want to be taken seriously, present a testable definition that would distinguish between artificial intelligence and real intelligence, whether “real” is natural or synthetic.

    • #131
  12. Matt Bartle Member
    Matt Bartle
    @MattBartle

    FWIW, a company called AutoX claims to have 1000 self-driving taxis operating in China. 

    https://www.autox.ai/en/index.html

    https://www.youtube.com/c/AutoXai

     

    • #132
  13. DaveSchmidt Coolidge
    DaveSchmidt
    @DaveSchmidt

    Matt Bartle (View Comment):

    FWIW, a company called AutoX claims to have 1000 self-driving taxis operating in China.

    https://www.autox.ai/en/index.html

    https://www.youtube.com/c/AutoXai

     

    Probably owned by the army. 

    • #133
  14. Stad Coolidge
    Stad
    @Stad

    I hope this post wins the @peterrobinson “Start the Conversation” award . . .

    • #134
  15. Barfly Member
    Barfly
    @Barfly

    Stad (View Comment):

    I hope this post wins the @peterrobinson “Start the Conversation” award . . .

    It’s a difficult topic for a thread. I’d find it easier if more people knew something about the differences between neural networks and biological brains. I’ll say “neural networks are nothing like brains” and then someone with no skin in the game wants a conclusive explanation they can understand, all in a comment block.

    Can I start with Hebbian learning, or do I have to go all the way back to calcium channels? Do they know about cortical layers, and inhibitory synapses, or will it take me two pages before I can make the case that L.IV holds a competition between thalamic inputs? By the time I get to concept implementation in layers 2 and 3, Ron DeSantis will be President.
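
    The core Hebbian rule does at least fit in a comment block. A toy sketch with arbitrary rates, nothing like a real cortical circuit:

        import numpy as np

        # Toy Hebbian rule: "cells that fire together wire together."
        # Learning rate and decay are arbitrary illustration values.
        rng = np.random.default_rng(0)
        n = 5
        w = np.zeros((n, n))
        rate, decay = 0.1, 0.01

        for _ in range(1000):
            x = (rng.random(n) < 0.3).astype(float)  # random binary activity
            w += rate * np.outer(x, x)               # strengthen co-active pairs
            w -= decay * w                           # decay keeps weights bounded
        np.fill_diagonal(w, 0.0)                     # no self-connections
        print(w.round(2))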

    I suggest that anyone who’s interested but doesn’t know any neuroscience read On Intelligence. It’s a good and fast introduction. In places it’s more of a case for the author’s theories than an instructional text, but mostly the book is a neuroscience backgrounder. It’s focused on what the reader needs before he can understand the author’s theories.

    On Intelligence would be a great choice for a book review post.

    • #135
  16. kedavis Coolidge
    kedavis
    @kedavis

    Stad (View Comment):

    I hope this post wins the @peterrobinson “Start the Conversation” award . . .

    Speaking of which, I wonder what happened to the James Lileks Member Post Of The Week?

    • #136
  17. Phil Turmel Inactive
    Phil Turmel
    @PhilTurmel

    Barfly (View Comment):

    Stad (View Comment):

    I hope this post wins the @peterrobinson “Start the Conversation” award . . .

    It’s a difficult topic for a thread. I’d find it easier if more people knew something about the differences between neural networks and biological brains. I’ll say “neural networks are nothing like brains” and then someone with no skin in the game wants a conclusive explanation they can understand, all in a comment block.

    Can I start with Hebbian learning, or do I have to go all the way back to calcium channels? Do they know about cortical layers, and inhibitory synapses, or will it take me two pages before I can make the case that L.IV holds a competition between thalamic inputs? By the time I get to concept implementation in layers 2 and 3, Ron DeSantis will be President.

    I suggest that anyone who’s interested but doesn’t know any neuroscience read On Intelligence. It’s a good and fast introduction. In places it’s more of a case for the author’s theories than an instructional text, but mostly the book is a neuroscience backgrounder. It’s focused on what the reader needs before he can understand the author’s theories.

    On Intelligence would be a great choice for a book review post.

    Sounds like you know enough to do it justice.

    Do.

    • #137
  18. Henry Racette Member
    Henry Racette
    @HenryRacette

    Barfly (View Comment):

    Henry Racette (View Comment):

    Barfly (View Comment):
    I think the ability to learn is distinguishing.

    Machines learn now, don’t they?

    No, not like intelligences. They simulate it.

    Funny how the words fly around.

    So artificial intelligence isn’t real intelligence because artificial intelligence simulates learning, but synthetic intelligence is real intelligence because synthetic intelligence doesn’t simulate learning, it really learns.

    Sorry, but I don’t think we even begin to know enough about the nature of thought to rule out the possibility of real thought in artificial intelligences.

    And yes, I’m pretty geeky on the whole topic, though I’ve written only a couple of small neural network engines and never built anything complicated. I also like words and their precise meaning, and think it should be possible to define terms without doing a jargon dump or referring us to further reading so we can do it ourselves.

    I think the AI field does pattern matching extraordinarily well. I think humans do, too. I’m not sure there’s very much more than that going on; I don’t think anyone knows for sure.

    • #138
  19. kedavis Coolidge
    kedavis
    @kedavis

    Henry Racette (View Comment):
    I think the AI field does pattern matching extraordinarily well.

    If that were true, shouldn’t there be fewer incidents of self-driving cars running into things, etc?

    • #139
  20. Henry Racette Member
    Henry Racette
    @HenryRacette

    kedavis (View Comment):

    Henry Racette (View Comment):
    I think the AI field does pattern matching extraordinarily well.

    If that were true, shouldn’t there be fewer incidents of self-driving cars running into things, etc?

    Probably not.

    First, there’s a lot more to a self-driving car than pattern matching.

    Secondly, I followed the sentence you quoted with an observation that human beings also do pattern matching extraordinarily well. Today, and by a large margin, humans do arbitrary pattern matching — pattern matching in a non-constrained visual space full of heaven knows what — far better than do machines.

    For example, I have no idea how a self-driving car would handle a descending hot air balloon or hang-glider approaching the road. As someone who learned to drive in New Mexico, where both hot air balloons and hang-gliders are common, I’ve experienced highway near-encounters with each without running into anything at all. People are good at adapting to really odd situations quickly.

    Thirdly, a certain number of accidents are simply unavoidable from the perspective of the driver. That is, the cause is not visible before it’s too late to be avoided, and the degree of caution required to anticipate the possibility and avoid it would make travel impractical. City driving with blind corners is often like that.

    I don’t know what the current collision rate is for self-driving cars versus human-driven cars. I assume the former will continue to improve, until self-driving cars are actually less likely to be engaged in serious accidents than are human-driven cars. (They may be there already, though at the cost of a lot of overly cautious driving and needless minor accidents. I don’t know.)

     

     

    • #140
  21. kedavis Coolidge
    kedavis
    @kedavis

    Henry Racette (View Comment):

    Secondly, I followed the sentence you quoted with an observation that human beings also do pattern matching extraordinarily well. Today, and by a large margin, humans do arbitrary pattern matching — pattern matching in a non-constrained visual space full of heaven knows what — far better than do machines.

    So what it amounts to is you have a variable definition of “extraordinarily well.”

    • #141
  22. Henry Racette Member
    Henry Racette
    @HenryRacette

    kedavis (View Comment):

    So what it amounts to is you have a variable definition of “extraordinarily well.”

    “Extraordinarily well” is by definition a relative term so, yes, what is “extraordinary” for an artificial intelligence and what is “extraordinary” — or even “ordinary” — for a human intelligence might vary by quite a bit.

    • #142
  23. kedavis Coolidge
    kedavis
    @kedavis

    Henry Racette (View Comment):

    “Extraordinarily well” is by definition a relative term so, yes, what is “extraordinary” for an artificial intelligence and what is “extraordinary” — or even “ordinary” — for a human intelligence might vary by quite a bit.

    I suggest you don’t use the same word in both cases then, at least not so close to each other.

    • #143
  24. Django Member
    Django
    @Django

    FWIW, Google fired the engineer who claimed their AI was sentient. I didn’t follow the story closely, but he seemed to have very little evidence for his claim. 

    • #144
  25. Henry Racette Member
    Henry Racette
    @HenryRacette

    kedavis (View Comment):

    I suggest you don’t use the same word in both cases then, at least not so close to each other.

    LOLing out loud.

    Okay. I’ll take that under advisement.

    • #145
  26. kedavis Coolidge
    kedavis
    @kedavis

    Henry Racette (View Comment):

    LOLing out loud.

    Okay. I’ll take that under advisement.

    What, you think people reading your original comment as most people would, wouldn’t take that to mean the “extraordinary” ability of humans and the “extraordinary” ability of AI are being put on somewhat equal footing?

    • #146
  27. Henry Racette Member
    Henry Racette
    @HenryRacette

    Django (View Comment):

    FWIW, Google fired the engineer who claimed their AI was sentient. I didn’t follow the story closely, but he seemed to have very little evidence for his claim.

    I read up on it a bit and concluded that the guy was reading way too much into a machine that was extraordinarily good (sorry, KE) at pattern recognition and text completion. (Then the guy went public with it, which was just dumb.) I don’t like Google, but can’t fault them for cutting the guy loose.

    I think we’re now at the point where machines can pass the Turing Test for a wide swath of the population, at least as long as the conversation is kept to some moderate length. The Turing Test, of course, is essentially a test of skill at mimicry, and mimicry is what text completion is all about.

    Mimicry is a process of statistical prediction. So is, at least in large part, driving a car. Machines are good at this now.
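
    To make that concrete, text completion by statistical prediction fits in a few lines. A toy word-level sketch, with a stand-in corpus; the real systems differ mainly in scale:

        import random
        from collections import defaultdict

        # Toy text completion: learn which word follows which, then sample.
        corpus = ("the car sees the road and the car follows the road "
                  "the driver sees the car and the driver follows the car").split()

        follows = defaultdict(list)
        for a, b in zip(corpus, corpus[1:]):
            follows[a].append(b)

        def complete(word, length=8):
            out = [word]
            for _ in range(length):
                options = follows.get(out[-1])
                if not options:
                    break
                out.append(random.choice(options))
            return " ".join(out)

        random.seed(0)
        print(complete("the"))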

    If there’s something else going on in our heads, something beyond incredibly deep and feedback-laden pattern recognition, then perhaps machines will never get there. I don’t think we know there’s anything more going on, but I’m equally sure that we don’t know there isn’t.

    I’m wary of anyone who says “it can’t possibly be that…” about things that are still largely mysterious and unknown. That goes for the creation science folk who assure me that science can’t answer this or that question about physical reality, and it goes as well for those who claim that there’s something about human thought and consciousness that can never be reproduced artificially. We just don’t know enough to speak with certainty.

    However, I am pretty sure that modern text-prediction pattern matching heuristic artificial intelligence systems are as mechanistic and un-self-aware as any other machine intelligence of the last few decades… just much larger and faster and better trained and more impressive.

    • #147
  28. Henry Racette Member
    Henry Racette
    @HenryRacette

    kedavis (View Comment):

    What, you think people reading your original comment as most people would, wouldn’t take that to mean the “extraordinary” ability of humans and the “extraordinary” ability of AI are being put on somewhat equal footing?

    Yes.

    • #148
  29. kedavis Coolidge
    kedavis
    @kedavis

    Henry Racette (View Comment):

    kedavis (View Comment):

    What, you think people reading your original comment as most people would, wouldn’t take that to mean the “extraordinary” ability of humans and the “extraordinary” ability of AI are being put on somewhat equal footing?
    Yes.

    Try it out on some non-engineer types, and get back to me.

    • #149
  30. Django Member
    Django
    @Django

    Henry Racette (View Comment):

    Django (View Comment):

    FWIW, Google fired the engineer who claimed their AI was sentient. I didn’t follow the story closely, but he seemed to have very little evidence for his claim.

    I read up on it a bit and concluded that the guy was reading way too much into a machine that was extraordinarily good (sorry, KE) at pattern recognition and text completion. (Then the guy went public with it, which was just dumb.) I don’t like Google, but can’t fault them for cutting the guy loose.

    I think we’re now at the point where machines can pass the Turing Test for a wide swath of the population, at least as long as the conversation is kept to some moderate length. The Turing Test, of course, is essentially a test of skill at mimicry, and mimicry is what text completion is all about.

    Mimicry is a process of statistical prediction. So is, at least in large part, driving a car. Machines are good at this now.

    If there’s something else going on in our heads, something beyond incredibly deep and feedback-laden pattern recognition, then perhaps machines will never get there. I don’t think we know there’s anything more going on, but I’m equally sure that we don’t know there isn’t.

    I’m wary of anyone who says “it can’t possibly be that…” about things that are still largely mysterious and unknown. That goes for the creation science folk who assure me that science can’t answer this or that question about physical reality, and it goes as well for those who claim that there’s something about human thought and consciousness that can never be reproduced artificially. We just don’t know enough to speak with certainty.

    However, I am pretty sure that modern text-prediction pattern matching heuristic artificial intelligence systems are as mechanistic and un-self-aware as any other machine intelligence of the last few decades… just much larger and faster and better trained and more impressive.

    Way back in 1977, Julian Jaynes wrote a book called The Origin of Consciousness in the Breakdown of the Bicameral Mind. It is still in print and was described by Daniel Dennett as “one of those books that is either complete rubbish or a work of consummate genius, nothing in between! Probably the former, but I’m hedging my bets.” Some of his ideas might be useful in determining whether a silicon-based intelligence has become sentient.

    • #150