More Fuel for the Self-Driving Car Fire

 

Just came across this article this morning. I’ll highlight one paragraph and add emphasis:

The linked report suggests that the artificial intelligence may never be “intelligent” enough to do what human beings are generally capable of doing. (Well, not all of us, of course. A couple of days driving in Florida will tell you that.) That may be true in some ways, but more than raw “intelligence,” the AI systems do not have human intuition. They aren’t as intuitive as humans in terms of trying to guess what the rest of the unpredictable humans will do at any given moment. In some of those cases, it’s not a question of the car not realizing it needs to do something, but rather making a correct guess about what specific action is required.

I’ve made this argument before, that humans are better at winging it than AI — so far.

Admiral Rickover was pretty much against using computers to run the engine room, with a couple of exceptions. Any task that was deemed too monotonous was one, the other being any task that could be performed more quickly by a computer. Even so, these weren’t really computers in the AI sense, but rather electronic sensors with programming to handle the task at hand. I’m sure modern submarine engine rooms have more computerization, but I’ll bet the crew can easily take over if the machines fail . . .

Published in Technology
This post was promoted to the Main Feed by a Ricochet Editor at the recommendation of Ricochet members.


  1. Columbo Member
    Columbo
    @Columbo

    Solyndra on wheels!

    • #1
  2. Bryan G. Stephens Thatcher
    Bryan G. Stephens
    @BryanGStephens

    I think someday we will pull it off, but not soon. 

     

    • #2
  3. Matt Bartle Member
    Matt Bartle
    @MattBartle

    It’ll work reliably when all the cars are computer-driven. People are too unpredictable.

    • #3
  4. Bryan G. Stephens Thatcher
    Bryan G. Stephens
    @BryanGStephens

    Matt Bartle (View Comment):

    It’ll work reliably when all the cars are computer-driven. People are too unpredictable.

I trained my kids to stop if they see a ball enter the road because a child is likely to follow. Is AI going to do that?

    OK, maybe that is added to the program. Blind drives?

    People can do better

    • #4
  5. Stad Coolidge
    Stad
    @Stad

    Bryan G. Stephens (View Comment):

    Matt Bartle (View Comment):

    It’ll work reliably when all the cars are computer-driven. People are too unpredictable.

I trained my kids to stop if they see a ball enter the road because a child is likely to follow. Is AI going to do that?

    OK, maybe that is added to the program. Blind drives?

    People can do better

    I guarantee you once all cars are computer driven, they will be put under control of a government master computer . . .

    • #5
  6. George Savage Contributor
    George Savage
    @GeorgeSavage

    Matt Bartle (View Comment):

    It’ll work reliably when all the cars are computer-driven. People are too unpredictable.

    Matt, I think you may have identified the next crusade.  Here’s the scenario: Given the enormous investment in fully autonomous cars, together with the lack of success at handling edge cases that humans hardly notice, developers will agitate for mandatory autonomous driving.  Rather than jettisoning the goal in light of repeated failures, the command-and-control types will declare human drivers to be the obstacle to reaching Utopia.

    • #6
  7. The Great Adventure Coolidge
    The Great Adventure
    @TGA

    Bryan G. Stephens (View Comment):

    Matt Bartle (View Comment):

    It’ll work reliably when all the cars are computer-driven. People are too unpredictable.

I trained my kids to stop if they see a ball enter the road because a child is likely to follow. Is AI going to do that?

    OK, maybe that is added to the program. Blind drives?

    People can do better

I have a brother-in-law who is a captain in his local police force. I remember chatting with him in his front yard about 10 years or so ago when someone was driving up his residential street at an obviously excessive rate of speed. He continued the conversation, calmly walked over and picked up one of his kids’ soccer balls, and rolled it out into the street as the car approached. Dude hit the brakes pretty hard.

    So even if all cars were run by computers and could predict what each other was going to do, they would still have to handle what humans are going to do.

    • #7
  8. DonG (CAGW is a Scam) Coolidge
    DonG (CAGW is a Scam)
    @DonG

    First, duh!   Second, we won’t have cars in 10 years anyway.   Third, this is a breakthrough; now they can insert wires into roads to make it easy for cars to stay in lanes.   Admitting constraints are required is the first step to a workable solution.

    • #8
  9. Mad Gerald Coolidge
    Mad Gerald
    @Jose

    Sam Harris was talking about self-driving cars a few years ago.  He addressed the scenario where a child ran in front of a car, and the driver would have to decide whether to swerve into a crowd of pedestrians (or something similar).

    His proposal was to form a panel of genius experts in ethics to examine all the possible outcomes and program the AI to select the least bad outcome.  The thought of the EXPERTS deciding the outcome in advance made me uncomfortable. 

    • #9
  10. David C. Broussard Coolidge
    David C. Broussard
    @Dbroussa

One of the interesting side effects of AI was that pretty much everyone in the industry thought that AI would replace routine tasks first but never replace human decisions. This hasn’t proven true in many cases. It turns out that many of those monotonous tasks aren’t actually completely routine. Yes, the welding on a car assembly line can be done by a robot, but so many other tasks end up being much harder to program because of minor variations. For example, we use automated harvesting with combines, where the on-board computer uses GPS to align the harvester on the rows for planting, maintenance, and harvesting. But there is still a driver in the combine to deal with unknowable circumstances.

AI driving is relatively easy if all the vehicles are AIs. Then the human factor is removed from the equation, and it’s fairly simple for each vehicle to know what the others will do because they all use the same or similar algorithms. One potential solution would be for highways to have self-driving-only lanes, similar to HOV lanes, where human-driven vehicles aren’t present. It’s been shown that in those situations the lanes could operate at higher speeds with smaller gaps between vehicles (as in inches, as opposed to feet, between cars going 70 mph). That could do a lot to reduce congestion, but doesn’t mean that self-driving is ubiquitous.
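The inches-versus-feet claim can be sanity-checked with a back-of-the-envelope calculation. Here is a sketch with my own illustrative numbers (the reaction times are assumptions, not figures from the comment): if leader and follower brake identically, the required gap only has to cover the follower's reaction delay.

```python
# Distance covered during the reaction delay at highway speed.
# Assumed numbers: ~1.5 s for a human driver, ~20 ms for a
# vehicle-to-vehicle networked response. Purely illustrative.

MPH_TO_MPS = 0.44704

def reaction_gap_m(speed_mph: float, reaction_s: float) -> float:
    """Meters traveled before braking begins."""
    return speed_mph * MPH_TO_MPS * reaction_s

human = reaction_gap_m(70, 1.5)    # human reaction time
v2v = reaction_gap_m(70, 0.02)     # assumed networked reaction time

print(f"human driver gap:  {human:.1f} m")   # tens of meters
print(f"networked car gap: {v2v:.2f} m")     # well under a meter
```

With a human at the wheel the gap works out to roughly 47 m at 70 mph; with coordinated braking it shrinks to well under a meter, which is where the inches-versus-feet comparison comes from.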

In my world, AI is really taking off doing information analysis of unstructured data. One example is taking documents like invoices, credit memos, etc., and processing them based on the data in them. This is a place where AI has really taken off and will continue to grow. It will get rid of middle management and information workers by reducing their need to review and store content, which is interesting.
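As a purely hypothetical illustration of that invoice/credit-memo routing (the keywords and destination names below are invented for the example; real deployments use trained models rather than keyword rules):

```python
# Toy keyword-based document router. Shows the routing shape only;
# a production system would classify with a trained model.

def route_document(text: str) -> str:
    """Pick a destination queue for an unstructured document."""
    t = text.lower()
    if "credit memo" in t:
        return "credit_memos"
    if "invoice" in t:
        return "invoices"
    return "manual_review"  # anything unrecognized goes to a human

print(route_document("Invoice #1234, due in 30 days"))   # invoices
print(route_document("CREDIT MEMO for order 9917"))      # credit_memos
print(route_document("Quarterly planning notes"))        # manual_review
```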

    • #10
  11. Full Size Tabby Member
    Full Size Tabby
    @FullSizeTabby

    Not just unpredictable humans. My bugaboo is deer. If one runs in front of me, I know to look for others nearby. 

    We can make all vehicles “autonomous” so that all vehicle movements are predictable, which may solve vehicle-to-vehicle conflicts, but unless we’re going to wall off vehicle pathways from all spaces containing other humans or animals, unpredictability will remain in the system. 

I had read a while ago that in a world in which autonomous vehicles and human-controlled vehicles shared the roadways, one of the frequent causes of accidents was that the autonomous vehicles didn’t behave the way human drivers expected human-controlled vehicles to behave. The rules-based, often abrupt, sometimes binary, and always reactive decision-making by the autonomous vehicles caused them to maneuver in ways human drivers found surprising.

    • #11
  12. Django Member
    Django
    @Django

    I heard a news report that truck drivers are in demand and a real shortage is expected. Made me wonder if we could or should develop a parallel interstate highway system with traffic restricted to self-driving trucks. Make them hybrid. Place automated charging stations at appropriate intervals. Two problems tackled at once: 1) the distribution part of the supply chain, 2) extended test cases for self-driving vehicles. 

    • #12
  13. Stad Coolidge
    Stad
    @Stad

    Full Size Tabby (View Comment):
    I had read a while ago that in a world in which autonomous vehicles and human-controlled vehicles shared the roadways, one of the frequent causes of accidents was that the autonomous vehicles didn’t behave the way human drivers expected human-controlled vehicles to behave.

    That’s an observation I haven’t thought of before.  Thanks!

    • #13
  14. Gossamer Cat Coolidge
    Gossamer Cat
    @GossamerCat

    Matt Bartle (View Comment):

    It’ll work reliably when all the cars are computer-driven. People are too unpredictable.

And nobody is jaywalking or playing in the street or thinking about it, and there are no deer, cats, dogs, bison, cows, etc., on the side of the road. Or maybe all of these are more predictable.

    (Sorry, I hadn’t read the other comments that said the same thing before I replied to this).

    • #14
  15. Mad Gerald Coolidge
    Mad Gerald
    @Jose

    Django (View Comment):

    I heard a news report that truck drivers are in demand and a real shortage is expected. Made me wonder if we could or should develop a parallel interstate highway system with traffic restricted to self-driving trucks. Make them hybrid. Place automated charging stations at appropriate intervals. Two problems tackled at once: 1) the distribution part of the supply chain, 2) extended test cases for self-driving vehicles.

    I’ve seen an article that described trucks with AI that follow each other.  The lead truck had a human driver, followed by half a dozen with AI.
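That convoy arrangement is usually called platooning. A toy sketch of the follower logic, with made-up gains and gap targets, just to show the shape of the control loop:

```python
# Proportional gap-keeping controller for a follower truck.
# Gains and the 15 m target gap are illustrative assumptions,
# not values from any real platooning system.

def follower_speed(leader_speed: float, gap: float,
                   target_gap: float = 15.0, k: float = 0.5) -> float:
    """Match the leader's speed, corrected by the gap error."""
    error = gap - target_gap        # positive => too far back
    return max(0.0, leader_speed + k * error)

print(follower_speed(25.0, gap=20.0))  # 27.5: too far back, speed up
print(follower_speed(25.0, gap=10.0))  # 22.5: too close, slow down
```

In practice each follower would also fuse the lead truck's broadcast braking commands, which is what lets the convoy run with much shorter gaps than human drivers could manage.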

    • #15
  16. Django Member
    Django
    @Django

    Mad Gerald (View Comment):

    Django (View Comment):

    I heard a news report that truck drivers are in demand and a real shortage is expected. Made me wonder if we could or should develop a parallel interstate highway system with traffic restricted to self-driving trucks. Make them hybrid. Place automated charging stations at appropriate intervals. Two problems tackled at once: 1) the distribution part of the supply chain, 2) extended test cases for self-driving vehicles.

    I’ve seen an article that described trucks with AI that follow each other. The lead truck had a human driver, followed by half a dozen with AI.

    Well, maybe I’m not crazy. 

    • #16
  17. Flicker Coolidge
    Flicker
    @Flicker

    Bryan G. Stephens (View Comment):

    Matt Bartle (View Comment):

    It’ll work reliably when all the cars are computer-driven. People are too unpredictable.

I trained my kids to stop if they see a ball enter the road because a child is likely to follow. Is AI going to do that?

    OK, maybe that is added to the program. Blind drives?

    People can do better

I don’t know enough to make any firm pronouncements on this, but safe human drivers do respond to a ball bouncing out into the street and to many other nearly unconscious signs that need to be addressed. I would think that these reactions could be programmed into driving software even if the programmers or their resultant software didn’t quite understand them.
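A minimal illustration of how such a reaction could be hand-coded (the rule table and action strings are invented for this example; production systems would learn these precursor associations from data rather than enumerate them):

```python
# Hand-coded "precursor" rules: an object entering the road implies
# a likely follow-on hazard. Entries are illustrative only.

PRECURSOR_RULES = {
    "ball": "child may follow",
    "deer": "more deer may follow",
    "shopping_cart": "pedestrian may follow",
}

def respond(detected_object: str) -> str:
    """Return a driving action for an object entering the roadway."""
    if detected_object in PRECURSOR_RULES:
        return f"brake: {PRECURSOR_RULES[detected_object]}"
    return "slow and monitor"

print(respond("ball"))   # brake: child may follow
print(respond("crate"))  # slow and monitor
```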

    • #17
  18. Flicker Coolidge
    Flicker
    @Flicker

    Stad (View Comment):

    Bryan G. Stephens (View Comment):

    Matt Bartle (View Comment):

    It’ll work reliably when all the cars are computer-driven. People are too unpredictable.

I trained my kids to stop if they see a ball enter the road because a child is likely to follow. Is AI going to do that?

    OK, maybe that is added to the program. Blind drives?

    People can do better

    I guarantee you once all cars are computer driven, they will be put under control of a government master computer . . .

    And will only be available to the government.

    • #18
  19. Flicker Coolidge
    Flicker
    @Flicker

    George Savage (View Comment):

    Matt Bartle (View Comment):

    It’ll work reliably when all the cars are computer-driven. People are too unpredictable.

    Matt, I think you may have identified the next crusade. Here’s the scenario: Given the enormous investment in fully autonomous cars, together with the lack of success at handling edge cases that humans hardly notice, developers will agitate for mandatory autonomous driving. Rather than jettisoning the goal in light of repeated failures, the command-and-control types will declare human drivers to be the obstacle to reaching Utopia.

And then once gas-powered cars are eliminated, electric cars will be determined to be even more dangerous to the environment than gas-powered ones, and electrics will be reduced to only a small percentage of the cars currently on the road.

    The point isn’t safety or the environment, it’s denial of freedom of movement.

    • #19
  20. Flicker Coolidge
    Flicker
    @Flicker

    Full Size Tabby (View Comment):

    We can make all vehicles “autonomous” so that all vehicle movements are predictable, which may solve vehicle-to-vehicle conflicts, but unless we’re going to wall off vehicle pathways from all spaces containing other humans or animals, unpredictability will remain in the system. 

    Even if the reasoning can’t be programmed in, the observable behavior could be, I would think.

    • #20
  21. Flicker Coolidge
    Flicker
    @Flicker

    Stad (View Comment):

    Full Size Tabby (View Comment):
    I had read a while ago that in a world in which autonomous vehicles and human-controlled vehicles shared the roadways, one of the frequent causes of accidents was that the autonomous vehicles didn’t behave the way human drivers expected human-controlled vehicles to behave.

    That’s an observation I haven’t thought of before. Thanks!

    Yes, the prime example is a motorist rear-ending a computer-driven vehicle that’s making a right turn at a stop sign or a light because the human doesn’t expect the computer-driven vehicle to actually stop and wait.

    • #21
  22. DonG (CAGW is a Scam) Coolidge
    DonG (CAGW is a Scam)
    @DonG

    Mad Gerald (View Comment):
    Sam Harris was talking about self-driving cars a few years ago.  He addressed the scenario where a child ran in front of a car, and the driver would have to decide whether to swerve into a crowd of pedestrians (or something similar).

The biggest problem with self-driving is that it transfers 100% of the liability for accidents from the driver to the manufacturer. The auto insurance industry is an $800B/year business; none of the car makers can afford that kind of liability.

    • #22
  23. David Foster Member
    David Foster
    @DavidFoster

    DonG (CAGW is a Scam) (View Comment):
The biggest problem with self-driving is that it transfers 100% of the liability for accidents from the driver to the manufacturer.

I’d think this would be the case only if the driver uses the car in conditions approved by the manufacturer for self-driving…for example, the spec might say: Do not use in heavy snow, or Do not use when winds exceed 30 mph, or Do not use on roads that are not certified for self-driving.

     

     

    • #23
  24. Flicker Coolidge
    Flicker
    @Flicker

    DonG (CAGW is a Scam) (View Comment):

    Mad Gerald (View Comment):
    Sam Harris was talking about self-driving cars a few years ago. He addressed the scenario where a child ran in front of a car, and the driver would have to decide whether to swerve into a crowd of pedestrians (or something similar).

The biggest problem with self-driving is that it transfers 100% of the liability for accidents from the driver to the manufacturer. The auto insurance industry is an $800B/year business; none of the car makers can afford that kind of liability.

My favorite car is nearly 30 years old, and its owner’s manual is at least 10% warnings against misuse. I have a newer chain saw, and its manual is more than 50% warnings against misuse. The purpose of these warnings is to protect the manufacturer from liability.

I doubt the manufacturers would allow themselves to be held culpable for accidents even in computer-piloted driving. But can you imagine the owner’s manual for a driverless car?

    • #24
  25. Django Member
    Django
    @Django

    Flicker (View Comment):

    DonG (CAGW is a Scam) (View Comment):

    Mad Gerald (View Comment):
    Sam Harris was talking about self-driving cars a few years ago. He addressed the scenario where a child ran in front of a car, and the driver would have to decide whether to swerve into a crowd of pedestrians (or something similar).

The biggest problem with self-driving is that it transfers 100% of the liability for accidents from the driver to the manufacturer. The auto insurance industry is an $800B/year business; none of the car makers can afford that kind of liability.

My favorite car is nearly 30 years old, and its owner’s manual is at least 10% warnings against misuse. I have a newer chain saw, and its manual is more than 50% warnings against misuse. The purpose of these warnings is to protect the manufacturer from liability.

I doubt the manufacturers would allow themselves to be held culpable for accidents even in computer-piloted driving. But can you imagine the owner’s manual for a driverless car?

    You can worry when the manual warns you against speaking harshly to the car. 

    • #25
  26. Flicker Coolidge
    Flicker
    @Flicker

    Django (View Comment):

    Flicker (View Comment):

    DonG (CAGW is a Scam) (View Comment):

    Mad Gerald (View Comment):
    Sam Harris was talking about self-driving cars a few years ago. He addressed the scenario where a child ran in front of a car, and the driver would have to decide whether to swerve into a crowd of pedestrians (or something similar).

The biggest problem with self-driving is that it transfers 100% of the liability for accidents from the driver to the manufacturer. The auto insurance industry is an $800B/year business; none of the car makers can afford that kind of liability.

My favorite car is nearly 30 years old, and its owner’s manual is at least 10% warnings against misuse. I have a newer chain saw, and its manual is more than 50% warnings against misuse. The purpose of these warnings is to protect the manufacturer from liability.

I doubt the manufacturers would allow themselves to be held culpable for accidents even in computer-piloted driving. But can you imagine the owner’s manual for a driverless car?

    You can worry when the manual warns you against speaking harshly to the car.

    Yes, of course.  I hadn’t thought about that.

    • #26
  27. Phil Turmel Coolidge
    Phil Turmel
    @PhilTurmel

    Django (View Comment):

    I heard a news report that truck drivers are in demand and a real shortage is expected. Made me wonder if we could or should develop a parallel interstate highway system with traffic restricted to self-driving trucks. Make them hybrid. Place automated charging stations at appropriate intervals. Two problems tackled at once: 1) the distribution part of the supply chain, 2) extended test cases for self-driving vehicles.

    We have something very close to this already, down to the separate lanes.  It’s called the railroad.  One “driver” for dozens if not hundreds of “trucks”.

    That’s the driver to replace, if any.  The AI doesn’t even have to steer.

    • #27
  28. Henry Racette Member
    Henry Racette
    @HenryRacette

    I don’t want a self-driving car, ever.

    Having said that, I have a lot of respect for the ability of big neural networks to become very good at pattern matching, and I think they’ll eventually be better at analyzing street scenes and handling even unpredictable situations than are humans.

    Of course, I think humans are, essentially, big neural networks, or something very close to it — at least as far as cognition is concerned.

    But I still don’t want a self-driving car.

    • #28
  29. Barfly Member
    Barfly
    @Barfly

    Bryan G. Stephens (View Comment):

    I think someday we will pull it off, but not soon.

     

    When we do, we’ll all know about it pretty quickly.

    • #29
  30. Barfly Member
    Barfly
    @Barfly

    I think Artificial is the right word for today’s machine intelligence. When we do make a real intelligence, and we will soon, it’ll best be called Synthetic Intelligence. It will be real, not artificial.

    I’ve taken a couple of shots at it myself, one serious. Spent three whole months at it, did nothing else but drink beer and fish. I backed off when I realized my neuron model didn’t hack it and I needed more reading.

     

    • #30