Quote of the Day: We Know a Hawk From a Handsaw. (And a Cat From Guacamole.)

 

“One shortcoming of current machine-learning programs is that they fail in surprising and decidedly non-human ways. A team of Massachusetts Institute of Technology students recently demonstrated, for instance, how one of Google’s advanced image classifiers could be easily duped into mistaking an obvious image of a turtle for a rifle, and a cat for some guacamole.” — Jerry Kaplan, The Wall Street Journal, June 2, 2018

We have recently been bombarded with stories about AI (Artificial Intelligence, for those of you who live in farm country and think it means something else), and about how our meager human brains will soon not be able to keep up with those super-smart machines. Self-driving cars. Computers that accurately diagnose, and even treat, medical conditions. Robots that perform surgery and manage eldercare. Autonomous military drones. Siri. Predictive applications to “enhance” your Internet experience (Amazon, Pandora, etc.). Chatbots. Legal assistants. And, of course, the omnipresent Google.

So I was strangely reassured by today’s Quote of the Day, which appeared in a WSJ article focusing on efforts to make self-driving cars fail in predictable ways (so that, for example, they don’t mistake light reflected back from their own camera lenses for truck headlights rushing towards them from the other direction and run off the road as a result, and so that they don’t perform like the self-driving Uber test vehicle in Tempe, AZ, which killed a pedestrian walking her bike across the road because its algorithms, which did detect her presence, mistook it for “ghosting” in the poorly-lit night).

Elon Musk, the CEO of Tesla, isn’t happy about the bad rap self-driving cars are getting, believing that the press is out to get Tesla, and that the “holier-than-thou hypocrisy of the big media companies [lays] claim to the truth but [publishes] only enough to sugarcoat the lie.” In his view, this is “why the republic no longer respects them.” Because they are out to get Tesla.

In his view, maybe. I’m thinking that his essentially accurate view of press shortcomings, and of why the public disrespects big media, might be suffering from tunnel vision. Which, as I understand it, is another thing self-driving cars aren’t so good at.

Research shows we’d be much more accepting of self-driving cars if they failed in the same sorts of ways as cars driven by humans do — misjudging a curve and approaching at too high a speed, failing to notice the car in the “blind spot,” going the wrong way up a one-way street, distracted driving while texting or on the phone — rather than in the spectacularly unpredictable ways they sometimes do. Even if there are ultimately far fewer accidents, the sheer unpredictability of today’s AI “fails” makes us queasy.

No doubt many of these concerns will be addressed with time, money, and more research, and as the Wall Street Journal article makes clear, the future probably belongs, at least in large part, to AI and its many applications.

But, fellow humans, all is not yet lost, and with luck (our own unpredictability being the wild card, and one the machines haven’t sussed out yet), it never will be. Gird your loins, get behind the wheel, put your foot on the gas pedal, and press onward!

Oh, and pass the guacamole. Head first, please.

(I’ve always loved this ad, which seems largely representative of much of my life. And appropriate for this post.)


Published in Humor

There are 22 comments.

  1. Front Seat Cat Member
    Front Seat Cat
    @FrontSeatCat

    That raccoon commercial is hilarious – isn’t it odd how we have more amazing technology and more of everything than we’ve ever had as human beings, and we still fight and hate and have a huge substance abuse problem, etc., etc.? Technology will never replace basic human needs.

    • #1
  2. WillowSpring Member
    WillowSpring
    @WillowSpring

    One of the early experiments with computer-human interaction was in the early ’60s. It was called Eliza (https://en.wikipedia.org/wiki/ELIZA), and the communication was via teletype. It simulated a sort of therapist and would parrot back a comment in the form of a question:

    subject – “I hate my mother”

    Eliza – “Tell me more about your mother”

    and so on.
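
    The whole trick was a small table of pattern-and-response rules plus some pronoun swapping. Here is a minimal sketch in modern Python (purely illustrative, with a made-up rule table; Weizenbaum’s original was written in MAD-SLIP and was considerably more elaborate):

    ```python
    import re

    # Swap first- and second-person words so the echoed fragment reads naturally.
    REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "your": "my"}

    # (pattern, response template) pairs: a tiny, invented subset of an Eliza-style script.
    RULES = [
        (r"i hate (.*)", "Tell me more about {0}."),
        (r"i am (.*)", "Why do you say you are {0}?"),
        (r"(.*)", "Please go on."),  # catch-all when nothing else matches
    ]

    def reflect(fragment):
        # Rewrite "my mother" as "your mother", and so on.
        return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

    def eliza(statement):
        text = statement.lower().strip(".!? ")
        for pattern, template in RULES:
            match = re.fullmatch(pattern, text)
            if match:
                return template.format(*(reflect(g) for g in match.groups()))

    print(eliza("I hate my mother"))  # -> Tell me more about your mother.
    ```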

    I don’t think it is mentioned in the Wikipedia article, but Eliza’s “creator” – Joseph Weizenbaum – eventually took it offline, since so many users were treating it as a real analyst and telling it things they should have kept private.

    You are right about the AI mistakes being very surprising. In my early career (late ’60s), I worked on one of the first ‘viable’ speech recognition systems. One of its worst mistakes on a Navy project was to confuse “Sub” and “Destroyer.”

    In the early days, one expression was: “If it is doing something we don’t know how to do, it is AI; once we know how to do it, it is an Algorithm.”

    • #2
  3. Major Major Major Major Member
    Major Major Major Major
    @OldDanRhody

    She: Research shows we’d be much more accepting of self-driving cars if they failed in the same sorts of ways as cars driven by humans do — misjudging a curve and approaching at too high a speed, failing to notice the car in the “blind spot,” going the wrong way up a one-way street, distracted driving while texting or on the phone — rather than in the spectacularly unpredictable ways they sometimes do.

    I’m going to go with a “no” here, these being the sorts of things at which I would demand that the self-driver be superior to the average human driver.

    • #3
  4. She Member
    She
    @She

    Major Major Major Major (View Comment):

    She: Research shows we’d be much more accepting of self-driving cars if they failed in the same sorts of ways as cars driven by humans do — misjudging a curve and approaching at too high a speed, failing to notice the car in the “blind spot,” going the wrong way up a one-way street, distracted driving while texting or on the phone — rather than in the spectacularly unpredictable ways they sometimes do.

    I’m going to go with a “no” here, these being the sorts of things at which I would demand that the self-driver be superior to the average human driver.

    Yes, well, there’s a contrarian in every crowd . . .  (I agree, actually).

    • #4
  5. Seawriter Contributor
    Seawriter
    @Seawriter

    I have often felt that the reason some crave artificial intelligence is because of the lack of actual intelligence. That lack is demonstrated by the belief artificial intelligence could substitute for the actual thing.

    • #5
  6. She Member
    She
    @She

    Seawriter (View Comment):

    I have often felt that the reason some crave artificial intelligence is because of the lack of actual intelligence. That lack is demonstrated by the belief artificial intelligence could substitute for the actual thing.

    So true.  Also, there are a lot of magical thinkers in the world.  I learned that almost forty years ago, in the early days of office automation, and a bit later during the infancy of personal computers. 

    Once folks grasped a basic function that the machine could actually perform, they were quick to project other capabilities and powers onto it, ones that it wasn’t designed for and couldn’t possibly fulfill.  Cue the inevitable disappointment when the cake fell just a short while later.

    Our expectations have changed, and certainly they are more sophisticated today than they were in, say, 1983, but I still think there’s a fair amount of that going on.

     

    • #6
  7. Vectorman Inactive
    Vectorman
    @Vectorman

    WillowSpring (View Comment):

    You are right about the AI mistakes being very surprising. In my early career (late ’60s), I worked on one of the first ‘viable’ speech recognition systems. One of its worst mistakes on a Navy project was to confuse “Sub” and “Destroyer.”

    In the early days, one expression was: “If it is doing something we don’t know how to do, it is AI; once we know how to do it, it is an Algorithm.”

    If AI worked, it would be able to analyze patterns, form a preliminary design, set up test procedures, modify the design, and run final (compliance) tests. Then submit a patent, own the design for 20 years, make lots of money, invest in multiple AI machines, and end up owning the whole world. Humans would have no purpose and would die off. Sounds like an early Star Trek episode.



    • #7
  8. James Gawron Inactive
    James Gawron
    @JamesGawron

    She (View Comment):

    Seawriter (View Comment):

    I have often felt that the reason some crave artificial intelligence is because of the lack of actual intelligence. That lack is demonstrated by the belief artificial intelligence could substitute for the actual thing.

    So true. Also, there are a lot of magical thinkers in the world. I learned that almost forty years ago, in the early days of office automation, and a bit later during the infancy of personal computers.

    Once folks grasped a basic function that the machine could actually perform, they were quick to project other capabilities and powers onto it, ones that it wasn’t designed for and couldn’t possibly fulfill. Cue the inevitable disappointment when the cake fell just a short while later.

    Our expectations have changed, and certainly they are more sophisticated today than they were in, say, 1983, but I still think there’s a fair amount of that going on.

     

    She & Sea,

    I used to be a little snarky and say “artificial intelligence is for the artificially intelligent.” No need for this with Musk and the rest of the self-driving car enthusiasts. Their crashing, burning, crippling, and killing pretty well speaks for itself. No need for me to pile on.

    What it is really all about is having a little awe for Gd’s supreme creation. Gd created the whole world first so the world would be ready on the sixth day for the creation of man, the pinnacle of creation. We humans aren’t Gd-like, but we are pretty darn good, because Gd does good work. We need to appreciate ourselves more. Computers and such are just tools to assist in the preparation of information or in the performance of routine tasks. You never go wrong if you just think of them as that. If you expect more, then odds are you will run head-on into a brick wall, literally or figuratively.

    Regards,

    Jim

    • #8
  9. Valiuth Member
    Valiuth
    @Valiuth

    People spend too much time worrying about the future and technology in general. What everyone forgets is that humans are highly adaptable, more so than any of our machines. We will learn to live around our creations. If they remove work, we will invariably find new work to occupy our time.

    • #9
  10. Nanda Pajama-Tantrum Member
    Nanda Pajama-Tantrum
    @

    :-D

    • #10
  11. CarolJoy Coolidge
    CarolJoy
    @CarolJoy

    Elon Musk should spend a day or two figuring out what will happen to the world economy when delivery driver jobs worldwide go the way of the blacksmith’s. Already experts tell us some 250 million jobs will disappear.

    Will that fact not increase the likelihood of poverty? Will there be revolutions in various parts of the world once those jobs disappear?

    In Europe, I imagine various governmental analysts are figuring out how to cushion the European economy against that job-loss situation. Here in the USA, we are realizing that an entire segment of our society wants increased immigration, at the same time that AI’s realities are letting us know that more and more Americans will be removed from their jobs.

     

    • #11
  12. Seawriter Contributor
    Seawriter
    @Seawriter

    CarolJoy (View Comment):
    Elon Musk should spend a day or two figuring out what will happen to the world economy when delivery driver jobs worldwide go the way of the blacksmith’s. Already experts tell us some 250 million jobs will disappear.

    Umm . . . no. Delivery jobs are not going to go away because of self-driving cars (or trucks). Someone has to get the goods from the vehicle to the customer. Unless you expect the customers to unload the vehicles themselves, you have to have a human in the truck. You also need a human to get a receipt for acceptance of the delivery, ensure that the right goods are delivered, accept payment, etc. Self-driving vehicles may change the nature of a deliveryman’s job, but they will not eliminate it. They may actually increase employment opportunities, since delivery men/women will not need commercial drivers’ licenses, and more people can work in the delivery industry.

    That experts predict 250 million jobs will disappear is just more proof that experts are individuals with a lot of knowledge but limited imagination. 

    • #12
  13. She Member
    She
    @She

    Seawriter (View Comment):

    CarolJoy (View Comment):
    Elon Musk should spend a day or two figuring out what will happen to the world economy when delivery driver jobs worldwide go the way of the blacksmith’s. Already experts tell us some 250 million jobs will disappear.

    Umm . . . no. Delivery jobs are not going to go away because of self-driving cars (or trucks). Someone has to get the goods from the vehicle to the customer. Unless you expect the customers to unload the vehicles themselves, you have to have a human in the truck. You also need a human to get a receipt for acceptance of the delivery, ensure that the right goods are delivered, accept payment, etc. Self-driving vehicles may change the nature of a deliveryman’s job, but they will not eliminate it. They may actually increase employment opportunities, since delivery men/women will not need commercial drivers’ licenses, and more people can work in the delivery industry.

    That experts predict 250 million jobs will disappear is just more proof that experts are individuals with a lot of knowledge but limited imagination.

    I don’t know if I can speak to this from a position of authority. All I can describe is the “gap” that often appeared in the working world between the proposal and the ROI (wherein we were always trying to eliminate “people,” in all their imperfections, requirements for health care, workman’s comp, fuss and bother, and other individually sourced benefits, so that we could replace them with fixed-cost machines, or nothing at all), and the actual, real-world outcomes of such projects, in which it seemed that automation often required more people to support it, or differently-abled people to support it, than we had anticipated. All I can conclude is that drawing sweeping conclusions about what’s likely to happen is probably a mistake. And that we might want to give it a chance.

     

     

    • #13
  14. Instugator Thatcher
    Instugator
    @Instugator

    CarolJoy (View Comment):
    Already experts tell us some 250 million jobs will disappear.

    Nah, they still have to load and unload the vehicle.

    • #14
  15. Instugator Thatcher
    Instugator
    @Instugator

    Seawriter (View Comment):
    That experts predict 250 million jobs will disappear is just more proof that experts are individuals with a lot of knowledge but limited imagination. 

    I am stealing this concept.

    • #15
  16. Locke On Member
    Locke On
    @LockeOn

    She (View Comment):

    Once folks grasped a basic function that the machine could actually perform, they were quick to project other capabilities and powers onto it, ones that it wasn’t designed for and couldn’t possibly fulfill. Cue the inevitable disappointment when the cake fell just a short while later

    That’s a common observation, and it turns out to be deeply rooted. Nearly 30 years ago I was involved in some early research on how and whether people ascribe human attributes (emotions, agendas, intelligence, etc.) to computers, when there is nothing of the sort going on inside the box. More formal studies at Stanford showed this is very common and easy to invoke. Probably an evolved pattern, since the only prototype we have for ‘contingent behavior’ (the way Stanford put it) is other humans, and when you invoke that prototype, you get all the other expectations for human-human interaction and reflective ‘theory of mind’ behavior.

    The Eliza program mentioned above fits the pattern. It was modeled on a Rogerian psychologist, who would simply reflect statements back as questions to elicit further expression. As a program, it was a fairly simple-minded syntactic pattern matcher, the best you could do at the time, and had no actual model beyond that, though people would impute all sorts of things to it. Weizenbaum’s book “Computer Power and Human Reason” started from that point; it seriously freaked him out. We are a long way down that rabbit hole now.

    • #16
  17. Aaron Miller Inactive
    Aaron Miller
    @AaronMiller

    While enjoying a rare outing with my siblings, I suddenly felt the fever that had been passed around for weeks. Five minutes before, I was walking around, talking and laughing with them. But for a few minutes I had to sit, drink lots of water, and compose myself. Something was wrong. Then I was overheating, turning pale and weak. My siblings — who know me better than most — didn’t notice anything unusual until I said “I’m overheating” and rushed to a nearby room to lie on the cold floor. 

    Even human intelligence often fails because of assumptions through which new information is filtered. My family had seen me well moments before, so changes in my behavior were interpreted in ways that maintained that presumption. 

    One such assumption is that AI can, or cannot, reproduce particular human behaviors well. Believers would benefit from skeptics in their midst.

    As the virtual therapist story shows, AI doesn’t need to be as sophisticated as human intelligence to be mistaken for it. Assumptions and desires will fill in the gaps. With automobiles, errors can be deadly or expensive, so appearance is not enough.

    • #17
  18. EJHill Podcaster
    EJHill
    @EJHill

    At some point doesn’t your AI car have to be taught to kill you? I mean what if it has to make a decision that it’s you or that group of kids that just jumped into the middle of the road?

    • #18
  19. Pilli Inactive
    Pilli
    @Pilli

    EJHill (View Comment):

    At some point doesn’t your AI car have to be taught to kill you? I mean what if it has to make a decision that it’s you or that group of kids that just jumped into the middle of the road?

    Hence the circumstances behind the movie “I, Robot.” The Three Laws of Robotics came into play.

    • #19
  20. HankMorgan Inactive
    HankMorgan
    @HankMorgan

    Pilli (View Comment):

    EJHill (View Comment):

    At some point doesn’t your AI car have to be taught to kill you? I mean what if it has to make a decision that it’s you or that group of kids that just jumped into the middle of the road?

    Hence the circumstances behind the movie “I, Robot.” The Three Laws of Robotics came into play.

    AI at this point makes me think of Asimov’s short story “The Machine That Won the War.” Sure, it was MULTIVAC’s analysis that did everything, and the humans were just cogs feeding it data and reading the results.

    • #20
  21. Rick Banyan Member
    Rick Banyan
    @RickBanyan

    @HankMorgan, you might need to reread “The Machine That Won the War.” By the end of the war, the people feeding information to the machine were cooking their data, the machine was not working properly because of poor maintenance, and the guy doing the final analysis was relying on the flip of a coin. Humans were anything but “cogs” in the process of decision making, but the team operating the machine maintained the illusion that the system worked until the war ended, and then they “confessed” their true roles.

    • #21
  22. HankMorgan Inactive
    HankMorgan
    @HankMorgan

    Rick Banyan (View Comment):

    @HankMorgan, you might need to reread “The Machine that Won the War.” By the end of the war, people feeding information to the machine were cooking their data, the machine was not working properly because of poor maintenance, and the guy doing the final analysis was relying on the flip of a coin. Humans were anything but “cogs” in the process of decision making, but the team operating the machine maintained the illusion that the system worked until the war ended and then they “confessed” their true roles.

    It seems I need to work on my sarcasm-on-the-internet skills.

    But yes, that’s pretty much how AI works now, with humans on both ends making sure it doesn’t accept/do something obviously stupid.

    • #22