“One shortcoming of current machine-learning programs is that they fail in surprising and decidedly non-human ways. A team of Massachusetts Institute of Technology students recently demonstrated, for instance, how one of Google’s advanced image classifiers could be easily duped into mistaking an obvious image of a turtle for a rifle, and a cat for some guacamole.” — Jerry Kaplan, The Wall Street Journal, June 2, 2018
We have recently been bombarded with stories about AI (Artificial Intelligence, for those of you who live in farm country and think it means something else), and about how our meager human brains will soon not be able to keep up with those super-smart machines. Self-driving cars. Computers that accurately diagnose, and even treat, medical conditions. Robots that perform surgery and manage eldercare. Autonomous military drones. Siri. Predictive applications to “enhance” your Internet experience (Amazon, Pandora, etc.). Chatbots. Legal assistants. And, of course, the omnipresent Google.
So I was strangely reassured by today’s Quote of the Day, which appeared in a WSJ article focusing on efforts to make self-driving cars fail in predictable ways (so that, for example, they do not mistake light reflected back from their camera lenses for truck headlights rushing toward them from the other direction and run off the road as a result, or so they don’t perform like the self-driving Uber test vehicle in Tempe, AZ, which killed a pedestrian walking her bike across the road because its algorithms, though they did recognize her presence, mistook it for “ghosting” in the poorly-lit night).
Elon Musk, the CEO of Tesla, isn’t happy about the bad rap self-driving cars are getting, believing that the press is out to get Tesla, and that the “holier-than-thou hypocrisy of the big media companies [lays] claim to the truth but [publishes] only enough to sugarcoat the lie.” In his view, this is “why the republic no longer respects them.” Because they are out to get Tesla.
In his view, maybe. I’m thinking that his essentially accurate view of press shortcomings, and of why the public disrespects big media, might be suffering from tunnel vision. Which, as I understand it, is another thing self-driving cars aren’t so good at.
Research shows we’d be much more accepting of self-driving cars if they failed in the same sorts of ways as cars driven by humans do — misjudging a curve and approaching it at too high a speed, failing to notice the car in the “blind spot,” going the wrong way up a one-way street, driving distracted while texting or on the phone — rather than in the spectacularly unpredictable ways they sometimes do. Even if there are ultimately far fewer accidents, the sheer unpredictability of today’s AI “fails” makes us queasy.
No doubt many of these concerns will be addressed with time, money, and more research, and as the Wall Street Journal article makes clear, the future probably belongs, at least in large part, to AI and its many applications.
But, fellow humans, all is not yet lost, and with luck (our own unpredictability wild card, and one the machines haven’t sussed out yet), it never will be. Gird your loins, get behind the wheel, put your foot on the gas pedal, and press onward!
Oh, and pass the guacamole. Head first, please.
(I’ve always loved this ad, which seems largely representative of much of my life, and appropriate for this post.)