Your Self-driving Car Is Programmed to Kill You
From MIT Technology Review comes this vision of the future:
How should [a self-driving] car be programmed to act in the event of an unavoidable accident? Should it minimize the loss of life, even if it means sacrificing the occupants, or should it protect the occupants at all costs? Should it choose between these extremes at random?
Not sure “random” is what I’d choose, but let’s keep going:
Here is the nature of the dilemma. Imagine that in the not-too-distant future, you own a self-driving car. One day, while you are driving along, an unfortunate set of events causes the car to head toward a crowd of 10 people crossing the road. It cannot stop in time but it can avoid killing 10 people by steering into a wall. However, this collision would kill you, the owner and occupant. What should it do?
The problem, for people who will build and sell self-driving cars, is that if we know they’re programmed to kill us, we may be less inclined to buy them. We’re funny that way.
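The utilitarian rule in the quote above — minimize the loss of life, even at the occupant’s expense — reduces to a grim bit of arithmetic. Here’s a minimal sketch of that rule; the function name and the casualty numbers are invented for illustration, and no real autonomous-driving system works from a lookup table like this:

```python
# Hypothetical sketch of the "minimize loss of life" rule described above.
# All names and numbers are invented for illustration.

def choose_action(options):
    """Pick the action with the fewest expected fatalities.

    `options` maps an action name to its expected number of deaths.
    """
    return min(options, key=options.get)

# The MIT scenario: stay on course into the crowd, or swerve into a wall.
scenario = {
    "continue": 10,          # ten pedestrians crossing the road
    "swerve_into_wall": 1,   # you, the owner and occupant
}

print(choose_action(scenario))  # the utilitarian rule sacrifices the occupant
```

Which is exactly the buyer’s problem: the math is trivial, and the answer is always you.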
But researchers decided to explore public opinion on this very topic. They held focus groups and did some polling and came up with this conclusion. Brace yourselves:
People are in favor of cars that sacrifice the occupant to save other lives—as long as they don’t have to drive one themselves.
That about sums up the entirety of the human experience.
Like anything, though, the more questions you ask the more complex it gets:
Is it acceptable for an autonomous vehicle to avoid a motorcycle by swerving into a wall, considering that the probability of survival is greater for the passenger of the car than for the rider of the motorcycle? Should different decisions be made when children are on board, since they both have a longer time ahead of them than adults and had less agency in being in the car in the first place? If a manufacturer offers different versions of its moral algorithm, and a buyer knowingly chose one of them, is the buyer to blame for the harmful consequences of the algorithm’s decisions?
Or, how about this: Drive your own [expletive redacted] car.
Published in Culture, Technology
When the people who want to force us all into driverless cars are the same ones who want to take away all guns, I daresay the game is given away.
This line . . .
. . . is why James Lileks is a national treasure.
Rob,
It doesn’t make any difference what the programmers choose to do. My lawyer just can’t wait for the self-driving car no matter how it’s programmed. He says that he intends to make a fortune (cookie) off of it. I’m not sure what he means by that.
All I know is that he’s only lost the one case. I hope those car manufacturers know what they’re doing. Somebody could get hurt.
Regards,
Jim
Nothing magic about this. The user of the car is responsible for everything the car does, just as he would be for every other tool or toy he uses.
Eric Hines
Aftermarket accessory. Problem solved. :-)
Cars might not even be available for purchase in the future. They might be provided as services, rather than products. As with software, the user agreement might “reserve the right” to alter or cancel your “user experience” at any time.
But this certainly gives new meaning to canceling your user experience.
Don’t forget, these are all going to be massively connected; expect all parties’ auto and medical insurance companies to be interested parties whose interconnected AIs will have a say in the outcome.
Tesla is pioneering push updates for your onboard software. Once cars with this capability are a significant presence on the roads, one of the first things to happen will probably be for the car to autonomously drive to a safe spot, lock the doors, and call for the cops if any mandatory insurance (car, medical, whatever) has lapsed.
They’ll just lower the national speed limit to about 10 mph, or whatever speed guarantees, to a high probability, that the occupants can’t be killed, then drive the car to protect those outside it. The slow speed won’t be a problem, since the central planners won’t let you leave your house except to work for the state anyway, and they’ll be working toward the day when no one has to go anywhere for anything. One will sit at home on the dole, watch the can’t-shut-it-off TV whenever one is awake, and get all medical services and the rest from the robots, working to ensure man is wiped from the face of the earth so the environmentalists will be happy. The trips to work will end when you are moved into the workplace to live, until your job, too, can be ended.
A rather gloomy view of the future, driven by the big step of removing the freedom a self-driven car allows. It terrifies us when our kids get their licenses, but in a world without drivers it becomes way too easy for the visions of 1984 or Brave New World to actually be implemented.
James: “My lawyer just can’t wait for the self-driving car no matter how it’s programmed. He says that he intends to make a fortune (cookie) off of it. I’m not sure what he means by that.”
MV: “Maybe this. If traffic deaths go from 30,000 to 3,000, will car manufacturers get 27,000 thank-you notes? No. They’ll get 3,000 lawsuits. Just like in medicine, lawyers will see this as an opportunity.”
Drew: “When the people who want to force us all into driverless cars are the same ones who want to take away all guns, I daresay the game is given away.”
MV: “But what if in some wonderful future we have private transportation systems rather than our current socialized ones? And the owner of a freeway decides to only allow driverless cars in order to improve the safety of his customers?”
Personally, I can’t wait. I have things I’d much rather be doing with my travel time than driving. Or cruising for parking. I’d have it drop me off in front of my destination, send it off somewhere, and call it back when I need it. I imagine I’ll be sending it to pick up some groceries after calling in my order. Paying next to nothing for insurance (assuming we can keep lawyers out of the game). Or traveling a hundred and twenty miles an hour while reading a book on the above-mentioned private freeway. The possibilities are endless when we let free markets work their magic.
Or maybe I won’t even have the hassles of personal ownership. I’ll just call a driverless Uber whenever I need one. With millions of others doing the same, there will certainly be one waiting just around the corner.
The car should have a programmed instinct for self-preservation, just as a person does. Otherwise, it will be bumping into all sorts of things when smaller dilemmas arise too. For example, swerving and losing a side mirror instead of honking at the old lady crossing the road. It should therefore do what causes the least damage to itself. Anything else will result in a lot of minor accidents.
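The self-preservation heuristic proposed above — among the actions that avoid harming a person, pick whichever damages the car least — can be sketched in a few lines. The function name, the option names, and the damage scores are all invented for illustration:

```python
# Hypothetical sketch of the "least damage to itself" heuristic from the
# comment above. All names and scores are invented for illustration.

def least_self_damage(options):
    """Pick the action with the lowest damage-to-self score, refusing any
    action that would harm a person.

    `options` maps an action name to a (harms_person, damage_to_self) pair.
    """
    safe = {a: dmg for a, (harms, dmg) in options.items() if not harms}
    if not safe:
        raise ValueError("no action avoids harming a person")
    return min(safe, key=safe.get)

# The old-lady-in-the-road example: honking and braking beats sacrificing
# a side mirror, and both beat hitting the pedestrian.
options = {
    "continue": (True, 0),             # hits the pedestrian
    "swerve_clip_mirror": (False, 1),  # loses a side mirror
    "honk_and_brake": (False, 0),      # no harm, no damage
}

print(least_self_damage(options))  # prefers the no-damage option
```

The design point of the heuristic is the tie-breaker: harm to people is a hard constraint, and self-damage only ranks the remaining options — which is why, under this rule, the car honks instead of shedding mirrors.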
So do I. But if I don’t have a manual override (I’m a better driver than any machine), I’m not interested. This is what hackers are for.
Eric Hines
Not so simple. Is the motorcycle showboating? Is the car speeding? Without taking into account the good or bad behavior of both sides, this will devolve into a system which can be gamed. I submit that it can’t be done, because we only loosely manage to do it with people instead of computers. Or to re-phrase that, by the time this algorithm is reliable in a sustainable way, humans won’t be making these sorts of decisions.
The car will know that that’s an endangered variant of possum but this is a common oak.
The fantasy of a world with driverless cars is 180 degrees flipped from the reality of such a world. The reality will have nothing to do with the free market or with your independence and everything to do with government central planning. Surely you have enough real-world experience to know this.
Drew, I hope you are wrong but fear you are right. As with computers, though, maybe this new technology can move too fast for govt to keep up, and offer too good a thing for people to allow too much govt interference.
Squirrels will become as common as cockroaches without drivers to keep the population in check.
Goes to show that even though you’re a PhD chemist with an IQ around 160, you don’t necessarily see all the ramifications of your ideas:
What if a paper bag blows in front of the car? A person instantly knows what it is, and not to take possibly dangerous evasive maneuvers. Will the driverless car know what it is? Can it distinguish a bag from a dog from a child? Either it will be programmed to try to distinguish between these, in which case, unlike a person, it will not always be right and will therefore sometimes kill a dog or a child. Or it will be programmed to always take evasive maneuvers without trying to figure out what the obstacle is. And that would get old quick. Either way, I think people are going to wish they could drive their own cars.
Not to mention, everyone talks about this as if people only use cars to get from point A to point B. They don’t. How does a driverless car handle that? Let’s just drive around this part of the city and see what houses are for sale, or just to get a feel for what the neighborhoods are like… The car could respond to voice directions, but when you think about how you actually drive in that type of situation, voicing it really sounds arduous. There’s going to be major buyer’s remorse with this gee-whiz novelty.
Can we please not descend into this mass hysteria every single time someone mentions a self-driving car?
Seriously, people. Chill out, pour yourself a bourbon, and imagine all the exciting Ricochet posts you could write during that 90-minute commute your car now does for you.
People don’t have a say in government anymore. : (
Doctor Susan Calvin, please pick up the white courtesy phone…
Spiff?
I scanned through too fast and missed that.
Nah.
Bourbon seems to have hired kale’s publicist. It’s everywhere!
It’s probably been everywhere for awhile. I’m a late-if-ever adopter. The bourbon blitz found me via Steve’s Small Batch Bourbon Vanilla Ice Cream on a recent high pressure visit back east. Yum!
Great piece, Rob! Wish George Orwell were still around to expand MIT Technology Review’s little foray into “algorithmic morality” into a series treatment for Syfy.
Dr. Asimov, please pick up the white courtesy phone… whoops, I think he’s dead already. Now who will explain the Three Laws of Robotics?
I sure hope it isn’t this guy.
Regards,
Jim
Sorry, showboaters’ lives matter, speeders’ lives matter. That part of the algorithm is easy.
This ought to be on a bumper sticker. It’s genius! And I say that as someone fond of squirrels.
The point is: no one will ride in a car that would drive itself into a wall to save something or someone else.
Off topic, Rob, but you’ve got me thinking. Next time my Uber driver shows up, before entering I’ll ask the question: In the event of an unavoidable accident, do you intend to minimize the loss of life, even if it means sacrificing the life of your occupant?
Assuming he or she doesn’t immediately lock the doors, drive off, and give me a Five-Star Insanity rating, but instead answers the question, at least it will be a window into the soul. A programming feature surely missing in our future driverless cars.
What do human drivers normally do when a group of idiots step into the middle of the street? I find it hard to believe they do not attempt to swerve out of the way.