How should [a self-driving] car be programmed to act in the event of an unavoidable accident? Should it minimize the loss of life, even if it means sacrificing the occupants, or should it protect the occupants at all costs? Should it choose between these extremes at random?
Not sure “random” is what I’d choose, but let’s keep going:
Here is the nature of the dilemma. Imagine that in the not-too-distant future, you own a self-driving car. One day, while you are driving along, an unfortunate set of events causes the car to head toward a crowd of 10 people crossing the road. It cannot stop in time but it can avoid killing 10 people by steering into a wall. However, this collision would kill you, the owner and occupant. What should it do?
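Just to make the stakes concrete, here's that choice reduced to a few lines of code. A purely hypothetical sketch in Python; the function, the policy names, and the maneuver labels are all made up for illustration, not anything an actual manufacturer has published:

# Hypothetical sketch of the dilemma as a decision function.
# Nothing here reflects any real manufacturer's software; the two
# policies and the maneuver names are illustrative only.
def choose_maneuver(occupants: int, pedestrians: int, policy: str) -> str:
    """Pick between staying the course and swerving into the wall."""
    if policy == "minimize_deaths":
        # Utilitarian: sacrifice whichever group is smaller.
        return "swerve_into_wall" if occupants < pedestrians else "stay_course"
    elif policy == "protect_occupants":
        # Self-protective: the car never sacrifices its own passengers.
        return "stay_course"
    raise ValueError(f"unknown policy: {policy}")

# The scenario above: 1 occupant vs. 10 pedestrians.
print(choose_maneuver(1, 10, "minimize_deaths"))    # swerve_into_wall
print(choose_maneuver(1, 10, "protect_occupants"))  # stay_course

Either way, somebody has to pick the policy before the car ever leaves the lot. That's the rub.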
The problem, for people who will build and sell self-driving cars, is that if we know they’re programmed to kill us, we may be less inclined to buy them. We’re funny that way.
But researchers decided to explore public opinion on this very topic. They held focus groups, did some polling, and came up with this conclusion. Brace yourselves:
People are in favor of cars that sacrifice the occupant to save other lives, as long as they don’t have to drive one themselves.
That about sums up the entirety of the human experience.
Like anything, though, the more questions you ask, the more complex it gets:
Is it acceptable for an autonomous vehicle to avoid a motorcycle by swerving into a wall, considering that the probability of survival is greater for the passenger of the car than for the rider of the motorcycle? Should different decisions be made when children are on board, since they both have a longer time ahead of them than adults and had less agency in being in the car in the first place? If a manufacturer offers different versions of its moral algorithm, and a buyer knowingly chose one of them, is the buyer to blame for the harmful consequences of the algorithm’s decisions?
Or, how about this: Drive your own [expletive redacted] car.