I recently came across a fascinating piece in MIT Technology Review entitled “Why Self-Driving Cars Must Be Programmed to Kill.”
The story zeroes in on a thorny question of “how the car should be programmed to act in the event of an unavoidable accident.” It asked: “Should it minimize the loss of life, even if it means sacrificing the occupants, or should it protect the occupants at all costs? Should it choose between these extremes at random?”
The article posed a hypothetical crisis: Ten people suddenly appear in the street. The car, traveling at high speed and unable to stop in time, can protect its occupant by plowing into one or more of the pedestrians, or put the occupant at risk by swerving out of control.
The point is that, someday, automotive engineers will need to program a robotic car to make ethical decisions. If so, who chooses the more ethical course?
While pondering this, it suddenly came to me: Why are we insisting that a self-driving car make life-or-death decisions?
Of course, I want to save as many innocent pedestrians as possible. But I don’t want to harm my passengers, either. Certainly, I don’t want to die.
We’re posing an extreme situation in which there is no single “right” answer. Given that uncertainty, why not let the car do the moral dirty work? Afterwards, the survivors can survey the carnage and say, “Well, the car was just following its program.”
But right now, according to the digital logic of the technology, the only choice is to “save the occupants at all costs.” Life, and most of its ethical choices, is more complicated than that. Many – surveys say most – drivers would risk their own life to save others. (The catch is that most of those surveyed were answering the question as pedestrians, rather than owners of self-driving cars.)
To read the rest of this article, visit EBN sister site EE Times.