Kill 1 to Save 5: The Choice a Driverless Car May Make
Driverless cars will have to make an impossible moral choice
Posted Jun 27, 2016
If you’ve taken an ethics course in the last dozen years, you are probably familiar with the Trolley Problem, a thought experiment devised by the late British philosopher Philippa Foot. If you have never heard of this problem, let me describe the scenario to you.
Imagine you are standing on a hill and see a speeding train whose driver cannot see around the curve. Five people are tied to the tracks. Their fate seems sealed. As luck would have it, though, you happen to be standing next to a switch. If you throw the switch, the train will be diverted onto another track before reaching the hapless victims.
What do you do? Divert the train, of course.
But there is a problem. On the sidetrack is one person minding his own business. So if you shunt the train, one person will die.
Now what do you do? The problem is a little more difficult. Still, most people say they would throw the switch, because by doing so they have saved five lives at the expense of one. This reflects one way of thinking about ethical problems—the best action is the one that leads to the greatest good for the greatest number. It is a calculation that guides a lot of moral thinking.
But as philosophers are wont to do, they make matters more complicated by changing the conditions. One variation is to imagine that you are on a bridge and the train is headed for the five people. There is no switch this time. But not all is lost. It happens that you are standing next to a very large person, someone so big that there is no doubt that if you were to push them over and in front of the train, the train would stop before striking the five.
Now what would you do? Although the outcome is the same—saving five lives by sacrificing one—most people wouldn’t shove the person over. By literally putting your hands on someone, you shift the decision from a calculation to an emotion, from the deaths being a statistic to a death being a tragedy of your own doing.
While many thought experiments are great for class discussion without any apparent real-life application, the Trolley Problem is at the forefront of today’s technological revolution. Driverless cars, still in the developmental stage but certain to appear in the showroom in the next several years, will need an answer to this thought experiment programmed into them.
Imagine that five children step in front of your driverless car. The car, programmed for safety, swerves to avoid hitting them. But let’s say that in so doing, it has to strike a pedestrian on the sidewalk. Five-for-one. But what if, instead of hitting one person, in order to avoid hitting the children it crashed into a wall, killing you, the occupant?
You can imagine a number of variations. Here are just two: instead of schoolchildren, they are old people returning to the geriatric center; instead of a stranger on the sidewalk, it is your relative.
Similar decisions have already been made. The military has to decide who and how many will die in order to accomplish a mission. Hospital triage procedures are another example, as is establishing protocols for transplants regarding organs in short supply. Police policy regarding active shooter situations is to ignore those who are in need of assistance and go after the shooter instead.
No matter which algorithm Silicon Valley programmers create for driverless cars, it won’t satisfy everyone’s moral sensibility. And that is because this is an ethical dilemma with contradictory answers. It is one thing to sit in front of a computer and come up with a moral decision. It is another to be in the car when the decision is personal.
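To see why no algorithm can escape the dilemma, consider what the simplest utilitarian rule would look like in code. The sketch below is a hypothetical illustration of bare body-counting—not any manufacturer’s actual logic, and the maneuver names and casualty figures are invented for the scenario described above:

```python
# A naive utilitarian decision rule for the swerve dilemma.
# All names and numbers are hypothetical illustrations, not any
# real vehicle's programming.

def choose_maneuver(options):
    """Pick the maneuver with the fewest expected deaths.

    options: dict mapping maneuver name -> expected number of deaths.
    Ties are broken by dictionary order, which is itself a moral
    judgment the programmer has quietly made.
    """
    return min(options, key=options.get)

# The scenario from the text: five children ahead, one pedestrian
# on the sidewalk, or a wall that kills the car's occupant.
scenario = {
    "continue_straight": 5,   # hit the five children
    "swerve_to_sidewalk": 1,  # hit the lone pedestrian
    "swerve_into_wall": 1,    # kill the occupant
}

print(choose_maneuver(scenario))
```

Notice that the sidewalk and the wall both cost one life, so pure counting cannot say whether the bystander or the occupant should die—the tie-breaking rule, invisible to the passenger, is where the real moral choice hides.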
Perhaps the best approach is to allow drivers to choose which of the options they want. Another is to let them turn off the program entirely—in which case the point of having a driverless car is lost.