Are You Smarter Than a Self-Driving Car?

Our machines are only as rational (not empathetic) as we design them to be.

Posted Mar 22, 2016

Back in 1967, British philosopher Philippa Foot came up with "the trolley problem." Philosophers and others have debated this moral quandary ever since. Phrased most succinctly, the trolley problem asks:

Should the driver of a runaway tram sure to hit and kill five workmen redirect the tram onto another track so that it kills only one workman?

Source: James Lin/FreeImages

Numerous permutations of that ostensibly simple problem have been put forth. Now there's a new book gathering many of those arguments: The Trolley Problem Mysteries by F. M. Kamm, who teaches philosophy and public policy at Harvard, edited by Eric Rakowski, a law professor at the University of California, Berkeley.

IS YOUR CAR SMART ENOUGH?

What makes this old philosophical question especially timely right this minute is that self-driving cars may have to be engineered with algorithms that make precisely those sorts of life-and-death decisions.

In the excellent Los Angeles Times op-ed "Will your driverless car kill you so others may live?," we learn, for example, that there may come a time when your car makes a decision you don't like. Quoting from the op-ed:

It's 2025. You and your daughter are riding in a driverless car.... The autonomous vehicle rounds a corner and detects a crosswalk full of children. It brakes, but your lane is unexpectedly full of sand from a recent rock slide. It can't get traction. Your car does some calculations: If it continues braking, there's a 90% chance that it will kill at least three children. Should it save them by steering you and your daughter off the cliff?

... Driverless cars will be programmed to avoid collisions with pedestrians and other vehicles. They will also be programmed to protect the safety of their passengers. What happens in an emergency when these two aims come into conflict?

Do you find it scary that an engineer somewhere will be programming your car to decide whether or not to kill you?
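The calculation the op-ed imagines can be sketched in a few lines of code. This is only an illustration of a purely utilitarian rule, not any real vehicle's software; the option names, probabilities, and casualty counts are hypothetical, borrowed from the scenario quoted above.

```python
# A minimal sketch of an expected-harm comparison, assuming a purely
# utilitarian rule: pick whichever action minimizes expected deaths.
# All options and numbers are hypothetical, taken from the op-ed's scenario.

def expected_deaths(option):
    """Sum probability-weighted death tolls for one candidate action."""
    return sum(prob * deaths for prob, deaths in option["outcomes"])

def least_harm(options):
    """Choose the action with the lowest expected death toll."""
    return min(options, key=expected_deaths)

options = [
    # Keep braking: 90% chance of killing three children, 10% chance of none.
    {"name": "brake", "outcomes": [(0.9, 3), (0.1, 0)]},
    # Swerve off the cliff: the two passengers almost certainly die.
    {"name": "swerve", "outcomes": [(1.0, 2)]},
]

choice = least_harm(options)
print(choice["name"])  # prints "swerve": 2.0 expected deaths beats 2.7
```

Notice that the answer flips the moment the designer weights passenger safety more heavily than pedestrian safety, which is exactly the conflict between the two programming aims the op-ed describes.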

IS CREATIVITY THE ANSWER?

We tried out the trolley problem on our then five-year-old granddaughter. As might be expected, her response was emotional, and also creative. She kept changing the problem in small ways so that no one would get hurt. Refusing to accept the premise as given may suggest a way to program our machines: to search for a third choice that is better than the first two, if one exists.

Another question: Can (should) a doctor cut up a healthy person to harvest his organs so that five ill people may live? Of course not. The Trolley Problem Mysteries goes into some depth (but with surprising clarity for such complex concepts) to differentiate between killing and letting die.

Then there's the situation in which you have to decide whether to give up your life preserver so that another may live, and every possible permutation of that. Beware, though: as soon as you come to a snap intuitive decision, more detail is added to the question that may leave you scratching your head in perplexity.

If you enjoy thinking about and discussing moral quandaries with your friends, get a copy of The Trolley Problem Mysteries. And be sure that whoever designs your future self-driving car is wise enough to program it rationally (or for your personal benefit, which may not be the same thing).

Copyright (c) 2016 by Susan K. Perry, author of Kylie’s Heel