Daniel Rockmore
Professor of Mathematics, William H. Neukom 1964 Distinguished Professor of Computational Science, Director of the Neukom Institute for Computational Science, Dartmouth College
The Trolley Problem

The history of science is littered with “thought experiments,” a term (“Gedankenexperiment”) popularized by Albert Einstein for an imagined scenario that sharply articulates the crux of some intellectual puzzle and, in so doing, excites deep thinking on the way to a solution or related discovery. Among the most famous are Einstein’s tale of chasing a light beam, which led him to the theory of special relativity, and Erwin Schrödinger’s story of the poor cat, stuck in a fiendishly designed quantum mechanical box, forever half-alive and half-dead, which highlighted the complex interactions between wave mechanics and measurement.

“The Trolley Problem” is another thought experiment, this one from moral philosophy. There are many versions, but here is one: A trolley is rolling down the tracks, unable to brake, and reaches a branch point. To the left, one person is trapped on the tracks; to the right, five people. You can throw a switch that diverts the trolley from the track with the five to the track with the one. Do you? What if we know more about the people on the tracks? Maybe the one is a child and the five are elderly? Maybe the one is a parent and the others are single? How do these different scenarios change things? What matters? What are you valuing, and why?

It’s an interesting thought experiment, but these days it’s more than that. As we increasingly offload our decisions to machines and the software that manages them, developers and engineers will more and more be confronted with having to encode—and thus directly code—important and, potentially, life-and-death decision making into machines. Decision making always comes with a value system, a “utility function,” whereby we choose one thing over another because one pathway reflects a greater value for the outcome. Sometimes the value might seem obvious or trivial: this blender is recommended to you over that one based on the probability, estimated from historical data, that you will purchase it; this pair of shoes is a more likely purchase than another (or perhaps not the most likely, but worth a shot because the shoes are expensive—which gets us to probabilistic calculations and expected returns). This song versus that song, and so on.
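The "expected return" idea can be made concrete in a few lines. This is a minimal sketch, not any real recommender: the items, prices, and purchase probabilities below are invented for illustration. The point is simply that multiplying probability by price already encodes a value judgment about what "best" means.

```python
# Hypothetical catalog: each item has a price and an estimated
# probability (from historical data, in a real system) that the
# shopper will buy it if it is recommended.
items = {
    "blender": {"price": 40.0, "purchase_prob": 0.30},
    "expensive_shoes": {"price": 250.0, "purchase_prob": 0.06},
    "cheap_shoes": {"price": 60.0, "purchase_prob": 0.20},
}

def expected_return(item):
    # The "utility function": expected revenue from recommending this item.
    return item["price"] * item["purchase_prob"]

# Recommend the item with the highest expected return.
best = max(items, key=lambda name: expected_return(items[name]))
print(best)  # → expensive_shoes (250 * 0.06 = 15 beats 60 * 0.20 = 12 and 40 * 0.30 = 12)
```

Note that the less likely, pricier purchase wins here—exactly the "worth a shot because they are kind of expensive" logic. Change the utility function (say, to raw purchase probability) and the recommendation changes with it.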

But sometimes there is more at stake: this news or that news? More generally, this piece of information or that piece of information on a given subject? The values embedded in the program may start shaping your values and, with that, society’s. Those are some pretty high stakes. The trolley problem shows us that the value systems pervading programming can literally be a matter of life and death: Soon we will have driverless trolleys, driverless cars, and driverless trucks. Shit happens, and choices need to be made: the teenager on the bike in the breakdown lane, or the Fortune 500 CEO and his assistant in the stopped car ahead? What does your algorithm do, and why?
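To see how such a value system actually gets coded, here is a deliberately crude sketch of the switch decision. Nothing about it is a real or recommended policy—the cost function and its equal weighting of lives are assumptions, and that is precisely the point: someone has to write them down.

```python
# One possible encoding of the trolley choice: assign each track a
# "cost" via a weight function over the people on it, then divert
# toward the cheaper track. The weight function IS the ethics.

def track_cost(people, weight=lambda person: 1.0):
    # Default weighting: every life counts equally.
    # Swap in a different weight function and the "right" answer changes.
    return sum(weight(p) for p in people)

left = ["one person"]
right = ["person 1", "person 2", "person 3", "person 4", "person 5"]

choice = "left" if track_cost(left) < track_cost(right) else "right"
print(choice)  # → left: under equal weighting, the five are spared
```

Under equal weighting the answer looks obvious; weight by age, by dependents, or by anything else, and the algorithm's verdict—and the values it embodies—shifts accordingly.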

We will build driverless cars, and they will come with a moral compass—literally. The same will be true of our robot companions. They’ll have values and will necessarily be moral machines and ethical automata, whose morals and ethics are engineered by us. “The Trolley Problem” is a Gedankenexperiment for our age, shining a bright light on the complexities of engineering our new world of humans and machines.