(MARC D. HAUSER:) I want to echo one of Lee's comments about John, and say thanks for a slightly different, but related, reason. What I believe John has allowed many of us to do, which is exciting, is to communicate our passion to a broader audience, escaping academia to exchange ideas with interested professionals and others from a broader slice of intellectual life. This not only enriches understanding at a broader level, but also allows for a more interesting dialogue. So thank you, John.
Today, I want to engage you in a game that I hope will bring to life my thinking of the last few years. Here is the game: I want you to turn to your neighbor and pair up into a team—okay, you're a pair now. One of you will be designated the donor in this game and the other person is the receiver. Please choose a role, either donor or receiver. Please pair off, as I really need your data—I'm an experimentalist. Okay, here is the game. It's going to be played once. I'm giving each donor—play along with me—ten euros. The game starts in the following way: the donor is going to turn to the receiver and offer some proportion of that ten euros—one, two, up to ten. The receiver will respond by either accepting the offer or rejecting it. If the receiver accepts, he or she gets what was offered and the donor gets what's left; if the receiver rejects, nobody gets any money. So now, donor, make an offer to the receiver, and receiver, respond.
Okay. Let me collect some of the data by asking you to raise your hands in the following way: of the donors, how many offered between one and three euros? Raise your hands. How many offered between four and six euros? How many offered between seven and ten? Only a few very generous people, and most of you offered in the four-to-six range. Now, how many of the receivers rejected their offers? Keep your hands up—of those with your hands up, how many of you got offers of one to three euros? Small offers? How much were you offered? Uno. Okay, good. Now what I want you to do with me is think through the logic of the game as an economist would. If you were trying to maximize your returns, you donors should have made the lowest offers possible, and you should have expected the receivers to accept any offer, because one euro is certainly better than zero euros. You didn't have anything to begin with, so one is better than nothing; two is better than nothing; and so is three.
But it turns out that when this game is played, in many, many different countries, the typical offer is exactly in the range seen here—about four, five, or six euros—much more than if you were trying to maximize your own returns. And yet we seem to make this calculation very quickly, spontaneously, almost without thinking. That's example number one. Keep it in mind.
Here's example number two. I want you to imagine that you are watching a train moving down a track, out of control. It has lost its brakes. If the train continues, it will hit and kill five people. But you are standing next to the train tracks, and you can flip a switch and turn that train onto a sidetrack, where there's one person. Now the train will kill that one person. Here's the question: is it permissible—morally permissible—for you to flip the switch, causing the train to kill one but save five? If you think yes, raise your hand. If you think no, raise your hand. Okay, most of you think it is permissible.
Now, second example: here comes that train again; it's going to kill the five if it keeps going. You are standing next to a very heavy, fat person, and you can throw him onto the tracks, killing him, but stopping the train before it reaches the five. Is it morally permissible to throw the fat person? Yes? We've lost half of you! Or more. Okay, what happened? Why did so many of you switch from a permissible to a forbidden judgment?
Here is the idea that I want to give you tonight, in the next few minutes. There is a long history—a very old tradition—of thinking about the sources of our moral judgments. Where do they come from? Many moral philosophers and legal scholars think that the way we deliver a moral judgment, like you just did, comes from reasoning. It comes from thinking about principles, perhaps utilitarian ones (saving more is better than saving fewer). You work through the principles in a conscious, reasoned, rational way. This was certainly the view that someone like Kant favored: you deliberate your way to your moral judgments. Diametrically opposed to that view is one that dates back at least to Hume: when we give a moral judgment, we do so based on our emotions. It just feels wrong, or it feels right, to do something, and that's why we do it—that's why we say it's morally right or morally wrong.
What I want to argue tonight is that both of these views, which have dominated the entire field of moral philosophy, are wrong, at least in one particular way. What you just did tonight is an example of why. You delivered those moral judgments quickly, probably without reasoning, and without consciously thinking about principles. And as I have found by asking literally thousands of people—on the Internet, and in small-scale societies such as the Maya and the hunter-gatherers of Africa—people deliver exactly the same judgments that you did tonight, but are incapable of justifying why. Typically they say it's a hunch or a gut feeling. Let me illustrate by telling you about my father's response to these cases. He was a distinguished physicist—but I am not picking on the physicists. When I first presented him with these moral dilemmas, the ones you just answered, he said, yes, you can flip the switch, turning the train onto the sidetrack; he said, yes, you can push the fat man onto the tracks. I said, but Dad, really? He answered, "Of course, it's still five versus one." He was following good utilitarian guidelines.
And now I give him case number three.
You are a doctor in a hospital and there are five people in critical care. Each needs an organ to survive. The nurse comes to the doctor and says, doctor, a man has just walked into the hospital, completely healthy, coming in for a visit. We can take his organs and save the five. Can you do that, Dad? He immediately replies, "No, you can't just kill somebody!" I then say, "But you killed the fat man five seconds ago." He volleys back, "Okay, you can't kill the fat man." "But what about the switch?" I say. Defeated, he replies, "Okay, not the switch either." And the whole thing unravels, because there is no consciously accessible set of principles that people can recall and use to justify what's going on. And it's not based on emotion. It's based on a calculus that the mind has—one that evolved to solve particular kinds of moral dilemmas. And it's not learned, either; it's in place early in development.
If I have been sufficiently clear thus far, you may have already figured out where I am going, and the connections I wish to make with another discipline. The core of my argument for moral judgment derives from an argument that the linguist Noam Chomsky developed almost 50 years ago concerning the nature of language, its representation in the mind, and its normal functioning in every human. The idea, in a nutshell, is that our moral sense works very much the way language works. There is a universal set of moral principles that allows the establishment of a set of possible moral systems. In this sense, perhaps this provides some convergence with what Lee said just a few seconds ago. In the same way that you might want to ask about possible universes, I want to ask about possible moral systems—the mind is constraining the range of possible variation.
So the deep aspect of Chomsky's thinking about language, which I think is directly translatable into the way we think about morality, and the way we do the science, is to imagine that humans are equipped with—born with—a set of universal principles. What culture can do is change things locally, like setting a parameter: there are switches, and once you flip one, things can change.
Let me try to give you a concrete example from some work that a student of mine recently did. There's a population of people in Panama, Central America, called the Kuna Indians. One part of their range is very remote, and this is where we worked. They live in a quite simple type of society, with small-scale agriculture and fishing. We went there recently and gave them moral dilemmas exactly like the ones you just answered. They weren't about trolleys; they were about wild animals. So in one example, there are crocodiles coming to eat five people in the river; you're in a canoe, and you can move those crocodiles off to where they will kill one. Is it permissible? The Kuna said it was—virtually every single person we asked. Here's the second case: you can throw somebody into the river so that the crocodiles will eat him, saving the five. Is that permissible? No. They're showing the same psychology: an intended harm—using someone as a means to a greater good—is less permissible than a foreseen consequence that causes the same harm. So in the switch case of the train, you foresee the consequence, but you are not intending the harm as a means to the greater good. The Kuna are sensitive to this distinction, but here's where the cultural aspects move in to make this case more interesting: the Kuna are much more willing than we are in our society to say that it's permissible to throw the fat man in front of the crocodiles. They have, as an unstated part of their society, high levels of infanticide; killing is much more common. And that's the way in which culture can potentially change the dynamics of how the judgment gets made. In other words, we will see a universal principle such as the means-versus-side-effect distinction, but culture can change how much more impermissible the means-based harm is when contrasted with the foreseen side effect.
At this point this is still looking relatively abstract and theoretical, and what I'm interested in is how science can fuse with and energize moral philosophy to create some powerful new ideas and findings at the interface. This is not to say that science will take over philosophy. If this new enterprise works at all, it will be through a deep collaboration, working to find out the origins of our moral judgments, and how they figure in our ethical decisions and moral institutions. Let me end with a few more cases to make this all a bit more concrete.
Consider a disorder that people are acutely aware of in many societies: psychopathy. Psychopaths are known for killing, often with no regret: they don't feel guilt, they don't feel shame, and they don't feel empathy. People have described that as lacking any moral sense. I think that's completely the wrong interpretation. Psychopathy is a case where moral knowledge is completely intact: psychopaths would judge the cases I just gave you like everybody else in the room. What makes them moral monsters is that they lack the kinds of emotions that the rest of us have to prevent us from doing horrible things. They don't have braking emotions. On this view, emotions don't dictate our moral judgments, but they do guide our moral behavior—how we act. We are now engaged in a collaborative project to actually test psychopaths, to see whether that is in fact the case. It is too early to tell, but stay tuned.
The second part of the story is to use, as John mentioned a few minutes ago, some of the modern techniques in the neurosciences where you can image the brain, attempting to understand which parts of the brain are active, how they are engaged when we come to our moral judgments, and how they resolve conflict.
To sum up, we're in an extremely exciting phase, in which a set of questions that were forever the province of moral philosophy and law are now coming directly into contact with the sciences. This is exciting because the two areas are working together, and it may have direct implications for the law and for the extent to which formal institutions like law and religion penetrate our evolved moral sense.
MARC D. HAUSER, an evolutionary psychologist and biologist, is Harvard College Professor of Psychology, Biological Anthropology, and Organismic & Evolutionary Biology, and Director of the Cognitive Evolution Laboratory. He is the author of The Evolution of Communication, Wild Minds, and the recently published Moral Minds: How Nature Designed Our Universal Sense of Right and Wrong.