
THE NEW SCIENCE OF MORALITY
An Edge Conference

JOSHUA D. GREENE

[JOSHUA D. GREENE:] First, thanks so much to The Edge Foundation for bringing us together for this wonderful event. I want to talk about an issue that came up, quite conveniently, in a discussion of John Haidt's wonderful presentation. This is the issue of the "is" and the "ought," of descriptive moral psychology and normative prescription, a recommendation about how we should live. 

Now, it's true that, as scientists, our basic job is to describe the world as it is. But I don't think that that's the only thing that matters. In fact, I think the reason why we're here, the reason why we think this is such an exciting topic, is not that we think that the new moral psychology is going to cure cancer. Rather, we think that understanding this aspect of human nature is going to perhaps change the way we think and change the way we respond to important problems and issues in the real world.  If all we were going to do is just describe how people think and never do anything with it, never use our knowledge to change the way we relate to our problems, then I don't think there would be much of a payoff. I think that applying our scientific knowledge to real problems is the payoff.

So, I come to this field both as a philosopher and as a scientist, and so, my real, core interest is in this relationship between the "is" of moral psychology, the "is" of science, and the "ought" of morality. What I'd like to do is present an alternative metaphor [to the one that Haidt offered], an alternative analogy, to the big picture of moral psychology. 

So, Jon presented the idea of different taste receptors, corresponding to different moral interests. I think that it's not just about different tastes, or just about different intuitions. I think that there's a fundamental split between intuitions and reasoning, or between intuitions and more controlled processes. And I think Jon agrees with that.

From a descriptive point of view, I think Jon is absolutely right, and that he and others have done a wonderful job of opening our eyes to all of the morality that's out there that we've been missing. Descriptively, I think Jon couldn't be any more right. 

But normatively, I think that there is something special about moral reasoning. While it may be WEIRD [in the sense outlined by Henrich and colleagues], it may be what we need moving forward. I think it's not just an accident, or an obsession, that people, as the world started to become WEIRDer, began to take on people like Bentham and Kant as their leading lights. It makes sense that, as the world became WEIRDer, we started to need more reasoning, in a way that we didn't really need it before, when we were living in small communities. The problems that we face are going to require us to draw on parts of our psychology that we don't exercise very naturally, but that philosophers and scientists, I think, are at least more used to exercising, although not always in the service of progress.

So, the analogy:  Like many of you, I have a camera. I'm not a real shutterbug, but I like my camera because it actually makes my life easier. My camera has a bunch of little automatic settings on it. That is, if you want to take a picture of someone in indoor lighting from about three feet away, you just put it in the portrait setting, and it configures all of the settings of the camera to do that pretty well. If you want to take a picture of a mountain in broad daylight from far away, you put it in the landscape setting, the action setting for sports, the night setting for night shooting. You get the idea. And it has these little preset configurations that work very well for the kinds of standard photographic situations that the manufacturer of the camera can anticipate. 

But fortunately, the camera doesn't just have these point-and-shoot settings. It also has a manual mode. You can put it in manual mode, and you can adjust the F-stop and everything else yourself. And that's what you want to use if your goal, or your purposes, or your situation, is not the kind of thing that the manufacturer of the camera could anticipate. If you want to do something funky where you've got your subject off to the side, and you want the person out of focus, in low light, you have to put it in manual mode. You can't use one of the automatic settings to do something creative or different or funky. 

It's a great strategy to have both automatic settings and a manual mode because they allow you to navigate the ubiquitous design tradeoff between efficiency and flexibility. The automatic settings give you efficiency. Point and shoot. And most of the time, it's going to work pretty well. The manual mode is not very efficient. You have to sit there and fiddle with it yourself. You have to know what you're doing. You can make mistakes. But, in principle, you can do anything with it. It allows you to tackle new kinds of problems.
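To make the analogy concrete, here is a minimal sketch in Python of the same design strategy. The preset values and parameter names are invented for illustration; they are not taken from any real camera.

```python
# A toy illustration of the camera analogy: preset "automatic settings"
# buy efficiency at the cost of flexibility; "manual mode" buys
# flexibility at the cost of efficiency. All values here are invented.

PRESETS = {
    "portrait":  {"aperture": 2.8,  "shutter": 1 / 60,   "iso": 400},
    "landscape": {"aperture": 11.0, "shutter": 1 / 250,  "iso": 100},
    "action":    {"aperture": 5.6,  "shutter": 1 / 1000, "iso": 800},
    "night":     {"aperture": 2.0,  "shutter": 1 / 4,    "iso": 1600},
}

def configure(mode, **overrides):
    """Return camera settings for a preset mode, or build them by hand."""
    if mode in PRESETS:
        # Automatic setting: point and shoot, no expertise needed,
        # but only for the situations the manufacturer anticipated.
        return dict(PRESETS[mode])
    if mode == "manual":
        # Manual mode: every parameter must be chosen; slower and
        # error-prone, but any combination is possible.
        required = {"aperture", "shutter", "iso"}
        missing = required - overrides.keys()
        if missing:
            raise ValueError(f"manual mode needs {sorted(missing)}")
        return dict(overrides)
    raise ValueError(f"unknown mode: {mode!r}")

print(configure("portrait"))
print(configure("manual", aperture=1.4, shutter=1 / 30, iso=3200))
```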

I think the human brain overall ... not just when it comes to morality, but overall ... uses the same design strategy as the camera that I described. We have automatic settings, which, as Jon Haidt suggested, you can think of as being like taste receptors that automatically make you say, "Ooh, I like that," or "Ooh, I don't like that." 

But that's not all there is. We also have the ability to think consciously, deliberately, and flexibly about new problems. That's the brain's manual mode. And I think that certain kinds of moral thinking and moral reasoning, while they do have roots in some of these taste receptors, really owe much more to the elaboration of those taste receptors through a kind of reasoning, through a kind of abstraction. It's not just, "Pick your taste receptor."

I think, really, the biggest question is, are we going to rely on our intuitions, on our instincts, on our taste receptors?  Or are we going to do something else?  Now, some people might deny that there really is a "something else" that we can do. I disagree. I think that we can.

So, let me first give you a couple of examples of what I call "dual-process" thinking more generally. When I say dual-process, I mean the idea of having these automatic settings and having the manual mode.

And I should also say, with apologies to Liz and other people who emphasize this, this is an oversimplification. The brain is not clearly, simply divided into automatic and controlled. But I think that, if you want to understand the world of the brain, this is the continent-level view. Before we can understand cities and towns and nations, we have to know what continent we're on. And I think those are the two big continents, the automatic and the controlled.

So, an example of automatic and controlled in the non-moral domain comes from appetite and self-control. When it comes to food, we all like things that are fatty and tasty and sweet, and a lot of us, at least, think that we might be inclined to eat too many of these things. 

So, there's a nice study that was done by Baba Shiv, and Alexander ... I hope I'm pronouncing this correctly ... Fedurickin?  (Laughter). [Fedorikhin]  I'm probably not pronouncing it correctly. Very simple, elegant study. They had people come in for what they thought was a memory experiment. Some of them were told to remember a short little number. Some of them were told to remember a longer number. The longer number imposes what we call a "cognitive load."  Keeps the manual mode busy.

And then they said, "Go down the hall to the other room, and by the way, there are snacks for you."  And some of these snacks are yummy chocolate cake ... they didn't tell them this up front ... others are fruit salad. "Pick one snack and then go on your way."  And, what they predicted, and what they found, is that, when people had the higher cognitive load, when the manual mode was kept busy, they were more likely to choose the chocolate cake, especially if they described themselves as being on a diet or looking to watch their weight.

And the idea is that, we have an automatic tendency that says, "Hey!  Yummy, fat, sweet things?  Go for it!"  And then another part of our brain that can say, "Well, that might be yummy, but in the long run, you'd rather be slim."  And so, there's this tension between the automatic impulse and the controlled impulse, which is trying to achieve some kind of larger goal for yourself.

I think we see the same kind of thing in moral psychology. And here I'll turn to what has recently become the sort of fruit fly of moral psychology. This is a classic example of going narrow and deep, rather than broad, but I think it's illuminating. And I know a lot of you are familiar with this, but for the uninitiated, I'll go through it. This is the Trolley Problem that philosophers have been arguing about for several decades now, and that in the last 10 years has become, as I've said, a kind of focal point for testing ideas in moral psychology.

So, the Trolley Problem, at least one version of it, goes like this:  You've got a trolley that's headed towards these five people, and they're going to die if nothing is done. But you can hit a switch so that the trolley will turn away from the five and onto one person on another side track. And the question is, is it okay to turn the trolley away from the five and onto the one?  And here, most people, about 90 percent of people, say that it is.

Next case, the trolley is headed towards five people once again. You're on a footbridge, over the tracks, in between the trolley and the five people, and the only way to save them, we will stipulate ... somewhat unrealistic ... is to push this large person ... you can imagine, maybe a person wearing a giant backpack ... off of the bridge and onto the tracks. He'll be crushed by the train, but using this person as a trolley-stopper, you can save the other five people. Here, most people say that this is not okay.

Now, there are a lot of things that are unrealistic about this case. It may not tell you everything you'd want to know about moral psychology. But there is a really interesting question here, which is, why do people quite reliably say that it's okay to trade one life for five in the first case, where you're turning the trolley away from the five and onto the one, but not okay to save five lives by pushing someone in front of the trolley, even if you assume that this is all going to work and that there are no sort of logistical problems with actually using someone as a trolley-stopper?

So, I and other people have looked at this, almost every way possible now. A lot of different ways. With brain imaging, by looking at how patients with various kinds of brain damage respond to this, with psychophysiology, with various kinds of behavioral manipulations. 

And I think ... not everyone here agrees with this ... that the results from these studies clearly support this kind of dual-process view, where the idea is that there's an emotional response that makes you say, "No, no, no, don't push the guy off the footbridge."  But then we have this manual mode kind of response that says, "Hey, you can save five lives by doing this. Doesn't this make more sense?"  And in a case like the footbridge case, these two things conflict.

What's the evidence for this?  As I said, there's a lot of different evidence. I'll just take what I think is probably the strongest piece, which is based on some work that Marc has done, and this has been replicated by other groups. If you look at patients who have emotion-related brain damage ... that is, damage to a part of the brain called the ventromedial prefrontal cortex ... they are four to five times more likely to say things like, "Sure, go ahead and push the guy off the footbridge."

And the idea is that, if you don't have an emotional response that's making you say, "No, no, no, don't do this, this feels wrong," then instead, you're going to default to manual mode. You're going to say, "Well, five lives versus one. That sounds like a good deal."  And that's, indeed, what these patients do.
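For readers who want the structure of the dual-process claim spelled out, here is a deliberately crude sketch in Python. It is not Greene's actual model, and nothing in it is fitted to data; the weights, signal values, and decision rule are hypothetical. The point is only to show how an automatic aversive "alarm" and a controlled cost-benefit calculation can conflict, how removing the alarm (the ventromedial prefrontal damage case) changes the verdict, and how a cognitive load weakens the controlled side.

```python
# A toy dual-process sketch. All numbers are hypothetical; only the
# structure matters: an automatic "alarm" response competes with a
# controlled cost-benefit ("manual mode") calculation.

def moral_judgment(lives_saved, lives_lost, personal_force,
                   vmpfc_intact=True, cognitive_load=False):
    # Automatic setting: a fast aversive reaction, strongest for
    # up-close-and-personal harm, absent if the emotional circuitry
    # (ventromedial prefrontal cortex) is damaged.
    alarm = (5.0 if personal_force else 0.5) if vmpfc_intact else 0.0

    # Manual mode: explicit cost-benefit reasoning, weakened when
    # working memory is occupied by a cognitive load.
    reasoning = (lives_saved - lives_lost) * (0.3 if cognitive_load else 1.0)

    # The action is endorsed only if the calculated benefit outweighs
    # the automatic aversion.
    return "acceptable" if reasoning > alarm else "unacceptable"

# Switch case: impersonal harm, weak alarm, most people approve.
print(moral_judgment(5, 1, personal_force=False))                     # acceptable
# Footbridge case: personal force triggers a strong alarm, most reject.
print(moral_judgment(5, 1, personal_force=True))                      # unacceptable
# Footbridge with vmPFC damage: no alarm, manual mode decides.
print(moral_judgment(5, 1, personal_force=True, vmpfc_intact=False))  # acceptable
```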

Now, this is a short talk. I could go on at length ... I usually go on for an hour just talking about how trolley dilemmas and the research done on them support this dual-process picture. If you want to ask me more about that, I'm happy to talk about it when we have a discussion, or later.

So, this raises a more general question. Which aspects of our psychology should we trust, whether we're talking about the trolley problem or talking about some moral dilemma or problem in the real world?  Should we be relying on our automatic settings, or should we be relying on manual mode?

And, what I like about the camera analogy is that it points towards an answer to that question. The answer is not, "Automatic settings are good, manual mode is bad."  The answer is not, "Manual mode is good, automatic settings are bad" ... which is what a lot of people think I think, but it's not what I think. They're good for different things. 

Automatic settings are good for the kinds of situations where we have the right kind of training ... or, I should say, where the instincts, the intuitions, have the right kind of training. That is, the kinds of situations you can size up efficiently and respond to appropriately. Where automatic settings are bad, or likely to be bad, is when we're dealing with a fundamentally new problem, one that we don't have the right kind of training for.

Okay. So, how do automatic settings get smart?  How do they get the right kind of intelligence that's going to allow them to handle a situation well?  There are three ways that this can happen.

First, an instinct can be what we think of paradigmatically as an instinct. That is, as something that's biologically entrained. So, if you have an animal that has an innate fear of its predators, that would be a case of learning happening on a biological, evolutionary scale. You have this instinct, as one of these creatures, because the other members of your species that didn't have it died. And so, you have their indirect genetic inheritance. That's one way that an automatic response can be smart.

And that's not the only way. Another way is through cultural experience. Let me ask you, how many of you have had a run-in with the Ku Klux Klan, or with Nazis?  I don't see any hands going up. But I bet that, if I showed you pictures of swastikas, or men in pointy white hoods, a lot of you would have a flash of negative emotional response to these things. It doesn't come from biology. Biology doesn't know from Ku Klux Klan. And it doesn't come from individual learning. You've never had personal experiences with these groups. It comes from cultural learning.  But you can have instincts that are trained culturally, independent of your individual experience.

And finally, you could have individual experience, as when a child learns not to touch the hot stove. So, biology, culture and individual learning are all ways that our automatic settings can be trained up and can become adaptive.

Now, that covers a lot. But it doesn't cover everything. And I think there are two ways in which we are now facing problems that our instincts, whether they're biologically, culturally, or individually trained, are not prepared for.

First of all, a lot of our problems, moral dilemmas, are the result of modern technology. For example, we have the ability to bomb people on the other side of the world. Or we have the ability to help people on the other side of the world. We have the ability to safely terminate the life of a fetus. We have the ability to do a lot of things that our ancestors were never able to do, and that our cultures may not have had a lot of trial-and-error experience with. How many cultures have had trial-and-error experience with saving the world from global warming?  None. Because we're on trial number one, and we're not even through it yet. Right?  So, that's one place.

Another place is inter-cultural contact. Our psychology, I think, is primarily designed for (A) getting along with people within our own group, and (B) dealing, either nicely or nastily, with members of other groups. And so, the modern world is quite unusual, from an anthropological point of view, in terms of having people with different moral intuitions rubbing up against each other.

And so, as a result of these two main things, technology and cultural interchange, we've got problems that our automatic settings, I think, are very unlikely to be able to handle. And I think this is why we need manual mode. Descriptively, careful, controlled moral reasoning may not have been very important ... if you took a catalog of all the moral thinking that's ever gone on, either recently or across human history, moral reasoning would be something like what Jon said, one percent. But moving forward, in dealing with the unique modern problems that we face, I think moral reasoning is likely to be very important.

And I think that we're too quick to use our point-and-shoot morality to deal with complicated problems that it wasn't designed, in any sense, to handle.

So, let me return for a moment. So, when do our intuitions do well and when do they not do well?  First, let me go back to our fruit fly, the trolley problem. So, once again, people say that it's okay to hit the switch. People say that it's not okay to push the guy off the footbridge. Why do we say that?  What is it that we're responding to?

Well, one of the things that it seems like we're responding to is this:  Merely the difference between harming somebody in a physically direct way, versus not harming somebody in a physically direct way. So, if you give people the footbridge case, "Is it okay to push the guy off the footbridge?", at least in one version that I did, about 30 percent of people will say that it's okay. Not most people. 

If you change it, ask a different group of people, "Suppose that you can drop this guy through a trapdoor in the footbridge, so that he'll just land on the tracks and get run over by the train, and that way you can save the five," the number doubles, to about 60 percent. You actually now get a narrow majority of people saying that it's okay. Now, this is not the only factor at work here, but it's probably the single biggest factor. This factor is, what I call "personal force."  The difference between pushing directly, or doing something in a more mechanically mediated kind of way.

Now, we have this intuition. We don't even realize that this is what's affecting our judgment. But we can step back and reason and say, "Hmm… Is that why I said that there's a difference between these two cases?  Does that really make sense?"  You know, you might think, "Gosh, I wouldn't want to associate with someone who's willing to push somebody off of a footbridge," and that may be true. But consider this. Suppose someone called you from a footbridge and said, "Hey, here's this situation that's about to happen. Should I do this or not?"  You would never say, "Well, it depends. Will you be using your hands, or will you be using a switch, to land the guy on the track?"  It's very hard to say that the presence or absence of "personal force" is something that matters morally, but it is something that our taste receptors are sensitive to, right?  And so, I think we can do better than our taste receptors, and it's not really just one taste receptor versus another.
So, let me give you some more real-world examples where I think this matters.

About 30 years ago, Peter Singer posed to the philosophical world what is, I think, the most important moral problem we face. And he dramatized it with the following pair of cases:

You're walking by a shallow pond, and there's a child who's drowning there. And you could wade in and save this child easily, but if you do this, you're going to ruin your new Italian suit. (In something I wrote, I said, "It cost you $500."  And my colleague Dan Gilbert says, "They cost a lot more than that, Josh."  (Laughter). Two-thousand-dollar suit.)  Now, you say, is it okay to let the child drown?  Most of us would say, you're a moral monster if you say, "I'm going to let this child drown because I'm worried about my Armani suit."

Now, next case:  There are children on the other side of the world who are desperately in need of food and medicine, and by making a donation smaller than $2,000, you can probably save at least one of their lives. And you say, "Well, I'd like to save them, but I have my eye on this Armani suit, and so I think I'm going to buy the Armani suit instead of saving them."  There, we say, well, you ain't no saint, but we don't think that you're a moral monster for choosing to spend your money on luxury goods, instead of saving other people's lives.

I think that this may be a case of emotional under-reacting. And it makes sense from an evolutionary perspective. That is, we have emotional responses that are going to tug at our heartstrings when someone's right in front of us. But our heartstrings don't reach all the way to Africa. And so, it just may be a shortcoming of our cognitive design that we feel the pull of people who are in danger right in front of us, at least more than we otherwise would, but not people on the other side of the world.

Another example is what we're doing to the environment. If the environmental damage that we're doing ... not just to the plants and atmosphere, but to our great-great-grandchildren, who we hope are going to live in the world ... if that felt like an act of violence, we would probably be responding to our environmental problems very differently.

I think there are also cases where we're likely to be emotionally overreacting. Physician-assisted suicide, I think, is a nice example of this. And here I'm giving my opinion; I'm not stating this as a scientific fact, in any of these cases. If somebody wants to die because they're in terrible pain and they have no good prospects for living, the American Medical Association says it's okay to pull the plug. It's okay to allow them to die, to withhold or withdraw lifesaving support, but you're not allowed to give them something, even if they want it, that would actually kill them. Likewise, you can give them something ... morphine, let's say ... to keep them comfortable, even if you know that it's going to kill them. But your intention has to be to keep them comfortable.

I think that this is just a kind of moral squeamishness, and some cultures have come to this conclusion.  It feels like a horrible act of violence to give somebody something that's going to kill them, and so, we say, this is a violation of the "sanctity of human life."  I think this is a case where our emotions, or at least some people's emotions, overreact.

So, is there an alternative to point-and-shoot morality?  I think that there is. And this is a much longer discussion, and I see I only have a few minutes left, so I'm only going to give you sort of a bare taste of this. But I think that, what a lot of moral philosophy has done ... and I think what we do as lay philosophers, making our cases out in the wider world ... is we use our manual mode, we use our reasoning, to rationalize and justify our automatic settings. 

And I think that, actually, this is the fundamental purpose of the concept of rights. That, when we talk about rights ... "a fetus has a right to life," "a woman has a right to choose," Iran says they're not going to give up any of their "nuclear rights," Israel says, "We have a right to defend ourselves" ... I think that rights are actually just a cognitive, manual mode front for our automatic settings. And that they have no real independent reality. This is obviously a controversial claim.

And I think the Kantian tradition [which gives primacy to rights] actually is manual mode. It's reasoning. But it's reasoning in the service of rationalizing and justifying those intuitions, as Jonathan Haidt has argued ... although Jon has argued, more broadly, that this is generally what goes on in moral discourse and moral philosophy.

I do think that there's a way out, and I think that it was people like Jeremy Bentham and, more recently, Peter Singer, who've shown the way forward. Bentham was a geek. Bentham was, in many ways, emotionally and socially tone-deaf. But we have a real engineering problem here.

So, let me make an analogy with physics. If you want to get around the supermarket, you don't need an Einstein to show you how to navigate the aisles. Your physical intuitions will work pretty well. If you want to send a rocket to the moon, then you'd better put your physical instincts aside and do some geeky math.

And I think that what Bentham was doing was the geeky math of modern morality. He was thinking, by natural inclination, and in a way that's been very important to the modern world: "What here really matters?  What can we really justify, and what seems to be just taste receptors that may or may not be firing in a way that, when we understand what's going on, will still make sense to us?"

And so, I'm sorry to leave you with such a vague prescription, but I'm out of time, and it's a big and complicated topic. But I think that geeky, manual mode thinking is not to be underestimated. Because only a small part of the world may be WEIRD, in the Joe Henrich and Jon Haidt sense. But the world is getting WEIRDer and WEIRDer. We're dealing with cultures coming together with very different taste receptors, very different intuitions. We're dealing with moral problems that are created by modern technology, that we have no reliable, instinctive way of dealing with. The way I like to put it is that it would be a kind of cognitive miracle if our instincts were able to handle these problems.

And what I like about this idea is that — and this is what allows us to cross the is/ought divide — no matter what you think your standard for good or bad is, it's still a cognitive miracle if that standard is going to be built into those taste receptors. Because those taste receptors, whatever you think they ought to know, couldn't possibly know it, because they don't have the biological, cultural, or individual experience to get things right in a point-and-shoot kind of way.

So I would say, in closing, that we shouldn't just be relying on our moral tastes. There's a whole continent that may not be so important descriptively, but that, I think, moving forward, is going to be very important normatively. It's important to understand it and not to discount it. And a better future may lie in a kind of geeky, detached, non-intuitive moral thinking that no one finds particularly comfortable, but that we're all capable of doing, regardless of where we come from.


Joshua D. Greene

There is no topic more fascinating or important than morality. From hot-button political issues to the he-said-she-said of office gossip, morality is on everyone's mind. Cultural conservatives warn of imminent moral decay, while liberals and secularists fear an emerging "Endarkenment," brought on by the right's moral zealotry. Every major political decision — Should we go to war? Should we act to preserve the environment? — is also a moral decision, and the choices we make will determine whether our species will continue to thrive, or be yet another ephemeral dot in evolution's Petri dish.

We and our brains evolved in small, culturally homogeneous communities, each with its own moral perspective. The modern world, of course, is full of competing moral perspectives, often violently so. Our biggest social problems — war, terrorism, the destruction of the environment, etc. — arise from our unwitting tendency to apply paleolithic moral thinking (also known as "common sense") to the complex problems of modern life. Our brains trick us into thinking that we have the Moral Truth on our side when in fact we don't, and blind us to important truths that our brains were not designed to appreciate. Our brains prevent us from seeing the world from alternative moral perspectives, and make us reluctant to even try. When making important policy decisions, we rely on gut feelings that are smart, but not smart enough.

That's the bad news. The good news is that parts of the human brain are highly flexible, and that by depending more on these cognitive systems, we can adapt our moral thinking to the modern world. But to do this we must put aside common sense and think in ways that strike most people as very unnatural.

JOSHUA D. GREENE is a cognitive neuroscientist and a philosopher. He received his bachelor's degree in philosophy from Harvard (1997) and his Ph.D. from Princeton (2002). In 2006 he joined the faculty of Harvard University's Department of Psychology as an assistant professor. His primary research interest is the psychological and neuroscientific study of morality, focusing on the interplay between emotional and "cognitive" processes in moral decision making. His broader interests cluster around the intersection of philosophy, psychology, and neuroscience. He is currently writing a book about the philosophical implications of our emerging scientific understanding of morality.

Links:

Joshua Greene's Homepage
Joshua Greene's CV
Harvard's Moral Cognition Lab

Articles & Press:

From neural 'is' to moral 'ought': what are the moral implications of neuroscientific moral psychology?, in Nature Neuroscience
The Secret Joke of Kant's Soul, in Moral Psychology
For the law, neuroscience changes nothing and everything, by Joshua Greene and Jonathan Cohen, The Royal Society
How (and where) does moral judgment work?, by Joshua Greene and Jonathan Haidt, in Trends in Cognitive Sciences
Patterns of neural activity associated with honest and dishonest moral decisions, by Joshua D. Greene and Joseph M. Paxton, in PNAS
Pushing moral buttons: The interaction between personal force and intention in moral judgment, by Joshua D. Greene et al., in Cognition
The Neural Bases of Cognitive Conflict and Control in Moral Judgment, by Joshua D. Greene et al., in Neuron

Joshua D. Greene's Edge Bio page

