
An Edge Conference



[PAUL BLOOM:] I'd like to thank the Edge Foundation for putting together this workshop, and I'd also like to thank all of my colleagues here. It's because of the extraordinary theoretical and empirical work of the people in this room that the study of morality is, I think, the most exciting field in all of psychology. So I'm really glad to be included among this group.
What I want to do today is talk about some ideas I've been exploring concerning the origin of human kindness. And I'll begin with a story that Sarah Hrdy tells at the beginning of her excellent new book, "Mothers and Others." She describes herself flying on an airplane. It's a crowded airplane, and she's flying coach. She waits in line to get to her seat; later in the flight, food is going around, but she's not the first person to be served; other people are getting their meals ahead of her. And there's a crying baby. The mother's soothing the baby, the person next to them is trying to hide his annoyance, other people are coo-cooing at the baby, and so on.
As Hrdy points out, this is entirely unexceptional. Billions of people fly each year, and this is how most flights are. But she then imagines what would happen if every individual on the plane was transformed into a chimp. Chaos would reign. By the time the plane landed, there'd be body parts all over the aisles, and the baby would be lucky to make it out alive.
The point here is that people are nicer than chimps. Human niceness shows up in all sorts of other ways. Americans give hundreds of billions of dollars each year to charity.   Now, you might be cynical about some of that giving, but some of it seems to be genuinely motivated by concern for strangers. We leave tips at restaurants. We leave tips in our hotel rooms. This last one is striking: Some of us, when leaving this hotel, will leave money for the maid, even though this act has no possible selfish benefit. It doesn’t help our reputation; it won’t improve future service. We do it anyway, because we feel that it is right.
My favorite experiment on adult human niceness was done by Stanley Milgram many years ago. Milgram was a Yale psychologist who is most famous for his obedience experiments, where he found that people would kill strangers if asked to do so in the right way. But he was also interested in niceness, and he did an experiment in which he left stamped envelopes scattered around New Haven. The question was how many of them would be delivered. And the answer was well over half. Now it wasn’t indiscriminate: If instead of a person’s name on the letter, it was “Friends of the Nazi Party”, people wouldn't deliver it. Presumably they'd look at it, they'd throw it in the garbage, they'd say to hell with that.

In a more recent study, another psychologist replicated the study but didn't even put stamps on the letters. Still, one in five letters came back. This is extraordinarily nice.
I'm a developmental psychologist and I'm interested in where this niceness comes from. It turns out that at least some of it seems to be hard-wired, emerging naturally. It is not taught.
The idea here was anticipated by Adam Smith hundreds of years ago. Adam Smith was the founder of modern economics, and he was very sophisticated when it came to human sentiment. He pointed out that when you see somebody in pain, you feel their pain — to at least some extent — as if it was yours. And you're motivated to make it go away, you're motivated to help. This is a primitive good that doesn't reduce to any other good.

It turns out that some such empathy exists even in babies. When babies hear crying, they'll start to cry themselves. Now, some very cynical psychologists worried that this isn't empathy at all. It's because babies are so stupid that when they hear another baby crying, they think they're crying themselves, so they get upset and they cry some more. In response, though, other psychologists did experiments where they exposed babies to tape-recorded sounds of their own cries and tape-recorded sounds of other babies' cries. And they found that the babies cry more to the sounds of other babies than to their own cries, suggesting this response really is other-directed. Furthermore, when a baby sees someone in pain, even silent pain, the baby will get distressed. And as soon as babies are old enough to move their bodies around, they'll try to make the pain go away. They'll stroke the other person, or they'll try to hand over a toy or a bottle.

In some recent work that Roy Baumeister mentioned in passing, Felix Warneken and Michael Tomasello set up a clever experiment where they put toddlers in situations where nobody is looking at them, and then an adult comes in, and has some sort of minor crisis, such as reaching for something and being unable to get to it, or trying to get access to a cabinet with his arms too full to open the door. And Warneken and Tomasello find that toddlers, more often than not, will spontaneously toddle over and try to help.

In my own research, I've been interested not so much in moral action or altruistic behavior, but in moral cognition, moral intelligence. And this is a series of studies that I've been doing in collaboration with Karen Wynn, my colleague at Yale, who runs the Yale Infant Lab, and a wonderful graduate student named Kiley Hamlin, who's now an Assistant Professor at the University of British Columbia.
We created a set of one-act morality plays. For each of these, there is a character who tries to do something, and there's a good guy and there's a bad guy. These are animated figures: simple geometrical objects or puppets. For instance, in one of our studies, a character would be struggling to get up a hill. One guy would come and push him up. Another guy would come and push him down. In another, a character would be playing with a ball. He rolls the ball to another puppet. They look at each other and the puppet rolls it back. He rolls the ball to another puppet, they look at each other, and then this other puppet runs away with the ball. In a third one-act play, there's a puppet trying to open up a transparent box. The baby can see that there is a toy in there. And one puppet comes and helps to open the box and, later, a different puppet jumps on the box, slamming it closed.

These are three examples; we have a couple of more scenarios in the works now. What we find is that if you ask toddlers of 19 months of age, “Who is the good guy?” and “Who is the bad guy?”, they respond in the same way that adults do. They point to the proactive agent, the person who helps the character achieve his goals, as the good guy, and they point to the disrupter, the thwarter, as the bad guy.

Now, maybe that's not so exciting — these are fairly old kids. But what we've done is we've pushed the age lower and lower. In one set of studies, we present the baby with both characters, and we see which one the baby will reach for, which one the baby will choose. Keep in mind that everything is counterbalanced, and the person who's offering the choice is always blind to the roles of the different characters, to avoid the problem of unconscious cuing. Also, the parents have their eyes closed during the study.
We find that, down to six months of age, they'll reach for the good guy. We also have neutral conditions, and these tell us that they'd rather reach for the good guy than for a neutral guy, but they'd rather reach for a neutral guy than for a bad guy. This suggests that there are two forces at work — they are drawn toward the good guy and drawn away from the bad guy.
In a recent study that was just published in the journal Developmental Science, we test three-month-olds. Now, three-month-olds are blobs; they are meatloaves. They can't coordinate their actions well enough to reach. But we know from the six-month-old study that before babies reach, they look to where they're going to reach. So for the three-month-olds, we record where they look. And, as predicted, they look to the good guy, not to the bad guy.

Does this show morality, a moral instinct? No. What it shows is babies are sensitive to third-party interactions of a positive and negative nature, and this influences how they behave toward these characters, and, later on, how they talk about them. And I think that that is relevant to morality. I think it's a useful moral foundation. But how moral is it?  Are these truly moral judgments?  And the honest answer is we don't know. This is something that we’re actively exploring, but, as you could imagine, when you're dealing with six-month-olds, it's difficult to study.
We are embarking on some experiments that try to address this issue, along with a Yale graduate student, Neha Mahajan. One aspect of mature morality is that you not only approach a good guy and avoid a bad guy, but you believe that a good guy should be rewarded and a bad guy should be punished. So we tested 19-month-olds to see whether they share this intuition. Using our usual paradigms, we have a good guy and a bad guy and we ask the children to give a treat to one of them. And what we find is they usually give it to the good guy. We also have a punishment condition, so we say to the child: You have to take a treat from one of these characters. They'll tend to take it from the bad guy.

Recently, with nine-month-olds, we did a study looking at their notions of justice. And to do this, we have a two-act play. In the first act, you have a good guy and a bad guy. And they do their good guy/bad guy actions, the ones that I described before. In the second act, what happens is two more characters come in. In one condition, one of the characters rewards the good guy and the other character punishes the good guy. And we find that babies prefer, by reaching, the character who rewarded the good guy.
Now, this is not so surprising, because we had the previous finding that babies like positive actors. Maybe this is all that's going on. The second condition's more interesting. You have a good guy and a bad guy, then one character comes in and rewards the bad guy; another character comes in and punishes the bad guy. Now the babies robustly prefer the one who punishes the bad guy, suggesting that they will favor bad actions when they are done to those who are themselves bad. This suggests some rudimentary — and I'm happy to put into scare quotes — some rudimentary sense of "justice".
There are other studies looking at baby morality from Renée Baillargeon's lab at the University of Illinois and Luca Surian's lab at the University of Trento, as well as from other labs. These also support the idea that there's both a surprisingly precocious grasp of moral notions and a surprisingly precocious propensity for moral action. Now, some would argue that I could stop my talk now, because I've solved the problem I set out to answer. The human niceness that we are interested in exists in babies; it is part of our hard-wired inheritance. We are, as Dacher Keltner put it, "born to be good". To the extent that you find a narrowing of this kindness in adults, this is due to the corrupting forces of culture and society.
This is not the argument I wish to make. I find the idea of an innately pure kindness to be extremely implausible. For one thing, our brains have evolved through natural selection. And that means that the main force that shaped our psyche is differential reproductive success. Our minds have evolved through processes such as kin selection and reciprocal altruism. We should therefore be biased in favor of those who share our genes at the expense of those who don't, and we should be biased in favor of those who we are in continued interaction with at the expense of strangers.
Also, there is now a substantial amount of developmental evidence suggesting that this kindness that we see early on is parochial. It is narrow. It applies to those that a baby is in immediate contact with, and does not extend more generally until quite late in development.
Here are some sources of evidence for this claim. We've known for a long time that babies are biased towards the familiar when it comes to individuals. A baby will prefer to look at her mother's face rather than at the face of a stranger. A baby will prefer to listen to her mother's voice rather than to the voice of a stranger. This bias also extends to categories. Babies prefer to listen to their native language rather than to a language that's different from theirs. Babies who are raised in white households prefer to look at white people rather than at black people. Babies who are raised in black households prefer to look at black people rather than at white people.
We know that this last fact isn't because the babies know that they themselves are white or black, because babies who are raised in multi-ethnic environments show no bias. It has to do with the people around them. And as they get older, this bias in preference translates into a bias in behavior. Young children prefer to imitate and learn from those who look like them and those who speak the same language as them. Around the age of nine months, they'll show stranger anxiety — they avoid new people.
There are also studies now with preschool children, older children, and adolescents showing that it is fairly easy to get them to favor their own group over others, even when the group is established in the most minimal and arbitrary circumstances. This is all based on Tajfel's work on "minimal groups". For instance, in experiments by Bigler and others, you take a bunch of children and you say: okay, kids, I have some red t-shirts and blue t-shirts, and I'm just going to give them to you guys. You get the children to put on the t-shirts, so that now you have a red t-shirt group and a blue t-shirt group. Now you approach a child from the red t-shirt group, and you say: I have some candy to give out, and you can't get any, but I'm asking you how to give it to the other people. Who do you want to give it to? Do you want to give it to everybody equally, or do you want to give more to the red or more to the blue?
It turns out that children are biased to give more to their own group, even when they don't personally profit from the giving. And when asked about the properties of their group — who's nice, who's mean, who's smart, who's stupid — a child who just put on a red t-shirt will tend to favor the red t-shirt group over the blue t-shirt group, even though it's perfectly clear that the groups were assigned on an arbitrary basis.
Yet another bit of bad news about human nature comes from economic games. Many of you are familiar with the ultimatum game, and this is just one of a series of games thought up by behavioral economists that purport to show niceness among adults, that we are generous in certain ways. Now, I am highly skeptical about what these studies really show, and we could talk about that in the question period. But what's interesting for these purposes is that children behave quite differently from adults in these games.

I'll give you one example. This is the dictator game. The dictator game is actually even simpler than the ultimatum game. I choose two people at random. One of them, the subject, is lucky. He gets some money, say $100. Now he can give as much as he wants to the other individual, anything from the entire $100 to nothing at all. This other person will never know who made the choice — it's entirely anonymous.
From a self-interested perspective, the subject should just keep all the money. But what you find is that people actually give. People give roughly 30 percent. Some people give nothing, but some people give half, and some people even give more than half. This is surprisingly nice. Ernst Fehr and Simon Gächter recently did this with children. What they did was set up a very simple version of the dictator game. They gave children two candies. And they say to each child: You can either keep both candies or you can give one of them to this stranger.
Seven- and eight-year-olds will often choose to do a split. But younger children almost always keep both candies. So to the extent that there is generosity to strangers, it emerges late. Now, one problem with this standard game is that you are pitting two impulses against each other. The child might have an equity/kindness/fairness impulse, so there is a desire to share, but the child might also like candies, so there's a desire to keep both of them. You're pitting them against each other. Maybe children's hunger trumps their generosity.
Fehr and Gächter explored this with another study. The child got to choose between getting a candy and giving another person a candy, versus getting a candy and giving the other person nothing. Now, from a consequentialist point of view, this is not a head-scratcher. There are not two competing impulses. One can be nice without suffering any penalty. But, until about the age of seven or eight, children are perfectly indifferent. The numbers are about 50 percent — they don't care. It's not like they hate the other anonymous person and want to deprive him of the candy; it's that they have no feelings either way.
This shouldn't surprise us. Maybe it's even better than we could have expected. The dominant trend of humanity has been to view strangers — non-relatives, those from other tribes — with hatred, fear, and disgust. Jared Diamond talks about the groups in Papua New Guinea that he encountered. And he points out that for an individual to leave his or her tribe and just walk into another, strange tribe would be tantamount to suicide. Others have observed that the words that human groups use to describe themselves and others reflect this same animus towards strangers. So groups tend to have a word for themselves that often means something like person or human. Then they have a word for other people. Sometimes this is just "The Others", as in the TV show "Lost". But sometimes they describe the other group using the same word they use for prey, or food.
So there's a puzzle, then, because the niceness we see in the world today, at least from some people, seems to clash with our natural morality, which is nowhere near as nice. How did we end up bridging the gap? How have we gotten so much nicer?
Note that I've been focusing here on questions of our kindness to strangers, but this question could be asked about other aspects of morality, such as the origin of new moral ideas, like the idea that slavery is wrong or that we shouldn't be sexist or racist.
These are deep puzzles. I’ll end this talk with two compatible theories of the emergence of mature human kindness.

The first involves increased interdependence. This is something that Robert Wright has been arguing in a series of books, and Peter Singer and Steven Pinker have also discussed it, in different forms. The idea is that as you come into contact with more and more people, in a situation where there is interdependence, where your life is improved by being able to connect with the other person, where there is a non-zero-sum relationship, you will come to care about their fates.

This is niceness grounded in enlightened selfishness. As Robert Wright once said in a talk, “One of the reasons I don't want to bomb the Japanese is that they built my minivan.”  Because he's in a commercial relation to these people, his compassion gets extended to where it wouldn't have otherwise been.
There is some support for this view, coming from a study by Joseph Henrich and his colleagues that was published in Science a few months ago. Henrich et al. looked at 15 societies, and they had the people in these societies play a series of economic games. They found considerable variation in how nice people are to anonymous strangers, and then did some analyses to see what determines this niceness. One finding is that capitalism makes people nicer. That is, immersion in a market economy has a significant relationship with how nice we are to anonymous strangers, presumably because if you're in a market economy, you're used to dealing with other people in long-term relationships, even if they're not your family and they're not your friends. The second factor was membership in a world religion — Christianity or Islam. This makes people nicer, perhaps because it immerses people into a larger social group and entrains them to deal with strangers.

Another explanation for the increase in human niceness is the power of stories. One of the consequences of fiction and of journalism is that they can bring distant people closer to you. You can come to think of them as if they were kin or neighbors. This can extend one's sympathies towards individuals, but, as Martha Nussbaum and many others have argued, it can also expand one's sympathy towards groups.

Consider moral progress in the United States. I think that the great moral change in our society over the last 50 to 100 years has been the changing attitudes of whites towards African-Americans. And the great moral change in the last ten years has been in straight people's attitudes towards gay people. I think that in both cases, the engine driving this change was not philosophical argument or theological pronouncements or legal analyses; it was fiction. It was imagination. It was being exposed to members of these other groups in sympathetic contexts. I would argue, more specifically, that one of the great forces of moral change in our time is the American sitcom.
I'll end by saying that this speaks to one of the issues that occupies many people in this room, which is the role of rational deliberation in morality. There seems to be a contradiction here. On the one hand, social psychologists have a million demonstrations that people are impervious to rational argument. And so the reason why I've come to my views about slavery or gay people or whatever, is most likely not because somebody gave me a real persuasive argument. On the other hand, we know full well that rational thought has made a difference in the world. Just as a recent example, Peter Singer's thoughts on issues such as how to treat non-human animals have changed the world.

I think one way out of this — and this is very similar to something that Jonathan Haidt has argued — is that reasons do affect us, but they do so indirectly, through the medium of emotions. If so, this suggests a research project of tremendous importance, one that asks: How do people come to have new moral ideas and how do they convey these ideas in ways that persuade others? 
I've made three arguments here. The first is that humans are, in a very interesting way, nice. The second is that we have evolved a moral sense, and this moral sense is powerful and can explain much of our niceness. It is far richer than many empiricists would have believed. But the third argument is that this moral sense is not enough. The accomplishments we see and admire so much in our species are due to factors other than our evolutionary history. They are due to our culture, our intelligence, and our imagination.


Paul Bloom

The human moral sense is fascinating. Putting aside the intriguing case of psychopaths, every normal adult is appalled by acts of cruelty, such as the rape of a child, the swindling of the elderly, or the humiliation and betrayal of a lover. Every normal adult is also uplifted by acts of kindness, like those heroes who jump onto subway tracks to rescue fallen strangers from oncoming trains. There is a universal urge to help those in need and to punish wrongdoers; we feel pride when we do the right thing and guilt when we don't.

Other moral feelings and impulses aren't so universal. As your typical liberal academic, I am morally appalled by tea party demonstrators, abortion clinic bombers, the NRA, the use of waterboarding to interrogate prisoners, and Sarah Palin. But I have to swallow the fact that roughly half of my fellow Americans feel just the same about gay rights demonstrators, abortionists, the ACLU, and Barack Obama.

Where does this all come from? How much of it is learned? Why are some moral judgments universal and others violently conflicting?

My answer is this: Humans are born with a hard-wired morality. A deep sense of good and evil is bred in the bone. I'm aware that this might sound outlandish, but it's supported now by research in several laboratories, including my own research at Yale. Babies and toddlers can judge the goodness and badness of others' actions; they want to reward the good and punish the bad; they act to help those in distress; they feel guilt, shame, pride, and righteous anger. I am admittedly biased, but I think these are the most exciting findings to come out of psychology in the last many years.

PAUL BLOOM is a professor of psychology at Yale University. His research explores how children and adults understand the physical and social world, with a special focus on morality, religion, fiction, and art. He has won numerous awards for his research and teaching. He is past president of the Society for Philosophy and Psychology, and co-editor of Behavioral and Brain Sciences, one of the major journals in the field.

Dr. Bloom has written for scientific journals such as Nature and Science, and for popular outlets such as The New York Times, the Guardian, and the Atlantic. He is the author or editor of four books, including How Children Learn the Meanings of Words, and Descartes' Baby: How the Science of Child Development Explains What Makes Us Human. His newest book, How Pleasure Works: The New Science of Why We Like What We Like, was published in June, 2010.


Paul Bloom's Yale University Home Page
Paul Bloom's CV
Yale Mind and Development Lab

Articles & Press:

The Moral Life of Babies, in New York Times Magazine
How Do Morals Change, in Nature
Interview, on Big Think
The Long and Short of It, in New York Times
No Smiting, in New York Times Book Review
Natural Happiness, in New York Times Magazine
What's Inside a Baby's Head, in Slate
First Person Plural, in Atlantic