
Edge 325 — August 31, 2010
14,900 words

STEPHEN H. SCHNEIDER
1945 — 2010

THE THIRD CULTURE
THE NEW SCIENCE OF MORALITY
An Edge Conference

Photo Album

SAM HARRIS
Edge Video

ROY BAUMEISTER
Edge Video

PAUL BLOOM
Edge Video

EDGE IN THE NEWS

New York Times, Wiener Zeitung (Austria), El Mundo (Spain), Boston Globe, Membrana (Russia), USA Today, Aftenposten (Norway)



STEPHEN H. SCHNEIDER
1945 — 2010

Warming is unequivocal, that's true. But that's not a sophisticated question. A much more sophisticated question is how much of the climate Ma Earth, a perverse lady, gives us is from her, and how much is caused by us. That's a much more sophisticated, and much more difficult question.

Stanford climate researcher Stephen H. Schneider, a long-time friend, colleague and Edge contributor, died last month at the age of 65 of a heart attack while on a flight to London.

To remember him, Edge asked Andrew Revkin and Stewart Brand to have an email conversation about his influence on their thinking. From 1995 through 2009, Revkin covered the environment for The New York Times as a staff reporter, and he continues to write his "Dot Earth" blog for The Times Op-Ed section. With his Whole Earth Catalog, first published in 1968 and winner of a National Book Award, Brand was one of the founders of the ecology movement. He is the author of the recently published Whole Earth Discipline.

Below is a 20-minute Edge Video interview with Stephen Schneider from our April 2008 feature on his work, "Modeling the Future".

JB

STEPHEN H. SCHNEIDER, a climatologist, was Professor of Environmental Biology and Global Change at Stanford University, Co-Director of the Center for Environmental Science and Policy at the Freeman Spogli Institute for International Studies, and a Senior Fellow at the Stanford Woods Institute for the Environment. He was the author of Laboratory Earth: The Planetary Gamble We Can't Afford to Lose.

Stephen Schneider's Edge Bio Page

PERMALINK

REMEMBERING STEPHEN SCHNEIDER: Andrew Revkin & Stewart Brand

STEWART BRAND: What I appreciated most about Steve — along with all the significant work he did on climate science and climate policy — was his readiness to declare in public when his mind had been changed by new and better data.


He warned about global cooling when it looked like particulate aerosols were dominating climate change, and then as soon as more thorough models indicated that the effects from increasing greenhouse gases would swamp the cooling effects of aerosols, he reversed his position right away and explained why.

Likewise, several months after he first participated in warnings about "nuclear winter," he publicized new studies indicating that the initial fears were exaggerated.

That's intellectual honesty.


ANDREW REVKIN: I first got to know Steve while reporting a long cover story for Science Digest on nuclear winter (published March 1985), followed soon after by our interactions while I was trying to determine the fate of Vladimir Alexandrov, a Soviet climate modeler and spokesman on nuclear winter (and probable spy for someone; it was never clear whether for the USSR, the USA, or both), who had spent months working on supercomputers at the National Center for Atmospheric Research with Steve and others and vanished in Spain in the mid-1980s while attending a conference on nuclear-free cities.

I, too, was impressed with Steve's eagerness to follow the data, including his work with Starley Thompson of NCAR that concluded the cooling effect of smoke lofted from immolated cities after a nuclear war would be more "nuclear autumn" than nuclear winter. Some scientists, particularly Alan Robock at Rutgers, say Steve was wrong about that conclusion, although my sense is there's enough uncertainty in the science of post-war cooling that it'll never be a significant influence should someone be pondering pushing the button.

In my 1985 article, Steve was one of those who, along with Freeman Dyson, emphasized the importance of recognizing and acknowledging uncertainties as much as the established facts in considering policy options.

And as a communicator, of course, I was soon captivated by Steve's passion for diving into the public arena, but also by his insistence on clarifying that, on policy questions, a scientist's views were as shaped by values as those of anyone else.

He was a frequent source of mine on climate science and policy from 1988, when I bumped into him at the first International Conference on the Changing Atmosphere, in Toronto, Canada, on through about one week before he died.

But I've already found it necessary to draw on his insights after his death.

A few weeks ago, an anonymous comment contributor on Dot Earth, "Wmar," asserted that there was now no need to press for policies to limit risks from global warming because the hypothesis "has been proved to have been falsifiable" — as if there is one simple question in play, as if decisions about such risks are a simple yes/no function of the data.

I responded by quoting from a 2006 e-mail message from Steve, which I'd never published:

"Wmar," you keep trying to set up the question of responding to the risks of human-driven climate change as if there is a single falsifiable hypothesis that determines — yes or no — whether action is justified (on emissions, separate from adaptation). This will never be that easy.

This is the way Steve Schneider put the situation in an e-mail to me in 2006 (I'll be publishing a "Schneidergate" collection sometime later this summer):

"...To be risk averse is good policy in my VALUE SYSTEM — and we always must admit that how to take risk — with climate damages or costs of mitigation/adaptation — is not science but world views and risk aversion philosophy — and whether you fear more the type one error (wrong forecast so you wasted resources by acting on it) or type two error (right forecast but too uncertain so you didn't act and it happened and you really got hurt by not hedging) is a value tradeoff..."

My guess is that your values shape your interpretation of the science (and the interpretations of your intellectual antagonists here), and also fuel your eagerness to portray the response question as subject to the certainty (or lack of it) in the science. Any chance that's right?

I guarantee I'll be drawing on my "Schneidergate" e-mail storehouse for a long time to come.
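An aside on the decision theory in that quote: Schneider's "type one" and "type two" errors are the two ways a hedging decision can go wrong, and which one you fear more depends entirely on the costs you attach to each outcome, which is where, as he says, values enter. A minimal sketch in Python, with invented costs and probabilities that are in no way climate estimates, just to show the structure of the tradeoff:

    # Toy expected-cost comparison of Schneider's two error types.
    # All numbers are invented for illustration; none are climate estimates.

    def expected_cost(p_right, act, cost_mitigation, cost_damages):
        """Expected cost given the probability the forecast is right.
        Type one error: act on a wrong forecast (mitigation wasted).
        Type two error: fail to act on a right forecast (damages unhedged)."""
        if act:
            return cost_mitigation            # paid whether or not the forecast holds
        return p_right * cost_damages         # damages land only if the forecast was right

    # Hypothetical costs: mitigation = 1 unit, unhedged damages = 10 units.
    # Acting is the cheaper bet whenever p_right exceeds 1/10; the value
    # judgment lives in those two cost numbers, not in the forecast itself.
    for p_right in (0.05, 0.2, 0.5):
        act = expected_cost(p_right, True, 1, 10)
        wait = expected_cost(p_right, False, 1, 10)
        print(p_right, "act" if act < wait else "wait", act, wait)

Change the cost numbers and the recommended action flips, which is exactly Schneider's point: how to take risk is a value tradeoff, not a scientific output.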


STEWART BRAND: Andrew, how would you compare and contrast Steve with other major players in the climate change drama?


ANDREW REVKIN: I saw him as more up front about the limited role of science in determining societal responses to global warming than most of his peers — many of whom, still today, seem surprised, almost affronted, that society hasn't jumped to respond to the message they see as so clear-cut.

And of course he was one of a handful of scientists immersed at the interface of climate science and policy who stressed that the UNcertainties were the reason for action — even as others sometimes tried to downplay the uncertainties as a way to jog the public and policymakers.


STEWART BRAND: Do say more.

"Brash," "feisty," "outspoken" — those common adjectives about Steve mix interestingly with his willingness to change his mind when persuaded by broader data or deeper models. His normal conversational mode was argument, often in full rant. He put it right out front in his book titles — "The Patient from Hell" and "Science as a Contact Sport." Often under attack, he gave as good as he got.

I think what saved him and his science is that he argued just as ferociously with himself.


ANDREW REVKIN: Relentlessly energetic, feisty and in a hurry, but recognizing the realities of the world.


Andrew Revkin's Edge Bio Page
Stewart Brand's Edge Bio Page


THE NEW SCIENCE OF MORALITY
An Edge Conference

Photo Album

Talks by

Sam Harris, Roy Baumeister, Paul Bloom

We are pleased to present three more talks — by Sam Harris, Roy Baumeister, Paul Bloom — from the Edge "New Science of Morality Conference" in July. Below please find (a) videos of the 25-minute talks; (b) downloadable MP3 audio files; and (c) transcripts of the talks.

[EDITOR'S NOTE: Marc Hauser, one of the nine participants at the conference, has withdrawn his contribution.]



THE NEW SCIENCE OF MORALITY
An Edge Conference

SAM HARRIS

...I think we should differentiate three projects that seem to me to be easily conflated, but which are distinct and independently worthy endeavors. The first project is to understand what people do in the name of "morality." We can look at the world, witnessing all of the diverse behaviors, rules, cultural artifacts, and morally salient emotions like empathy and disgust, and we can study how these things play out in human communities, both in our time and throughout history. We can examine all these phenomena in as nonjudgmental a way as possible and seek to understand them. We can understand them in evolutionary terms, and we can understand them in psychological and neurobiological terms, as they arise in the present. And we can call the resulting data and the entire effort a "science of morality". This would be a purely descriptive science of the sort that I hear Jonathan Haidt advocating.

Sam Harris Talk:
Text
Video




Sam Harris Talk Permalink


[SAM HARRIS:] What I intended to say today has been pushed around a little bit by what has already been said and by a couple of sidebar conversations. That is as it should be, no doubt. But if my remarks are less linear than you would hope, blame that — and the jet lag.

I think we should differentiate three projects that seem to me to be easily conflated, but which are distinct and independently worthy endeavors. The first project is to understand what people do in the name of "morality." We can look at the world, witnessing all of the diverse behaviors, rules, cultural artifacts, and morally salient emotions like empathy and disgust, and we can study how these things play out in human communities, both in our time and throughout history. We can examine all these phenomena in as nonjudgmental a way as possible and seek to understand them. We can understand them in evolutionary terms, and we can understand them in psychological and neurobiological terms, as they arise in the present. And we can call the resulting data and the entire effort a "science of morality". This would be a purely descriptive science of the sort that I hear Jonathan Haidt advocating.

For most scientists, this project seems to exhaust all the legitimate points of contact between science and morality — that is, between science and judgments of good and evil and right and wrong. But I think there are two other projects that we could concern ourselves with, which are arguably more important.

The second project would be to actually get clearer about what we mean, and should mean, by the term "morality," understanding how it relates to human well-being altogether, and to actually use this new discipline to think more intelligently about how to maximize human well-being. Of course, philosophers may think that this begs some of the important questions, and I'll get back to that. But I think this is a distinct project, and it's not purely descriptive. It's a normative project. The question is, how can we think about moral truth in the context of science?

The third project is a project of persuasion: How can we persuade all of the people who are committed to silly and harmful things in the name of "morality" to change their commitments, to have different goals in life, and to lead better lives? I think that this third project is actually the most important project facing humanity at this point in time. It subsumes everything else we could care about — from arresting climate change, to stopping nuclear proliferation, to curing cancer, to saving the whales. Any effort that requires that we collectively get our priorities straight and marshal massive commitments of time and resources would fall within the scope of this project. To build a viable global civilization we must begin to converge on the same economic, political, and environmental goals.

Obviously the project of moral persuasion is very difficult — but it strikes me as especially difficult if you can't figure out in what sense anyone could ever be right or wrong about questions of morality or about questions of human values. Understanding right and wrong in universal terms is Project Two, and that's what I'm focused on.

There are impediments to thinking about Project Two: the main one being that most right-thinking, well-educated, and well-intentioned people — certainly most scientists and public intellectuals, and I would guess, most journalists — have been convinced that something in the last 200 years of intellectual progress has made it impossible to actually speak about "moral truth." Not because human experience is so difficult to study or the brain too complex, but because there is thought to be no intellectual basis from which to say that anyone is ever right or wrong about questions of good and evil.

My aim is to undermine this assumption, which is now the received opinion in science and philosophy. I think it is based on several fallacies and double standards and, frankly, on some bad philosophy. The first thing I should point out is that, apart from being untrue, this view has consequences.

In 1947, when the United Nations was attempting to formulate a universal declaration of human rights, the American Anthropological Association stepped forward and said it couldn't be done. This would be to merely foist one provincial notion of human rights on the rest of humanity. Any notion of human rights is the product of culture, and declaring a universal conception of human rights is an intellectually illegitimate thing to do. This was the best our social sciences could do with the crematory of Auschwitz still smoking.

But, of course, it has long been obvious that we need to converge, as a global civilization, in our beliefs about how we should treat one another. For this, we need some universal conception of right and wrong. So in addition to just not being true, I think skepticism about moral truth actually has consequences that we really should worry about.

Definitions matter. And in science we are always in the business of framing conversations and making definitions. There is nothing about this process that condemns us to epistemological relativism or that nullifies truth claims. We define "physics" as, loosely speaking, our best effort to understand the behavior of matter and energy in the universe. The discipline is defined with respect to the goal of understanding how matter behaves.

Of course, anyone is free to define "physics" in some other way. A Creationist physicist could come into the room and say, "Well, that's not my definition of physics. My physics is designed to match the Book of Genesis." But we are free to respond to such a person by saying, "You know, you really don't belong at this conference. That's not 'physics' as we are interested in it. You're using the word differently. You're not playing our language game." Such a gesture of exclusion is both legitimate and necessary. The fact that the discourse of physics is not sufficient to silence such a person, the fact that he cannot be brought into our conversation about physics, does not undermine physics as a domain of objective truth.

And yet, on the subject of morality, we seem to think that the possibility of differing opinions, the fact that someone can come forward and say that his morality has nothing to do with human flourishing — but depends upon following shariah law, for instance — the fact that such a position can be articulated proves, in some sense, that there's no such thing as moral truth, and that morality, therefore, must be a human invention. The fact that it is possible to articulate a different position is considered a problem for the entire field. But this is a fallacy.

We have an intuitive physics, but much of our intuitive physics is wrong with respect to the goal of understanding how matter and energy behave in this universe. I am saying that we also have an intuitive morality, and much of our intuitive morality may be wrong with respect to the goal of maximizing human flourishing — and with reference to the facts that govern the well-being of conscious creatures, generally.

So I will argue, briefly, that the only sphere of legitimate moral concern is the well-being of conscious creatures. I'll say a few words in defense of this assertion, but I think the idea that it has to be defended is the product of several fallacies and double standards that we're not noticing. I don't know that I will have time to expose all of them, but I'll mention a few.

Thus far, I've introduced two things: the concept of consciousness and the concept of well-being. I am claiming that consciousness is the only context in which we can talk about morality and human values. Why is consciousness not an arbitrary starting point? Well, what's the alternative? Just imagine someone coming forward claiming to have some other source of value that has nothing to do with the actual or potential experience of conscious beings. Whatever this is, it must be something that cannot affect the experience of anything in the universe, in this life or in any other.

If you put this imagined source of value in a box, I think what you would have in that box would be — by definition — the least interesting thing in the universe. It would be — again, by definition — something that cannot be cared about. Any other source of value will have some relationship to the experience of conscious beings. So I don't think consciousness is an arbitrary starting point. When we're talking about right and wrong, and good and evil, and about outcomes that matter, we are necessarily talking about actual or potential changes in conscious experience.

I would further add that the concept of "well-being" captures everything we can care about in the moral sphere. The challenge is to have a definition of well-being that is truly open-ended and can absorb everything we care about. This is why I tend not to call myself a "consequentialist" or a "utilitarian," because traditionally, these positions have bounded the notion of consequences in such a way as to make them seem very brittle and exclusive of other concerns — producing a kind of body-count calculus that only someone with Asperger's could adopt.

Consider the Trolley Problem: If there just is, in fact, a difference between pushing a person onto the tracks and flipping a switch — perhaps in terms of the emotional consequences of performing these actions — well, then this difference has to be taken into account. Or consider Peter Singer's Shallow Pond problem: We all know that it would take a very different kind of person to walk past a child drowning in a shallow pond, out of concern for getting one's suit wet, than it takes to ignore an appeal from UNICEF. It says much more about you if you can walk past that pond. If we were all this sort of person, there would be terrible ramifications as far as the eye can see. It seems to me, therefore, that the challenge is to get clear about what the actual consequences of an action are, about what changes in human experience are possible, and about which changes matter.

In thinking about a universal framework for morality, I now think in terms of what I call a "moral landscape." Perhaps there is a place in hell for anyone who would repurpose a cliché in this way, but the phrase, "the moral landscape" actually captures what I'm after: I'm envisioning a space of peaks and valleys, where the peaks correspond to the heights of flourishing possible for any conscious system, and the valleys correspond to the deepest depths of misery.

To speak specifically of human beings for the moment: any change that can effect a change in human consciousness would lead to a translation across the moral landscape. So changes to our genome, and changes to our economic systems — and changes occurring on any level in between that can affect human well-being for good or for ill — would translate into movements within this hypothetical space of possible human experience.

A few interesting things drop out of this model: Clearly, it is possible, or even likely, that there are many peaks on the moral landscape. To speak specifically of human communities: perhaps there is a way to maximize human flourishing in which we follow Peter Singer as far as we can go, and somehow train ourselves to be truly dispassionate to friends and family, without weighting our children's welfare more than the welfare of other children, and perhaps there's another peak where we remain biased toward our own children, within certain limits, while correcting for this bias by creating a social system which is, in fact, fair. Perhaps there are a thousand different ways to tune the variable of selfishness versus altruism, to land us on a peak on the moral landscape.

However, there will be many more ways to not be on a peak. And it is clearly possible to be wrong about how to move from our present position to the nearest available peak. This follows directly from the observation that whatever conscious experiences are possible for us are a product of the way the universe is. Our conscious experience arises out of the laws of nature, the states of our brain, and our entanglement with the world. Therefore, there are right and wrong answers to the question of how to maximize human flourishing in any moment.
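Harris's landscape metaphor borrows the geometry of optimization, and a toy sketch can make his two structural claims concrete: there can be more than one genuine peak, and from any starting point some moves are simply wrong relative to the goal. The "well-being" function below is invented purely for illustration; nothing hangs on its particular shape:

    import math

    # A made-up "well-being" curve over a one-dimensional space of ways of life.
    # Two genuine peaks, near x = 2 and x = 7; everywhere else is lower ground.
    def well_being(x):
        return math.exp(-(x - 2) ** 2) + 0.9 * math.exp(-(x - 7) ** 2)

    def hill_climb(x, step=0.1, iters=200):
        """Greedy local search: take a small step only when well-being improves."""
        for _ in range(iters):
            x = max((x - step, x, x + step), key=well_being)
        return x

    print(round(hill_climb(0.0), 2))  # settles near 2.0: one peak
    print(round(hill_climb(9.0), 2))  # settles near 7.0: a different, equally real peak

Whether a given step raises or lowers well_being is a fact about the function, not a matter of opinion, which is the sense in which one can be wrong about how to move toward the nearest peak.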

This becomes incredibly easy to see when we imagine there being only two people on earth: we can call them Adam and Eve. Ask yourself, are there right and wrong answers to the question of how Adam and Eve might maximize their well-being? Clearly there are. Wrong answer number one: they can smash each other in the face with a large rock. This will not be the best strategy to maximize their well-being.

Of course, there are zero-sum games they could play. And yes, they could be psychopaths who might utterly fail to collaborate. But, clearly, the best responses to their circumstance will not be zero-sum. The prospects of their flourishing and finding deeper and more durable sources of satisfaction will only be exposed by some form of cooperation. And all the worries that people normally bring to these discussions — like deontological principles or a Rawlsian concern about fairness — can be considered in the context of our asking how Adam and Eve can navigate the space of possible experiences so as to find a genuine peak of human flourishing, regardless of whether it is the only peak. Once again, multiple, equivalent but incompatible peaks still allow for a realistic space in which there are right and wrong answers to moral questions.
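The claim that the best responses will not be zero-sum can be put in payoff terms. Here is a toy payoff table in Python; the numbers are invented, chosen only so that mutual cooperation beats every alternative in joint well-being:

    # Invented payoffs (Adam, Eve) for a single interaction; higher is better.
    payoffs = {
        ("cooperate", "cooperate"): (3, 3),   # pool effort, share food
        ("cooperate", "defect"):    (0, 4),   # one exploits the other
        ("defect",    "cooperate"): (4, 0),
        ("defect",    "defect"):    (1, 1),   # the rock-smashing end of the spectrum
    }

    joint = {pair: sum(vals) for pair, vals in payoffs.items()}
    best = max(joint, key=joint.get)
    print(best, joint[best])   # ('cooperate', 'cooperate') 6

In a zero-sum game every cell would sum to the same constant, so no strategy pair could do better jointly than any other; the existence of a cell with a strictly higher joint payoff is what makes cooperation, and not zero-sum play, the right answer here.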

One thing we must not get confused about is the difference between answers in practice and answers in principle. Needless to say, fully understanding the possible range of experiences available to Adam and Eve represents a fantastically complicated problem. And it gets more complicated when we add 6.7 billion people to the experiment. But I would argue that it's not a different problem; it just gets more complicated.

By analogy, consider economics: Is economics a science yet? Apparently not, judging from the last few years. Maybe economics will never get better than it is now. Perhaps we'll be surprised every decade or so by something terrible, and we'll be forced to concede that we're blinded by the complexity of our situation. But to say that it is difficult or impossible to answer certain problems in practice does not even slightly suggest that there are no right and wrong answers to these problems in principle.

The complexity of economics would never tempt us to say that there are no right and wrong ways to design economic systems, or to respond to financial crises. Nobody will ever say that it's a form of bigotry to criticize another country's response to a banking failure. Just imagine how terrifying it would be if the smartest people around all more or less agreed that we had to be nonjudgmental about everyone's view of economics and about every possible response to a global economic crisis.

And yet that is exactly where we stand as an intellectual community on the most important questions in human life. I don't think you have enjoyed the life of the mind until you have witnessed a philosopher or scientist talking about the "contextual legitimacy" of the burka, or of female genital excision, or any of these other barbaric practices that we know cause needless human misery. We have convinced ourselves that somehow science is by definition a value-free space, and that we can't make value judgments about beliefs and practices that needlessly derail our attempts to build happy and sane societies.

The truth is, science is not value-free. Good science is the product of our valuing evidence, logical consistency, parsimony, and other intellectual virtues. And if you don't value those things, you can't participate in the scientific conversation. I'm saying we need not worry about the people who don't value human flourishing, or who say they don't. We need not listen to people who come to the table saying, "You know, we want to cut the heads off adulterers at half-time at our soccer games because we have a book dictated by the Creator of the universe which says we should." In response, we are free to say, "Well, you appear to be confused about everything. Your 'physics' isn't physics, and your 'morality' isn't morality." These are equivalent moves, intellectually speaking. They are born of the same entanglement with real facts about the way the universe is. In terms of morality, our conversation can proceed with reference to facts about the changing experiences of conscious creatures. It seems to me just as legitimate, scientifically, to define "morality" in this way as it is to define "physics" in terms of the behavior of matter and energy. But most people engaged in the scientific study of morality don't seem to realize this.



Sam Harris

SAM HARRIS is a neuroscientist and the author of The End of Faith and Letter to a Christian Nation. He and his work have been discussed in Newsweek, TIME, The New York Times, Scientific American, Nature, Rolling Stone, and many other publications. His writing has appeared in Newsweek, The New York Times, The Los Angeles Times, The Times (London), The Boston Globe, The Atlantic, The Annals of Neurology, PLoS ONE, and elsewhere.

Mr. Harris is a Co-Founder and CEO of Project Reason, a nonprofit foundation devoted to spreading scientific knowledge and secular values in society. He received a degree in philosophy from Stanford University and a Ph.D. in neuroscience from UCLA. He is the author of the forthcoming The Moral Landscape: How Science Can Determine Human Values (Free Press).

Links:

Sam Harris's Home Page
Project Reason

Articles & Press:

Science can answer moral questions, TED Talk
The Four Horsemen: Richard Dawkins, Daniel Dennett, Sam Harris and Christopher Hitchens, Video
Rolling Stone 40th Anniversary
Fact Impact, in Newsweek
What Your Brain Looks Like on Faith, in Time
The New Wars of Religion, in The Economist
The New Atheists, in The Nation
The Celestial Teapot, in The New Republic


THE NEW SCIENCE OF MORALITY
An Edge Conference

ROY BAUMEISTER

That said, in terms of trying to understand human nature, and morality too: nature and culture certainly combine in some ways to do this, but I'd put them together in a slightly different way. It's not that nature is over here and culture is over there, and they're both pulling us in different directions. Rather, nature made us for culture. I'm convinced that the distinctively human aspects of psychology were evolutionary adaptations that enabled us to have this new and better kind of social life, namely culture.

Culture is our biological strategy. It's a new and better way of relating to each other, based on shared information and division of labor, interlocking roles, and things like that. And it has worked: it's how we solve the problems of survival and reproduction, and it has worked pretty well for us in that regard. And so the distinctively human traits are often ones that are there to make this new kind of social life work.

Now, where does this leave us with morality?

Roy Baumeister Talk:
Text
Video




MP3 Audio Download

Roy Baumeister Talk Permalink



[ROY BAUMEISTER:] John asked us to give some of our own personal quest or struggle or background on this. I don't know. The thing is, I was actually raised by wolves, and it was not a happy childhood; you miss out on a lot of things. It taught me a lot, but when I got to adolescence I didn't want to be a wolf anymore, and what they had taught me was useless.

Ever since, I've been trying to figure out what human life is all about, so as not to miss out on any more, and psychology is good for that: figuring out how the parts fit together. To do that, one has to be something of a generalist, because I have to know what's in all the parts. And to be a generalist today, you've got to be fast, because there's so little time and so much to know.

I go from area to area, trying to size things up. One thing I've learned is that caring about what the right answer is just slows you down. It gets in the way. And these are topics that people care very much about and have strong opinions on. I'd rather just not care. I aspire not to have political views.

Going from area to area, I notice some patterns that come up over and over again. One is that I've become increasingly skeptical of reductionism. It seems like reductionism is always proved wrong in the long run. In psychology we had behaviorism and Freudian psychoanalysis, which were going to explain everything. They explained some things, and we learned a lot, but they could not explain everything. Far from it.

To some extent we're now going through this with the brain and evolution: many people think these will explain everything. Certainly we are going to learn a lot, and already have. But we need to be attentive to the continuities, the ways we're the same as animals, and also to the ways in which we are different, in order to put the two together.

Beyond the reductionism, another pattern is that motivation tends to be undervalued compared to cognition and ability. And a third is that we tend to focus on the individual, so we neglect and undervalue the interpersonal dimension; things are perhaps more interpersonal than we are typically inclined to think.

That said, in terms of trying to understand human nature, and morality too: nature and culture certainly combine in some ways to do this, but I'd put them together in a slightly different way. It's not that nature is over here and culture is over there, and they're both pulling us in different directions. Rather, nature made us for culture. I'm convinced that the distinctively human aspects of psychology were evolutionary adaptations that enabled us to have this new and better kind of social life, namely culture.

Culture is our biological strategy. It's a new and better way of relating to each other, based on shared information and division of labor, interlocking roles, and things like that. And it has worked: it's how we solve the problems of survival and reproduction, and it has worked pretty well for us in that regard. And so the distinctively human traits are often ones that are there to make this new kind of social life work.

Now, where does this leave us with morality? Well, its purpose is not so much to facilitate individual salvation or perfection, or whatever (I quoted MacIntyre on this in our discussions earlier today), but rather morality is the set of rules that enables people to live together. It serves the purpose of making the culture work, since culture depends on people cooperating with each other, on trust, shared assumptions, and things like that.

Although nature and culture are, in that sense, working together, there are some conflicts. In particular, nature has made us, at least in a very basic way, selfish. The brain is selfish, and maybe it's the selfish gene rather than the selfish individual, but there's still a natural selfishness, whereas culture needs people to overcome it to some degree, because you have to cooperate with others and do things that are detrimental to your short-term, and even your long-term, self-interest. In order for culture to work, you have to keep your promises, wait your turn, pay your taxes, maybe even send your offspring into battle to risk their lives. It goes against the grain, biologically. But these are the sorts of things that morality promotes, to get people to overcome their natural selfish impulses and do things that make the system work. And that benefits everyone in the long run.

Morality does this, and of course laws do, too. We haven't said that much about laws, but laws regulate behavior in a lot of the same ways that morality does. They prescribe a lot of the same things, restraining self-interest so that what is better for the group gets done and the system operates effectively. There is a big difference between laws and morals, though, mainly in the force behind them. In practice, people do moral things out of concern for their reputation, which rests, therefore, on long-term relationships. If you cheat someone you're living next door to, they're going to know it for the rest of your life, and other people are going to know it, and you'll be punished, and it will compromise your outcomes in the long run.

As society got larger and more complex, with more interactions between strangers, laws had to step in to take reputation's place, because you can cheat a stranger whom you'll never see again and get away with it. Anyway, you're seeing here the neglected interpersonal dimension in understanding morality. Morality depends on relationships. And it's there, again, to regulate interpersonal behavior so that people cooperate, so that the system can work.

Now, consider some of the traits that evolved to enable people to overcome these selfish impulses so as to do what's best for the group and the system. Among those, self-regulation is central. Part of why I got invited here, I think, is that I have a history of doing research on self-regulation and self-control. The essence of self-regulation is to override one response so that you can do something else — usually something that's more desirable, better either in the long run or better for the group.

That is why we've called self-control the moral muscle. Let me unpack that and comment on both parts. Self-control is moral in the sense that it enables you to do morally good things, sometimes at a cost to self-interest. If you look at lists of morals, whether it's the Seven Deadly Sins or the Ten Commandments or a list of virtues, they're mostly about self-control. The Seven Deadly Sins (gluttony, wrath, greed, and the rest) are mostly self-control failures. Likewise, the virtues are exemplary patterns of self-control. So that's the moral part of the 'moral muscle': a capacity that enables us to perform these moral actions, which are good for the group, even when that means overcoming short-term self-interest.

The muscle part emerged from our lab work, independent of any moral aspect. There seems to be a limited capacity to exert self-control, and it gets used up. It's like a muscle: it gets tired. As we found in many studies, after people do some kind of self-control task and then go to a different context with completely different self-control demands, they do worse, as if they had used a muscle and it had gotten tired.

So it's a limited resource that gets exhausted. There are other aspects of the muscle analogy, too. If you exercise self-control regularly, you get stronger. I wouldn't want people to say, well, if self-control and morality are a limited capacity, I'm never going to exert self-control because I don't want to waste it. No, au contraire: you should exert it regularly; it will make you stronger and give you greater capacity to do things.

And we certainly find that when people have exerted this muscle and it's tired, so to speak, or when they've depleted their resources ("ego depletion" is the term for it), behavior drifts toward being less moral. We found, for example, that people are more gratuitously aggressive toward somebody else after they've exerted self-control and used up some of their "moral muscle" resources.

In a study on cheating and stealing that we published a couple of years ago, people had to type up an essay about what they had done recently, either without using words containing the letter 'a' or without using words containing the letter 'x.' A lot more words contain an 'a' than an 'x,' so the former requires much more self-control and overriding. When you're trying to compose a sentence and you keep reaching the point of, oh look, there's an 'a' in that word, and you have to override it and come up with another word, that depletes people's resources. So people were more depleted in the 'a' condition than in the 'x' condition.

Afterwards, they went to another room, supposedly for another experiment, where they took an arithmetic test and were paid for the number of problems they got right. They either scored it themselves or the experimenter scored it for them. Of the four conditions (depleted or not, and self-scored or experimenter-scored), all got about the same number right, except for the depleted people who scored their own tests: they somehow claimed to have gotten a whole lot more right. It was not plausible that they were actually getting smarter by virtue of having typed without using words containing the letter 'a,' because when the experimenter scored the tests, he couldn't find any difference; everyone got about the same number right. But when nobody was checking and their answer sheet was shredded and they reported how many they got correct, suddenly they got a whole lot more correct. That suggests an increase in lying and cheating, and effectively stealing money from the experimenter.
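The logic of that design is a 2x2 factorial (depleted or not, crossed with self-scored or experimenter-scored), and the signature of cheating is an interaction: reported scores rise only in the depleted, self-scored cell. A toy simulation of that pattern, with invented effect sizes that are not Baumeister's data:

    import random

    random.seed(0)

    def reported_score(depleted, self_scored):
        """Toy model: everyone truly solves about 10 problems; only depleted
        self-scorers inflate their report. Effect sizes are invented."""
        actual = 10 + random.randint(-2, 2)
        inflation = random.randint(3, 6) if (depleted and self_scored) else 0
        return actual + inflation

    for depleted in (False, True):
        for self_scored in (False, True):
            mean = sum(reported_score(depleted, self_scored) for _ in range(200)) / 200
            print(f"depleted={depleted}, self_scored={self_scored}: mean reported = {mean:.1f}")

Three cells sit at baseline and one does not; that asymmetry, rather than any overall effect of depletion, is what points to lying rather than to actual improvement.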

There are some other findings, too: depleted people are more likely to engage in sexual misbehavior, and so on. So moral behavior does seem to decline when people have depleted their moral-muscle capacity. More recently, we've been working with Marc Hauser on whether depletion changes how people make moral judgments of others; that's proving a little more slippery. But then, this kind of process is geared toward regulating your own behavior more than your thinking about others, so it's not surprising that behavior is where the effect shows up.

A couple of other things we've found are relevant here. Choice seems to deplete the same muscle as self-control; it's the same resource. If we have people make a lot of choices (which of these two products would you buy, and so on), afterwards their self-control is impaired, too. So making choices uses up the resource needed for self-control. That resource seems to be tied to physiological processes: we found changes in glucose levels in the bloodstream. Something about performing these advanced acts of self-control uses up the resource, and the depletion shows up in blood glucose.

If you give people a drink after the manipulation (we give them lemonade mixed either with sugar or with Splenda), those who got Splenda still act badly, but those who got sugar, which delivers a quick dose of glucose to the bloodstream, suddenly behave with more self-control: in some cases more morally, making more rational decisions, and so forth. And conversely, if they're depleted from exerting self-control, their choice processes become more shallow.

Putting self-regulation and choice together, you start to think that the same capacity, the same resource, is used for choosing and for self-control, and maybe for a couple of other things as well; there are some data on initiative, for instance. So instead of talking about it in terms of regulatory depletion, we've been trying to come up with a bigger term, and that's how I got to talking about free will.

Free will is another of these topics where people are very emotional on both sides and have a lot of passionate feelings, and I don't really want to deal with that. Let me try to forestall some of it by saying that in trying to develop a scientific theory of free will, I assume there is nothing supernatural, nothing noncausal, in there. The aim is to understand the processes by which people make these choices and exert self-control. And I think there is a social reality corresponding to this: behaving with self-control, behaving morally, making moral choices and certain other kinds of choices are the things we associate with free will. So in that sense, there's a real phenomenon there. Whether it deserves the term 'free will' depends on which definition you use.

I'm surprised that at this conference, and at a conference in Israel on morality that I attended with Bloom and Pizarro, nobody really mentioned free will in any talk or discussion. Yet it seems to me a natural way to build and extend this theory. Part of my interest in the topic is that morality assumes a person can do different things. It says this act is good and that act is bad, as a way to persuade you to do one thing rather than the other.

Likewise, moral judgments about people are based on the assumption that the person could have acted differently; essentially, the moral judgment says the person should have acted differently. Legal judgments, of course, work very much the same way.

I see I'm well ahead of schedule here, so let me comment on a couple of other points. On evolution and morality: there was a recent article by David Barash saying that there is a fairness instinct you can see in other animals. He cited Frans de Waal's study in which monkeys got mad if they saw another monkey getting a nicer treat for the same action; the suggestion is that this shows a fairness instinct. Again, I'm skeptical of reductionism, and we need to attend to both the continuities and the differences between human and animal behavior. To call that a fairness instinct seems a little overstated; it's a step in that direction, but it's not that impressive.

If you have two dogs and you give one of them a treat, the other looks at you as if to say, well, what about me? But what you don't see is the over-benefitted one complain. The other dog doesn't say, I'll share my biscuit with you, or, I'm not going to eat mine until the other dog gets one too. Yet human behavior does show some of those patterns. So I think if we want to see a fairness instinct, we need to see both the over-benefitted and the under-benefitted one complain. And to get to the full human pattern, perhaps you even need a third party saying, no, you got more than this one and that's not fair, and intervening to redistribute, as happens all over the world in human societies.

At the Israel conference, Paul Bloom talked about moral progress, too, and Steven Pinker has a recent book on that as well, I gather. Yes, the world has gotten to be a better place, but again, I'm not sure that we're morally better people. The laws I mentioned are very much responsible for accomplishing that: a lot of third-party intervention telling people not to do that. It reflects some things that are new in human culture, perhaps not seen so much in other creatures.

Let me draw some conclusions here. Culture, I want to say, is humankind's biological strategy. It's our new way of solving the basic biological problems of survival and reproduction. We take our sick children to the hospital; we ask the government to give tax breaks for research, or to provide tax breaks to families with children, or whatever. Culture has been very successful; it has worked very well for us. But it requires a lot of advanced psychological traits.

One might ask, if culture works so well, why don't other species use it? Well, they don't have the capacities; culture requires advanced psychological capabilities. So human evolution maybe added some new things, or at least took what was small in other animals and made it larger and more central. Self-control is present in other animals, but it had to be developed much more thoroughly in humans, because culture has a lot more rules and regulations, the laws and morals and so on. So there is a lot more need to override your behavior to bring it into line with standards.

Morality in the full-fledged sense comes with culture. I'm going with the cultural materialist view that culture is a system that basically has to provide for the material and social needs of individuals, and so it regulates behavior to that end, and morality comes with it. Morality tells people to override their self-interest, at least their short-term self-interest, and to follow the system's rules. The system works, and because of that we all live better, but we all have to cooperate to a significant degree in order for the system to work. And so morality is the set of rules that helps us do that.

Self-control, then, is one of the crucial mechanisms that had to improve in humans to enable culture to succeed. It's an inner capacity, limited and energy-expensive, for altering your behavior, overriding responses, and changing what you do to fit the requirements of the system so that it will work. And then free will: again, you can see continuity with animals. There is choice and agency in other creatures, and free will is perhaps a more advanced form of agency that evolved out of that, adapted to working in culture, using meaningful reasons and operating within the context of the shared group.

It enables the human animal to relate to its social and cultural environment. The basic agency of a squirrel, say, enables that little animal to deal with its physical environment; free will, as an advanced form of agency, enables the human being to deal with its cultural environment. It recognizes that as humans we can be somewhat more than animals: we can control our behavior in these advanced ways, which is what the system needs in order to work. And once the system works for us, it provides the immense benefits that it has.



Roy Baumeister

At present I am reading, thinking, and writing about self-control, choice, free will, addiction, and related matters. These are highly relevant to morality. The essence of the idea of free will is that a person is/was capable of acting differently. Moral principles only make sense on the basis of that assumption, insofar as they exhort people to make responsible choices by acting in one manner rather than the other. Moral judgments often depend on whether a person acted on his or her own free will and essentially state whether the person should have acted differently.

I have been led by a circuitous route to the conclusion that the human being was designed by nature for culture: That is, the distinctively human traits are those that enable us to participate in this new kind of social life, namely culture. Culture is humankind’s biological strategy. To understand human traits, therefore, it is useful to ask how each trait would have been selected for as a way of helping an individual flourish in this new kind of social environment.

Social life inevitably breeds conflict, because different group members want the same food or mate or resource. Evolution adapted predatory aggression to resolve intraspecies conflicts, such as by making it often non-lethal. Human culture has however developed alternative means of resolving disputes, including morality. Morality is a system that allows group members to live together in reasonable peace and productive harmony, not least by restraining natural tendencies toward selfishness. Therefore, to be cultural, humans had to evolve a capacity to behave (and think and feel) morally. The similarities and the differences among various moral systems can be understood on the basis of the requirements of group life.

My interest in free will is not focused on the old debate of whether people do or do not have it. Rather, there is a real social phenomenon associated with the idea of free will, and that is what I seek to understand. For me, this developed out of studies on self-control, which we have dubbed “the moral muscle” because it enables individuals to overcome selfish and other antisocial impulses to do what is best for the group. Most virtues embody effective self-control, and most vices are failures thereof. The link between moral (and legal) responsibility and perceived free will adds another dimension to this social reality.

When I first studied moral philosophy back in college, I got stuck on the question of why one should bother obeying moral rules, especially if doing so goes against self-interest, and apart from fear of punishment. Decades later, I am seeing a two-pronged answer that reconciles with (enlightened) self-interest. First, obeying moral rules helps the cultural system to operate, and the health and prosperity (and survival and reproduction) of individuals depends heavily on the effective operation of the system. Second, cultural beings have moral reputations, and others treat them well or badly on the basis of those reputations.

ROY BAUMEISTER is Francis Eppes Eminent Scholar and head of the social psychology graduate program at Florida State University. He received his PhD in 1978 from Princeton in experimental social psychology and maintains an active laboratory, but he also seeks to understand human nature in the big picture, such as by tackling broad philosophical problems with social science methods. He has nearly 450 publications. He is among the most widely influential psychologists in the world, as indicated by being cited over a thousand times each year in the scientific literature. His 27 books include Meanings of Life, Evil: Inside Human Violence and Cruelty, The Cultural Animal: Human Nature, Meaning, and Social Life, Is There Anything Good about Men?, and the forthcoming (with John Tierney) Willpower: The Rediscovery of Humans’ Greatest Strength.

Links:

Roy Baumeister Home Page
The Baumeister & Tice Lab
Roy Baumeister, in Wikipedia

Articles & Press:

Cultural Animal, Roy Baumeister's Psychology Today Blog
Is There Anything Good About Men?
Exploding the Self-Esteem Myth, in Scientific American
Ego Depletion: Is the Active Self a Limited Resource?


Roy Baumeister's Edge Bio page


THE NEW SCIENCE OF MORALITY
An Edge Conference

PAUL BLOOM

What I want to do today is talk about some ideas I've been exploring concerning the origin of human kindness. And I'll begin with a story that Sarah Hrdy tells at the beginning of her excellent new book, "Mothers and Others." She describes herself flying on an airplane. It's a crowded airplane, and she's flying coach. She waits in line to get to her seat; later in the flight, food is going around, but she's not the first person to be served; other people are getting their meals ahead of her. And there's a crying baby. The mother's soothing the baby, the person next to them is trying to hide his annoyance, other people are coo-cooing at the baby, and so on.
               
As Hrdy points out, this is entirely unexceptional. Billions of people fly each year, and this is how most flights are. But she then imagines what would happen if every individual on the plane were transformed into a chimp. Chaos would reign. By the time the plane landed, there'd be body parts all over the aisles, and the baby would be lucky to make it out alive.
               
The point here is that people are nicer than chimps.

Paul Bloom Talk:
Text
Video




MP3 Audio Download

Paul Bloom Talk Permalink


[PAUL BLOOM:] I'd like to thank the Edge Foundation for putting together this workshop, and I'd also like to thank all of my colleagues here. It's because of the extraordinary theoretical and empirical work of the people in this room that the study of morality is, I think, the most exciting field in all of psychology. So I'm really glad to be included among this group.
       
What I want to do today is talk about some ideas I've been exploring concerning the origin of human kindness. And I'll begin with a story that Sarah Hrdy tells at the beginning of her excellent new book, "Mothers and Others." She describes herself flying on an airplane. It's a crowded airplane, and she's flying coach. She waits in line to get to her seat; later in the flight, food is going around, but she's not the first person to be served; other people are getting their meals ahead of her. And there's a crying baby. The mother's soothing the baby, the person next to them is trying to hide his annoyance, other people are coo-cooing at the baby, and so on.
               
As Hrdy points out, this is entirely unexceptional. Billions of people fly each year, and this is how most flights are. But she then imagines what would happen if every individual on the plane were transformed into a chimp. Chaos would reign. By the time the plane landed, there'd be body parts all over the aisles, and the baby would be lucky to make it out alive.
               
The point here is that people are nicer than chimps. Human niceness shows up in all sorts of other ways. Americans give hundreds of billions of dollars each year to charity.   Now, you might be cynical about some of that giving, but some of it seems to be genuinely motivated by concern for strangers. We leave tips at restaurants. We leave tips in our hotel rooms. This last one is striking: Some of us, when leaving this hotel, will leave money for the maid, even though this act has no possible selfish benefit. It doesn’t help our reputation; it won’t improve future service. We do it anyway, because we feel that it is right.
               
My favorite experiment on adult human niceness was done by Stanley Milgram many years ago. Milgram was a Yale psychologist most famous for his obedience experiments, in which he found that people would kill strangers if asked to do so in the right way. But he was also interested in niceness, and he did an experiment in which he left stamped, addressed envelopes scattered around New Haven. The question was how many of them would be picked up and mailed, and the answer was well over half. It wasn't indiscriminate, though: if the letter was addressed not to a person but to "Friends of the Nazi Party," people wouldn't mail it. Presumably they'd look at it, throw it in the garbage, and say to hell with that.

In a more recent study, another psychologist replicated the study but didn't even put stamps on the letters. Still, one in five letters came back. This is extraordinarily nice.
               
I'm a developmental psychologist and I'm interested in where this niceness comes from. It turns out that at least some of it seems to be hard-wired, emerging naturally. It is not taught.
               
The idea here was anticipated by Adam Smith hundreds of years ago. Adam Smith was the founder of modern economics, and he was very sophisticated when it came to human sentiment. He pointed out that when you see somebody in pain, you feel their pain — to at least some extent — as if it was yours. And you're motivated to make it go away, you're motivated to help. This is a primitive good that doesn't reduce to any other good.

It turns out that some such empathy exists even in babies. When babies hear crying, they'll start to cry themselves. Now, some very cynical psychologists have worried that this isn't empathy at all: babies are so stupid that when they hear another baby crying, they think they're crying themselves, so they get upset and cry some more. In response, though, other psychologists did experiments in which they exposed babies to tape-recorded sounds of their own cries and of other babies' cries. They found that babies cry more to the sounds of other babies than to their own cries, suggesting the response really is other-directed. Furthermore, when a baby sees someone in pain, even silent pain, the baby will get distressed. And as soon as babies are old enough to move their bodies around, they'll try to make the pain go away. They'll stroke the other person, or they'll try to hand over a toy or a bottle.

In some recent work that Roy Baumeister mentioned in passing, Felix Warneken and Michael Tomasello set up a clever experiment: they put toddlers in situations where nobody is looking at them, and then an adult comes in and has some sort of minor crisis, such as reaching for something and being unable to get to it, or trying to open a cabinet with his arms too full to open the door. And Warneken and Tomasello find that toddlers, more often than not, will spontaneously toddle over and try to help.

In my own research, I've been interested not so much in moral action or altruistic behavior, but in moral cognition, moral intelligence. And this is a series of studies that I've been doing in collaboration with Karen Wynn, my colleague at Yale, who runs the Yale Infant Lab, and a wonderful graduate student named Kiley Hamlin, who's now an Assistant Professor at the University of British Columbia.
               
We created a set of one-act morality plays. In each of these, there is a character who tries to do something, and there's a good guy and a bad guy. The characters are animated figures, simple geometrical objects, or puppets. For instance, in one of our studies, a character struggles to get up a hill. One guy comes and pushes him up; another guy comes and pushes him down. In another, a character is playing with a ball. He rolls the ball to another puppet; they look at each other, and the puppet rolls it back. He rolls the ball to a different puppet; they look at each other, and then this other puppet runs away with the ball. In a third one-act play, a puppet is trying to open a transparent box. The baby can see that there is a toy inside. One puppet comes and helps open the box and, later, a different puppet jumps on the box, slamming it closed.

These are three examples; we have a couple more scenarios in the works now. What we find is that if you ask toddlers of 19 months of age, "Who is the good guy?" and "Who is the bad guy?", they respond the same way adults do. They point to the proactive agent, the one who helps the character achieve his goals, as the good guy, and they point to the disrupter, the thwarter, as the bad guy.

Now, maybe that's not so exciting — these are fairly old kids. But what we've done is push the age lower and lower. In one set of studies, we present the baby with both characters and see which one the baby reaches for, which one the baby chooses. Keep in mind that everything is counterbalanced, and the person offering the choice is always blind to the roles of the different characters, to avoid the problem of unconscious cuing. Also, the parents have their eyes closed during the study.
       
We find that, down to six months of age, babies reach for the good guy. We also have neutral conditions, and these tell us that they'd rather reach for the good guy than for a neutral guy, but they'd rather reach for a neutral guy than for a bad guy. This suggests that there are two forces at work — they are drawn toward the good guy and drawn away from the bad guy.
        
In a recent study just published in the journal Developmental Science, we test three-month-olds. Now, three-month-olds are blobs; they are meatloaves. They can't coordinate their actions well enough to reach. But we know from the six-month-old study that before babies reach, they look to where they're going to reach. So for the three-month-olds, we record where they look. And, as predicted, they look to the good guy, not to the bad guy.

Does this show morality, a moral instinct? No. What it shows is that babies are sensitive to third-party interactions of a positive and negative nature, and that this influences how they behave toward these characters and, later on, how they talk about them. I think that is relevant to morality; I think it's a useful moral foundation. But how moral is it? Are these truly moral judgments? The honest answer is we don't know. This is something we're actively exploring, but, as you can imagine, when you're dealing with six-month-olds, it's difficult to study.
       
We are embarking on some experiments that try to address this issue, along with a Yale graduate student, Neha Mahajan. One aspect of mature morality is that you not only approach a good guy and avoid a bad guy; you also believe that a good guy should be rewarded and a bad guy should be punished. So we tested 19-month-olds to see whether they share this intuition. Using our usual paradigms, we have a good guy and a bad guy, and we ask the children to give a treat to one of them. What we find is that they usually give it to the good guy. We also have a punishment condition, where we say to the child: you have to take a treat from one of these characters. There, they tend to take it from the bad guy.

Recently, with nine-month-olds, we did a study looking at their notions of justice. And to do this, we have a two-act play. In the first act, you have a good guy and a bad guy. And they do their good guy/bad guy actions, the ones that I described before. In the second act, what happens is two more characters come in. In one condition, one of the characters rewards the good guy and the other character punishes the good guy. And we find that babies prefer, by reaching, the character who rewarded the good guy.
       
Now, this is not so surprising, because we had the previous finding that babies like positive actors; maybe that's all that's going on. The second condition is more interesting. You have a good guy and a bad guy; then one character comes in and rewards the bad guy, and another character comes in and punishes the bad guy. Here the babies robustly prefer the one who punishes the bad guy, suggesting that they will favor bad actions when they are done to those who are themselves bad. This suggests some rudimentary — and I'm happy to put this in scare quotes — some rudimentary sense of "justice".
               
There are other studies looking at baby morality, from Renée Baillargeon's lab at the University of Illinois and Luca Surian's lab at the University of Trento, as well as from other labs. These also support the idea that there is both a surprisingly precocious grasp of moral notions and a surprisingly precocious propensity for moral action. Now, some would argue that I could stop my talk here, because I've solved the problem I set out to answer. The human niceness we are interested in exists in babies; it is part of our hard-wired inheritance. We are, as Dacher Keltner put it, "born to be good". To the extent that you find a narrowing of this kindness in adults, it is due to the corrupting forces of culture and society.
       
This is not the argument I wish to make. I find the idea of an innately pure kindness extremely implausible. For one thing, our brains evolved through natural selection, and that means the main force that shaped our psyches was differential reproductive success. Our minds evolved through processes such as kin selection and reciprocal altruism. We should therefore be biased in favor of those who share our genes at the expense of those who don't, and in favor of those with whom we are in continued interaction at the expense of strangers.
       
Also, there is now a substantial amount of developmental evidence suggesting that the kindness we see early on is parochial. It is narrow. It applies to those a baby is in immediate contact with, and does not extend more generally until quite late in development.
       
Here are some sources of evidence for this claim. We've known for a long time that babies are biased towards the familiar when it comes to individuals. A baby will prefer to look at her mother's face rather than at the face of a stranger, and prefer to listen to her mother's voice rather than to the voice of a stranger. This bias also extends to categories. Babies prefer to listen to their native language rather than to a different language. Babies raised in white households prefer to look at white people rather than at black people; babies raised in black households prefer to look at black people rather than at white people.
       
We know that this last fact isn't because the babies know that they themselves are white or black, because babies raised in multi-ethnic environments show no such bias. It has to do with the people around them. And as they get older, this bias in preference translates into a bias in behavior. Young children prefer to imitate and learn from those who look like them and those who speak the same language as them. Around the age of nine months, babies show stranger anxiety — they avoid new people.
       
There are also studies with preschool children, older children, and adolescents showing that it is fairly easy to get them to favor their own group over others, even when the group is established under the most minimal and arbitrary circumstances. This builds on Tajfel's work on "minimal groups". For instance, in experiments by Bigler and others, you take a bunch of children and say: okay, kids, I have some red t-shirts and blue t-shirts, and I'm just going to give them to you. The children put on the t-shirts, so that now you have a red t-shirt group and a blue t-shirt group. Then you approach a child from the red t-shirt group and say: I have some candy to give out. You can't get any, but I'm asking you how I should give it to other people. Do you want me to give it to everybody equally, or to give more to the reds or more to the blues?
       
It turns out that children are biased to give more to their own group, even though they don't personally profit from the giving. And when asked about the properties of their group — who's nice, who's mean, who's smart, who's stupid — a child who has just put on a red t-shirt will tend to favor the red t-shirt group over the blue t-shirt group, even though it's perfectly clear that the groups were assigned on an arbitrary basis.
               
Yet another bit of bad news about human nature comes from economic games. Many of you are familiar with the ultimatum game; it is just one of a series of games thought up by behavioral economists that purport to show niceness among adults — that we are generous in certain ways. Now, I am highly skeptical about what these studies really show, and we can talk about that in the question period. But what's interesting for present purposes is that children behave quite differently from adults in these games.

I'll give you one example: the dictator game, which is actually even simpler than the ultimatum game. I choose two people at random. One of them, the subject, is lucky: he gets some money, say $100. He can then give as much as he wants to the other individual, anything from the entire $100 to nothing at all. The other person will never know who made the choice — it's entirely anonymous.
               
From a self-interested perspective, the subject should just keep all the money. But what you find is that people actually give — roughly 30 percent on average. Some people give nothing, some give half, and some even give more than half. This is surprisingly nice. Ernst Fehr and Simon Gächter recently did this with children. They set up a very simple version of the game: they gave children two candies and said to each child: you can either keep both candies or give one of them to this stranger.
               
Seven- and eight-year-olds will often choose to split. But younger children almost always keep both candies. So to the extent that there is generosity to strangers, it emerges late. Now, one problem with this simple game is that it pits two impulses against each other. The child might have an equity/kindness/fairness impulse, and so a desire to share; but the child also likes candies, and so has a desire to keep both of them. Maybe children's hunger simply trumps their generosity.
               
Fehr and Gächter explored this with another study. The child chooses between getting a candy and giving another person a candy, versus getting a candy and giving the other person nothing. Now, from a consequentialist point of view, this is not a head-scratcher: there are no competing impulses; one can be nice without suffering any penalty. But, until about the age of seven or eight, children are perfectly indifferent. The numbers are about 50 percent — they don't care. It's not that they hate the other anonymous person and want to deprive him of the candy; it's that they have no feelings either way.
       
This shouldn't surprise us. Maybe it's even better than we could have expected. The dominant trend in humanity has been to view strangers — non-relatives, those from other tribes — with hatred, fear, and disgust. Jared Diamond talks about the groups he encountered in Papua New Guinea, and he points out that for an individual to leave his or her tribe and simply walk into another, strange tribe would be tantamount to suicide. Others have observed that the words human groups use to describe themselves and others reflect this same animus towards strangers. Groups tend to have a word for themselves that often means something like "person" or "human". Then they have a word for other people. Sometimes this is just "The Others", as in the TV show "Lost". But sometimes they describe the other group using the same word they use for prey, or food.
               
So there's a puzzle, then: the niceness we now see in the world, from at least some people, seems to clash with our natural morality, which is nowhere near as nice. How did we end up bridging the gap? How have we gotten so much nicer?
               
Note that I've been focusing here on questions of our kindness to strangers, but the same question could be asked about other aspects of morality, such as the origin of new moral ideas — that slavery is wrong, or that we shouldn't be sexist or racist.
               
These are deep puzzles. I’ll end this talk with two compatible theories of the emergence of mature human kindness.

The first involves increased interdependence. This is something Robert Wright has argued in a series of books, and Peter Singer and Steven Pinker have also discussed it, in different forms. The idea is that as you come into contact with more and more people, in situations of interdependence — where your life is improved by being able to connect with the other person, where the relationship is non-zero-sum — you will come to care about their fates.

This is niceness grounded in enlightened selfishness. As Robert Wright once said in a talk, "One of the reasons I don't want to bomb the Japanese is that they built my minivan." Because he is in a commercial relationship with these people, his compassion gets extended to where it wouldn't otherwise have been.
               
There is some support for this view, coming from a study by Joseph Henrich and his colleagues that was published in Science a few months ago. Henrich et al. looked at 15 societies and had the people in these societies play a series of economic games. They found considerable variation in how nice people are to anonymous strangers, and then did some analyses to see what determines this niceness. One finding is that capitalism makes people nicer. That is, immersion in a market economy has a significant relationship with how nice we are to anonymous strangers, presumably because in a market economy you're used to dealing with other people in long-term relationships, even if they're not your family and they're not your friends. The second factor was membership in a world religion, such as Christianity or Islam. This makes people nicer, perhaps because it immerses people in a larger social group and entrains them to deal with strangers.

Another explanation for the increase in human niceness is the power of stories. One of the consequences of fiction and of journalism is that they can bring distant people closer to you; you come to think of them as if they were kin or neighbors. This can extend one's sympathies toward individuals, but, as Martha Nussbaum and many others have argued, it can also expand one's sympathies toward groups.

Consider moral progress in the United States. I think the great moral change in our society over the last 50 to 100 years has been the changing attitudes of whites towards African-Americans, and the great moral change of the last ten years has been in straight people's attitudes towards gay people. In both cases, the engine driving the change was not philosophical argument or theological pronouncement or legal analysis; it was fiction. It was imagination. It was being exposed to members of these other groups in sympathetic contexts. I would argue, more specifically, that one of the great forces of moral change in our time is the American sitcom.
 
I'll end by saying that this speaks to an issue that occupies many people in this room: the role of rational deliberation in morality. There seems to be a contradiction here. On the one hand, social psychologists have a million demonstrations that people are impervious to rational argument; the reason I've come to my views about slavery or gay people or whatever is most likely not that somebody gave me a really persuasive argument. On the other hand, we know full well that rational thought has made a difference in the world. As just one recent example, Peter Singer's thinking on issues such as how to treat non-human animals has changed the world.

I think one way out of this — and this is very similar to something that Jonathan Haidt has argued — is that reasons do affect us, but they do so indirectly, through the medium of emotions. If so, this suggests a research project of tremendous importance, one that asks: How do people come to have new moral ideas and how do they convey these ideas in ways that persuade others? 
               
I've made three arguments here. The first is that humans are, in a very interesting way, nice. The second is that we have evolved a moral sense; this moral sense is powerful, can explain much of our niceness, and is far richer than many empiricists would have believed. But the third is that this moral sense is not enough. The accomplishments we see and so admire in our species are due to factors other than our evolutionary history. They are due to our culture, our intelligence, and our imagination.


PAUL BLOOM TALK

Paul Bloom

The human moral sense is fascinating. Putting aside the intriguing case of psychopaths, every normal adult is appalled by acts of cruelty, such as the rape of a child, the swindling of the elderly, or the humiliation and betrayal of a lover. Every normal adult is also uplifted by acts of kindness, like those of heroes who jump onto subway tracks to rescue fallen strangers from oncoming trains. There is a universal urge to help those in need and to punish wrongdoers; we feel pride when we do the right thing and guilt when we don't.

Other moral feelings and impulses aren't so universal. As your typical liberal academic, I am morally appalled by tea party demonstrators, abortion clinic bombers, the NRA, the use of waterboarding to interrogate prisoners, and Sarah Palin. But I have to swallow the fact that roughly half of my fellow Americans feel just the same about gay rights demonstrators, abortionists, the ACLU, and Barack Obama.

Where does this all come from? How much of it is learned? Why are some moral judgments universal and others violently conflicting?

My answer is this: Humans are born with a hard-wired morality. A deep sense of good and evil is bred in the bone. I'm aware that this might sound outlandish, but it's supported now by research in several laboratories, including my own research at Yale. Babies and toddlers can judge the goodness and badness of others' actions; they want to reward the good and punish the bad; they act to help those in distress; they feel guilt, shame, pride, and righteous anger. I am admittedly biased, but I think these are the most exciting findings to come out of psychology in many years.

PAUL BLOOM is a professor of psychology at Yale University. His research explores how children and adults understand the physical and social world, with a special focus on morality, religion, fiction, and art. He has won numerous awards for his research and teaching. He is past president of the Society for Philosophy and Psychology, and co-editor of Behavioral and Brain Sciences, one of the major journals in the field.

Dr. Bloom has written for scientific journals such as Nature and Science, and for popular outlets such as The New York Times, the Guardian, and the Atlantic. He is the author or editor of four books, including How Children Learn the Meanings of Words and Descartes' Baby: How the Science of Child Development Explains What Makes Us Human. His newest book, How Pleasure Works: The New Science of Why We Like What We Like, was published in June 2010.

Links:

Paul Bloom's Yale University Home Page
Paul Bloom's CV
Yale Mind and Development Lab

Articles & Press:

The Moral Life of Babies, in New York Times Magazine
How Do Morals Change, in Nature
Interview, on Big Think
The Long and Short of It, in New York Times
No Smiting, in New York Times Book Review
Natural Happiness, in New York Times Magazine
What's Inside a Baby's Head, in Slate
First Person Plural, in Atlantic

NEW YORK TIMES - DOT EARTH
August 28, 2010

ON HARVARD MISCONDUCT, CLIMATE RESEARCH AND TRUST
By Andrew C. Revkin

Earlier this week I was invited to join an e-mail discussion involving a variegated array of scientists and science communicators exploring a provocative question posed by one of them (I'll leave the identities out, but will invite them to weigh in here).

The conversation encompassed the case of Marc Hauser, the Harvard specialist in cognition found guilty of academic misconduct, and assertions that climate research suffered far too much from group think, protective tribalism and willingness to spin findings to suit an environmental agenda.

The question? "Maybe science—in some fields, not necessarily all of them—is much more corrupt than anyone wants to acknowledge." ...


WIENER ZEITUNG (Vienna)

THE CAPRICIOUS WAY INTO THE FUTURE (Der launische Weg in die Zukunft)
Leading researchers on discoveries that fundamentally change life on earth

By Eva Stanzl

...But what if leading scientists offer philosophical reflections on discoveries that could change our future? Would they, too, exude anxiety and pessimism, particularly since the state of knowledge keeps deepening? John Brockman, a former performance artist, editor of the Internet magazine "Edge" and head of a literary agency in New York, has gathered just such reflections. In the volume he edited, "What idea will change everything?" (Fischer), science takes a sober look at the future. Instead of painting colorful visions of what is to come, the authors explore the possibilities of existing innovations. There is no trace of fear here, but none of utopia either. ...

Google Translation | German Language Original


EL MUNDO
August 26, 2010

WHAT IS A MEMORY
Arcadi Espada

A correspondence with Sam Cooke:

Dear Researcher:

I am a Spanish journalist who works for the newspaper El Mundo and is interested in issues of neuroscience. I read with great interest "Improving memory, erasing memory: the future of our past," the Spanish translation of your article included in What's Next? Dispatches on the Future of Science, edited by Max Brockman. In the article you make several references to the future possibility of erasing certain memories and of adding new ones. What concerns me now is not the plausibility of these hypotheses, but something prior to them: What does it mean to isolate a memory?

Google Translation | Spanish Original


MEMBRANA (Russia)
July 22, 2010

QUANTUM TIME MACHINE RESOLVES THE PARADOX OF KILLING GRANDFATHER

Whatever happens to the hero of a standard action movie, we know beforehand that he will survive. It's a law of the genre. Now scientists have substantiated a similar law of nature for displacements in time. If the hypothesis is correct, a time traveler will never be able to kill his grandfather in the past: something will deflect the bullet, knife, or brick at the last minute.

Google Translation | Russian Language Original


USA TODAY
August 8, 2010

NEUROSCIENCE OR 'NEUROSEXISM'? BOOK CLAIMS BRAIN SCANS SELL SEXES SHORT
By Dan Vergano

"There are real, and in some cases sizable, sex differences with respect to some cognitive (thinking) abilities," psychologist Diane Halpern of Claremont (Calif.) McKenna College argued in a 2008 Edge Foundation essay. "But we have no reason to expect that complex phenomena like cognitive development have simple answers," she added, arguing that neither brain wiring nor discrimination alone can explain the differences between men and women.


AFTENPOSTEN (Norway)
August 6, 2010

ANOTHER TYPE OF THINKING: TO BE AN "INTELLECTUAL" TODAY REQUIRES KNOWLEDGE OF SCIENCE AND TECHNOLOGY

Bjørn Vassnes

John Brockman was a literary agent for Richard Dawkins and Steven Pinker, among other leading figures of what he called "the third culture," and he created a digital meeting place, edge.org, where many of the world's sharpest minds regularly participate in interesting, but understandable discussions on everything from the Internet's effect on the human brain to the root causes behind terrorism.

Google Translation | Norwegian Original


STRAITS TIMES SINGAPORE
July 31, 2010

HAS THE NET STALLED OUR THINKING?
By Andy Ho

EVERY year, a United States-based non-profit group called The Edge Foundation poses a big question to renowned thought leaders.

This year, 172 individuals were asked to talk about the Internet. Here is a sample of the most interesting responses just posted on its read-only website. ...


subscribe

THE EDGE ANNUAL QUESTION BOOK SERIES
Edited by John Brockman

"An intellectual treasure trove"
San Francisco Chronicle


THIS WILL CHANGE EVERYTHING: IDEAS THAT WILL SHAPE THE FUTURE
(*)
Edited by John Brockman

Harper Perennial

NOW IN BOOKSTORES AND ONLINE!


Contributors include: RICHARD DAWKINS on cross-species breeding; IAN McEWAN on the remote frontiers of solar energy; FREEMAN DYSON on radiotelepathy; STEVEN PINKER on the perils and potential of direct-to-consumer genomics; SAM HARRIS on mind-reading technology; NASSIM NICHOLAS TALEB on the end of precise knowledge; CHRIS ANDERSON on how the Internet will revolutionize education; IRENE PEPPERBERG on unlocking the secrets of the brain; LISA RANDALL on the power of instantaneous information; BRIAN ENO on the battle between hope and fear; J. CRAIG VENTER on rewriting DNA; FRANK WILCZEK on mastering matter through quantum physics.


"a provocative, demanding clutch of essays covering everything from gene splicing to global warming to intelligence, both artificial and human, to immortality... the way Brockman interlaces essays about research on the frontiers of science with ones on artistic vision, education, psychology and economics is sure to buzz any brain." (Chicago Sun-Times)

"11 books you must read — Curl up with these reads on days when you just don't want to do anything else: 5. John Brockman's This Will Change Everything: Ideas That Will Shape the Future" (Forbes India)

"Full of ideas wild (neurocosmetics, "resizing ourselves," "intuit[ing] in six dimensions") and more close-to-home ("Basketball and Science Camps," solar technology"), this volume offers dozens of ingenious ways to think about progress" (Publishers Weekly — Starred Review)

"A stellar cast of intellectuals ... a stunning array of responses...Perfect for: anyone who wants to know what the big thinkers will be chewing on in 2010. " (New Scientist)

"Pouring over these pages is like attending a dinner party where every guest is brilliant and captivating and only wants to speak with you—overwhelming, but an experience to savor." (Seed)

* Based on The Edge Annual Question — 2009: "What Will Change Everything?"

Edge Foundation, Inc. is a nonprofit private operating foundation under Section 501(c)(3) of the Internal Revenue Code.