Question Center

Edge 323 — July 29, 2010
14,400 words


An Edge Seminar

Roy Baumeister, Paul Bloom, Joshua D. Greene, Jonathan Haidt,
Sam Harris, Joshua Knobe, Elizabeth Phelps, David Pizarro



David Brooks, New York Times
Andrew Sullivan, The Daily Dish
Jordan Mejias, Frankfurter Allgemeine Zeitung



An Edge Conference

Roy Baumeister, Paul Bloom, Joshua D. Greene, Jonathan Haidt,
Sam Harris, Joshua Knobe, Elizabeth Phelps, David Pizarro

The Mayflower Inn
Washington, CT
Eastover Farm
Bethlehem, CT

Tuesday July 20 - Thursday, July 22, 2010

Edge Events at Eastover Farm
The Edge Dinner


By John Brockman

Something radically new is in the air: new ways of understanding physical systems, new ways of thinking about thinking that call into question many of our basic assumptions. A realistic biology of the mind, advances in evolutionary biology, physics, information technology, genetics, neurobiology, psychology, engineering, the chemistry of materials: all are questions of critical importance with respect to what it means to be human. For the first time, we have the tools and the will to undertake the scientific study of human nature.

This began in the early seventies, when, as a graduate student at Harvard, evolutionary biologist Robert Trivers wrote five papers that set forth an agenda for a new field: the scientific study of human nature. In the past thirty-five years this work has spawned thousands of scientific experiments, new and important evidence, and exciting new ideas about who and what we are, presented in books by scientists such as Richard Dawkins, Daniel C. Dennett, Steven Pinker, and Edward O. Wilson, among many others.

In 1975, Wilson, a colleague of Trivers at Harvard, predicted that ethics would someday be taken out of the hands of philosophers and incorporated into the "new synthesis" of evolutionary and biological thinking. He was right.

Scientists engaged in the scientific study of human nature are gaining sway over scientists and others in disciplines that study social actions and human cultures independently of their biological foundation.

Nowhere is this more apparent than in the field of moral psychology. Using babies, psychopaths, chimpanzees, fMRI scanners, web surveys, agent-based modeling, and ultimatum games, moral psychology has become a major convergence zone for research in the behavioral sciences.

So what do we have to say? Are we moving toward consensus on some points? What are the most pressing questions for the next five years? And what do we have to offer a world in which so many global and national crises are caused or exacerbated by moral failures and moral conflicts? It seems like everyone is studying morality these days, reaching findings that complement each other more often than they clash.



Culture is humankind’s biological strategy, according to Roy F. Baumeister, and so human nature was shaped by an evolutionary process that selected in favor of traits conducive to this new, advanced kind of social life (culture). To him, therefore, studies of brain processes will augment rather than replace other approaches to studying human behavior, and he fears that the widespread neglect of the interpersonal dimension will compromise our understanding of human nature. Morality is ultimately a system of rules that enables groups of people to live together in reasonable harmony. Among other things, culture seeks to replace aggression with morals and laws as the primary means to solve the conflicts that inevitably arise in social life. Baumeister’s work has explored such morally relevant topics as evil, self-control, choice, and free will. [More]

According to Yale psychologist Paul Bloom, humans are born with a hard-wired morality. A deep sense of good and evil is bred in the bone. His research shows that babies and toddlers can judge the goodness and badness of others' actions; they want to reward the good and punish the bad; they act to help those in distress; they feel guilt, shame, pride, and righteous anger. [More]

Harvard cognitive neuroscientist and philosopher Joshua D. Greene sees our biggest social problems — war, terrorism, the destruction of the environment, etc. — arising from our unwitting tendency to apply paleolithic moral thinking (also known as "common sense") to the complex problems of modern life. Our brains trick us into thinking that we have Moral Truth on our side when in fact we don't, and blind us to important truths that our brains were not designed to appreciate. [More]

University of Virginia psychologist Jonathan Haidt's research indicates that morality is a social construction which has evolved out of raw materials provided by five (or more) innate "psychological" foundations: Harm, Fairness, Ingroup, Authority, and Purity. Highly educated liberals generally rely upon and endorse only the first two foundations, whereas people who are more conservative, more religious, or of lower social class usually rely upon and endorse all five foundations. [More]

The failure of science to address questions of meaning, morality, and values, notes neuroscientist Sam Harris, has become the primary justification for religious faith. In doubting our ability to address questions of meaning and morality through rational argument and scientific inquiry, we offer a mandate to religious dogmatism, superstition, and sectarian conflict. The greater the doubt, the greater the impetus to nurture divisive delusions. [More]

A lot of Yale experimental philosopher Joshua Knobe's recent research has been concerned with the impact of people's moral judgments on their intuitions about questions that might initially appear to be entirely independent of morality (questions about intention, causation, etc.). It has often been suggested that people's basic approach to thinking about such questions is best understood as being something like a scientific theory. He has offered a somewhat different view, according to which people's ordinary way of understanding the world is actually infused through and through with moral considerations. He is arguably most widely known for what has come to be called "the Knobe Effect" or "the Side-Effect Effect." [More]

NYU psychologist Elizabeth Phelps investigates the brain activity underlying memory and emotion. Much of Phelps' research has focused on the phenomenon of "learned fear," a tendency of animals to fear situations associated with frightening events. Her primary focus has been to understand how human learning and memory are changed by emotion and to investigate the neural systems mediating their interactions. A recent study published in Nature by Phelps and her colleagues shows how fearful memories can be wiped out for at least a year using a drug-free technique that exploits the way that human brains store and recall memories. [More]

Disgust has been keeping Cornell psychologist David Pizarro particularly busy, as many researchers have implicated it as an emotion that plays a large role in moral judgment. His lab results have shown that an increased tendency to experience disgust (as measured using the Disgust Sensitivity Scale, developed by Jon Haidt and colleagues) is related to political orientation. [More]

[EDITOR'S NOTE: Marc Hauser, one of the nine participants at the conference, has withdrawn his contribution.]

Among the members of the press in attendance were: Sharon Begley, Newsweek; Drake Bennett, Ideas, Boston Globe; David Brooks, OpEd Columnist, New York Times; Daniel Engber, Slate; Amanda Gefter, Opinion Editor, New Scientist; Jordan Mejias, Frankfurter Allgemeine Zeitung; Gary Stix, Scientific American; Pamela Weintraub, Discover Magazine.


Each of the nine participants led a 45-minute session on Day One, consisting of a 25-minute talk followed by 20 minutes of discussion.

Day Two consisted of two 90-minute open discussions on "The New Science of Morality". The first session, "Consensus/Outstanding Disagreements", explored the scientific aspects of where we are, how much consensus we have, and what empirical or theoretical questions are still outstanding in the science of morality. The second session, "Applications/Implications", gave the participants an opportunity to think big about how the science of morality can be applied to make the world a better place, make governments work better, and improve corporate governance, law, the Internet, etc. The goal for Day Two: to begin work on a consensus document on the state of moral psychology, to be published on Edge in the near future.

We are pleased to make the entire 10 hours of talks and discussions available to the Edge community. Over the next month we will serialize the conference by rolling out one or two 45-minute sessions as an Edge Edition. Each will include HD video of the 25-minute talk (with complete text), the 20-minute discussion, and a downloadable audio MP3 of the talk. We will end the series with the two ninety-minute discussions on "The New Science of Morality".

We begin here with Jonathan Haidt's talk, followed by the discussion. ...


See below the recent pieces by David Brooks in The New York Times, and Jordan Mejias in Frankfurter Allgemeine Zeitung. ...



Howard Gardner, Geoffrey Miller, Brian Eno, James Fowler, Rebecca Mackinnon, Jaron Lanier, Eva Wisten, Brian Knutson, Andrian Kreye, Anonymous, Alison Gopnik, Robert Trivers ...


An Edge Seminar


I just briefly want to say, I think it's also crucial, as long as you're going to be a nativist and say, "oh, you know, evolution, it's innate," you also have to be a constructivist. I'm all in favor of reductionism, as long as it's paired with emergentism. You've got to be able to go down to the low level, but then also up to the level of institutions and cultural traditions and, you know, all kinds of local factors. A dictum of cultural psychology is that "culture and psyche make each other up." You know, we psychologists are specialists in the psyche. What are the gears turning in the mind? But those gears turn, and they evolved to turn, in various ecological and economic contexts. We've got to look at the two-way relations between psychology and the level above us, as well as the reductionist or neural level below us.


[JONATHAN HAIDT:] As the first speaker, I'd like to thank the Edge Foundation for bringing us all together, and bringing us all together in this beautiful place. I'm looking forward to having these conversations with all of you.

I was recently at a conference on moral development, and a prominent Kohlbergian moral psychologist stood up and said, "Moral psychology is dying."  And I thought, well, maybe in your neighborhood property values are plummeting, but in the rest of the city, we are going through a renaissance. We are in a golden age.

My own neighborhood is the social psychology neighborhood, and it's gotten really, really fun, because all these really great ethnic groups are moving in next door. Within a few blocks, I can find cognitive neuroscientists and primatologists, developmental psychologists, experimental philosophers and economists. We are in a golden age. We are living through the new synthesis in ethics that E.O. Wilson called for in 1975. We are living through an age of consilience.

We're sure to disagree on many points today, but I think that we here all agree on a number of things. We all agree that, to understand morality, you've got to think about evolution and culture. You've got to know something about chimpanzees and bonobos and babies and psychopaths. You've got to know the differences between them. You've got to study the brain and the mind, and you've got to put it all together.

My hope for this conference is that we can note many of our points of agreement, as well as our disagreements. My hope is that the people who watch these talks on the Web will come away sharing our sense of enthusiasm and optimism, and mutual respect.

When I was a graduate student in Philadelphia, I had a really weird experience in a restaurant. I was walking on Chestnut Street, and I saw a restaurant called The True Taste. And I thought, well, okay, what is the true taste?  So I went inside and looked at the menu. The menu had four sections. They were labeled "Brown Sugars," "Honeys," "Molasses," and "Artificials."  And I thought this was really weird, and I went over to the waiter and I said, "What's going on?  Don't you guys serve food?"

And it turns out, the waiter was actually the owner of the restaurant as well, and the only employee. And, he explained to me that this was a tasting bar for sweeteners. It was the first of its kind in the world. And I could have sweeteners from 32 countries. He said that he had no background in the food industry, he'd never worked in a restaurant, but he was a Ph.D. biologist who worked at the Monell Chemical Senses Center in Philadelphia.

And, in his research, he discovered that, of all the five taste receptors ... you know, there's sweet, sour, salty, bitter and savory ... when people experience sweet taste, they get the biggest hit of dopamine. And that told him that sweetness is the true taste, the one that we most crave. And he thought, he reasoned, that it would be most efficient to have a restaurant that just focuses on that receptor, that will maximize the units of pleasure per calorie. So he opened the restaurant.

I asked him, "Well, okay, how's business going?"  And he said, "Terrible. But at least I'm doing better than the chemist down the street, who opened a salt-tasting bar."  (Laughter).

Now, of course, this didn't really happen to me, but it's a metaphor for how I feel when I read moral philosophy and some moral psychology. Morality is so rich and complex. It's so multifaceted and contradictory. But many authors reduce it to a single principle, which is usually some variant of welfare maximization. So that would be the sugar. Or sometimes, it's justice and related notions of fairness and rights. And that would be the chemist down the street. So basically, there's two restaurants to choose from. There's the utilitarian grille, and there's the deontological diner. That's pretty much it.

We need metaphors and analogies to think about difficult topics, such as morality.  An analogy that Marc Hauser and John Mikhail have developed in recent years is that morality is like language. And I think it's a very, very good metaphor. It illuminates many aspects of morality. It's particularly good, I think, for sequences of actions that occur in time with varying aspects of intentionality.

But, once we expand the moral domain beyond harm, I find that metaphors drawn from perception become more illuminating, more useful. I'm not trying to say that the language analogy is wrong or deficient. I'm just saying, let's think of another analogy, a perceptual analogy.

So if you think about vision, touch, and taste, for all three senses, our bodies are built with a small number of specialized receptors. So, in the eye, we've got  four kinds of cells in the retina to detect different frequencies of light. In our skin, we've got three kinds of receptors for temperature and pressure and tissue damage or pain. And on our tongues, we have these five kinds of taste receptor.

I think taste offers the closest, the richest, source domain for understanding morality. First, the links between taste, affect, and behavior are as clear as could be. Tastes are either good or bad. The good tastes, sweet and savory, and salt to some extent, these make us feel "I want more."  They make us want to approach. They say, "this is good."  Whereas, sour and bitter tell us, "whoa, pull back, stop."

Second, the taste metaphor fits with our intuitive morality so well that we often use it in our everyday moral language. We refer to acts as "tasteless," as "leaving a bad taste" in our mouths. We make disgust faces in response to certain violations.

Third, every culture constructs its own particular cuisine, its own way of pleasing those taste receptors. The taste analogy gets at what's universal—that is, the taste receptors of the moral mind—while it leaves plenty of room for cultural variation. Each culture comes up with its own particular way of pleasing these receptors, using local ingredients, drawing on historical traditions.

And fourth, the metaphor has an excellent pedigree. It was used 2,300 years ago in China by Mencius, who wrote, "Moral principles please our minds as beef and mutton and pork please our mouths."  It was also a favorite of David Hume, but I'll come back to that.

So, my goal in this talk is to develop the idea that moral psychology is like the psychology of taste in some important ways. Again, I'm not arguing against the language analogy. I'm just proposing that taste is also a very useful one. It helps show us morality in a different light. It brings us to some different conclusions.

As some of you know, I'm the co-developer of a theory called Moral Foundations Theory, which specifies a small set of social receptors that are the beginnings of moral judgment. These are like the taste receptors of the moral mind. I'll mention this theory again near the end of my talk.

But before I come back to taste receptors and moral foundations, I want to talk about two giant warning flags. Two articles published in "Behavioral and Brain Sciences," under the wise editorship of Paul Bloom. And I think these articles are so important that the abstracts from these two articles should be posted in psychology departments all over the country, in just the way that, when you go to restaurants, they've got, you know, How to Help a Choking Victim. And by law, that's got to be in restaurants in some states. (Laughter).

So, the first article is called "The Weirdest People in the World," by Joe Henrich, Steve Heine and Ara Norenzayan, and it was published last month in BBS. And the authors begin by noting that psychology as a discipline is an outlier in being the most American of all the scientific fields. Seventy percent of all citations in major psych journals refer to articles published by Americans. In chemistry, by contrast, the figure is just 37 percent. This is a serious problem, because psychology varies across cultures, and chemistry doesn't.

So, in the article, they start by reviewing all the studies they can find that contrast people in industrial societies with people in small-scale societies. And they show that industrialized people are different, even in some fairly low-level processes such as perceptual processing and spatial cognition. People in industrialized societies think differently.

The next contrast is Western versus non-Western, within large-scale societies. And there, too, they find that Westerners are different from non-Westerners, in particular on some issues that are relevant for moral psychology, such as individualism and the sense of self.

Their third contrast is America versus the rest of the West. And there, too, Americans are the outliers, the most individualistic, the most analytical in their thinking styles.

And the final contrast is, within the United States, they compare highly educated Americans to those who are not. Same pattern.

All four comparisons point in the same direction, and lead them to the same conclusion, which I've put here on your handout. I'll just read it. "Behavioral scientists routinely publish broad claims about human psychology and behavior based on samples drawn entirely from Western, Educated, Industrialized, Rich and Democratic societies."  The acronym there being WEIRD. "Our findings suggest that members of WEIRD societies are among the least representative populations one could find for generalizing about humans. Overall, these empirical patterns suggest that we need to be less cavalier in addressing questions of human nature, on the basis of data drawn from this particularly thin and rather unusual slice of humanity."

As I read through the article, in terms of summarizing the content, in what way are WEIRD people different, my summary is this: The WEIRDer you are, the more you perceive a world full of separate objects, rather than relationships, and the more you use an analytical thinking style, focusing on categories and laws, rather than a holistic style, focusing on patterns and contexts.

Now, let me state clearly that these empirical facts about "WEIRD-ness", they don't in any way imply that our morality is wrong, only that it is unusual. Moral psychology is a descriptive enterprise, not a normative one. We have WEIRD chemistry. The chemistry produced by Western, Educated, Industrialized, Rich, Democratic societies is our chemistry, and it's a very good chemistry. And we have every reason to believe it's correct. And if an Ayurvedic practitioner from India were to come to a chemistry conference and say, "Good sirs and madams, your chemistry has ignored our Indian, you know, our 5,000-year-old chemistry," the chemists might laugh at him, if they were not particularly polite, and say, "Yeah, that's right. You know, we really don't care about your chemistry."

But suppose that same guy were to come to this conference and say, "You know, your moral psychology has ignored my morality, my moral psychology."  Could we say the same thing?  Could we just blow him off and say, "Yeah, we really don't care"?  I don't think that we could do that. And what if the critique was made by an American Evangelical Christian, or by an American conservative?  Could we simply say, "We just don't care about your morality"?  I don't think that we could.

Morality is like The Matrix, from the movie "The Matrix."  Morality is a consensual hallucination, and when you read the WEIRD people article, it's like taking the red pill. You see, oh my God, I am in one particular matrix. But there are lots and lots of other matrices out there.

We happen to live in a matrix that places extraordinary value on reason and logic. So, the question arises, is our faith justified?  Maybe ours is right and the others are wrong. What if reasoning really is the royal road to truth?  If so, then maybe the situation is like chemistry after all. Maybe WEIRD morality, with this emphasis on individual rights and welfare, maybe it's right, because we are the better reasoners. We had The Enlightenment. We are the heirs of The Enlightenment. Everyone else is sitting in darkness, giving credence to religion, superstition and tradition. So maybe our matrix is the right one.

Well, let's turn to the second article. It's called, "Why Do Humans Reason?  Arguments for an Argumentative Theory," by Hugo Mercier and Dan Sperber. The article is a review of a puzzle that has bedeviled researchers in cognitive psychology and social cognition for a long time. The puzzle is, why are humans so amazingly bad at reasoning in some contexts, and so amazingly good in others? 

So, for example, why can't people solve the Wason Four-Card Task, lots of basic syllogisms?  Why do people sometimes do worse when you tell them to think about a problem or reason through it, than if you don't give them any special instructions? 

Why is the confirmation bias, in particular— this is the most damaging one of all—why is the confirmation bias so ineradicable?  That is, why do people automatically search for evidence to support whatever they start off believing, and why is it impossible to train them to undo that?  It's almost impossible. Nobody's found a way to teach critical thinking that gets people to automatically reflect on, well, what's wrong with my position?

And finally, why is reasoning so biased and motivated whenever self-interest or self-presentation are at stake?  Wouldn't it be adaptive to know the truth in social situations, before you then try to manipulate?

The answer, according to Mercier and Sperber, is that reasoning was not designed to pursue the truth. Reasoning was designed by evolution to help us win arguments. That's why they call it The Argumentative Theory of Reasoning. So, as they put it, and it's here on your handout, "The evidence reviewed here shows not only that reasoning falls quite short of reliably delivering rational beliefs and rational decisions. It may even be, in a variety of cases, detrimental to rationality. Reasoning can lead to poor outcomes, not because humans are bad at it, but because they systematically strive for arguments that justify their beliefs or their actions. This explains the confirmation bias, motivated reasoning, and reason-based choice, among other things."

Now, the authors point out that we can and do re-use our reasoning abilities. We're sitting here at a conference. We're reasoning together. We can re-use our argumentative reasoning for other purposes. But even there, it shows the marks of its heritage. Even there, our thought processes tend towards confirmation of our own ideas. Science works very well as a social process, when we can come together and find flaws in each other's reasoning. We can't find the problems in our own reasoning very well. But, that's what other people are for, is to criticize us. And together, we hope the truth comes out.

But the private reasoning of any one scientist is often deeply flawed, because reasoning can be counted on to seek justification and not truth. The problem is especially serious in moral psychology, where we all care so deeply and personally about what is right and wrong, and where we are almost all politically liberal. I don't know of any Conservatives. I do know of a couple of people in moral psychology who don't call themselves liberal. I think, Roy, are you one?  Not to out you, but ... (Laughter).

ROY BAUMEISTER: I'm pretty apolitical, I guess.      

JONATHAN HAIDT: Okay. So there's you, and there's Phil Tetlock,  who don't call themselves Liberals, as far as I know. But I don't know anyone who calls themselves a Conservative. We have a very, very biased field, which means we don't have the diversity to really be able to challenge each other's confirmation biases on a number of matters. So, it's all up to you today, Roy.

So, as I said, morality is like The Matrix. It's a consensual hallucination. And if we only hang out with people who share our matrix, then we can be quite certain that, together, we will find a lot of evidence to support our matrix, and to condemn members of other matrices.

So, I think the Mercier and Sperber article offers strong empirical support for a basically Humean perspective — David Hume — a Humean perspective on moral reasoning. Hume famously wrote that "reason is and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them."  When Hume died, in 1776, he left us a strong foundation for what he and his contemporaries called "the moral sciences." 

The subtitle of my talk today is "A Taste Analogy in Moral Psychology: Picking up Where Hume Left Off."  And, at the bottom of the handout, I've listed some of the features that I think would characterize such a continuation, a continuation of Hume's project.

So, Hume was a paragon of Enlightenment thinking. He was a naturalist, which meant that he believed that morality was part of the natural world, and we can understand morality by studying human beings, not by studying Scripture or a priori logic. Let's look out at the world to do moral psychology, to do the moral sciences. So, that's why I've listed Naturalism, or Naturalist, as the first of the seven features there.

Second, Hume was a nativist. Now, he didn't know about Darwin. He didn't know about evolution. But, if he did, he would have embraced Darwin and evolution quite warmly. Hume believed that morals were like aesthetic perceptions, that they were "Founded entirely on the particular fabric and constitution of the human species."

Third, Hume was a sentimentalist. That is, he thought that the key threads of this fabric were the many moral sentiments. And you can see his emphasis on sentiment in the second quotation that I have on your handout, where he uses the taste metaphor. He says, "Morality is nothing in the abstract nature of things, but is entirely relative to the sentiment or mental taste of each particular being, in the same manner as the distinctions of sweet and bitter, hot and cold arise from the particular feeling of each sense or organ. Moral perceptions, therefore, ought not to be classed with the operations of the understanding, but with the tastes or sentiments."

Now, some of these sentiments can be very subtle, and easily mistaken for products of reasoning, Hume said. And that's why I think, and I've argued, that the proper word for us today is not "sentiment" or "emotion."  It's actually "intuition."  A slightly broader term and a more sort of cognitive-sounding term.

Moral intuitions are rapid, automatic and effortless. Since we've had the automaticity revolution in social psychology in the '90s, beginning with John Bargh and others, our thinking's turned a lot more towards automatic versus controlled processes, rather than emotion versus cognition. So, intuition is clearly a type of cognition, and I think the crucial contrast for us in moral psychology is between various types of cognition, some of which are very affectively laden, others of which are less so, or not at all.

Fourth, Hume was a pluralist, because he was to some degree a virtue ethicist. Virtue ethics is the main alternative to deontology and utilitarianism in philosophy. Virtues are social skills. Virtues are character traits that a person needs in order to live a good, praiseworthy, or admirable life. The virtues of a rural farming culture are not the same as the virtues of an urban commercial or trading culture, nor should they be. So virtues are messy. Virtue theories are messy.

If you embrace virtue theory, you say goodbye to the dream of finding one principle, one foundation, on which you can rest all of morality. You become a pluralist, as I've listed down there. And you also become a non-parsimonist. That is, of course parsimony's always valuable in sciences, but my experience is that we've sort of elevated Occam's Razor into Occam's Chainsaw. Which is, if you can possibly cut it away and still have it stand, do it. And I think, in especially moral psychology, we've grossly disfigured our field by trying to get everything down to one if we possibly can. So I think, if you embrace virtue ethics, at least you put less of a value on parsimony than moral psychologists normally do.

But what you get in return for this messiness is, you get the payoff for being a naturalist. That is, you get a moral theory that fits with what we know about human nature elsewhere. So, I often use the metaphor that the mind is like a rider on an elephant. The rider is conscious, controlled processes, such as reasoning. The elephant is the other 99 percent of what goes on in our minds, things that are unconscious and automatic.

Virtue theories are about training the elephant. Virtue theories are about cultivating habits, not just of behavior, but of perception. So, to develop the virtue of kindness, for example, is to have a keen sensitivity to the needs of other people, to feel compassion when warranted, and then to offer the right kind of help with a full heart.

Utilitarianism and deontology, by contrast, are not about the elephant at all. They are instruction manuals for riders. They say, "here's how you do the calculation to figure out the right thing to do, and just do it."  Even if it feels wrong. "Tell the truth, even if it's going to hurt your friends," say some deontologists. "Spend less time and money on your children, so that you have more time and money to devote to helping children in other countries and other continents, where you can do more good."  These may be morally defensible and logically defensible positions, but they taste bad to most people. Most people don't like deontology or utilitarianism.

So, why hasn't virtue ethics been the dominant approach?  What happened to virtue ethics, which flourished in ancient Greece, in ancient China, and through the Middle Ages, and all the way up through David Hume and Ben Franklin?  What happened to virtue ethics?

Well, if we were to write a history of moral philosophy, I think the next chapter would be called, "Attack of the Systemizers."  Most of you know that autism is a spectrum. It's not a discrete condition. And Simon Baron-Cohen tells us that we should think about it as two dimensions. There's systemizing and empathizing. So, systemizing is the drive to analyze the variables in a system, and to derive the underlying rules that govern the behavior of a system. Empathizing is the drive to identify another person's emotions and thoughts, and to respond to these with appropriate emotion.

So, if you cross these two dimensions, you make a 2x2 space, and you get four quadrants. And autism and Asperger's are, let's call it, the bottom right corner of the bottom right quadrant. That is, very high on systemizing, very low on empathizing. People down there have sort of the odd behaviors and the mind-blindness that we know as autism or Asperger's.

The two major ethical systems that define Western philosophy were developed by men who either had Asperger's, or were pretty darn close. For Jeremy Bentham, the principal founder of utilitarianism, the case is quite strong.   According to an article titled "Asperger's Syndrome and the Eccentricity and Genius of Jeremy Bentham," published in the Journal of Bentham Studies, (Laughter), Bentham fit the criteria quite well. I'll just give a single account of his character from John Stuart Mill, who wrote, "In many of the most natural and strongest feelings of human nature, he had no sympathy. For many of its graver experiences, he was altogether cut off. And the faculty by which one mind understands a mind different from itself, and throws itself into the feelings of that other mind was denied him by his deficiency of imagination."

For Immanuel Kant, the case is not quite so clear. He also was a loner who loved routine, feared change, and focused on his few interests to the exclusion of all else. And according to Michael Fitzgerald, a psychiatrist who diagnoses Asperger's in historical figures and shows how it contributed to their genius, Kant would be diagnosed with Asperger's. I think the case is not nearly so clear. I think Kant did have better social skills, more ability to empathize. So I wouldn't say that Kant had Asperger's, but I think it's safe to say that he was about as high as could possibly be on systemizing, while still being rather low on empathizing, although not the absolute zero that Bentham was.

Now, what I'm doing here, yes, it is a kind of an ad hominem argument. I'm not saying that their ethical theories are any less valid normatively because of these men's unusual mental makeup. That would be the wrong kind of ad hominem argument. But I do think that, if we're doing history in particular, we're trying to understand, why did philosophy and then psychology, why did we make what I'm characterizing as a wrong turn?  I think personality becomes relevant.

And I think what happened is that we had these two ultra-systemizers in the late 18th and early 19th century, during the early phases of the Industrial Revolution, when Western society was getting WEIRDer and we were in general shifting towards more systemized and more analytical thought. You had these two hyper-systemized theories, and people in philosophy especially just went for it, for the next 200 years, it seems. All it is is, you know, utility. No: deontology. You know, rights, harm.

And so, you get this very narrow battle between two different systemized camps. Virtue ethics, which fit very well with the Enlightenment Project (you didn't need God for virtue ethics at all), should have survived quite well. But it kind of drops out. And I think personality factors are relevant.

Because philosophy went this way, into hyper-systemizing, and because moral psychology in the 20th century followed it, referring to Kant and other moral philosophers, I think we ended up violating the two giant warning flags that I talked about, from these two BBS articles. We took WEIRD morality to be representative of human morality, and we've placed way too much emphasis on reasoning, treating it as though it were capable of independently seeking out moral truth.

I've been arguing for the last few years that we've got to expand our conception of the moral domain, that it includes multiple moral foundations, not just sugar and salt, and not just harm and fairness, but a lot more as well. So, with Craig Joseph and Jesse Graham and Brian Nosek, I've developed a theory called Moral Foundations Theory, which draws heavily on the anthropological insights of Richard Shweder.

Down here, I've just listed a very brief summary of it: the five most important taste receptors of the moral mind are care/harm, fairness/cheating, loyalty/betrayal, authority/subversion, and sanctity/degradation. And moral systems are like cuisines that are constructed from local elements to please these receptors.

So, I'm proposing, we're proposing, that these are the five best candidates for being the taste receptors of the moral mind. They're not the only five. There's a lot more. So much of our evolutionary heritage, of our perceptual abilities, of our language ability, so much goes into giving us moral concerns, the moral judgments that we have. But I think this is a good starting point. I think it's one that Hume would approve of. It uses the same metaphor that he used, the metaphor of taste.

So, in conclusion, I think we should pick up where Hume left off. We know an awful lot more than Hume did about psychology, evolution and neuroscience. If Hume came back to us today, and we gave him a few years to read up on the literature and get up to speed, I think he would endorse all of these criteria. I've already talked about what it means to be a naturalist, a nativist, an intuitionist, a pluralist, and a non-parsimonist.

I just briefly want to say, I think it's also crucial, as long as you're going to be a nativist and say, "oh, you know, evolution, it's innate," you also have to be a constructivist. I'm all in favor of reductionism, as long as it's paired with emergentism. You've got to be able to go down to the low level, but then also up to the level of institutions and cultural traditions and, you know, all kinds of local factors.  A dictum of cultural psychology is that "culture and psyche make each other up."  You know, we psychologists are specialists in the psyche. What are the gears turning in the mind?  But those gears turn, and they evolved to turn, in various ecological and economic contexts. We've got to look at the two-way relations between psychology and the level above us, as well as the reductionist or neural level below us.

And then finally, the last line there. We've got to be very, very cautious about bias. I believe that morality has to be understood as a largely tribal phenomenon, at least in its origins. By its very nature, morality binds us into groups, in order to compete with other groups.

And as I said before, nearly all of us doing this work are secular Liberals. And that means that we're at very high risk of misunderstanding those moralities that are not our own. If we were judges working on a case, we'd pretty much all have to recuse ourselves. But we're not going to do that, so we've got to just be extra careful to seek out critical views, to study moralities that aren't our own, to consider, to empathize, to think about them as possibly coherent systems of beliefs and values that could be related to coherent, and even humane, human ways of living and flourishing.

So, that's my presentation. That's what I think the moral sciences should look like in the 21st century. Of course, I've created this presentation using my reasoning skills, and I know that my reasoning is designed only to help me find evidence to support this view. So, I thank you for all the help you're about to give me in overcoming my confirmation bias, by pointing out all the contradictory evidence that I missed.



Jonathan Haidt

Morality is a social construction, but it is constructed out of evolved raw materials provided by five (or more) innate "psychological" foundations. In surveys and experiments I have conducted in the USA, Europe, Brazil, and India, I have consistently found that highly educated liberals generally rely upon and endorse only the first two foundations (Harm and Fairness), whereas people who are more conservative, more religious, or of lower social class usually rely upon and endorse all five foundations.

Each culture's morality is unique, but an aspect shared by all five-foundation moralities is that they do not regard society as a social contract created for the benefit of individuals. Rather, they see society in more organic terms, as an entity that is of value in and of itself, and they think the building blocks of society are not individuals but rather groups and institutions. The point of moral regulation is to enhance the integrity of these building blocks and to improve the way the blocks fit together, in order to ward off the ever-present danger of social decay.

The Ingroup, Authority, and Purity foundations are moral foundations because they constrain individuals; they pull them away from self-serving, pleasure-seeking individualism by binding individuals into groups and institutions. (Think about the transformation of an 18 year old who enlists in the army.) Liberals do not see this binding as necessary or as desirable, hence they do not see a moral system based on these foundations as worthy of anything but contempt. They think their opponents are motivated by greed, fear, racism, and blind obedience to scripture or tradition.

What a shame. If liberals could only step out of their righteous bubble, they'd be able to solve these riddles, which at present befuddle their thinking and curse their projects.

JONATHAN HAIDT is Professor in the Social Psychology area of the  Department of Psychology at the University of Virginia, where he does research on morality and emotion, and how they vary across cultures.

He studies morality — its emotional foundations, cultural variations, and developmental course. His early research on moral intuition changed the field of moral psychology, moving it away from its previous focus on moral reasoning.

His current work on the "five foundations of morality" is changing the field again, moving it beyond its traditional focus on issues of harm and fairness, and drawing attention to the moral issues that animate political conservatives and religious believers. This work has been profiled twice in the New York Times — once in a Science Times article by Nicholas Wade, and once in a magazine essay by Steven Pinker on morality.

Haidt has published 75 academic articles, in Science, Psychological Review, and other leading journals. He is the co-editor of Flourishing: Positive Psychology and the Life Well Lived (2003, APA Press), and is the author of The Happiness Hypothesis: Finding Modern Truth in Ancient Wisdom, and the forthcoming The Righteous Mind: Why Good People Are Divided By Politics and Religion (Pantheon Books).


Jonathan Haidt's Homepage
Moral Foundations Theory Homepage
Civil Politics Homepage
The Happiness Hypothesis

Articles & Press:

Jonathan Haidt on the moral roots of liberals and conservatives, TED Talk — video
Morality 2012, New Yorker Conference — video
Moral Psychology and the Misunderstanding of Religion: A Talk with Jonathan Haidt, in Edge
The Moral Instinct, By Steven Pinker, in New York Times Magazine
The New Synthesis in Moral Psychology, in Science
Is 'Do Unto Others' Written Into Our Genes?, in The New York Times


Jonathan Haidt's Edge Bio page

Joshua D. Greene

There is no topic more fascinating or important than morality. From hot-button political issues to the he-said-she-said of office gossip, morality is on everyone's mind. Cultural conservatives warn of imminent moral decay, while liberals and secularists fear an emerging "Endarkenment," brought on by the right's moral zealotry. Every major political decision — Should we go to war? Should we act to preserve the environment? — is also a moral decision, and the choices we make will determine whether our species will continue to thrive, or be yet another ephemeral dot in evolution's Petri dish.

We and our brains evolved in small, culturally homogeneous communities, each with its own moral perspective. The modern world, of course, is full of competing moral perspectives, often violently so. Our biggest social problems — war, terrorism, the destruction of the environment, etc. — arise from our unwitting tendency to apply paleolithic moral thinking (also known as "common sense") to the complex problems of modern life. Our brains trick us into thinking that we have the Moral Truth on our side when in fact we don't, and blind us to important truths that our brains were not designed to appreciate. Our brains prevent us from seeing the world from alternative moral perspectives, and make us reluctant to even try. When making important policy decisions, we rely on gut feelings that are smart, but not smart enough.

That's the bad news. The good news is that parts of the human brain are highly flexible, and that by depending more on these cognitive systems, we can adapt our moral thinking to the modern world. But to do this we must put aside common sense and think in ways that strike most people as very unnatural.

JOSHUA D. GREENE is a cognitive neuroscientist and a philosopher. He received his bachelor's degree in philosophy from Harvard (1997) and his Ph.D. from Princeton (2002). In 2006 he joined the faculty of Harvard University's Department of Psychology as an assistant professor. His primary research interest is the psychological and neuroscientific study of morality, focusing on the interplay between emotional and "cognitive" processes in moral decision making. His broader interests cluster around the intersection of philosophy, psychology, and neuroscience. He is currently writing a book about the philosophical implications of our emerging scientific understanding of morality.


Joshua Greene's Homepage
Joshua Greene's CV
Harvard's Moral Cognition Lab

Articles & Press:

From neural 'is' to moral 'ought': what are the moral implications of neuroscientific moral psychology?, in Nature Neuroscience
The Secret Joke of Kant's Soul, in Moral Psychology
For the law, neuroscience changes nothing and everything, By Joshua Greene and Jonathan Cohen, The Royal Society
How (and where) does moral judgment work? By Joshua Greene and Jonathan Haidt, in TRENDS in Cognitive Sciences
Patterns of neural activity associated with honest and dishonest moral decisions Joshua D. Greene and Joseph M. Paxton, in PNAS
Pushing moral buttons: The interaction between personal force and intention in moral judgment By Joshua D. Greene et al, in Cognition
The Neural Bases of Cognitive Conflict and Control in Moral Judgment By Joshua D. Greene et al, in Neuron

Joshua D. Greene's Edge Bio page

Sam Harris

The people of Albania have a venerable tradition of vendetta called "Kanun": if a man commits a murder, his victim’s family can kill any one of his male relatives in reprisal. If a boy has the misfortune of being the son or brother of a murderer, he must spend his days and nights in hiding, forgoing a proper education, adequate health care, and the pleasures of a normal life. Untold numbers of Albanian men and boys live as prisoners of their homes even now. Can we say that the Albanians are morally wrong to have structured their society in this way? Is their tradition of blood feud a form of evil? Are their values inferior to our own?

Most people imagine that science cannot pose, much less answer, questions of this sort. How could we ever say, as a matter of scientific fact, that one way of life is better, or more moral, than another? Whose definition of "better" or "moral" would we use? While many scientists now study the evolution of morality, as well as its underlying neurobiology, the purpose of their research is merely to describe how human beings think and behave. No one expects science to tell us how we ought to think and behave. Controversies about human values are controversies about which science officially has no opinion.

The failure of science to address questions of meaning, morality, and values has become the primary justification for religious faith. Even among religious fundamentalists, the defense one most often hears for belief in God is not that there is compelling evidence that God exists, but that faith in Him provides the only guidance for living a good life.

This split in our thinking between reason and values has immense social consequences. To the degree that we doubt our ability to address questions of meaning and morality through rational argument and scientific inquiry, we offer a mandate to religious dogmatism, superstition, and sectarian conflict. The greater the doubt, the greater the impetus to nurture divisive delusions.

I propose that answers to questions of human value can be visualized on a "moral landscape" — a space of real and potential outcomes whose peaks correspond to states of the greatest possible wellbeing and whose valleys represent the deepest depths of suffering. Different ways of thinking and behaving — different cultural practices, ethical codes, modes of government, etc. — translate into movements across this landscape. Such changes can be analyzed objectively on many levels — ranging from biochemistry to economics — but they have their crucial realization as states and capacities of the human brain.

SAM HARRIS is a neuroscientist and the author of The End of Faith and Letter to a Christian Nation. He and his work have been discussed in Newsweek, TIME, The New York Times, Scientific American, Nature, Rolling Stone, and many other journals. His writing has appeared in Newsweek, The New York Times, The Los Angeles Times, The Times (London), The Boston Globe, The Atlantic, The Annals of Neurology, PLoS ONE, and elsewhere.

Mr. Harris is a Co-Founder and CEO of Project Reason, a nonprofit foundation devoted to spreading scientific knowledge and secular values in society. He received a degree in philosophy from Stanford University and a Ph.D. in neuroscience from UCLA. He is the author of the forthcoming The Moral Landscape: How Science Can Determine Human Values (Free Press).


Sam Harris's Home Page
Project Reason

Articles & Press:

Science can answer moral questions, TED Talk
The Four Horsemen: Richard Dawkins, Daniel Dennett, Sam Harris and Christopher Hitchens, Video
Rolling Stone 40th Anniversary
Fact Impact, in Newsweek
What Your Brain Looks Like on Faith, in Time
The New Wars of Religion, in The Economist
The New Atheists, in The Nation
The Celestial Teapot, in The New Republic


Roy Baumeister

At present I am reading, thinking, and writing about self-control, choice, free will, addiction, and related matters. These are highly relevant to morality. The essence of the idea of free will is that a person is/was capable of acting differently. Moral principles only make sense on the basis of that assumption, insofar as they exhort people to make responsible choices by acting in one manner rather than the other. Moral judgments often depend on whether a person acted on his or her own free will and essentially state whether the person should have acted differently.

I have been led by a circuitous route to the conclusion that the human being was designed by nature for culture: That is, the distinctively human traits are those that enable us to participate in this new kind of social life, namely culture. Culture is humankind’s biological strategy. To understand human traits, therefore, it is useful to ask how each trait would have been selected for as a way of helping an individual flourish in this new kind of social environment.

Social life inevitably breeds conflict, because different group members want the same food or mate or resource. Evolution adapted predatory aggression to resolve intraspecies conflicts, such as by making it often non-lethal. Human culture has however developed alternative means of resolving disputes, including morality. Morality is a system that allows group members to live together in reasonable peace and productive harmony, not least by restraining natural tendencies toward selfishness. Therefore, to be cultural, humans had to evolve a capacity to behave (and think and feel) morally. The similarities and the differences among various moral systems can be understood on the basis of the requirements of group life.

My interest in free will is not focused on the old debate of whether people do or do not have it. Rather, there is a real social phenomenon associated with the idea of free will, and that is what I seek to understand. For me, this developed out of studies on self-control, which we have dubbed “the moral muscle” because it enables individuals to overcome selfish and other antisocial impulses to do what is best for the group. Most virtues embody effective self-control, and most vices are failures thereof. The link between moral (and legal) responsibility and perceived free will adds another dimension to this social reality.

When I first studied moral philosophy back in college, I got stuck on the question of why one should bother obeying moral rules, especially if doing so goes against self-interest, and apart from fear of punishment. Decades later, I am seeing a two-pronged answer that reconciles with (enlightened) self-interest. First, obeying moral rules helps the cultural system to operate, and the health and prosperity (and survival and reproduction) of individuals depends heavily on the effective operation of the system. Second, cultural beings have moral reputations, and others treat them well or badly on the basis of those reputations.

ROY BAUMEISTER is Francis Eppes Eminent Scholar and head of the social psychology graduate program at Florida State University. He received his PhD in 1978 from Princeton in experimental social psychology and maintains an active laboratory, but he also seeks to understand human nature in the big picture, such as by tackling broad philosophical problems with social science methods. He has nearly 450 publications. He is among the most widely influential psychologists in the world, as indicated by being cited over a thousand times each year in the scientific literature. His 27 books include Meanings of Life, Evil: Inside Human Violence and Cruelty, The Cultural Animal: Human Nature, Meaning, and Social Life, Is There Anything Good about Men?, and the forthcoming (with John Tierney) Willpower: The Rediscovery of Humans’ Greatest Strength.


Roy Baumeister Home Page
The Baumeister & Tice Lab
Roy Baumeister, in Wikipedia

Articles & Press:

Cultural Animal, Roy Baumeister's Psychology Today Blog
Is There Anything Good About Men?
Exploding the Self-Esteem Myth, in Scientific American
Ego Depletion: Is the Active Self a Limited Resource?

Roy Baumeister's Edge Bio page

Paul Bloom

The human moral sense is fascinating. Putting aside the intriguing case of psychopaths, every normal adult is appalled by acts of cruelty, such as the rape of a child, the swindling of the elderly, or the humiliation and betrayal of a lover. Every normal adult is also uplifted by acts of kindness, like those heroes who jump onto subway tracks to rescue fallen strangers from oncoming trains. There is a universal urge to help those in need and to punish wrongdoers; we feel pride when we do the right thing and guilt when we don't.

Other moral feelings and impulses aren't so universal. As your typical liberal academic, I am morally appalled by tea party demonstrators, abortion clinic bombers, the NRA, the use of waterboarding to interrogate prisoners, and Sarah Palin. But I have to swallow the fact that roughly half of my fellow Americans feel just the same about gay rights demonstrators, abortionists, the ACLU, and Barack Obama.

Where does this all come from? How much of it is learned? Why are some moral judgments universal and others violently conflicting?

My answer is this: Humans are born with a hard-wired morality. A deep sense of good and evil is bred in the bone. I'm aware that this might sound outlandish, but it's supported now by research in several laboratories, including my own research at Yale. Babies and toddlers can judge the goodness and badness of others' actions; they want to reward the good and punish the bad; they act to help those in distress; they feel guilt, shame, pride, and righteous anger. I am admittedly biased, but I think these are the most exciting findings to come out of psychology in the last many years.

PAUL BLOOM is a professor of psychology at Yale University. His research explores how children and adults understand the physical and social world, with special focus on morality, religion, fiction, and art. He has won numerous awards for his research and teaching. He is past-president of the Society for Philosophy and Psychology, and co-editor of Behavioral and Brain Sciences, one of the major journals in the field.

Dr. Bloom has written for scientific journals such as Nature and Science, and for popular outlets such as The New York Times, the Guardian, and the Atlantic. He is the author or editor of four books, including How Children Learn the Meanings of Words, and Descartes' Baby: How the Science of Child Development Explains What Makes Us Human. His newest book, How Pleasure Works: The New Science of Why We Like What We Like, was published in June, 2010.


Paul Bloom's Yale University Home Page
Paul Bloom's CV
Yale Mind and Development Lab

Articles & Press:

The Moral Life of Babies, in New York Times Magazine
How Do Morals Change, in Nature
Interview, on Big Think
The Long and Short of It, in New York Times
No Smiting, in New York Times Book Review
Natural Happiness, in New York Times Magazine
What's Inside a Baby's Head, in Slate
First Person Plural, in Atlantic


David Pizarro

My primary research interests are in moral judgment, the effects of emotion on judgment, and on the overlap between these two. In my lab in the psychology department at Cornell, we also study a wide range of topics involving emotion, judgment, and behavior.

One of my primary interests is in how people arrive at judgments about moral responsibility. Most people seem to have intuitions about what sorts of things matter when determining whether a person deserves blame (or praise) for any given act. In another ongoing set of studies, we have demonstrated that moral reasoning can be influenced by motivations that may have nothing to do with moral concerns.

Disgust has been keeping us particularly busy, as it has been implicated by many as an emotion that plays a large role in many moral judgments. In our lab, we have shown that an increased tendency to experience disgust (as measured using the Disgust Sensitivity Scale, developed by Jon Haidt and colleagues) is related to political orientation.

We have shown that even for people who may be unwilling to report (or unaware of) their attitudes toward homosexuality, the degree of disgust sensitivity predicts so-called "implicit" attitudes toward homosexuality. Finally, in ongoing work we have shown that manipulating disgust with a noxious odor leads to greater antipathy toward gays and lesbians, but does not shift other attitudes that might be associated with liberal or conservative beliefs.

I also have a general interest in the influence of emotional states on thinking and deciding. I am particularly interested in specific emotions (anger, disgust, fear, etc.), and on "visceral" affective states (e.g., thirst, hunger, sexual arousal) and their impact on how we process information, how we remember events, and how these emotions impact our moral judgments.

DAVID PIZARRO, a psychologist at Cornell University, has his primary interest in moral judgment; particularly moral intuitions (especially concerning moral responsibility, and the permissibility or impermissibility of certain acts), and in biases that affect moral judgment. While intuitions are foundational principles on which people base their morality (e.g., that an act has to be intentional in order to receive blame for it, or that killing someone is worse than letting them die), biases in moral judgment are the unintended consequence of certain cognitive and emotional processes (e.g., judging someone as more guilty of a crime because they are a racial minority).

Pizarro also has a general interest in the influence of emotional states on thinking and deciding. He is particularly interested in specific emotions (anger, disgust, fear, etc.) and their differential impact on how we process information, how we remember events, and how these emotions impact our moral judgments of others.


David Pizarro's Home Page
David Pizarro's CV
David Pizarro's Journal Articles

Articles & Press:

Easily grossed out? You're more likely a conservative, says Cornell psychologist, in Cornell Chronicle
Studies: Conservatives easier to disgust, in Washington Times
Science Digest: The Politics of Yuck, in Washington Post
Study: Conservatives More Easily Disgusted Than Liberals, Fox News

David Pizarro's Edge Bio page

Elizabeth Phelps

My research examines the cognitive neuroscience of emotion, learning, and memory, and notions of fairness in economic decision making, especially with regard to the neural correlates linked to emotion. My primary focus has been to understand how human learning and memory are changed by emotion and to investigate the neural systems mediating their interactions. I have approached this topic from a number of different perspectives, with an aim of achieving a more global understanding of the complex relations between emotion and memory. As much as possible, I have tried to let the questions drive the research, not the techniques or traditional definitions of research areas. I have used a number of techniques (behavioral studies, physiological measurements, brain-lesion studies, fMRI) and have worked with a number of collaborators in other domains (social and clinical psychologists, psychiatrists, neuroscientists, economists, physicists). It is my belief that having focused questions and a broad approach to answering these questions has enhanced the overall quality of my research program and the cross-disciplinary relevance and appeal of my work.

ELIZABETH A. PHELPS received her PhD from Princeton University in 1989, served on the faculty of Yale University until 1999, and is currently the Silver Professor of Psychology and Neural Science at New York University. Her laboratory has earned widespread acclaim for its groundbreaking research on how the human brain processes emotion, particularly as it relates to learning, memory and decision-making. Dr. Phelps is the recipient of the 21st Century Scientist Award from the James S. McDonnell Foundation and a fellow of the American Association for the Advancement of Science and the Society of Experimental Psychologists. She has served on the Board of Directors of the Association for Psychological Science and the Society for Neuroethics, was the President of the Society for Neuroeconomics and is the current editor of the APA journal Emotion.


Elizabeth Phelps's Home Page
Phelps Lab

Articles & Press:

Making the paper: Elizabeth Phelps, in Nature
Fear memories erased without drugs by Lizzie Buchen, in Nature
Preventing the return of fear in humans using reconsolidation update mechanisms by Schiller, Monfils, Raio, Johnson, LeDoux & Phelps, in Nature
Train Your Mind, Kick Your Craving by Sharon Begley, in Newsweek

Elizabeth Phelps's Edge Bio page

Joshua Knobe

Suppose you look out in the world and see a person engaged in some important activity. Ultimately, you might arrive at a conclusion about whether what she is doing is morally right or wrong, but before you can even begin asking about that sort of question, it seems that you have to go through an earlier step. You have to get clear about what is actually happening in the situation. So you might start by trying to figure out what the person intends to accomplish. Or what impact her actions will have on the situation as a whole. Or whether she will be making people happy or unhappy. All of these questions seem perfectly straightforward and distinct from any controversial moral claims. Then later, once you have gotten a handle on all of the straightforward factual questions, you can go on to address the moral questions, trying to decide whether the person’s action is morally right or wrong.

This, at least, is the usual picture. But an ever-growing body of evidence suggests that this picture is deeply mistaken. The evidence does not seem to suggest that moral judgment is just some extra step added on after we have figured out basically what is happening. Rather, it looks like moral judgment is actually exerting an influence from the very beginning.

Over the past few years, a series of recent experimental studies have reexamined the ways in which people answer seemingly ordinary questions about human behavior. Did this person act intentionally? What did her actions cause? Did she make people happy or unhappy? It had long been assumed that people’s answers to these questions somehow preceded all moral thinking, but the latest research has been moving in a radically different direction. It is beginning to appear that people’s whole way of making sense of the world might be suffused with moral judgment, so that people’s moral beliefs can actually transform their most basic understanding of what is happening in a situation.

JOSHUA KNOBE is a faculty member in Yale University’s Program in Cognitive Science. He is one of the founders of the ‘experimental philosophy’ movement, which seeks to use experimental methods to address the traditional problems of philosophy. Accordingly, his publications have appeared both in leading psychology journals (Psychological Science, Cognition, Journal of Personality and Social Psychology) and in leading philosophy journals (Journal of Philosophy, Nous, Analysis). His work has been discussed in popular media venues including the New York Times, the BBC and Slate. He is coeditor, with Shaun Nichols, of Experimental Philosophy.


Joshua Knobe's Yale webpage
The Experimental Philosophy Page
Joshua Knobe on Wikipedia

Articles & Press:

The New New Philosophy By Kwame Anthony Appiah, in New York Times Magazine
Lessons from the Park in the Chronicle of Higher Education
Interview on Bloggingheads.tv
Can a Robot, an Insect or God be Aware?, in Scientific American
The X-Philes: Philosophy Meets the real world by Jon Lackman, in Slate


Joshua Knobe's Edge Bio page

Psychologist, Harvard University; Author, Changing Minds

Enlightenment ideas were the product of white male Christians living in the 18th century. They form the basis of the Universal Declaration of Human Rights and other Western-inflected documents. But in our global world, Confucian societies and Islamic societies have their own guidelines about progress, individuality, democratic processes, and human obligations. In numbers they represent more of humanity and are likely to become even more numerous in this century. What do the human sciences have to contribute to an understanding of these 'multiple voices'? Can they be combined harmoniously, or are there unbridgeable gaps?

Evolutionary Psychologist, University of New Mexico; Author, Spent: Sex, Evolution, and Consumer Behavior

1) Many people become vegans, protect animal rights, and care about the long-term future of the environment. It seems hard to explain these 'green virtues' in terms of the usual evolutionary-psychology selection pressures -- reciprocity, kin selection, group selection -- so how can we explain their popularity (or unpopularity)?

2) What are the main sex differences in human morality, and why?

3) What role did costly signaling play in the evolution of human morality (i.e. 'showing off' certain moral virtues to attract mates, friends, or allies, or to intimidate rival individuals or competing groups)?

4) Given the utility of 'adaptive self-deception' in human evolution -- one part of the mind not knowing what adaptive strategies another part is pursuing -- what could it mean to have the moral virtue of 'integrity' for an evolved being?

5) Why do all 'mental illnesses' (depression, mania, schizophrenia, borderline, psychopathy, narcissism, mental retardation, etc.) reduce altruism, compassion, and loving-kindness? Is this partly why they are recognized as mental illnesses?

Artist; Composer; Recording Producer: U2, Coldplay, Talking Heads, Paul Simon; Recording Artist

Is morality a human invention - a way of trying to stabilise human societies and make them coherent - or is there evidence of a more fundamental sense of morality in creatures other than humans?

Another way of asking this question is: are there moral concepts that are not specifically human?

Yet another way of asking this is: are moral concepts specifically the province of human brains? And, if they are, is there any basis for suggesting that there are any 'absolute' moral precepts?

Or: do any other creatures exhibit signs of 'honour' or 'shame'?

Political Scientist, University of California, San Diego; Coauthor, Connected

Given recent evidence about the power of social networks, what is our personal responsibility to our friends' friends?

Blogger & Cofounder, Global Voices Online; Former CNN journalist and head of CNN bureaus in Beijing & Tokyo; Visiting Fellow, Princeton University's Center for Information Technology Policy

Does the human race require a major moral evolution in order to survive? Isn't part of the problem that our intelligence has vastly out-evolved our morality, which is still stuck back in the paleolithic age? Is there anything we can do? Or is this the tragic flaw that dooms us? Might technology help to facilitate or speed up our moral evolution, as some say technology is already doing for human intelligence? We have artificial intelligence and augmented reality. What about artificial or augmented morality?

Musician, Computer Scientist; Pioneer of Virtual Reality; Author, You Are Not A Gadget: A Manifesto

A crucial topic is how group interactions change moral perception. To what degree are there clan-oriented processes inherent in the human brain? In particular, how can well-informed software designs for network-mediated social experience play a role in changing behavior and values? Is there anything specific that can be done to reduce mob-like phenomena, such as those spawned in online forums like 4chan's /b/, without resorting to degrees of imposed control? This is where a science of moral psychology could inform engineering.

Journalist; Author, Single in Manhattan

What would be a good definition -- with a few examples -- of common moral sense? How does an averagely moral human think and behave? (It's easy to paint a picture of the actions of an immoral person...) Now, how can this be expanded?

Could an understanding/acceptance of the idea that we all have unconscious instincts for what's right and wrong replace the idea of religion as necessary for moral behavior?

What tends to be the hierarchy of "blinders" -- the arguments we, consciously or unconsciously, use to relabel exploitative acts as good? (I did it for God, I did it for the German People, I did it for Jodie Foster...) What evolutionary purpose have they served?

Psychologist & Neuroscientist, Stanford

What is the difference between morality and emotion? How can scientists distinguish between the two (or should they)? Why has Western culture been so historically reluctant to recognize emotion as a major influence on moral judgments?

Feuilleton Editor, Süddeutsche Zeitung

Is there a fine line or a wide gap between morality and ideology?


1. Some of the new literature on moral psychology feels like traditional discussions of ethics with a few numbers attached from surveys; almost like old ideas in a new can. As an outsider I'd be curious to know what's really new here. Specifically, if William James were resurrected what might be the new findings we could explain to him that would astound him or fundamentally change his way of thinking?

2. Is there a reason to believe there is such a thing as moral psychology that transcends upbringing and culture? Are we really studying a fundamental feature of the mind or simply the outcome of a social process?

Psychologist, UC, Berkeley; Author, The Philosophical Baby

Many people have proposed an evolutionary psychology/nativist view of moral capacities. But surely one of the most dramatic and obvious features of our moral capacities is their capacity for change and even radical transformation with new experiences. At the same time, this transformation isn't just random but seems to have a progressive quality. It's analogous to science, which presents similar challenges to a nativist view. And even young children are, empirically, capable of this kind of change in both domains. How do we get to new and better conceptions of the world, cognitive or moral, if the nativists are right?

Evolutionary Biologist, Rutgers University; Coauthor, Genes In Conflict: The Biology of Selfish Genetic Elements


What is it? When does it occur? What function does it serve? How is it related, if at all, to guilt? Is it related to "morality," and if so, how?

Key point, John, is that shame is a complex mixture of self and other: Tiger Woods SHAMES his wife in public — he may likewise be ashamed.

If I fuck a goat I may feel ashamed if someone saw it, but absent harm to the goat, it's not clear how I should respond if I alone witness it.

July 28, 2010

Moral reasoning


How do you train a moral muscle? American researchers take their first steps on
the path to a science of morality without the God hypothesis. Reason should
have the last word.

By Jordan Mejias

[Translation from the German:]

28th July 2010. One participant was missing, and had he turned up, the illustrious company would have had nothing left to discuss and ponder. Not even John Brockman, literary agent and guru of the third culture, could persuade him to stop by his salon, which Brockman moves every summer from the virtuality of the Internet (a click on edge.org) to a New England idyll. There, in the green countryside of Washington, Connecticut, the subject was morality as a new science. New it was called because those devoted to it were not philosophers and theologians but psychologists, biologists, and neuroscientists, and at most such philosophers as rely on experiments and the insights of brain research. All of them had to admit they were still searching, but no one missed the one figure who has long held authority in matters of morality: God.

Entirely secular, science dominated the conference. As it drew to a close, however, far from producing a consensus, the conclusions diverged considerably. Even the question of whether religion should be regarded as part of evolution received no clear answer. The participants agreed at least on this: God can be dispensed with. To Him, such was the unanimous result of their investigations, which are certainly neither finished nor finishable, humankind does not owe its morality. Yet no one would categorically assert that morality is innate, either. Agreement existed only on the finding that morality is a natural phenomenon, and even that only to a certain degree, for what exactly this means is by no means settled. Besides, both nature and culture make themselves felt in morality, and where the effect of the one ends and that of the other begins is anything but decided.

Better be nice

In a baby science, as Elizabeth Phelps, a neuroscientist at New York University, called moral psychology, a great deal of groping in the dark should come as no surprise. How matters stand with free will, for instance, will remain a mystery for the foreseeable future. Moral instincts, Roy Baumeister, a social psychologist at Florida State University, asserted with some certainty, are not built into us; we are endowed only with the ability to acquire systems of morality. By nature we are egoists who profit from being altruistic. Morality, he suggested, is comparable to a muscle that fatigues but can also be strengthened through regular training. Which sounds easier than it is, since it is not clear what exactly there is to train: a moral center that we could selectively address does not occur in our brain.

Astonishing in all this, as Paul Bloom, a psychologist at Yale, noted, is that we are nice to one another without being forced to be. Obviously we have realized that our lives are more comfortable when others do not fight us. Bloom also sees factors in this growth of niceness in capitalism, which works better with nice people, and in the world religions, whose large groups and group dynamics once inclined strangers to treat one another favorably. That we have developed in a morally beneficial direction over the millennia is something not only he considers proven. The neuroscientist Sam Harris, author of "The Moral Landscape: How Science Can Determine Human Values" (Free Press), likewise refuses to let immoral monsters like Hitler and Stalin spoil this progress. ...

[...Continue: German language original | Google translation]

25 JUL 2010


Edge held a seminar on morality. Here's Joshua Knobe:

Over the past few years, a series of recent experimental studies have reexamined the ways in which people answer seemingly ordinary questions about human behavior. Did this person act intentionally? What did her actions cause? Did she make people happy or unhappy? It had long been assumed that people's answers to these questions somehow preceded all moral thinking, but the latest research has been moving in a radically different direction. It is beginning to appear that people's whole way of making sense of the world might be suffused with moral judgment, so that people's moral beliefs can actually transform their most basic understanding of what is happening in a situation.

David Brooks' illuminating column on this topic covered the same ground:


...Advantage Locke over Hobbes.


July 23, 2010

Scientific research is showing that we are born with an innate moral sense.


Washington, Conn.

Where does our sense of right and wrong come from? Most people think it is a gift from God, who revealed His laws and elevates us with His love. A smaller number think that we figure the rules out for ourselves, using our capacity to reason and choosing a philosophical system to live by.

Moral naturalists, on the other hand, believe that we have moral sentiments that have emerged from a long history of relationships. To learn about morality, you don't rely upon revelation or metaphysics; you observe people as they live.

This week a group of moral naturalists gathered in Connecticut at a conference organized by the Edge Foundation. ...

By the time humans came around, evolution had forged a pretty firm foundation for a moral sense. Jonathan Haidt of the University of Virginia argues that this moral sense is like our sense of taste. We have natural receptors that help us pick up sweetness and saltiness. In the same way, we have natural receptors that help us recognize fairness and cruelty. Just as a few universal tastes can grow into many different cuisines, a few moral senses can grow into many different moral cultures.

Paul Bloom of Yale noted that this moral sense can be observed early in life. Bloom and his colleagues conducted an experiment in which they showed babies a scene featuring one figure struggling to climb a hill, another figure trying to help it, and a third trying to hinder it. ...



Edited by John Brockman

"An intellectual treasure trove"
San Francisco Chronicle

Edited by John Brockman

Harper Perennial



Contributors include: RICHARD DAWKINS on cross-species breeding; IAN McEWAN on the remote frontiers of solar energy; FREEMAN DYSON on radiotelepathy; STEVEN PINKER on the perils and potential of direct-to-consumer genomics; SAM HARRIS on mind-reading technology; NASSIM NICHOLAS TALEB on the end of precise knowledge; CHRIS ANDERSON on how the Internet will revolutionize education; IRENE PEPPERBERG on unlocking the secrets of the brain; LISA RANDALL on the power of instantaneous information; BRIAN ENO on the battle between hope and fear; J. CRAIG VENTER on rewriting DNA; FRANK WILCZEK on mastering matter through quantum physics.

"a provocative, demanding clutch of essays covering everything from gene splicing to global warming to intelligence, both artificial and human, to immortality... the way Brockman interlaces essays about research on the frontiers of science with ones on artistic vision, education, psychology and economics is sure to buzz any brain." (Chicago Sun-Times)

"11 books you must read — Curl up with these reads on days when you just don't want to do anything else: 5. John Brockman's This Will Change Everything: Ideas That Will Shape the Future" (Forbes India)

"Full of ideas wild (neurocosmetics, "resizing ourselves," "intuit[ing] in six dimensions") and more close-to-home ("Basketball and Science Camps," "solar technology"), this volume offers dozens of ingenious ways to think about progress" (Publishers Weekly — Starred Review)

"A stellar cast of intellectuals ... a stunning array of responses...Perfect for: anyone who wants to know what the big thinkers will be chewing on in 2010. " (New Scientist)

"Poring over these pages is like attending a dinner party where every guest is brilliant and captivating and only wants to speak with you—overwhelming, but an experience to savor." (Seed)

* Based on the Edge Annual Question — 2009: "What Will Change Everything?"

Edge Foundation, Inc. is a nonprofit private operating foundation under Section 501(c)(3) of the Internal Revenue Code.