Jonathan Haidt : As the first speaker, I'd like to thank the Edge Foundation for bringing us all together, and bringing us all together in this beautiful place. I'm looking forward to having these conversations with all of you.
I was recently at a conference on moral development, and a prominent Kohlbergian moral psychologist stood up and said, "Moral psychology is dying." And I thought, well, maybe in your neighborhood property values are plummeting, but in the rest of the city, we are going through a renaissance. We are in a golden age.
My own neighborhood is the social psychology neighborhood, and it's gotten really, really fun, because all these really great ethnic groups are moving in next door. Within a few blocks, I can find cognitive neuroscientists and primatologists, developmental psychologists, experimental philosophers and economists. We are in a golden age. We are living through the new synthesis in ethics that E.O. Wilson called for in 1975. We are living through an age of consilience.
We're sure to disagree on many points today, but I think that we here all agree on a number of things. We all agree that, to understand morality, you've got to think about evolution and culture. You've got to know something about chimpanzees and bonobos and babies and psychopaths. You've got to know the differences between them. You've got to study the brain and the mind, and you've got to put it all together.
My hope for this conference is that we can note many of our points of agreement, as well as our disagreements. My hope is that the people who watch these talks on the Web will come away sharing our sense of enthusiasm and optimism, and mutual respect.
When I was a graduate student in Philadelphia, I had a really weird experience in a restaurant. I was walking on Chestnut Street, and I saw a restaurant called The True Taste. And I thought, well, okay, what is the true taste? So I went inside and looked at the menu. The menu had four sections. They were labeled "Brown Sugars," "Honeys," "Molasses" and "Artificials." And I thought this was really weird, and I went over to the waiter and I said, "What's going on? Don't you guys serve food?"
And it turns out, the waiter was actually the owner of the restaurant as well, and the only employee. And, he explained to me that this was a tasting bar for sweeteners. It was the first of its kind in the world. And I could have sweeteners from 32 countries. He said that he had no background in the food industry, he'd never worked in a restaurant, but he was a Ph.D. biologist who worked at the Monell Chemical Senses Center in Philadelphia.
And, in his research, he discovered that, of all the five taste receptors ... you know, there's sweet, sour, salty, bitter and savory ... when people experience sweet taste, they get the biggest hit of dopamine. And that told him that sweetness is the true taste, the one that we most crave. And he thought, he reasoned, that it would be most efficient to have a restaurant that just focuses on that receptor, that will maximize the units of pleasure per calorie. So he opened the restaurant.
I asked him, "Well, okay, how's business going?" And he said, "Terrible. But at least I'm doing better than the chemist down the street, who opened a salt-tasting bar." (Laughter).
Now, of course, this didn't really happen to me, but it's a metaphor for how I feel when I read moral philosophy and some moral psychology. Morality is so rich and complex. It's so multifaceted and contradictory. But many authors reduce it to a single principle, which is usually some variant of welfare maximization. So that would be the sugar. Or sometimes, it's justice and related notions of fairness and rights. And that would be the chemist down the street. So basically, there's two restaurants to choose from. There's the utilitarian grill, and there's the deontological diner. That's pretty much it.
We need metaphors and analogies to think about difficult topics, such as morality. An analogy that Marc Hauser and John Mikhail have developed in recent years is that morality is like language. And I think it's a very, very good metaphor. It illuminates many aspects of morality. It's particularly good, I think, for sequences of actions that occur in time with varying aspects of intentionality.
But, once we expand the moral domain beyond harm, I find that metaphors drawn from perception become more illuminating, more useful. I'm not trying to say that the language analogy is wrong or deficient. I'm just saying, let's think of another analogy, a perceptual analogy.
So if you think about vision, touch, and taste, for all three senses, our bodies are built with a small number of specialized receptors. So, in the eye, we've got four kinds of photoreceptor cells in the retina to detect different wavelengths of light. In our skin, we've got three kinds of receptors for temperature and pressure and tissue damage or pain. And on our tongues, we have these five kinds of taste receptors.
I think taste offers the closest, the richest, source domain for understanding morality. First, the links between taste, affect, and behavior are as clear as could be. Tastes are either good or bad. The good tastes, sweet and savory, and salt to some extent, these make us feel "I want more." They make us want to approach. They say, "this is good." Whereas, sour and bitter tell us, "whoa, pull back, stop."
Second, the taste metaphor fits with our intuitive morality so well that we often use it in our everyday moral language. We refer to acts as "tasteless," as "leaving a bad taste" in our mouths. We make disgust faces in response to certain violations.
Third, every culture constructs its own particular cuisine, its own way of pleasing those taste receptors. The taste analogy gets at what's universal—that is, the taste receptors of the moral mind—while it leaves plenty of room for cultural variation. Each culture comes up with its own particular way of pleasing these receptors, using local ingredients, drawing on historical traditions.
And fourth, the metaphor has an excellent pedigree. It was used 2,300 years ago in China by Mencius, who wrote, "Moral principles please our minds as beef and mutton and pork please our mouths." It was also a favorite of David Hume, but I'll come back to that.
So, my goal in this talk is to develop the idea that moral psychology is like the psychology of taste in some important ways. Again, I'm not arguing against the language analogy. I'm just proposing that taste is also a very useful one. It helps show us morality in a different light. It brings us to some different conclusions.
As some of you know, I'm the co-developer of a theory called Moral Foundations Theory, which specifies a small set of social receptors that are the beginnings of moral judgment. These are like the taste receptors of the moral mind. I'll mention this theory again near the end of my talk.
But before I come back to taste receptors and moral foundations, I want to talk about two giant warning flags. Two articles published in "Behavioral and Brain Sciences," under the wise editorship of Paul Bloom. And I think these articles are so important that the abstracts from these two articles should be posted in psychology departments all over the country, in just the way that, when you go to restaurants, they've got, you know, How to Help a Choking Victim. And by law, that's got to be in restaurants in some states. (Laughter).
So, the first article is called "The Weirdest People in the World," by Joe Henrich, Steve Heine and Ara Norenzayan, and it was published last month in BBS. And the authors begin by noting that psychology as a discipline is an outlier in being the most American of all the scientific fields. Seventy percent of all citations in major psych journals refer to articles published by Americans. In chemistry, by contrast, the figure is just 37 percent. This is a serious problem, because psychology varies across cultures, and chemistry doesn't.
So, in the article, they start by reviewing all the studies they can find that contrast people in industrialized societies with people in small-scale societies. And they show that industrialized people are different, even in some fairly low-level processes such as perception and spatial cognition. Industrialized societies think differently.
The next contrast is Western versus non-Western, within large-scale societies. And there, too, they find that Westerners are different from non-Westerners, in particular on some issues that are relevant for moral psychology, such as individualism and the sense of self.
Their third contrast is America versus the rest of the West. And there, too, Americans are the outliers, the most individualistic, the most analytical in their thinking styles.
And the final contrast is, within the United States, they compare highly educated Americans to those who are not. Same pattern.
All four comparisons point in the same direction, and lead them to the same conclusion, which I've put here on your handout. I'll just read it. "Behavioral scientists routinely publish broad claims about human psychology and behavior based on samples drawn entirely from Western, Educated, Industrialized, Rich and Democratic societies." The acronym there being WEIRD. "Our findings suggest that members of WEIRD societies are among the least representative populations one could find for generalizing about humans. Overall, these empirical patterns suggest that we need to be less cavalier in addressing questions of human nature, on the basis of data drawn from this particularly thin and rather unusual slice of humanity."
As I read through the article, in terms of summarizing the content, in what way are WEIRD people different, my summary is this: The WEIRDer you are, the more you perceive a world full of separate objects, rather than relationships, and the more you use an analytical thinking style, focusing on categories and laws, rather than a holistic style, focusing on patterns and contexts.
Now, let me state clearly that these empirical facts about "WEIRD-ness", they don't in any way imply that our morality is wrong, only that it is unusual. Moral psychology is a descriptive enterprise, not a normative one. We have WEIRD chemistry. The chemistry produced by Western, Educated, Industrialized, Rich, Democratic societies is our chemistry, and it's a very good chemistry. And we have every reason to believe it's correct. And if an Ayurvedic practitioner from India were to come to a chemistry conference and say, "Good sirs and madams, your chemistry has ignored our Indian, you know, our 5,000-year-old chemistry," the chemists might laugh at him, if they were not particularly polite, and say, "Yeah, that's right. You know, we really don't care about your chemistry."
But suppose that same guy were to come to this conference and say, "You know, your moral psychology has ignored my morality, my moral psychology." Could we say the same thing? Could we just blow him off and say, "Yeah, we really don't care"? I don't think that we could do that. And what if the critique was made by an American Evangelical Christian, or by an American conservative? Could we simply say, "We just don't care about your morality"? I don't think that we could.
Morality is like The Matrix, from the movie "The Matrix." Morality is a consensual hallucination, and when you read the WEIRD people article, it's like taking the red pill. You see, oh my God, I am in one particular matrix. But there are lots and lots of other matrices out there.
We happen to live in a matrix that places extraordinary value on reason and logic. So, the question arises, is our faith justified? Maybe ours is right and the others are wrong. What if reasoning really is the royal road to truth? If so, then maybe the situation is like chemistry after all. Maybe WEIRD morality, with its emphasis on individual rights and welfare, maybe it's right, because we are the better reasoners. We had The Enlightenment. We are the heirs of The Enlightenment. Everyone else is sitting in darkness, giving credence to religion, superstition and tradition. So maybe our matrix is the right one.
Well, let's turn to the second article. It's called, "Why Do Humans Reason? Arguments for an Argumentative Theory," by Hugo Mercier and Dan Sperber. The article is a review of a puzzle that has bedeviled researchers in cognitive psychology and social cognition for a long time. The puzzle is, why are humans so amazingly bad at reasoning in some contexts, and so amazingly good in others?
So, for example, why can't people solve the Wason Four-Card Task, or lots of basic syllogisms? Why do people sometimes do worse when you tell them to think about a problem or reason through it than if you don't give them any special instructions?
Why is the confirmation bias, in particular, so ineradicable? This is the most damaging bias of all. That is, why do people automatically search for evidence to support whatever they start off believing, and why is it almost impossible to train them to undo that? Nobody's found a way to teach critical thinking that gets people to automatically reflect on, well, what's wrong with my position?
And finally, why is reasoning so biased and motivated whenever self-interest or self-presentation are at stake? Wouldn't it be adaptive to know the truth in social situations, before you then try to manipulate?
The answer, according to Mercier and Sperber, is that reasoning was not designed to pursue the truth. Reasoning was designed by evolution to help us win arguments. That's why they call it The Argumentative Theory of Reasoning. So, as they put it, and it's here on your handout, "The evidence reviewed here shows not only that reasoning falls quite short of reliably delivering rational beliefs and rational decisions. It may even be, in a variety of cases, detrimental to rationality. Reasoning can lead to poor outcomes, not because humans are bad at it, but because they systematically strive for arguments that justify their beliefs or their actions. This explains the confirmation bias, motivated reasoning, and reason-based choice, among other things."
Now, the authors point out that we can and do re-use our reasoning abilities. We're sitting here at a conference. We're reasoning together. We can re-use our argumentative reasoning for other purposes. But even there, it shows the marks of its heritage. Even there, our thought processes tend towards confirmation of our own ideas. Science works very well as a social process, when we can come together and find flaws in each other's reasoning. We can't find the problems in our own reasoning very well. But that's what other people are for: to criticize us. And together, we hope the truth comes out.
But the private reasoning of any one scientist is often deeply flawed, because reasoning can be counted on to seek justification and not truth. The problem is especially serious in moral psychology, where we all care so deeply and personally about what is right and wrong, and where we are almost all politically liberal. I don't know of any conservatives. I do know of a couple of people in moral psychology who don't call themselves liberal. I think, Roy, are you one? Not to out you, but ... (Laughter).
ROY BAUMEISTER: I'm pretty apolitical, I guess.
JONATHAN HAIDT: Okay. So there's you, and there's Phil Tetlock, who don't call themselves liberals, as far as I know. But I don't know anyone who calls themselves a conservative. We have a very, very biased field, which means we don't have the diversity to really be able to challenge each other's confirmation biases on a number of matters. So, it's all up to you today, Roy.
So, as I said, morality is like The Matrix. It's a consensual hallucination. And if we only hang out with people who share our matrix, then we can be quite certain that, together, we will find a lot of evidence to support our matrix, and to condemn members of other matrices.
So, I think the Mercier and Sperber article offers strong empirical support for a basically Humean perspective on moral reasoning. Hume famously wrote that "reason is and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them." When Hume died, in 1776, he left us a strong foundation for what he and his contemporaries called "the moral sciences."
The subtitle of my talk today is "A Taste Analogy in Moral Psychology: Picking up Where Hume Left Off." And, at the bottom of the handout, I've listed some of the features that I think would characterize such a continuation, a continuation of Hume's project.
So, Hume was a paragon of Enlightenment thinking. He was a naturalist, which meant that he believed that morality was part of the natural world, and we can understand morality by studying human beings, not by studying Scripture or a priori logic. Let's look out at the world to do moral psychology, to do the moral sciences. So, that's why I've listed Naturalism, or Naturalist, as the first of the seven features there.
Second, Hume was a nativist. Now, he didn't know about Darwin. He didn't know about evolution. But if he had, he would have embraced Darwin and evolution quite warmly. Hume believed that morals were like aesthetic perceptions, that they were "founded entirely on the particular fabric and constitution of the human species."
Third, Hume was a sentimentalist. That is, he thought that the key threads of this fabric were the many moral sentiments. And you can see his emphasis on sentiment in the second quotation that I have on your handout, where he uses the taste metaphor. He says, "Morality is nothing in the abstract nature of things, but is entirely relative to the sentiment or mental taste of each particular being, in the same manner as the distinctions of sweet and bitter, hot and cold arise from the particular feeling of each sense or organ. Moral perceptions, therefore, ought not to be classed with the operations of the understanding, but with the tastes or sentiments."
Now, some of these sentiments can be very subtle, and easily mistaken for products of reasoning, Hume said. And that's why I think, and I've argued, that the proper word for us today is not "sentiment" or "emotion." It's actually "intuition," a slightly broader and more cognitive-sounding term.
Moral intuitions are rapid, automatic and effortless. Since we've had the automaticity revolution in social psychology in the '90s, beginning with John Bargh and others, our thinking's turned a lot more towards automatic versus controlled processes, rather than emotion versus cognition. So, intuition is clearly a type of cognition, and I think the crucial contrast for us in moral psychology is between various types of cognition, some of which are very affectively laden, others of which are less so, or not at all.
Fourth, Hume was a pluralist, because he was to some degree a virtue ethicist. Virtue ethics is the main alternative to deontology and utilitarianism in philosophy. Virtues are social skills. Virtues are character traits that a person needs in order to live a good, praiseworthy, or admirable life. The virtues of a rural farming culture are not the same as the virtues of an urban commercial or trading culture, nor should they be. So virtues are messy. Virtue theories are messy.
If you embrace virtue theory, you say goodbye to the dream of finding one principle, one foundation, on which you can rest all of morality. You become a pluralist, as I've listed down there. And you also become a non-parsimonist. That is, of course parsimony's always valuable in sciences, but my experience is that we've sort of elevated Occam's Razor into Occam's Chainsaw. Which is, if you can possibly cut it away and still have it stand, do it. And I think, in especially moral psychology, we've grossly disfigured our field by trying to get everything down to one if we possibly can. So I think, if you embrace virtue ethics, at least you put less of a value on parsimony than moral psychologists normally do.
But what you get in return for this messiness is, you get the payoff for being a naturalist. That is, you get a moral theory that fits with what we know about human nature elsewhere. So, I often use the metaphor that the mind is like a rider on an elephant. The rider is conscious, controlled processes, such as reasoning. The elephant is the other 99 percent of what goes on in our minds, things that are unconscious and automatic.
Virtue theories are about training the elephant. Virtue theories are about cultivating habits, not just of behavior, but of perception. So, to develop the virtue of kindness, for example, is to have a keen sensitivity to the needs of other people, to feel compassion when warranted, and then to offer the right kind of help with a full heart.
Utilitarianism and deontology, by contrast, are not about the elephant at all. They are instruction manuals for riders. They say, "here's how you do the calculation to figure out the right thing to do, and just do it." Even if it feels wrong. "Tell the truth, even if it's going to hurt your friends," say some deontologists. "Spend less time and money on your children, so that you have more time and money to devote to helping children in other countries and other continents, where you can do more good." These may be morally defensible and logically defensible positions, but they taste bad to most people. Most people don't like deontology or utilitarianism.
So, why hasn't virtue ethics been the dominant approach? What happened to virtue ethics, which flourished in ancient Greece, in ancient China, and through the Middle Ages, and all the way up through David Hume and Ben Franklin? What happened to virtue ethics?
Well, if we were to write a history of moral philosophy, I think the next chapter would be called, "Attack of the Systemizers." Most of you know that autism is a spectrum. It's not a discrete condition. And Simon Baron-Cohen tells us that we should think about it as two dimensions. There's systemizing and empathizing. So, systemizing is the drive to analyze the variables in a system, and to derive the underlying rules that govern the behavior of a system. Empathizing is the drive to identify another person's emotions and thoughts, and to respond to these with appropriate emotion.
So, if you cross these two dimensions, you get a 2x2 space with four quadrants. And autism and Asperger's are, let's call it the bottom right corner of the bottom right quadrant. That is, very high on systemizing, very low on empathizing. People down there have sort of the odd behaviors and the mind-blindness that we know as autism or Asperger's.
The two major ethical systems that define Western philosophy were developed by men who either had Asperger's, or were pretty darn close. For Jeremy Bentham, the principal founder of utilitarianism, the case is quite strong. According to an article titled "Asperger's Syndrome and the Eccentricity and Genius of Jeremy Bentham," published in the Journal of Bentham Studies, (Laughter), Bentham fit the criteria quite well. I'll just give a single account of his character from John Stuart Mill, who wrote, "In many of the most natural and strongest feelings of human nature, he had no sympathy. For many of its graver experiences, he was altogether cut off. And the faculty by which one mind understands a mind different from itself, and throws itself into the feelings of that other mind was denied him by his deficiency of imagination."
For Immanuel Kant, the case is not quite so clear. He also was a loner who loved routine, feared change, and focused on his few interests to the exclusion of all else. And, according to one psychiatrist, Michael Fitzgerald, who diagnoses Asperger's in historical figures and shows how it contributed to their genius, Kant would be diagnosed with Asperger's. I think the case is not nearly so clear. I think Kant did have better social skills, more ability to empathize. So I wouldn't say that Kant had Asperger's, but I think it's safe to say that he was about as high as could possibly be on systemizing, while still being rather low on empathizing, although not the absolute zero that Bentham was.
Now, what I'm doing here, yes, it is a kind of an ad hominem argument. I'm not saying that their ethical theories are any less valid normatively because of these men's unusual mental makeup. That would be the wrong kind of ad hominem argument. But I do think that, if we're doing history in particular, we're trying to understand, why did philosophy and then psychology, why did we make what I'm characterizing as a wrong turn? I think personality becomes relevant.
And I think what happened is that we had these two ultra-systemizers in the late 18th and early 19th century, during the early phases of the Industrial Revolution, when Western society was getting WEIRDer, and we were in general shifting towards more systemized and more analytical thought. You had these two hyper-systemized theories, and especially people in philosophy just went for it, for the next 200 years, it seems. It was all, you know, utility, no, deontology; rights, harm.
And so, you get this very narrow battle of two different systemized groups, and virtue ethics--which fit very well with The Enlightenment Project; you didn't need God for virtue ethics at all--virtue ethics should have survived quite well. But it kind of drops out. And I think personality factors are relevant.
Because philosophy went this way, into hyper-systemizing, and because moral psychology in the 20th century followed them, referring to Kant and other moral philosophers, I think we ended up violating the two giant warning flags that I talked about, from these two BBS articles. We took WEIRD morality to be representative of human morality, and we've placed way too much emphasis on reasoning, treating it as though it was capable of independently seeking out moral truth.
I've been arguing for the last few years that we've got to expand our conception of the moral domain, that it includes multiple moral foundations, not just sugar and salt, and not just harm and fairness, but a lot more as well. So, with Craig Joseph and Jesse Graham and Brian Nosek, I've developed a theory called Moral Foundations Theory, which draws heavily on the anthropological insights of Richard Shweder.
Down here, I've just listed a very brief summary of it: that the five most important taste receptors of the moral mind are care/harm, fairness/cheating, loyalty/betrayal, authority/subversion, and sanctity/degradation, and that moral systems are like cuisines that are constructed from local elements to please these receptors.
So, I'm proposing, we're proposing, that these are the five best candidates for being the taste receptors of the moral mind. They're not the only five. There's a lot more. So much of our evolutionary heritage, of our perceptual abilities, of our language ability, so much goes into giving us moral concerns, the moral judgments that we have. But I think this is a good starting point. I think it's one that Hume would approve of. It uses the same metaphor that he used, the metaphor of taste.
So, in conclusion, I think we should pick up where Hume left off. We know an awful lot more than Hume did about psychology, evolution and neuroscience. If Hume came back to us today, and we gave him a few years to read up on the literature and get up to speed, I think he would endorse all of these criteria. I've already talked about what it means to be a naturalist, a nativist, an intuitionist, a pluralist, and a non-parsimonist.
I just briefly want to say, I think it's also crucial, as long as you're going to be a nativist and say, "oh, you know, evolution, it's innate," you also have to be a constructivist. I'm all in favor of reductionism, as long as it's paired with emergentism. You've got to be able to go down to the low level, but then also up to the level of institutions and cultural traditions and, you know, all kinds of local factors. A dictum of cultural psychology is that "culture and psyche make each other up." You know, we psychologists are specialists in the psyche. What are the gears turning in the mind? But those gears turn, and they evolved to turn, in various ecological and economic contexts. We've got to look at the two-way relations between psychology and the level above us, as well as the reductionist or neural level below us.
And then finally, the last line there. We've got to be very, very cautious about bias. I believe that morality has to be understood as a largely tribal phenomenon, at least in its origins. By its very nature, morality binds us into groups, in order to compete with other groups.
And as I said before, nearly all of us doing this work are secular Liberals. And that means that we're at very high risk of misunderstanding those moralities that are not our own. If we were judges working on a case, we'd pretty much all have to recuse ourselves. But we're not going to do that, so we've got to just be extra careful to seek out critical views, to study moralities that aren't our own, to consider, to empathize, to think about them as possibly coherent systems of beliefs and values that could be related to coherent, and even humane, human ways of living and flourishing.
So, that's my presentation. That's what I think the moral sciences should look like in the 21st century. Of course, I've created this presentation using my reasoning skills, and I know that my reasoning is designed only to help me find evidence to support this view. So, I thank you for all the help you're about to give me in overcoming my confirmation bias, by pointing out all the contradictory evidence that I missed.