Molly Crockett: "The Neuroscience of Moral Decision Making"

Molly Crockett: "The Neuroscience of Moral Decision Making"

HeadCon '14
Molly Crockett [11.18.14]

Imagine we could develop a precise drug that amplifies people's aversion to harming others; on this drug you won't hurt a fly, everyone taking it becomes like Buddhist monks. Who should take this drug? Only convicted criminals—people who have committed violent crimes? Should we put it in the water supply? These are normative questions. These are questions about what should be done. I feel grossly unprepared to answer these questions with the training that I have, but these are important conversations to have between disciplines. Psychologists and neuroscientists need to be talking to philosophers about this. These are conversations that we need to have because we don't want to get to the point where we have the technology but haven't had this conversation, because then terrible things could happen.

MOLLY CROCKETT is an associate professor in the Department of Experimental Psychology, University of Oxford; Wellcome Trust Postdoctoral Fellow, Wellcome Trust Centre for Neuroimaging. Molly Crockett's Edge Bio Page


THE NEUROSCIENCE OF MORAL DECISION MAKING

I'm a neuroscientist at the University of Oxford in the UK. I'm interested in decision making, specifically decisions that involve tradeoffs; for example, tradeoffs between my own self-interest and the interests of other people, or tradeoffs between my present desires and my future goals.

One thing that's always fascinated me, specifically about human decision making, is the fact that we have multiple conflicting motives in our decision process. And not only do we have these forces pulling us in different directions, but we can reflect on this fact. We can witness the tug of war that happens when we're trying to make a difficult decision. One thing that is great about our ability to reflect on this process is that it suggests that we can intervene somehow in our decisions. We can make better decisions—more self-controlled decisions, or more moral decisions.
 
The reason I've become interested in the neuroscience of decision making is that I have this sense that pulling apart the different moving parts of this process and looking under the hood will give us clues about where we might be able to intervene and shape our own decisions.

One case study for this is moral decision making. When we can see that there's a selfish option and an altruistic or cooperative option, we can reason our way through the decision, but there are also gut feelings about what's right and what's wrong. I've studied the neurobiology of moral decision making, specifically how different chemicals in our brains—neuromodulators—can shape the process of making moral decisions and push us one way or another when we're reasoning and deciding.

Neuromodulators are chemicals in the brain. There are a bunch of different neuromodulator systems that serve different functions. Events out in the world activate these systems and then they perfuse into different regions of the brain and influence the way that information is processed in those regions. All of you have experience with neuromodulators. Some of you are drinking cups of coffee right now. Many of you probably had wine with dinner last night. Maybe some of you have other experiences that are a little more interesting.

But you don't need to take drugs or alcohol to influence your neurochemistry. You can also influence your neurochemistry through natural events: stress influences your neurochemistry, as do sex, exercise, and changes in your diet. There are all these things out in the world that feed into our brains through these chemical systems. I've become interested in studying whether, if we change these chemicals in the lab, we can cause changes in people's behavior and decision making.

One thing to keep in mind about the effects of these different chemicals on our behavior is that the effects here are subtle. The effect sizes are really small. This has two consequences for doing research in this area. The first is that because the effect sizes are so small, the published literature on this is likely to be underpowered, and there are probably a lot of false positives out there. We heard earlier that there is a lot of thought about this, not just in psychology but across all of science, about how we can run better-powered experiments and create the kind of data that will tell us what's going on.
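
To put rough numbers on that worry, here is a quick back-of-the-envelope sketch (my own illustration, not something from the talk) using the standard normal-approximation formula for a two-group comparison; the effect sizes are the conventional small/medium/large benchmarks, not estimates from any particular study.

```python
from scipy.stats import norm

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group N for a two-sided, two-sample comparison
    with standardized effect size d (normal approximation)."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for the test
    z_beta = norm.ppf(power)            # quantile for the desired power
    return 2 * ((z_alpha + z_beta) / d) ** 2

# Hypothetical effect sizes: small, medium, large (Cohen's conventions)
for d in (0.2, 0.5, 0.8):
    print(f"d = {d}: ~{n_per_group(d):.0f} participants per group")
```

With a small effect (d = 0.2) this comes out to roughly 390 participants per group, far more than a typical pharmacology study recruits, which is why an underpowered literature is a reasonable concern.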

The other thing—and this is what I've been interested in—is because the effects of neuromodulators are so subtle, we need precise measures in the lab of the behaviors and decision processes that we're interested in. It's only with precise measures that we're going to be able to pick up these subtle effects of brain chemistry, which maybe at the individual level aren't going to make a dramatic difference in someone's personality, but at the aggregate level, in collective behaviors like cooperation and public goods problems, these might become important on a global scale.

How can we measure moral decision making in the lab in a precise way, and also in a way that we can agree is actually moral? This is an important point. One big challenge in this area is there's a lot of disagreement about what constitutes a moral behavior. What is moral? We heard earlier about cooperation—maybe some people think that's a moral decision but maybe other people don't. That's a real issue for getting people to cooperate.

First we have to pick a behavior that we can all agree is moral, and secondly we need to measure it in a way that tells us something about the mechanism. We want to have these rich sets of data that tell us about these different moving parts—these different pieces of the puzzle—and then we can see how they map onto different parts of the brain and different chemical systems.

What I'm going to do over the next 20 minutes is take you through my thought process over the past several years. I tried a bunch of different ways of measuring the effects of neurochemistry on what I thought, at one point, was moral decision making, but which turned out maybe not to be the best way to measure morality. And I'll show you how I tried to zoom in on more advanced and sophisticated ways of measuring the cognitions and emotions that we care about in this context.

When I started this work several years ago, I was interested in punishment and economic games that you can use to measure punishment—if someone treats you unfairly then you can spend a bit of money to take money away from them. I was interested specifically in the effects of a brain chemical called serotonin on punishment. The issues that I'll talk about here aren't specific to serotonin but apply to this bigger question of how can we change moral decision making.

When I started this work the prevailing view about punishment was that punishment was a moral behavior—a moralistic or altruistic punishment where you're suffering a cost to enforce a social norm for the greater good. It turned out that serotonin was an interesting chemical to be studying in this context because serotonin has this long tradition of being associated with prosocial behavior. If you boost serotonin function, this makes people more prosocial. If you deplete or impair serotonin function, this makes people antisocial. If you go by the logic that punishment is a moral thing to do, then if you enhance serotonin, that should increase punishment. What we actually see in the lab is the opposite effect. If you increase serotonin people punish less, and if you decrease serotonin people punish more.

That throws a bit of a spanner in the works of the idea that punishment is this exclusively prosocially minded act. And this makes sense if you just introspect into the kinds of motivations that you go through if someone treats you unfairly and you punish them. I don't know about you, but when that happens to me I'm not thinking about enforcing a social norm or the greater good, I just want that guy to suffer; I just want him to feel bad because he made me feel bad.

The neurochemistry adds an interesting layer to this bigger question of whether punishment is prosocially motivated, because in some ways it's a more objective way to look at it. Serotonin doesn't have a research agenda; it's just a chemical. We had all this data and we started thinking differently about the motivations of so-called altruistic punishment. That inspired a purely behavioral study where we give people the opportunity to punish those who behave unfairly towards them, but we do it in two conditions. One is a standard case where someone behaves unfairly to someone else and then that person can punish them. Everyone has full information, and the guy who's unfair knows that he's being punished.

Then we added another condition, where we give people the opportunity to punish in secret—hidden punishment. You can punish someone without them knowing that they've been punished. They still suffer a loss financially, but because we obscure the size of the stake, the guy who's being punished doesn't know he's being punished. The punisher gets the satisfaction of knowing that the bad guy is getting less money, but there's no social norm being enforced.

What we find is that people still punish a lot in the hidden punishment condition. Even though people will punish a little bit more when they know the guy who's being punished will know that he's being punished—people do care about norm enforcement—a lot of punishment behavior can be explained by a desire for the norm violator to have a lower payoff in the end. This suggests that punishment is potentially a bad way to study morality because the motivations behind punishment are, in large part, spiteful.

Another set of methods that we've used to look at morality in the lab and how it's shaped by neurochemistry is trolley problems—the bread and butter of moral psychology research. These are hypothetical scenarios where people are asked whether it's morally acceptable to harm one person in order to save many others.

We do find effects of neuromodulators on these scenarios and they're very interesting in their own right. But I've found this tool unsatisfying for the question that I'm interested in, which is: How do people make moral decisions with real consequences in real time, rather than in some hypothetical situation? I'm equally unsatisfied with economic games as a tool for studying moral decision making because it's not clear that there's a salient moral norm in something like cooperation in a public goods game, or charitable giving in a dictator game. It's not clear that people feel guilty if they choose the selfish option in these cases.

After all this I've gone back to the drawing board and thought about what the essence of morality is. There's been some work on this in recent years. One wonderful paper by Kurt Gray, Liane Young, and Adam Waytz argues that the essence of morality is harm, specifically intentional interpersonal harm—an agent harming a patient. Of course morality is more than this; absolutely, morality is more than this. But it would be hard to find a moral code that doesn't include some prohibition against harming someone else unless you have a good reason.

What I wanted to do was create a measure in the lab that can precisely quantify how much people dislike causing interpersonal harms. What we came up with was getting people to make tradeoffs between personal profits—money—and pain in the form of electric shocks that are given to another person.

What we can do with this method is calculate, in monetary terms, how much people dislike harming others. And we can fit computational models to their decision process that give us a rich picture of how people make these decisions: not just how much harm they're willing to deliver, but what precise value they place on harm to others relative to, for example, harm to themselves. What is the relative certainty or uncertainty with which they're making those decisions? How noisy are their choices? If we're dealing with monetary gains or losses, how does loss aversion factor into this?
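
To make that concrete, here is a minimal sketch of the kind of model one might fit to such choices. It is my own illustration under simple assumptions, not the actual model from any particular study: a single harm-aversion parameter (here called kappa) trades money against shocks, and a softmax choice rule with an inverse temperature (beta) captures how noisy the choices are. The trial values are invented.

```python
import numpy as np
from scipy.optimize import minimize

def choice_prob(m_harm, s_harm, m_safe, s_safe, kappa, beta):
    """Probability of choosing the 'harmful' option over the 'safe' one.

    Subjective value trades money (m) against shocks (s):
        V = (1 - kappa) * m - kappa * s
    kappa in [0, 1] is harm aversion (higher = shocks weigh more);
    beta > 0 is an inverse temperature (lower beta = noisier choices).
    """
    v_harm = (1 - kappa) * m_harm - kappa * s_harm
    v_safe = (1 - kappa) * m_safe - kappa * s_safe
    return 1.0 / (1.0 + np.exp(-beta * (v_harm - v_safe)))

def neg_log_likelihood(params, trials):
    """Negative log-likelihood of observed choices given (kappa, beta)."""
    kappa, beta = params
    nll = 0.0
    for m_h, s_h, m_s, s_s, chose_harm in trials:
        p = np.clip(choice_prob(m_h, s_h, m_s, s_s, kappa, beta), 1e-9, 1 - 1e-9)
        nll -= np.log(p if chose_harm else 1 - p)
    return nll

# Toy data: (money_harm, shocks_harm, money_safe, shocks_safe, chose_harm)
trials = [(10, 5, 6, 0, 1), (8, 10, 7, 0, 0), (12, 3, 5, 0, 1), (9, 8, 8, 0, 0)]

fit = minimize(neg_log_likelihood, x0=[0.5, 1.0], args=(trials,),
               bounds=[(0.0, 1.0), (0.01, 20.0)])
kappa_hat, beta_hat = fit.x
print(f"harm aversion kappa = {kappa_hat:.2f}, choice noise 1/beta = {1/beta_hat:.2f}")
```

The point of fitting a model like this, rather than simply counting harmful choices, is that it separates how much someone values the other person's pain (kappa) from how consistently they act on that valuation (beta), which is the kind of "noise around the indifference point" discussed below.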

We can get a more detailed picture of the data and of the decision process from using methods like these, which are largely inspired by work on non-social decision making and computational neuroscience where a lot of progress has been made in recent years. For example, in foraging environments how do people decide whether to go left or right when there are fluctuating reward contingencies in the environment?

What we're doing is importing those methods into the study of moral decision making, and a lot of interesting stuff has come out of it. As you might expect, there is individual variation in decision making in this setting. Some people care about avoiding harm to others and other people are like, "Just show me the money, I don't care about the other person." I even had one subject who was almost certainly high on the psychopathy scale. When I explained to him what he had to do he said, "Wait, you're going to pay me to shock people? This is the best experiment ever!" Whereas other people are uncomfortable and even distressed by this. This is capturing something real about moral decision making.

One thing that we're seeing in the data is that people who seem to be more averse to harming others are slower when they're making their decisions. This is an interesting contrast to Dave's work, where the more prosocial people are faster. Of course there are issues that we need to work out about correlation versus causation in response times and decision making, but there are some questions here in thinking about the differences between a harm context and a helping context. It may be that the heuristics that play out in a helping context come from learning about what is good and latch onto neurobiological systems that approach rewards and get invigorated when there are rewards around, in contrast to neurobiological systems that avoid punishments and slow down or freeze when there are punishments around.

In the context of tradeoffs between profit for myself and pain for someone else, it makes sense that people who are maximizing the profit for themselves are going to be faster because if you're considering the harm to someone else, that's an extra computational step you have to take. If you're going to factor in someone else's suffering—the negative externality of your decisions—you have to do that computation and that's going to take a little time.

In this broader question of the time course of moral decision making, there might be a sweet spot where on the one hand you have an established heuristic of helping that's going to make you faster, but at the same time considering others is also a step that requires some extra processing. This makes sense.

When I was developing this work in London, I was walking down the street one day checking my phone, as we all do, and this kid on a bike in a hoodie came by and tried to steal my phone. Luckily he didn't get it; it just crashed to the floor. He was an incompetent thief. In thinking about what his thought process was during that time, he wasn't thinking about me at all. He had his eye on the prize. He had his eye on the phone; he was thinking about his reward. He wasn't thinking about the suffering that I would feel if I lost my phone. That's a broader question to think about in terms of the input of mentalizing to moral decision making.

Another observation is that people who are nicer in this setting seem to be more uncertain in their decision making. If you look at the parameters that describe uncertainty, you can see that people who are nicer seem to be more noisy around their indifference point. They waver more in these difficult decisions.

So I've been thinking about uncertainty and its relationship to altruism and social decision making, more generally. One potentially fruitful line of thought is that social decisions—decisions that affect other people—always have this inherent element of uncertainty. Even if we're a good mentalizer, even if we're the best possible mentalizer, we're never going to fully know what it is like to be someone else and how another person is going to experience the effects of our actions on them.

One thing it might make sense to do, if we want to coexist peacefully with others, is to simulate how our behavior is going to affect them, but to err on the side of caution. We don't want to impose an unbearable cost on someone else, so we think, "Well, I might dislike this outcome a certain amount, but maybe my interaction partner is going to dislike it a little more, so I'm just going to add a little extra safety—a margin of error—that's going to move me in the prosocial direction." We're seeing this in the context of pain, but this could apply to any cost—risk or time cost.

Imagine that you have a friend who is trying to decide between two medical procedures. One procedure produces the most desirable outcome, but it also has a high complication rate or a high mortality rate. The other procedure doesn't achieve as good an outcome, but it's much safer. Suppose your friend says to you, "I want you to choose which procedure I'm going to have. I want you to choose for me." First of all, most of us would be very uncomfortable making that decision for someone else. Second, my intuition is that I would definitely go for the safer option, because if something bad happened after the risky choice, I would feel terrible.

This idea that we can't directly access someone else's utility function is rather old; it goes back to the 1950s and the work of John Harsanyi on what he called interpersonal utility comparisons. How do you compare one person's utility to another person's utility? This problem is important, particularly in utilitarian ethics, because if you want to maximize the greatest good for the greatest number, you have to have some way of measuring the greatest good for each of those numbers.

The challenge of doing this was recognized by the father of utilitarianism, Jeremy Bentham, who said, "'Tis vain to talk of adding quantities which after the addition will continue to be as distinct as they were before; one man's happiness will never be another man's happiness: a gain to one man is no gain to another: you might as well pretend to add 20 apples to 20 pears."

This problem has still not been solved. Harsanyi has done a lot of great work on this but what he ended up with—his final solution—was still an approximation that assumes that people have perfect empathy, which we know is not the case. There's still room in this area for exploration.

The other thing about uncertainty is that, on the one hand, it could lead us towards prosocial behavior, but on the other hand, there's evidence that uncertainty about outcomes, and about how other people will react to those outcomes, can license selfish behavior. Uncertainty can also be exploited for personal gain, in the service of self-interest.

Imagine you're the CEO of a company. You're trying to decide whether to lay off some workers in order to increase shareholder value. If you want to do the cost benefit analysis, you have to calculate what's the negative utility for the workers of losing their jobs and how does that compare to the positive utility of the shareholders for getting these profits? Because you can't directly access how the workers are going to feel, and how the shareholders are going to feel, there's space for self-interest to creep in, particularly if there are personal incentives to push you one direction or the other.

There's some nice work that has been done on this by Roberto Weber and Jason Dana who have shown that if you put people in situations where outcomes are ambiguous, people will use this to their advantage to make the selfish decision but still preserve their self-image as being a moral person. This is going to be an important question to address. When does uncertainty lead to prosocial behavior because we don't want to impose an unbearable cost on someone else? And when does it lead to selfish behavior because we can convince ourselves that it's not going to be that bad?

These are things we want to be able to measure in the lab, and to map different brain processes—different neurochemical systems—onto these different parameters that all feed into decisions. We're going to see progress over the next several years, because in non-social computational neuroscience there are smart people mapping out how basic decisions work. All people like me have to do is import those methods into the study of more complex social decisions. There's going to be a lot of low-hanging fruit in this area over the next few years.

Once we figure out how all this works—and I do think it's going to be a while—what do we do with that knowledge? I've sometimes been misquoted as saying that morality pills are just around the corner, and I assure you this is not the case. It's going to be a very long time before we're able to intervene in moral behavior, and that day may never even come. This is such a complicated problem because working out how the brain does this is the easy part. The hard part is what to do with that. This is a philosophical question. If we figure out how all the moving parts work, then the question is: should we intervene, and if so, how should we intervene?

Imagine we could develop a precise drug that amplifies people's aversion to harming others; on this drug you won't hurt a fly, everyone taking it becomes like Buddhist monks. Who should take this drug? Only convicted criminals—people who have committed violent crimes? Should we put it in the water supply? These are normative questions. These are questions about what should be done. I feel grossly unprepared to answer these questions with the training that I have, but these are important conversations to have between disciplines. Psychologists and neuroscientists need to be talking to philosophers about this. These are conversations that we need to have because we don't want to get to the point where we have the technology but haven't had this conversation, because then terrible things could happen.

The last thing that I'll say is it's also interesting to think about the implications of this work, the fact that we can shift around people's morals by giving them drugs. What are the implications of this data for our understanding of what morality is?

There's increasing evidence now that if you give people testosterone or influence their serotonin or oxytocin, this is going to shift the way they make moral decisions. Not in a dramatic way, but in a subtle yet significant way. And because the levels and function of our neuromodulators are changing all the time in response to events in our environment, that means that external circumstances can play a role in what you think is right and what you think is wrong.

Many people may find this deeply uncomfortable, because we like to think of our morals as being core to who we are and one of the most stable things about us. We like to think of them as being written in stone. If this is not the case, then what are the implications for our understanding of who we are, and for what we should think about in terms of enforcing norms in society? You might think the solution is that we should just try to make our moral judgments from a neutral stance, like the placebo condition of life. That doesn't exist. Our brain chemistry is shifting all the time, so it's very unsteady ground that we can't find our footing on.

At the end of the day that's how I try to avoid being an arrogant scientist who's like, "I can measure morality in the lab." I have deep respect for the instability of these things and these are conversations that I find deeply fascinating.


THE REALITY CLUB

L.A. PAUL: I had a question about how you want to think about these philosophical issues. Sometimes they get described in terms of autonomy. You said that if we could discover some chemical that would improve people's moral capacities, do we put it in the water? The question I have is a little bit related to imaginability. In other words, take the guy who tried to steal your phone. The thought was: if he were somehow better able to imagine how I would respond, he would perhaps make a better moral judgment. There's an interesting normative versus descriptive question there, because on the one hand, it might be easier to justify putting the drug in the water if it made people better at grasping true moral facts.

What if it just made them better at imagining various scenarios so that they acted in a morally better way, but in fact it had no connection at all to reality; it just made their behavior better? It seems like it's important to make that distinction even with the work that you're doing. Namely, are you focusing on how people actually act, or are you focusing on the psychological facts? Which one are we prioritizing, and which one are we using to justify whatever kinds of policy implications?

CROCKETT: This goes back to the question of do we want to be psychologists or economists if we're confronted with a worldly, all-powerful being. I am falling squarely in the psychologist camp in that it's so important to understand the motivations behind why people do the things they do -- because if you change the context, then people might behave differently. If you're just observing behavior and you don't know why that behavior occurs, then you could make incorrect predictions.

Back to your question, one thought that pops up is it's potentially less controversial to enhance capabilities that people think about as giving them more competence in the world.

PAUL: There's interesting work on organ donors in particular. When people are recruiting possible organ donors and they're looking at the families who have to make the decision, it turns out that you get better results by encouraging the family of a potential donor, say a daughter killed in a car accident, to imagine that the recipient of the organ will be 17 and also loves horses. It could just be some dude with a drug problem who's going to get the organ, but the measured results for the donating family are much better if that family engages in this fictitious imagining, even though it has no connection at all to the truth. It's not always simple. In other words, the moral questions sometimes come apart from the desired empirical result.

CROCKETT:  One way that psychologists and neuroscientists can contribute to this discussion is to be as specific and precise as possible in understanding how to shape motivation versus how to shape choices. I don't have a good answer about the right thing to do in this case, but I agree that it is an important question.

 

DAVID PIZARRO: I have a good answer. This theme was something that was emerging at the end of Dave's talk, about promoting behavior versus understanding the mechanisms. There is a way—even if you are a psychologist and you have an interest in the mechanisms—in which you could say, "I'm going to take B.F. Skinner's learning approach and say that what I care about is essentially the frequency of the behavior. What are the things that I have to do to promote the behavior that I want to promote?"

 

You can get these nice, manipulated contingencies in the environment between reward and punishment. Does reward work better than punishment?

I want to propose that we have two very good intuitions. One, which should be discarded when we're being social scientists, is about what we want our kids to be like. I want my kid to be good for the right reasons. In other words, I want her to develop a character that I can be proud of and that she can be proud of. I want her to donate to charity not because she's afraid that if she doesn't people will judge her poorly, but because she genuinely cares about other people.

When I'm looking at society, and as more and more of the work we do has implications for society, we should set aside those concerns. That is, we should be comfortable saying that there is one question about what the right reasons and the right motivations are in a moral sense. There's another question we should ask from a public policy perspective: what will maximize the welfare of my society? I don't give a rat's ass why people are doing it!

It shouldn't make a difference if you're doing it because you're ashamed (like Jennifer might be talking about later): "I want to sign up for the energy program because I will get mocked by my peers," or if you're doing it because you realize this is a calling that God gave to you—to insert this little temperature reducer during California summers. That "by any means necessary" approach that seems so inhuman to us as individuals is a perfectly appropriate strategy to use when we're making decisions for the public.

CROCKETT: Yes, that makes sense and it's a satisficing approach rather than a maximizing approach. One reason why we care about the first intuition so much is because in the context in which we evolved, which was small group interactions, someone who does a good thing for the right reasons is going to be more reliable and more trustworthy over time than someone who does it for an externally incentivized reason.

PIZARRO: And it may not be true, right? It may turn out to be wrong.

DAVID RAND: That's right, but I think it’s still true that it’s not just about when you were in a small group—hunter-gatherer—but in general: if you believe something for the right reason, then you’ll do it even if no one is watching. That creates a more socially optimal outcome than if you only do it when someone is watching.

PIZARRO: It’s an empirical question though. I don't know if it’s been answered. For instance, the fear of punishment...

RAND: We have data, of a flavor. If you look at people who cooperate in repeated prisoner's dilemmas, they're no more or less likely to cooperate in one-shot games, and they're no more likely to give in a dictator game. When the rule is in place, everybody cooperates regardless of whether they're selfish or not. When no incentive is there, selfish people go back to being selfish.

SARAH-JAYNE BLAKEMORE: There's also data from newsagents in the UK, where sometimes you can take a newspaper and put money in a slot: if you put a couple of eyes above the money slot, people are more likely to pay their dues than if you don't put any eyes there.

PIZARRO: That’s certainly not acting for the right reason. That can’t be the right reason.

RAND: You were bringing up the issue of thinking about the consequences for yourself versus the other person. When we're thinking about how these decisions get made, there are two stages that are distinct but get lumped together a lot, conceptually and measurement-wise. You have to understand what the options are, and then once you know what the options are, you have to choose which one you prefer. It seems to me that automatic versus deliberative processing has opposite roles in those two domains. Obviously, to understand the problem you have to think about it. But if you're selfish, you don't need to spend time thinking about the decision, because it's obvious what to do. We try to separate those things by explaining the decision beforehand, when you're not constrained. Then when it comes time to make the decision, you put people under time pressure.

CROCKETT: That can explain what's going on, and that's a good point, because these ideas about uncertainty and moral wiggle room are going to play the biggest role in the first part—in the construing of the problem. Is this a moral decision or is this not a moral decision? Potentially also playing a big role is the idea you were talking about earlier: how do people internalize what the right thing to do is? How do you establish that this is the right thing to do?

We should talk more about this because, methodologically, this is important to separate out.

HUGO MERCIER: Can I say something about this issue of mentalizing? You're right to draw attention to the importance of mentalizing in making moral decisions or moral judgments. But the data seem to indicate that we're not very good at it, that we have biases, and that we tend to do poorly when we think about what might have caused other people's behavior.

The reason is that in everyday life, as contrasted with many experimental settings, we can talk to people. If you do something that I think is bad, we know from data on how people explain themselves that you're going to spontaneously tell me why you did it and try to justify yourself. I don't have to do the work of trying to figure out why you did this, or what kind of excuse you might have had, because you're going to do it for me. Then we set up these experiments in which you don't have this feedback, and it's just weird. It's not irrelevant, because there are many situations in which that happens as well, but we still have to keep in mind that it is unnatural. In most of these games and most of these experiments, if you could just let people talk, they would find a good solution. Take the thing with the shocks: if the people could talk with each other, you could say, "Well, I'm happy to take the shock if you want to share the money." Again, I'm not saying it's not interesting to do the experiments at all, but we have to keep in mind that it's kind of weird.

CROCKETT: That's true to a certain extent. A lot of moral decisions, particularly in the cooperation domain out in the real world, do usually involve some sort of communication. Increasingly, however, a lot of moral decisions are individual in the sense that they involve someone who is not there. If you're deciding whether or not to buy a product that is fair trade, or if you're a politician making a decision about a health policy, that decision is going to affect hundreds, thousands, even millions of people who are not there. Some of the most wide-reaching moral decisions are made by an individual who does not see those who are going to bear the consequences of that decision. It's important to study both.

MERCIER: Maybe realizing that the context in which these mechanisms of mentalizing evolved was one in which you had a huge amount of feedback can help us to better understand what happens when we don't have this feedback.

CROCKETT: Maybe that's why we see selfish behavior: we're used to having an opportunity to justify it, and now there are many cases in which we don't have to justify it.

FIERY CUSHMAN: One of the things that's unique and cool about your research is the focus on neuromodulators, whereas most research on how the brain processes morality has been on neural computation. Obviously, those things are interrelated. I guess I've always been confused, if that's the right word, about what neuromodulators are for. It seems like neural computation can be incredibly precise. You can get a Seurat or a Vermeer out of neural computation, whereas neuromodulators give you Rothkos and Pollocks.

Why does the brain have such blunt tools? How does thinking about neuromodulators in particular, as a very blunt but also very wide-ranging tool, inform your thinking about their role in moral judgment, as opposed, again, to neural computation?

CROCKETT: It's important to distinguish between the tools we have as researchers for manipulating neuromodulators, which are incredibly blunt, and the way these systems work in the brain, which is extremely precise. The serotonin system, for example, has at least 17 different kinds of receptors. Those receptors do different things, and they're distributed differentially in the brain. Some types of receptors are found only subcortically, and other receptors have their highest concentration in the medial prefrontal cortex. There's a high degree of precision in how these chemicals can influence brain processing in more local circuits.

To answer the first part of your question: these systems exist because cognition is not a one-size-fits-all kind of program. Sometimes you want to be focused on local details to the exclusion of the bigger picture. Other times you want to be able to look at the bigger picture to the exclusion of small details. Whether you want to be processing in one way or the other is going to depend profoundly on the environmental context.

If you're in a very stressful situation, you want to be focusing your attention on how to get out of that situation. You don't want to be thinking about what you're going to have for breakfast tomorrow. Conversely if things are chilled out, that's the time when you can engage in long-term planning. There's evidence that things like stress, environmental events, events that have some important consequence for the survival of the organism are going to activate these systems which then shape cognition in such a way that's adaptive. That's the way that I think about neuromodulators.

Serotonin is interesting in this context because it's one of the least well understood neuromodulators in terms of how this works. In the stress example I was talking about, noradrenaline and cortisol and those neuromodulators are fairly well understood. Noradrenaline is stimulated by stress; it increases the signal-to-noise ratio in the prefrontal cortex and focuses your attention.

Serotonin does tons of different things but it is one of the very few, if not the only major neuromodulator that can only be synthesized if you continually have nutritional input. You make serotonin from tryptophan, which is an amino acid that you can only get from the diet. You can only get it from eating foods that have tryptophan, which is most foods, but especially high protein foods. If you're in a famine, you're not going to be making as much serotonin.

This is interesting in an evolutionary context because when does it make sense to cooperate and care about the welfare of your fellow beings? When resources are abundant, then that's when you should be building relationships. When resources are scarce, maybe you want to be looking out for yourself, although there are some interesting wrinkles in there that Dave and I have talked about before where there could be an inverted U-shaped function where cooperation is critical in times of stress.

Perhaps one function of serotonin is to shape our social preferences in such a way that's adaptive to the current environmental context.