To Err Is Primate

Laurie R. Santos [7.27.11]

Introduction by Paul Bloom

Why do house sellers, professional golfers, experienced investors, and the rest of us succumb to strategies that make us systematically go wrong?

 

INTRODUCTION

People are fascinated by research into the mental lives of monkeys and apes—but not always for the right reasons. What they usually want to know is whether these animals share certain important traits with humans, such as syntax, social reasoning, or altruism. Just how special are we? This question is irresistible, and isn’t going to go away. But the best work in this area is a lot more subtle than this.

This brings me to my colleague Laurie Santos, one of the best young scientists in the field of psychology. She does experiments with non-human primates—including capuchins in her laboratory at Yale and rhesus macaques at a field site on Cayo Santiago—as a way to develop and test subtle theories of the nature and evolution of certain central human capacities. Much of her recent research focuses on biases in reasoning and decision-making; she is one of the founders of the exciting new field of comparative behavioral economics.

Santos asks hard questions and makes important discoveries. Her writing and thinking display an easy facility with a range of literatures; she is living proof of a comment once made by Jerry Fodor, that the best interdisciplinary conversations are those that occur inside a single head.

— Paul Bloom
 

LAURIE R. SANTOS is an associate professor of psychology at Yale University and the director of its Comparative Cognition Laboratory. She received her BA (1997) in psychology and biology and her PhD (2003) in psychology from Harvard University. She has investigated a number of topics in comparative cognition, including the evolutionary origins of irrational decision making and prosocial behavior. She is the recipient of Harvard’s Goethals Award for Teaching Excellence, Yale’s Greer Memorial Prize for Outstanding Junior Faculty, and the Stanton Prize from the Society for Philosophy and Psychology for outstanding contributions to interdisciplinary research.

PAUL BLOOM is the Brooks and Suzanne Ragen Professor of Psychology at Yale University. His most recent book is How Pleasure Works.

Excerpted from Future Science: Essays From The Cutting Edge, Edited by Max Brockman (Vintage Books, 2011)


TO ERR IS PRIMATE

[LAURIE SANTOS:]  It was the final shot of the tournament for the world’s number one player. After three tense rounds in the 2009 Barclays Tournament, Tiger Woods was now one putt away from another tournament win. His fairway shot was nearly perfect—his ball had landed just seven feet from the hole. Making this putt would earn him a birdie on the last hole and a hefty payoff. He practically beamed as he stepped up to a putt he had sunk thousands of times before. After the tournament, he would be asked if he had approached this particular shot any differently. “Absolutely not,” he would emphasize. “Every putt you hit is the same process. Go up there. Be committed to what you’re going to do. Hopefully it goes in.” Only this time it didn’t. A stunned crowd watched in disbelief as the ball skimmed past the hole. Tiger’s shot was just a bit off, but it cost him the lead. He took another putt, made par, and lost nearly a million dollars in winnings.

For professional golfers, every putt is a risky decision, one that can have big financial consequences. A putt is a reasonably simple goal-directed action, yet each stroke requires more than just motor skill. Good golfers sink putts because they're also good decision makers. Every putt requires a host of tough choices. Besides having to estimate how the ball will break, a player must choose between playing it safe—going with a softer stroke that will mean an easier following shot if things go badly—and going for the hole at the risk of overshooting. As the above example illustrates, even the best golfer in the world can make errors.

Those of us who aren’t golfers are not immune to the difficulty of making risky decisions. Though we (usually) play for smaller stakes than Tiger, we too spend our days navigating risky choices that can have significant consequences for our health, bank account, and overall well-being. The question of how best to make decisions has fascinated humankind for centuries. For economists, the answer has always been relatively simple: making good decisions is a simple act of comparison shopping. A smart decision maker should start by listing all the possible choices for a given decision and then estimating the average payoff of each one. Once the decision maker has all this information handy, he just needs to pick the choice with the highest expected payoff. Simple, right? Unfortunately, in practice the strategy of maximizing your expected return runs into a number of thorny issues.
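To make the economists' recipe concrete, here is a minimal sketch of the expected-payoff rule; the options, probabilities, and dollar amounts are invented purely for the example:

```python
# A minimal sketch of the textbook decision rule: enumerate the options,
# compute each one's probability-weighted (expected) payoff, and pick the
# maximum. The choices, probabilities, and payoffs are invented for the example.

options = {
    "safe bet":  [(1.0, 100)],               # $100 for sure
    "risky bet": [(0.5, 250), (0.5, 0)],     # a coin flip between $250 and nothing
}

def expected_payoff(outcomes):
    """Sum of probability-weighted payoffs for one option."""
    return sum(p * payoff for p, payoff in outcomes)

for name, outcomes in options.items():
    print(f"{name}: expected payoff = ${expected_payoff(outcomes):.0f}")

best = max(options, key=lambda name: expected_payoff(options[name]))
print("Choice with the highest expected payoff:", best)   # "risky bet" ($125 vs. $100)
```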

First, most decisions don’t come with a finite set of nicely lined-up choices. For our biggest decisions in life—finding a mate, choosing a career, and so on—it’s often hard to know exactly how many options are at our disposal. In addition, we often have limited information about how the various choices we can identify will actually affect our happiness. For all these reasons, real decision making usually fails to live up to economists’ lofty standards. Given the difficulty of maximizing payoffs, it’s no surprise that we make lots of mistakes all the time. What is surprising is that we don’t just make random mistakes; we make systematic ones. We don’t just experience a catastrophic cognitive meltdown when facing hard choices; we instead systematically switch on a set of simple (though mostly irrational) strategies to weigh those choices.

To witness one of your own irrational strategies in action, consider the following scenario: Imagine that you are an economic adviser to the president of the United States. Your goal is to choose a course of action that will reduce the rate of housing foreclosures for the 3 million homeowners currently in danger. Two plans are on the table. If Plan A is implemented, the government will be able to save 1 million homes. If Plan B is implemented, there’s a one-in-three chance that the government will be able to save 3 million homes and a two-in-three chance that no homes will be saved. What’s your advice?

You probably suggested that the president go with Plan A. Any plan guaranteeing that at least some people will keep their homes seems like the better option. Fair enough. But what if the options are slightly different? Imagine a choice between two new plans, C and D. If Plan C is implemented, 2 million people will lose their homes for sure. If Plan D is adopted, there’s a two-in-three chance that 3 million people will lose their homes and a one-in-three chance that no one will lose his home. Here you might advise the president to go with Plan D. It’s a riskier option, but it also offers the possibility that no homes will be lost. When the psychologists Daniel Kahneman and Amos Tversky tested undergraduates using similar scenarios, most of their subjects showed the same pattern: they preferred Plan A to Plan B and Plan D to Plan C.1 The problem with this pattern of decision making is that the two sets of plans are identical. Plans A and C are statistically indistinguishable (since the 3 million homes are at stake in all scenarios, a result in which 1 million people keep their homes is identical to a result in which 2 million people lose theirs). The same is true of Plans B and D. As Kahneman and Tversky observed, small changes in the wording of a problem have a big effect on our preferences. When plans are presented in terms of the number of houses, lives, or dollars saved, people tend to play it safe, but when plans get us thinking in terms of houses, lives, or dollars lost, we switch to riskier tactics.
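Writing out the expected number of homes saved makes the equivalence explicit. A quick check, using only the figures from the scenario above:

```python
# Expected homes saved (in millions) under each plan from the scenario above.
# Three million homes are at risk in every case, so "losing 2 million" and
# "saving 1 million" describe the same outcome.

at_risk = 3.0

plans = {
    "A": [(1.0, 1.0)],                     # save 1 million for sure
    "B": [(1/3, 3.0), (2/3, 0.0)],         # one-in-three chance of saving all 3 million
    "C": [(1.0, at_risk - 2.0)],           # lose 2 million for sure = save 1 million
    "D": [(2/3, at_risk - 3.0),            # two-in-three chance of losing all 3 million
          (1/3, at_risk - 0.0)],           # one-in-three chance of losing none
}

for name, outcomes in plans.items():
    expected_saved = sum(p * saved for p, saved in outcomes)
    print(f"Plan {name}: expected homes saved = {expected_saved:.2f} million")
# Every plan comes out to 1.00 million homes saved in expectation.
```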

Why does a simple change in wording so critically influence our decisions? Kahneman and Tversky discovered that the culprits are two psychological biases: reference dependence and loss aversion. The first of these is our tendency to see things not in absolute terms but relative to some status quo. Most people think about their decisions not in terms of their overall happiness or total net worth but as gains or losses relative to some reference point, usually the here and now. A $20 parking ticket won’t have a significant effect on our life savings, but it’s still a negative change from our current wealth level, and thus we tend to find the event salient. The parking-ticket example also highlights the second psychological bias at work: loss aversion. We generally avoid situations in which we could incur a loss. Indeed, Kahneman and Tversky’s studies have shown that we work twice as hard to prevent being in the red as we do to seek out opportunities to land in the black.
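One standard way to capture these two biases in a formula is a value function defined over changes from a reference point, with losses weighted roughly twice as heavily as gains. The sketch below is illustrative rather than anything from the studies described here; the linear form and the factor of two are simplifying assumptions echoing the "twice as hard" finding just mentioned:

```python
# An illustrative, prospect-theory-flavored value function. The linear form
# and the loss-aversion factor of 2 are simplifying assumptions made for this
# sketch, echoing the "twice as hard" finding described above.

def subjective_value(outcome, reference=0.0, loss_aversion=2.0):
    """Judge an outcome relative to a reference point (reference dependence),
    weighting losses more heavily than equivalent gains (loss aversion)."""
    change = outcome - reference
    if change >= 0:
        return change                   # a gain counts at face value
    return loss_aversion * change       # a loss stings about twice as much

# The $20 parking ticket, judged against today's wealth as the reference point:
print(subjective_value(-20))   # -40.0: it feels about twice as bad as a $20 gain feels good
```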

Reference dependence and loss aversion appear to wreak havoc in a number of real-world situations. Investors tend to view the value of a stock not in absolute terms but relative to a salient reference point: what they paid when they bought it. Averse to the loss of selling below the purchase price, many investors irrationally hold on to stocks while they’re dropping in value.2 These biases also cause problems in the housing market; people are averse to selling for less than what they paid, which has led some families to decline such offers.3 Indeed, these biases are so widespread that they affect the scores of professional golfers. A golfer’s only true measure of success is his or her final score, but each hole has a salient reference point: par. The economists Devin Pope and Maurice Schweitzer analyzed more than 1.6 million PGA tour putts to determine whether players tended to perform differently when putting for birdie and eagle (i.e., strokes that put them under par) than when faced with comparable putts for par and bogey (i.e., strokes that could put them over par). Consistent with loss aversion, players were more accurate when putting for par and bogey, meticulous in their attempt to minimize the “loss” of going one or two strokes over par. Players putting for birdie or eagle, in contrast, were about 2 percent more likely to miss the hole. This small percentage of errors adds up fast—just ask Tiger Woods. Tiger’s loss aversion statistic was one of the highest on the tour; that is, he was 3.6 percent more likely to miss his birdie putts than his par putts. Indeed, this bias may have been what cost him the 2009 Barclays on the eighteenth hole.4

Why do house sellers, professional golfers, experienced investors, and the rest of us succumb to strategies that make us systematically go wrong? A few years ago, my Yale colleagues Venkat Lakshminarayanan and Keith Chen and I decided to try to get to the bottom of this question. After reviewing examples in which people succumb to these biases time and again, we started thinking that reference dependence and loss aversion might be more fundamental than economists had previously thought. This led us to a somewhat radical idea: perhaps these biases are a natural part of the way we view our choices, a result of a long evolutionary legacy. If so, we hypothesized, humans might not be the only species to use these poor decision-making strategies. Rather than investigating the biases of human subjects, we decided to test whether similar errors showed up in the decision making of one of our primate relatives: the capuchin monkey, whose last common ancestor with humans lived around 35 million years ago.

Our question was whether capuchins would show humanlike patterns of reference dependence and loss aversion, even though they lacked experience with the kinds of economic problems that typically lead human decision makers astray. Our first challenge was figuring out how to demonstrate loss aversion and reference dependence in monkeys. Capuchins aren’t all that good at investing in stocks or playing golf, so the way ahead was unclear. In the end, we decided to give the monkeys some money and see whether they could be taught to use it.5

The idea of teaching monkeys to use money might seem daunting, but the process took only a few months. We began by introducing them to a token economy. The capuchin “tokens” were coin-sized metal discs that could be traded with experimenters for food. Although the monkeys didn’t know what to do with the tokens at first, within weeks they were handing tokens to experimenters and holding out their hands for the food. We then allowed the capuchins to use the tokens in a real economy. Each monkey was given a wallet of tokens before entering a “market,” in which two “salesmen”—research assistants—offered it two different kinds of food at two different prices. The monkeys could spend their tokens to buy whichever treat they wanted. Like human shoppers, our monkeys quickly became skilled at maximizing their token value. They bought more food from experimenters who gave them a better deal. They bought more food during “sales,” when prices were cheaper. They carefully weighed the risks of dealing with unreliable salesmen who switched their behavior over time. Our monkeys’ performance so closely mirrored that of human consumers that the data fit perfectly with formal economic models of human market choice.

We were now ready to ask the real question of interest: Would monkeys’ market behavior be affected by loss aversion and reference dependence? To study this, we set up two situations: in both, the salesmen didn’t always hand over the number of apple pieces they had originally displayed—sometimes the monkeys got more pieces than they had been offered, sometimes fewer. We hypothesized that the monkeys might make their choices based not just on how many apple pieces they managed to get overall but also on how many they got relative to how many they originally saw displayed. In other words, we predicted that the monkeys would use the original offer as a reference point and, like professional golfers thinking about par or birdie, make their decision based on whether the payoff seemed like a loss or gain relative to that reference point.

First, each of the monkeys got to choose between dealing with Salesman A or Salesman B. Salesman A would always offer a monkey one piece of apple and, when the monkey made its payment of a token, add a second piece as a bonus. Salesman B was more of a risk. He also began by showing a monkey one piece of apple, but his apple payoff changed across trials: on some trials, after receiving the monkey’s token, he added a large bonus of two apple pieces, while on other trials he gave no bonus at all. Just like humans asked to choose between the government plans A and B, our monkeys bought more food from Salesman A than from Salesman B. Like people, they preferred to play it safe when dealing with bonuses.

We then introduced the monkeys to two new salesmen, C and D, whose payoffs felt like losses. Salesman C always delivered a small but consistent loss—he showed the monkeys three apple pieces but handed over only two in return for the token. Salesman D was riskier. He began by showing the monkeys three pieces and sometimes gave them all three but other times gave them only one. As predicted, our monkeys took greater risks when their token payments felt like losses; that is, they consistently preferred to trade with risky Salesman D over reliable (but shortchanging) Salesman C.
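In expected terms the four salesmen were matched. Assuming, for the sake of this sketch, that each risky salesman's two outcomes were equally likely, every trade averages two apple pieces, so any systematic preference has to come from the framing rather than the payoff:

```python
# Expected apple pieces per token for each salesman, assuming (as a
# simplification for this sketch) that the two risky salesmen's outcomes
# were equally likely. Every deal averages two pieces, so the monkeys'
# preferences track how the offer is framed, not how much food it yields.

salesmen = {
    "A (safe bonus)":  [(1.0, 2)],             # shows 1 piece, always adds 1 more
    "B (risky bonus)": [(0.5, 3), (0.5, 1)],   # shows 1 piece, adds 2 or nothing
    "C (safe loss)":   [(1.0, 2)],             # shows 3 pieces, always hands over 2
    "D (risky loss)":  [(0.5, 3), (0.5, 1)],   # shows 3 pieces, hands over 3 or 1
}

for name, outcomes in salesmen.items():
    average = sum(p * pieces for p, pieces in outcomes)
    print(f"Salesman {name}: {average:.1f} apple pieces on average")
```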

Overall, our monkeys behaved just like humans tested in Kahneman and Tversky’s scenarios: they thought about the market in terms of arbitrary reference points and responded to payoffs differently depending on whether the payoffs appeared to be gains or losses relative to those reference points. In this and other studies, monkeys seemed not to consider their choices in absolute terms. Moreover, they made decisions differently when dealing with losses than when dealing with gains. These findings suggest that the biases that human decision makers show may be far more fundamental than originally thought. The biased strategies that cost Tiger Woods millions of dollars each year may be at least 35 million years old.

The discovery that loss aversion and reference dependence may be deeply evolved psychological tendencies has important implications for our ability to overcome these biases. For years, economists have assumed that decision makers would stop using irrational strategies in the face of enough negative financial feedback. Unfortunately, there is growing evidence that people don’t drop these strategies as soon as they become costly. Pope and Schweitzer estimate that loss aversion causes even experienced professional golfers to lose more than a million dollars a year, yet nearly all golfers on the tour exhibit these biases. Similarly, investors tend to hold on to losing stocks even after repeatedly losing money by doing so. Our capuchin findings suggest an answer to why these biases might be so hard to overcome: reference dependence and loss aversion may be as deeply ingrained as some of our other evolved cognitive tendencies. Just consider how difficult it is to switch off our natural fondness for cheesecake, our squeamishness about bugs, our disgust at a pile of feces. When natural selection builds in a strategy, it’s hard to get rid of. If reference dependence and loss aversion are phylogenetically ancient enough to be shared with capuchin monkeys—as our work suggests—it’s unlikely that the human species will overcome these tendencies anytime soon.

How, then, should we deal with the fact that our choices are at the mercy of deeply ingrained irrational strategies? One way, advocated by the behavioral economist Richard Thaler, is to harness these biases for our benefit.6 We may be at the mercy of reference dependence, but there’s lots of flexibility in what counts as a reference point. Using subtle changes in wording and framing, we can switch how we instinctively think about a problem and make the most rational option feel more intuitive. Thaler has used this idea to develop a better retirement savings plan, one that increases people’s savings contributions automatically after they’ve received a pay raise. By taking retirement contributions before people have a chance to adjust to their new paycheck’s reference point, Thaler’s plan avoids loss aversion and allows people to feel better about saving more.7 Similar reference-point changes have been used to increase other good behaviors. The psychologist Noah Goldstein observed that hotel guests are more likely to reuse bath towels when they are informed that most previous guests chose to do so. The actions of others provide a powerful reference point against which we strive to avoid seeming less environmentally correct.8
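A toy version of the arithmetic behind Thaler's plan, with invented salary and percentage figures, shows why the escalating contributions never feel like a loss: because the savings rate steps up only when a raise arrives, take-home pay never drops below its previous reference point.

```python
# Toy illustration (invented numbers) of the "save more tomorrow" logic:
# the contribution rate steps up only when a raise arrives, so take-home
# pay never falls below last year's level and the extra saving never
# registers as a loss relative to the old reference point.

salary = 50_000.0
save_rate = 0.04
previous_take_home = salary * (1 - save_rate)

for year, raise_pct in enumerate([0.03, 0.03, 0.03], start=1):
    salary *= 1 + raise_pct      # the raise arrives...
    save_rate += 0.01            # ...and the savings rate climbs with it
    take_home = salary * (1 - save_rate)
    direction = "up" if take_home >= previous_take_home else "down"
    print(f"Year {year}: saving {save_rate:.0%}, take-home ${take_home:,.0f} ({direction} from last year)")
    previous_take_home = take_home
```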

With newfound insights about the phylogenetic origins of our irrational decision-making strategies in place, social scientists are now poised to discover new ways we can harness our evolved biases to further modern decision-making agendas—such as making better financial choices and perhaps even increasing our happiness. Even professional golfers have made some headway in this regard. Reference dependence may have cost Tiger a win at the 2009 Barclays, but it also gave him a way to feel better about his poor performance. When interviewed about his score on the eighteenth, Tiger was quick to highlight an alternative reference point for the press: his final putt wasn’t the worst final putt in the tournament. “It’s frustrating when you misread a putt that bad,” he said. But one of the players he tied with “did the same thing. His putt broke more.” Changing your reference point may be an evolutionarily old strategy, but it’s also a smart one. And as any golfer can tell you, if you look carefully you can always find a worse putt.


1 D. Kahneman and A. Tversky, “The Framing of Decisions and the Psychology of Choice,” Science 211 (1981), 453–58.

 

2 T. Odean, “Are Investors Reluctant to Realize Their Losses?,” Journal of Finance 53 (1998), 1775–98.

 

3 D. Genesove and C. Mayer, “Loss Aversion and Seller Behavior: Evidence from the Housing Market,” Quarterly Journal of Economics 116 (2001), 1233–60.

 

4 D. G. Pope and M. Schweitzer, “Is Tiger Woods Loss Averse? Persistent Bias in the Face of Experience, Competition, and High Stakes” (2009), http://ssrn.com/abstract=1419027.

 

5 M. K. Chen, V. Lakshminarayanan, and L. R. Santos, “The Evolution of Our Preferences: Evidence from Capuchin Monkey Trading Behavior,” Journal of Political Economy 114 (2006), 517–37.

 

6 R. H. Thaler and C. R. Sunstein, Nudge: Improving Decisions About Health, Wealth, and Happiness (New Haven, Conn.: Yale University Press, 2008).

 

7 R. H. Thaler and S. Benartzi, “Save More Tomorrow: Using Behavioral Economics to Increase Employee Saving,” Journal of Political Economy 112 (2004), S164–87. For other helpful suggestions about how to use your biases to your advantage, see Thaler’s blog, http://nudges.wordpress.com/

 

8 N. J. Goldstein, R. B. Cialdini, and V. Griskevicius, “A Room with a Viewpoint: Using Social Norms to Motivate Environmental Conservation in Hotels,” Journal of Consumer Research 35 (2008), 472–82.

 


 
