DANIEL GILBERT: Economic decisions are inherently affective forecasts. Economists believe that people engage in economic transactions in order to 'maximize their utility.' Now, for psychologists the word utility isn't particularly meaningful unless you are talking about gas and electricity. Psychologists argue that utility is actually a stand-in for something like happiness or satisfaction—some subjective, hedonic state of the decision-maker. That sounds a bit squishy to modern economists, who often confuse utility with wealth, but how could it be otherwise?
People engage in economic transactions in order to get things that they believe will provide them with positive emotional experiences, and wealth is nothing more than an "experience credit" that can be used to attain those experiences in the future. So rational economic behavior requires that we look into the future and figure out what will provide that experience and what won't. As it turns out, people make systematic errors when they do this, which is why their economic decisions are so often suboptimal.
The problem lies in how we imagine our future hedonic states. We are the only animals that can peer deeply into our futures—the only animals that can travel mentally through time, preview a variety of futures, and choose the one that will bring us the greatest pleasure and/or the least pain. This is a remarkable adaptation—which, incidentally, is directly tied to the evolution of the frontal lobe—because it means that we can learn from mistakes before we make them. We don't have to actually have gallbladder surgery or lounge around on a Caribbean beach to know that one of these is better than the other. We may do this better than any other animal, but our research suggests that we don't do it perfectly. Our ability to simulate the future and to forecast our hedonic reactions to it is seriously flawed, and people are rarely as happy or unhappy as they expect to be.
What kinds of errors and mistakes do people make? The first thing to note is that psychologists who study errors of judgment are only interested in systematic errors. There's a difference between an error and a systematic error. If you're standing in front of a dartboard and you're trying to hit the bull's-eye, you are bound to miss sometimes, but your errors will be randomly distributed around the middle of the dartboard. The mere fact that you can't hit the bull's-eye every time is not particularly interesting or unusual, and the mere fact that people are inaccurate in predicting their hedonic reactions to future events is not interesting or unusual either. But if every time you missed the bull's-eye you made a particular kind of error—for example, if all of your misses were twenty degrees to the left—then something interesting and unusual might be happening and we might start to wonder what it was.
Perhaps you have a visual deficit, perhaps the dart is badly weighted, perhaps there is a strong air current in the room. Systematic errors beg for scientific explanations, and as it turns out, the errors that people make when they try to predict their emotional futures are quite systematic. Specifically, people tend to overestimate the impact of future events. That is, they predict that future events will have a more intense and more enduring hedonic impact than they actually do. We call this the impact bias.
Let me give you a couple of real-world examples of this bias. We've done dozens of studies in both the laboratory and the field, and the general strategy of the research is really very simple: We ask people to predict how they will feel minutes, days, weeks, months, or even years after some future event occurs, and then we measure how they actually do feel after that event occurs. If the two numbers differ systematically, then we have one of those interesting and unusual systematic errors I mentioned.
We've seen the impact bias in just about every context we've studied. For example, we've studied numerous elections over the last few years, and voters invariably predict that if their candidate wins they're going to be happy for months, and if their candidate loses they'll be unhappy for months. In fact, their happiness is barely influenced by electoral outcomes. We see the same pattern when we look at the dissolution of romantic relationships.
People predict that they will be very unhappy for a very long time after a romantic relationship dissolves, but the fact is that they are usually back to their baseline in a relatively short time—a much shorter time than they predicted. Professors expect to be happier for years after getting tenure than after being denied tenure, but within a brief time the two groups are equally happy. Please understand that I'm not saying that these events had no impact. Of course promotions make us feel good and divorces make us feel bad! What I'm saying is that whatever impact these events have, it is demonstrably smaller and less enduring than the impact the people who experienced them expected them to have.
Now, notice something about these events: They are remarkably ordinary. We aren't asking people to tell us how they'll feel after a Martian invasion. Most voters have voted and won before, most lovers have loved and lost before. For the most part, the events we study are events that people have experienced many times in their lives—events about which they should be quite expert—which makes their inaccuracy all the more curious and all the more interesting.
The question, then, is not whether there is an impact bias, because that has been amply demonstrated both by our lab and by others. The question is why? Why are we such strangers to ourselves? There are a couple of different answers to this question. Most robust phenomena in nature are multiply determined, which is to say that when something happens all the time there are probably a lot of independent mechanisms making it happen. That's what we've found with the impact bias. Let me tell you about a few of the mechanisms that seem to give rise to the impact bias.
First, people have a tremendous talent for changing their views of events so that they can feel better about them. We're not immediately delighted when our wife runs away with another guy, but in fairly short order most of us start to realize that "she was never really right for me" or that "we didn't have that much in common." Our friends snicker and say that we are rationalizing—as if these conclusions were wrong simply because they are comforting. In fact, rationalization doesn't necessarily mean self-delusion. These conclusions may actually have been right all along, and rationalization may be the process of discovering what was always true but previously unacknowledged. But it really doesn't matter from my perspective whether these conclusions are objectively true or not.
What matters is that human beings are exceptionally good at discovering them when it is convenient for them to do so. Shakespeare wrote that "there is nothing either good or bad, but thinking makes it so," and in fact, thinking is a remarkable tool that allows us to change our views of the world in order to change our emotional reactions to the world. Once we discover how wrong our wife was for us, her departure is transformed from a trauma to a blessing.
Now, it's not big news that people are good at this. What is news is that people don't know they're good at this. Rationalization is largely an unconscious process. We don't wake up in the morning and say, "Today I'm going to fool myself." Rather, soon after a bad event occurs, unconscious processes are activated and these processes begin to generate different ways of construing the event. Thoughts such as "Maybe I was never really in love" seem to come to mind all by themselves, and we feel like the passive recipients of a reasonable suggestion. Because we don't consciously experience the cognitive processes that are creating these new ways of thinking about the event, we don't realize they will occur in the future.
One of the reasons why we think bad things will make us feel bad for a long time is because we don't realize that we have this defensive system—something like a psychological immune system, if you will. If I were to ask you to predict how healthy you would be if you encountered a cold germ and you didn't know that you had a physical immune system, you'd expect to get very sick and perhaps even die.
Similarly, when people predict how they're going to feel in the face of adversity, not knowing they have a psychological immune system leads them to expect more intense and enduring dissatisfaction than they will actually experience. We have several studies demonstrating this point. For example, if you ask subjects in an experiment to predict how they will feel a few minutes after getting negative feedback about their personalities from a clinician or a computer, they expect to feel awful—and they expect to feel equally awful in both cases.
But when you actually give them that feedback, they feel slightly disappointed but not awful. Moreover, they feel much less disappointed when the feedback comes from a computer than from a clinician. Why? Because it is much easier to rationalize feedback from a computer than from a clinician. After all, what does a machine know? What's interesting is that subjects don't realize in prospect that they will do this. Results such as these suggest that people just aren't foreseeing their opportunities for rationalization when they predict their future happiness.
Consider another mechanism that causes the impact bias. I spend a lot of time asking people to imagine how they would feel a year after their child died (as you can imagine, this makes me very popular at parties). Everybody gives the same answer, of course, which is some form of "I would be totally devastated." Then I ask them what they did to come to that conclusion, and they'll almost always report that they had a horrifying mental image of being at a funeral at which their child is being buried, or of standing in the child's room looking at an empty crib, etc. These horrifying images serve as the basis for their predictions, which, as it turns out, are wrong. The clinical literature suggests that people who lose a child are not usually "thoroughly devastated" a year later. The event has lasting repercussions, of course, but what is remarkable about the people who experience it is just how well they usually do. As your grandmother said, life goes on.
So why do people mispredict their reactions to tragedies like this one?
A mental image captures one moment of a single event. But one's happiness a year after the event is influenced by much more than the event itself. A lot happens in a year—there are birthday parties, school plays, promotions, love-making, dental appointments, hot fudge sundaes, and so on. These things aren't nearly as important as the tragedy, of course, but they are real, there are a lot of them, and together they have an impact that forecasters tend not to consider.
When we're trying to predict how happy we will be in a future that contains Event X, we tend to focus on Event X and forget about all the other events that also populate that future—events that tend to dilute the hedonic impact of Event X. In a sense, we are slaves to the focus of our own attention. For example, in one study we asked college students to predict how happy or unhappy they would be a few days after their home team won or lost a football game, and they expected the game to have a large impact on their hedonic state. But when we simply asked them to name a dozen other things that would happen in those days before they made their predictions, the game had far less impact on their predictions. In other words, once they thought about how well-populated the future was, they realized that the game was just one of many sources of happiness and that its impact would be diluted by others.
When you study errors such as these, it is only natural to wonder how they might be avoided, and people are always asking me if I would like to develop programs to improve people's affective forecasting accuracy. Before we rush out to develop such programs, we should ask whether the impact bias is something we want to live without. Errors in human judgment are logical violations—if you say you'll feel 7 on a 1-to-10 scale and you actually end up feeling 5, then you've made a mistake. But is that mistake a bad thing?
The fact is that errors can have adaptive value. For example, perhaps it is important for organisms to believe they would be thoroughly devastated by the loss of their offspring, and the fact that this isn't actually true is beside the point. What may matter is that the organism thinks it is true and acts accordingly. Perhaps the best way to think of an error in judgment is as a mosquito in an ecosystem. You see the darn pest and your first inclination is to ask, how do we get rid of these? So you spray DDT and you kill all the mosquitoes, and then you find out that the mosquitoes were at the bottom of a food chain: the fish ate the mosquitoes, and the frogs ate the fish, and the bears ate the frogs, and now the entire ecology is devastated. Similarly, errors in human judgment may be playing important roles that scientists don't see.
Many economists believe that affective forecasting errors are impediments to rational action and hence should be eliminated—just as we would all agree that illiteracy or innumeracy are bad things that deserve to be eradicated. But cognitive errors may be more like optical illusions than they are like illiteracy. The human visual system is susceptible to a variety of optical illusions, but if someone offered to surgically restructure your eyes and your visual cortex so that parallel lines no longer appeared to converge on the horizon, you should run as far and fast as possible.
I'm interested in learning how people can become better affective forecasters, but not because I believe that people should become better affective forecasters. My job as a scientist is to find and explain these errors and illusions, and it is up to each individual to decide how they want to use our findings.
With that said, our research does suggest that there is a simple antidote to affective forecasting errors. Consider this. There are two ways to make a prediction about how you're going to feel in the future. The first is to close your eyes and imagine that future—to simulate it in your own mind and preview your own hedonic reaction. That's the kind of affective forecasting we've studied extensively, and what we now know is that the process of projecting oneself into the future is a process that is fraught with error. But there's a second way to make these kinds of forecasts, namely, to find somebody who's already experiencing that future and observe how they actually feel.
If you were trying to decide whether you should take job X or job Y, you might try to imagine yourself in each of them, but you might instead observe people who have job X and job Y and simply see how happy they are. What we've discovered is that (a) when people do this, they make extremely accurate affective forecasts, and (b) no one does this unless you force them to!
Try this thought experiment: You're going to go on a vacation to a tropical island. It's offered at a very good price, and you have to decide whether you're willing to pay. You are offered one of two pieces of information to help you make your decision. Either you can have a brochure about the hotel and the recreational activities on the island, or you can find out how much a randomly selected traveler who recently spent time there liked his or her experience. Which would you prefer? In studies we've done that are modeled on this thought experiment, roughly 100% of the people prefer the kind of information contained in the brochure. After all, who the hell wants to hear from some random guy when they can look at the brochure and judge for themselves?
Nonetheless, if you actually give people one of these two pieces of information, they more accurately predict their own happiness when they see the random traveler's report than when they see the brochure. Why? Because the brochure enables you to simulate what the island might be like and how much you'd enjoy it, but as I've mentioned, these sorts of predictions are susceptible to a wide variety of errors.
On the other hand, another person's report enables you to avoid these errors because it allows you to base your predictions on real experience rather than imaginary experience. If another person liked the island, the odds are that you will like it too. There's a delicious irony here, which is that the information we need to predict how we'll feel in the future is usually right in front of us in the form of other people. But because individuals believe so much in their own uniqueness—because we think we're so psychologically different from others—we refuse to use the information that's right before our eyes.
If you want to be a better affective forecaster, then, you would do well to base your forecasts on the actual experiences of real people who've been in the situations you are only imagining. The more similar to you the person is, the more informative their experience will be, of course. But what's amazing is that even the experience of a randomly selected person provides a better basis for forecasting than does your own imagination.
If you actually look at the correlates of happiness across the human population, you learn a few important things. First of all, wealth is a poor predictor of happiness. It's not a useless predictor, but it is quite limited. The first $40,000 or so buys you almost all of the happiness you can get from wealth. The difference between earning nothing and earning $20,000 is enormous—that's the difference between having shelter and food and being homeless and hungry.
But economists have shown us that after basic needs are met, there isn't much 'marginal utility' to increased wealth. In other words, the difference between a guy who makes $15,000 and a guy who makes $40,000 is much bigger than the difference between the guy who makes $100,000 and the guy who makes $1,000,000. Psychologists, philosophers, and religious leaders are a little too quick to say that money can't buy happiness, and that really betrays a failure to understand what it's like to live in the streets with an empty stomach. Money makes a big difference to people who have none.
On the other hand, once basic needs are met, further wealth doesn't seem to predict further happiness. So the relationship between money and happiness is complicated, and definitely not linear. If it were linear, then billionaires would be a thousand times happier than millionaires, who would be a hundred times happier than professors. That clearly isn't the case.
On the other hand, social relationships are a powerful predictor of happiness—much more so than money is. Happy people have extensive social networks and good relationships with the people in those networks. What's interesting to me is that while money is weakly and complexly correlated with happiness, and social relationships are strongly and simply correlated with happiness, most of us spend most of our time trying to be happy by pursuing wealth. Why?
Individuals and societies don't have the same fundamental need. Individuals want to be happy, and societies want individuals to consume. Most of us don't feel personally responsible for stoking our country's economic engine; we feel personally responsible for increasing our own well-being. These different goals present a real dilemma, and society cunningly solves it by teaching us that consumption will bring us happiness.
Society convinces us that what's good for the economy is good for us too. This message is delivered to us by every magazine, television, newspaper, and billboard, at every bus stop, grocery store, and airport. It finds us in our cars, it's made its way onto our clothing. Happiness, we learn, is just around the corner and it requires that we consume just one more thing. And then just one thing more after that. So we do, we find out that the happiness of consumption is thin and fleeting, and rather than thinking to ourselves, "Gosh, that promise of happiness-by-consumption was a lie," we instead think, "Gosh, I must not have consumed enough and I probably need just one small upgrade to my stereo, car, wardrobe, or wife, and then I'll be happy."
We live in the shadow of a great lie, and by the time we figure out that it is a lie we are closing in on death and have become irrelevant consumers, and a new generation of young and relevant consumers takes our place in the great chain of shopping.
Do I make all these affective forecasting errors myself? You bet I do. Because of the research I do, I occasionally glimpse life from the experimenter's point of view, but most of the time I'm just another one of life's subjects and I do all the same things that everyone does. I make the same mistakes the other subjects make, and if there is any difference between us it is that I am dimly aware of my mistakes as I make them.
But awareness isn't enough to stop me. Affective forecasting errors are a bit like perceptual illusions in this respect. Someone shows you a neat illusion and you say, "Wow, it looks like the black rectangle is floating above the white one even though it's really not." But that awareness doesn't make the illusion go away. Similarly, you can know at an intellectual level that an affective forecast is wrong, but that in and of itself doesn't change the fact that it feels so damn right. For example, my girlfriend is a consultant who has to live in different cities five days a week, and I am absolutely convinced that if she would just find a job in Cambridge and be home with me at night, I would be deliriously happy forevermore. I am as convinced as anyone that my big wombassa is just around the corner.