2008: WHAT HAVE YOU CHANGED YOUR MIND ABOUT? WHY?

Distinguished Professor of Risk Engineering, New York University School of Engineering; Author, Incerto (Antifragile, The Black Swan...)
The Irrelevance of Probability

I spent a long time believing in the centrality of probability in life and advocating that we should express everything in terms of degrees of credence, with probability one as the special case of total certainty and zero as that of total impossibility. Critical thinking, knowledge, beliefs: everything needed to be probabilized. Until I came to realize, twelve years ago, that I was wrong in this notion that the calculus of probability could be a guide to life and help society. Indeed, it is only in very rare circumstances that probability (by itself) is a guide to decision making. It is a clumsy academic construction, extremely artificial, and nonobservable. Probability is backed out of decisions; it is not a construct to be handled in a standalone way in real-life decision making. It has caused harm in many fields.

Consider the following statement: "I think that this book is going to be a flop. But I would be very happy to publish it." Is the statement incoherent? Of course not: even if the book were very likely to be a flop, it may make economic sense to publish it (for someone with deep pockets and the right appetite), since one cannot ignore the small possibility of a handsome windfall, or the even smaller possibility of a huge windfall. We can easily see that when it comes to small odds, decision making no longer depends on the probability alone. It is the pair probability times payoff (or a series of payoffs), the expectation, that matters. On occasion the potential payoff can be so vast that it dwarfs the probability, and these are usually real-world situations in which probability is not computable.
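A back-of-the-envelope sketch of the publisher's reasoning, with numbers that are entirely invented for illustration, shows how a tiny probability of a windfall can dominate the expectation:

```python
# Hypothetical book deal; all probabilities and payoffs are invented for illustration.
outcomes = [
    (0.90,   -100_000),   # flop: lose the advance and production costs
    (0.09,    500_000),   # modest success: a handsome windfall
    (0.01, 10_000_000),   # runaway bestseller: a huge windfall
]

expected_value = sum(p * payoff for p, payoff in outcomes)
print(expected_value)  # -90,000 + 45,000 + 100,000 = 55,000: positive despite 90% odds of a flop
```

The book is a probable flop and a good bet at the same time; the decision tracks the expectation, not the probability.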

Consequently, there is a difference between knowledge and action. You cannot naively rely on scientific statistical knowledge (as its practitioners define it) or on what epistemologists call "justified true belief" for non-textbook decisions. Statistically oriented modern science is typically based on right/wrong at a set confidence level, stripped of consequences. Would you take a headache pill if it were deemed effective at a 95% confidence level? Most certainly. But would you take the pill if it were merely established to be "not lethal" at a 95% confidence level? I hope not.

When I discuss the impact of the highly improbable ("black swans"), people make the automatic mistake of thinking that the message is that these black swans are necessarily more probable than conventional methods assume. They are mostly less probable. Consider that, in a winner-take-all environment such as the arts, the odds of success are low, since there are few successful people, but the payoff is disproportionately high. So, in a fat-tailed environment, what I call "Extremistan", rare events are less frequent (their probability is lower), but they are so consequential that their contribution to the total pie is far more substantial.
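A rough Monte Carlo sketch of that "total pie" effect, with the distributions and parameters (an exponential versus a Pareto with tail index 1.2) chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# Toy models of "success per participant": thin-tailed versus fat-tailed.
mediocristan = rng.exponential(scale=1.0, size=n)   # thin tails
extremistan = rng.pareto(1.2, size=n) + 1.0         # fat tails: Pareto, tail index 1.2

def top_share(x, q=0.99):
    """Share of the total pie captured by the top 1% of observations."""
    cutoff = np.quantile(x, q)
    return x[x >= cutoff].sum() / x.sum()

print(top_share(mediocristan))  # a few percent of the total
print(top_share(extremistan))   # a large, unstable fraction of the total
```

In the thin-tailed world the top 1% of observations holds a few percent of the total; in the fat-tailed one it holds a large and unstable share, which is the sense in which the extremes carry the pie.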

[Technical note: the distinction is, simply, between the raw probability P[x > K], i.e. the probability of exceeding K, and E[x | x > K], the expectation of x conditional on x > K. It is the difference between the zeroth moment and the first moment. The latter is what usually matters for decisions, and it is the (conditional) first moment that needs to be the core of decision making. What I saw in 1995 was that an out-of-the-money option value increases when the probability of the event decreases, making me feel that everything I thought until then was wrong.]
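A small numerical illustration of that zeroth-versus-first-moment distinction, with the two distributions (an exponential and a Pareto with tail index 1.1) and the 1% exceedance level chosen purely for illustration: the probability of crossing the threshold is identical, yet what to expect once it is crossed differs by an order of magnitude.

```python
import numpy as np

# Calibrate two toy loss models so that P[x > K] = 1% in both.
p = 0.01

# Thin tail: exponential with rate 1.  P[x > K] = exp(-K), E[x | x > K] = K + 1.
K_thin = np.log(1 / p)                       # ~4.6
beyond_thin = K_thin + 1.0                   # ~5.6

# Fat tail: Pareto with tail index alpha = 1.1 and minimum 1.
# P[x > K] = K**(-alpha), E[x | x > K] = alpha * K / (alpha - 1).
alpha = 1.1
K_fat = (1 / p) ** (1 / alpha)               # ~66
beyond_fat = alpha * K_fat / (alpha - 1)     # ~720

print(beyond_thin / K_thin)   # ~1.2: the excursion barely overshoots the threshold
print(beyond_fat / K_fat)     # 11.0: crossing the threshold means going far beyond it
```

Two worlds with the exact same "probability of the event" call for very different decisions.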

What causes severe mistakes is that, outside the special cases of casinos and lotteries, you almost never face a single probability with a single (and known) payoff. You may face, say, a 5% probability of an earthquake of magnitude 3 or higher, a 2% probability of one of magnitude 4 or higher, and so on. The same goes for wars: you face risks of different levels of damage, each with a different probability. "What is the probability of war?" is a meaningless question for risk assessment.
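A sketch of what such a layered risk assessment looks like, using the earthquake probabilities from the paragraph and filling in the rest (the magnitude-5 figure and all damage amounts are invented):

```python
# Exceedance probabilities: P(magnitude >= 3) = 5%, P(magnitude >= 4) = 2%,
# P(magnitude >= 5) = 0.5% (this last figure, and all damages, are invented).
exceedance = [(3, 0.050), (4, 0.020), (5, 0.005)]
damage = {3: 1_000_000, 4: 10_000_000, 5: 100_000_000}

expected_loss = 0.0
for i, (magnitude, p_exceed) in enumerate(exceedance):
    p_next = exceedance[i + 1][1] if i + 1 < len(exceedance) else 0.0
    p_band = p_exceed - p_next              # probability of landing in this severity band
    expected_loss += p_band * damage[magnitude]

print(f"{expected_loss:,.0f}")  # 30,000 + 150,000 + 500,000 = 680,000
```

The rarest band dominates the expected loss; a single number answering "what is the probability of an earthquake?" would hide exactly that.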

So it is wrong to just look at a single probability of a single event in cases of richer possibilities (like focusing on a question such as "what is the probability of losing a million dollars?" while ignoring that, conditional on losing more than a million dollars, you may have an expected loss of twenty million, one hundred million, or just one million). Once again, real life is not a casino with simple bets. This is the error that helps the banking system go bust with astonishing regularity — I have shown that institutions exposed to negative black swans, such as banks and some classes of insurance ventures, have almost never been profitable over long periods. The problem with the current, illustrative subprime mess is not so much that the "quants" and other pseudo-experts in bank risk management were wrong about the probabilities (they were), but that they were severely wrong about the different layers of depth of potential negative outcomes. For instance, Morgan Stanley lost about ten billion dollars (so far) despite allegedly having foreseen the subprime crisis and executed hedges against it; they just did not realize how deep it would go and kept open exposure to the big tail risks. This is routine: a friend who went bust during the crash of 1987 told me, "I was betting that it would happen, but I did not know it would go that far."
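A toy simulation of the "blow up with regularity" pattern, with all numbers invented: a strategy that earns a small amount almost every year and takes a rare, deep loss looks profitable on any short window while losing over the long run.

```python
import numpy as np

rng = np.random.default_rng(7)

# Invented payoff profile: +10 with probability 98%, -600 with probability 2%, per year.
years, n_paths = 50, 100_000
gains = np.where(rng.random((n_paths, years)) < 0.98, 10.0, -600.0)

profitable_years = (gains > 0).mean()          # ~0.98: nearly every individual year looks fine
paths_ahead = (gains.sum(axis=1) > 0).mean()   # ~0.36: most 50-year paths end up behind
print(profitable_years, paths_ahead)
```

The single number "probability of a loss this year" (here 2%) says nothing about how far down the loss goes once it arrives.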

The point is mathematically simple but does not register easily. I've enjoyed giving math students the following quiz (to be answered intuitively, on the spot). In a Gaussian world, the probability of exceeding one standard deviation is roughly 16%. What are the odds of exceeding it under a distribution with fatter tails (but the same mean and variance)? The right answer: lower, not higher — the number of deviations drops, but the few that take place matter more. It was entertaining to see that most of the graduate students got it wrong. Those who are untrained in the calculus of probability have a far better intuition of these matters.
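The quiz can be checked in a few lines; a Student-t with 3 degrees of freedom, rescaled to unit variance, is just one convenient stand-in for a fatter-tailed distribution with the same mean and variance:

```python
import numpy as np
from scipy import stats

nu = 3.0
scale = np.sqrt((nu - 2) / nu)   # Student-t variance is nu / (nu - 2); rescale it to 1

p_gaussian = stats.norm.sf(1.0)                            # ~0.159: the familiar ~16%
p_fat_tail = stats.t.sf(1.0, df=nu, loc=0.0, scale=scale)  # ~0.09: lower, not higher
print(p_gaussian, p_fat_tail)
```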

Another complication is that, just as probability and payoff are inseparable, one cannot extract another complicated component, utility, from the decision-making equation. Fortunately the ancients, with all their tricks and accumulated wisdom in decision making, knew a lot of that, at least better than modern-day probability theorists. Let us stop systematically treating them as if they were idiots. Most texts blame the ancients for their ignorance of the calculus of probability — the Babylonians, Egyptians, and Romans in spite of their engineering sophistication, and the Arabs in spite of their taste for mathematics, are blamed for not having produced one (the latter being, incidentally, a myth, since Umayyad scholars used relative word frequencies to determine the authorship of holy texts and to decrypt messages). The reason was foolishly attributed to theology, lack of sophistication, lack of something people call the "scientific method", or belief in fate. The ancients just made decisions in a more ecologically sophisticated manner than modern, epistemology-minded people. They integrated skeptical Pyrrhonian empiricism into decision making. As I said, belief (i.e., epistemology) and action (i.e., decision making), the way they are practiced, are largely not consistent with one another.

Let us apply the point to the current debate on carbon emissions and climate change. Correspondents keep asking me whether the climate worriers are basing their claims on shoddy science, and whether, owing to nonlinearities, their forecasts are marred by so large a possible error that we should ignore them. Now, even if I agreed that it were shoddy science; even if I agreed with the statement that the climate folks were most probably wrong, I would still opt for the most ecologically conservative stance — leave planet Earth the way we found it. Consider the consequences of the very remote possibility that they may be right, or, worse, the even more remote possibility that they may be extremely right.