The notion of robustness became my recent obsession. How to be robust? It's not clear; it's the hard problem — and that's my problem.
THE HARD PROBLEM [5.17.10]
On April 10th, I attended the "Hardest Problems" Symposium at Harvard, inspired by "Hilbert's Problems" — 23 problems presented by the German mathematician David Hilbert in 1900 which were unsolved at the time. The day-long event was organized and hosted by Stephen Kosslyn, a psychologist who is Dean of Social Science in the Faculty of Arts and Sciences at Harvard University — and a frequent Edge contributor (see "What Shape Are a German Shepherd's Ears?").
Speakers included: Nick Bostrom, Susan Carey, Nicholas Christakis, James Fowler, Roland Fryer, Claudia Goldin, Gary King, Emily Oster, Ann Swidler, Nassim Taleb, and Richard Zeckhauser.
Nassim Taleb's "specialty and fear and obsession is with small probabilities. We don't quite understand small probabilities." Taleb, author of The Black Swan, notes that "the fundamental problem of small probabilities is that rare events don't show up in samples, because they are rare. So when someone makes a statement that this event in the financial markets should happen every ten thousand years, visibly they are not making a statement based on empirical evidence, or computation of the odds, but based on what? On some model, some theory."
"...our random variables became more and more complex. We cannot escape it. We can become more robust. There are techniques to become more robust. The notion of robustness became my recent obsession. How to be robust? It's not clear; it's the hard problem — and that's my problem."
NASSIM NICHOLAS TALEB, essayist and former mathematical trader, is Distinguished Professor of Risk Engineering at New York University’s Polytechnic Institute. He is the author of Fooled by Randomness and the international bestseller The Black Swan.
Further Reading on Edge: "Learning to Expect the Unexpected"; "The Fourth Quadrant: A Map of the Limits of Statistics".
[NASSIM TALEB:] My specialty and fear and obsession is with small probabilities. We don't quite understand small probabilities. Let's discuss. [slide]
You often see in the papers statements that events we just saw should happen every ten thousand years, hundred thousand years, ten billion years. Some faculty here at this university had an event and said that a 10-sigma event should happen every, I don't know how many, billion years. Have you ever considered how worrisome it is when someone makes a statement like that, "it should happen every ten thousand years," particularly when the person is not even two thousand years old?
So the fundamental problem of small probabilities is that rare events don't show up in samples, because they are rare. So when someone makes a statement that this event in the financial markets should happen every ten thousand years, visibly they are not making a statement based on empirical evidence, or computation of the odds, but based on what? On some model, some theory.
So my telescope problem is as follows. [slide]
Consider the following. The smaller the probability, the less you observe it in a sample, therefore your error rate in computing it is going to be very high. Actually, your relative error rate can be infinite: because you're computing a very, very small probability, your error is monstrous relative to the probability itself. The problem we have now is that we don't care about probabilities. Probabilities are something you do in the classroom. What you care about is what you are going to use the probability for, what I have here called lambda: the impact of the event, the effect of the event, the consequence of the event.
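The relative-error point can be made precise with a standard binomial calculation (a sketch added for illustration; the function name and the sample sizes are my own): if you estimate a probability p from n yes/no observations, the relative standard error is sqrt((1 - p) / (p n)), which explodes as p shrinks while n stays fixed.

```python
import math

def relative_error(p: float, n: int) -> float:
    """Relative standard error of estimating probability p from n
    Bernoulli trials: stderr(p_hat) / p = sqrt((1 - p) / (p * n)).
    The smaller the probability, the worse we measure it."""
    return math.sqrt((1.0 - p) / (p * n))

n = 10_000  # sample size held fixed
for p in (1e-1, 1e-3, 1e-5):
    print(f"p = {p:.0e}  relative error = {relative_error(p, n):.1%}")
```

With ten thousand observations, a 1-in-10 event is pinned down to within a few percent, while the estimate of a 1-in-100,000 event carries a relative error of several hundred percent: the estimate is essentially noise.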
So you're interested in the pair, probability times impact. And notice the vicious aspect of this pair, pi times lambda: as the probability becomes smaller, the impact gets bigger. A thousand-year flood is going to be vastly more consequential than a five-year flood, or a hundred-year flood.
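The flood example can be put in numbers (the damage figures below are invented for illustration, not data): even though the rare event has a far smaller probability, the pair pi × lambda can still be dominated by it.

```python
# Hypothetical flood events: annual probability pi and damage lambda
# (in arbitrary units). The numbers are illustrative assumptions only.
events = {
    "5-year flood":    (1 / 5,    1.0),     # frequent, mild
    "100-year flood":  (1 / 100,  50.0),    # rarer, much worse
    "1000-year flood": (1 / 1000, 5000.0),  # rare, catastrophic
}

# What matters for decisions is the product pi * lambda, not pi alone.
for name, (pi, lam) in events.items():
    print(f"{name:15s}  pi * lambda = {pi * lam:.2f}")
```

Under these assumed damages, the thousand-year flood contributes more to expected loss than the five-year and hundred-year floods combined, even though it is by far the least likely.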
And you see here the variation of a derivatives portfolio over twenty years. One day represents 97 percent of the variations. And that day is not forecastable, OK, and that's the problem of small probabilities. There are domains in which small probabilities are everything – very, very small probabilities, and we know very little about them.
What are these domains? [slide] There's a general classification. I won't get into the details of how these things are constructed, but there is type-1, which I call distributions that reach the central limit in real time, and type-2, which do not reach it in real time. The classical distinction, between what they call the Gaussian family and the power-law families, doesn't work very well. It's much more effective to use what we call a real-time central limit. In other words, in type-2 the rare event will dominate the properties regardless of what theory you're using.
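The type-1/type-2 distinction can be illustrated with a simulation (my own sketch, not Taleb's classification procedure): average a thousand thin-tailed draws and the sample mean stabilizes; do the same with a fat-tailed variable of infinite variance (here a Pareto with tail index 1.5, my choice of stand-in for type-2) and the means stay wildly dispersed across trials, i.e. the central limit is not reached "in real time".

```python
import random

def sample_mean_spread(draw, n=1000, trials=200):
    """Spread (max - min) of the sample means of `trials` independent
    samples of size n. Small spread = the CLT has kicked in."""
    means = [sum(draw() for _ in range(n)) / n for _ in range(trials)]
    return max(means) - min(means)

def gauss():
    return random.gauss(0.0, 1.0)        # thin tails: type-1

def pareto():
    return random.paretovariate(1.5)     # infinite variance: type-2

random.seed(42)
print("thin tails:", sample_mean_spread(gauss))
print("fat tails :", sample_mean_spread(pareto))
```

For the Gaussian, the spread of the means is a small fraction of a standard deviation; for the Pareto draws, single rare observations drag whole sample means around, and the spread is orders of magnitude larger.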
This graph shows you the following. [slide] I took every economic variable I could find data for covering 40-plus years, from stocks, exchange rates, stock markets — everything that had a lot of data — everything I could find — some 20 million pieces of data — and for most of these variables, one day in 40 years represents 90 percent of a measure called kurtosis. Kurtosis tells you how much something is not Gaussian (conditional on the distribution being type-1). In other words, you don't even know how non-Gaussian we are. That is a severe problem.
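The "one day dominates the kurtosis" effect is easy to reproduce on simulated data (a sketch on synthetic draws, not the market data Taleb used): kurtosis is built from fourth powers, so one can ask what fraction of the sample fourth moment comes from the single largest observation.

```python
import random

def max_share_of_fourth_moment(xs):
    """Fraction of the sample fourth moment contributed by the single
    most extreme observation."""
    fourth = [x ** 4 for x in xs]
    return max(fourth) / sum(fourth)

random.seed(7)
n = 10_000
gauss = [random.gauss(0.0, 1.0) for _ in range(n)]
# A fat-tailed stand-in: symmetric Pareto-magnitude draws with tail
# index 2.5 (illustrative choice; the fourth moment is infinite).
fat = [random.choice((-1, 1)) * random.paretovariate(2.5) for _ in range(n)]

print(f"Gaussian sample: {max_share_of_fourth_moment(gauss):.1%}")
print(f"Fat-tailed     : {max_share_of_fourth_moment(fat):.1%}")
```

For the Gaussian sample, the biggest day contributes on the order of a percent of the fourth moment; for the fat-tailed sample, a single observation typically accounts for a large fraction of it, which is why measured kurtosis is itself unreliable in type-2 domains.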
Measurability. [slide] Number one: the norm L2. When you have open-ended payoffs, as in economics, nobody should use anything built on it: variance, least-squares methods, standard deviations, the Sharpe ratio, portfolio theory, even the word correlation. These don't work in the domains of type-2. They don't work. That explains why the system collapsed: banks were using them. I spent 15 years fighting risk management methods in banks; they were all based on standard deviation and portfolio theory, because, they said, Nobel laureates were proposing these things.
Second point. Social science has made a distinction, which I don't think is very rigorous, between what they call Knightian measurable risk and what they call unmeasurable uncertainty. Several problems. [slide]
I have to specify some sub-problems and then go to my main problem. [slide]
The hard problem is the classical problem of induction, of inverse problems: you don't observe the generator; you don't know what processes you're dealing with. [slide] There are a series of smaller problems that we can probably solve, such as the Ludic Fallacy, the fact that probabilities are not as observable in real life as they are in games. There are a lot of problems associated with my general thesis, but my main point is that small probabilities are not measurable. Now, what do you do? My solution goes the following way. [slide]
There are two kinds of decisions you make in life: decisions that depend on small probabilities, and decisions that do not depend on small probabilities. For example, if I'm doing an experiment that is true-false, I don't really care about that pi-lambda effect; in other words, whether it's very false or just false doesn't make a difference. So, for example, if I'm doing tests on individuals in medicine, I'm not really affected by tail events; but if I'm studying epidemics, then the random variable, how many people are affected, becomes open-ended in its consequences, so it depends on fat tails. So I have two kinds of decisions: one simple, true-false, and one more complicated, like the ones we have in economics, financial decision-making, a lot of things. I call them M1 and M1+.

And then we have, as we saw, two types of worlds [slide]: a world of thin tails and a world of fat tails. Two worlds. And I discovered the following, and this I learned from dealing with statisticians: if you tell them "your techniques don't work," they get angry; but if I tell them, "this is where your techniques work," they buy you a Diet Coke, and try to be nice, say hello in the elevator. So I tried to make a map of where there is what I call a Black Swan problem. [slide] It is when you have complex decisions, decisions that are affected by how important the move is, and we can't really compute small probabilities.
So, the four quadrants. That fourth quadrant is the limit of where we can do things, and I think what we need to do here — that's my hard problem — is try to define how to be robust, what robustness to Black Swans means, and how to be robust to Black Swans, in this quadrant. [slide]
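The two-by-two map can be sketched as a lookup (the quadrant descriptions below are my paraphrases of Taleb's "The Fourth Quadrant" essay, listed above under Further Reading; the function and strings are my own illustration, not his wording): crossing decision type (simple M1 vs. complex M1+) with tail type (thin vs. fat) gives four quadrants, only one of which is dangerous.

```python
# Decision type: "simple" (binary, M1) or "complex" (open-ended payoff, M1+).
# Tail type: "thin" (type-1) or "fat" (type-2).
# Descriptions paraphrase Taleb's four-quadrant map (assumed labels).
QUADRANTS = {
    ("simple",  "thin"): "First quadrant: very safe, statistics works",
    ("complex", "thin"): "Second quadrant: statistical methods work acceptably",
    ("simple",  "fat"):  "Third quadrant: fairly safe, rare events "
                         "barely change a binary payoff",
    ("complex", "fat"):  "Fourth quadrant: Black Swan domain, "
                         "small probabilities are not computable",
}

def quadrant(decision: str, tails: str) -> str:
    return QUADRANTS[(decision, tails)]

print(quadrant("complex", "fat"))
```

Only the fourth cell, complex decisions exposed to fat tails, is where the standard toolkit breaks down; the practical advice is to move decisions out of that cell, or make them robust inside it.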
Unfortunately, we cannot escape it in finance: ever since we left the Stone Age, our random variables became more and more complex. We cannot escape it. We can become more robust. There are techniques to become more robust. The notion of robustness became my recent obsession. How to be robust? It's not clear; it's the hard problem — and that's my problem.
John Brockman, Editor and Publisher