The Fourth Quadrant

Monday, September 22nd, 2008

Nassim Nicholas Taleb (The Black Swan, Fooled by Randomness) maps decisions onto four quadrants — much like any MBA, really. In his case, the two criteria are whether the decision itself is simple (binary) or complex, and whether the probability distribution is known and thin-tailed (Mediocristan) or unknown and/or fat-tailed (Extremistan).
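To keep the map straight while reading the rules below, here is a trivial sketch of the 2x2 classification in Python. Only the placement of the Fourth Quadrant (complex payoffs plus Extremistan) comes from the essay; the numbering of the other three quadrants is my own assumption for illustration.

```python
# Toy sketch of the 2x2 map. Only quadrant 4 (complex payoff + fat tails) is
# fixed by the text above; the numbering of the other three is an assumption.

def quadrant(payoff_is_complex: bool, tails_are_fat: bool) -> int:
    """Classify a decision by payoff complexity and by the kind of randomness."""
    if not tails_are_fat:
        return 2 if payoff_is_complex else 1   # Mediocristan: statistics behaves
    if not payoff_is_complex:
        return 3                               # Extremistan, but a simple binary payoff
    return 4                                   # Extremistan + complex payoff: trouble

print(quadrant(payoff_is_complex=True, tails_are_fat=True))   # -> 4, the Fourth Quadrant
```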

The Fourth Quadrant, naturally, is where all hell breaks loose.

In response, Taleb has developed a list of Phronetic Rules, "What Is Wise To Do (Or Not Do) In The Fourth Quadrant":

1) Avoid Optimization, Learn to Love Redundancy. Psychologists tell us that getting rich does not bring happiness — if you spend it. But if you hide it under the mattress, you are less vulnerable to a black swan. Only fools (such as Banks) optimize, not realizing that a simple model error can blow through their capital (as it just did). In one day in August 2007, Goldman Sachs experienced 24 times the average daily transaction volume — would 29 times have blown up the system? The only weak point I know of in financial markets is their ability to drive people & companies to “efficiency” (to please a stock analyst’s earnings target) against risks of extreme events.

Indeed, some systems tend to optimize and therefore become more fragile. Electricity grids, for example, optimize to the point of not coping with unexpected surges — Albert-László Barabási warned us of the possibility of a NYC blackout like the one we had in August 2003. Quite prophetic, the fellow. Yet the energy supply has only kept getting more efficient since. Commodity prices can double on a short burst in demand (oil, copper, wheat) — we no longer have any slack. Almost everyone who talks about “flat earth” does not realize that it is overoptimized to the point of maximal vulnerability.

Biological systems — those that survived millions of years — include huge redundancies. Just consider why we like sexual encounters (so redundant to do it so often!). Historically, populations tended to produce around 4-12 children to reach the historical average of ~2 survivors to adulthood.

Option-theoretic analysis: redundancy is like being long an option. You certainly pay for it, but it may be necessary for survival.
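To make the option analogy concrete, here is a quick sketch of my own (the premium, shock size, and probability are invented numbers, not Taleb's): the redundant operator pays a fixed cost every year and has a bounded worst case, like the holder of an option; the optimized operator keeps the premium and wears the full shock when it arrives.

```python
# Sketch of "redundancy is like being long an option", with invented numbers.

import random

random.seed(0)

PREMIUM = 3.0       # yearly cost of carrying redundancy (spare capacity, cash buffer)
SHOCK_LOSS = 100.0  # loss an optimized operator absorbs when the rare shock hits
SHOCK_PROB = 0.02   # yearly shock probability, so the expected yearly loss is 2.0

def one_year(redundant: bool) -> float:
    shock = random.random() < SHOCK_PROB
    if redundant:
        return -PREMIUM                    # bounded downside, like a long option
    return -SHOCK_LOSS if shock else 0.0   # keeps the premium, but open to the tail

optimized = [one_year(False) for _ in range(10_000)]
redundant = [one_year(True) for _ in range(10_000)]

for name, pnl in (("optimized", optimized), ("redundant", redundant)):
    print(f"{name}: mean {sum(pnl) / len(pnl):+.2f} per year, worst year {min(pnl):+.2f}")
```

The redundant operator shows the worse average, which is exactly the point: the premium is what survival costs.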

2) Avoid prediction of remote payoffs — though not necessarily ordinary ones. Payoffs from remote parts of the distribution are more difficult to predict than closer parts.

A general principle is that, while in the first three quadrants you can use the best model you can find, this is dangerous in the fourth quadrant: there, no model should be trusted to be better than just any model.
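An illustration of my own (not from the essay) of why remote payoffs are so model-dependent: a Gaussian and a fat-tailed Student-t with three degrees of freedom, calibrated to the same standard deviation, agree closely on a 2-sigma move and disagree by many orders of magnitude on a 10-sigma one.

```python
# Two textbook distributions, both rescaled to unit variance: a Gaussian and
# a Student-t with 3 degrees of freedom.

from math import atan, erfc, pi, sqrt

def gaussian_tail(k: float) -> float:
    """P(X < -k) for a standard normal."""
    return 0.5 * erfc(k / sqrt(2))

def fat_tail(k: float) -> float:
    """P(Y < -k) where Y is a Student-t(3) rescaled to unit variance (Y = T / sqrt(3))."""
    t = -k * sqrt(3)                                 # the corresponding raw t(3) value
    u = t / sqrt(3)
    return 0.5 + (atan(u) + u / (1 + u * u)) / pi    # closed-form CDF of the t(3)

for k in (2, 5, 10):                                 # how remote the event is, in sigmas
    g, f = gaussian_tail(k), fat_tail(k)
    print(f"{k:>2}-sigma drop: Gaussian {g:.1e}   fat-tailed {f:.1e}   ratio {f / g:.1e}")
```

Nothing in the ordinary data tells you which of the two models to trust out in the tail, which is the sense in which no model is better than just any model.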

3) Beware the “atypicality” of remote events. There is a sucker’s method called “scenario analysis” and “stress testing” — usually based on the past (or some “make sense” theory). Yet I show in the appendix how past shortfalls fail to predict subsequent shortfalls. Likewise, “prediction markets” are for fools. They might work for a binary election, but not in the Fourth Quadrant. Recall that the very definition of an event is complicated: success might mean one million in the bank …or five billion!
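A small simulation of my own makes the appendix's point plausible, assuming Pareto-distributed losses as a stand-in for Extremistan: the worst loss in the stress-test history is a poor guide to the worst loss that follows.

```python
# Simulation assuming Pareto losses (tail index 1.5) versus thin-tailed
# Gaussian losses: how much worse can next year's worst day be than the
# worst day in the calibration history?

import random

random.seed(1)

def worst(draw, n=250):
    """Worst of n independent losses -- roughly one year of daily losses."""
    return max(draw() for _ in range(n))

def ratio_95th(draw, trials=5_000):
    """95th percentile of (future worst loss) / (historical worst loss)."""
    ratios = sorted(worst(draw) / worst(draw) for _ in range(trials))
    return ratios[int(0.95 * trials)]

gaussian = lambda: abs(random.gauss(0.0, 1.0))   # Mediocristan-style losses
pareto = lambda: random.paretovariate(1.5)       # Extremistan-style losses

print(f"Gaussian losses: future worst up to {ratio_95th(gaussian):.1f}x the past worst (95th pct)")
print(f"Pareto losses  : future worst up to {ratio_95th(pareto):.1f}x the past worst (95th pct)")
```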

4) Time. It takes much, much longer for a time series in the Fourth Quadrant to reveal its properties. At the worst, we don’t know how long. Yet compensation for bank executives is done on a short-term window, causing a mismatch between the observation window and the necessary window (a small simulation after this rule illustrates the effect). They get rich in spite of negative returns. But we can have a pretty clear idea of whether the “Black Swan” can hit on the left (losses) or on the right (profits).

The point can be used in climatic analysis. Things that have worked for a long time are preferable — they are more likely to have reached their ergodic states.
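Here is the window-mismatch sketch promised above, with invented numbers: a strategy that collects a small premium most years and blows up about once in twenty looks profitable over the typical bonus window, even though its true expected return is negative.

```python
# Invented numbers: +1 almost every year, a 5% chance per year of losing 30.
# True expected return per year: 0.95 * 1 - 0.05 * 30 = -0.55.

import random

random.seed(2)

def yearly_pnl() -> float:
    return -30.0 if random.random() < 0.05 else 1.0

WINDOW = 3                       # a short bonus / observation window, in years
trials = 20_000
profitable = sum(
    1 for _ in range(trials)
    if sum(yearly_pnl() for _ in range(WINDOW)) > 0
)

print(f"true expected return per year: {0.95 * 1.0 - 0.05 * 30.0:+.2f}")
print(f"share of {WINDOW}-year windows showing a profit: {profitable / trials:.0%}")
```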

5) Beware Moral Hazard. It is optimal to collect a series of bonuses by betting on hidden risks in the Fourth Quadrant, then blow up and write a thank-you letter. Fannie Mae and Freddie Mac’s Chairmen will in all likelihood keep their previous bonuses (as in all previous cases) and even get close to $15 million in severance pay each.

6) Metrics. Conventional metrics based on type-1 randomness don’t work. Measures like “standard deviation” are not stable and do not measure anything in the Fourth Quadrant. Neither do “linear regression” (the errors are in the fourth quadrant), the “Sharpe ratio”, the Markowitz optimal portfolio, ANOVA shmnamova, least squares, etc. Literally anything mechanistically pulled out of a statistical textbook.

My problem is that people can both accept the role of rare events, agree with me, and still use these metrics, which is leading me to test if this is a psychological disorder.

The technical appendix shows why these metrics fail: they are based on “variance”/“standard deviation”, terms invented years ago when we had no computers. One way I can prove that anything linked to standard deviation is a facade of knowledge: there is a measure called kurtosis that indicates departure from “Normality”. It is very, very unstable and marred with huge sampling error: 70-90% of the kurtosis in oil, the S&P 500, silver, UK interest rates, the Nikkei, US deposit rates, sugar, and the dollar/yen currency rate comes from one day in the past 40 years, reminiscent of figure 3. This means that no sample will ever deliver the true variance. It also tells us that anyone using “variance” or “standard deviation” (or, worse, making models that make us take decisions based on them) in the fourth quadrant is incompetent.
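A simulation of my own, using a Student-t with three degrees of freedom as a stand-in for real returns (so the percentages below are simulated, not Taleb's market numbers), shows the same pathology: sample kurtosis swings wildly from run to run, and a single observation can account for a large share of the measured fourth moment.

```python
# A Student-t(3) stand-in for fat-tailed daily returns (its theoretical
# kurtosis is infinite). How much of the measured fourth moment comes from
# the single largest day, and how stable is sample kurtosis across runs?

import random

random.seed(3)

def student_t3() -> float:
    """Student-t with 3 degrees of freedom: normal / sqrt(chi-square_3 / 3)."""
    z = random.gauss(0.0, 1.0)
    chi2 = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(3))
    return z / (chi2 / 3) ** 0.5

for run in range(5):
    xs = [student_t3() for _ in range(10_000)]        # roughly 40 years of daily data
    fourth = [x ** 4 for x in xs]
    m2 = sum(x * x for x in xs) / len(xs)
    kurtosis = (sum(fourth) / len(xs)) / m2 ** 2
    share = max(fourth) / sum(fourth)
    print(f"run {run}: sample kurtosis {kurtosis:6.1f}, "
          f"largest single day = {share:.0%} of the fourth moment")
```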

7) Where is the skewness? Clearly the Fourth Quadrant can present left or right skewness. If we suspect right-skewness, the true mean is more likely to be underestimated by measurement of past realizations, and the total potential is likewise poorly gauged. A biotech company (usually) faces positive uncertainty, while a bank faces almost exclusively negative shocks. I call that, in my new project, being “concave” or “convex” to model error.
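A sketch of the right-skew case with invented numbers: a biotech-like venture that loses a little almost every year and occasionally pays off hugely. Most short track records average out below the true mean.

```python
# Invented right-skewed payoff: lose 1 in 99% of years, gain 200 in 1%.
# True mean per year: 0.99 * (-1) + 0.01 * 200 = +1.01.

import random

random.seed(4)

def yearly_payoff() -> float:
    return 200.0 if random.random() < 0.01 else -1.0

TRUE_MEAN = 0.99 * -1.0 + 0.01 * 200.0
YEARS = 20                                  # a short track record
trials = 20_000
below = sum(
    1 for _ in range(trials)
    if sum(yearly_payoff() for _ in range(YEARS)) / YEARS < TRUE_MEAN
)

print(f"true mean per year: {TRUE_MEAN:+.2f}")
print(f"{YEARS}-year track records whose average understates it: {below / trials:.0%}")
```

Flip the signs and you have the bank: most short track records overstate the mean, right up until the negative shock lands.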

8) Do not confuse absence of volatility with absence of risk. Recall how the conventional metric of using volatility as an indicator of stability has fooled Bernanke — as well as the banking system.
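A deliberately crude example of my own: a P&L stream that is perfectly smooth for years and then jumps. Trailing volatility, measured the conventional way, gives no warning at all.

```python
# Invented P&L: +0.01 a day for 999 days, then a -25 jump on day 1,000.

import statistics

pnl = [0.01] * 999 + [-25.0]

vol_before = statistics.pstdev(pnl[749:999])    # the 250 days just before the jump
vol_with = statistics.pstdev(pnl[750:1000])     # a 250-day window that includes it

print(f"trailing volatility just before the blow-up: {vol_before:.4f}")
print(f"same window once the blow-up is included  : {vol_with:.4f}")
```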

9) Beware presentations of risk numbers. Not only do we have mathematical problems, but risk perception is subject to framing issues that are acute in the Fourth Quadrant. Dan Goldstein and I are running a program of experiments in the psychology of uncertainty and finding that the perception of rare events is subject to severe framing distortions: people are aggressive with risks that hit them “once every thirty years” but not if they are told that the risk happens with a “3% a year” occurrence. Furthermore, it appears that risk representations are not neutral: they cause risk taking even when they are known to be unreliable.
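For what it's worth, the two framings in that experiment describe essentially the same risk, as a line of arithmetic shows (assuming independence across years):

```python
# "3% a year" versus "once every thirty years", assuming independent years.

p_yearly = 0.03
print(f"'3% a year' means roughly once every {1 / p_yearly:.0f} years on average")
print(f"and a {1 - (1 - p_yearly) ** 30:.0%} chance of at least one hit over 30 years")
print(f"'once every thirty years' means roughly {1 / 30:.1%} a year")
```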

I didn’t realize he also has a seminar DVD out, Nassim Nicholas Taleb: The Future Has Always Been Crazier Than We Thought.
