The key is not options but obliquity

Wednesday, March 27th, 2019

Eric Falkenstein explains why Taleb’s book Antifragile is a fraud:

In Nassim Taleb’s book Antifragile he emphasizes that ‘if you see a fraud and do not say fraud, you are a fraud.’ I am thus compelled to note that Antifragile is a fraud because its theme is based on intentional misdirection. The most conspicuous and popular examples he presents are also explicitly mentioned as not the essence of antifragility. Indeed, incoherence is Taleb’s explicit strategy: as the Wikipedia entry on Antifragility notes, Taleb presents his book in a way that makes it difficult to criticize. He tried to squeeze a weltanschauung onto the Procrustean bed of his Black Swan and generated a synecdoche that confuses the part with the whole.

[...]

There are two ways to generate an option payoff. One is to buy an option; another is via dynamic replication, which involves doubling down on a position as it becomes more in-the-money. The outsized success of winners over losers in dynamic systems generates large convexities, but to be a winner, the key is not buying options but rather obliquity, the process of achieving success indirectly via a combination of vision, excellence, resilience, and flexibility. To describe the essence of this as being convex distracts people from focusing on their strengths, areas where they have, in a sense, insider information. Meanwhile, simple hormesis helps generate the efficiency and resiliency that allow firms/organisms to outgrow their competitors, which is why everyone mentions examples of hormesis, waves their hands, and hopes no one notices the bait-and-switch.

Promoting the new idea that acquiring options on the next Black Swan is the basis of “our own existence as a species on this planet” is the sort of hyperbole you hear at TED talks. It is the sort of thing bureaucrats love because they are generally too high up to have much domain-specific expertise, and the incoherent but plausible-sounding theme allows one to talk about strategy without actually knowing anything specific. Then you give examples of your great idea that are really something else entirely, and fade to black…
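Falkenstein’s dynamic-replication point is easy to make concrete. Below is a minimal sketch (assuming textbook Black-Scholes dynamics; all parameter values are illustrative, not from the post) of replicating a call option’s payoff without ever buying the option: hold a stock position equal to the option’s delta and rebalance as the price moves, which means buying more stock as the option moves into the money.

    import numpy as np
    from scipy.stats import norm

    def call_delta(S, K, r, sigma, tau):
        # Black-Scholes delta of a European call with time-to-expiry tau
        d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * tau) / (sigma * np.sqrt(tau))
        return norm.cdf(d1)

    def call_price(S, K, r, sigma, tau):
        # Black-Scholes price of a European call
        d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * tau) / (sigma * np.sqrt(tau))
        d2 = d1 - sigma * np.sqrt(tau)
        return S * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d2)

    def replicate_call(S0=100.0, K=100.0, r=0.01, sigma=0.2, T=1.0, steps=252, seed=1):
        # Start with the option premium, hold delta shares, rebalance each step.
        rng = np.random.default_rng(seed)
        dt = T / steps
        S = S0
        delta = call_delta(S, K, r, sigma, T)
        cash = call_price(S, K, r, sigma, T) - delta * S   # finance the initial hedge
        for i in range(1, steps + 1):
            # one step of geometric Brownian motion
            S *= np.exp((r - 0.5 * sigma ** 2) * dt
                        + sigma * np.sqrt(dt) * rng.standard_normal())
            cash *= np.exp(r * dt)                         # cash accrues interest
            tau = T - i * dt
            new_delta = call_delta(S, K, r, sigma, tau) if tau > 0 else float(S > K)
            cash -= (new_delta - delta) * S   # buy more as the option moves into the money
            delta = new_delta
        return delta * S + cash, max(S - K, 0.0)

    portfolio, payoff = replicate_call()
    print(f"replicating portfolio: {portfolio:.2f}  option payoff: {payoff:.2f}")

With daily rebalancing the replicating portfolio and the option payoff come out close to each other. The convex payoff comes from the rebalancing rule, not from owning an option, which is Falkenstein’s point.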

Comments

  1. Bill says:

    And here I thought that book was unreadable because Taleb is a poor writer.

  2. Kirk says:

    I’ve always read Taleb’s books with a sense that the guy has something going, but he’s not that good at making it clear. I don’t know whether that’s because his thinking is basically fuzzy, or if he’s so much smarter than I am that I can’t quite wrap my head around some of what he’s saying… Alternatively, he’s mostly right, and a lousy communicator.

    Or, I suppose, he’s a glib and facile bullshitter who really doesn’t know what he’s talking about.

    Bottom line–I can’t tell. So, for me, the jury is out.

    I do think there’s something to this idea of anti-fragility, but whether or not Taleb has the right way of articulating it and putting it into effect…? No idea whatsoever.

  3. Harry Jones says:

    My take has been slightly different: he’s on the trail of something but hasn’t caught it. He’s vague because he hasn’t pinpointed the precise truth of matters. In other words, what he’s serving up is half-baked. (But half-baked compared to what?)

    I’m inclined to show him some grace, because it’s rare that someone can figure out the meaning of life all by himself and then reveal it to the world. For example, I’m not convinced Gautama Buddha quite pulled that trick off, but I wouldn’t dismiss it all as utter bullshit either. There’s a middle ground.

    Maybe Taleb didn’t signal enough humility to hedge his bets. But open self-doubt doesn’t sell books, so maybe he had no choice. I wonder whether his reputation as a public intellectual will prove to be fragile or anti-fragile.

  4. Kirk says:

    For me, this “fragile” and “anti-fragile” is a little too buzz-wordy to really take seriously.

    It is a way of looking at things, but it’s not a complete one. Part of the issue is the language itself; it does not lend itself to discussing this stuff very well, because the features we’re talking about are not aligned with much of the terminology. Taleb is struggling with his ideas because they are not precisely expressible in the English language.

    He couches his ideas in terms of economics, but what he’s getting at is outside that realm. If you want to build something that lasts, the way he is talking about, you have to start thinking in terms that the language does not support well, especially when discussing institutions.

    We have a major problem in the world, in that the way we organize ourselves is not working well. I could outline a dozen case histories where institutions have failed or are on a path to failure, and the reasons they’re in that mode have to do with how they’ve been set up. The vast static bureaucracies we build around solving problems have failure baked into them, especially in that the bureaucracy becomes the point of the whole enterprise, rather than dealing with the issues it was supposed to address.

    What Taleb is trying to get at is how this failure mode should be addressed. Fundamentally, I think the scale of these things needs to be cut down to the point where failure is not a world-killer, and we need to embrace the chaos of the world. You can’t make everything perfect, every time, so instead plan for failure and plan to learn from it. Rather than have a massive industrial-age bureaucracy, construct an institution out of a bunch of much smaller pieces, and let those pieces swarm the problem until it doesn’t need attention.

    Much of what we do tries to apply the lessons of one area to everything. Does it make sense to have a “flood department” like we do with fire departments? Or, should we treat firefighting the way we do public disasters like flooding, and not have permanent fire departments?

    To my way of thinking, there’s a time and a place for institutionalizing a problem like fires and floods. There’s a break-point after which, yeah, you’d better start building those flood department structures, and having people permanently assigned to that sort of work, who do nothing else. Where that point is, though…?

    If I were organizing something from the ground up, I’d concentrate on building small functional groups of people that worked well together, and then let them deal with what needs doing as they see fit. Small, polyvalent organizations that swarm tasks and work on jobs on an ad-hoc basis are what we need, because that way you don’t have to rely on specific pipelined sub-organizations. If you’re in a manufacturing setting and find that you have a bottleneck in shipping because there aren’t enough people down on the loading docks, it’s better to temporarily re-task excess personnel from other tasks to go down and work that issue until it’s over than to beef up the shipping department past the point that it really needs most of the time. And most workers ought to be ready and able to approach other aspects of the company’s efforts, even the management structure. If you want robust and “anti-fragile”, what you really mean is that you want flexible and adaptable sub-elements that can fail and be easily supplanted when they do.

    It’s a recognition of what goes on in real life, anyway–we’re all part of small teams. Even in the executive suites, the boss is supported by a host of other personnel who provide him with information and support–like the secretaries and other “knowledge workers”. Those folks ought to be seen as discrete “work groups”, and treated as such. It’s a management team, really–not a manager. Hell, the guy who’s a success in one organization, and who fails abysmally in another? Oftentimes, the reason is that he’s got outstanding support and teamwork in the former, and crap in the latter…

  5. Harry Jones says:

    I’m not sure how much a local flood department in New Orleans would have done in response to Hurricane Katrina. They could perhaps have done what FEMA attempted, and likely would have done a less awful job of it.

    But the real problem is that city should not have been constructed that way in the first place.

    Taleb seems to define a Black Swan as something no one could have seen coming, as opposed to something everyone should have seen coming but no one did. Human cognition is fallible that way. Brittle minds create fragile systems and structures.

    Socrates would have held a dialogue to define all these new terms more robustly.

  6. Paul from Canada says:

    Kirk,

    I’ve heard you expound on this before many times, and I think you have hit the nail on the head. I also think, if you boil off the buzzwords and vague academic language, Taleb is in the same chapter, if not yet on the same page. But like you, I have a hard time getting my head around all his stuff. Some of it seems simple and straightforward, if a little esoteric, and some of it seems like word salad.

    It is a bit like nature. All kinds of species fail and go extinct, all kinds of businesses get too big/complacent/sclerotic and die, and the same applies to organizations, governments, and societies. I think, like you, the key is flexibility and adaptability, but even more so, simply having a wider variety. It is like evolution: the more diversity you have in a system, the more likely it is that one random variant will hit on the optimal solution by accident.
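    That “more diversity, more chances” intuition checks out as simple probability (a toy sketch; the per-candidate success probability is an invented parameter, not anything from the comment). With N independent candidates, each hitting with probability p, the chance that at least one hits is 1 - (1 - p)^N, which climbs quickly with N:

        import random

        def any_hit(n_candidates, p_single=0.02, trials=20_000):
            # Monte Carlo estimate of the chance that at least one of
            # n independent random candidates succeeds, each with
            # probability p_single.
            hits = sum(
                any(random.random() < p_single for _ in range(n_candidates))
                for _ in range(trials)
            )
            return hits / trials

        for n in (1, 10, 50, 100):
            exact = 1 - (1 - 0.02) ** n
            print(f"n={n:3d}  simulated={any_hit(n):.3f}  exact={exact:.3f}")

    The same arithmetic is why many small, independent experiments tend to beat one big bet, so long as each individual failure stays cheap.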

    I really like your last paragraph about how a manager may do brilliantly at one place and crash and burn at another, and the difference is not in HIM; rather, it is the synergy and work of the group he was “managing”.

    I work in aviation, and our previous manager was a toxic mess, but we did well, because the team under him was strong, and we could often pay lip service to his “directives”, or subvert or ignore him. We succeeded DESPITE our management, not because of it. If we had had a less experienced, competent, and cohesive team, we would have been lost. One of our proposed t-shirt slogans was “we are so flexible we don’t NEED to plan”.

    You see this dynamic in the military all the time, as you very well know, having expounded upon it many times. The part that mystifies me is the unpredictability of it. A toxic leader can destroy the best unit, and sometimes a mediocre unit absorbs a toxic leader without much damage until the next posting cycle.

    What is the indefinable difference? Is there a critical number of good officers/NCOs required in specific positions, or simply a critical mass of good officers/NCOs overall? Is there a Regimental/Unit ethos that transcends the rest of the system and makes some units more resilient than others? Some of the training and acculturation the military does is well understood in terms of psychology, and some of it isn’t, but trial and error have proven it to work.

    One of my favourite pieces in Taleb’s books is his example of a Las Vegas casino. He was staying there because Vegas hosts a lot of conventions, and he was attending one on risk management. Knowing who he was and what he did, the casino’s security people invited him for a behind-the-scenes tour. They showed him all of the things they did, using math and algorithms to test the odds and detect cheaters using statistics and probability: the dual-custody system, the random assignment of cash handlers to avoid skimming, and so on and so on.

    He was suitably impressed and asked how it had worked out: what HAD been their biggest recent loss? It turned out that their biggest loss had come because the manager whose job it was to send out the IRS paperwork on big winners had a mental breakdown and had simply not been doing his job for several months. Consequently, the IRS had fined them millions for the non-compliance.

    Likely, they implemented a policy to prevent that from happening again, but it was something so random that it likely would not have happened again anyway.

    The moral being: you can’t possibly plan for everything, so you had better have a cash cushion for such events.

    I have made much greater peace with the universe since I came to accept that everything we do and every institution we belong to consists of humans. Humans are inherently irrational, emotional, and prone to confirmation bias and every sort of mental pathology, and the wonder is that the world is as successful and efficient as it is, and not a hell of a lot worse.

  7. Paul from Canada says:

    Harry,

    I disagree. The “Black Swan” is what you could not foresee, and consequently what you CANNOT plan for, but which can happen anyway, so you need a system that can survive unexpected buffeting.

    There might be things that, in hindsight/Monday-morning quarterbacking, MIGHT have been foreseen, but human fallibility being what it is, even the best-intentioned, most intelligent and competent people WILL eventually miss the clue to impending disaster.

    That is a huge part of what I do for a living in aviation. We have CRM (crew resource management, which looks at human factors and human interaction in the cockpit, and is now expanding to the operations and maintenance people as well). We have SMS/RMS (safety management/risk management systems). All of them are predicated on the EXPECTATION that errors will occur.

    Punishing or trying to eliminate error is a fool’s errand; it is better to find ways to trap and mitigate the inevitable errors that will occur, hopefully early enough in the process to prevent a catastrophic outcome.
