A Stunningly Poor Source of Expertise

Thursday, March 26th, 2009

Nicholas Kristof notes that experts are often a stunningly poor source of expertise:

The expert on experts is Philip Tetlock, a professor at the University of California, Berkeley. His 2005 book, Expert Political Judgment, is based on two decades of tracking some 82,000 predictions by 284 experts. The experts’ forecasts were tracked both on the subjects of their specialties and on subjects that they knew little about.

The result? The predictions of experts were, on average, only a tiny bit better than random guesses — the equivalent of a chimpanzee throwing darts at a board.

“It made virtually no difference whether participants had doctorates, whether they were economists, political scientists, journalists or historians, whether they had policy experience or access to classified information, or whether they had logged many or few years of experience,” Mr. Tetlock wrote.

Indeed, the only consistent predictor was fame — and it was an inverse relationship. The more famous experts did worse than unknown ones. That had to do with a fault in the media. Talent bookers for television shows and reporters tended to call up experts who provided strong, coherent points of view, who saw things in blacks and whites. People who shouted — like, yes, Jim Cramer!

Mr. Tetlock called experts such as these the “hedgehogs,” after a famous distinction by the late Sir Isaiah Berlin (my favorite philosopher) between hedgehogs and foxes. Hedgehogs tend to have a focused worldview, an ideological leaning, strong convictions; foxes are more cautious, more centrist, more likely to adjust their views, more pragmatic, more prone to self-doubt, more inclined to see complexity and nuance. And it turns out that while foxes don’t give great sound-bites, they are far more likely to get things right.

This was the distinction that mattered most among the forecasters, not whether they had expertise. Over all, the foxes did significantly better, both in areas they knew well and in areas they didn’t.

Other studies have confirmed the general sense that expertise is overrated. In one experiment, clinical psychologists did no better than their secretaries in their diagnoses. In another, a white rat in a maze repeatedly beat groups of Yale undergraduates in understanding the optimal way to get food dropped in the maze. The students overanalyzed and saw patterns that didn’t exist, so they were beaten by the rodent.

I’ve discussed Tetlock’s work before.
