Tom Nichols, professor of national security affairs at the U.S. Naval War College, recently lamented the death of expertise — or, rather, the death of the acknowledgement of expertise:
A fair number of Americans now seem to reject the notion that one person is more likely to be right about something, due to education, experience, or other attributes of achievement, than any other.
Indeed, to a certain segment of the American public, the idea that one person knows more than another person is an appalling thought, and perhaps even a not-too-subtle attempt to put down one’s fellow citizen. It’s certainly thought to be rude: to judge from social media and op-eds, the claim of expertise — and especially any claim that expertise should guide the outcome of a disagreement — is now considered by many people to be worse than a direct personal insult.
The expert isn’t always right, Nichols admits, but an expert is far more likely to be right than you are.
Only this isn’t quite true, as Philip Tetlock’s research has shown:
The results were unimpressive. On the first scale, the experts performed worse than they would have if they had simply assigned an equal probability to all three outcomes — if they had given each possible future a thirty-three-per-cent chance of occurring. Human beings who spend their lives studying the state of the world, in other words, are poorer forecasters than dart-throwing monkeys, who would have distributed their picks evenly over the three choices.
Tetlock also found that specialists are not significantly more reliable than non-specialists in guessing what is going to happen in the region they study. Knowing a little might make someone a more reliable forecaster, but Tetlock found that knowing a lot can actually make a person less reliable. “We reach the point of diminishing marginal predictive returns for knowledge disconcertingly quickly,” he reports. “In this age of academic hyperspecialization, there is no reason for supposing that contributors to top journals — distinguished political scientists, area study specialists, economists, and so on — are any better than journalists or attentive readers of the New York Times in ‘reading’ emerging situations.” And the more famous the forecaster the more overblown the forecasts. “Experts in demand,” Tetlock says, “were more overconfident than their colleagues who eked out existences far from the limelight…. The expert also suffers from knowing too much: the more facts an expert has, the more information is available to be enlisted in support of his or her pet theories, and the more chains of causation he or she can find beguiling. This helps explain why specialists fail to outguess non-specialists. The odds tend to be with the obvious.”
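Why does the dart-throwing monkey win? Tetlock scored forecasts with a Brier-style probability score, where lower is better and a confident miss is penalized quadratically. The sketch below is a minimal illustration of that arithmetic, not Tetlock’s data: the three-outcome setup matches the excerpt, but the 90%-confident expert and the accuracy figures are hypothetical numbers chosen to show the mechanism.

```python
# Illustrative sketch (hypothetical numbers, not Tetlock's data):
# multi-category Brier scores for three-outcome forecasts, lower is better.

def brier(forecast, outcome):
    """Sum of squared errors between the forecast probabilities
    and the one-hot vector for the outcome that actually occurred."""
    return sum((p - (1.0 if i == outcome else 0.0)) ** 2
               for i, p in enumerate(forecast))

uniform = [1/3, 1/3, 1/3]        # the "monkey": equal odds on everything
confident = [0.9, 0.05, 0.05]    # an expert 90% sure of outcome 0

print(brier(uniform, 0))         # ~0.667, whichever outcome occurs
print(brier(confident, 0))       # 0.015 when the expert is right
print(brier(confident, 1))       # 1.715 when the expert is wrong

# Expected score for an expert who is right with probability a:
# a * 0.015 + (1 - a) * 1.715, which beats the monkey's 2/3
# only when a exceeds roughly 0.62.
for a in (0.5, 0.62, 0.8):
    expected = a * brier(confident, 0) + (1 - a) * brier(confident, 1)
    print(f"accuracy {a:.2f}: expected score {expected:.3f}")
```

The asymmetry is the whole point: a confident forecast pays a steep penalty when it misses, so confidence that outruns accuracy scores worse than evenly hedged ignorance.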
James Shanteau’s “cross-domain” review of expert performance, as summarized by Kahneman and Klein, showed that some fields develop true expertise (“high validity” domains) and others do not:
The importance of predictable environments and opportunities to learn them was apparent in an early review of professions in which expertise develops. Shanteau (1992) reviewed evidence showing that [real, measurable] expertise was found in livestock judges, astronomers, test pilots, soil judges, chess masters, physicists, mathematicians, accountants, grain inspectors, photo interpreters, and insurance analysts.
In contrast, Shanteau noted poor performance by experienced professionals in another large set of occupations: stockbrokers, clinical psychologists, psychiatrists, college admissions officers, court judges, personnel selectors, and intelligence analysts.
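The passage points at a mechanism: real expertise develops only where the environment is regular enough to be learned and gives prompt feedback on each judgment. Here is a toy simulation of that idea (my construction, not Shanteau’s methodology): the same simple learner, given identical practice, becomes accurate in a predictable domain and stays at chance in an unpredictable one.

```python
import random

random.seed(0)

def simulate(validity, rounds=2000):
    """Toy model: each round shows a binary cue, then a binary outcome.
    The outcome follows the cue with probability `validity` (a
    predictable, "high validity" environment) or is effectively a coin
    flip when validity is 0.5. The learner tallies cue -> outcome
    feedback and predicts the majority outcome for the current cue."""
    counts = {0: [0, 0], 1: [0, 0]}   # counts[cue][outcome]
    correct = 0
    for t in range(rounds):
        cue = random.randint(0, 1)
        outcome = cue if random.random() < validity else 1 - cue
        prediction = 0 if counts[cue][0] >= counts[cue][1] else 1
        if t >= rounds // 2 and prediction == outcome:
            correct += 1               # score only after some practice
        counts[cue][outcome] += 1      # feedback: learn from the result
    return correct / (rounds - rounds // 2)

print(simulate(0.9))   # predictable domain: accuracy approaches 0.9
print(simulate(0.5))   # unpredictable domain: stuck near chance, 0.5
```

In these terms, livestock judging and chess sit near the first case, while stock picking and long-range political forecasting sit near the second, which is why experience produced measurable expertise in one of Shanteau’s lists and not the other.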
Read T. Greer’s whole piece on the limits of expertise.