The Limits of Expertise

Monday, June 23rd, 2014

Tom Nichols, professor of national security affairs at the U.S. Naval War College, recently lamented the death of expertise — or, rather, the death of the acknowledgement of expertise:

A fair number of Americans now seem to reject the notion that one person is more likely to be right about something, due to education, experience, or other attributes of achievement, than any other.

Indeed, to a certain segment of the American public, the idea that one person knows more than another person is an appalling thought, and perhaps even a not-too-subtle attempt to put down one’s fellow citizen. It’s certainly thought to be rude: to judge from social media and op-eds, the claim of expertise — and especially any claim that expertise should guide the outcome of a disagreement — is now considered by many people to be worse than a direct personal insult.

The expert isn’t always right, he admits, but an expert is far more likely to be right than you are.

Only this isn’t quite true, as Philip Tetlock’s research has shown:

The results were unimpressive. On the first scale, the experts performed worse than they would have if they had simply assigned an equal probability to all three outcomes — if they had given each possible future a thirty-three-per-cent chance of occurring. Human beings who spend their lives studying the state of the world, in other words, are poorer forecasters than dart-throwing monkeys, who would have distributed their picks evenly over the three choices.

Tetlock also found that specialists are not significantly more reliable than non-specialists in guessing what is going to happen in the region they study. Knowing a little might make someone a more reliable forecaster, but Tetlock found that knowing a lot can actually make a person less reliable. “We reach the point of diminishing marginal predictive returns for knowledge disconcertingly quickly,” he reports. “In this age of academic hyperspecialization, there is no reason for supposing that contributors to top journals — distinguished political scientists, area study specialists, economists, and so on — are any better than journalists or attentive readers of the New York Times in ‘reading’ emerging situations.” And the more famous the forecaster, the more overblown the forecasts. “Experts in demand,” Tetlock says, “were more overconfident than their colleagues who eked out existences far from the limelight…. The expert also suffers from knowing too much: the more facts an expert has, the more information is available to be enlisted in support of his or her pet theories, and the more chains of causation he or she can find beguiling. This helps explain why specialists fail to outguess non-specialists. The odds tend to be with the obvious.”
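A standard way to score probabilistic forecasts like these is the quadratic (Brier) probability score. The sketch below shows why the equal-thirds guess is the benchmark; the exact scale from Tetlock's study isn't reproduced here, so treat the numbers as illustrative.

```python
def brier_score(forecast, outcome):
    """Quadratic probability score over mutually exclusive outcomes:
    0 is a perfect forecast, 2 is the worst possible."""
    return sum((p - (1.0 if i == outcome else 0.0)) ** 2
               for i, p in enumerate(forecast))

# The "dart-throwing monkey" baseline: a third on each of the three
# possible futures scores the same no matter which outcome occurs.
uniform = [1/3, 1/3, 1/3]
print(brier_score(uniform, 0))    # ~0.667 for any outcome

# An expert who puts 90% on an outcome that fails to occur does worse.
confident = [0.90, 0.05, 0.05]
print(brier_score(confident, 1))  # 1.715
```

The uniform forecast scores 2/3 no matter which outcome occurs; on the first scale, Tetlock's experts on average did worse than that benchmark.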

James Shanteau’s “cross-domain” study of expert performance showed that some fields developed true expertise (“high validity” domains), and others did not:

The importance of predictable environments and opportunities to learn them was apparent in an early review of professions in which expertise develops. Shanteau (1992) reviewed evidence showing that [real, measurable] expertise was found in livestock judges, astronomers, test pilots, soil judges, chess masters, physicists, mathematicians, accountants, grain inspectors, photo interpreters, and insurance analysts.

In contrast, Shanteau noted poor performance by experienced professionals in another large set of occupations: stockbrokers, clinical psychologists, psychiatrists, college admissions officers, court judges, personnel selectors, and intelligence analysts.

Read T. Greer’s whole piece on the limits of expertise.

Comments

  1. Candide III says:

    Zero-validity environments tend to fall into two categories:

    1. Environments whose size or complexity makes it impossible for experts to recognize the patterns or relationships they need to understand in order to make valid judgments about the system (such as those faced by economists, ecologists, and financial analysts).

    2. Environments where experts must evaluate behaviors, attitudes, past history, and other personal idiosyncrasies to try to explain why individuals act as they do or how they will act in the future (such as those faced by psychiatrists, college admissions officers, and court judges).

    There is a third category: environments where there are patterns and relationships to be recognized and learned, but ones that are unacceptable because they run counter to established beliefs about the world. Very narrow experts, those who can avoid the limelight and keep to the shadows of academic obscurity, may accumulate expertise in such fields, but it does not propagate outwards and cannot be made use of in the wider community. Examples: HBD, intelligence, eugenics.

  2. Lucklucky says:

    The New Yorker article has an example that I don’t think is clear-cut: the two-linked event.

    There are cases where a two-linked event seems more common than a single one. For example: people who have an eyesight problem vs. people with an eyesight problem who use glasses or contact lenses. I bet there are more of the second, even though the second requires two events rather than one.

    Another: people with a driver’s license vs. people with a driver’s license who own a car. I bet there are more of the second than the first.
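    For reference, the rule the New Yorker example rests on is the conjunction rule. With A and B as the two linked events:

    \[
    P(A \cap B) = P(A)\,P(B \mid A) \le P(A), \quad \text{since } P(B \mid A) \le 1.
    \]

    In count terms, people with an eyesight problem who use glasses form a subset of people with an eyesight problem, so the subset can never outnumber the whole; the same holds for license holders who own cars versus all license holders. The intuition that the more detailed group is larger is the conjunction fallacy itself.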

  3. Borepatch says:

    The problem is that what passes for “expertise” these days is very often wretched, incompetent, and venal. This especially applies to Washington, D.C. think tanks.

    I think that there’s still considerable respect for commercial innovators (cf. Steve Jobs), but increasingly the public sector seems to be part of the problem, not the solution.

  4. Toddy Cat says:

    All too often these days, “expertise” means “having the right credential, obtained by parroting politically correct lies,” as opposed to actually being good at something. No wonder people are losing respect for it. I’d be willing to bet that a group of people randomly chosen off the street could craft far better Middle Eastern policy, immigration policy, and criminal-justice policy than what the “experts” have given us over the last fifty years or so.

  5. T. Greer says:

    Very narrow experts, those who can avoid the limelight and keep to the shadows of academic obscurity, may accumulate expertise in such fields, but it does not propagate outwards and cannot be made use of in the wider community. Examples: HBD, intelligence, eugenics.

    This is unproven. Indeed, given that HBD, intelligence, and eugenics fall into both of the categories (human psychology and complex systems) that define a ‘zero-validity’ environment, I am inclined to think these fields would do even worse when field-tested than most of the normal ones.

    But I suppose we could run a simple field test, without any of the fancy social-science methodology attached. I would love to see the HBD crew make a series of predictions about the world and its state over the next one, five, and fifteen years, and see if they do any better than those relying on different frames of analysis.

    I am not holding my breath.

  6. Space Nookie says:

    Reminds me of this article: So You Think You’re Smarter Than A CIA Agent.

  7. Candide III says:

    I don’t agree. HBD and intelligence in the narrow sense (IQ measurement, outcomes) are a normal-validity environment. Maybe not as high-validity as mechanical engineering, but not in Freudianism territory either. IQ measurement has little to do with human psychology, unless you define psychology very broadly. You don’t have to wonder about motivations etc. when assessing test results.

  8. T. Greer says:

    Candide:

    IQ does follow a normal, Gaussian-type distribution. In that sense there are real experts in IQ.

    But that is simply in assessing IQ. What is less clear is how IQ scores will affect a given individual or even a given society. We can pore over our cross-sectional data and note all kinds of interesting correlations, but I don’t put too much stock in these until they can be used predictively (e.g., you can score a given individual’s IQ, make predictions about his future life history, and then be vindicated as the years go on, or do more or less the same thing with entire nations or regions). Those tests have not been conducted yet.
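    A minimal sketch of the sort of out-of-sample test described above, on synthetic data; the IQ–outcome relationship, noise level, and sample sizes here are assumptions for illustration, not empirical claims:

    ```python
    import random

    random.seed(0)

    # Synthetic cohort (illustrative assumptions): IQ ~ N(100, 15) and a
    # life outcome that is partly IQ-driven plus noise.
    def make_person():
        iq = random.gauss(100, 15)
        outcome = 0.5 * iq + random.gauss(0, 20)
        return iq, outcome

    past = [make_person() for _ in range(500)]    # cross-sectional data
    future = [make_person() for _ in range(500)]  # cases scored years later

    # Fit a least-squares line on the past cohort only...
    n = len(past)
    mean_iq = sum(iq for iq, _ in past) / n
    mean_out = sum(out for _, out in past) / n
    slope = (sum((iq - mean_iq) * (out - mean_out) for iq, out in past) /
             sum((iq - mean_iq) ** 2 for iq, _ in past))
    intercept = mean_out - slope * mean_iq

    # ...then check whether it holds up on the cohort it never saw.
    mse = sum((out - (intercept + slope * iq)) ** 2
              for iq, out in future) / len(future)
    print(f"slope={slope:.2f}, out-of-sample MSE={mse:.1f}")
    ```

    The point of the split is that the line is fit only on the “past” cohort and scored on the “future” one; a real field test would do the same with actual life histories instead of synthetic draws.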
