A superforecaster reviews Superforecasting:
While superforecasters had ordinary-seeming day jobs, they were an unusually smart and knowledgeable group. When tested, they scored at least a standard deviation higher than the general population on tests of both fluid intelligence and political knowledge. Many were retired or — like me — were employed less than full time, so they could spend hours every week researching the questions and breaking them down into manageable parts. If the question was whether Ebola would spread to Europe, they pored over epidemiological models, studied airline screening procedures, and read papers on the possible sexual transmission of the disease. They updated their forecasts often.
Superforecasters also scored highly on measures of “actively open-minded thinking”. That is, they are not committed in advance to any one idea of how the world works. They treat their ideas as hypotheses to be tested, rather than premises to be built on. They look for facts and arguments that might call their views into question. They generally see events as determined in part by chance rather than attributing them to divine will or fate. They approach problems from a variety of different angles. They are unusually willing to consider that they might be wrong.
The philosopher Isaiah Berlin famously divided thinkers into “foxes”, who look at problems from different perspectives, and “hedgehogs”, who “relate everything to a single central vision”. The dichotomy comes from the Greek poet Archilochus’ line that “The fox knows many things, but the hedgehog knows one big thing”. Tetlock found that people who were confident there are simple, readily available explanations for events — whether they were realists or liberal idealists, Marxists or supply-side economists — were practically worthless forecasters. People who saw themselves as foxes, who thought politics was complex and unpredictable, and who were willing to consider different points of view were consistently more accurate. Foxes were better forecasters.
The superforecaster who wrote that review is Robert de Neufville, an associate of the Global Catastrophic Risk Institute:
He has degrees in political science and political theory from Harvard and Berkeley. As one of the top 2% of forecasters in IARPA’s experimental Good Judgment Project forecasting tournament, he qualifies as a “superforecaster”. He was one of the forecasters interviewed for Philip Tetlock and Dan Gardner’s book Superforecasting: The Art and Science of Prediction. He has contributed to The Economist and The Washington Monthly, and for several years wrote the Politeia column for Big Think. Follow him on Twitter here.
It was amazing to me how much of Tetlock’s site I could review without being told how superforecasters behave. I ended up deciding it wasn’t worth the effort.
Seems like it’s hard work combined with not being an idiot.
Thanks for taking an interest in my essay. Thanks also for linking to my website. I am not comfortable with your republishing the whole essay, however, and would appreciate it if you would take this post down. You are of course always welcome to post an excerpt of a couple paragraphs. Thanks for your consideration.
I’ve edited down the excerpt.
Thanks, I appreciate it!