Trust, But Verify

Wednesday, November 13th, 2013

A simple idea underpins science — trust, but verify:

A rule of thumb among biotechnology venture-capitalists is that half of published research cannot be replicated. Even that may be optimistic. Last year researchers at one biotech firm, Amgen, found they could reproduce just six of 53 “landmark” studies in cancer research. Earlier, a group at Bayer, a drug company, managed to repeat just a quarter of 67 similarly important papers. A leading computer scientist frets that three-quarters of papers in his subfield are bunk. In 2000-10 roughly 80,000 patients took part in clinical trials based on research that was later retracted because of mistakes or improprieties.

[...]

In the 1950s, when modern academic research took shape after its successes in the second world war, it was still a rarefied pastime. The entire club of scientists numbered a few hundred thousand. As their ranks have swelled, to 6m-7m active researchers on the latest reckoning, scientists have lost their taste for self-policing and quality control. The obligation to “publish or perish” has come to rule over academic life. Competition for jobs is cut-throat. Full professors in America earned on average $135,000 in 2012—more than judges did. Every year six freshly minted PhDs vie for every academic post. Nowadays verification (the replication of other people’s results) does little to advance a researcher’s career. And without verification, dubious findings live on to mislead.

Careerism also encourages exaggeration and the cherry-picking of results. In order to safeguard their exclusivity, the leading journals impose high rejection rates: in excess of 90% of submitted manuscripts. The most striking findings have the greatest chance of making it onto the page. Little wonder that one in three researchers knows of a colleague who has pepped up a paper by, say, excluding inconvenient data from results “based on a gut feeling”. And as more research teams around the world work on a problem, the odds shorten that at least one will fall prey to an honest confusion between the sweet signal of a genuine discovery and a freak of the statistical noise. Such spurious correlations are often recorded in journals eager for startling papers. If they touch on drinking wine, going senile or letting children play video games, they may well command the front pages of newspapers, too.

Conversely, failures to prove a hypothesis are rarely even offered for publication, let alone accepted. “Negative results” now account for only 14% of published papers, down from 30% in 1990. Yet knowing what is false is as important to science as knowing what is true. The failure to report failures means that researchers waste money and effort exploring blind alleys already investigated by other scientists.

The hallowed process of peer review is not all it is cracked up to be, either. When a prominent medical journal ran research past other experts in the field, it found that most of the reviewers failed to spot mistakes it had deliberately inserted into papers, even after being told they were being tested.

Comments

  1. Ross says:

    Ah, false positives and irreproducible results! I dub thee: The Ioannidis Valley of Death. (After John “Why Most Published Research Findings Are False” Ioannidis).

    New results indicate reproducibility would be significantly boosted (ha ha, little pun there) and false positives similarly suppressed by raising the bar for “p” — tightening the thresholds to p < .005 and p < .001 for significant and highly significant results, respectively.

    http://www.pnas.org/content/early/2013/10/28/1313476110.full.pdf

    Certainly this comes at the cost of more expensive and slower science (need more samples, you see), but maybe a more solid footing will bear out the truth in “slow is smooth; smooth is fast”.
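    The trade-off described above can be sketched with a quick Monte Carlo simulation — a minimal illustration of my own, not taken from the linked paper, assuming a simple two-sided z-test with known variance:

    ```python
    import math
    import random

    random.seed(42)

    def z_test_p(sample, mu0=0.0, sigma=1.0):
        """Two-sided p-value for a z-test of mean(sample) == mu0,
        assuming sigma is known (a simplification for this sketch)."""
        n = len(sample)
        z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
        # p = 2 * (1 - Phi(|z|)), using the normal CDF via erf
        return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

    # Run many "experiments" in which the null hypothesis is TRUE
    # (no real effect), and count how often each alpha rejects it anyway.
    trials = 20000
    pvals = [z_test_p([random.gauss(0, 1) for _ in range(30)])
             for _ in range(trials)]

    for alpha in (0.05, 0.005, 0.001):
        fp = sum(p < alpha for p in pvals) / trials
        print(f"alpha={alpha}: observed false-positive rate = {fp:.4f}")

    # The cost side: for 80% power to detect a fixed effect size d,
    # the required n scales as ((z_{alpha/2} + z_{0.2}) / d)^2, so moving
    # from alpha=0.05 to alpha=0.005 multiplies n by roughly
    # (2.81 + 0.84)^2 / (1.96 + 0.84)^2, i.e. about 1.7x the samples.
    ```

    The observed false-positive rates track each alpha closely, which is the whole point: with a true null, the threshold itself is the false-positive rate, and shrinking it tenfold buys a tenfold suppression at the price of larger (slower, costlier) studies.
    
    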

  2. That probably wouldn’t hurt, but the issues with the conduct of modern science are a lot deeper than that. The article points out that the field has grown enormously in the last century, but what it somewhat misses is the change in the character of Science as a profession. It has gone from a self-selected calling to a job you train for and has lost a lot from that transition.

    It worked out for a while as society converted more of its potential scientists into actual ones, but there are only so many such people and now I would judge we are well past the point of diminishing returns. Thus the precipitous drop in the quality of research and the proportional increase in the amount of trivial work being done. Unfortunately, from my experience this cultural change seems to disincentivise the “true” scientists, who are perhaps less than brilliant at navigating complex grant bureaucracies and academic politics.
