At Vox.com, Julia Belluz and Steven Hoffman show how perverse incentives and human frailty contribute to the wasted efforts — and sometimes outright fraudulent methods — that get “scientific” results published. It’s getting so bad that “the editor of The Lancet … recently lamented, ‘Much of the scientific literature, perhaps half, may simply be untrue.’”:
From study design to dissemination of research, there are dozens of ways science can go off the rails. Many of the scientific studies published each year are poorly designed, redundant, or simply useless. Researchers looking into the problem have found that more than half of studies fail to take steps to reduce bias, such as blinding participants to whether they receive the treatment or a placebo.
In an analysis of 300 clinical research papers about epilepsy — published in 1981, 1991, and 2001 — 71 percent were categorized as having no enduring value. Of those, 55.6 percent were classified as inherently unimportant and 38.8 percent as not new. All told, according to one estimate, about $200 billion — or the equivalent of 85 percent of global spending on research — is routinely wasted on flawed and redundant studies.
After publication, there’s the well-documented irreproducibility problem — the fact that researchers often can’t validate findings when they go back and run experiments again. Just last month, a team of researchers published the findings of a project to replicate 100 of psychology’s biggest experiments. They were only able to replicate 39 of the experiments, and one observer — Daniele Fanelli, who studies bias and scientific misconduct at Stanford University in California — told Nature that the reproducibility problem in cancer biology and drug discovery may actually be even more acute.
Indeed, another review found that researchers at Amgen were unable to reproduce 89 percent of landmark cancer research findings for potential drug targets. (The problem even inspired a satirical publication called the Journal of Irreproducible Results.)
So why aren’t these problems caught before a study is published? Consider peer review, in which scientists send their papers to other experts for vetting prior to publication. The idea is that those peers will detect flaws and help improve papers before they appear as journal articles. Peer review won’t guarantee that an article is perfect or even accurate, but it’s supposed to act as an initial quality-control step.
Yet there are flaws in this traditional “pre-publication” review model: it relies on the goodwill of scientists who are increasingly pressed for time and may not spend the effort required to properly critique a work, it’s subject to the biases of a select few, and it’s slow — so it’s no surprise that peer review sometimes fails. These factors raise the odds that even in the highest-quality journals, mistakes, flaws, and even fraudulent work will make it through. (“Fake peer review” reports are also now a thing.)