In The Atlantic, Bourree Lam looks at the state of published science and how scientists can begin to address the problems of bad data, statistical sleight-of-hand, and actual fraud:
In May, the journal Science retracted a much-talked-about study suggesting that gay canvassers might cause same-sex marriage opponents to change their opinion, after an independent statistical analysis revealed irregularities in its data. The retraction joined a string of science scandals, ranging from Andrew Wakefield’s infamous study linking a childhood vaccine to autism to the allegations that Marc Hauser, once a star psychology professor at Harvard, fabricated data for research on animal cognition. By one estimate, from 2001 to 2010, the annual rate of retractions by academic journals increased by a factor of 11 (adjusting for increases in published literature and excluding articles by repeat offenders). This surge raises an obvious question: Are retractions increasing because errors and other misdeeds are becoming more common, or because research is now scrutinized more closely? Helpfully, some scientists have taken to conducting studies of retracted studies, and their work sheds new light on the situation.
“Retractions are born of many mothers,” write Ivan Oransky and Adam Marcus, the co-founders of the blog Retraction Watch, which has logged thousands of retractions in the past five years. A study in the Proceedings of the National Academy of Sciences reviewed 2,047 retractions of biomedical and life-sciences articles and found that just 21.3 percent stemmed from straightforward error, while 67.4 percent resulted from misconduct, including fraud or suspected fraud (43.4 percent) and plagiarism (9.8 percent).
Surveys of scientists have tried to gauge the extent of undiscovered misconduct. According to a 2009 meta-analysis of these surveys, about 2 percent of scientists admitted to having fabricated, falsified, or modified data or results at least once, and as many as a third confessed to “a variety of other questionable research practices including ‘dropping data points based on a gut feeling’ and ‘changing the design, methodology or results of a study in response to pressures from a funding source.’”
As for why these practices are so prevalent, many scientists blame increased competition for academic jobs and research funding, combined with a “publish or perish” culture. Because journals are more likely to accept studies reporting “positive” results (those that support, rather than refute, a hypothesis), researchers may have an incentive to “cook” or “mine” their data to generate a positive finding. Such publication bias is not in itself news — back in 1987, a study found that, compared with research trials that went unpublished, those that were published were three times as likely to have positive results. But the bias does seem to be getting stronger: a more recent study of 4,600 research papers found that from 1990 to 2007, the proportion of positive results grew by 22 percent.