But what about the error correction function of peer review? Surely it’s important to ensure that the literature doesn’t fill up with bullshit? Shouldn’t we want our journals to publish only the most reliable, correct information – data analysis you can set your clock by, conclusions as solid as the Earth under your feet, uncertainties quantified to within the nearest fraction of a covariant Markov Chain Monte Carlo-delineated sigma contour?
Well, about that.
The replication crisis has been festering throughout the academic community for the better part of a decade now. It turns out that a huge part of the scientific literature simply can’t be reproduced. In many cases the works in question are high-impact papers, the sort of work that careers are built on and that leads to million-dollar grants being handed out to laboratories across the world. Indeed, it seems that the most-cited works are also the least likely to be reproduced (there’s a running joke that if something was published in Nature or Science, you know it’s probably wrong). Awkward.
The scientific community has completely failed to draw the obvious conclusion from the replication crisis, which is that peer review doesn’t work at all. Indeed, it may well play a causal role in the crisis.
The replication crisis, I should emphasize, is probably not mostly due to deliberate fraud, although there’s certainly some of that. There was a recent scandal involving the research connecting amyloid plaques to Alzheimer’s disease, which seems to have been entirely fraudulent, and which led to many millions – perhaps billions – of dollars in biomedical research programs being pissed away, to say nothing of the uncountable number of wasted man-hours. There have been many other such scandals, in almost every field you can name, and God alone knows how many are still buried like undiscovered time bombs in the foundations of various sub-fields. Most scientists, however, are not deliberately, consciously deceptive. They try to be honest. But the different models, assumptions, and methods they adopt can lead to wildly divergent results, even when analyzing the same data and testing the same hypothesis. Beyond that, they can also be sloppy. And the sloppiness, compounded across interlinked citation chains in the knowledge network, builds up.
Scientists know quite well that the imprimatur of publication in a peer-reviewed journal with a high impact factor doesn’t mean a result is correct. But while they know this intellectually, it’s very difficult to avoid the operating assumption that if something has passed peer review it’s probably mostly okay, and they’re not inclined to spend valuable time checking everything themselves. After all, they need to publish their own papers – in order to finish their PhD, get that faculty position, or get that next grant – and papers that are just trying to reproduce the results of other papers, that aren’t doing something novel, aren’t very interesting on their own, and hence are unlikely to be published. So instead of checking carefully yourself, you assume a work is probably reliable, and you use it as an element of your own work, maybe in a small way – taking a number from a table to populate an empty field in your dataset – or maybe in an important way, as a key supporting measurement or fundamental theoretical interpretative framework.
But some of those papers, despite having been peer reviewed, will be wrong, in small ways and large, and those errors will propagate into your own results, possibly leaving your own paper irretrievably flawed. Then your paper passes peer review in turn, and gets used as the basis for subsequent work. Over time the entire scientific literature comes to resemble a house of cards.
Peer review gives scientists – and the lay public – a false sense of security regarding the soundness of scientific results. It also imposes an additional, and quite unnecessary, barrier to publication. It frequently takes months for a paper to work its way through the review process. A year or more is not unheard of, particularly if a paper is rejected and the authors must start the whole process anew at a different journal, submitting their work as a grindstone for whatever rusty old axe the new referee is looking to sharpen. Far from ensuring errors are corrected, peer review slows down the error correction process. A bad paper can persist in the literature – being cited by other scientists – for years before the refutation finally makes it to print … at which point some (not all) will consider the original paper debunked and stop citing it (others, not being aware of the debunking, will continue to cite it). But what if the refutation is itself tendentious? The original authors may wish to reply, but their refutation of the refutation must now go through the peer review process as well, and on and on it interminably drags …
As to what is happening behind the scenes, no one – not the public, not other scientists – has any idea. The correspondence between referees and authors is rarely published along with the paper. Whether the review was meticulous or sloppy, whether the referee’s critiques were warranted or absurd, is entirely opaque.
In essence, the peer review process slows down the publication cycle, and thereby scientific debate itself, while taking much of that debate behind closed doors, where its quality cannot be evaluated by anyone but the participants.
John Carter, “DIEing Academic Research Budgets”, Postcards from Barsoom, 2025-03-17.