Quotulatiousness

April 23, 2011

QotD: The debunking problem in media

Filed under: Media, Quotations, Science — Nicholas @ 12:23

[. . .] the second issue is how people find out about stuff. We exist in a blizzard of information, and stuff goes missing: as we saw recently, research shows that people don’t even hear about retractions of outright fraudulent work. Publishing a follow-up in the same venue that made an initial claim is one way of addressing this problem (and when the journal Science rejected the replication paper, even they said “your results would be better received and appreciated by the audience of the journal where the Daryl Bem research was published”).

The same can be said for the New York Times, who ran a nice long piece on the original precognition finding, New Scientist who covered it twice, the Guardian who joined in online, the Telegraph who wrote about it three times over, New York Magazine, and so on.

It’s hard to picture many of these outlets giving equal prominence to the new negative findings that are now emerging, in the same way that newspapers so often fail to return to a debunked scare, or a not-guilty verdict after reporting the juicy witness statements.

All the most interesting problems around information today are about structure: how to cope with the overload, and find sense in the data. For some eyecatching precognition research, this stuff probably doesn’t matter. What’s interesting is that the information architectures of medicine, academia and popular culture are all broken in the exact same way.

Ben Goldacre, “I foresee that nobody will do anything about this problem”, Bad Science, 2011-04-23

January 3, 2011

Healthy skepticism about study results

Filed under: Bureaucracy, Media, Science — Nicholas @ 13:30

John Allen Paulos provides some useful mental tools to use when presented with unlikely published findings from various studies:

Ioannidis examined the evidence in 45 well-publicized health studies from major journals appearing between 1990 and 2003. His conclusion: the results of more than one third of these studies were flatly contradicted or significantly weakened by later work.

The same general idea is discussed in “The Truth Wears Off,” an article by Jonah Lehrer that appeared last month in the New Yorker magazine. Lehrer termed the phenomenon the “decline effect,” by which he meant the tendency for replication of scientific results to fail — that is, for the evidence supporting scientific results to seemingly weaken over time, disappear altogether, or even suggest opposite conclusions.

[. . .]

One reason for some of the instances of the decline effect is provided by regression to the mean, the tendency for an extreme value of a random quantity dependent on many variables to be followed by a value closer to the average or mean.

[. . .]

This phenomenon leads to nonsense when people attribute regression to the mean to something real, rather than to the natural behavior of any randomly varying quantity.
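The effect is easy to see in a simulation. Here is a minimal sketch (the population mean of 100, the spread of true scores, and the measurement noise are all numbers chosen purely for illustration): subjects selected for an extreme score on a first noisy test score closer to the population average on a second, even though nothing about them changed.

```python
import random

random.seed(42)

# Each "subject" has a fixed true score; each measurement adds random noise.
true_scores = [random.gauss(100, 10) for _ in range(10_000)]

def measure(true):
    """One noisy observation of a subject's true score."""
    return true + random.gauss(0, 15)

first = [measure(t) for t in true_scores]
second = [measure(t) for t in true_scores]

# Select the subjects whose FIRST measurement was extreme (top 5%).
cutoff = sorted(first)[int(0.95 * len(first))]
top = [i for i, x in enumerate(first) if x >= cutoff]

mean_first = sum(first[i] for i in top) / len(top)
mean_second = sum(second[i] for i in top) / len(top)

print(f"top 5% group, first test:  {mean_first:.1f}")
print(f"same group, second test:   {mean_second:.1f}")
```

The selected group's second-test mean falls back toward 100 — not all the way, since the group really is somewhat above average, but much of its apparent extremity was noise.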

[. . .]

In some instances, another factor contributing to the decline effect is sample size. It’s become common knowledge that polls that survey large groups of people have a smaller margin of error than those that canvass a small number. Not just a poll, but any experiment or measurement that examines a large number of test subjects will have a smaller margin of error than one having fewer subjects.

Not surprisingly, results of experiments and studies with small samples often appear in the literature, and these results frequently suggest that the observed effects are quite large — at one end or the other of the large margin of error. When researchers attempt to demonstrate the effect on a larger sample of subjects, the margin of error is smaller and so the effect size seems to shrink or decline.
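The arithmetic behind the shrinking margin of error is the familiar square-root law: for an estimated proportion, the approximate 95% margin is z·√(p(1−p)/n), so quadrupling the sample size halves the margin. A small sketch (function name and the sample sizes are mine, for illustration):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for an estimated proportion p
    from a simple random sample of size n (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 400, 1600, 6400):
    print(f"n = {n:5d}  margin of error = \u00b1{margin_of_error(n):.3f}")
```

A study of 100 subjects has room for an apparent effect four times larger than one of 1,600 subjects, which is exactly the pattern of early large effects shrinking on replication.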

[. . .]

Publication bias is, no doubt, also part of the reason for the decline effect. That is to say that seemingly significant experimental results will be published much more readily than those that suggest no experimental effect or only a small one. People, including journal editors, naturally prefer papers announcing or at least suggesting a dramatic breakthrough to those saying, in effect, “Ehh, nothing much here.”
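This filtering can be simulated directly. In the sketch below (study size, number of studies, and the 1.96-standard-error significance rule are illustrative assumptions), every study measures a true effect of exactly zero, yet the "published" subset — only the studies that happened to clear the significance bar — shows a substantial average effect:

```python
import random

random.seed(0)

def study(n=20, true_effect=0.0):
    """One small study: returns the observed mean effect and whether it
    looks 'significant' (roughly 2 standard errors from zero)."""
    xs = [random.gauss(true_effect, 1.0) for _ in range(n)]
    mean = sum(xs) / n
    se = 1.0 / n ** 0.5
    return mean, abs(mean) > 1.96 * se

results = [study() for _ in range(2000)]
published = [m for m, significant in results if significant]
all_means = [m for m, _ in results]

pub_mean = sum(abs(m) for m in published) / len(published)
all_mean = sum(abs(m) for m in all_means) / len(all_means)

print(f"all {len(results)} studies, mean |effect|: {all_mean:.3f}")
print(f"{len(published)} 'published' studies, mean |effect|: {pub_mean:.3f}")
```

The journals' filter guarantees the published literature overstates the effect, so honest replications will, on average, "decline" from it.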

The availability error, the tendency to be unduly influenced by results that, for one reason or another, are more psychologically available to us, is another factor. Results that are especially striking, counterintuitive, or consistent with experimenters’ pet theories are also more likely to be published.

