In The Register, Andrew Orlowski reports on the sad state of published neuroscience articles:
A group of academics from Oxford, Stanford, Virginia and Bristol universities have looked at a range of subfields of neuroscience and concluded that most of the results are statistically worthless.
The researchers found that most structural and volumetric MRI studies are very small and have minimal power to detect differences between compared groups (for example, healthy people versus those with mental health disorders). Their paper also notes that a clear excess of “significance bias” (too many results deemed statistically significant) has been demonstrated in studies of brain volume abnormalities, and that similar problems appear to exist in fMRI studies of the blood-oxygen-level-dependent response.
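To make the sample-size point concrete, here is a minimal sketch (the effect size and group sizes are illustrative assumptions, not figures from the paper) showing how the power of a simple two-group comparison collapses at small sample sizes; it uses statsmodels' analytic power calculation for an independent-samples t-test.

```python
# Illustrative power calculation (numbers are assumptions, not from the paper):
# analytic power of a two-sample t-test for a medium effect (Cohen's d = 0.5)
# at per-group sample sizes typical of small imaging studies.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for n in (10, 20, 50, 100):
    power = analysis.solve_power(effect_size=0.5, nobs1=n, alpha=0.05)
    print(f"n = {n:3d} per group -> power = {power:.2f}")
# With 10 subjects per group, power is below 20 per cent; it takes
# roughly 100 per group to reliably detect a medium-sized effect.
```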
The team, researchers at Stanford Medical School, Virginia, Bristol and the Human Genetics department at Oxford, looked at 246 neuroscience articles published in 2011, excluding papers where the test data was unavailable. They found that the papers’ median statistical power (the probability that a study will detect an effect when there is a real effect there to be found) was just 21 per cent. What that means in practice is that if you were to run one of the experiments five times, you’d only find the effect once.
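That one-in-five figure is easy to check by simulation. The sketch below (group size and effect size are illustrative assumptions, chosen to give roughly 21 per cent power) repeatedly runs a two-group t-test against a real effect and counts how often it reaches p < 0.05.

```python
# Illustrative simulation (assumed numbers): with ~21% power, a true effect
# reaches p < 0.05 in roughly one replication out of five.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n, d, reps = 11, 0.5, 20_000   # n per group, true effect (Cohen's d), replications

hits = 0
for _ in range(reps):
    control = rng.normal(0.0, 1.0, n)
    patient = rng.normal(d, 1.0, n)    # true group difference of d standard deviations
    _, p = ttest_ind(control, patient)
    hits += p < 0.05

print(f"detected the true effect in {hits / reps:.1%} of replications")
# prints roughly 21% -- about one experiment in five
```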
A further survey of papers based on data from fMRI brain scanners (studies that have long filled the popular media with dramatic claims) found that their statistical power was just 8 per cent.
Low statistical power causes three problems, the authors said: first, a low probability of finding true effects; second, a low probability that a “true” finding is actually true; and third, an exaggerated estimate of the magnitude of the effect when a true effect is detected.
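A short simulation (all numbers are assumptions, not taken from the paper) illustrates the second and third problems: with low power, a significant result has only a middling chance of reflecting a real effect, and when it does, the estimated effect size is inflated, the so-called winner's curse.

```python
# Illustrative simulation (all numbers assumed): under low power, significant
# results are both less likely to be true and, when true, exaggerated.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
n, d, alpha = 11, 0.5, 0.05          # small groups -> ~21% power
n_studies, prior_true = 20_000, 0.2  # assume 1 in 5 tested hypotheses is real

true_hits, false_hits, sig_effects = 0, 0, []
for _ in range(n_studies):
    real = rng.random() < prior_true
    control = rng.normal(0.0, 1.0, n)
    patient = rng.normal(d if real else 0.0, 1.0, n)
    _, p = ttest_ind(control, patient)
    if p < alpha:
        if real:
            true_hits += 1
            sig_effects.append(patient.mean() - control.mean())
        else:
            false_hits += 1

ppv = true_hits / (true_hits + false_hits)
print(f"share of significant findings that are true: {ppv:.0%}")   # roughly half
print(f"mean estimated effect among significant true findings: "
      f"{np.mean(sig_effects):.2f} vs true effect {d}")            # noticeably inflated
```

In this setup, only about half of the significant results correspond to a real effect, and the effect estimates that do survive the significance filter come out at roughly double their true size.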