Quotulatiousness

August 14, 2015

QotD: When “the science” shows what you want it to show

Filed under: Media, Quotations, Science — Nicholas @ 01:00

To see what I mean, consider the recent tradition of psychology articles showing that conservatives are authoritarian while liberals are not. Jeremy Frimer, who runs the Moral Psychology Lab at the University of Winnipeg, realized that who you asked those questions about might matter — did conservatives defer to the military because they were authoritarians or because the military is considered a “conservative” institution? And, lo and behold, when he asked similar questions about, say, environmentalists, the liberals were the authoritarians.

It also matters because social psychology, and social science more generally, has a replication problem, which was recently covered in a very good article at Slate. Take the infamous “paradox of choice” study that found that offering a few kinds of jam samples at a supermarket was more likely to result in a purchase than offering dozens of samples. A team of researchers that tried to replicate this — and other famous experiments — completely failed. When they did a survey of the literature, they found that the array of choices generally had no important effect either way. The replication problem is bad enough in one subfield of social psychology that Nobel laureate Daniel Kahneman wrote an open letter to its practitioners, urging them to institute tougher replication protocols before their field implodes. A recent issue of Social Psychology was devoted to trying to replicate famous studies in the discipline; more than a third failed replication.

Let me pause here to say something important: Though I mentioned bias above, I’m not suggesting in any way that the replication problems mostly happen because social scientists are in on a conspiracy against conservatives to do bad research or to make stuff up. The replication problems mostly happen because, as the Slate article notes, journals are biased toward publishing positive and novel results, not “there was no relationship, which is exactly what you’d expect.” So readers see the one paper showing that something interesting happened, not the (possibly many more) teams that got muddy data showing no particular effect. If you do enough studies on enough small groups, you will occasionally get an effect just by random chance. But because those are the only studies that get published, it seems like “science has proved …” whatever those papers are about.

Megan McArdle, “The Truth About Truthiness”, Bloomberg View, 2014-09-08.
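McArdle's closing point — that if you run enough studies of small groups, some will show an "effect" by random chance alone, and those are the ones that get published — can be illustrated with a short simulation. This is a hypothetical sketch, not anything from the article: thousands of simulated "studies" each test a perfectly fair coin for bias, yet a few percent of them still clear the conventional p < 0.05 significance bar.

```python
import math
import random

random.seed(1)

N_FLIPS = 100      # sample size per study
N_STUDIES = 2000   # independent research teams, all studying a null effect
ALPHA = 0.05       # conventional significance threshold

def two_sided_p(heads, n=N_FLIPS):
    """Exact two-sided binomial p-value for a fair coin: the probability
    of a result at least as far from n/2 as the observed head count."""
    dev = abs(heads - n / 2)
    return sum(math.comb(n, k) for k in range(n + 1)
               if abs(k - n / 2) >= dev) / 2 ** n

# Every study investigates a true null (a fair coin), yet some cross
# the significance threshold purely by sampling noise.
significant = 0
for _ in range(N_STUDIES):
    heads = sum(random.random() < 0.5 for _ in range(N_FLIPS))
    if two_sided_p(heads) < ALPHA:
        significant += 1

rate = significant / N_STUDIES
print(f"{significant} of {N_STUDIES} null studies found an 'effect' "
      f"({rate:.1%}) -- and these are the ones journals tend to publish")
```

If journals publish only the "significant" handful and shelve the rest, a reader of the literature sees a string of positive findings about an effect that does not exist — which is exactly the file-drawer dynamic the quote describes.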
