Something has happened at Slate. Until relatively recently, Slate’s science page produced so much amazingly good content that we were tempted to link to it multiple times per day. In our 2013 list of the Top 10 Science News Sites, we awarded them an honorable mention.
But that was then. Now, for some reason, Slate’s science page has partially abandoned its strong tradition of in-depth analysis to promote angry, opinion-driven reportage that is mostly aimed at insulting Republicans and Christians.
This is counterproductive. Science journalism that forsakes its primary mission of science communication to engage in partisan culture wars does a grotesque disservice to the scientific endeavor and is doomed to fail. Just ask ScienceBlogs, which has become a shell of its former self because, as the New York Times described, it became “Fox News for the religion-baiting, peak-oil crowd” that utilized “redundant and effortfully incendiary rhetoric.” Slate’s science page is heading down a similar path.
Alex B. Berezow, “Slate’s Science Page Has Gone Crazy”, Real Clear Science, 2015-05-25.
December 23, 2016
November 22, 2016
John Tierney on the President-elect’s stated views on science:
What will a Trump administration mean for scientific research and technology?
The good news is that the next president doesn’t seem all that interested in science, judging from the little he said about it during the campaign. That makes a welcome contrast with Barack Obama, who cared far too much — in the wrong way. He politicized science to advance his agenda. His scientific appointees in the White House, the Centers for Disease Control, and the Food and Drug Administration were distinguished by their progressive ideology, not the quality of their research. They used junk science — or no science — to justify misbegotten crusades against dietary salt, trans fats, and electronic cigarettes. They cited phony statistics to spread myths about a gender pay gap and a rape crisis on college campuses. Ignoring mainstream climate scientists, they blamed droughts and storms on global warming and then tried to silence critics who pointed out their mistakes.
Trump has vaguely expressed support for federal funding of R&D in science, medicine, and energy, but he has stressed encouraging innovation in the private sector. His election has left the science establishment aghast. Its members were mostly behind Hillary Clinton, both because they share her politics and because she would continue the programs funded by Obama. Their fears of losing funding are probably overblown — there’s strong support in Congress for R&D — but some of the priorities could change.
Trump has vowed to ignore the Paris international climate agreement that committed the U.S. to reduce greenhouse emissions. That prospect appalls environmentalists but cheers those of us who consider the agreement an enormously expensive way to achieve very little. Trump’s position poses a financial threat to wind-power producers and other green-energy companies that rely on federal subsidies to survive.
November 17, 2016
Today I saw a link to an article in Mother Jones bemoaning the fact that the general public is out of step with the consensus of science on important issues. The implication is that science is right and the general public are idiots. But my take is different.
I think science has earned its lack of credibility with the public. If you kick me in the balls for 20 years, how do you expect me to close my eyes and trust you?
If a person doesn’t believe climate change is real, despite all the evidence to the contrary, is that a case of a dumb human or a science that has not earned credibility? We humans operate on pattern recognition. The pattern science serves up, thanks to its winged monkeys in the media, is something like this:
Step One: We are totally sure the answer is X.
Step Two: Oops. X is wrong. But Y is totally right. Trust us this time.
Science isn’t about being right every time, or even most of the time. It is about being more right over time and fixing what it got wrong. So how is a common citizen supposed to know when science is “done” and when it is halfway done, which is the same as being wrong?
You can’t tell. And if any scientist says you should be able to tell when science is “done” on a topic, please show me the data indicating that people have psychic powers.
So maybe we should stop scoffing at people who don’t trust science and ask ourselves why. Ignorance might be part of the problem. But I think the bigger issue is that science is a “mostly wrong” situation by design that is intended to become more right over time. How do you make people trust a system that is designed to get wrong answers more often than right answers? And should we?
Science is an amazing thing. But it has a credibility issue that it earned. Should we fix the credibility situation by brainwashing skeptical citizens to believe in science despite its spotty track record, or is society’s current level of skepticism healthier than it looks? Maybe science is what needs to improve, not the citizens.
I’m on the side that says climate change, for example, is pretty much what science says it is because the scientific consensus is high. But I realize half of my fellow citizens disagree, based on pattern recognition. On one hand, the views of my fellow citizens might lead humanity to inaction on climate change and result in the extinction of humans. On the other hand, would I want to live in a world in which people stopped using pattern recognition to make decisions?
Those are two bad choices.
Scott Adams, “Science’s Biggest Fail”, Scott Adams Blog, 2015-02-02.
November 10, 2016
What is science’s biggest fail of all time?
I nominate everything about diet and fitness.
Maybe science has the diet and fitness stuff mostly right by now. I hope so. But I thought the same thing twenty years ago and I was wrong.
I used to think fatty food made you fat. Now it seems the opposite is true. Eating lots of peanuts, avocados, and cheese, for example, probably decreases your appetite and keeps you thin.
I used to think vitamins had been thoroughly studied for their health trade-offs. They haven’t. The reason you take one multivitamin pill a day is marketing, not science.
I used to think the U.S. food pyramid was good science. In the past it was not, and I assume it is not now.
I used to think drinking one glass of alcohol a day was good for health, but now I think that idea is probably just a correlation found in studies.
I used to think I needed to drink a crazy-large amount of water each day, because smart people said so, but that wasn’t science either.
I could go on for an hour.
You might be tempted to say my real issue is with a lack of science, not with science. In some of the cases I mentioned there was a general belief that science had studied stuff when in fact it had not. So one could argue that the media and the government (schools in particular) are to blame for allowing so much non-science to taint the field of real science. And we all agree that science is not intended to be foolproof. Science is about crawling toward the truth over time.
Perhaps my expectations were too high. I expected science to tell me the best ways to eat and to exercise. Science did the opposite, sometimes because of misleading studies and sometimes by being silent when bad science morphed into popular misconceptions. And science was pretty damned cocky about being right during this period in which it was so wrong.
So you have the direct problem of science collectively steering my entire generation toward obesity, diabetes, and coronary problems. But the indirect problem might be worse: It is hard to trust science.
Scott Adams, “Science’s Biggest Fail”, Scott Adams Blog, 2015-02-02.
April 19, 2016
At Open Culture, Josh Jones discusses Richard Feynman’s suggestion for non-scientists to evaluate whether a claim is scientific or pseudo-scientific nonsense:
The problem of demarcation, or what is and what is not science, has occupied philosophers for some time, and the most famous answer comes from philosopher of science Karl Popper, who proposed his theory of “falsifiability” in 1963. According to Popper, an idea is scientific if it can conceivably be proven wrong. Although Popper’s strict definition of science has had its uses over the years, it has also come in for its share of criticism, since so much accepted science was falsified in its day (Newton’s gravitational theory, Bohr’s theory of the atom), and so much current theoretical science cannot be falsified (string theory, for example). Whatever the case, the problem for lay people remains. If a scientific theory is beyond our comprehension, it’s unlikely we’ll be able to see how it might be disproven.
Physicist and science communicator Richard Feynman came up with another criterion, one that applies directly to the non-scientist likely to be bamboozled by fancy terminology that sounds scientific. Simon Oxenham at Big Think points to the example of Deepak Chopra, who is “infamous for making profound sounding yet entirely meaningless statements by abusing scientific language.” (What Daniel Dennett calls “deepities.”) As a balm against such statements, Oxenham refers us to a speech Feynman gave in 1966 to a meeting of the National Science Teachers Association. Rather than asking lay people to confront scientific-sounding claims on their own terms, Feynman would have us translate them into ordinary language, thereby assuring that what the claim asserts is a logical concept, rather than just a collection of jargon.
The example Feynman gives comes from the most rudimentary source, a “first grade science textbook” which “begins in an unfortunate manner to teach science”: it shows its student a picture of a “windable toy dog,” then a picture of a real dog, then a motorbike. In each case the student is asked “What makes it move?” The answer, Feynman tells us “was in the teacher’s edition of the book… ‘energy makes it move.’” Few students would have intuited such an abstract concept, unless they had previously learned the word, which is all the lesson teaches them. The answer, Feynman points out, might as well have been “’God makes it move,’ or ‘Spirit makes it move,’ or, ‘Movability makes it move.’”
Instead, a good science lesson “should think about what an ordinary human being would answer.” Engaging with the concept of energy in ordinary language enables the student to explain it, and this, Feynman says, constitutes a test for “whether you have taught an idea or you have only taught a definition.”
January 4, 2016
I have noticed a tendency of mine to reply to arguments with “Well yeah, that would work for the X Czar, but there’s no such thing.”
For example, take the problems with the scientific community, which my friends in Berkeley often discuss. There’s lots of publication bias, statistics are done in a confusing and misleading way out of sheer inertia, and replications often happen very late or not at all. And sometimes someone will say something like “I can’t believe people are too dumb to fix Science. All we would have to do is require early registration of studies to avoid publication bias, turn this new and powerful statistical technique into the new standard, and accord higher status to scientists who do replication experiments. It would be really simple and it would vastly increase scientific progress. I must just be smarter than all existing scientists, since I’m able to think of this and they aren’t.”
And I answer “Well, yeah, that would work for the Science Czar. He could just make a Science Decree that everyone has to use the right statistics, and make another Science Decree that everyone must accord replications higher status. And since we all follow the Science Czar’s Science Decrees, it would all work perfectly!”
Why exactly am I being so sarcastic? Because things that work from a czar’s-eye view don’t work from within the system. No individual scientist has an incentive to unilaterally switch to the new statistical technique for her own research, since it would make her research less likely to produce earth-shattering results and since it would just confuse all the other scientists. They just have an incentive to want everybody else to do it, at which point they would follow along.
Likewise, no journal has the incentive to unilaterally demand early registration, since that just means everyone who forgot to early register their studies would switch to their competitors’ journals.
And since the system is only made of individual scientists and individual journals, no one is ever going to switch and science will stay exactly as it is.
Scott Alexander, “Reactionary Philosophy In An Enormous, Planet-Sized Nutshell”, Slate Star Codex, 2013-03-03.
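The trap Alexander describes is a coordination game: everyone switching beats the status quo, but no individual gains by moving first. A minimal sketch of that incentive structure (all payoff numbers here are invented purely for illustration):

```python
def payoff(my_method: str, others_use_new: float) -> float:
    """Payoff to one scientist, given the fraction of colleagues
    already using the new statistical standard.
    (Numbers are invented purely for illustration.)"""
    if my_method == "old":
        return 1.0  # business as usual: safe, publishable
    # New method: great once the field has adopted it, costly if you're alone
    return 2.0 if others_use_new > 0.5 else 0.5

def best_response(others_use_new: float) -> str:
    """Pick whichever method pays better given what everyone else does."""
    return max(["old", "new"], key=lambda m: payoff(m, others_use_new))

# No one switches unilaterally...
print(best_response(0.0))  # -> "old"
# ...but everyone would happily follow a Science Czar's decree
print(best_response(1.0))  # -> "new"
```

Both "everyone old" and "everyone new" are self-reinforcing, which is exactly why the system stays stuck without a coordinating decree.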
December 16, 2015
Henry Miller on the Faustian bargain Chipotle willingly made and is now paying for:
Chipotle, the once-popular Mexican restaurant chain, is experiencing a well-deserved downward spiral.
The company found it could pass off a fast-food menu stacked with high-calorie, sodium-rich options as higher quality and more nutritious because the meals were made with locally grown, genetic engineering-free ingredients. And to set the tone for the kind of New Age-y image the company wanted, Chipotle adopted slogans like, “We source from farms rather than factories” and, “With every burrito we roll or bowl we fill, we’re working to cultivate a better world.”
The rest of the company wasn’t as swift as the marketing department, however. Last week, about 140 people, all but a handful of them Boston College students, were recovering from a nasty bout of norovirus-caused gastroenteritis, a foodborne illness apparently contracted while eating Chipotle’s “responsibly raised” meats and largely organic produce.
And they’re not alone. The Centers for Disease Control and Prevention has been tracking another, unrelated Chipotle food poisoning outbreak in California, Illinois, Maryland, Minnesota, New York, Ohio, Oregon, Pennsylvania and Washington, in which victims have been as young as one year and as old as 94. Using whole genome sequencing, CDC investigators identified the DNA fingerprint of the bacterial culprit in that outbreak as E. coli strain STEC O26, which was found in all of the sickened customers tested.
Outbreaks of food poisoning have become something of a Chipotle trademark; the recent ones are the fourth and fifth this year, one of which was not disclosed to the public. A particularly worrisome aspect of the company’s serial deficiencies is that there have been at least three unrelated pathogens in the outbreaks – Salmonella and E. coli bacteria and norovirus. In other words, there has been more than a single glitch; suppliers and employees have found a variety of ways to contaminate what Chipotle cavalierly sells (at premium prices) to its customers.
November 5, 2015
Henry I. Miller & Julie Kelly on the less-than-certain future of the organic farming community:
The organic-products industry, which has been on a tear for the past decade, is running scared. Challenged by progress in modern genetic engineering and state-of-the-art pesticides — which are denied to organic farmers — the organic movement is ratcheting up its rhetoric and bolstering its anti-innovation agenda while trying to expand a consumer base that shows signs of hitting the wall.
Genetic-engineering-labeling referendums funded by the organic industry failed last year in Colorado and Oregon, following similar defeats in California and Washington. Even worse for the industry, a recent Supreme Court decision appears to proscribe on First Amendment grounds the kind of labeling they want. A June 2015 Supreme Court decision has cleared a judicial path to challenge the constitutionality of special labeling — “compelled commercial speech” — to identify foods that contain genetically engineered (sometimes called “genetically modified”) ingredients. The essence of the decision is the expansion of the range of regulations subject to “strict scrutiny,” the most rigorous standard of review for constitutionality, to include special labeling laws.
Organic agriculture has become a kind of Dr. Frankenstein’s monster, a far cry from what was intended: “Let me be clear about one thing, the organic label is a marketing tool,” said then-Secretary of Agriculture Dan Glickman when organic certification was being considered. “It is not a statement about food safety. Nor is ‘organic’ a value judgment about nutrition or quality.” That quote from Secretary Glickman should have to be displayed prominently in every establishment that sells organic products.
The backstory here is that in spite of its “good vibes,” organic farming is an affront to the environment — hugely wasteful of arable land and water because of its low yields. Plant pathologist Dr. Steve Savage recently analyzed the data from USDA’s 2014 Organic Survey, which reports various measures of productivity from most of the certified-organic farms in the nation, and compared them to those at conventional farms, crop by crop, state by state. His findings are extraordinary. Of the 68 crops surveyed, there was a “yield gap” — poorer performance of organic farms — in 59. And many of those gaps, or shortfalls, were impressive: strawberries, 61 percent less than conventional; fresh tomatoes, 61 percent less; tangerines, 58 percent less; carrots, 49 percent less; cotton, 45 percent less; rice, 39 percent less; peanuts, 37 percent less.
October 28, 2015
The World Health Organization appears to exist primarily to give newspaper editors an excuse to run sensational headlines about the risk of cancer. This is not a repeat story from earlier years. Oh, wait. Yes it is. Here’s The Atlantic’s Ed Yong to de-sensationalize the recent scary headlines:
The International Agency for Research on Cancer (IARC), an arm of the World Health Organization, is notable for two things. First, they’re meant to carefully assess whether things cause cancer, from pesticides to sunlight, and to provide the definitive word on those possible risks.
Second, they are terrible at communicating their findings.
Group 1 is billed as “carcinogenic to humans,” which means that we can be fairly sure that the things here have the potential to cause cancer. But the stark language, with no mention of risks or odds or anything remotely conditional, invites people to assume that if they specifically partake of, say, smoking or processed meat, they will definitely get cancer.
Similarly, when Group 2A is described as “probably carcinogenic to humans,” it roughly translates to “there’s some evidence that these things could cause cancer, but we can’t be sure.” Again, the word “probably” conjures up the specter of individual risk, but the classification isn’t about individuals at all.
Group 2B, “possibly carcinogenic to humans,” may be the most confusing one of all. What does “possibly” even mean? Proving a negative is incredibly difficult, which is why Group 4 — “probably not carcinogenic to humans” — contains just one substance of the hundreds that IARC has assessed.
So, in practice, 2B becomes a giant dumping ground for all the risk factors that IARC has considered, and could neither confirm nor fully discount as carcinogens. Which is to say: most things. It’s a bloated category, essentially one big epidemiological shruggie. But try telling someone unfamiliar with this that, say, power lines are “possibly carcinogenic” and see what they take away from that.
Worse still, the practice of lumping risk factors into categories without accompanying description — or, preferably, visualization — of their respective risks practically invites people to view them as like-for-like. And that inevitably led to misleading headlines like this one in the Guardian: “Processed meats rank alongside smoking as cancer causes – WHO.”
October 6, 2015
The easy way to tell real religion from fake religion is that real religion doesn’t make you feel good. It doesn’t assure you that everything you’re doing is right and that you ought to keep on doing it.
The same holds true for science. Real science doesn’t make you feel smart. Fake science does.
No matter how smart you think you are, real science will make you feel stupid far more often than it will make you feel smart. Real science not only tells us how much more we don’t know than we know, a state of affairs that will continue for all of human history, but it tells us how fragile the knowledge that we have gained is, how prone we are to making childish mistakes and allowing our biases to think for us.
Science is a rigorous way of making fewer mistakes. It’s not very useful to people who already know everything. Science is for stupid people who know how much they don’t know.
A look back at the march of science doesn’t show an even line of progress led by smooth-talking popularizers who are never wrong. Instead the cabinets of science are full of oddballs, unqualified, jealous, obsessed and eccentric, whose pivotal discoveries sometimes came about by accident. Science, like so much of human accomplishment, often depended on lucky accidents to provide a result that could then be isolated and systematized into a useful understanding of the process.
Daniel Greenfield, “Science is for Stupid People”, Sultan Knish, 2014-09-30.
August 29, 2015
We depend on scientific studies to provide us with valid information on so many different aspects of life … it’d be nice to know that the results of those studies actually hold up to scrutiny:
One of the bedrock assumptions of science is that for a study’s results to be valid, other researchers should be able to reproduce the study and reach the same conclusions. The ability to successfully reproduce a study and find the same results is, as much as anything, how we know that its findings are true, rather than a one-off result.
This seems obvious, but in practice, a lot more work goes into original studies designed to create interesting conclusions than into the rather less interesting work of reproducing studies that have already been done to see whether their results hold up.
Everyone wants to be part of the effort to identify new and interesting results, not the more mundane (and yet potentially career-endangering) work of reproducing the results of older studies:
Why is psychology research (and, it seems likely, social science research generally) so stuffed with dubious results? Let me suggest three likely reasons:
A bias towards research that is not only new but interesting: An interesting, counterintuitive finding that appears to come from good, solid scientific investigation gets a researcher more media coverage, more attention, more fame both inside and outside of the field. A boring and obvious result, or no result, on the other hand, even if investigated honestly and rigorously, usually does little for a researcher’s reputation. The career path for academic researchers, especially in social science, is paved with interesting but hard to replicate findings. (In a clever way, the Reproducibility Project gets around this issue by coming up with the really interesting result that lots of psychology studies have problems.)
An institutional bias against checking the work of others: This is the flipside of the first factor: Senior social science researchers often actively warn their younger colleagues — who are in many cases the best positioned to check older work — against investigating the work of established members of the field. As one psychology professor from the University of Southern California grouses to the Times, “There’s no doubt replication is important, but it’s often just an attack, a vigilante exercise.”
Small, unrepresentative sample sizes: In general, social science experiments tend to work with fairly small sample sizes — often just a few dozen people who are meant to stand in for everyone else. Researchers often have a hard time putting together truly representative samples, so they work with subjects they can access, which in a lot of cases means college students.
A couple of years ago, I linked to a story about the problem of using western university students as the default source of your statistical sample for psychological and sociological studies:
A notion that’s popped up several times in the last couple of months is that the easy access to willing test subjects (university students) introduces a strong bias to a lot of the tests, yet until recently the majority of studies disregarded the possibility that their test results were unrepresentative of the general population.
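A quick simulation makes the sampling worry concrete: if a trait differs between students and everyone else, a students-only convenience sample misestimates the population no matter how many students you recruit. (The population composition and trait values below are invented purely for illustration.)

```python
import random

def convenience_sample_demo(seed=0):
    """Compare a students-only estimate against the true population mean.
    Hypothetical population: 10% students, who differ on the measured trait."""
    rng = random.Random(seed)
    students = [rng.gauss(60, 10) for _ in range(1_000)]      # mean ~60
    everyone_else = [rng.gauss(50, 10) for _ in range(9_000)]  # mean ~50
    population = students + everyone_else

    true_mean = sum(population) / len(population)          # ~51
    convenience_mean = sum(students) / len(students)       # ~60
    return true_mean, convenience_mean

true_mean, convenience_mean = convenience_sample_demo()
print(f"population mean ~ {true_mean:.1f}, students-only estimate ~ {convenience_mean:.1f}")
```

The gap never shrinks with a bigger student sample; more of a biased sample just gives a more precise estimate of the wrong quantity.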
August 20, 2015
In Forbes, Henry I. Miller and Drew L. Kershen explain why they think organic farming is, as they term it, a “colossal hoax” that promises far more than it can possibly deliver:
Consumers of organic foods are getting both more and less than they bargained for. On both counts, it’s not good.
Many people who pay the huge premium — often more than 100% — for organic foods do so because they’re afraid of pesticides. If that’s their rationale, they misunderstand the nuances of organic agriculture. Although it’s true that synthetic chemical pesticides are generally prohibited, there is a lengthy list of exceptions listed in the Organic Foods Production Act, while most “natural” ones are permitted. However, “organic” pesticides can be toxic. As evolutionary biologist Christie Wilcox explained in a 2012 Scientific American article (“Are lower pesticide residues a good reason to buy organic? Probably not.”): “Organic pesticides pose the same health risks as non-organic ones.”
Another poorly recognized aspect of this issue is that the vast majority of pesticidal substances that we consume are in our diets “naturally” and are present in organic foods as well as non-organic ones. In a classic study, UC Berkeley biochemist Bruce Ames and his colleagues found that “99.99 percent (by weight) of the pesticides in the American diet are chemicals that plants produce to defend themselves.” Moreover, “natural and synthetic chemicals are equally likely to be positive in animal cancer tests.” Thus, consumers who buy organic to avoid pesticide exposure are focusing their attention on just one-hundredth of 1% of the pesticides they consume.
Some consumers think that the USDA National Organic Program (NOP) requires certified organic products to be free of ingredients from “GMOs,” organisms crafted with molecular techniques of genetic engineering. Wrong again. USDA does not require organic products to be GMO-free. (In any case, the methods used to create so-called GMOs are an extension, or refinement, of older techniques for genetic modification that have been used for a century or more.)
August 17, 2015
Henry I. Miller and Drew L. Kershen on the widespread FUD still being pushed in much of the mainstream media about genetically modified organisms in the food supply:
New York Times nutrition and health columnist Jane Brody recently penned a generally good piece about genetic engineering, “Fears, Not Facts, Support GMO-Free Food.” She recapitulated the overwhelming evidence for the importance and safety of products from GMOs, or “genetically modified organisms” (which for the sake of accuracy, we prefer to call organisms modified with molecular genetic engineering techniques, or GE). Their uses encompass food, animal feed, drugs, vaccines and animals. Sales of drugs made with genetic engineering techniques are in the scores of billions of dollars annually, and ingredients from genetically engineered crop plants are found in 70-80 percent of processed foods on supermarket shelves.
Brody’s article had two errors, however. The first was this statement, in a correction that was appended (probably by the editors) after the article was published:
The article also referred imprecisely to regulation of GMOs by the Food and Drug Administration and the Environmental Protection Agency. While the organizations regulate food from genetically engineered crops to ensure they are safe to eat, the program is voluntary. It is not the case that every GMO must be tested before it can be marketed.
In fact, every so-called GMO used for food, fiber or ornamental use is subject to compulsory case-by-case regulation by the Animal and Plant Health Inspection Service (APHIS) of USDA and many are also regulated by the Environmental Protection Agency (EPA) during extensive field testing. When these organisms — plants, animals or microorganisms — become food, they are then overseen by the FDA, which has strict rules about misbranding (inaccurate or misleading labeling) and adulteration (the presence of harmful substances). Foods from “new plant varieties” made with any technique are subject to case-by-case premarket FDA review if they possess certain characteristics that pose questions of safety. In addition, food from genetically engineered organisms can undergo a voluntary FDA review. (Every GE food to this point has undergone the voluntary FDA review, so FDA has evaluated every GE food on the market).
The second error by Brody occurred in the very last words of the piece, “the best way for concerned consumers to avoid G.M.O. products is to choose those certified as organic, which the U.S.D.A. requires to be G.M.O.-free.” Brody has fallen victim to a common misconception; in fact, the USDA does not require organic products to be GMO-free.
August 15, 2015
In The Atlantic, Bourree Lam looks at the state of published science and how scientists can begin to address the problems of bad data, statistical sleight-of-hand, and actual fraud:
In May, the journal Science retracted a much-talked-about study suggesting that gay canvassers might cause same-sex marriage opponents to change their opinion, after an independent statistical analysis revealed irregularities in its data. The retraction joined a string of science scandals, ranging from Andrew Wakefield’s infamous study linking a childhood vaccine and autism to the allegations that Marc Hauser, once a star psychology professor at Harvard, fabricated data for research on animal cognition. By one estimate, from 2001 to 2010, the annual rate of retractions by academic journals increased by a factor of 11 (adjusting for increases in published literature, and excluding articles by repeat offenders). This surge raises an obvious question: Are retractions increasing because errors and other misdeeds are becoming more common, or because research is now scrutinized more closely? Helpfully, some scientists have taken to conducting studies of retracted studies, and their work sheds new light on the situation.
“Retractions are born of many mothers,” write Ivan Oransky and Adam Marcus, the co-founders of the blog Retraction Watch, which has logged thousands of retractions in the past five years. A study in the Proceedings of the National Academy of Sciences reviewed 2,047 retractions of biomedical and life-sciences articles and found that just 21.3 percent stemmed from straightforward error, while 67.4 percent resulted from misconduct, including fraud or suspected fraud (43.4 percent) and plagiarism (9.8 percent).
Surveys of scientists have tried to gauge the extent of undiscovered misconduct. According to a 2009 meta-analysis of these surveys, about 2 percent of scientists admitted to having fabricated, falsified, or modified data or results at least once, and as many as a third confessed “a variety of other questionable research practices including ‘dropping data points based on a gut feeling,’ and ‘changing the design, methodology or results of a study in response to pressures from a funding source’”.
As for why these practices are so prevalent, many scientists blame increased competition for academic jobs and research funding, combined with a “publish or perish” culture. Because journals are more likely to accept studies reporting “positive” results (those that support, rather than refute, a hypothesis), researchers may have an incentive to “cook” or “mine” their data to generate a positive finding. Such publication bias is not in itself news — back in 1987, a study found that, compared with research trials that went unpublished, those that were published were three times as likely to have positive results. But the bias does seem to be getting stronger: a more recent study of 4,600 research papers found that from 1990 to 2007, the proportion of positive results grew by 22 percent.
August 14, 2015
To see what I mean, consider the recent tradition of psychology articles showing that conservatives are authoritarian while liberals are not. Jeremy Frimer, who runs the Moral Psychology Lab at the University of Winnipeg, realized that who you asked those questions about might matter — did conservatives defer to the military because they were authoritarians or because the military is considered a “conservative” institution? And, lo and behold, when he asked similar questions about, say, environmentalists, the liberals were the authoritarians.
It also matters because social psychology, and social science more generally, has a replication problem, which was recently covered in a very good article at Slate. Take the infamous “paradox of choice” study that found that offering a few kinds of jam samples at a supermarket was more likely to result in a purchase than offering dozens of samples. A team of researchers that tried to replicate this — and other famous experiments — completely failed. When they did a survey of the literature, they found that the array of choices generally had no important effect either way. The replication problem is bad enough in one subfield of social psychology that Nobel laureate Daniel Kahneman wrote an open letter to its practitioners, urging them to institute tougher replication protocols before their field implodes. A recent issue of Social Psychology was devoted to trying to replicate famous studies in the discipline; more than a third failed replication.
Let me pause here to say something important: Though I mentioned bias above, I’m not suggesting in any way that the replication problems mostly happen because social scientists are in on a conspiracy against conservatives to do bad research or to make stuff up. The replication problems mostly happen because, as the Slate article notes, journals are biased toward publishing positive and novel results, not “there was no relationship, which is exactly what you’d expect.” So readers see the one paper showing that something interesting happened, not the (possibly many more) teams that got muddy data showing no particular effect. If you do enough studies on enough small groups, you will occasionally get an effect just by random chance. But because those are the only studies that get published, it seems like “science has proved …” whatever those papers are about.
Megan McArdle, “The Truth About Truthiness”, Bloomberg View, 2014-09-08.
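McArdle’s mechanism is easy to simulate: run many studies of a nonexistent effect on small samples, then “publish” only the ones that clear the usual significance bar. A rough sketch (the sample sizes, study count, and threshold are arbitrary choices, not from the article):

```python
import random
import statistics

def null_study(n=20, rng=random):
    """Compare two groups drawn from the SAME distribution:
    any 'effect' found here is pure noise."""
    a = [rng.gauss(0, 1) for _ in range(n)]
    b = [rng.gauss(0, 1) for _ in range(n)]
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se  # Welch-style t

def publication_filter(n_studies=1000, seed=42):
    """Run many null studies; 'publish' only the significant-looking ones."""
    rng = random.Random(seed)
    results = [null_study(rng=rng) for _ in range(n_studies)]
    # Journals prefer 'significant' findings: |t| > 2, roughly p < .05
    published = [t for t in results if abs(t) > 2]
    return len(published), n_studies

published, total = publication_filter()
print(f"{published} of {total} no-effect studies look significant in print")
```

Roughly 5% of the null studies clear the bar by chance, and those false positives are the only ones a reader ever sees; the other ~95% vanish into file drawers, so “science has proved …” whatever survived the filter.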