November 10, 2016
QotD: Science’s Biggest Fail
What is science’s biggest fail of all time?
I nominate everything about diet and fitness.
Maybe science has the diet and fitness stuff mostly right by now. I hope so. But I thought the same thing twenty years ago and I was wrong.
I used to think fatty food made you fat. Now it seems the opposite is true. Eating lots of peanuts, avocados, and cheese, for example, probably decreases your appetite and keeps you thin.
I used to think vitamins had been thoroughly studied for their health trade-offs. They haven’t. The reason you take one multivitamin pill a day is marketing, not science.
I used to think the U.S. food pyramid was good science. In the past it was not, and I assume it is not now.
I used to think drinking one glass of alcohol a day was good for health, but now I think that idea is probably just a correlation found in studies.
I used to think I needed to drink a crazy-large amount of water each day, because smart people said so, but that wasn’t science either.
I could go on for an hour.
You might be tempted to say my real issue is with a lack of science, not with science. In some of the cases I mentioned there was a general belief that science had studied stuff when in fact it had not. So one could argue that the media and the government (schools in particular) are to blame for allowing so much non-science to taint the field of real science. And we all agree that science is not intended to be foolproof. Science is about crawling toward the truth over time.
Perhaps my expectations were too high. I expected science to tell me the best ways to eat and to exercise. Science did the opposite, sometimes because of misleading studies and sometimes by being silent when bad science morphed into popular misconceptions. And science was pretty damned cocky about being right during this period in which it was so wrong.
So you have the direct problem of science collectively steering my entire generation toward obesity, diabetes, and coronary problems. But the indirect problem might be worse: It is hard to trust science.
Scott Adams, “Science’s Biggest Fail”, Scott Adams Blog, 2015-02-02.
April 30, 2016
QotD: “SETI is a religion”
Cast your minds back to 1960. John F. Kennedy is president, commercial jet airplanes are just appearing, the biggest university mainframes have 12K of memory. And in Green Bank, West Virginia at the new National Radio Astronomy Observatory, a young astrophysicist named Frank Drake runs a two-week project called Ozma, to search for extraterrestrial signals. A signal is received, to great excitement. It turns out to be false, but the excitement remains. In 1960, Drake organizes the first SETI conference and comes up with the now-famous Drake equation:
N = N* × fp × ne × fl × fi × fc × fL [where N* is the number of stars in the Milky Way galaxy; fp is the fraction with planets; ne is the number of planets per star capable of supporting life; fl is the fraction of planets where life evolves; fi is the fraction where intelligent life evolves; fc is the fraction that communicates; and fL is the fraction of the planet’s life during which the communicating civilizations live.]
This serious-looking equation gave SETI a serious footing as a legitimate intellectual inquiry. The problem, of course, is that none of the terms can be known, and most cannot even be estimated. The only way to work the equation is to fill in with guesses. And guesses — just so we’re clear — are merely expressions of prejudice. Nor can there be “informed guesses.” If you need to state how many planets with life choose to communicate, there is simply no way to make an informed guess. It’s simply prejudice.
As a result, the Drake equation can have any value from “billions and billions” to zero. An expression that can mean anything means nothing. Speaking precisely, the Drake equation is literally meaningless, and has nothing to do with science. I take the hard view that science involves the creation of testable hypotheses. The Drake equation cannot be tested and therefore SETI is not science. SETI is unquestionably a religion. Faith is defined as the firm belief in something for which there is no proof. The belief that the Koran is the word of God is a matter of faith. The belief that God created the universe in seven days is a matter of faith. The belief that there are other life forms in the universe is a matter of faith. There is not a single shred of evidence for any other life forms, and in forty years of searching, none has been discovered. There is absolutely no evidentiary reason to maintain this belief. SETI is a religion.
Michael Crichton, “Aliens Cause Global Warming”: the Caltech Michelin Lecture, 2003-01-17.
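Crichton’s complaint about guesswork is easy to make concrete. Below is a minimal Python sketch (an editorial illustration, not part of the lecture; all parameter values are arbitrary guesses) showing that the same equation yields wildly different answers depending on which prejudices you feed it:

```python
# A minimal sketch of the Drake equation: N = N* x fp x ne x fl x fi x fc x fL.
# Every argument is a guess, not a measurement, which is exactly the problem
# Crichton describes.

def drake(n_stars, f_planets, n_habitable, f_life, f_intel, f_comm, f_lifetime):
    """Number of communicating civilizations implied by one set of guesses."""
    return (n_stars * f_planets * n_habitable * f_life *
            f_intel * f_comm * f_lifetime)

# Two equally untestable sets of guesses:
optimist = drake(4e11, 0.5, 2, 0.5, 0.1, 0.1, 1e-4)     # ~200,000 civilizations
pessimist = drake(4e11, 0.5, 2, 1e-10, 0.1, 0.1, 1e-4)  # ~0.00004 civilizations

print(f"optimistic guesses:  {optimist:,.0f}")
print(f"pessimistic guesses: {pessimist:.5f}")
```

Change one guessed fraction and the “answer” moves by nearly ten orders of magnitude; the equation transmits prejudice rather than testing it.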
April 19, 2016
Richard Feynman’s shortcut for detecting pseudo-scientific bullshit
At Open Culture, Josh Jones discusses Richard Feynman’s suggestion for how non-scientists can evaluate whether a claim is scientific or pseudo-scientific nonsense:
The problem of demarcation, or what is and what is not science, has occupied philosophers for some time, and the most famous answer comes from philosopher of science Karl Popper, who proposed his theory of “falsifiability” in 1963. According to Popper, an idea is scientific if it can conceivably be proven wrong. Although Popper’s strict definition of science has had its uses over the years, it has also come in for its share of criticism, since so much accepted science was falsified in its day (Newton’s gravitational theory, Bohr’s theory of the atom), and so much current theoretical science cannot be falsified (string theory, for example). Whatever the case, the problem for lay people remains. If a scientific theory is beyond our comprehension, it’s unlikely we’ll be able to see how it might be disproven.
Physicist and science communicator Richard Feynman came up with another criterion, one that applies directly to the non-scientist likely to be bamboozled by fancy terminology that sounds scientific. Simon Oxenham at Big Think points to the example of Deepak Chopra, who is “infamous for making profound sounding yet entirely meaningless statements by abusing scientific language.” (What Daniel Dennett calls “deepities.”) As a balm against such statements, Oxenham refers us to a speech Feynman gave in 1966 to a meeting of the National Science Teachers Association. Rather than asking lay people to confront scientific-sounding claims on their own terms, Feynman would have us translate them into ordinary language, thereby assuring that what the claim asserts is a logical concept, rather than just a collection of jargon.
The example Feynman gives comes from the most rudimentary source, a “first grade science textbook” which “begins in an unfortunate manner to teach science”: it shows its student a picture of a “windable toy dog,” then a picture of a real dog, then a motorbike. In each case the student is asked “What makes it move?” The answer, Feynman tells us, “was in the teacher’s edition of the book… ‘energy makes it move.’” Few students would have intuited such an abstract concept, unless they had previously learned the word, which is all the lesson teaches them. The answer, Feynman points out, might as well have been “’God makes it move,’ or ‘Spirit makes it move,’ or, ‘Movability makes it move.’”
Instead, a good science lesson “should think about what an ordinary human being would answer.” Engaging with the concept of energy in ordinary language enables the student to explain it, and this, Feynman says, constitutes a test for “whether you have taught an idea or you have only taught a definition.”
January 4, 2016
QotD: The Science Czar
I have noticed a tendency of mine to reply to arguments with “Well yeah, that would work for the X Czar, but there’s no such thing.”
For example, take the problems with the scientific community, which my friends in Berkeley often discuss. There’s lots of publication bias, statistics are done in a confusing and misleading way out of sheer inertia, and replications often happen very late or not at all. And sometimes someone will say something like “I can’t believe people are too dumb to fix Science. All we would have to do is require early registration of studies to avoid publication bias, turn this new and powerful statistical technique into the new standard, and accord higher status to scientists who do replication experiments. It would be really simple and it would vastly increase scientific progress. I must just be smarter than all existing scientists, since I’m able to think of this and they aren’t.”
And I answer “Well, yeah, that would work for the Science Czar. He could just make a Science Decree that everyone has to use the right statistics, and make another Science Decree that everyone must accord replications higher status. And since we all follow the Science Czar’s Science Decrees, it would all work perfectly!”
Why exactly am I being so sarcastic? Because things that work from a czar’s-eye view don’t work from within the system. No individual scientist has an incentive to unilaterally switch to the new statistical technique for her own research, since it would make her research less likely to produce earth-shattering results and since it would just confuse all the other scientists. They just have an incentive to want everybody else to do it, at which point they would follow along.
Likewise, no journal has the incentive to unilaterally demand early registration, since that just means everyone who forgot to early register their studies would switch to their competitors’ journals.
And since the system is only made of individual scientists and individual journals, no one is ever going to switch and science will stay exactly as it is.
Scott Alexander, “Reactionary Philosophy In An Enormous, Planet-Sized Nutshell”, Slate Star Codex, 2013-03-03.
December 16, 2015
Chipotle gains “green cred PR opportunities” and worse health outcomes for customers
Henry Miller on the Faustian bargain Chipotle willingly made and is now paying for:
Chipotle, the once-popular Mexican restaurant chain, is experiencing a well-deserved downward spiral.
The company found it could pass off a fast-food menu stacked with high-calorie, sodium-rich options as higher quality and more nutritious because the meals were made with locally grown, genetic engineering-free ingredients. And to set the tone for the kind of New Age-y image the company wanted, Chipotle adopted slogans like, “We source from farms rather than factories” and, “With every burrito we roll or bowl we fill, we’re working to cultivate a better world.”
The rest of the company wasn’t as swift as the marketing department, however. Last week, about 140 people, all but a handful of them Boston College students, were recovering from a nasty bout of norovirus-caused gastroenteritis, a foodborne illness apparently contracted while eating Chipotle’s “responsibly raised” meats and largely organic produce.
And they’re not alone. The Centers for Disease Control and Prevention has been tracking another, unrelated Chipotle food poisoning outbreak in California, Illinois, Maryland, Minnesota, New York, Ohio, Oregon, Pennsylvania and Washington, in which victims have been as young as one year and as old as 94. Using whole genome sequencing, CDC investigators identified the DNA fingerprint of the bacterial culprit in that outbreak as E. coli strain STEC O26, which was found in all of the sickened customers tested.
Outbreaks of food poisoning have become something of a Chipotle trademark; the recent ones are the fourth and fifth this year, one of which was not disclosed to the public. A particularly worrisome aspect of the company’s serial deficiencies is that there have been at least three unrelated pathogens in the outbreaks – Salmonella and E. coli bacteria and norovirus. In other words, there has been more than a single glitch; suppliers and employees have found a variety of ways to contaminate what Chipotle cavalierly sells (at premium prices) to its customers.
November 5, 2015
The high-church organic movement is feeling under threat
Henry I. Miller & Julie Kelly on the less-than-certain future of the organic farming community:
The organic-products industry, which has been on a tear for the past decade, is running scared. Challenged by progress in modern genetic engineering and state-of-the-art pesticides — which are denied to organic farmers — the organic movement is ratcheting up its rhetoric and bolstering its anti-innovation agenda while trying to expand a consumer base that shows signs of hitting the wall.
Genetic-engineering-labeling referendums funded by the organic industry failed last year in Colorado and Oregon, following similar defeats in California and Washington. Even worse for the industry, a June 2015 Supreme Court decision appears to proscribe on First Amendment grounds the kind of labeling they want, clearing a judicial path to challenge the constitutionality of special labeling — “compelled commercial speech” — to identify foods that contain genetically engineered (sometimes called “genetically modified”) ingredients. The essence of the decision is the expansion of the range of regulations subject to “strict scrutiny,” the most rigorous standard of review for constitutionality, to include special labeling laws.
[…]
Organic agriculture has become a kind of Dr. Frankenstein’s monster, a far cry from what was intended: “Let me be clear about one thing, the organic label is a marketing tool,” said then secretary of agriculture Dan Glickman when organic certification was being considered. “It is not a statement about food safety. Nor is ‘organic’ a value judgment about nutrition or quality.” That quote from Secretary Glickman should have to be displayed prominently in every establishment that sells organic products.
The backstory here is that in spite of its “good vibes,” organic farming is an affront to the environment — hugely wasteful of arable land and water because of its low yields. Plant pathologist Dr. Steve Savage recently analyzed the data from USDA’s 2014 Organic Survey, which reports various measures of productivity from most of the certified-organic farms in the nation, and compared them to those at conventional farms, crop by crop, state by state. His findings are extraordinary. Of the 68 crops surveyed, there was a “yield gap” — poorer performance of organic farms — in 59. And many of those gaps, or shortfalls, were impressive: strawberries, 61 percent less than conventional; fresh tomatoes, 61 percent less; tangerines, 58 percent less; carrots, 49 percent less; cotton, 45 percent less; rice, 39 percent less; peanuts, 37 percent less.
October 28, 2015
The WHO’s lack of clarity leads to sensationalist newspaper headlines (again)
The World Health Organization appears to exist primarily to give newspaper editors an excuse to run sensational headlines about the risk of cancer. This is not a repeat story from earlier years. Oh, wait. Yes it is. Here’s The Atlantic‘s Ed Yong to de-sensationalize the recent scary headlines:
The International Agency for Research on Cancer (IARC), an arm of the World Health Organization, is notable for two things. First, they’re meant to carefully assess whether things cause cancer, from pesticides to sunlight, and to provide the definitive word on those possible risks.
Second, they are terrible at communicating their findings.
[…]
Group 1 is billed as “carcinogenic to humans,” which means that we can be fairly sure that the things here have the potential to cause cancer. But the stark language, with no mention of risks or odds or anything remotely conditional, invites people to assume that if they specifically partake of, say, smoking or processed meat, they will definitely get cancer.
Similarly, when Group 2A is described as “probably carcinogenic to humans,” it roughly translates to “there’s some evidence that these things could cause cancer, but we can’t be sure.” Again, the word “probably” conjures up the specter of individual risk, but the classification isn’t about individuals at all.
Group 2B, “possibly carcinogenic to humans,” may be the most confusing one of all. What does “possibly” even mean? Proving a negative is incredibly difficult, which is why Group 4 — “probably not carcinogenic to humans” — contains just one substance of the hundreds that IARC has assessed.
So, in practice, 2B becomes a giant dumping ground for all the risk factors that IARC has considered, and could neither confirm nor fully discount as carcinogens. Which is to say: most things. It’s a bloated category, essentially one big epidemiological shruggie. But try telling someone unfamiliar with this that, say, power lines are “possibly carcinogenic” and see what they take away from that.
Worse still, the practice of lumping risk factors into categories without accompanying description — or, preferably, visualization — of their respective risks practically invites people to view them as like-for-like. And that inevitably led to misleading headlines like this one in the Guardian: “Processed meats rank alongside smoking as cancer causes – WHO.”
October 6, 2015
QotD: Real science
The easy way to tell real religion from fake religion is that real religion doesn’t make you feel good. It doesn’t assure you that everything you’re doing is right and that you ought to keep on doing it.
The same holds true for science. Real science doesn’t make you feel smart. Fake science does.
No matter how smart you think you are, real science will make you feel stupid far more often than it will make you feel smart. Real science not only tells us how much more we don’t know than we know, a state of affairs that will continue for all of human history, but it tells us how fragile the knowledge that we have gained is, how prone we are to making childish mistakes and allowing our biases to think for us.
Science is a rigorous way of making fewer mistakes. It’s not very useful to people who already know everything. Science is for stupid people who know how much they don’t know.
A look back at the march of science doesn’t show an even line of progress led by smooth-talking popularizers who are never wrong. Instead the cabinets of science are full of oddballs, unqualified, jealous, obsessed and eccentric, whose pivotal discoveries sometimes came about by accident. Science, like so much of human accomplishment, often depended on lucky accidents to provide a result that could then be isolated and systematized into a useful understanding of the process.
Daniel Greenfield, “Science is for Stupid People”, Sultan Knish, 2014-09-30.
September 5, 2015
The subtle lure of “research” that confirms our biases
Megan McArdle on why we fall for bogus research:
Almost three years ago, Nobel Prize-winning psychologist Daniel Kahneman penned an open letter to researchers working on “social priming,” the study of how thoughts and environmental cues can change later, mostly unrelated behaviors. After highlighting a series of embarrassing revelations, ranging from outright fraud to unreproducible results, he warned:
For all these reasons, right or wrong, your field is now the poster child for doubts about the integrity of psychological research. Your problem is not with the few people who have actively challenged the validity of some priming results. It is with the much larger population of colleagues who in the past accepted your surprising results as facts when they were published. These people have now attached a question mark to the field, and it is your responsibility to remove it.
At the time it was a bombshell. Now it seems almost delicate. Replication of psychology studies has become a hot topic, and on Thursday, Science published the results of a project that aimed to replicate 100 famous studies — and found that only about one-third of them held up. The others showed weaker effects, or failed to find the effect at all.
This is, to put it mildly, a problem. But it is not necessarily the problem that many people seem to assume, which is that psychology research standards are terrible, or that the teams that put out the papers are stupid. Sure, some researchers doubtless are stupid, and some psychological research standards could be tighter, because we live in a wide and varied universe where almost anything you can say is certain to be true about some part of it. But for me, the problem is not individual research papers, or even the field of psychology. It’s the way that academic culture filters papers, and the way that the larger society gets their results.
August 29, 2015
We need a new publication called The Journal of Successfully Reproduced Results
We depend on scientific studies to provide us with valid information on so many different aspects of life … it’d be nice to know that the results of those studies actually hold up to scrutiny:
One of the bedrock assumptions of science is that for a study’s results to be valid, other researchers should be able to reproduce the study and reach the same conclusions. The ability to successfully reproduce a study and find the same results is, as much as anything, how we know that its findings are true, rather than a one-off result.
This seems obvious, but in practice, a lot more work goes into original studies designed to create interesting conclusions than into the rather less interesting work of reproducing studies that have already been done to see whether their results hold up.
Everyone wants to be part of the effort to identify new and interesting results, not the more mundane (and yet potentially career-endangering) work of reproducing the results of older studies:
Why is psychology research (and, it seems likely, social science research generally) so stuffed with dubious results? Let me suggest three likely reasons:
A bias towards research that is not only new but interesting: An interesting, counterintuitive finding that appears to come from good, solid scientific investigation gets a researcher more media coverage, more attention, more fame both inside and outside of the field. A boring and obvious result, or no result, on the other hand, even if investigated honestly and rigorously, usually does little for a researcher’s reputation. The career path for academic researchers, especially in social science, is paved with interesting but hard to replicate findings. (In a clever way, the Reproducibility Project gets around this issue by coming up with the really interesting result that lots of psychology studies have problems.)
An institutional bias against checking the work of others: This is the flipside of the first factor: Senior social science researchers often actively warn their younger colleagues — who are in many cases the best positioned to check older work — against investigating the work of established members of the field. As one psychology professor from the University of Southern California grouses to the Times, “There’s no doubt replication is important, but it’s often just an attack, a vigilante exercise.”
[…]
Small, unrepresentative sample sizes: In general, social science experiments tend to work with fairly small sample sizes — often just a few dozen people who are meant to stand in for everyone else. Researchers often have a hard time putting together truly representative samples, so they work with subjects they can access, which in a lot of cases means college students.
A couple of years ago, I linked to a story about the problem of using western university students as the default source of your statistical sample for psychological and sociological studies:
A notion that’s popped up several times in the last couple of months is that the easy access to willing test subjects (university students) introduces a strong bias to a lot of the tests, yet until recently the majority of studies disregarded the possibility that their test results were unrepresentative of the general population.
August 20, 2015
One of the slickest marketing campaigns of our time
In Forbes, Henry I. Miller and Drew L. Kershen explain why they think organic farming is, as they term it, a “colossal hoax” that promises far more than it can possibly deliver:
Consumers of organic foods are getting both more and less than they bargained for. On both counts, it’s not good.
Many people who pay the huge premium — often more than 100% — for organic foods do so because they’re afraid of pesticides. If that’s their rationale, they misunderstand the nuances of organic agriculture. Although it’s true that synthetic chemical pesticides are generally prohibited, there is a lengthy list of exceptions in the Organic Foods Production Act, while most “natural” ones are permitted. However, “organic” pesticides can be toxic. As evolutionary biologist Christie Wilcox explained in a 2012 Scientific American article (“Are lower pesticide residues a good reason to buy organic? Probably not.”): “Organic pesticides pose the same health risks as non-organic ones.”
Another poorly recognized aspect of this issue is that the vast majority of pesticidal substances that we consume are in our diets “naturally” and are present in organic foods as well as non-organic ones. In a classic study, UC Berkeley biochemist Bruce Ames and his colleagues found that “99.99 percent (by weight) of the pesticides in the American diet are chemicals that plants produce to defend themselves.” Moreover, “natural and synthetic chemicals are equally likely to be positive in animal cancer tests.” Thus, consumers who buy organic to avoid pesticide exposure are focusing their attention on just one-hundredth of 1% of the pesticides they consume.
Some consumers think that the USDA National Organic Program (NOP) requires certified organic products to be free of ingredients from “GMOs,” organisms crafted with molecular techniques of genetic engineering. Wrong again. USDA does not require organic products to be GMO-free. (In any case, the methods used to create so-called GMOs are an extension, or refinement, of older techniques for genetic modification that have been used for a century or more.)
August 17, 2015
Food fears and GMOs
Henry I. Miller and Drew L. Kershen on the widespread FUD still being pushed in much of the mainstream media about genetically modified organisms in the food supply:
New York Times nutrition and health columnist Jane Brody recently penned a generally good piece about genetic engineering, “Fears, Not Facts, Support GMO-Free Food.” She recapitulated the overwhelming evidence for the importance and safety of products from GMOs, or “genetically modified organisms” (which, for the sake of accuracy, we prefer to call organisms modified with molecular genetic engineering techniques, or GE). Their uses encompass food, animal feed, drugs, vaccines and animals. Sales of drugs made with genetic engineering techniques are in the scores of billions of dollars annually, and ingredients from genetically engineered crop plants are found in 70-80 percent of processed foods on supermarket shelves.
Brody’s article had two errors, however. The first was this statement, in a correction that was appended (probably by the editors) after the article was published:
The article also referred imprecisely to regulation of GMOs by the Food and Drug Administration and the Environmental Protection Agency. While the organizations regulate food from genetically engineered crops to ensure they are safe to eat, the program is voluntary. It is not the case that every GMO must be tested before it can be marketed.
In fact, every so-called GMO used for food, fiber or ornamental use is subject to compulsory case-by-case regulation by the Animal and Plant Health Inspection Service (APHIS) of USDA and many are also regulated by the Environmental Protection Agency (EPA) during extensive field testing. When these organisms — plants, animals or microorganisms — become food, they are then overseen by the FDA, which has strict rules about misbranding (inaccurate or misleading labeling) and adulteration (the presence of harmful substances). Foods from “new plant varieties” made with any technique are subject to case-by-case premarket FDA review if they possess certain characteristics that pose questions of safety. In addition, food from genetically engineered organisms can undergo a voluntary FDA review. (Every GE food to this point has undergone the voluntary FDA review, so FDA has evaluated every GE food on the market).
The second error by Brody occurred in the very last words of the piece, “the best way for concerned consumers to avoid G.M.O. products is to choose those certified as organic, which the U.S.D.A. requires to be G.M.O.-free.” Brody has fallen victim to a common misconception; in fact, the USDA does not require organic products to be GMO-free.
August 15, 2015
Science in the media – from “statistical irregularities” to outright fraud
In The Atlantic, Bourree Lam looks at the state of published science and how scientists can begin to address the problems of bad data, statistical sleight-of-hand, and actual fraud:
In May, the journal Science retracted a much-talked-about study suggesting that gay canvassers might cause same-sex marriage opponents to change their opinion, after an independent statistical analysis revealed irregularities in its data. The retraction joined a string of science scandals, ranging from Andrew Wakefield’s infamous study linking a childhood vaccine and autism to the allegations that Marc Hauser, once a star psychology professor at Harvard, fabricated data for research on animal cognition. By one estimate, from 2001 to 2010, the annual rate of retractions by academic journals increased by a factor of 11 (adjusting for increases in published literature, and excluding articles by repeat offenders). This surge raises an obvious question: Are retractions increasing because errors and other misdeeds are becoming more common, or because research is now scrutinized more closely? Helpfully, some scientists have taken to conducting studies of retracted studies, and their work sheds new light on the situation.
“Retractions are born of many mothers,” write Ivan Oransky and Adam Marcus, the co-founders of the blog Retraction Watch, which has logged thousands of retractions in the past five years. A study in the Proceedings of the National Academy of Sciences reviewed 2,047 retractions of biomedical and life-sciences articles and found that just 21.3 percent stemmed from straightforward error, while 67.4 percent resulted from misconduct, including fraud or suspected fraud (43.4 percent) and plagiarism (9.8 percent).
Surveys of scientists have tried to gauge the extent of undiscovered misconduct. According to a 2009 meta-analysis of these surveys, about 2 percent of scientists admitted to having fabricated, falsified, or modified data or results at least once, and as many as a third confessed to “a variety of other questionable research practices including ‘dropping data points based on a gut feeling,’ and ‘changing the design, methodology or results of a study in response to pressures from a funding source’”.
As for why these practices are so prevalent, many scientists blame increased competition for academic jobs and research funding, combined with a “publish or perish” culture. Because journals are more likely to accept studies reporting “positive” results (those that support, rather than refute, a hypothesis), researchers may have an incentive to “cook” or “mine” their data to generate a positive finding. Such publication bias is not in itself news — back in 1987, a study found that, compared with research trials that went unpublished, those that were published were three times as likely to have positive results. But the bias does seem to be getting stronger: a more recent study of 4,600 research papers found that from 1990 to 2007, the proportion of positive results grew by 22 percent.
August 14, 2015
QotD: When “the science” shows what you want it to show
To see what I mean, consider the recent tradition of psychology articles showing that conservatives are authoritarian while liberals are not. Jeremy Frimer, who runs the Moral Psychology Lab at the University of Winnipeg, realized that who you asked those questions about might matter — did conservatives defer to the military because they were authoritarians or because the military is considered a “conservative” institution? And, lo and behold, when he asked similar questions about, say, environmentalists, the liberals were the authoritarians.
It also matters because social psychology, and social science more generally, has a replication problem, which was recently covered in a very good article at Slate. Take the infamous “paradox of choice” study that found that offering a few kinds of jam samples at a supermarket was more likely to result in a purchase than offering dozens of samples. A team of researchers that tried to replicate this — and other famous experiments — completely failed. When they did a survey of the literature, they found that the array of choices generally had no important effect either way. The replication problem is bad enough in one subfield of social psychology that Nobel laureate Daniel Kahneman wrote an open letter to its practitioners, urging them to institute tougher replication protocols before their field implodes. A recent issue of Social Psychology was devoted to trying to replicate famous studies in the discipline; more than a third failed replication.
Let me pause here to say something important: Though I mentioned bias above, I’m not suggesting in any way that the replication problems mostly happen because social scientists are in on a conspiracy against conservatives to do bad research or to make stuff up. The replication problems mostly happen because, as the Slate article notes, journals are biased toward publishing positive and novel results, not “there was no relationship, which is exactly what you’d expect.” So readers see the one paper showing that something interesting happened, not the (possibly many more) teams that got muddy data showing no particular effect. If you do enough studies on enough small groups, you will occasionally get an effect just by random chance. But because those are the only studies that get published, it seems like “science has proved …” whatever those papers are about.
Megan McArdle, “The Truth About Truthiness”, Bloomberg View, 2014-09-08.
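McArdle’s mechanism is simple enough to simulate. Here is a small, hypothetical Python sketch (an editorial illustration, not from her column): every simulated study tests a true null effect, yet a literature that records only the “significant” results still fills up with findings:

```python
# Hypothetical illustration of publication bias: every "study" below compares
# two groups drawn from the same distribution, so the true effect is zero.
# A journal that publishes only p < 0.05 results still accumulates "findings".

import random
import statistics

def study_is_significant(n=30):
    """One small two-group study of a true null effect; returns True if a
    crude t-statistic crosses the usual significance threshold."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    se = ((statistics.variance(a) + statistics.variance(b)) / n) ** 0.5
    t = (statistics.mean(a) - statistics.mean(b)) / se
    return abs(t) > 2.0  # roughly p < 0.05 for samples this size

random.seed(0)
total = 1000
published = sum(study_is_significant() for _ in range(total))
print(f"{published} of {total} null studies came out 'significant'")
```

With a 5 percent false-positive threshold, roughly 50 of the 1,000 null studies come out “significant”; if only those reach print, the published record shows a consistent effect that does not exist.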
August 1, 2015
QotD: How to write a headline about a “scientific” result
… let’s not forget the Heads We Win Tails You Lose rule of the in-group affirmations which we loosely call “social sciences.”
Suppose you run a test to distinguish whether women, or men, are more willing to hire family — that is, engage in nepotism — when filling a job.
If it turns out that men are more likely to engage in nepotistic practices, the study will be titled:
Women More Ethical in Business Dealings Than Men
On the other hand, if it turns out that women are more likely to approve of nepotism, whereas men are less likely, the study will have the title:
Women More Caring Towards Family Members; Men Care Only About Filthy Careerism & the Welfare of Total Strangers Who Might Be Rapists
Ace, “Shock: Social Scientists Determine Conservatives Are Stupid”, Ace of Spades HQ, 2014-09-09.