The easy way to tell real religion from fake religion is that real religion doesn’t make you feel good. It doesn’t assure you that everything you’re doing is right and that you ought to keep on doing it.
The same holds true for science. Real science doesn’t make you feel smart. Fake science does.
No matter how smart you think you are, real science will make you feel stupid far more often than it will make you feel smart. Real science not only tells us how much more we don’t know than we know, a state of affairs that will continue for all of human history, but it tells us how fragile the knowledge that we have gained is, how prone we are to making childish mistakes and allowing our biases to think for us.
Science is a rigorous way of making fewer mistakes. It’s not very useful to people who already know everything. Science is for stupid people who know how much they don’t know.
A look back at the march of science doesn’t show an even line of progress led by smooth-talking popularizers who are never wrong. Instead the cabinets of science are full of oddballs, unqualified, jealous, obsessed and eccentric, whose pivotal discoveries sometimes came about by accident. Science, like so much of human accomplishment, often depended on lucky accidents to provide a result that could then be isolated and systematized into a useful understanding of the process.
Daniel Greenfield, “Science is for Stupid People”, Sultan Knish, 2014-09-30.
October 6, 2015
August 29, 2015
We depend on scientific studies to provide us with valid information on so many different aspects of life … it’d be nice to know that the results of those studies actually hold up to scrutiny:
One of the bedrock assumptions of science is that for a study’s results to be valid, other researchers should be able to reproduce the study and reach the same conclusions. The ability to successfully reproduce a study and find the same results is, as much as anything, how we know that its findings are true, rather than a one-off result.
This seems obvious, but in practice, a lot more work goes into original studies designed to create interesting conclusions than into the rather less interesting work of reproducing studies that have already been done to see whether their results hold up.
Everyone wants to be part of the effort to identify new and interesting results, not the more mundane (and yet potentially career-endangering) work of reproducing the results of older studies:
Why is psychology research (and, it seems likely, social science research generally) so stuffed with dubious results? Let me suggest three likely reasons:
A bias towards research that is not only new but interesting: An interesting, counterintuitive finding that appears to come from good, solid scientific investigation gets a researcher more media coverage, more attention, more fame both inside and outside of the field. A boring and obvious result, or no result, on the other hand, even if investigated honestly and rigorously, usually does little for a researcher’s reputation. The career path for academic researchers, especially in social science, is paved with interesting but hard to replicate findings. (In a clever way, the Reproducibility Project gets around this issue by coming up with the really interesting result that lots of psychology studies have problems.)
An institutional bias against checking the work of others: This is the flipside of the first factor: Senior social science researchers often actively warn their younger colleagues — who are in many cases the best positioned to check older work — against investigating the work of established members of the field. As one psychology professor from the University of Southern California grouses to the Times, “There’s no doubt replication is important, but it’s often just an attack, a vigilante exercise.”
Small, unrepresentative sample sizes: In general, social science experiments tend to work with fairly small sample sizes — often just a few dozen people who are meant to stand in for everyone else. Researchers often have a hard time putting together truly representative samples, so they work with subjects they can access, which in a lot of cases means college students.
A couple of years ago, I linked to a story about the problem of using western university students as the default source of your statistical sample for psychological and sociological studies:
A notion that’s popped up several times in the last couple of months is that the easy access to willing test subjects (university students) introduces a strong bias to a lot of the tests, yet until recently the majority of studies disregarded the possibility that their test results were unrepresentative of the general population.
August 20, 2015
In Forbes, Henry I. Miller and Drew L. Kershen explain why they think organic farming is, as they term it, a “colossal hoax” that promises far more than it can possibly deliver:
Consumers of organic foods are getting both more and less than they bargained for. On both counts, it’s not good.
Many people who pay the huge premium — often more than 100% — for organic foods do so because they’re afraid of pesticides. If that’s their rationale, they misunderstand the nuances of organic agriculture. Although it’s true that synthetic chemical pesticides are generally prohibited, there is a lengthy list of exceptions in the Organic Foods Production Act, while most “natural” ones are permitted. However, “organic” pesticides can be toxic. As evolutionary biologist Christie Wilcox explained in a 2012 Scientific American article (“Are lower pesticide residues a good reason to buy organic? Probably not.”): “Organic pesticides pose the same health risks as non-organic ones.”
Another poorly recognized aspect of this issue is that the vast majority of pesticidal substances that we consume are in our diets “naturally” and are present in organic foods as well as non-organic ones. In a classic study, UC Berkeley biochemist Bruce Ames and his colleagues found that “99.99 percent (by weight) of the pesticides in the American diet are chemicals that plants produce to defend themselves.” Moreover, “natural and synthetic chemicals are equally likely to be positive in animal cancer tests.” Thus, consumers who buy organic to avoid pesticide exposure are focusing their attention on just one-hundredth of 1% of the pesticides they consume.
Some consumers think that the USDA National Organic Program (NOP) requires certified organic products to be free of ingredients from “GMOs,” organisms crafted with molecular techniques of genetic engineering. Wrong again. USDA does not require organic products to be GMO-free. (In any case, the methods used to create so-called GMOs are an extension, or refinement, of older techniques for genetic modification that have been used for a century or more.)
August 17, 2015
Henry I. Miller and Drew L. Kershen on the widespread FUD still being pushed in much of the mainstream media about genetically modified organisms in the food supply:
New York Times nutrition and health columnist Jane Brody recently penned a generally good piece about genetic engineering, “Fears, Not Facts, Support GMO-Free Food.” She recapitulated the overwhelming evidence for the importance and safety of products from GMOs, or “genetically modified organisms” (which for the sake of accuracy, we prefer to call organisms modified with molecular genetic engineering techniques, or GE). Their uses encompass food, animal feed, drugs, vaccines and animals. Sales of drugs made with genetic engineering techniques are in the scores of billions of dollars annually, and ingredients from genetically engineered crop plants are found in 70-80 percent of processed foods on supermarket shelves.
Brody’s article had two errors, however. The first was this statement, in a correction that was appended (probably by the editors) after the article was published:
The article also referred imprecisely to regulation of GMOs by the Food and Drug Administration and the Environmental Protection Agency. While the organizations regulate food from genetically engineered crops to ensure they are safe to eat, the program is voluntary. It is not the case that every GMO must be tested before it can be marketed.
In fact, every so-called GMO used for food, fiber or ornamental use is subject to compulsory case-by-case regulation by the Animal and Plant Health Inspection Service (APHIS) of USDA and many are also regulated by the Environmental Protection Agency (EPA) during extensive field testing. When these organisms — plants, animals or microorganisms — become food, they are then overseen by the FDA, which has strict rules about misbranding (inaccurate or misleading labeling) and adulteration (the presence of harmful substances). Foods from “new plant varieties” made with any technique are subject to case-by-case premarket FDA review if they possess certain characteristics that pose questions of safety. In addition, food from genetically engineered organisms can undergo a voluntary FDA review. (Every GE food to this point has undergone the voluntary FDA review, so FDA has evaluated every GE food on the market).
The second error by Brody occurred in the very last words of the piece, “the best way for concerned consumers to avoid G.M.O. products is to choose those certified as organic, which the U.S.D.A. requires to be G.M.O.-free.” Brody has fallen victim to a common misconception; in fact, the USDA does not require organic products to be GMO-free.
August 15, 2015
In The Atlantic, Bourree Lam looks at the state of published science and how scientists can begin to address the problems of bad data, statistical sleight-of-hand, and actual fraud:
In May, the journal Science retracted a much-talked-about study suggesting that gay canvassers might cause same-sex marriage opponents to change their opinion, after an independent statistical analysis revealed irregularities in its data. The retraction joined a string of science scandals, ranging from Andrew Wakefield’s infamous study linking a childhood vaccine and autism to the allegations that Marc Hauser, once a star psychology professor at Harvard, fabricated data for research on animal cognition. By one estimate, from 2001 to 2010, the annual rate of retractions by academic journals increased by a factor of 11 (adjusting for increases in published literature, and excluding articles by repeat offenders). This surge raises an obvious question: Are retractions increasing because errors and other misdeeds are becoming more common, or because research is now scrutinized more closely? Helpfully, some scientists have taken to conducting studies of retracted studies, and their work sheds new light on the situation.
“Retractions are born of many mothers,” write Ivan Oransky and Adam Marcus, the co-founders of the blog Retraction Watch, which has logged thousands of retractions in the past five years. A study in the Proceedings of the National Academy of Sciences reviewed 2,047 retractions of biomedical and life-sciences articles and found that just 21.3 percent stemmed from straightforward error, while 67.4 percent resulted from misconduct, including fraud or suspected fraud (43.4 percent) and plagiarism (9.8 percent).
Surveys of scientists have tried to gauge the extent of undiscovered misconduct. According to a 2009 meta-analysis of these surveys, about 2 percent of scientists admitted to having fabricated, falsified, or modified data or results at least once, and as many as a third confessed “a variety of other questionable research practices including ‘dropping data points based on a gut feeling,’ and ‘changing the design, methodology or results of a study in response to pressures from a funding source’”.
As for why these practices are so prevalent, many scientists blame increased competition for academic jobs and research funding, combined with a “publish or perish” culture. Because journals are more likely to accept studies reporting “positive” results (those that support, rather than refute, a hypothesis), researchers may have an incentive to “cook” or “mine” their data to generate a positive finding. Such publication bias is not in itself news — back in 1987, a study found that, compared with research trials that went unpublished, those that were published were three times as likely to have positive results. But the bias does seem to be getting stronger: a more recent study of 4,600 research papers found that from 1990 to 2007, the proportion of positive results grew by 22 percent.
August 14, 2015
To see what I mean, consider the recent tradition of psychology articles showing that conservatives are authoritarian while liberals are not. Jeremy Frimer, who runs the Moral Psychology Lab at the University of Winnipeg, realized that who you asked those questions about might matter — did conservatives defer to the military because they were authoritarians or because the military is considered a “conservative” institution? And, lo and behold, when he asked similar questions about, say, environmentalists, the liberals were the authoritarians.
It also matters because social psychology, and social science more generally, has a replication problem, which was recently covered in a very good article at Slate. Take the infamous “paradox of choice” study that found that offering a few kinds of jam samples at a supermarket was more likely to result in a purchase than offering dozens of samples. A team of researchers that tried to replicate this — and other famous experiments — completely failed. When they did a survey of the literature, they found that the array of choices generally had no important effect either way. The replication problem is bad enough in one subfield of social psychology that Nobel laureate Daniel Kahneman wrote an open letter to its practitioners, urging them to institute tougher replication protocols before their field implodes. A recent issue of Social Psychology was devoted to trying to replicate famous studies in the discipline; more than a third failed replication.
Let me pause here to say something important: Though I mentioned bias above, I’m not suggesting in any way that the replication problems mostly happen because social scientists are in on a conspiracy against conservatives to do bad research or to make stuff up. The replication problems mostly happen because, as the Slate article notes, journals are biased toward publishing positive and novel results, not “there was no relationship, which is exactly what you’d expect.” So readers see the one paper showing that something interesting happened, not the (possibly many more) teams that got muddy data showing no particular effect. If you do enough studies on enough small groups, you will occasionally get an effect just by random chance. But because those are the only studies that get published, it seems like “science has proved …” whatever those papers are about.
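McArdle’s last point is easy to check numerically. The following is a rough sketch (sample size, threshold, and trial count are arbitrary, chosen only for illustration): it simulates many small two-group studies in which no real effect exists, and counts how many clear the conventional significance bar anyway.

```python
import random
import statistics

def run_null_study(n=30):
    """Two groups drawn from the SAME population: any 'effect' is pure noise."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    # Welch-style t statistic for the difference in means
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    t = (statistics.mean(a) - statistics.mean(b)) / se
    return abs(t) > 2.0  # roughly p < 0.05 at this sample size

trials = 2000
false_positives = sum(run_null_study() for _ in range(trials))
print(f"{false_positives} of {trials} null studies were 'significant' "
      f"(~{false_positives / trials:.1%})")
```

With a 5 percent threshold, roughly one null study in twenty comes out “significant”. If only those studies get published — and the muddy null results stay in the file drawer — the literature shows an effect that was never there.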
Megan McArdle, “The Truth About Truthiness”, Bloomberg View, 2014-09-08.
August 1, 2015
… let’s not forget the Heads We Win Tails You Lose rule of the in-group affirmations which we loosely call “social sciences.”
Suppose you run a test to distinguish whether women, or men, are more willing to hire family — that is, engage in nepotism — when filling a job.
If it turns out that men are more likely to engage in nepotistic practices, the study will be titled:
Women More Ethical in Business Dealings Than Men
On the other hand, if it turns out that women are more likely to approve of nepotism, whereas men are less likely, the study will have the title:
Women More Caring Towards Family Members; Men Care Only About Filthy Careerism & the Welfare of Total Strangers Who Might Be Rapists
July 27, 2015
In Slate, William Saletan on the FUD campaign that has been waged against genetically modified foods:
Is genetically engineered food dangerous? Many people seem to think it is. In the past five years, companies have submitted more than 27,000 products to the Non-GMO Project, which certifies goods that are free of genetically modified organisms. Last year, sales of such products nearly tripled. Whole Foods will soon require labels on all GMOs in its stores. Abbott, the company that makes Similac baby formula, has created a non-GMO version to give parents “peace of mind.” Trader Joe’s has sworn off GMOs. So has Chipotle.
Some environmentalists and public interest groups want to go further. Hundreds of organizations, including Consumers Union, Friends of the Earth, Physicians for Social Responsibility, the Center for Food Safety, and the Union of Concerned Scientists, are demanding “mandatory labeling of genetically engineered foods.” Since 2013, Vermont, Maine, and Connecticut have passed laws to require GMO labels. Massachusetts could be next.
The central premise of these laws — and the main source of consumer anxiety, which has sparked corporate interest in GMO-free food — is concern about health. Last year, in a survey by the Pew Research Center, 57 percent of Americans said it’s generally “unsafe to eat genetically modified foods.” Vermont says the primary purpose of its labeling law is to help people “avoid potential health risks of food produced from genetic engineering.” Chipotle notes that 300 scientists have “signed a statement rejecting the claim that there is a scientific consensus on the safety of GMOs for human consumption.” Until more studies are conducted, Chipotle says, “We believe it is prudent to take a cautious approach toward GMOs.”
The World Health Organization, the American Medical Association, the National Academy of Sciences, and the American Association for the Advancement of Science have all declared that there’s no good evidence GMOs are unsafe. Hundreds of studies back up that conclusion. But many of us don’t trust these assurances. We’re drawn to skeptics who say that there’s more to the story, that some studies have found risks associated with GMOs, and that Monsanto is covering it up.
I’ve spent much of the past year digging into the evidence. Here’s what I’ve learned. First, it’s true that the issue is complicated. But the deeper you dig, the more fraud you find in the case against GMOs. It’s full of errors, fallacies, misconceptions, misrepresentations, and lies. The people who tell you that Monsanto is hiding the truth are themselves hiding evidence that their own allegations about GMOs are false. They’re counting on you to feel overwhelmed by the science and to accept, as a gut presumption, their message of distrust.
H/T to Coyote Blog for the link.
July 24, 2015
Matt Ridley on the danger to all scientific fields when one field is willing to subordinate fact to political expediency:
For much of my life I have been a science writer. That means I eavesdrop on what’s going on in laboratories so I can tell interesting stories. It’s analogous to the way art critics write about art, but with a difference: we “science critics” rarely criticise. If we think a scientific paper is dumb, we just ignore it. There’s too much good stuff coming out of science to waste time knocking the bad stuff.
Sure, we occasionally take a swipe at pseudoscience — homeopathy, astrology, claims that genetically modified food causes cancer, and so on. But the great thing about science is that it’s self-correcting. The good drives out the bad, because experiments get replicated and hypotheses put to the test. So a really bad idea cannot survive long in science.
Or so I used to think. Now, thanks largely to climate science, I have changed my mind. It turns out bad ideas can persist in science for decades, and surrounded by myrmidons of furious defenders they can turn into intolerant dogmas.
This should have been obvious to me. Lysenkoism, a pseudo-biological theory that plants (and people) could be trained to change their heritable natures, helped starve millions and yet persisted for decades in the Soviet Union, reaching its zenith under Nikita Khrushchev. The theory that dietary fat causes obesity and heart disease, based on a couple of terrible studies in the 1950s, became unchallenged orthodoxy and is only now fading slowly.
What these two ideas have in common is that they had political support, which enabled them to monopolise debate. Scientists are just as prone as anybody else to “confirmation bias”, the tendency we all have to seek evidence that supports our favoured hypothesis and dismiss evidence that contradicts it — as if we were counsel for the defence. It’s tosh that scientists always try to disprove their own theories, as they sometimes claim, and nor should they. But they do try to disprove each other’s. Science has always been decentralised, so Professor Smith challenges Professor Jones’s claims, and that’s what keeps science honest.
What went wrong with Lysenko and dietary fat was that in each case a monopoly was established. Lysenko’s opponents were imprisoned or killed. Nina Teicholz’s book The Big Fat Surprise shows in devastating detail how opponents of Ancel Keys’s dietary fat hypothesis were starved of grants and frozen out of the debate by an intolerant consensus backed by vested interests, echoed and amplified by a docile press.
July 16, 2015
On a recent wine tour in the Beamsville Bench region, I watched a fascinating interaction between a winery representative and a potential purchaser. Out of respect, I won’t identify the winery (although there are a few in both Beamsville and Niagara who profess to be “biodynamic” wineries), but the question was asked and the poor winery employee had to fight against her own clear instincts and try to describe in positive terms the utter bullshit that is “biodynamic” theory. Kindly, the questioner allowed her off the hook quickly and our group moved off to taste some other wines.
At Boing-Boing, Maggie Koerth-Baker links to an older article at the SF Weekly saying:
[…] biodynamic farming is, essentially, organic farming … plus a heaping helping of astrology, mysticism and some delightfully medieval-gothic growth preparations. (One involved taking fresh cow skulls, stuffing them with oak bark, burying them at the fall Equinox, unearthing in spring and adding minute amounts of the resulting goop to compost piles. Ostensibly to promote healing in plants.) Perhaps unsurprisingly, large, independent, peer-reviewed studies haven’t found much of a difference between biodynamic and organic grapes. Now, some folks like biodynamic wine, and that’s cool. I just think people ought to know what it is they’re paying a premium for.
The link is broken, but from the old URL, it’s probably this one:
When asked just what was going on, Eierman shot a glance at Jessica LaBounty, Benziger’s marketing manager, who closed her eyes and gave a quick nod. The gardener proceeded to explain that the severed heads were a vital ingredient in Biodynamic Preparation No. 505: Finely ground oak bark will be placed into the cows’ fresh skulls and stored in a shallow, moist hole or rain bucket throughout autumn and winter. The resultant concoction is then applied, in nearly undetectable quantities, to the gargantuan compost piles; Benziger’s promotional literature claims it “stimulates the plant’s immune system and promotes healing.”
Light-years from the surreal scenes at the Sonoma winery, glasses tinkled and forks hit plates of house-marinated olives in a dimly lit San Francisco storefront. Sharply dressed men and their attractive dates laughed over full pours of red and white at Yield Wine Bar in San Francisco’s up-and-coming Dogpatch neighborhood. Nearly half of the 50 wines served that night were grown Biodynamically — a fact prominently displayed on the bar’s menu. When asked what, exactly, this means, bar co-owner Chris Tavelli described Biodynamics as “the highest level of organics, you know, organic above organic.”
Among those who earn a living selling wine to the general public, this was a typical answer. Those with a vested interest in moving Biodynamic wines almost invariably use the words “natural” and “holistic” — terms that are malleable and vague, but near and dear to every San Franciscan’s heart. Its producers and sellers describe the process as “organic to the nth degree,” “the Rolls-Royce of organic farming,” or, simply, “the new organic.”
It’s an explanation Tavelli and fellow wine merchants have to make — or, more accurately, not make — now more than ever. Winemakers recently began aggressively marketing their Biodynamic status as a selling point, claiming their product to be both the “greenest” and most distinctive-tasting available. In San Francisco, Jeff Daniels of the Wine Club has added 10 new Biodynamic labels in the last year alone; Kirk Walker of K & L Wine Merchants says customer queries about Biodynamic wines have jumped in the past few years from roughly one a week to more than 30. Dozens of other San Francisco winesellers concur that they’ve augmented the number of Biodynamic wines they carry by four, five, or even 10 times of late. National chains report the same, and rank San Francisco as perhaps the nation’s top consumer of Biodynamic wine.
June 18, 2015
At Real Clear Science, Ross Pomeroy explains how historical “expert knowledge” and government cheerleading pointed in exactly the opposite direction of today’s experts and government regulators:
For decades, the federal government has been advising Americans on what to eat. Those recommendations have been subject to the shifting sands of dietary science. And have those sands ever been shifting. At first, fat and cholesterol were vilified, while sugar was mostly let off the hook. Now, fat is fine (saturated fat is still evil, though), cholesterol is back, and sugar is the new bogeyman.
Why the sizable shift? The answer may be “bad science.”
Every five years, the Dietary Guidelines Advisory Committee, composed of nutrition and health experts from around the country, convenes to review the latest scientific and medical literature. From their learned dissection, they form the dietary guidelines.
But according to a new editorial published in Mayo Clinic Proceedings, much of the science they review is fundamentally flawed. Unlike experiments in the hard sciences of chemistry, physics, and biology, which rely on direct observational evidence, most diet studies are based on self-reported data. Study subjects are examined for height, weight, and health, then are questioned about what they eat. Their dietary choices are subsequently linked to health outcomes — cancer, mortality, heart disease, etc.
That’s a poor way of doing science, says Edward Archer, a research fellow with the Nutrition Obesity Research Center at the University of Alabama, and lead author of the report.
“The assumption that human memory can provide accurate or precise reproductions of past ingestive behavior is indisputably false,” he and his co-authors write.
Two of the largest studies on nutritional intake in the United States, the CDC’s NHANES and “What We Eat,” are based on asking subjects to recall precisely what and how much they usually eat.
But despite all of the steps that NHANES examiners take to aid recall, such as limiting the recall period to the previous 24 hours and even offering subjects measuring guides to help them report accurate data, the information received is wildly inaccurate. An analysis conducted by Archer in 2013 found that most of the 60,000+ NHANES subjects reported eating fewer calories than they would physiologically need to survive, let alone to put on all the weight that Americans have in the past few decades.
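Archer’s plausibility argument can be sketched in a few lines. This is not his actual screening method (his analysis used estimated energy expenditure and more careful cutoffs); it is a simplified illustration using the standard Mifflin-St Jeor estimate of resting metabolism, applied to a hypothetical subject:

```python
def bmr_mifflin_st_jeor(weight_kg, height_cm, age, male=True):
    """Resting energy requirement in kcal/day (Mifflin-St Jeor equation)."""
    s = 5 if male else -161
    return 10 * weight_kg + 6.25 * height_cm - 5 * age + s

def plausible_report(reported_kcal, weight_kg, height_cm, age, male=True):
    """Flag intake reports below resting metabolism as physiologically implausible."""
    return reported_kcal >= bmr_mifflin_st_jeor(weight_kg, height_cm, age, male)

# Hypothetical subject: a 90 kg, 175 cm, 45-year-old man reporting 1,400 kcal/day.
# His resting requirement alone is about 1,774 kcal/day.
print(plausible_report(1400, 90, 175, 45))  # prints False
```

A survey answer below even the resting requirement cannot describe a sustained diet, which is the sense in which Archer calls such self-reports “indisputably false”.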
June 4, 2015
In The Lancet, Richard Horton discusses the problems of scientific journalism:
The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue.
Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness. As one participant put it, “poor methods get results”. The Academy of Medical Sciences, Medical Research Council, and Biotechnology and Biological Sciences Research Council have now put their reputational weight behind an investigation into these questionable research practices. The apparent endemicity of bad research behaviour is alarming. In their quest for telling a compelling story, scientists too often sculpt data to fit their preferred theory of the world. Or they retrofit hypotheses to fit their data. Journal editors deserve their fair share of criticism too. We aid and abet the worst behaviours. Our acquiescence to the impact factor fuels an unhealthy competition to win a place in a select few journals. Our love of “significance” pollutes the literature with many a statistical fairy-tale. We reject important confirmations. Journals are not the only miscreants. Universities are in a perpetual struggle for money and talent, endpoints that foster reductive metrics, such as high-impact publication. National assessment procedures, such as the Research Excellence Framework, incentivise bad practices. And individual scientists, including their most senior leaders, do little to alter a research culture that occasionally veers close to misconduct.
June 1, 2015
At Real Clear Science, Alex B. Berezow issues a clarion call to stop the US government’s (hidden) subsidy to pornography producers:
You might be asking, What federal porn subsidy? Fair question. Technically, there isn’t a federal porn subsidy. However, if we borrow some of the logic commonly used by politically driven economists, we can redefine the word subsidy to mean whatever we want.
Pornography is enjoyed by many people, but it comes with a very real social cost: it can break up families and perhaps even become an addiction, both of which impose profound losses of productivity. Economists refer to these as negative externalities — i.e., bad side effects that affect people other than the person making the decision. One way to deal with such decisions is to tax them. This should, in theory, reduce the negative side effects, while simultaneously forcing the decisionmaker to bear the “true cost” of his actions. Clearly, if anyone should have to pay for this societal cost, it should be porn watchers, in the form of a porn tax. If they don’t pay such a tax, they are getting an indirect subsidy.
As it turns out, we don’t have a federal porn tax. Thus, we could say that the American government has issued a federal porn subsidy.
Obviously, that reasoning is absurd. Not only does it dubiously redefine the word subsidy, but it unconvincingly claims to be able to accurately place a price tag on every conceivable externality created by watching porn. Accepting that argument would require a nearly complete suspension of disbelief.
Yet, that is essentially the argument that a group of economists at the International Monetary Fund (IMF) just made about fossil fuel subsidies. (See PDF.)
The Guardian, which penned the most influential coverage, began its article with an eye-popping statistic:
“Fossil fuel companies are benefitting from global subsidies of $5.3tn (£3.4tn) a year, equivalent to $10m a minute every day…”
Wow. $5.3 trillion in fossil fuel subsidies? That sounds insane. But, how do they arrive at that number? The Guardian goes on to explain:
“The vast sum is largely due to polluters not paying the costs imposed on governments by the burning of coal, oil and gas. These include the harm caused to local populations by air pollution as well as to people across the globe affected by the floods, droughts and storms being driven by climate change.”
Ah, okay. The subsidy isn’t a direct financial calculation, but is instead based on a bunch of externalities whose costs are nearly impossible to derive with any sense of believability. To give you an idea of just how much fudging exists in these kinds of calculations, a similar report issued in 2013 (PDF) concluded that the fossil fuel subsidy was $1.9 trillion. A discrepancy of $3.4 trillion should raise red flags in regard to methodology.
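The size of that red flag is easy to illustrate. The figures below are invented round numbers, not the IMF’s actual inputs; the point is only that an externality-based “subsidy” total scales directly with a contested parameter such as the assumed social cost of carbon:

```python
# Illustrative only: made-up round numbers, not the IMF's methodology.
emissions_gt_co2 = 35.0  # rough global annual CO2 emissions, gigatonnes

# Three assumed social costs of carbon ($/tonne), each defended somewhere
# in the literature, applied to the SAME physical emissions:
for social_cost_per_tonne in (20, 60, 150):
    implied_subsidy_tn = emissions_gt_co2 * social_cost_per_tonne / 1000
    print(f"${social_cost_per_tonne}/t -> ${implied_subsidy_tn:.2f} trillion/year")
# prints $0.70, $2.10, and $5.25 trillion/year respectively
```

Nothing in the physical world changed between the rows; only the assumed price of the externality did. That is why two reports a couple of years apart can differ by trillions of dollars.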
May 28, 2015
At Vox.com, Julia Belluz and Steven Hoffman show how perverse incentives and human frailty contribute to the wasted efforts — and sometimes outright fraudulent methods — that get “scientific” results published. It’s getting so bad that “the editor of The Lancet … recently lamented, ‘Much of the scientific literature, perhaps half, may simply be untrue.'”:
From study design to dissemination of research, there are dozens of ways science can go off the rails. Many of the scientific studies that are published each year are poorly designed, redundant, or simply useless. Researchers looking into the problem have found that more than half of studies fail to take steps to reduce biases, such as blinding participants to whether they receive a treatment or a placebo.
In an analysis of 300 clinical research papers about epilepsy — published in 1981, 1991, and 2001 — 71 percent were categorized as having no enduring value. Of those, 55.6 percent were classified as inherently unimportant and 38.8 percent as not new. All told, according to one estimate, about $200 billion — or the equivalent of 85 percent of global spending on research — is routinely wasted on flawed and redundant studies.
After publication, there’s the well-documented irreproducibility problem — the fact that researchers often can’t validate findings when they go back and run experiments again. Just last month, a team of researchers published the findings of a project to replicate 100 of psychology’s biggest experiments. They were only able to replicate 39 of the experiments, and one observer — Daniele Fanelli, who studies bias and scientific misconduct at Stanford University in California — told Nature that the reproducibility problem in cancer biology and drug discovery may actually be even more acute.
Indeed, another review found that researchers at Amgen were unable to reproduce 89 percent of landmark cancer research findings for potential drug targets. (The problem even has its own satirical publication, the Journal of Irreproducible Results.)
So why aren’t these problems caught prior to publication of a study? Consider peer review, in which scientists send their papers to other experts for vetting prior to publication. The idea is that those peers will detect flaws and help improve papers before they are published as journal articles. Peer review won’t guarantee that an article is perfect or even accurate, but it’s supposed to act as an initial quality-control step.
Yet the traditional "pre-publication" review model has flaws: it relies on the goodwill of scientists who are increasingly pressed for time and may not critique a paper as thoroughly as it deserves, it is subject to the biases of a select few, and it is slow. It's no surprise, then, that peer review sometimes fails. These factors raise the odds that mistakes, flaws, and even fraudulent work will make it through, even in the highest-quality journals. ("Fake peer review" reports are also now a thing.)
April 30, 2015
At VinePair, Kathleen Willcox explains why the “organic” label on your wine may be little more than a marketing ploy:
A lot of the buzz and imagery about organics appears to be just that – empty sound bites and gimmicks created by folks eager to cash in on the increasingly lucrative organic market. Where does that leave us? Not in an easy place.
Falling for marketers' ploys is practically a full-time occupation in America (I'm not the only one who's bought multiple cartons of fat-free ice cream hoping, this time, to finally find "creamy fat-free vanilla bliss", right?). The gap between consumers' perception of organic agriculture and its reality, the halo of virtue bestowed on it (and the implicit pair of devil's horns hung on conventional agriculture), is arguably one of the biggest boondoggles in our culture today. More than half of Americans (55%) go organic because they believe it's healthier, yet there is really no evidence to back that assumption up. And even organic farmers use pesticides (sorry, random lady at the bar); theirs just happen to be "natural."
It's never been a better time for organic marketers and companies. The worldwide market for organic food and beverages was estimated at $80.4 billion in 2013 and is projected to reach $161.5 billion in 2018, a compound annual growth rate of 15%. North America holds the biggest market share and will account for roughly $66.2 billion by 2018.
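Those growth figures are internally consistent, which is worth a quick check since market projections are often quoted sloppily. Compounding the 2013 base at the stated rate for five years:

```python
# Check the quoted organic-market figures: does a 15% compound annual
# growth rate take $80.4bn (2013) to roughly $161.5bn (2018)?

start_billion = 80.4   # 2013 market size, USD billions (as quoted)
rate = 0.15            # stated CAGR
years = 5              # 2013 -> 2018

projected = start_billion * (1 + rate) ** years
print(f"Projected 2018 market: ${projected:.1f} billion")
```

The result lands at about $161.7 billion, within rounding distance of the quoted $161.5 billion; the small gap just means the true implied CAGR is a shade under 15%.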
But in the rush to get organic products out the door (and fulfill the public's desire for healthier, more environmentally responsible products), some producers do little more than follow the letter of the USDA rules to earn the "organic" label, consequences for the environment and our health be damned. In fact, judging from what producers and studies revealed, it may actually be worse for the environment and your body to buy organic wine from a large manufacturer than wine made from grapes on a smaller vineyard sprayed judiciously with synthetic pesticides by a hands-on farmer.