As I am fond of saying, it works like a stock market bubble. There is no need to posit a conspiracy. David Friedman’s view that this is a matter of a build-up of many little lies rather than a few big ones is a more realistic as well as a more charitable picture of the mechanism at work.
I am yet more charitable than Professor Friedman. Though I completely agree with him that there are almost certainly many scientists shading their conclusions, it might well be the case that they are not doing so consciously at all. All it would take is for a lot of people with jobs to keep and mortgages to pay each to see which side their bread is buttered when the time comes round to apply for grants. As the American socialist author Upton Sinclair put it, “It is difficult to get a man to understand something, when his salary depends on his not understanding it.” On the unbuttered side of the bread, when a scientist observes that colleagues who raise doubts suffer for it, she would be acting much like the rest of humanity if she, while never aware of feeling fear, somehow finds herself more comfortable out of the intellectual proximity of these pariahs.
In a way the Rosetta scientists had it easy. All they had to do was hit a moving target half a billion kilometres away. Succeed or fail, there is no kidding yourself and no kidding others. Twenty-eight minutes later you and the world will know.
Natalie Solent, “Bubbles, lies, and buttered toast”, Samizdata, 2014-11-13.
January 15, 2016
January 4, 2016
I have noticed a tendency of mine to reply to arguments with “Well yeah, that would work for the X Czar, but there’s no such thing.”
For example, take the problems with the scientific community, which my friends in Berkeley often discuss. There’s lots of publication bias, statistics are done in a confusing and misleading way out of sheer inertia, and replications often happen very late or not at all. And sometimes someone will say something like “I can’t believe people are too dumb to fix Science. All we would have to do is require early registration of studies to avoid publication bias, turn this new and powerful statistical technique into the new standard, and accord higher status to scientists who do replication experiments. It would be really simple and it would vastly increase scientific progress. I must just be smarter than all existing scientists, since I’m able to think of this and they aren’t.”
And I answer “Well, yeah, that would work for the Science Czar. He could just make a Science Decree that everyone has to use the right statistics, and make another Science Decree that everyone must accord replications higher status. And since we all follow the Science Czar’s Science Decrees, it would all work perfectly!”
Why exactly am I being so sarcastic? Because things that work from a czar’s-eye view don’t work from within the system. No individual scientist has an incentive to unilaterally switch to the new statistical technique for her own research, since it would make her research less likely to produce earth-shattering results and since it would just confuse all the other scientists. She just has an incentive to want everybody else to do it, at which point she would follow along.
Likewise, no journal has the incentive to unilaterally demand early registration, since that just means everyone who forgot to early register their studies would switch to their competitors’ journals.
And since the system is only made of individual scientists and individual journals, no one is ever going to switch and science will stay exactly as it is.
Scott Alexander, “Reactionary Philosophy In An Enormous, Planet-Sized Nutshell”, Slate Star Codex, 2013-03-03.
December 15, 2015
Another older post from Megan McArdle on the nice-soundbites-but-terrible-economic-notions from the Hillary Clinton campaign to fix the prescription medicine marketplace:
Hillary Clinton thinks drug development should be riskier, and less profitable. Also, your health insurance premiums should be higher. And there should be fewer drugs available.
This is not, of course, how the Clinton campaign would put it. The official line is that Americans are just paying too darn much for drugs, and she has a plan to stop that:
- Regulate direct-to-consumer advertising more heavily, and strip its tax deductibility
- Require drug companies to spend a certain percentage of revenue on research and development, or face penalty payments and the loss of their R&D tax credit (I am inferring that this is what she is talking about, since the actual language of the proposal is long on paeans to the importance of federal research funding and short on details)
- Cap out-of-pocket costs for drugs
- Reduce the exclusivity period for biologic drugs
- Prohibit companies from making side payments to generic manufacturers to keep generic competition off the market
- Allow drug reimportation
- Require that new treatments be proved to be a substantial improvement over existing treatments — i.e., eliminate the dreaded “me too” drugs
- Allow Medicare to “negotiate” drug prices
Eliminating the side payments seems eminently sensible. (Yes, yes, you can strip my libertarian card, but market-rigging contracts shouldn’t be enforced.) It also seems reasonable to require some sort of comparative effectiveness research. Other provisions will certainly drive down drug prices, at the risk of also driving down innovation.
Still other provisions, however, are simply bad economics. In what other market do we worry about having a second product available that’s merely as good as the first? Should we really only have one antidepressant, one statin, one blood pressure medication, and so forth? Might there be variation among patients so that drugs that are statistically about equally effective in large groups are nonetheless individually more or less effective for different people? Might one drug’s side effects be better tolerated by some patients than another’s? Might having two drugs in the category help keep prices down?
Then there is the notion that we should force pharmaceutical companies to spend a set percentage of their revenues on R&D. This seems to me to be … what’s the word I am looking for? Ah, I’ve got it: “insane.”
Economically, large parts of this plan make little sense. Politically, many of these items would be very difficult to pass, not least because the Congressional Budget Office would assess the likely effects and would make it sound much less appealing than it does in a gauzy stump speech. But away from those harsh realities, purely as campaign rhetoric, it probably works very well.
An older report at the BBC News website discusses recent research into childhood asthma:
Being exposed to “good bacteria” early in life could prevent asthma developing, say Canadian scientists.
The team, reporting in Science Translational Medicine, were analysing the billions of bugs that naturally call the human body home.
Their analysis of 319 children showed they were at higher risk of asthma if four types of bacteria were missing.
Experts said the “right bugs at the right time” could be the best way of preventing allergies and asthma.
In the body, bacteria, fungi and viruses outnumber human cells 10 to one, and this “microbiome” is thought to have a huge impact on health.
The team, at the University of British Columbia and the Children’s Hospital in Vancouver, compared the microbiome at three months and at one year with asthma risk at the age of three.
Children lacking four types of bacteria – Faecalibacterium, Lachnospira, Veillonella, and Rothia (Flvr) – at three months were at high risk of developing asthma at the age of three, based on wheeze and skin allergy tests.
The same effect was not noticed in the microbiome of one-year-olds, suggesting that the first few months of life are crucial.
Further experiments showed that giving the bacterial cocktail to previously germ-free mice reduced inflammation in the airways of their pups.
One of the researchers, Dr Stuart Turvey, said: “Our longer-term vision would be that children in early life could be supplemented with Flvr to look to prevent the ultimate development of asthma.
“I want to emphasise that we are not ready for that yet, we know very little about these bacteria, [but] our ultimate vision of the future would be to prevent this disease.”
December 8, 2015
A brief post at Real Clear Science on a recent discovery in human immunology:
Think again if you thought that doctors had long since identified and described exactly how the body defends itself against microorganisms.
Scientists have recently discovered a whole new side to the immune system: a rapid immune response that kicks in well before any of the other known mechanisms.
“I hate to use the term ‘text books will write about this’, but this [discovery] really is brand new and we will need to write a new chapter,” says co-author Søren R. Paludan, professor of virology and immunology from the Department of Biomedicine, Aarhus University, Denmark.
In collaboration with groups from the US and Germany, the scientists showed that when the body’s outer defence, the mucosa lining that surrounds certain organs, is disturbed by a virus, the underlying layer of cells is the first to react and sound the alarm. They summon the body’s cell soldiers, which attack the invading virus.
Both this alarm system and the ‘soldier’ cells operate completely separately from what were believed to be the first responders to immune system attacks.
December 3, 2015
David Warren is rather a skeptic on the long-term usefulness of big medical charities (and not just because, like any big bureaucracy, sooner or later the primary goal becomes for the organization itself to survive and grow rather than pursuing whatever they were originally created to do):
Medical “research” does similar direct damage. Huge foundations are created to “fight” every imaginable human ailment, and find new ones on which to build fresh fundraising efforts, should any of the old ones go stale. Grand sums are expended on “public awareness” campaigns, to encourage hypochondria and psychosomatic disorders. (I suspect, for instance, that the chief cause of lung cancer today is grisly health warnings on packets of cigarettes.) Money is raised in billions to “find a cure” for whatever. (Snake oil sales were on a much smaller scale.)
At the most elementary level, people should try to understand cause and effect. Vast numbers come to rely upon the metastasis of these soi-disant “charitable” bureaucracies. And if a cure is ever found, they will all be out of their overpaid jobs. Moreover, it is almost invariably some isolated, eccentric, unqualified and unfunded tyro, who makes the fatal discovery. That is why one of the principal tasks of any large medical foundation is to locate these brilliant “inventor” types, and sue them into surrender.
Does gentle reader know that almost all the increase in human longevity, over the last century or so, can be attributed to people washing their hands and taking showers? And most of the rest to better sewage disposal? Or that it took until almost the middle of the last century for life expectancy in the West to rise to levels last seen in the parish records of the Middle Ages? Which was when “modern” hygienic practices were last observed. (Large, centralized hospitals are the most efficient spreaders of infection today.)
Painkillers are nice, and I’m inclined to keep them, but only if we realize that the blessing is mixed. They turn our minds away from futurity; they displace faith in God, to faith in doctors. They create the mindset that embraces “euthanasia.”
Of course, the main focus of contemporary liberal “philanthropy” is not on saving lives at all; rather on killing off babies — in Africa, by first choice. It is what the proggies used to call “population control,” until they invented better euphemisms. That is what truly gladdens the peons in the foundations of all the Bills and Melindas; and lights the corridors of the United Nations. That and the (still historically recent) “climate change” agenda.
November 21, 2015
Almost all of our hard data on race comes from sociology programs in universities – i.e., the most liberal departments in the most liberal institutions in the country. Most of these sociology departments have an explicit mission statement of existing to fight racism. Many sociologists studying race will tell you quite openly that they went into the field – which is not especially high-paying or prestigious – in order to help crusade against the evil of racism.
Imagine a Pfizer laboratory whose mission statement was to prove Pfizer drugs had no side effects, and whose staff all went into pharmacology specifically to help crusade against the evil of believing Pfizer’s drugs have side effects. Imagine that this laboratory hands you their study showing that the latest Pfizer drug has zero side effects, c’mon, trust us! Is there any way you’re taking that drug?
We know that a lot of medical research, especially medical research by drug companies, turns up the wrong answer simply through the file-drawer effect. That is, studies that turn up an exciting result everyone wants to hear get published, and studies that turn up a disappointing result don’t – either because the scientist never submits it to the journals, or because the journal doesn’t want to publish it. If this happens all the time in medical research despite growing safeguards to prevent it, how often do you think it happens in sociological research?
Do you think the average sociologist selects the study design most likely to turn up evidence of racist beliefs being correct, or the study design most likely to turn up the opposite? If despite her best efforts a study does turn up evidence of racist beliefs being correct, do you think she’s going to submit it to a major journal with her name on it for everyone to see? And if by some bizarre chance she does submit it, do you think the International Journal Of We Hate Racism So We Publish Studies Proving How Dumb Racists Are is going to cheerfully include it in their next edition?
And so when people triumphantly say “Modern science has completely disproven racism, there’s not a shred of evidence in support of it”, we should consider that exactly the same level of proof as the guy from 1900 who said “Modern science has completely proven racism, there’s not a shred of evidence against it”. The field is still just made of people pushing their own dogmatic opinions and calling them science; only the dogma has changed.
And although Reactionaries love to talk about race, in the end race is nothing more than a particularly strong and obvious taboo. There are taboos in history, too, and in economics, and in political science, and although they’re less obvious and interesting they still mean you need this same skepticism when parsing results from these fields. “But every legitimate scientist disagrees with this particular Reactionary belief!” should be said with the same intonation as “But every legitimate archbishop disagrees with this particular heresy.”
This is not intended as a proof that racism is correct, or even as the slightest shred of evidence for that hypothesis (although a lot of Reactionaries are, in fact, racist as heck). No doubt the Spanish Inquisition found a couple of real Satanists, and probably some genuine murderers and rapists got sent to Siberia. Sometimes, once in a blue moon, a government will even censor an idea that happens to be false. But it’s still useful to know when something is being censored, so you don’t actually think the absence of evidence for one side of the story is evidence of anything other than people on that side being smart enough to keep their mouths shut.
Scott Alexander, “Reactionary Philosophy In An Enormous, Planet-Sized Nutshell”, Slate Star Codex, 2013-03-03.
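The file-drawer mechanism described in the quote above is easy to simulate. The following is a minimal Monte Carlo sketch: the true effect size, sample size, and publication threshold are all made-up illustrative numbers (not figures from sociology or any real field), chosen only to show how publishing exciting results and shelving disappointing ones inflates the apparent effect.

```python
import random
import statistics

# Monte Carlo sketch of the file-drawer effect. Many labs each run a
# small study of a modest true effect (in standard-deviation units).
# Only studies whose estimate clears an "exciting" threshold get
# published; the rest stay in the file drawer. All numbers here are
# illustrative assumptions.

random.seed(42)

TRUE_EFFECT = 0.1   # the real (modest) effect
N = 30              # participants per study
THRESHOLD = 0.3     # crude cutoff a result must beat to look publishable

published, drawer = [], []
for _ in range(10_000):
    # each study estimates the effect from N noisy observations
    estimate = statistics.mean(random.gauss(TRUE_EFFECT, 1.0) for _ in range(N))
    (published if estimate > THRESHOLD else drawer).append(estimate)

print(f"true effect:               {TRUE_EFFECT}")
print(f"mean of published studies: {statistics.mean(published):.2f}")
print(f"share left in file drawer: {len(drawer) / 10_000:.0%}")
```

Under these assumptions the published literature reports an average effect several times larger than the true one, simply because the unlucky-but-honest estimates never see print.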
November 19, 2015
Matt Ridley on recent developments in the search for ways to ameliorate the effects of aging:
Squeezed between falling birth rates and better healthcare, the world population is getting rapidly older. Learning how to deal with that is one of the great challenges of this century. The World Health Organisation has just produced a report on the implications of an ageing population, which — inadvertently — reveals a dismal fatalism we share about the illnesses of old age: that they will always be inevitable.
This could soon be wrong. A new book, The Telomerase Revolution, published in America this week by the doctor and medical researcher Michael Fossel, argues that we now understand enough about the fundamental cause of ageing to be confident that we will eventually be able to reverse it. This would mean curing diseases such as Alzheimer’s, heart disease and osteoporosis, rather than coping with them or treating their symptoms.
Let me show you what I mean about fatalism. The WHO report on ageing and health, for all its talk of the need for “profound changes” to health care for the elderly, actually urges us to stop trying to cure the afflictions of old age and learn to live with them: “The societal response to population ageing will require a transformation of health systems that moves away from disease-based curative models and towards the provision of older-person-centred and integrated care.”
Yet it also subscribes to the somewhat magical hope that illnesses of old age can be “prevented or delayed by engaging in healthy behaviours” and that “physical activity and good nutrition can have powerful benefits for health and wellbeing.” This is largely wishful thinking. There is no evidence that, say, Alzheimer’s can be prevented by a certain diet or activity. A lack of activity and poor nutrition can worsen health at any age, but the underlying chronic diseases of old age are caused by age itself.
When I asked Dr Fossel what he thought of the WHO report, he replied: “In 1950 we could have talked (and did) about ‘active polio’ in the sense of keeping polio victims active rather than giving up, but the very phrase itself implies that one has already given up. I would prefer that we cure the fundamental problem. Why talk about ‘active ageing’, ‘successful ageing’, and ‘healthy ageing’ when we could talk about not ageing?”
September 5, 2015
Megan McArdle on why we fall for bogus research:
Almost three years ago, Nobel Prize-winning psychologist Daniel Kahneman penned an open letter to researchers working on “social priming,” the study of how thoughts and environmental cues can change later, mostly unrelated behaviors. After highlighting a series of embarrassing revelations, ranging from outright fraud to unreproducible results, he warned:
For all these reasons, right or wrong, your field is now the poster child for doubts about the integrity of psychological research. Your problem is not with the few people who have actively challenged the validity of some priming results. It is with the much larger population of colleagues who in the past accepted your surprising results as facts when they were published. These people have now attached a question mark to the field, and it is your responsibility to remove it.
At the time it was a bombshell. Now it seems almost delicate. Replication of psychology studies has become a hot topic, and on Thursday, Science published the results of a project that aimed to replicate 100 famous studies — and found that only about one-third of them held up. The others showed weaker effects, or failed to find the effect at all.
This is, to put it mildly, a problem. But it is not necessarily the problem that many people seem to assume, which is that psychology research standards are terrible, or that the teams that put out the papers are stupid. Sure, some researchers doubtless are stupid, and some psychological research standards could be tighter, because we live in a wide and varied universe where almost anything you can say is certain to be true about some part of it. But for me, the problem is not individual research papers, or even the field of psychology. It’s the way that academic culture filters papers, and the way that the larger society gets its results.
August 13, 2015
At New Scientist, a report on some very hopeful research findings:
A virus found in sewage has spawned a unique drug that targets plaques implicated in a host of brain-crippling diseases, including Alzheimer’s disease, Parkinson’s disease and Creutzfeldt-Jakob disease (CJD).
Results from tests of the drug, announced this week, show that it breaks up plaques in mice affected with Alzheimer’s disease or Parkinson’s disease, and improves the memories and cognitive abilities of the animals.
Other promising results in rats and monkeys mean that the drug developers, NeuroPhage Pharmaceuticals, are poised to apply for permission to start testing it in people, with trials starting perhaps as early as next year.
The drug is the first that seems to target and destroy the multiple types of plaque implicated in human brain disease. Plaques are clumps of misfolded proteins that gradually accumulate into sticky, brain-clogging gunk that kills neurons and robs people of their memories and other mental faculties. Different kinds of misfolded proteins are implicated in different brain diseases, and some can be seen within the same condition.
July 28, 2015
In Nautilus, Adam Piore talks about the project to thoroughly map Icelanders’ DNA:
In the ninth century there was a Norwegian Viking named Kveldulf, so big and strong that no man could defeat him. He sailed the seas in a long-ship and raided and plundered towns and homesteads of distant lands for many years. He settled down to farm, a very wealthy man.
Kveldulf had two sons who grew up to become mighty warriors. One joined the service of King Harald Tangle Hair. But in time the King grew fearful of the son’s growing power and had him murdered. Kveldulf vowed revenge. With his surviving son and allies, Kveldulf caught up with the killers, and wielding a double-bladed ax, slew 50 men. He sent the paltriest survivors back to the king to recount his deed and fled toward the newly settled realm of Iceland. Kveldulf died on the journey. But his remaining son Skallagrim landed on Iceland’s west coast, prospered, and had children.
Skallagrim’s children had children. Those children had children. And the blood and genes of Kveldulf the Viking and Skallagrim his son were passed down the ages. Then, in 1949, in the capital of Reykjavik, a descendant named Kari Stefansson was born.
Like Kveldulf, Stefansson would grow to be a giant, 6’5”, with piercing eyes and a beard. As a young man, he set out for the distant lands of the universities of Chicago and Harvard in search of intellectual bounty. But at the dawn of modern genetics in the 1990s, Stefansson, a neurologist, was lured back to his homeland by an unlikely enticement — the very genes that he and his 300,000-plus countrymen had inherited from Kveldulf and the tiny band of settlers who gave birth to Iceland.
Stefansson had a bold vision. He would create a library of DNA from every single living descendant of his nation’s early inhabitants. This library, coupled with Iceland’s rich trove of genealogical data and meticulous medical records, would constitute an unparalleled resource that could reveal the causes — and point to cures — for human diseases.
In 1996, Stefansson founded a company called Decode, and thrust his tiny island nation into the center of the burgeoning field of gene hunting. “Our genetic heritage is a natural resource,” Stefansson declared after returning to Iceland. “Like fish and hot pools.”
June 25, 2015
At Reason, Ronald Bailey links to a study that appears to undermine most of Thomas Piketty’s claims:
From the study:
We believe Piketty’s core message is provably flawed on several levels, as a result of fundamental and avoidable errors in his basic assumptions. He begins with the sensible presumption that the return on invested capital, r, exceeds macroeconomic growth, g, as must be true in any healthy economy. But from this near-tautology, he moves on to presume that wealthy families will grow ever richer over future generations, leading to a society dominated by unearned, hereditary wealth. Alas, this logic holds true only if the wealthy never dissipate their wealth through spending, charitable giving, taxation, and splitting bequests among multiple heirs.
As individuals, and as families, the rich generally do not get richer; after a fortune is first built, the rich get relentlessly and inevitably poorer.
The “evidence” Piketty uses in support of his thesis is largely anecdotal, drawn from the novels of Austen and Balzac, and from the current fortunes of Bill Gates and Liliane Bettencourt. If Piketty is right, where are the current hyper-wealthy descendants of past entrepreneurial dynasties — the Astors, Vanderbilts, Carnegies, Rockefellers, Mellons, and Gettys? Almost to a man (or woman) they are absent from the realms of the super-affluent. Our evidence — used to refute Piketty’s argument — is empirical, drawn from the rapid rotation of the hyper-wealthy through the ranks of the Forbes 400, and suggests that, at any given time, roughly half of the collective worth of the hyper-wealthy is first-generation earned wealth, not inherited wealth.
The originators of great wealth are one-in-a-million geniuses; their innovation, invention, and single-minded entrepreneurial focus create myriad jobs and productivity enhancements for society at large. They create wealth for society, from which they draw wealth for themselves. In contrast, the descendants of the hyper-wealthy rarely have that same one-in-a-million genius. Bettencourt, cited by Piketty, is a clear exception. Typically, we find that descendants halve their inherited wealth — relative to the growth of per capita GDP — every 20 years or less, without any additional assistance from Piketty’s redistribution prescription.
Dynastic wealth accumulation is simply a myth. The reality is that each generation spawns its own entrepreneurs who create vast pools of entirely new wealth, and enjoy their share of it, displacing many of the preceding generations’ entrepreneurial wealth creators. Today, the massive fortunes of the 19th century are largely depleted and almost all of the fortunes generated just a half-century ago are also gone. Do we really want to stifle entrepreneurialism, invention, and innovation in an effort to accelerate the already-rapid process of wealth redistribution?
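The study’s claim that heirs halve their inherited wealth, relative to per-capita GDP, every 20 years or less is just geometric decay, and a quick sketch shows how fast it bites. The 20-year half-life is the study’s own figure; the $10 billion starting fortune and the 100-year horizon are invented for illustration:

```python
# Geometric-decay sketch of the claim that descendants halve inherited
# wealth, relative to per-capita GDP, every 20 years. The starting
# fortune and horizon are illustrative assumptions, not data.

HALF_LIFE_YEARS = 20
START_WEALTH = 10.0  # billions, measured relative to per-capita GDP

for years in range(0, 101, 20):
    relative = START_WEALTH * 0.5 ** (years / HALF_LIFE_YEARS)
    print(f"after {years:3d} years: {relative:7.4f} bn (relative)")
```

On these assumptions, a $10 billion fortune is down to an eighth of its relative size after 60 years and about a thirtieth after a century, which is consistent with the observation that the great 19th-century fortunes are largely depleted today.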
June 19, 2015
At Strategy Page, a quick look at the US Army’s latest change in camouflage clothing and equipment:
The U.S. Army has begun issuing its new combat uniforms featuring a new and improved camouflage pattern. This is yet another effort to deal with troop complaints about the shortcomings of earlier camouflage patterns. Back in 2012 the army decided to scrap its existing digital pattern camouflage combat uniforms and replace them with the more effective (according to the troops), but more expensive, MultiCam. Actually, MultiCam itself was not adopted; instead, a new pattern based on MultiCam was selected for the uniforms. This variant is called Scorpion W2, and the army gave it another, official name: Operational Camouflage Pattern (OCP). So if you hear someone talking about the new uniform being Scorpion W2 or MultiCam they are not entirely wrong. But the final, official term is OCP.
After 2001 both the army and the marines adopted new, digital camouflage pattern field uniforms. But in Afghanistan U.S. soldiers noted that the marine digital uniforms (called MARPAT, for Marine Pattern) were superior to the army UCP (Universal Camouflage Pattern). Both UCP and MARPAT were introduced at the same time (2002). From the beginning there was growing dissatisfaction with UCP, and it became a major issue because all the infantry have access to the Internet, where the constant clamor for something better than UCP eventually forced the army to do something.
This is ironic because UCP itself was another variant of MARPAT, but a poor one, at least according to soldiers in UCP who encountered marines wearing MARPAT. Even more ironic is that MARPAT is based on research originally done by the army. Thus some of the resistance to copying MARPAT stemmed from the army having to admit that the marines took the same research on digital camouflage and produced a superior pattern for combat uniforms.
May 28, 2015
At Vox.com, Julia Belluz and Steven Hoffman show how perverse incentives and human frailty contribute to the wasted efforts — and sometimes outright fraudulent methods — that get “scientific” results published. It’s getting so bad that “the editor of The Lancet … recently lamented, ‘Much of the scientific literature, perhaps half, may simply be untrue.'”:
From study design to dissemination of research, there are dozens of ways science can go off the rails. Many of the scientific studies that are published each year are poorly designed, redundant, or simply useless. Researchers looking into the problem have found that more than half of studies fail to take steps to reduce biases, such as blinding whether people receive treatment or placebo.
In an analysis of 300 clinical research papers about epilepsy — published in 1981, 1991, and 2001 — 71 percent were categorized as having no enduring value. Of those, 55.6 percent were classified as inherently unimportant and 38.8 percent as not new. All told, according to one estimate, about $200 billion — or the equivalent of 85 percent of global spending on research — is routinely wasted on flawed and redundant studies.
After publication, there’s the well-documented irreproducibility problem — the fact that researchers often can’t validate findings when they go back and run experiments again. Just last month, a team of researchers published the findings of a project to replicate 100 of psychology’s biggest experiments. They were only able to replicate 39 of the experiments, and one observer — Daniele Fanelli, who studies bias and scientific misconduct at Stanford University in California — told Nature that the reproducibility problem in cancer biology and drug discovery may actually be even more acute.
Indeed, another review found that researchers at Amgen were unable to reproduce 89 percent of landmark cancer research findings for potential drug targets. (The problem even inspired a satirical publication called the Journal of Irreproducible Results.)
So why aren’t these problems caught prior to publication of a study? Consider peer review, in which scientists send their papers to other experts for vetting prior to publication. The idea is that those peers will detect flaws and help improve papers before they are published as journal articles. Peer review won’t guarantee that an article is perfect or even accurate, but it’s supposed to act as an initial quality-control step.
Yet there are flaws in this traditional “pre-publication” review model: it relies on the goodwill of scientists who are increasingly pressed for time and may not spend the time required to properly critique a work, it’s subject to the biases of a select few, and it’s slow – so it’s no surprise that peer review sometimes fails. These factors raise the odds that even in the highest-quality journals, mistakes, flaws, and even fraudulent work will make it through. (“Fake peer review” reports are also now a thing.)
April 28, 2015
Truth, indeed, is something that is believed in completely only by persons who have never tried personally to pursue it to its fastnesses and grab it by the tail. It is the adoration of second-rate men — men who always receive it at second-hand. Pedagogues believe in immutable truths and spend their lives trying to determine them and propagate them; the intellectual progress of man consists largely of a concerted effort to block and destroy their enterprise. Nine times out of ten, in the arts as in life, there is actually no truth to be discovered; there is only error to be exposed. In whole departments of human inquiry it seems to me quite unlikely that the truth ever will be discovered. Nevertheless, the rubber-stamp thinking of the world always makes the assumption that the exposure of an error is identical with the discovery of the truth — that error and truth are simple opposites. They are nothing of the sort. What the world turns to, when it has been cured of one error, is usually simply another error, and maybe one worse than the first one. This is the whole history of the intellect in brief. The average man of to-day does not believe in precisely the same imbecilities that the Greek of the fourth century before Christ believed in, but the things that he does believe in are often quite as idiotic. Perhaps this statement is a bit too sweeping. There is, year by year, a gradual accumulation of what may be called, provisionally, truths — there is a slow accretion of ideas that somehow manage to meet all practicable human tests, and so survive. But even so, it is risky to call them absolute truths. All that one may safely say of them is that no one, as yet, has demonstrated that they are errors. Soon or late, if experience teaches us anything, they are likely to succumb too. The profoundest truths of the Middle Ages are now laughed at by schoolboys. The profoundest truths of democracy will be laughed at, a few centuries hence, even by school-teachers.
H.L. Mencken, “Footnote on Criticism”, Prejudices, Third Series, 1922.