Quotulatiousness

August 29, 2015

We need a new publication called The Journal of Successfully Reproduced Results

Filed under: Media, Science — Tags: , , , , — Nicholas @ 04:00

We depend on scientific studies to provide us with valid information on so many different aspects of life … it’d be nice to know that the results of those studies actually hold up to scrutiny:

One of the bedrock assumptions of science is that for a study’s results to be valid, other researchers should be able to reproduce the study and reach the same conclusions. The ability to successfully reproduce a study and find the same results is, as much as anything, how we know that its findings are true, rather than a one-off result.

This seems obvious, but in practice far more effort goes into original studies designed to produce interesting conclusions than into the less glamorous work of reproducing existing studies to see whether their results hold up.

Everyone wants to be part of the effort to identify new and interesting results, not the more mundane (and yet potentially career-endangering) work of reproducing the results of older studies:

Why is psychology research (and, it seems likely, social science research generally) so stuffed with dubious results? Let me suggest three likely reasons:

A bias towards research that is not only new but interesting: An interesting, counterintuitive finding that appears to come from good, solid scientific investigation gets a researcher more media coverage, more attention, more fame both inside and outside of the field. A boring and obvious result, or no result, on the other hand, even if investigated honestly and rigorously, usually does little for a researcher’s reputation. The career path for academic researchers, especially in social science, is paved with interesting but hard to replicate findings. (In a clever way, the Reproducibility Project gets around this issue by coming up with the really interesting result that lots of psychology studies have problems.)

An institutional bias against checking the work of others: This is the flipside of the first factor: Senior social science researchers often actively warn their younger colleagues — who are in many cases the best positioned to check older work — against investigating the work of established members of the field. As one psychology professor from the University of Southern California grouses to the Times, “There’s no doubt replication is important, but it’s often just an attack, a vigilante exercise.”

[…]

Small, unrepresentative sample sizes: In general, social science experiments tend to work with fairly small sample sizes — often just a few dozen people who are meant to stand in for everyone else. Researchers often have a hard time putting together truly representative samples, so they work with subjects they can access, which in a lot of cases means college students.

A couple of years ago, I linked to a story about the problem of using western university students as the default source of your statistical sample for psychological and sociological studies:

A notion that’s popped up several times in the last couple of months is that the easy access to willing test subjects (university students) introduces a strong bias to a lot of the tests, yet until recently the majority of studies disregarded the possibility that their test results were unrepresentative of the general population.

August 19, 2015

Canada looks very good on an international ranking you won’t hear about from our media

Filed under: Cancon, Economics — Tags: , , , — Nicholas @ 05:00

Our traditional media are quick to pump up the volume for “studies” that find that we rank highly on various rankings of cities or what-have-you, but here’s someone pointing out that Canada’s ranking is quite good, but it’s not the kind of measurement our media want to encourage or publicize:

First, we must identify a nation’s currently employed population. Next, all public sector employees are removed to obtain an adjusted productive workforce. It may seem objectionable that certain professions, like teachers, nurses in single-payer systems and firefighters, are classified as unproductive, but as our system is currently designed, the salaries of these individuals are not paid directly by their immediate beneficiaries, as in any other business, but through dispersed taxation.

Finally, the nation’s total population is divided by this productive workforce to identify the total number of individuals a worker is expected to support in his country. So that a worker’s own non-working spouse and children are not counted, the average household size is subtracted from this result to get the final number of individuals that a worker must support who are not part of their own voluntary household. In other words, how many total strangers is this individual providing for?
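The steps described above can be sketched in a few lines. The figures below are illustrative placeholders, not data from the article, and `implied_public_reliance` is simply a name chosen here for the metric:

```python
def implied_public_reliance(total_population, employed,
                            public_sector_employees, avg_household_size):
    """Strip public-sector workers from the employed count, divide total
    population by the remaining productive workforce, then subtract the
    average household size to exclude a worker's own voluntary dependents."""
    productive_workforce = employed - public_sector_employees
    supported_per_worker = total_population / productive_workforce
    return supported_per_worker - avg_household_size

# Illustrative numbers only: 10M people, 4M employed,
# 1M in the public sector, average household of 2.5.
result = implied_public_reliance(10_000_000, 4_000_000, 1_000_000, 2.5)
print(f"strangers supported per productive worker: {result:.2f}")
```

The interesting property is how sensitive the number is to the public-sector share: moving workers from the productive to the public column shrinks the denominator while leaving the population unchanged.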

The Implied Public Reliance metric does a far better job of predicting economic performance:

[Chart: Canada – implied public reliance]

Greece, the nation with the debt problem, is currently expecting each employed person to support 6.1 other people above and beyond their own families. This explains much of the pressure to work long hours and also explains the unstable debt loads. Since a single Greek worker can’t possibly hope to support what amounts to a complete baseball team on a single salary, the difference is covered by Greek public debt, debt that the underlying social system cannot hope to repay as the incentives are to maintain the current system of subsidies. To demonstrate how difficult it is to change these systems within a democratic society, we just have to look at the percentage of the population that is reliant on public subsidy.

[Chart: Canada – reliant on public funding]

Oh, and by the way … Greece is doomed:

The numbers imply that 67 percent of the population of Greece is wholly reliant on the Greek government to provide their incomes. With such a commanding supermajority, changing this system with the democratic process is impossible as the 67 percent have strong incentives to continue to vote for the other 33 percent — and also foreign entities — to cover their living expenses.

August 15, 2015

Science in the media – from “statistical irregularities” to outright fraud

Filed under: Media, Science — Tags: , , , — Nicholas @ 02:00

In The Atlantic, Bourree Lam looks at the state of published science and how scientists can begin to address the problems of bad data, statistical sleight-of-hand, and actual fraud:

In May, the journal Science retracted a much-talked-about study suggesting that gay canvassers might cause same-sex marriage opponents to change their opinion, after an independent statistical analysis revealed irregularities in its data. The retraction joined a string of science scandals, ranging from Andrew Wakefield’s infamous study linking a childhood vaccine and autism to the allegations that Marc Hauser, once a star psychology professor at Harvard, fabricated data for research on animal cognition. By one estimate, from 2001 to 2010, the annual rate of retractions by academic journals increased by a factor of 11 (adjusting for increases in published literature, and excluding articles by repeat offenders). This surge raises an obvious question: Are retractions increasing because errors and other misdeeds are becoming more common, or because research is now scrutinized more closely? Helpfully, some scientists have taken to conducting studies of retracted studies, and their work sheds new light on the situation.

“Retractions are born of many mothers,” write Ivan Oransky and Adam Marcus, the co-founders of the blog Retraction Watch, which has logged thousands of retractions in the past five years. A study in the Proceedings of the National Academy of Sciences reviewed 2,047 retractions of biomedical and life-sciences articles and found that just 21.3 percent stemmed from straightforward error, while 67.4 percent resulted from misconduct, including fraud or suspected fraud (43.4 percent) and plagiarism (9.8 percent).

Surveys of scientists have tried to gauge the extent of undiscovered misconduct. According to a 2009 meta-analysis of these surveys, about 2 percent of scientists admitted to having fabricated, falsified, or modified data or results at least once, and as many as a third confessed “a variety of other questionable research practices including ‘dropping data points based on a gut feeling,’ and ‘changing the design, methodology or results of a study in response to pressures from a funding source’”.

As for why these practices are so prevalent, many scientists blame increased competition for academic jobs and research funding, combined with a “publish or perish” culture. Because journals are more likely to accept studies reporting “positive” results (those that support, rather than refute, a hypothesis), researchers may have an incentive to “cook” or “mine” their data to generate a positive finding. Such publication bias is not in itself news — back in 1987, a study found that, compared with research trials that went unpublished, those that were published were three times as likely to have positive results. But the bias does seem to be getting stronger: a more recent study of 4,600 research papers found that from 1990 to 2007, the proportion of positive results grew by 22 percent.

August 3, 2015

Undependable numbers in the campus rape debate

Filed under: Politics, USA — Tags: , , — Nicholas @ 02:00

Megan McArdle on the recently revealed problems in one of the most frequently used set of statistics on campus rape:

[A] new article in Reason magazine suggests that this foundation is much shakier than most people working on this issue — myself included — may have assumed. (Full disclosure: the Official Blog Spouse is an editor at Reason.) The author, Linda M. LeFauve, looked carefully at the study, including conducting an interview with Lisak, and identified multiple issues:

  1. Lisak did not actually do original research. Instead, he pooled data from studies that were not necessarily aimed at collecting data on college students, or indeed, about rape. Only five questions on a multi-page questionnaire asked about sexual violence that respondents may have committed as adults, against other adults.
  2. The campus where this data was collected is a commuter campus. It’s not clear that everyone surveyed was a college student, but if so, the sample included many non-traditional students, with an average age of 26.5. Yet this data has been widely applied to traditional campuses, even though the two populations may differ greatly.
  3. The responses indicate that the men identified as rapists were extraordinarily more violent than the normal population: “The high rate of other forms of violence reported by the men in Lisak’s paper further suggests they are an atypical group. Of the 120 subjects Lisak classified as rapists, 46 further admitted to battery of an adult, 13 to physical abuse of a child, 21 to sexual abuse of a child, and 70 — more than half the group — to other forms of criminal violence. By itself, the nearly 20 percent who had sexually abused a child should signal that this is not a group from whom it is reasonable to generalize findings to a college campus.”
  4. The data did not cover acts committed while in college, but any acts of sexual violence; a number of them seem to have been committed in domestic violence situations.
  5. Lisak appears to have exaggerated how much follow-up he was able to do on the people he surveyed, at least to LeFauve: “Lisak told me that he subsequently interviewed most of them. That was a surprising claim, given the conditions of the survey and the fact that he was looking at the data produced long after his students had completed those dissertations; nor were there plausible circumstances under which a faculty member supervising a dissertation would interact directly with subjects. When I asked how he was able to speak with men participating in an anonymous survey for research he was not conducting, he ended the phone call.” Robby Soave of Reason, in a companion piece, also raises doubts about Lisak’s repeated assertions that he conducted extensive follow-ups with “most” of the respondents to what were mostly anonymous surveys.

In short, Lisak’s 2002 study is not a systematic survey of rape on campus; it is pooled data from surveys of people who happen to have been near a commuter campus on days when the surveys were being collected.

Before I go any further, let me note that I’m not saying that what these men did was not bad, or does not deserve to be punished. But if LeFauve is right, this study is basically worthless for shaping campus policies designed to stop rape.

July 27, 2015

The “Ferguson Effect”

Filed under: Law, Politics, USA — Tags: , , , — Nicholas @ 04:00

Radley Balko explains why the concerns and worries of police officials have been totally upheld by the rising tide of violence against police officers in the wake of the events in Ferguson … oh, wait. No, that’s not what happened at all:

The “Ferguson effect,” you might remember, is a phenomenon law-and-order types have been throwing around in an effort to blame an alleged increase in violence, both general violence and violence against police officers, on protesters and public officials who actually try to hold bad cops accountable.

The problem is that there’s no real evidence to suggest it exists. As I and others pointed out in June, while there have been some increases in crime in a few cities, including Baltimore and St. Louis County, there’s just no empirical data to support the notion that we’re in the middle of some national crime wave. And while there was an increase in killings of police officers in 2014, that came after a year in which such killings were at a historic low. And in any case, the bulk of killings of police officers last year came before the Ferguson protests in August and well before the nationwide Eric Garner protests in December.

Now, the National Law Enforcement Officers Memorial Fund has released its mid-year report on police officers’ deaths in 2015. Through the end of June, the number of officers killed by gunfire has dropped 25 percent from last year, from 24 to 18. Two of those incidents were accidental shootings (by other cops), so the number killed by hostile gunfire is 16. (As of today, the news is even better: Police deaths due to firearms through July 23 are down 30 percent from last year.)

[…]

A typical officer on a typical traffic stop is far more likely to die of a heart attack than to be shot by someone inside the stopped car.

It’s important to note here that we’re also talking about very small numbers overall. Police officer deaths have been in such rapid decline since the 1990s that when taken as percentages, even statistical noise in the raw figures can look like a large swing one way or the other. And if we look at the rate of officer fatalities (as opposed to the raw data), the degree to which policing has gotten safer over the last 20 years is only magnified.
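The point about statistical noise in small counts can be made concrete. Assuming, purely for illustration, that yearly officer deaths behave like a Poisson count with a mean around 50, ordinary fluctuation alone produces double-digit percentage swings:

```python
import math

# For a Poisson-like count, the standard deviation is the square root of
# the mean. With ~50 expected deaths per year, noise alone is sqrt(50) ~ 7,
# i.e. year-over-year swings of roughly 14% with no underlying change.
expected_deaths = 50
noise_sd = math.sqrt(expected_deaths)
typical_swing = noise_sd / expected_deaths
print(f"typical year-over-year swing from noise alone: ~{typical_swing:.0%}")
```

Note that the relative noise grows as the count shrinks, which is exactly why percentage changes in an era of historically low raw numbers can look dramatic.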

But the main takeaway from the first-half figures of 2015 is this: If we really were in the midst of a nationwide “Ferguson effect,” we’d expect to see attacks on police officers increasing. Instead, we’re seeing the opposite. That’s good news for cops. It’s bad news for people who want to blame protesters and reform advocates for the deaths of police officers.

June 22, 2015

Breaking – it’s a nation-wide crime wave (when you cherry-pick your data)

Filed under: Law, USA — Tags: , , — Nicholas @ 04:00

Daniel Bier looks at how Wall Street Journal contributor Heather Mac Donald concocted her data to prove that there’s a rising tide of crime across the United States:

Heather Mac Donald is back in the Wall Street Journal to defend her thesis that there is a huge national crime wave and that protesters and police reformers are to blame.

In her original piece, Mac Donald cherry-picked whatever cities and whatever categories of crime showed an increase so far this year, stacked up all the statistics that supported her idea, ignored all the ones that didn’t, and concluded we are suffering a “nationwide crime wave.”

Of course, you could do this kind of thing any year to claim that crime is rising. But it isn’t.

The fifteen largest cities have seen a 2% net decrease in murder so far this year. Eight saw a rise in murder rates, and seven saw an even larger decline.

Guess which cities Mac Donald mentioned and which she did not.

This is how you play tennis without the net. Or lines.

And in her recent post, buried seven paragraphs in, comes this admission: “It is true that violent crime has not skyrocketed in every American city — but my article didn’t say it had.”

But neither did her article acknowledge that murder in big cities was falling overall — in fact, it didn’t acknowledge that murder or violent crime was declining anywhere. Apparently, in her view, it is acceptable to present a distorted view of the data as long as it isn’t an outright lie.
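The cherry-picking Bier describes is easy to reproduce. With made-up city figures (not the actual crime data), reporting only the cities that rose tells one story while the aggregate tells another:

```python
# Hypothetical year-over-year changes in murders for ten cities.
changes = {"A": +12, "B": -30, "C": +5, "D": -8, "E": +3,
           "F": -15, "G": +7, "H": -20, "I": +2, "J": -6}

rises_only = {city: d for city, d in changes.items() if d > 0}
net = sum(changes.values())

print("cities with increases:", rises_only)   # the cherry-picked story
print("net change across all cities:", net)   # the full picture
```

Half the cities rose, so a column citing only those five is technically accurate, yet the net figure across all ten is a substantial decline.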

June 4, 2015

When did “scientific literature” transmogrify into bad science fiction?

Filed under: Bureaucracy, Media, Science — Tags: , , , — Nicholas @ 03:00

In The Lancet, Richard Horton discusses the problems of the scientific literature itself:

The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness. As one participant put it, “poor methods get results”. The Academy of Medical Sciences, Medical Research Council, and Biotechnology and Biological Sciences Research Council have now put their reputational weight behind an investigation into these questionable research practices. The apparent endemicity of bad research behaviour is alarming. In their quest for telling a compelling story, scientists too often sculpt data to fit their preferred theory of the world. Or they retrofit hypotheses to fit their data.

Journal editors deserve their fair share of criticism too. We aid and abet the worst behaviours. Our acquiescence to the impact factor fuels an unhealthy competition to win a place in a select few journals. Our love of “significance” pollutes the literature with many a statistical fairy-tale. We reject important confirmations.

Journals are not the only miscreants. Universities are in a perpetual struggle for money and talent, endpoints that foster reductive metrics, such as high-impact publication. National assessment procedures, such as the Research Excellence Framework, incentivise bad practices. And individual scientists, including their most senior leaders, do little to alter a research culture that occasionally veers close to misconduct.

May 7, 2015

Vancouver – where “happiness” doesn’t correlate with “quality of life”

Filed under: Cancon, China, Economics — Tags: , , , , , — Nicholas @ 04:00

Reducing the realities of life in a given city to a quick numerical value or data point on a chart requires you to ignore subtleties and local influences. Last month, Mark Collins linked to this article by Terry Glavin on what the “quality of life” numbers for Vancouver actually conceal:

If the Economist Intelligence Unit’s annual top 10 world cities rankings are what you’ve been relying on, you probably weren’t surprised last month when the global human resources outfit Mercer tagged Vancouver on its Quality of Living index as the best city in North America. But you might have been surprised this week when Statistics Canada released a study showing that, by a variety of indices, Vancouverites are the unhappiest people in Canada, falling dead last among the residents of 33 cities across the country.

We like to think of Lotusland’s grand metropolis as a place where people ski, sail, ride their bikes, swim, and hike though lush rainforests, all in the same day. But StatsCan’s annual survey of median household income in Canadian cities routinely puts Vancouver close to the bottom of the heap on that same list of 33 cities, and in January the Demographia International research institute ranked Vancouver second to last in a global survey of 378 cities on its Housing Affordability Survey.

Vancouver’s median household income in 2014 was $66,400, while the city’s median home price was 10.6 times higher: $704,800. Only Hong Kong fared worse, and just barely. Hong Kong also tops Vancouver, again only barely, as the property investment bolt-hole most favoured by Mainland China’s loot-laden millionaires. For years, we’ve been instructed to pretend that this is somehow mere coincidence. You can’t get away with talking to Hong Kongers like that, but Vancouverites take it sitting down.
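The "10.6 times" figure is Demographia's median multiple: median home price divided by median household income, using the numbers quoted above:

```python
# Demographia's affordability measure: median home price over median
# household income. Figures are the Vancouver numbers from the passage.
median_income = 66_400
median_price = 704_800

multiple = median_price / median_income
print(f"Vancouver median multiple: {multiple:.1f}")  # ~10.6
```

Demographia generally classes anything above 5.1 as “severely unaffordable”, which puts Vancouver at more than double that threshold.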

In happier places like Saguenay, Sudbury and Thunder Bay, there’s manufacturing, dairy farming, forestry and mining, and there’s a high degree of neighborliness and civility. But Vancouverites make most of their money from increases in the real estate value of whatever property they might be lucky to own. This tends to skew any real sense of hometown belonging, and nothing quite so rattles the cages as loose talk about the elaborate, federally-sanctioned swindle that has been keeping the bubble inflated all these years.

April 28, 2015

Another misleading statistical quirk about US corporate profits

Filed under: Business, Economics, USA — Tags: , — Nicholas @ 07:02

Earlier this month, Tim Worstall explained why the huffing and puffing over the increased share of corporate profits in the US GDP figures is misdirected:

There’s all sorts of Very Serious People running around shouting about how the capitalist plutocrats are taking ever greater shares of the US economy. This might even be true but one of the pieces of evidence that is relied upon is not actually telling us what people seem to be concluding it is. The reason is that we’re in an age of increased globalisation. This means that large American companies (we mostly think of the tech companies here, Apple, Google, Microsoft) are making large profits outside America. However, when we measure the profit share of the US economy we are measuring those offshore profits as being part of the US economy. But we’re not also measuring the labour income that goes along with the generation of those profits. It’s thus very misleading indeed to be using this profit share as an indication of the capitalist plutocrats rooking us all.

[…]

It’s possible that that rise in the profit share is actually nothing at all to do with the US domestic economy. If American corporations are now making much larger foreign profits than they used to then that could be the explanation. No, it makes no difference about whether they repatriate those profits, nor whether they pay tax on them: those foreign profits will be included in GNP either way. Note also that measuring the profit share this way is rather misleading. Yes, it does, obviously because this is the way we calculate it, mean that the profit share of GNP is rising. But we’re not including the labour income that goes along with the generation of those profits. That’s all off in the GNP (or GDP) of the countries where the sales and manufacturing are taking place. The only part of this economic activity that we’re including in US GNP is that profit margin.

[…]

Now to backpedal a little bit. I do not in fact insist that this is the entire explanation of the increased profit share. It wouldn’t surprise me if it were, but I don’t insist that it’s the entire explanation. I do however insist that it is part of the explanation. The sums being earned offshore by large American companies are large enough to show up as multiple percentage points of the US economy. So some of that change in the profit share is just because American companies are doing well elsewhere in the world. It’s got very little to almost no relevance to the American economy itself that they are. At least, not in the sense that it’s being used here, to talk about the declining labour share. Because these profits simply aren’t coming from the domestic American economy, they can’t have any influence upon the percentage of that American economy that labour gets.

This does, of course, have public policy implications. If the above is the whole and total reason for the fall in the labour share of GNP, then obviously we can raise the labour share of GNP just by telling American companies not to make profits in foreign countries. Which would be a completely ridiculous thing to do, of course. But the fact that this would indeed solve the perceived problem, and that it is a ridiculous thing to do, means that the worries over the problem itself are also ridiculous. So, we don’t actually need a public policy response to it.

April 21, 2015

The statistical anomalies of sex

Filed under: Health, Randomness — Tags: , — Nicholas @ 03:00

As the old saying has it, “everyone lies about sex“:

Straight men have had twice as many sexual partners, on average, as straight women. Sounds plausible, seeing that men supposedly think about sex every seven seconds. Except that it’s mathematically impossible: in a closed population with as many men as women (which is roughly the case), every heterosexual partnership adds one partner to each side’s tally, so the two averages must match. Someone is being dishonest, but who? And why? These questions, along with many others, are explored in Sex by Numbers, a new book by David Spiegelhalter, Winton Professor for the Public Understanding of Risk at the University of Cambridge.
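The impossibility is easy to verify with a toy simulation. However the partnerships are distributed, each one increments the male total and the female total by exactly one, so with equal group sizes the averages cannot diverge:

```python
import random

random.seed(0)
n = 1000                # men and women, equal numbers
men = [0] * n           # partner counts per man
women = [0] * n         # partner counts per woman

# Form 5000 random heterosexual partnerships; each adds one partner
# to some man's tally and one to some woman's.
for _ in range(5000):
    m, w = random.randrange(n), random.randrange(n)
    men[m] += 1
    women[w] += 1

print(sum(men) / n, sum(women) / n)  # identical averages: 5.0 and 5.0
```

The distributions can differ wildly (a few men might account for most partnerships), but the means are forced to agree, which is why the survey mismatch must come from reporting rather than behaviour.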

“Sex is a great topic,” says Spiegelhalter. “There’s lots of it going on, but we don’t know what goes on or how much of it, because most of the time it goes on behind closed doors. It’s a really difficult topic to investigate scientifically, and a real challenge for statistics.” Spiegelhalter’s aim is to get people interested in a critical approach to the numbers they hear about in the news and give them the tools to figure out if they can be believed. “It’s really a book about statistics, using sex as an example.”

Statistics about sex are not all equally good. Some, like the number of births in a given year, are cast-iron facts, but others are much harder to come by. The number of sexual partners is a good example. The mismatch above comes from the third National Survey of Sexual Attitudes and Lifestyles (Natsal), conducted between 2010 and 2012, in which men reported having had 14 sexual partners, on average, and women 7. Studies have suggested that women give lower numbers when they fear the survey isn’t entirely confidential, something that doesn’t seem to affect men (contrary to my expectation, it doesn’t induce them to exaggerate). So that’s one possible explanation for the mismatch: sadly, women still need to fear social stigma.

But there are other explanations too. One is that men (more than women) may have some of their sexual experience with sex workers. These aren’t included in the surveys, so their experiences are missing from the female tally. Another is that there are different attitudes as to what counts as a sexual partner. If a woman feels she’s been coerced by a man, for example, she may not want to count him.

April 18, 2015

Correlation, causation, and lobby money

Filed under: Business, Health, Media — Tags: , , — Nicholas @ 02:00

Tim Harford‘s latest column on tobacco, research, and lobby money:

It is said that there is a correlation between the number of storks’ nests found on Danish houses and the number of children born in those houses. Could the old story about babies being delivered by storks really be true? No. Correlation is not causation. Storks do not deliver children but larger houses have more room both for children and for storks.

This much-loved statistical anecdote seems less amusing when you consider how it was used in a US Senate committee hearing in 1965. The expert witness giving testimony was arguing that while smoking may be correlated with lung cancer, a causal relationship was unproven and implausible. Pressed on the statistical parallels between storks and cigarettes, he replied that they “seem to me the same”.

The witness’s name was Darrell Huff, a freelance journalist beloved by generations of geeks for his wonderful and hugely successful 1954 book How to Lie with Statistics. His reputation today might be rather different had the proposed sequel made it to print. How to Lie with Smoking Statistics used a variety of stork-style arguments to throw doubt on the connection between smoking and cancer, and it was supported by a grant from the Tobacco Institute. It was never published, for reasons that remain unclear. (The story of Huff’s career as a tobacco consultant was brought to the attention of statisticians in articles by Andrew Gelman in Chance in 2012 and by Alex Reinhart in Significance in 2014.)

Indisputably, smoking causes lung cancer and various other deadly conditions. But the problematic relationship between correlation and causation in general remains an active area of debate and confusion. The “spurious correlations” compiled by Harvard law student Tyler Vigen and displayed on his website (tylervigen.com) should be a warning. Did you realise that consumption of margarine is strongly correlated with the divorce rate in Maine?
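The stork mechanism, a confounder driving both variables, is simple to simulate. The generative model below is made up purely for illustration: house size influences both stork nests and children, neither causes the other, yet the two correlate strongly:

```python
import random

random.seed(1)
# Hypothetical model: house size drives both variables independently.
size = [random.gauss(0, 1) for _ in range(10_000)]
storks = [s + random.gauss(0, 0.5) for s in size]
children = [s + random.gauss(0, 0.5) for s in size]

def corr(x, y):
    """Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    return cov / (vx * vy) ** 0.5

print(corr(storks, children))  # strongly positive despite no causal link
```

Conditioning on the confounder (comparing houses of the same size) would make the correlation vanish, which is the standard test that distinguishes the stork case from a genuine causal link like smoking and cancer.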

April 16, 2015

Measuring productivity in the modern economy

Filed under: Economics — Tags: , , , — Nicholas @ 03:00

Tim Worstall on how our traditional economic measurements are less and less accurate for the modern economic picture:

… in the developed countries there’s a problem which seems to me obvious (and Brad Delong has even said that I’m right here which is nice). Which is that we’re just not measuring the output of the digital economy correctly. For much of that output is not in fact priced: what Delong has called Andreessenian goods (and Marc Andreessen himself calls Mokyrian). For example, we take Google’s addition to the economy to be the value of advertising that Google sells, not the value in use of the Google search engine. Similarly, Facebook is valued at its advertising sales, not whatever value people gain from being part of a social network of 1.3 billion people. In the traditional economy that consumer surplus can be roughly taken to be twice the sales value of the goods. For these Andreessenian goods the consumer surplus could be 20 times (Delong) or even 100 times (my own, very controversial and back of envelope calculations) that sales value.
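The scale of the claimed mismeasurement follows directly from the multipliers Worstall cites: roughly 2x sales value for traditional goods, versus 20x (Delong) or 100x (Worstall's own back-of-envelope figure) for Andreessenian goods. The revenue figure below is a made-up placeholder:

```python
# Hypothetical measured GDP contribution (ad revenue) of a free digital
# service, in $bn, with the passage's consumer-surplus multipliers applied.
ad_revenue = 50.0

for label, multiplier in [("traditional goods (2x)", 2),
                          ("Delong estimate (20x)", 20),
                          ("Worstall estimate (100x)", 100)]:
    surplus = ad_revenue * multiplier
    print(f"{label}: measured ${ad_revenue:.0f}bn, "
          f"unmeasured consumer surplus ~${surplus:.0f}bn")
```

Since measured productivity is a residual of measured output, every unmeasured dollar of surplus understates productivity growth by the same amount, which is Worstall's core point.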

We are therefore, in my view, grossly underestimating output. And since we measure productivity as the residual of output and resources used to create it we’re therefore also grossly underestimating productivity growth. We’re in error by using measurements of the older, physical, economy as our metric for the newer, digital, one.

In short, I simply don’t agree that growth is as slow as we are measuring it to be. Thus any predictions that rely upon taking our current “low” rate of growth as being a starting point must, logically, be wrong. And that also means that all the policy prescriptions that flow from such an analysis, that we must spend more on infrastructure, education, government support for innovation, must also be wrong.

April 2, 2015

How do we measure prosperity? Badly

Filed under: Economics, History, Technology — Tags: , — Nicholas @ 05:00

At Coyote Blog, Warren Meyer points out that we do not have a useful way to measure prosperity:

GDP growth and unemployment reduction are terrible measures. Just to give one example, these measures looked fabulous in WWII. But the average person living in the US had access to almost nothing — they couldn’t buy anything under rationing, they couldn’t travel for leisure, etc. GDP looked great because we were building stuff and then blowing it up, the economic equivalent of digging a hole and filling it in (but worse, because people were dying). And unemployment looked great because we had drafted everyone and sent them off to get shot.

[…]

How do we take into account that even if a person has the same income as someone in 1952, they are effectively wealthier in many ways due to access to medical procedures, travel, entertainment, electronic devices, etc?

Somehow we need to measure consumer capability — not just how much raw money one has but what can one do with the money? What is the horizon of possibilities? Deirdre McCloskey tends to eschew the term capitalism in favor of “market-tested innovation.” I think that is a pretty powerful description of our system. But if it is, we really are only measuring the impact of productivity and cost-reduction innovations. How do we measure the wealth impact of consumer-empowerment innovations like iPhones? Essentially, we don’t. Which, by the way, may be one reason our current crappy metrics say we have growing income inequality. With our current metrics, Steve Jobs’ increase in wealth is noted in the metrics, but the metrics don’t show the rest of us getting any wealthier by the fact that we can now have iPhones (or the myriad of competitors the iPhone spawned). The consumer surplus from iPhones undoubtedly dwarfs the money Jobs made, but it doesn’t show up in any wealth calculations.

March 19, 2015

There’s a deep-seated problem with how we measure the so-called “standard of living”

Filed under: Economics, History — Tags: , — Nicholas @ 04:00

My family are tired of hearing me say any variation on the expression “the past is a foreign country”, but I ring the changes on that phrase because it at least frames some of the problem we have in trying to comprehend just how much life has changed even within living memory, never mind more than a couple of generations ago. At the Cato Institute, Megan McArdle tries to avoid saying exactly those words, but the sense is still very much the same:

The generation that fought the Civil War paid an incredible price: one in four soldiers never returned home, and one in thirteen of those who did were missing one or more limbs. Were they better off than their parents’ generation? What about the generation that lived through the Great Depression, many of whom graduated into World War II? Does a new refrigerator and a Chevrolet in the driveway make up for decades and lives lost to the march of history? Or for the rapid increase in crime and civic disorder that marked the postwar boom? Then again, what about African Americans, who saw massive improvements in both their personal liberty and their personal income?

We should never pooh-pooh economic progress. As P.J. O’Rourke once remarked, I have one word for people who think that we live in a degenerate era fallen from a blessed past full of bounty and ease, and that word is “dentistry.” On the other hand, we should not reduce standard of living to (appropriately inflation adjusted) GDP numbers either. Living standards are complicated, and the tools we have to measure what is happening to them are almost absurdly crude. I certainly won’t achieve a satisfying measure in this brief essay. But we can, I think, begin to sketch the major ways in which things are better and worse for this generation. Hopefully we can also zero in on what makes the current era feel so deprived, and our distribution of income so worrisome.

My grandfather worked as a grocery boy until he was 26 years old. He married my grandmother on Thanksgiving because that was the only day he could get off. Their honeymoon consisted of a weekend visiting relatives, during which they shared their nuptial bed with their host’s toddler. They came home to a room in his parents’ house—for which they paid monthly rent. Every time I hear that marriage is collapsing because the economy is so bad, I think of their story.

By the standards of today, my grandparents were living in wrenching poverty. Some of this, of course, involves technologies that didn’t exist—as a young couple in the 1930s my grandparents had less access to health care than the most neglected homeless person in modern America, simply because most of the treatments we now have had not yet been invented. That is not the whole story, however. Many of the things we now have already existed; my grandparents simply couldn’t afford them. With some exceptions, such as microwave ovens and computers, most of the modern miracles that transformed 20th century domestic life already existed in some form by 1939. But they were out of the financial reach of most people.

If America today discovered a young couple where the husband had to drop out of high school to help his father clean tons of unsold, rotted produce out of their farm’s silos, and now worked a low-wage, low-skilled job, was living in a single room with no central heating and a single bathroom to share for two families, who had no refrigerator and scrubbed their clothes by hand in a washtub, who had serious conversations in low voices over whether they should replace or mend torn clothes, who had to share a single elderly vehicle or make the eight-mile walk to town … that family would be the subject of a three-part Pulitzer prize winning series on Poverty in America.

But in their time and place, my grandparents were a boring bourgeois couple, struggling to make ends meet as everyone did, but never missing a meal or a Sunday at church. They were excited about the indoor plumbing and electricity which had just been installed on his parents’ farm, and they were not too young to marvel at their amazing good fortune in owning an automobile. In some sense they were incredibly deprived, but there are millions of people in America today who are incomparably better off materially, and yet whose lives strike us (and them) as somehow objectively more difficult.

March 16, 2015

Comparing statistics from different sources

Filed under: Economics, Government, USA — Tags: , — Nicholas @ 03:00

In Forbes, Tim Worstall points out that you need to be careful in using statistics sourced from different organizations or agencies, as they don’t necessarily measure quite the same thing, despite the names being very similar:

There are certain sets of statistics put out (largely by the OECD nations like the US and so on) which we really can believe as saying exactly what is indicated upon the tin.

However, that isn’t the same as saying that we should be willing to just accept all such US or OECD statistical numbers. Take, for example (and this is one that I have banged on about for many a year now), the US and other OECD measures of poverty. The standard OECD measure of who is in poverty is income below 60% of median income, adjusted for housing costs and household size. This is a measure of inequality, not of actual poverty. It is also measured after all of the things that are done to reduce poverty: benefits, redistribution and all that. The US measure is, again, adjusted for household size but not for housing costs, and it is a measure of actual poverty. It is not related to average incomes but to what was a low income in the early 1960s, updated for inflation. More significantly, it is measured before almost all of the things done to try to alleviate poverty. The OECD poverty measure is thus a measure of how much (relative) poverty there is after the things done to reduce poverty, and the US standard number is a measure of how much absolute poverty there is before attempts to reduce it.
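The behavioural difference between the two lines is easy to show on a toy income distribution (all figures invented): a relative line moves with the median, so growth that lifts every income can leave relative poverty completely unchanged, while absolute poverty falls.

```python
# Toy comparison of a relative (OECD-style) vs an absolute (US-style)
# poverty line on an invented income distribution ($ thousands, post-tax).
import statistics

incomes = [8, 12, 15, 18, 22, 25, 30, 38, 55, 90]

relative_line = 0.6 * statistics.median(incomes)  # 60% of median: moves with incomes
absolute_line = 13                                # fixed threshold (invented here)

relative_poor = sum(1 for x in incomes if x < relative_line)
absolute_poor = sum(1 for x in incomes if x < absolute_line)

# Now suppose every income doubles: broad-based growth.
doubled = [2 * x for x in incomes]
relative_poor2 = sum(1 for x in doubled if x < 0.6 * statistics.median(doubled))
absolute_poor2 = sum(1 for x in doubled if x < absolute_line)

print(f"before: relative poor = {relative_poor}, absolute poor = {absolute_poor}")
print(f"after doubling: relative poor = {relative_poor2}, absolute poor = {absolute_poor2}")
```

Doubling every income leaves the relative count where it was (the line doubled too) but drives the absolute count to zero, which is exactly why the two statistics cannot be compared directly.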

There’s nothing particularly wrong with either measure. But we’ve got to be very careful to acknowledge the difference between the two before we go and do something stupid like directly comparing them: US poverty rates against the poverty rates of other OECD countries. Yet we do in fact see such comparisons being made all the time.

Another such little mistake of current interest is the way that we’re continually told that US average wages haven’t risen for decades. And it’s true, in one sense, that they haven’t. But wages aren’t actually what we should be looking at: total compensation from work is. And that’s been rising reasonably nicely over the same time period. The difference is in the benefits that we get over and above our wages from going to work: health care insurance, for example. This is more a matter of manipulation in the presentation of the statistics, so if you see someone bleating about “wages”, be very careful to check whether they are talking about what is actually of interest, compensation, or only about wages, which is a sign that they’re trying to mislead.
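The wage/compensation gap is simple arithmetic, shown here with entirely invented figures: a flat real hourly wage alongside rising benefits produces near-zero "wage growth" but substantial compensation growth over the same period.

```python
# Invented numbers: roughly flat real hourly wages, rising hourly benefits.
# "Wages" look stagnant while total compensation rises substantially.

years    = [1990, 2000, 2010, 2020]
wage     = [20.0, 20.2, 20.1, 20.3]   # real hourly wage, $ (hypothetical)
benefits = [4.0, 6.0, 8.5, 11.0]      # health insurance etc., $/hr (hypothetical)

for y, w, b in zip(years, wage, benefits):
    print(f"{y}: wage ${w:.2f} + benefits ${b:.2f} = compensation ${w + b:.2f}")

wage_growth = wage[-1] / wage[0] - 1
comp_growth = (wage[-1] + benefits[-1]) / (wage[0] + benefits[0]) - 1

print(f"wage growth over 30 years:         {wage_growth:.1%}")
print(f"compensation growth over 30 years: {comp_growth:.1%}")
```

On these made-up figures, wages grow about 1.5% over three decades while compensation grows about 30%; quoting only the first series tells a very different story from the second.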
