Quotulatiousness

November 14, 2015

The scandal of NCAA “graduation” rates

Filed under: Football, Sports — Nicholas @ 02:00

Gregg Easterbrook on the statistical sleight-of-hand that allows US universities to claim unrealistic graduation rates for their student athletes:

N.C.A.A. Graduation Rate Hocus-Pocus. [Hawaii coach Norm] Chow and [Maryland coach Randy] Edsall both made bona fide improvements to the educational quality of their college football programs, and both were fired as thanks. Edsall raised Maryland’s football graduation rate from 56 percent five years ago to 70 percent. Chow raised Hawaii’s football graduation rate from 29 percent five years ago to 50 percent.

At least that’s what the Department of Education says. According to the N.C.A.A., Hawaii graduates not 50 percent of its players but 70 percent, while Maryland graduates not 70 percent but 75 percent.

At work is the distinction between the Federal Graduation Rate, calculated by the Department of Education, and the Graduation Success Rate, calculated by the N.C.A.A. No other aspect of higher education has a graduation “success rate” — just a graduation rate. The N.C.A.A. cooks up this number to make the situation seem better than it is.

The world of the Graduation Success Rate is wine and roses: According to figures the N.C.A.A. released last week, 86 percent of N.C.A.A. athletes achieved “graduation success” in the 2014-2015 academic year. But “graduation success” is different from graduating; the Department of Education finds that 67 percent of scholarship athletes graduated in 2014-2015. (These dueling figures are for all scholarship athletes: Football and men’s basketball players generally are below the average, those in other sports generally above.)

Both the federal and N.C.A.A. calculations have defects. The federal figure scores only those who graduate from the college of their initial enrollment. The athlete who transfers and graduates elsewhere does not count in the federal metric.

The G.S.R., by contrast, scores as a “graduate” anyone who leaves a college in good standing, via transfer or simply giving up on school: There’s no attempt to follow up to determine whether athletes who leave graduate somewhere else. Not only is the N.C.A.A.’s graduation metric anchored in the absurd assumption that leaving a college is the same as graduating, but it can also reflect a double-counting fallacy. Suppose a football player starts at College A, transfers to College B and earns his diploma there. Both schools count him as a graduate under the G.S.R.
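To see how far apart the two yardsticks can drift, here's a toy calculation (my own sketch with invented numbers, following Easterbrook's description of the G.S.R. rather than the N.C.A.A.'s official formula):

```python
# A hypothetical cohort of 10 scholarship players at College A.
cohort = 10
graduated_here = 5          # earned a degree at College A
left_in_good_standing = 3   # transferred out, or simply quit while passing
left_in_bad_standing = 2    # flunked out

# Federal Graduation Rate: only degrees from the school of initial enrollment count.
fgr = graduated_here / cohort                            # 50%

# G.S.R. as described above: leaving in good standing scores like graduating.
gsr = (graduated_here + left_in_good_standing) / cohort  # 80%

print(f"Federal rate: {fgr:.0%}   Graduation Success Rate: {gsr:.0%}")
# And if one of those transfers later finishes at College B, he shows up in
# B's G.S.R. too: one diploma, two "graduation successes."
```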

[…]

Football players ought to graduate at a higher rate than students as a whole. Football scholarships generally pay for five years on campus plus summer school, and football scholarship holders never run out of tuition money, which is the most common reason students fail to complete college. Instead, at Ohio State and other money-focused collegiate programs, players graduate at a lower rate than students as a whole. To divert attention from this, the N.C.A.A. publishes its annual hocus-pocus numbers.

October 28, 2015

The WHO’s lack of clarity leads to sensationalist newspaper headlines (again)

Filed under: Health, Media, Science — Nicholas @ 05:00

The World Health Organization appears to exist primarily to give newspaper editors the excuse to run sensational headlines about the risk of cancer. This is not a repeat story from earlier years. Oh, wait. Yes it is. Here’s The Atlantic’s Ed Yong to de-sensationalize the recent scary headlines:

The International Agency for Research on Cancer (IARC), an arm of the World Health Organization, is notable for two things. First, they’re meant to carefully assess whether things cause cancer, from pesticides to sunlight, and to provide the definitive word on those possible risks.

Second, they are terrible at communicating their findings.

[…]

Group 1 is billed as “carcinogenic to humans,” which means that we can be fairly sure that the things here have the potential to cause cancer. But the stark language, with no mention of risks or odds or anything remotely conditional, invites people to assume that if they specifically partake of, say, smoking or processed meat, they will definitely get cancer.

Similarly, when Group 2A is described as “probably carcinogenic to humans,” it roughly translates to “there’s some evidence that these things could cause cancer, but we can’t be sure.” Again, the word “probably” conjures up the specter of individual risk, but the classification isn’t about individuals at all.

Group 2B, “possibly carcinogenic to humans,” may be the most confusing one of all. What does “possibly” even mean? Proving a negative is incredibly difficult, which is why Group 4 — “probably not carcinogenic to humans” — contains just one substance of the hundreds that IARC has assessed.

So, in practice, 2B becomes a giant dumping ground for all the risk factors that IARC has considered, and could neither confirm nor fully discount as carcinogens. Which is to say: most things. It’s a bloated category, essentially one big epidemiological shruggie. But try telling someone unfamiliar with this that, say, power lines are “possibly carcinogenic” and see what they take away from that.

Worse still, the practice of lumping risk factors into categories without accompanying description — or, preferably, visualization — of their respective risks practically invites people to view them as like-for-like. And that inevitably led to misleading headlines like this one in the Guardian: “Processed meats rank alongside smoking as cancer causes – WHO.”

October 15, 2015

S.L.A. Marshall, Dave Grossman, and the “man is naturally peaceful” meme

Filed under: Books, Cancon, History, Military, USA — Nicholas @ 03:00

The American military historian S.L.A. Marshall was perhaps best known for his book Men Against Fire: The Problem of Battle Command in Future War, where he argued that American military training was insufficient to overcome most men’s natural hesitation to take another human life, even in intense combat situations. Dave Grossman is a modern military author who draws many of his conclusions from Marshall’s initial work. Grossman’s case is presented in his book On Killing: The Psychological Cost of Learning to Kill in War and Society, which was reviewed by Robert Engen in an older issue of the Canadian Military Journal:

As a military historian, I am instinctively skeptical of any work or theory that claims to overturn all existing scholarship – indeed, overturn an entire academic discipline – in one fell swoop. In academic history, the field normally expands and evolves incrementally, based upon new research, rather than being completely overthrown periodically. While it is not impossible for such a revolution to take place and become accepted, extraordinary new research and evidence would need to be presented to back up these claims. Simply put, Grossman’s On Killing and its succeeding “killology” literature represent a potential revolution for military history, if his claims can stand up to scrutiny – especially the claim that throughout human history, most soldiers and people have been unable to kill one another.

I will be the first to acknowledge that Grossman has made positive contributions to the discipline. On Combat, in particular, contains wonderful insights on the physiology of combat that bear further study and incorporation within the discipline. However, Grossman’s current “killology” literature contains some serious problems, and there are some worrying flaws in the theories that are being preached as truth to the men and women of the Canadian Forces. Although much of Grossman’s work is credible, his proposed theories on the inability of human beings to kill one another, while optimistic, are not sufficiently reinforced to warrant uncritical acceptance. A reassessment of the value that this material holds for the Canadian military is necessary.

The evidence seems to indicate that, contrary to Grossman’s ideas, killing is a natural, if difficult, part of human behaviour, and that killology’s belief that soldiers and the population at large are only able to kill as part of programmed behaviour (or as a symptom of mental illness) hinders our understanding of the actualities of warfare. A flawed understanding of how and why soldiers can kill is no more helpful to the study of military history than it is to practitioners of the military profession. More research in this area is required, and On Killing and On Combat should be treated as the starting points, rather than the culmination, of this process.


September 5, 2015

The subtle lure of “research” that confirms our biases

Filed under: Health, Science — Nicholas @ 02:00

Megan McArdle on why we fall for bogus research:

Almost three years ago, Nobel Prize-winning psychologist Daniel Kahneman penned an open letter to researchers working on “social priming,” the study of how thoughts and environmental cues can change later, mostly unrelated behaviors. After highlighting a series of embarrassing revelations, ranging from outright fraud to unreproducible results, he warned:

    For all these reasons, right or wrong, your field is now the poster child for doubts about the integrity of psychological research. Your problem is not with the few people who have actively challenged the validity of some priming results. It is with the much larger population of colleagues who in the past accepted your surprising results as facts when they were published. These people have now attached a question mark to the field, and it is your responsibility to remove it.

At the time it was a bombshell. Now it seems almost delicate. Replication of psychology studies has become a hot topic, and on Thursday, Science published the results of a project that aimed to replicate 100 famous studies — and found that only about one-third of them held up. The others showed weaker effects, or failed to find the effect at all.

This is, to put it mildly, a problem. But it is not necessarily the problem that many people seem to assume, which is that psychology research standards are terrible, or that the teams that put out the papers are stupid. Sure, some researchers doubtless are stupid, and some psychological research standards could be tighter, because we live in a wide and varied universe where almost anything you can say is certain to be true about some part of it. But for me, the problem is not individual research papers, or even the field of psychology. It’s the way that academic culture filters papers, and the way that the larger society gets their results.

August 29, 2015

We need a new publication called The Journal of Successfully Reproduced Results

Filed under: Media, Science — Nicholas @ 04:00

We depend on scientific studies to provide us with valid information on so many different aspects of life … it’d be nice to know that the results of those studies actually hold up to scrutiny:

One of the bedrock assumptions of science is that for a study’s results to be valid, other researchers should be able to reproduce the study and reach the same conclusions. The ability to successfully reproduce a study and find the same results is, as much as anything, how we know that its findings are true, rather than a one-off result.

This seems obvious, but in practice, a lot more work goes into original studies designed to create interesting conclusions than into the rather less interesting work of reproducing studies that have already been done to see whether their results hold up.

Everyone wants to be part of the effort to identify new and interesting results, not the more mundane (and yet potentially career-endangering) work of reproducing the results of older studies:

Why is psychology research (and, it seems likely, social science research generally) so stuffed with dubious results? Let me suggest three likely reasons:

A bias towards research that is not only new but interesting: An interesting, counterintuitive finding that appears to come from good, solid scientific investigation gets a researcher more media coverage, more attention, more fame both inside and outside of the field. A boring and obvious result, or no result, on the other hand, even if investigated honestly and rigorously, usually does little for a researcher’s reputation. The career path for academic researchers, especially in social science, is paved with interesting but hard to replicate findings. (In a clever way, the Reproducibility Project gets around this issue by coming up with the really interesting result that lots of psychology studies have problems.)

An institutional bias against checking the work of others: This is the flipside of the first factor: Senior social science researchers often actively warn their younger colleagues — who are in many cases the best positioned to check older work — against investigating the work of established members of the field. As one psychology professor from the University of Southern California grouses to the Times, “There’s no doubt replication is important, but it’s often just an attack, a vigilante exercise.”

[…]

Small, unrepresentative sample sizes: In general, social science experiments tend to work with fairly small sample sizes — often just a few dozen people who are meant to stand in for everyone else. Researchers often have a hard time putting together truly representative samples, so they work with subjects they can access, which in a lot of cases means college students.
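It's easy to simulate how treacherous those few-dozen-person samples are. In the sketch below (mine, pure Python, no real data), the true effect is exactly zero, yet sampling noise alone hands roughly one "study" in twenty a striking-looking group difference:

```python
import random

random.seed(1)

def small_study(n=30):
    """Compare two groups of n drawn from the *same* population (true effect: zero)."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    return sum(a) / n - sum(b) / n

diffs = [small_study() for _ in range(1000)]
striking = sum(1 for d in diffs if abs(d) > 0.5)  # half a standard deviation apart
print(f"{striking / 10:.1f}% of null studies showed a 'striking' group difference")
```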

A couple of years ago, I linked to a story about the problem of using western university students as the default source of your statistical sample for psychological and sociological studies:

A notion that’s popped up several times in the last couple of months is that the easy access to willing test subjects (university students) introduces a strong bias to a lot of the tests, yet until recently the majority of studies disregarded the possibility that their test results were unrepresentative of the general population.

August 19, 2015

Canada looks very good on an international ranking you won’t hear about from our media

Filed under: Cancon, Economics, Greece — Nicholas @ 05:00

Our traditional media are quick to pump up the volume for “studies” that find that we rank highly on various rankings of cities or what-have-you, but here’s someone pointing out a measure on which Canada scores quite well, yet which isn’t the kind our media want to encourage or publicize:

First, we must identify a nation’s currently employed population. Next, all public sector employees are removed to obtain an adjusted productive workforce. It may be objectionable that certain professions, like teachers, nurses in single-payer systems, and firefighters, are classified as an unproductive workforce, but as our system is currently designed, the salaries of these individuals are not covered by the immediate beneficiaries like any other business but are paid through dispersed taxation methods.

Finally, this productive population is divided into the nation’s total population to identify the total number of individuals a worker is expected to support in his country. To remove bias toward non-working spouses and children, the average household size is subtracted from this result to get the final number of individuals that an individual must support that are not part of their own voluntary household. In other words, how many total strangers is this individual providing for?
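As best I can reconstruct it, the arithmetic being described runs like this (a sketch with placeholder inputs, not the article's actual data):

```python
def implied_public_reliance(total_population, employed, public_sector, avg_household):
    # Step 1: strip public-sector employees out of the workforce.
    productive_workforce = employed - public_sector
    # Step 2: how many people each productive worker supports overall.
    supported_per_worker = total_population / productive_workforce
    # Step 3: subtract the worker's own household, leaving only strangers.
    return supported_per_worker - avg_household

# Placeholder numbers purely to show the mechanics:
print(f"{implied_public_reliance(10_000_000, 3_000_000, 800_000, 2.5):.1f} strangers per worker")
```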

The Implied Public Reliance metric does a far better job of predicting economic performance:

[Chart: Canada - implied public reliance]

Greece, the nation with the debt problem, is currently expecting each employed person to support 6.1 other people above and beyond their own families. This explains much of the pressure to work long hours and also explains the unstable debt loads. Since a single Greek worker can’t possibly hope to support what amounts to a complete baseball team on a single salary, the difference is covered by Greek public debt, debt that the underlying social system cannot hope to repay as the incentives are to maintain the current system of subsidies. To demonstrate how difficult it is to change these systems within a democratic society, we just have to look at the percentage of the population that is reliant on public subsidy.

[Chart: Canada - share of population reliant on public funding]

Oh, and by the way … Greece is doomed:

The numbers imply that 67 percent of the population of Greece is wholly reliant on the Greek government to provide their incomes. With such a commanding supermajority, changing this system with the democratic process is impossible as the 67 percent have strong incentives to continue to vote for the other 33 percent — and also foreign entities — to cover their living expenses.

August 15, 2015

Science in the media – from “statistical irregularities” to outright fraud

Filed under: Media, Science — Nicholas @ 02:00

In The Atlantic, Bourree Lam looks at the state of published science and how scientists can begin to address the problems of bad data, statistical sleight-of-hand, and actual fraud:

In May, the journal Science retracted a much-talked-about study suggesting that gay canvassers might cause same-sex marriage opponents to change their opinion, after an independent statistical analysis revealed irregularities in its data. The retraction joined a string of science scandals, ranging from Andrew Wakefield’s infamous study linking a childhood vaccine and autism to the allegations that Marc Hauser, once a star psychology professor at Harvard, fabricated data for research on animal cognition. By one estimate, from 2001 to 2010, the annual rate of retractions by academic journals increased by a factor of 11 (adjusting for increases in published literature, and excluding articles by repeat offenders). This surge raises an obvious question: Are retractions increasing because errors and other misdeeds are becoming more common, or because research is now scrutinized more closely? Helpfully, some scientists have taken to conducting studies of retracted studies, and their work sheds new light on the situation.

“Retractions are born of many mothers,” write Ivan Oransky and Adam Marcus, the co-founders of the blog Retraction Watch, which has logged thousands of retractions in the past five years. A study in the Proceedings of the National Academy of Sciences reviewed 2,047 retractions of biomedical and life-sciences articles and found that just 21.3 percent stemmed from straightforward error, while 67.4 percent resulted from misconduct, including fraud or suspected fraud (43.4 percent) and plagiarism (9.8 percent).

Surveys of scientists have tried to gauge the extent of undiscovered misconduct. According to a 2009 meta-analysis of these surveys, about 2 percent of scientists admitted to having fabricated, falsified, or modified data or results at least once, and as many as a third confessed “a variety of other questionable research practices including ‘dropping data points based on a gut feeling,’ and ‘changing the design, methodology or results of a study in response to pressures from a funding source’”.

As for why these practices are so prevalent, many scientists blame increased competition for academic jobs and research funding, combined with a “publish or perish” culture. Because journals are more likely to accept studies reporting “positive” results (those that support, rather than refute, a hypothesis), researchers may have an incentive to “cook” or “mine” their data to generate a positive finding. Such publication bias is not in itself news — back in 1987, a study found that, compared with research trials that went unpublished, those that were published were three times as likely to have positive results. But the bias does seem to be getting stronger: a more recent study of 4,600 research papers found that from 1990 to 2007, the proportion of positive results grew by 22 percent.

August 3, 2015

Undependable numbers in the campus rape debate

Filed under: Politics, USA — Nicholas @ 02:00

Megan McArdle on the recently revealed problems in one of the most frequently used set of statistics on campus rape:

[A] new article in Reason magazine suggests that this foundation is much shakier than most people working on this issue — myself included — may have assumed. (Full disclosure: the Official Blog Spouse is an editor at Reason.) The author, Linda M. LeFauve, looked carefully at the study, including conducting an interview with Lisak, and identified multiple issues:

  1. Lisak did not actually do original research. Instead, he pooled data from studies that were not necessarily aimed at collecting data on college students, or indeed, about rape. Only five questions on a multi-page questionnaire asked about sexual violence that they may have committed as adults, against other adults.
  2. The campus where this data was collected is a commuter campus. It’s not clear that everyone surveyed was a college student, but if so, the sample included many non-traditional students, with an average age of 26.5. Yet this data has been widely applied to traditional campuses, even though the two populations may differ greatly.
  3. The responses indicate that the men identified as rapists were extraordinarily more violent than the normal population: “The high rate of other forms of violence reported by the men in Lisak’s paper further suggests they are an atypical group. Of the 120 subjects Lisak classified as rapists, 46 further admitted to battery of an adult, 13 to physical abuse of a child, 21 to sexual abuse of a child, and 70 — more than half the group — to other forms of criminal violence. By itself, the nearly 20 percent who had sexually abused a child should signal that this is not a group from whom it is reasonable to generalize findings to a college campus.”
  4. The data did not cover acts committed while in college, but any acts of sexual violence; a number of them seem to have been committed in domestic violence situations.
  5. Lisak appears to have exaggerated how much follow-up he was able to do on the people he surveyed, at least to LeFauve: “Lisak told me that he subsequently interviewed most of them. That was a surprising claim, given the conditions of the survey and the fact that he was looking at the data produced long after his students had completed those dissertations; nor were there plausible circumstances under which a faculty member supervising a dissertation would interact directly with subjects. When I asked how he was able to speak with men participating in an anonymous survey for research he was not conducting, he ended the phone call.” Robby Soave of Reason, in a companion piece, also raises doubts about Lisak’s repeated assertions that he conducted extensive follow-ups with “most” of the respondents to what were mostly anonymous surveys.

In short, Lisak’s 2002 study is not a systematic survey of rape on campus; it is pooled data from surveys of people who happen to have been near a commuter campus on days when the surveys were being collected.

Before I go any further, let me note that I’m not saying that what these men did was not bad, or does not deserve to be punished. But if LeFauve is right, this study is basically worthless for shaping campus policies designed to stop rape.

July 27, 2015

The “Ferguson Effect”

Filed under: Law, Politics, USA — Nicholas @ 04:00

Radley Balko explains why the concerns and worries of police officials have been totally upheld by the rising tide of violence against police officers in the wake of the events in Ferguson … oh, wait. No, that’s not what happened at all:

The “Ferguson effect,” you might remember, is a phenomenon law-and-order types have been throwing around in an effort to blame police brutality protesters, and the public officials who actually try to hold bad cops accountable, for an alleged increase in violence, both general violence and violence against police officers.

The problem is that there’s no real evidence to suggest it exists. As I and others pointed out in June, while there have been some increases in crime in a few cities, including Baltimore and St. Louis County, there’s just no empirical data to support the notion that we’re in the middle of some national crime wave. And while there was an increase in killings of police officers in 2014, that came after a year in which such killings were at a historic low. And in any case, the bulk of killings of police officers last year came before the Ferguson protests in August and well before the nationwide Eric Garner protests in December.

Now, the National Law Enforcement Officers Memorial Fund has released its mid-year report on police officers’ deaths in 2015. Through the end of June, the number of officers killed by gunfire has dropped 25 percent from last year, from 24 to 18. Two of those incidents were accidental shootings (by other cops), so the number killed by hostile gunfire is 16. (As of today, the news is even better: Police deaths due to firearms through July 23 are down 30 percent from last year.)

[…]

A typical officer on a typical stop is far more likely to die of a heart attack than to be shot by someone inside that car.

It’s important to note here that we’re also talking about very small numbers overall. Police officer deaths have been in such rapid decline since the 1990s that when taken as percentages, even statistical noise in the raw figures can look like a large swing one way or the other. And if we look at the rate of officer fatalities (as opposed to the raw data), the degree to which policing has gotten safer over the last 20 years is only magnified.
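That point about small raw numbers rewards a quick simulation. Below (my sketch, with an invented but constant rate), annual deaths are drawn from a Poisson distribution whose true rate never changes, and the year-over-year percentage swings look dramatic anyway:

```python
import math
import random

random.seed(2)

def poisson(lam):
    """One Poisson draw via Knuth's algorithm (keeps the sketch dependency-free)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while p > threshold:
        k += 1
        p *= random.random()
    return k - 1

years = [poisson(24) for _ in range(10)]   # true rate fixed at 24 deaths/year
swings = [f"{(b - a) / a:+.0%}" for a, b in zip(years, years[1:])]
print(years)
print(swings)
# Nothing about the underlying danger changed, yet 20-30% "surges" and
# "plunges" appear routinely.
```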

But the main takeaway from the first-half figures of 2015 is this: If we really were in the midst of a nationwide “Ferguson effect,” we’d expect to see attacks on police officers increasing. Instead, we’re seeing the opposite. That’s good news for cops. It’s bad news for people who want to blame protesters and reform advocates for the deaths of police officers.

June 22, 2015

Breaking – it’s a nation-wide crime wave (when you cherry-pick your data)

Filed under: Law, USA — Nicholas @ 04:00

Daniel Bier looks at how Wall Street Journal contributor Heather Mac Donald concocted her data to prove that there’s a rising tide of crime across the United States:

Heather Mac Donald is back in the Wall Street Journal to defend her thesis that there is a huge national crime wave and that protesters and police reformers are to blame.

In her original piece, Mac Donald cherry-picked whatever cities and whatever categories of crime showed an increase so far this year, stacked up all the statistics that supported her idea, ignored all the ones that didn’t, and concluded we are suffering a “nationwide crime wave.”

Of course, you could do this kind of thing any year to claim that crime is rising. But it isn’t.

The fifteen largest cities have seen a 2% net decrease in murder so far this year. Eight saw a rise in murder rates, and seven saw an even larger decline.

Guess which cities Mac Donald mentioned and which she did not.

This is how you play tennis without the net. Or lines.

And in her recent post, buried seven paragraphs in, comes this admission: “It is true that violent crime has not skyrocketed in every American city — but my article didn’t say it had.”

But neither did her article acknowledge that murder in big cities was falling overall — in fact, it didn’t acknowledge that murder or violent crime was declining anywhere. Apparently, in her view, it is acceptable to present a distorted view of the data as long as it isn’t an outright lie.
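The trick is easy to replicate. With invented per-city figures shaped like the ones above (eight cities up, seven down by more), reporting only the positive column manufactures a crime wave out of a net decline:

```python
# Invented year-over-year changes in murders for 15 large cities.
changes = [+12, +9, +7, +6, +5, +4, +3, +2,    # the eight that rose
           -11, -10, -9, -8, -7, -6, -5]       # the seven that fell further

print("Net change, all 15 cities:", sum(changes))          # -8: a decline
print("The 'crime wave' sample:", [c for c in changes if c > 0])
# Quote only the second line and any year becomes a crime wave.
```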

June 4, 2015

When did “scientific literature” transmogrify into bad science fiction?

Filed under: Bureaucracy, Media, Science — Nicholas @ 03:00

In The Lancet, Richard Horton discusses the problems of scientific research and publishing:

The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness. As one participant put it, “poor methods get results”. The Academy of Medical Sciences, Medical Research Council, and Biotechnology and Biological Sciences Research Council have now put their reputational weight behind an investigation into these questionable research practices. The apparent endemicity of bad research behaviour is alarming. In their quest for telling a compelling story, scientists too often sculpt data to fit their preferred theory of the world. Or they retrofit hypotheses to fit their data. Journal editors deserve their fair share of criticism too. We aid and abet the worst behaviours. Our acquiescence to the impact factor fuels an unhealthy competition to win a place in a select few journals. Our love of “significance” pollutes the literature with many a statistical fairy-tale. We reject important confirmations. Journals are not the only miscreants. Universities are in a perpetual struggle for money and talent, endpoints that foster reductive metrics, such as high-impact publication. National assessment procedures, such as the Research Excellence Framework, incentivise bad practices. And individual scientists, including their most senior leaders, do little to alter a research culture that occasionally veers close to misconduct.
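Horton's line about "significance" polluting the literature is simple to demonstrate. In this sketch (mine, not The Lancet's), every effect under study is truly zero, and the p < 0.05 filter alone stocks the journals with fairy-tales:

```python
import random

random.seed(3)

def null_study(n=20):
    """Two-group comparison where the true effect is exactly zero; returns a z score."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = sum(a) / n - sum(b) / n
    return diff / (2 / n) ** 0.5      # sigma is known to be 1 here

results = [null_study() for _ in range(1000)]
published = [z for z in results if abs(z) > 1.96]   # the p < 0.05 gate
print(f"{len(published)} of 1000 null studies cleared the significance bar")
# Those ~50 "findings" are pure noise, and they're precisely the ones that
# make it into print when journals reject boring confirmations.
```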

May 7, 2015

Vancouver – where “happiness” doesn’t correlate with “quality of life”

Filed under: Cancon, China, Economics — Nicholas @ 04:00

Reducing the realities of life in a given city to a quick numerical value or data point on a chart requires you to ignore subtleties and local influences. Last month, Mark Collins linked to this article by Terry Glavin on what the “quality of life” numbers for Vancouver actually conceal:

If the Economist Intelligence Unit’s annual top 10 world cities rankings are what you’ve been relying on, you probably weren’t surprised last month when the global human resources outfit Mercer tagged Vancouver on its Quality of Living index as the best city in North America. But you might have been surprised this week when Statistics Canada released a study showing that, by a variety of indices, Vancouverites are the unhappiest people in Canada, falling dead last among the residents of 33 cities across the country.

We like to think of Lotusland’s grand metropolis as a place where people ski, sail, ride their bikes, swim, and hike through lush rainforests, all in the same day. But StatsCan’s annual survey of median household income in Canadian cities routinely puts Vancouver close to the bottom of the heap on that same list of 33 cities, and in January the Demographia International research institute ranked Vancouver second to last in a global survey of 378 cities on its Housing Affordability Survey.

Vancouver’s median household income in 2014 was $66,400, while the city’s median home price was 10.6 times higher: $704,800. Only Hong Kong fared worse, and just barely. Hong Kong also tops Vancouver, again only barely, as the property investment bolt-hole most favoured by Mainland China’s loot-laden millionaires. For years, we’ve been instructed to pretend that this is somehow mere coincidence. You can’t get away with talking to Hong Kongers like that, but Vancouverites take it sitting down.

In happier places like Saguenay, Sudbury and Thunder Bay, there’s manufacturing, dairy farming, forestry and mining, and there’s a high degree of neighborliness and civility. But Vancouverites make most of their money from increases in the real estate value of whatever property they might be lucky enough to own. This tends to skew any real sense of hometown belonging, and nothing quite so rattles the cages as loose talk about the elaborate, federally-sanctioned swindle that has been keeping the bubble inflated all these years.

April 28, 2015

Another misleading statistical quirk about US corporate profits

Filed under: Business, Economics, USA — Nicholas @ 07:02

Earlier this month, Tim Worstall explained why the huffing and puffing over the increased share of corporate profits in the US GDP figures is misdirected:

There’s all sorts of Very Serious People running around shouting about how the capitalist plutocrats are taking ever greater shares of the US economy. This might even be true but one of the pieces of evidence that is relied upon is not actually telling us what people seem to be concluding it is. The reason is that we’re in an age of increased globalisation. This means that large American companies (we mostly think of the tech companies here, Apple, Google, Microsoft) are making large profits outside America. However, when we measure the profit share of the US economy we are measuring those offshore profits as being part of the US economy. But we’re not also measuring the labour income that goes along with the generation of those profits. It’s thus very misleading indeed to be using this profit share as an indication of the capitalist plutocrats rooking us all.

[…]

It’s possible that that rise in the profit share is actually nothing at all to do with the US domestic economy. If American corporations are now making much larger foreign profits than they used to then that could be the explanation. No, it makes no difference about whether they repatriate those profits, nor whether they pay tax on them: those foreign profits will be included in GNP either way. Note also that measuring the profit share this way is rather misleading. Yes, it does, obviously because this is the way we calculate it, mean that the profit share of GNP is rising. But we’re not including the labour income that goes along with the generation of those profits. That’s all off in the GNP (or GDP) of the countries where the sales and manufacturing are taking place. The only part of this economic activity that we’re including in US GNP is that profit margin.

[…]

Now to backpedal a little bit. I do not in fact insist that this is the entire explanation of the increased profit share. It wouldn’t surprise me if it was, but I don’t insist that it’s the entire explanation. I do however insist that it is part of the explanation. The sums being earned offshore by large American companies are large enough to show up as multiple percentage points of the US economy. So some of that change in the profit share is just because American companies are doing well elsewhere in the world. It’s got very little to almost no relevance to the American economy itself that they are. At least, not in the sense that it’s being used here, to talk about the declining labour share. Because these profits simply aren’t coming from the domestic American economy, they can’t have any influence upon the percentage of that American economy that labour gets.

This does, of course, have public policy implications. If the above is the whole and total reason for the fall in the labour share of GNP then obviously we can raise the labour share of GNP just by telling American companies not to make profits in foreign countries. Which would be a completely ridiculous thing to do, of course. But the fact that such a ridiculous policy would indeed solve this perceived problem means that the worries over the problem itself are also ridiculous. So, we don’t actually need a public policy response to it.
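Worstall's accounting point fits in a few lines of arithmetic. In the sketch below (invented magnitudes), domestic wages and domestic profits never move, yet the measured labour share of GNP falls simply because foreign profits grow:

```python
domestic_wages   = 60.0   # stays fixed throughout
domestic_profits = 20.0   # stays fixed throughout

def labour_share_of_gnp(foreign_profits):
    # GNP counts US firms' foreign profits, but not the foreign wages
    # paid to generate them.
    gnp = domestic_wages + domestic_profits + foreign_profits
    return domestic_wages / gnp

print(f"Foreign profits  0: labour share {labour_share_of_gnp(0):.1%}")   # 75.0%
print(f"Foreign profits 10: labour share {labour_share_of_gnp(10):.1%}")  # 66.7%
```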

April 21, 2015

The statistical anomalies of sex

Filed under: Health, Randomness — Nicholas @ 03:00

As the old saying has it, “everyone lies about sex”:

Straight men have had twice as many sexual partners, on average, as straight women. Sounds plausible, seeing that men supposedly think about sex every seven seconds. Except that it’s mathematically impossible: in a closed population with as many men as women (which is roughly the case) the averages should match up. Someone is being dishonest, but who? And why? These questions, along with many others, are explored in Sex by numbers, a new book by David Spiegelhalter, Winton Professor for the Public Understanding of Risk at the University of Cambridge.

“Sex is a great topic,” says Spiegelhalter. “There’s lots of it going on, but we don’t know what goes on or how much of it, because most of the time it goes on behind closed doors. It’s a really difficult topic to investigate scientifically, and a real challenge for statistics.” Spiegelhalter’s aim is to get people interested in a critical approach to the numbers they hear about in the news and give them the tools to figure out if they can be believed. “It’s really a book about statistics, using sex as an example.”

Statistics about sex are not all equally good. Some, like the number of births in a given year, are cast-iron facts, but others are much harder to come by. The number of sexual partners is a good example. The mismatch above comes from the third National Survey of Sexual Attitudes and Lifestyles (Natsal), conducted between 2010 and 2012, in which men reported having had 14 sexual partners, on average, and women 7. Studies have suggested that women give lower numbers when they fear the survey isn’t entirely confidential, something that doesn’t seem to affect men (contrary to my expectation, it doesn’t induce them to exaggerate). So that’s one possible explanation for the mismatch: sadly, women still need to fear social stigma.

But there are other explanations too. One is that men (more than women) may have some of their sexual experience with sex workers. These aren’t included in the surveys, so their experiences are missing from the female tally. Another is that there are different attitudes as to what counts as a sexual partner. If a woman feels she’s been coerced by a man, for example, she may not want to count him.
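The "mathematically impossible" point at the top is just double counting, and a toy population makes it obvious (my sketch, invented pairings): every heterosexual partnership adds one to some man's tally and one to some woman's, so with equal numbers of men and women the true averages are forced to be equal.

```python
# Four men, four women, and every heterosexual partnership recorded once.
partnerships = [("m1", "w1"), ("m1", "w2"), ("m2", "w1"), ("m3", "w3")]

n_men = n_women = 4
male_total   = len(partnerships)   # each pair adds 1 to exactly one man's tally...
female_total = len(partnerships)   # ...and 1 to exactly one woman's

print(male_total / n_men, female_total / n_women)   # 1.0 1.0 -- always identical
# A surveyed 14-vs-7 gap therefore has to come from misreporting or from
# partners outside the sampled population, not from actual behaviour.
```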

April 18, 2015

Correlation, causation, and lobby money

Filed under: Books, Business, Health — Nicholas @ 02:00

Tim Harford‘s latest column on tobacco, research, and lobby money:

It is said that there is a correlation between the number of storks’ nests found on Danish houses and the number of children born in those houses. Could the old story about babies being delivered by storks really be true? No. Correlation is not causation. Storks do not deliver children but larger houses have more room both for children and for storks.

This much-loved statistical anecdote seems less amusing when you consider how it was used in a US Senate committee hearing in 1965. The expert witness giving testimony was arguing that while smoking may be correlated with lung cancer, a causal relationship was unproven and implausible. Pressed on the statistical parallels between storks and cigarettes, he replied that they “seem to me the same”.

The witness’s name was Darrell Huff, a freelance journalist beloved by generations of geeks for his wonderful and hugely successful 1954 book How to Lie with Statistics. His reputation today might be rather different had the proposed sequel made it to print. How to Lie with Smoking Statistics used a variety of stork-style arguments to throw doubt on the connection between smoking and cancer, and it was supported by a grant from the Tobacco Institute. It was never published, for reasons that remain unclear. (The story of Huff’s career as a tobacco consultant was brought to the attention of statisticians in articles by Andrew Gelman in Chance in 2012 and by Alex Reinhart in Significance in 2014.)

Indisputably, smoking causes lung cancer and various other deadly conditions. But the problematic relationship between correlation and causation in general remains an active area of debate and confusion. The “spurious correlations” compiled by Harvard law student Tyler Vigen and displayed on his website (tylervigen.com) should be a warning. Did you realise that consumption of margarine is strongly correlated with the divorce rate in Maine?
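For anyone who wants to see how little a high correlation proves, here's a sketch with invented margarine-and-divorce style data: two series that merely drift downward together correlate almost perfectly.

```python
# Two invented, causally unrelated series that both decline over a decade.
margarine = [8.2, 7.0, 6.5, 5.3, 5.2, 4.0, 4.6, 4.5, 4.2, 3.7]
divorces  = [5.0, 4.7, 4.6, 4.4, 4.3, 4.1, 4.2, 4.2, 4.2, 4.1]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(f"r = {pearson(margarine, divorces):.2f}")   # ~0.99, and utterly meaningless
# A shared downward trend, not a shared cause: storks and babies all over again.
```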
