Quotulatiousness

July 24, 2013

In spite of all the overheated rhetoric, there’s good news about race and crime in the US

Filed under: Law, Media, USA — Tags: , , — Nicholas @ 09:58

Radley Balko looks behind the scripted talking points to get at the actual data they’re ignoring:

Civil rights leaders and progressive activists have cited Zimmerman’s acquittal and the proliferation of robust self-defense laws as evidence of a “war on black men” — or, similarly, that it’s now “open season on black men.” Meanwhile, Zimmerman supporters and many on the political right have used the case to bring up old discussions of black-on-black murders in places like Chicago, and to argue that violence in black America is spiraling out of control. Both positions are cynical, and both tend to pit black and white America against one another.

But both are also wrong on the facts.

First, about the alleged “war on black men.” The argument here is that laws like Florida’s “Stand Your Ground” are encouraging white vigilantism, and moving white people to shoot and kill black people at the slightest provocation. But there just isn’t any data to support the contention. Black homicides have been falling since the mid-1990s (as have all homicides). Moreover, according to a 2005 Bureau of Justice Statistics report, more than 90 percent of black murder victims are killed by other black people. And if we look at interracial murder, there are about twice as many black-on-white murders as the other way around, and that ratio has held steady for decades.

However, it also isn’t true that black America is growing increasingly violent. Again, black homicides, like all homicides, are in a steep, 20-year decline. In fact, the rates at which blacks both commit and are victims of homicide have shown sharper declines than those of whites. It’s true that Chicago has had an unusually violent last few years, but this is an anomaly among big American cities. The 2012 murder rate in Washington, D.C., for example, hit a 50-year low. Violent crime in New York and Los Angeles is also falling to levels we haven’t seen in decades.

[…]

To get to the more sensational conclusion, the article considers interracial homicide as a percentage of total homicides. And indeed, measured that way the “rate” of interracial murder has gone up. But it’s an odd way to measure. The vast, vast majority of murders are intraracial. And, as noted, those murders have been dropping considerably. The interracial murder rate has been dropping, too. According to the Scripps Howard review, the raw number of black-on-white and white-on-black murders combined was about the same in 2010 as it was in the early 1980s. But the United States population has grown considerably in that time, from 227 million in 1980, to 315 million today. So if you measure it the way all other crime is measured, the interracial murder rate has dropped, not increased.
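Balko’s per-capita point is simple arithmetic. The population figures (227 million in 1980, 315 million today) come from the excerpt above; the raw interracial murder count below is a stand-in, since only the fact that it stayed roughly constant matters. A minimal sketch:

```python
# Per-capita rate comparison: same raw count, larger population -> lower rate.
# Population figures are from the excerpt; N is hypothetical.

def rate_per_100k(count, population):
    """Crime rate the standard way: incidents per 100,000 people."""
    return count / population * 100_000

N = 1000  # hypothetical raw count, held equal in both years
rate_1980 = rate_per_100k(N, 227_000_000)
rate_today = rate_per_100k(N, 315_000_000)

print(f"1980 rate:  {rate_1980:.3f} per 100k")
print(f"today rate: {rate_today:.3f} per 100k")
print(f"decline: {(1 - rate_today / rate_1980) * 100:.0f}%")  # roughly 28% lower
```

The absolute value of N cancels out of the comparison: any constant raw count against a population that grew from 227 to 315 million yields a rate about 28% lower.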

July 17, 2013

Keep calm, and don’t panic about bee-pocalypse now

Filed under: Environment, Food, Media, Science — Tags: , , , , — Nicholas @ 08:17

You’ve heard about the mysterious colony collapse disorder (CCD) that has been devastating bee colonies across the world, right? This is serious, as bees are a very important part of the pollination of many crops. As you’ll know from many media reports, this is a food disaster unfolding before us and we’re all going to starve! Or, looking at the facts, perhaps not:

In a rush to identify the culprit of the disorder, many journalists have made exaggerated claims about the impacts of CCD. Most have uncritically accepted that continued bee losses would be a disaster for America’s food supply. Others speculate about the coming of a second “silent spring.” Worse yet, many depict beekeepers as passive, unimaginative onlookers that stand idly by as their colonies vanish.

This sensational reporting has confused rather than informed discussions over CCD. Yes, honey bees are dying in above average numbers, and it is important to uncover what’s causing the losses, but it hardly spells disaster for bees or America’s food supply.

Consider the following facts about honey bees and CCD.

For starters, US honey bee colony numbers are stable, and they have been since before CCD hit the scene in 2006. In fact, colony numbers were higher in 2010 than any year since 1999. How can this be? Commercial beekeepers, far from being passive victims, have actively rebuilt their colonies in response to increased mortality from CCD. Although average winter mortality rates have increased from around 15% before 2006 to more than 30%, beekeepers have been able to adapt to these changes and maintain colony numbers.
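The mechanism described above (losing more colonies each winter, but splitting survivors harder in spring) can be sketched as a toy model. The 15% and 30% mortality figures come from the excerpt; the colony total and the split rates are invented purely for illustration:

```python
# Toy model (not real USDA data): beekeepers offset higher winter mortality
# by splitting more surviving hives in spring, keeping the total stable.

def spring_count(colonies, winter_mortality, splits_per_survivor):
    """Colonies entering the next season after losses and rebuilding."""
    survivors = colonies * (1 - winter_mortality)
    return survivors * (1 + splits_per_survivor)

colonies = 2_500_000  # assumed starting total, rough order of magnitude

# Pre-2006 regime: ~15% winter losses, modest splitting (split rate assumed)
pre_ccd = spring_count(colonies, 0.15, 0.18)
# Post-2006 regime: ~30% winter losses, more aggressive splitting (assumed)
post_ccd = spring_count(colonies, 0.30, 0.43)

print(f"pre-CCD spring total:  {pre_ccd:,.0f}")
print(f"post-CCD spring total: {post_ccd:,.0f}")
```

Under both regimes the spring totals land within a fraction of a percent of each other, which is the point: higher mortality plus more rebuilding can leave the headline colony count flat.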

[…]

“The state of the honey bee population—numbers, vitality, and economic output — are the products of not just the impact of disease but also the economic decisions made by beekeepers and farmers,” economists Randal Rucker and Walter Thurman write in a summary of their working paper on the impacts of CCD. Searching through a number of economic measures, the researchers came to a surprising conclusion: CCD has had almost no discernible economic impact.

But you don’t need to rely on their study to see that CCD has had little economic effect. Data on colonies and honey production are publicly available from the USDA. Like honey bee numbers, US honey production has shown no pattern of decline since CCD was first detected. In 2010, honey production was 14% greater than it was in 2006. (To be clear, US honey production and colony numbers are lower today than they were 30 years ago, but as Rucker and Thurman explain, this gradual decline happened prior to 2006 and cannot be attributed to CCD).

H/T to Tyler Cowen for the link.

July 13, 2013

What is the real inflation rate?

Filed under: Economics, Politics, USA — Tags: , , , — Nicholas @ 10:11

The official US inflation rate is around 1% annually. That doesn’t seem quite right to a lot of people, who find themselves spending more money for the same goods:

… what Bernanke will never admit is that the official inflation rate is a total sham. The way that inflation is calculated has changed more than 20 times since 1978, and each time it has been changed the goal has been to make it appear to be lower than it actually is.

If the rate of inflation was still calculated the way that it was back in 1980, it would be about 8 percent right now and everyone would be screaming about the fact that inflation is way too high.
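Whatever one makes of the methodology claim, the size of the alleged gap compounds quickly. A quick sketch using the two rates quoted above (roughly 1% official versus the claimed 8% under the 1980 method):

```python
# Compounding the two quoted rates over a decade. The rates are from the
# excerpt; the 10-year horizon is an arbitrary illustration.
years = 10
official = 1.01 ** years  # price-level multiple under the ~1% official rate
shadow = 1.08 ** years    # under the claimed ~8% 1980-methodology rate
print(f"after {years} years: official x{official:.2f}, 1980-method x{shadow:.2f}")
```

At 1% a decade of inflation raises the price level about 10%; at 8% it more than doubles it, which is why the choice of methodology is so contentious.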

But instead, Bernanke can get away with claiming that inflation is “too low” because the official government numbers back him up.

Of course many of us already know that inflation is out of control without even looking at any numbers. We are spending a lot more on the things that we buy on a regular basis than we used to.

For example, when Barack Obama first entered the White House, the average price of a gallon of gasoline was $1.84. Today, the average price of a gallon of gasoline has nearly doubled. It is currently sitting at $3.49, but when I filled up my vehicle yesterday I paid nearly $4.00 a gallon.

And of course the price of gasoline influences the price of almost every product in the entire country, since almost everything that we buy has to be transported in some manner.

But that is just one example.

Our monthly bills also seem to keep growing at a very brisk pace.

Electricity bills in the United States have risen faster than the overall rate of inflation for five years in a row, and according to USA Today water bills have actually tripled over the past 12 years in some areas of the country.

No inflation there, eh?

July 11, 2013

Inane new “measurement” claims 1978 was the “best year ever”

Filed under: Economics, History — Tags: , , , , — Nicholas @ 08:39

An editorial in New Scientist (which I can’t quote from due to copyright concerns) claims that, by a new “measurement” called the Genuine Progress Indicator (GPI), human progress peaked in 1978 and it’s been all downhill since then. Anyone who actually lived through 1978 might struggle to recall just what — if anything — was better about 1978 than the following years, but the NS editors do point out that the GDP measure generally used to compare national economies doesn’t capture all the relevant details, while GPI includes what they refer to as social factors and economic costs, making it a better measuring tool for certain comparisons.

I can only assume that most of the economists who believe that 1978 was a peak year for the environment hadn’t been born at that time: pollution was a much more visible issue in North America and western Europe than at almost any time afterwards (and eastern Europe was far worse). Industry and government were taking steps to cut back some of the worst pollutants, but that process was really only just in its early stages: it took several years for the effects to start to show.

In the late 1970s, the world was a much dirtier, poorer, less egalitarian place than even a decade later: China and India were both much more authoritarian and had still not mastered the art of ensuring that there was enough food to feed everyone. Behind the Iron Curtain, Soviets and citizens of their client states in Europe were falling further and further behind the material well-being of westerners (and becoming much more aware of the deficit).

No matter how much emphasis you put on nebulous “social factors”, the fact that the world poverty rate — regardless of how you measure it — has been cut in half over the last twenty years, lifting literally billions of people out of near-starvation, makes an incredibly strong case that the world is doing better now than at any time since 1978. You can prattle on all you like about “rising inequality”, but for my money it’s a better world where the risk of people literally starving to death is that much closer to being eliminated. Give me an “unequal” world where even the poorest have enough food and clean water over an egalitarian world where billions starve, thanks very much.

July 4, 2013

“Buenos Aires […] is the headquarters for the central planning bad idea bus”

Filed under: Americas, Economics, Government — Tags: , , , , , — Nicholas @ 08:32

At the Sovereign Man blog, Simon Black discusses Argentina’s sad history of central planning failures:

The more interesting part about Buenos Aires, though, is that this place is the headquarters for the central planning bad idea bus.

Argentina’s President, Cristina Fernandez, continues to tighten her stranglehold over the nation’s economy and society.

This country is so abundant with natural resources, it should be immensely wealthy. And it was. At the turn of the 20th century, Argentina was one of the richest countries in the world.

Yet rather than adopting the market-oriented approaches taken by, say, Colombia and Chile, Argentina is following the model of Venezuela.

Cristina rules by decree here; there is very little legislative power. She may as well start wearing a crown.

Just in the last few years, she’s imposed capital controls. Media controls. Price controls. Export controls.

She’s seized pension funds. She fired a central banker who didn’t bend to her ‘print more money’ directives. She even filed criminal charges against economists who publish credible inflation figures, as opposed to the lies that her government releases.

Inflation here is completely out of control. The government figures say 10%, but the street level is several times that.

[. . .]

Being here in this laboratory of central planning makes a few things abundantly clear:

1) Printing money does not create wealth. If it did, Argentina would be one of the richest places in the world again.

2) All of these policies that are ‘for the benefit of the people’ almost universally end up screwing the people they claim to help.

Printing money creates nasty inflation. If you’re wealthy, it leads to asset bubbles, which can make you even wealthier. If you’re poor, you just get crushed by rising prices. Or worse – shortages (remember the recent Venezuelan toilet paper crisis?).

3) Desperation leads to even more desperation. The worse things get, the tighter government controls become… which makes things even worse. It’s a classic vicious circle.

Both the United States and pan-European governments display varying degrees of this model, with only a flimsy layer of international credibility separating them from the regime of Cristina.

So Argentina is really a perfect case study in things to come.

June 23, 2013

Wine tasting scores are bullshit

Filed under: Business, Media, Science, Wine — Tags: , , — Nicholas @ 11:49

In the Guardian, David Derbyshire takes the modern “science” of wine tasting to the woodshed:

… drawing on his background in statistics, Hodgson approached the organisers of the California State Fair wine competition, the oldest contest of its kind in North America, and proposed an experiment for their annual June tasting sessions.

Each panel of four judges would be presented with their usual “flight” of samples to sniff, sip and slurp. But some wines would be presented to the panel three times, poured from the same bottle each time. The results would be compiled and analysed to see whether wine testing really is scientific.

The first experiment took place in 2005. The last was in Sacramento earlier this month. Hodgson’s findings have stunned the wine industry. Over the years he has shown again and again that even trained, professional palates are terrible at judging wine.

“The results are disturbing,” says Hodgson from the Fieldbrook Winery in Humboldt County, described by its owner as a rural paradise. “Only about 10% of judges are consistent and those judges who were consistent one year were ordinary the next year.

“Chance has a great deal to do with the awards that wines win.”

These judges are not amateurs either. They read like a who’s who of the American wine industry from winemakers, sommeliers, critics and buyers to wine consultants and academics. In Hodgson’s tests, judges rated wines on a scale running from 50 to 100. In practice, most wines scored in the 70s, 80s and low 90s.

Results from the first four years of the experiment, published in the Journal of Wine Economics, showed a typical judge’s scores varied by plus or minus four points over the three blind tastings. A wine deemed to be a good 90 would be rated as an acceptable 86 by the same judge minutes later and then an excellent 94.
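A ±4-point spread is enough to shuffle the same bottle across award bands. The three scores below are the article’s example triple for one judge; the medal cutoffs are hypothetical, invented only to show the effect:

```python
# Same bottle, same judge, three blind pours (scores from the article).
# Medal bands below are made up for illustration, not any real competition's.
import statistics

scores = [90, 86, 94]

def medal(score):
    """Hypothetical award bands, for illustration only."""
    if score >= 94:
        return "gold"
    if score >= 90:
        return "silver"
    if score >= 86:
        return "bronze"
    return "no award"

print([medal(s) for s in scores])                  # ['silver', 'bronze', 'gold']
print(f"std dev: {statistics.stdev(scores):.1f}")  # 4.0
```

With a sample standard deviation of 4 points, whether a given pour lands on gold, silver, or bronze is largely a coin toss, which is Hodgson’s “chance has a great deal to do with the awards” point in miniature.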

Today’s headline is a slightly stronger version of one I ran in May: Is wine tasting bullshit? with this rather amusing caption:

A real wine review

Although that “real” wine “review” illustrates the verbal bullshit side of wine reviewing, the statistical analysis in Robert Hodgson’s tests rather undermines the claims to any kind of actual analysis in most or all wine reviewing.

I’ve said for years that for most people there is a range of wine prices that will satisfy their tastes without emptying their wallets — in Ontario, that range seems to be $14-$40. Pay less than that and you risk buying wine that really isn’t very good (although there are some underpriced gems even there); pay over $40 and you’re mostly paying extra for the “prestige”, since most of us wouldn’t be able to detect any flavour differences.

It’s interesting to see what kind of immediate environmental changes seem to be able to directly influence the scores given by reviewers:

More evidence that wine-tasting is influenced by context was provided by a 2008 study from Heriot-Watt University in Edinburgh. The team found that different music could boost tasters’ wine scores by 60%. Researchers discovered that a blast of Jimi Hendrix enhanced cabernet sauvignon while Kylie Minogue went well with chardonnay.

June 19, 2013

Even the Chinese statistics office couldn’t accept these numbers

Filed under: Bureaucracy, China, Economics, Government — Tags: , , — Nicholas @ 07:57

In the Wall Street Journal‘s ChinaRealtime section, an amusing story about a local Chinese government whose official statistics were so unrealistic that the central statistics office called them out on it:

It’s typically advisable not to accept Chinese economic data at face value – as even the country’s own premier will tell you. Figures on everything from inflation and industrial output to energy consumption and international trade often don’t seem to gel with observation and sometimes struggle to stack up when compared with other indicators.

How the figures are massaged and by whom is as much a secret as the real data itself. But in an unusual move, the National Bureau of Statistics – clearly frustrated with the lies, damn lies – has recently outed a local government it says was involved in a particularly egregious case of number fudging, providing rare insight into just how we’re being deceived.

According to a statement on the statistics bureau’s website dated June 14 (in Chinese), the economic development and technology information bureau of Henglan, a town in southern China’s Guangdong province, massively overstated the gross industrial output of large firms in the area.

[. . .]

The statistics bureau doesn’t say why Henglan inflated its industrial output numbers. But indications that a local economy is sagging could reflect poorly on the prospects for promotion of local officials, and China’s southern provinces have been particularly hard hit by the global slowdown in demand for the country’s exports. Factories have closed, moving inland and overseas in search of cheaper labor, denting local government revenues.

“When governments are looking to burnish their track record, that can put the local statistics departments in a very awkward situation,” said a commentary piece that ran Tuesday in the Economic Daily (in Chinese), a newspaper under the control of the State Council, China’s cabinet. The article said that one of the biggest obstacles to ensuring accurate data is that the agencies responsible for crunching the numbers aren’t independent from local authorities. Moreover, it argues that penalties for producing fake data were too mild to act as a deterrent.

May 28, 2013

Charles Joseph Minard died for our (infographic) sins

Filed under: History, Media, Russia — Tags: , , , — Nicholas @ 00:01

Do you remember seeing the amazingly informative diagram by Charles Joseph Minard on the statistical side of Napoleon’s disastrous march on Moscow:

Charles Joseph Minard's famous graph showing the decreasing size of the Grande Armée as it marches to Moscow (brown line, from left to right) and back (black line, from right to left) with the size of the army equal to the width of the line. Temperature is plotted on the lower graph for the return journey (Multiply Réaumur temperatures by 1¼ to get Celsius, e.g. −30 °R = −37.5 °C)

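The caption’s Réaumur-to-Celsius conversion factor, as a trivial one-liner sketch:

```python
# Réaumur to Celsius: multiply by 1.25, per the caption above.
def reaumur_to_celsius(degrees_r):
    return degrees_r * 1.25

print(reaumur_to_celsius(-30))  # -37.5, matching the caption's example
```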

That is what a really good infographic can be. But, as Tim Harford points out, they’re not all that good:

Camouflage usually means blending in. That wasn’t an option for the submarine-dodging battleships of a century ago, which advertised their presence against an ever-changing sea and sky with bow waves and smokestacks. And so dazzle camouflage was born, an abstract riot of squiggles and harlequin patterns. It wasn’t hard to spot a dazzle ship but the challenge for the periscope operator was quickly to judge a ship’s speed and direction before firing a torpedo on a ponderous intercept. Dazzle camouflage was intended to provoke misjudgments, and there is some evidence that it worked.

Now let’s talk about data visualisation, the latest fashion in numerate journalism, albeit one that harks back to the likes of Florence Nightingale. She was not only the most famous nurse in history but the creator of a beautiful visualisation technique, the “Coxcomb diagram”, and the first woman to be elected as a member of the Royal Statistical Society.

Data visualisation creates powerful, elegant images from complex data. It’s like good prose: a pleasure to experience and a force for good in the right hands, but also seductive and potentially deceptive. Because we have less experience of data visualisation than of rhetoric, we are naive, and allow ourselves to be dazzled. Too much data visualisation is the statistical equivalent of dazzle camouflage: striking looks grab our attention but either fail to convey useful information or actively misdirect us.

[. . .]

Those beautiful Coxcomb diagrams are no exception. They show the causes of mortality in the Crimean war, and make a powerful case that better hygiene saved lives. But Hugh Small, a biographer of Nightingale, argues that she chose the Coxcomb diagram in order to make exactly this case. A simple bar chart would have been clearer: too clear for Nightingale’s purposes, because it suggested that winter was as much of a killer as poor hygiene was. Nightingale’s presentation of data was masterful. It was also designed not to inform but to persuade. When we look at modern data visualisations, we should remember that.

May 18, 2013

The “most balanced gender studies textbook available”

Filed under: Media, Politics, USA — Tags: , , , , , — Nicholas @ 00:01

Cathy Young has some concerns with a popular gender studies textbook:

A few months ago, a post with a shocking claim about misogyny in America began to circulate on Tumblr, the social media site popular with older teens and young adults. It featured a scanned book page section stating that, according to “recent survey data,” when junior high school students in the Midwest were asked what they would do if they woke up “transformed into the opposite sex,” the girls showed mixed emotions but the boys’ reaction was straightforward: “‘Kill myself’ was the most common answer when they contemplated the possibility of life as a girl.” The original poster — whose comment was, “Wow” — identified the source as her “Sex & Gender college textbook,” The Gendered Society by Michael Kimmel.

The post quickly caught on with Tumblr’s radical feminist contingent: in less than three months, it was reblogged or “liked” by over 33,000 users. Some appended their own comments, such as, “Yeah, tell me again how misogyny ‘isn’t real‘ and men and boys and actually ‘like,’ ‘love‘ and ‘respect the female sex‘? This is how deep misogynistic propaganda runs… As Germaine Greer said, ‘Women have no idea how much men hate them.'”

Yet, as it turns out, the claim reveals less about men and misogyny than it does about gender studies and academic feminism.

I was sufficiently intrigued to check out Kimmel’s reference: a 1984 book called The Longest War: Sex Differences in Perspective by psychologists Carol Tavris and Carole Wade. The publication date was the first tipoff that the study’s description in the excerpt was not entirely accurate: the “recent” data had to be about thirty years old. Still, did American teenage boys in the early 1980s really hold such a dismal view of being female?

When I obtained a copy of The Longest War, I was shocked to discover that the claim was not even out of context: it seemed to have no basis at all, other than one comment among examples of negative reactions from younger boys (the survey included third- through twelfth-grade students, not just those in junior high). Published in 1983 by the Institute for Equality in Education, the study had some real fodder for feminist arguments: girls generally felt they would be better off as males while boys generally saw the switch as a disadvantage, envisioning more social restrictions and fewer career options (many responses seemed based more on stereotypes — e.g., husband-hunting as a girl’s main training for adulthood — than on 1980s reality). But that’s not nearly as dramatic as “I’d rather kill myself than be a girl.”

Update, 19 May: Welcome to all the visitors from Reddit. I think this is the first time one of my posts got linked from Reddit (and several thousand of you have dropped by in the last 24 hours). To mark the occasion, I’ve added a Reddit link to the Sharing options on all posts.

May 2, 2013

Cherrypicking the result you prefer from a recent Medicaid study

Filed under: Health, Media, USA — Tags: , , , — Nicholas @ 10:43

Megan McArdle explains why a recent study’s results may be much more important than you might gather from the way it’s been reported so far:

Bombshell news out of Oregon today: a large-scale randomized controlled trial (RCT) of what happens to people when they gain Medicaid eligibility shows no impact on objective measures of health. Utilization went up, out-of-pocket expenditure went down, and the frequency of depression diagnoses was lower. But on the three important health measures they checked that we can measure objectively — glycated hemoglobin, a measure of blood sugar levels; blood pressure; and cholesterol levels — there was no significant improvement.

I know: sounds boring. Glycated hemoglobin! I might as well be one of the adults on Charlie Brown going wawawawawawa . . . and you fell asleep, didn’t you?

But this is huge news if you care about health care policy — and given the huge national experiment we’re about to embark on, you’d better. Bear with me.

Some of the news reports I’ve seen so far are somewhat underselling just how major these results are.

“Study: Medicaid reduces financial hardship, doesn’t quickly improve physical health” says the Washington Post.

The Associated Press headline reads “Study: Depression rates for uninsured dropped with Medicaid coverage”

At the New York Times, it’s “Study Finds Expanded Medicaid Increases Health Care Use”

I think Slate is closer to the mark, though a bit, well, Slate-ish: “Bad News for Obamacare: A new study suggests universal health care makes people happier but not healthier.”

This study is a big, big deal. Let me explain why.

A layman’s guide to evaluating statistical claims

Filed under: Media, Politics, Science — Tags: , , — Nicholas @ 10:23

We’re awash with statistics, 43.2% of which seem to be made up on the spot (did you see what I did there?). Betsey Stevenson & Justin Wolfers offer some guidance on how non-statisticians should approach the numbers we’re presented with in the media:

So how can non-experts and policy makers separate the useful research from the dross? Allow us to offer six rules.

1. Focus on how robust a finding is, meaning that different ways of looking at the evidence point to the same conclusion. Do the same patterns repeat in many data sets, in different countries, industries or eras? Are the findings fragile, changing as one makes small changes in how phenomena are measured, and do the results depend on whether particularly influential observations are included? Thanks to Moore’s Law of increasing computing power, it has never been easier or cheaper to assess, test and retest an interesting finding. If the author hasn’t made a convincing case, then don’t be convinced.

2. Data mavens often make a big deal of their results being statistically significant, which is a statement that it’s unlikely their findings simply reflect chance. Don’t confuse this with something actually mattering. With huge data sets, almost everything is statistically significant. On the flip side, tests of statistical significance sometimes tell us that the evidence is weak, rather than that an effect is nonexistent. Remember, results can be useful even if they don’t meet significance tests. Sometimes questions are so important that we need to glean whatever meaning we can from available data. The best bad evidence is still more informative than no evidence.

3. Be wary of scholars using high-powered statistical techniques as a bludgeon to silence critics who are not specialists. If the author can’t explain what they’re doing in terms you can understand, then you shouldn’t be convinced. You wouldn’t be convinced by an analysis just because it was written in ancient Latin, so why be impressed by an abundance of Greek letters? Sophisticated statistical methods can be helpful, but they can also hide more than they reveal.

4. Don’t fall into the trap of thinking about an empirical finding as “right” or “wrong.” At best, data provide an imperfect guide. Evidence should always shift your thinking on an issue; the question is how far.

5. Don’t mistake correlation for causation. For instance, even after revisions and corrections, Reinhart and Rogoff have demonstrated that economic growth is typically slower when government debt is higher. But does high debt cause slow growth, or is slow growth in gross domestic product the cause of higher debt-to-GDP ratios? Or are there other important determinants, such as populist spending by a government looking to get re-elected, which is more likely when growth is slow and typically drives debt up?

6. Always ask “so what?” Are the factors that drove the observed negative correlation between debt and GDP likely to exist today, in the U.S.? Does it even make sense to speak of “the” relationship between debt and economic growth, when there are surely many such relationships: Governments borrowing simply to fund their re-election are likely harming growth, while those investing in much-needed public works can provide the foundation for growth. The “so what” question is about moving beyond the internal validity of a finding to asking about its external usefulness.
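Rule 2’s warning about huge data sets can be made concrete with a back-of-envelope z-statistic. The effect size and standard deviation below are made up; the point is only how the verdict flips as the sample grows:

```python
# A tiny, practically meaningless effect clears the conventional z > 1.96
# significance bar once the sample is big enough. All numbers are invented.
import math

effect = 0.01  # trivial difference in means (hypothetical units)
sd = 1.0       # assumed population standard deviation

for n in (1_000, 100_000, 10_000_000):
    z = effect / (sd / math.sqrt(n))  # standard error shrinks as sqrt(n)
    verdict = "significant" if z > 1.96 else "not significant"
    print(f"n={n:>10,}: z = {z:6.2f} -> {verdict}")
```

The same 0.01 effect is nowhere near significant at n = 1,000 and overwhelmingly “significant” at n = 10,000,000, which is why statistical significance should never be confused with something actually mattering.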

April 29, 2013

US domestic firearms sales continue to grow

Filed under: Business, USA — Tags: , , , , — Nicholas @ 12:38

In the Wall Street Journal, Tom Gara shows us the booming market for firearms since 2008:

Gun buyers have long demonstrated a tendency to stock up on weapons and ammunition ahead of possible changes to gun laws, and a so-called “Obama surge” in gun sales kicked off in the lead-up to Barack Obama’s first election victory in 2008.

There was a similar pick up in 2012 as a second Obama victory looked likely, and another rush on stores when it became clear the Obama administration would push for tighter gun control laws in the wake of the Sandy Hook school shooting last December.

In fact, the rush beginning in December has been high even by historic standards: the FBI conducted just under 2.8 million background checks on prospective gun buyers in December 2012, the highest number in any single month since records began in November 1998. That’s more than triple the number it was running in December 2002.

And the rush has continued through 2013 so far. Here’s the number of monthly background checks in the first three months of each year since 1998. This year looks set to be the busiest ever.

Jan-Mar gun sales 1998-2013

April 25, 2013

What we “know” as opposed to what is actually true

Filed under: Business, Football, Health, Law — Tags: , , , , — Nicholas @ 13:36

We all know the NFL is in serious trouble as more evidence comes out about the relationship between playing professional football and brain damage in later life. But what we know may not be true:

Dr. Everett Lehman, part of a team of government scientists who studied mortality rates for NFL retirees at the behest of the players’ union, discovered that the pros live longer than their male counterparts outside of the NFL. The scientists looked at more than 3,000 pension-vested NFL retirees and expected 625 deaths. They found only 334. “There has been this perception over a number of years of people dying at 55 on the average,” Dr. Lehman told me. “It’s just based on a faulty understanding of statistics.”

The scientists also learned that, contrary to conventional wisdom, NFL players commit suicide at a dramatically lower rate than the general male population. The suicides of Junior Seau, Dave Duerson, and Andre Waters don’t represent a trend but outliers that attract massive attention, and thereby massively distort the public’s perception. More typical was the death of Pat Summerall, who passed away quietly last week at 82 after a productive post-career career.

Indeed, a 2009 study by University of Michigan researchers reported that NFL retirees are far more likely to own a home, possess a college degree, and enjoy health insurance than their peers who never played in the league. The myth of the broke and broken-down athlete is just that: a myth. A few surely struggle after competition ceases; most apply their competitive natures to new endeavors.

It’s true that skill-position players on NFL rosters for five or more years faced elevated levels of deaths from Alzheimer’s, Lou Gehrig’s, and Parkinson’s disease. But some perspective is in order. Of the 3,439 retired athletes studied by Lehman’s group, fewer than a dozen succumbed to deaths directly attributable to these neurodegenerative killers. Had Parkinson’s killed one rather than the two retirees it did kill, for example, its rate would have been lower among players than among the general population.
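The fragility of that comparison is easy to see with simple rate arithmetic. The retiree counts below come from the excerpt; the general-population rate is a hypothetical placeholder (not from the source), chosen only to show how one death can flip the conclusion when the numerator is this small:

```python
# Rough rate arithmetic for the Parkinson's comparison above.
# Retiree figures are from the study as quoted; the baseline
# general-population rate is an assumed, illustrative value.

retirees = 3439
deaths_observed = 2       # Parkinson's deaths among the retirees studied
deaths_hypothetical = 1   # "had it killed one rather than two"

per_100k = 100_000
rate_observed = deaths_observed / retirees * per_100k
rate_hypothetical = deaths_hypothetical / retirees * per_100k
rate_general = 45.0       # assumed baseline per 100,000 (placeholder)

print(f"observed:     {rate_observed:.1f} per 100,000")
print(f"hypothetical: {rate_hypothetical:.1f} per 100,000")
print(f"baseline:     {rate_general:.1f} per 100,000 (assumed)")
```

With these numbers the observed rate (about 58 per 100,000) sits above the assumed baseline, while removing a single death drops it to about 29, well below it. Rates built on counts of one or two are simply too unstable to support sweeping claims either way.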

It’s quite possible the NFL is concerned (and ensuring that it is seen to be concerned) primarily because of the need to address public perceptions, rather than as a defensive move against future or ongoing legal challenges.

April 24, 2013

Copyright terms are almost certainly too long already

Filed under: Books, Business, Economics, Law, Media — Tags: , , , — Nicholas @ 11:59

At Techdirt, Mike Masnick makes the case for reducing the swollen length of time current copyrights are protected:

We’ve pointed a few times in the past to a chart from William Patry’s book, looking at how frequently copyright was renewed at the 28 year mark back when copyright (a) required registration and (b) required a “renewal” at 28 years to keep it another 28 years. The data is somewhat amazing:

[Chart: Copyright renewal rates, 1958-59]

As you can see, very few works were renewed after 28 years. Only movies, at 74%, are over the 50% mark; that just 35% of music and a mere 7% of books were renewed tells quite a story. It makes it quite clear that even the copyright holders see almost no value in their copyrights after a short period of time. It appears that the Bureau of Economic Analysis is coming to the same conclusion from a different angle. As Matthew Yglesias notes, as part of its effort to recalibrate how it calculates GDP, the BEA is considering money spent on the creation of content an “investment” in a capital good, which needs to be depreciated over the time period in which it is valuable. Frankly, I’m not convinced this is the smartest way to account for money spent on the creation of content, but either way, the BEA’s analysis provides some insight into the standard “economic life” of various pieces of content, which match up with the chart above in many ways.

QotD: Welcoming the DSM-V appropriately

Filed under: Books, Health, Humour, Quotations, Science — Tags: , , — Nicholas @ 00:01

The much-awaited arrival of DSM-5 (the fifth edition of the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders) should ensure that every human being is classed as insane. At this point we might be able to start again and consider what psychiatry is for. Genomics is keen to help in the effort by finding the loci that are associated with all sorts of mental disorders. Enter a huge population based study funded by the National Institute of Mental Health: “Our findings show that specific SNPs are associated with a range of psychiatric disorders of childhood onset or adult onset. In particular, variation in calcium-channel activity genes seems to have pleiotropic effects on psychopathology. These results provide evidence relevant to the goal of moving beyond descriptive syndromes in psychiatry, and towards a nosology informed by disease cause.” Hmm. I think that when authors have to use words like “pleiotropic” and “nosology” there is a high chance that they do not know what they are talking about. So before welcoming the marriage of genomics and psychiatry, let us remember that there is a strong history of madness on both sides.

Richard Lehman, “Richard Lehman’s journal review—22 April 2013”, BMJ Group blogs, 2013-04-22
