Quotulatiousness

March 16, 2015

Comparing statistics from different sources

Filed under: Economics, Government, USA — Tags: , — Nicholas @ 03:00

In Forbes, Tim Worstall points out that you need to be careful in using statistics sourced from different organizations or agencies, as they don’t necessarily measure quite the same thing, despite the names being very similar:

There are certain sets of statistics put out (largely by the OECD nations like the US and so on) which we really can believe as saying exactly what is indicated upon the tin.

However, that isn’t the same as saying that we should be willing to just accept all such US or OECD statistical numbers. Take, for example (and this is one that I have banged on about for many a year now), the US and other OECD measures of poverty. The standard OECD measure of who is in poverty is below 60% of median income, adjusted for housing costs and household size. This is a measure of inequality, not actual poverty. It is also measured after all of the things that are done to reduce poverty: benefits, redistribution and all that. The US measure is, again, adjusted for household size but not for housing costs, and it is a measure of actual poverty. It is not related to average incomes but to what was a low income in the early 1960s, updated for inflation. And more significantly, it is measured before almost all of the things done to try to alleviate poverty. The OECD poverty measure is thus a measure of how much (relative) poverty there is after the things done to reduce poverty, and the US standard number is a measure of how much absolute poverty there is before attempts to reduce poverty.

There’s nothing particularly wrong with either measure. But we’ve got to be very careful in acknowledging the difference between the two before we go and do something stupid like directly compare them, US poverty rates against the poverty rates of other OECD countries. Yet we do in fact see such comparisons being made all the time.

Another such little mistake of current interest is the way that we’re continually told that US average wages haven’t risen for decades. And it’s true, in one sense, that they haven’t. But wages aren’t actually what we should be looking at: total compensation from work is. And that’s been rising reasonably nicely over that same time period. The difference is in the benefits that we get over and above our wages from going to work: that health care insurance, for example. This is more a matter of manipulation in the presentation of the statistics, and if you see someone bleating about “wages”, be very careful to check whether they are talking about what is of interest (compensation) or about wages, which is a sign that they’re trying to mislead.

February 21, 2015

QotD: Campbell’s Law

Filed under: Business, Quotations — Tags: , , — Nicholas @ 01:00

The most common problem is that all these new systems — metrics, algo­rithms, automated decisionmaking processes — result in humans gaming the system in rational but often unpredictable ways. Sociologist Donald T. Campbell noted this dynamic back in the ’70s, when he articulated what’s come to be known as Campbell’s law: “The more any quantitative social indicator is used for social decision-making,” he wrote, “the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.”

On a managerial level, once the quants come into an industry and disrupt it, they often don’t know when to stop. They tend not to have decades of institutional knowledge about the field in which they have found themselves. And once they’re empowered, quants tend to create systems that favor something pretty close to cheating. As soon as managers pick a numerical metric as a way to measure whether they’re achieving their desired outcome, everybody starts maximizing that metric rather than doing the rest of their job — just as Campbell’s law predicts.

Felix Salmon, “Why Quants Don’t Know Everything”, Wired, 2014-01-14

January 22, 2015

China (barely) misses growth target … as if we can trust their numbers anyway

Filed under: China, Economics, Government — Tags: , , , — Nicholas @ 04:00

Ah, well, I haven’t ridden this old hobby horse for a while, so let’s just let Tim Worstall explain why, this time, we might be able to get a bit of perspective from the otherwise unreliable official Chinese government economic figures:

Many observers have been slightly sceptical of Chinese GDP numbers for some years now. Regional GDP numbers don’t seem to quite match with other regional numbers (say, oil consumption, other proxies for economic activity) and national numbers don’t necessarily reflect the sum of all of those regional numbers either. There’s absolutely no doubt at all that the place has been getting richer but whether quite so much or quite in the manner being reported is another matter. And then there’s another group of observers (this one including myself) who have some experience of how communists report economic numbers. There’s a plan, the Communist Party is in charge of executing that plan and, amazingly, the plan is always reported to have either worked or been exceeded. Anything less would reflect badly on said Communist Party. As I’ve also been exposed to the old Soviet accounting systems I’m more sceptical than most on this point.

So, there’s that slight worry that a slowing China (or one not growing at the former breakneck pace perhaps) will also lower growth in other countries. We’re pretty sure that’s going to happen. But we’ve also got this other thing to ponder. If the Communist Party is allowing the reporting of numbers that don’t meet the plan then what’s going on with that?

Is this some sea change in the management of the numbers? They’re actually reporting the correct numbers? Or are those suspected massages of the numbers still going on but the underlying reality is so bad that they just couldn’t get up to the planned target? This is, I agree, all wild surmise. But it is a surprise that the numbers came in below target because that’s just not what we’ve come to expect in such a political system. And that could be very bad news indeed.

December 10, 2014

US child poverty is bad … but nowhere near as bad as they say

Filed under: Media, USA — Tags: , , , , — Nicholas @ 00:04

Tim Worstall debunks a headline statistic from earlier this month:

We’ve a new report out from the Mailman School of Public Health telling us that in some urban parts of the US child poverty is up at the unbelievable rates of 40, even 50% or more. The problem with this claim is that it’s simply not true. Apparently the researchers aren’t quite au fait with how poverty is both defined and alleviated in the US. Which is, when you think about it, something of a problem for those who decide to present us with statistics about child poverty.

[…]

Everyone else [in the world] (as well as using a relative poverty standard, usually below 60% of median earnings adjusted for family size) measures poverty after the effects of the tax and benefits systems on alleviating poverty. So, in my native UK if you’re poor you might get some cash payments (say, unemployment pay), some tax credits, help with your housing costs (housing benefit we call it), reduced property taxes (council tax credit) and so on. Whether you are poor or not is defined as being whether you are still under that poverty level after the effects of all of those attempts to alleviate poverty.

In the US things are rather different. It’s an absolute standard of income (set in the 1960s and upgraded only for inflation, not median incomes, since) but it counts only market income plus direct cash transfers to the poor before measuring against that standard. Thus, when we measure the US poor we do not include the EITC (equivalent of those UK tax credits, indeed our UK ones were copied from the US), we do not include Section 8 vouchers (housing benefit), Medicaid, we don’t even include food stamps. Because the US measure of poverty simply doesn’t include the effects of benefits in kind and through the tax system.

The US measure therefore isn’t the number of children living in poverty. It’s the number of children who would be in poverty if there wasn’t this system of government alleviation of poverty. When we do actually take into account what is done to alleviate child poverty we find that it’s really some 2-3% of US children who live in poverty. Yes, that low: the US welfare state is very much child orientated.

(Emphasis mine)

November 14, 2014

Either kink is now pretty much mainstream … or Quebec is a hotbed of kinksters

Filed under: Cancon, Health — Tags: , , , , , — Nicholas @ 07:24

In Reason, Elizabeth Nolan Brown reviews the findings of a recent survey on what kind of kinks are no longer considered weird or unusual (because so many people fantasize about ’em or are actively partaking of ’em):

Being sexually dominated. Having sex with multiple people at once. Watching someone undress without their knowledge. These are just a few of the totally normal sexual fantasies uncovered by recent research published in the Journal of Sexual Medicine. The overarching takeaway from this survey of about 1,500 Canadian adults is that sexual kink is incredibly common.

While plenty of research has been conducted on sexual fetishes, less is known about the prevalence of particular sexual desires that don’t rise to the level of pathological (i.e., don’t harm others or interfere with normal life functioning and aren’t a requisite for getting off). “Our main objective was to specify norms in sexual fantasies,” said lead study author Christian Joyal. “We suspected there are a lot more common fantasies than atypical fantasies.”

Joyal’s team surveyed 717 Québécois men and 799 women, with a mean age of 30. Participants ranked 55 different sexual fantasies, and also wrote in their own. Each fantasy was then rated as statistically rare, unusual, common, or typical.

Of course, the statistics also show where men and women differ in some areas:

Notably, men were more likely than women to say they wanted their sexual fantasies to become sexual realities. “Approximately half of women with descriptions of submissive fantasies specified that they would not want the fantasy to materialize in real life,” the researchers note. “This result confirms the important distinction between sexual fantasies and sexual wishes, which is usually stronger among women than among men.”

The researchers also found a number of write-in “favorite” sexual fantasies that were common among men had no equivalent in women’s fantasies. These included having sex with a trans woman (included in 4.2 percent of write-in fantasies), being on the receiving end of strap-on/non-homosexual anal sex (6.1 percent), and watching a partner have sex with another man (8.4 percent).

Next up, the researchers plan to map subgroups of sexual fantasies that often go together (for instance, those who reported submissive fantasies were also more likely to report domination fantasies, and both were associated with higher levels of overall sexual satisfaction). For now, they caution that “care should be taken before labeling (a sexual fantasy) as unusual, let alone deviant.”

It would be interesting to see the results of this study replicated in other areas — Quebec may or may not be representative of the rest of western society.

Update, 28 November: Maggie McNeill is not impressed by the study at all.

But there’s a bigger problem, which as it turns out I’ve written on before when the titillation du jour was the claim that fewer men were paying for sex:

    … the General Social Survey … has one huge, massive flaw that was mentioned by my psychology professors way back in the Dark Ages of the 1980s, yet seems not to trouble those who rely upon it so heavily these days: it is conducted in person, face to face with the respondents. And that means that on sensitive topics carrying criminal penalties or heavy social stigma, the results are less than solid; negative opinions of its dependability on such matters range from “unreliable” to “useless”. The fact of the matter is that human beings want to look good to authority figures (like sociologists in white lab coats) even when they don’t know them from Adam, so they tend to deviate from strict veracity toward whatever answer they think the interviewer wants to hear…

So, what does this study say constitutes an “abnormal” fantasy?

    “Clinically, we know what pathological sexual fantasies are: they involve non-consenting partners, they induce pain, or they are absolutely necessary in deriving satisfaction,” Christian Joyal, the lead author of the study, said…The researchers found that only two sexual fantasies were…rare: Sexual activities with a child or an animal…only nine sexual fantasies were considered unusual…[including] “golden showers,” cross-dressing, [and] sex with a prostitute…

Joyal’s claim that sadistic and rape fantasies are innately “pathological” is both insulting and totally wrong; we “know” no such thing. And did you think it was a coincidence that pedophilia and bestiality were the only two fantasies to fall into the “rare” category during a time when those are the two most vilified kinks in the catalog, kinks which will result in permanent consignment to pariah status if discovered? Guess again; as recently as the 1980s it was acceptable to at least talk about both of these, and neither is as rare as this “study” pretends. But Man is a social animal, and even if someone is absolutely certain of his anonymity (which in the post-Snowden era would be a much rarer thing than either of those fantasies), few are willing to risk the disapproval of a lab-coated authority figure even if he isn’t sitting directly in front of them. What this study shows is not how common these fantasies actually are, but rather how safe people feel admitting to them. And while that’s an interesting thing in itself, it isn’t what everyone from researchers to reporters to readers is pretending the study measured.

October 16, 2014

Italian recession officially ends, thanks to drugs and prostitution

Filed under: Economics, Europe, Italy — Tags: , , , , — Nicholas @ 10:21

As Kelly McParland put it, it’s “another reason to legalize everything nasty”:

Italy learnt it was no longer in a recession on Wednesday thanks to a change in data calculations across the European Union which includes illegal economic activities such as prostitution and drugs in the GDP measure.

Adding illegal revenue from hookers, narcotics and black market cigarettes and alcohol to the eurozone’s third-biggest economy boosted gross domestic product figures.

First-quarter GDP was revised up slightly, from a 0.1 percent decline to a flat reading, the national institute of statistics said.

Although ISTAT confirmed a 0.2 percent decline for the second quarter, the revision of the first quarter data meant Italy had escaped its third recession in the last six years.

An economy must contract for two consecutive quarters, each measured against output in the previous quarter, for a country to be technically in recession.

It’s merely a change in the statistical measurement, not an actual increase in Italian economic activity. And, given that illegal revenue pretty much by definition isn’t (and can’t be) accurately tracked, it’s only an estimated value anyway.
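The technical-recession rule quoted above is entirely mechanical, which is why a 0.1-point revision can end a recession on paper. A minimal sketch, using only the growth figures reported in the article:

```python
def in_recession(qoq_growth):
    """True if the last two quarter-on-quarter growth readings are both negative."""
    return len(qoq_growth) >= 2 and qoq_growth[-1] < 0 and qoq_growth[-2] < 0

# As originally reported: Q1 at -0.1%, Q2 at -0.2% -> two contracting quarters.
print(in_recession([-0.1, -0.2]))  # True

# After the revision Q1 reads flat (0.0), so the two-quarter streak never
# started, even though Q2 is unchanged at -0.2%.
print(in_recession([0.0, -0.2]))   # False
```

Nothing about Italian economic activity changed between those two lines; only the first number did.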

October 15, 2014

The pay gap issue, again

Filed under: Business, Economics — Tags: , , , — Nicholas @ 09:28

There’s been a lot of moaning on about inequality recently — some are even predicting it will be the big issue in next year’s Canadian federal election — but the eye-popping figures being tossed around (CEOs being paid hundreds of times the average wage) are very much a case of statistical cherry-picking:

Before retiring to their districts for the fall, the House Democratic Caucus rallied behind the CEO/Employee Pay Fairness Act, which would prevent a public company from deducting executive compensation over $1 million unless it also gives rank-and-file employees raises that keep pace with the cost of living and labor productivity.

Meanwhile, the AFL-CIO and its aligned think tanks have made hay of the huge difference between the pay of CEOs and employees. One of the most widely cited measures of the “gap” comes from the AFL-CIO’s Executive Paywatch website.

  • The nation’s largest federation of unions laments that “corporate CEOs have been taking a greater share of the economic pie” while wages have stagnated for the rest of us.
  • As proof, it points to a 331-to-1 gap in compensation between America’s chief executives and the pay of the average worker.

That’s a sizable number. But don’t grab the pitchforks just yet, says Mark J. Perry, economics professor at the University of Michigan-Flint and resident scholar at the American Enterprise Institute, and Michael Saltsman, research director at the Employment Policies Institute.

The AFL-CIO calculated a pay gap based on a very small sample — 350 CEOs from the S&P 500. According to the Bureau of Labor Statistics, there were 248,760 chief executives in the U.S. in 2013.

  • The BLS reports that the average annual salary for these chief executives is $178,400, which we can compare to the $35,239-per-year salary the AFL-CIO uses for the average American worker.
  • That shrinks the executive pay gap from 331-to-1 down to a far less newsworthy number of roughly five-to-one.
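Perry and Saltsman’s recalculation is easy to reproduce from the figures quoted above, with Python used purely as a calculator:

```python
avg_worker_pay = 35_239    # AFL-CIO's figure for the average American worker
bls_ceo_avg_pay = 178_400  # BLS average salary across all 248,760 chief executives

# The ratio using the broad BLS definition of "chief executive":
print(round(bls_ceo_avg_pay / avg_worker_pay, 1))  # ~5.1, i.e. roughly five-to-one

# The average CEO pay implied by the AFL-CIO's 331-to-1 headline number:
print(331 * avg_worker_pay)  # 11664109 -- an S&P 500-sized pay packet
```

The entire gulf between 331-to-1 and five-to-one comes from which population of “chief executives” you average over: 350 of the largest firms in the country, or all 248,760 of them.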

October 13, 2014

Statistical sleight-of-hand on the dangers of texting while driving

Filed under: Health, Media, USA — Tags: , , , , — Nicholas @ 10:15

Philip N. Cohen casts a skeptical eye at the frequently cited statistic on the dangers of texting, especially to teenage drivers. It’s another “epidemic” of bad statistics and panic-mongering headlines:

Recently, [author and journalist Matt] Richtel tweeted a link to this old news article that claims texting causes more fatal accidents for teenagers than alcohol. The article says some researcher estimates “more than 3,000 annual teen deaths from texting,” but there is no reference to a study or any source for the data used to make the estimate. As I previously noted, that’s not plausible.

In fact, 2,823 teens died in motor vehicle accidents in 2012 (only 2,228 of whom were vehicle occupants). So, my math gets me 7.7 teens per day dying in motor vehicle accidents, regardless of the cause. I’m no Pulitzer Prize-winning New York Times journalist, but I reckon that makes this giant factoid on Richtel’s website wrong, which doesn’t bode well for the book.

In fact, I suspect the 11-per-day meme comes from Mother Jones (or whoever someone there got it from) doing the math wrong on that Newsday number of 3,000 per year and calling it “nearly a dozen” (3,000 is 8.2 per day). And if you Google around looking for this 11-per-day statistic, you find sites like textinganddrivingsafety.com, which, like Richtel does in his website video, attributes the statistic to the “Institute for Highway Safety.” I think they mean the Insurance Institute for Highway Safety, which is the source I used for the 2,823 number above. (The fact that he gets the name wrong suggests he got the statistic second-hand.) IIHS has an extensive page of facts on distracted driving, which doesn’t have any fact like this (they actually express skepticism about inflated claims of cell phone effects).
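Cohen’s per-day arithmetic checks out; it is just the annual totals divided by 365:

```python
iihs_teen_deaths_2012 = 2_823   # teen motor-vehicle deaths, 2012, ALL causes (IIHS)
claimed_texting_deaths = 3_000  # the unsourced "annual teen deaths from texting" estimate

print(round(iihs_teen_deaths_2012 / 365, 1))   # 7.7 per day, from all causes combined
print(round(claimed_texting_deaths / 365, 1))  # 8.2 per day, not the meme's "11"
```

Even taken at face value, the inflated 3,000-a-year figure works out to 8.2 a day, and a texting-only toll cannot plausibly exceed the 7.7-a-day all-causes total.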

[…]

I generally oppose scare-mongering manipulations of data that take advantage of common ignorance. The people selling mobile-phone panic don’t dwell on the fact that the roads are getting safer and safer, and just let you go on assuming they’re getting more and more dangerous. I reviewed all that here, showing the increase in mobile phone subscriptions relative to the decline in traffic accidents, injuries, and deaths.

That doesn’t mean texting and driving isn’t dangerous. I’m sure it is. Cell phone bans may be a good idea, although the evidence that they save lives is mixed. But the overall situation is surely more complicated than the TEXTING-WHILE-DRIVING EPIDEMIC suggests. The whole story doesn’t seem right — how can phones be so dangerous, and growing more and more pervasive, while accidents and injuries fall? At the very least, a powerful part of the explanation is being left out. (I wonder if phones displace other distractions, like eating and putting on make-up; or if some people drive more cautiously while they’re using their phones, to compensate for their distraction; or if distracted phone users were simply the worst drivers already.)

October 8, 2014

Something is wrong when your “data adjustment” is to literally double the reported numbers

Filed under: Health, USA — Tags: , , — Nicholas @ 10:32

In Forbes, Trevor Butterworth looks at an odd data analysis piece where the “fix” for a discrepancy in reported drinks per capita is to just assume everyone under-reported and to double that number:

“Think you drink a lot? This chart will tell you.”

The chart, reproduced below, breaks down the distribution of drinkers into deciles, and ends with the startling conclusion that 24 million American adults — 10 percent of the adult population over 18 — consume a staggering 74 drinks a week.

[Infographic: “Time for a stiff drink”]

The source for this figure is “Paying the Tab,” by Phillip J. Cook, which was published in 2007. If we look at the section where he arrives at this calculation, and go to the footnote, we find that he used 2001-2002 data from NESARC, the National Epidemiologic Survey on Alcohol and Related Conditions (run by the National Institute on Alcohol Abuse and Alcoholism), which had a representative sample of 43,093 adults over the age of 18. But following this footnote, we find that Cook corrected these data for under-reporting by multiplying the number of drinks each respondent claimed they had drunk by 1.97 in order to comport with the previous year’s sales data for alcohol in the US. Why? It turns out that alcohol sales in the US in 2000 were double what NESARC’s respondents — a nationally representative sample, remember — claimed to have drunk.

While the mills of US dietary research rely on the great National Health and Nutrition Examination Survey to digest our diets and come up with numbers, we know, thanks to the recent work of Edward Archer, that recall-based survey data are highly unreliable: we misremember what we ate, we misjudge by how much; we lie. Were we to live on what we tell academics we eat, life for almost two thirds of Americans would be biologically implausible.

But Cook, who is trying to show that the distribution of drinking is uneven, ends up trying to solve an apparent recall problem by creating an aggregate multiplier to plug the sales data gap. And the problem is that this requires us to believe that every drinker misremembered by a factor of almost two. This might not be much of a stretch for moderate drinkers; but did everyone who drank, say, four or eight drinks per week systematically forget that they actually had eight or sixteen? That seems like a stretch.

We are also required to believe that just as those who drank consumed significantly more than they were willing to admit, those who claimed to be consistently teetotal never touched a drop. And, we must also forget that those who aren’t supposed to be drinking at all are also younger than 18, and their absence from Cook’s data may well constitute a greater error.
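To see how blunt an aggregate multiplier is, here is a sketch of the correction as Butterworth describes it: every self-report scaled by the same factor. The decile values below are hypothetical, invented purely for illustration; only the 1.97 multiplier comes from the excerpt.

```python
multiplier = 1.97  # Cook's correction factor, chosen to match national sales data

# Hypothetical weekly self-reports by decile, lightest to heaviest (illustrative only):
reported_weekly_drinks = [0, 0, 0, 1, 2, 4, 6, 10, 15, 37.6]

corrected = [round(d * multiplier, 1) for d in reported_weekly_drinks]
print(corrected)  # the top decile lands near 74 drinks a week; teetotallers stay at 0
```

A uniform multiplier leaves every reported zero at zero, which means the entire reporting gap is assumed to sit with the people who admitted to drinking something, which is exactly the objection raised above.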

September 3, 2014

QotD: The relative size of the Chinese economy, historically speaking

Filed under: China, Economics, History, Quotations — Tags: , , — Nicholas @ 00:01

People seem to want to get freaked out about China passing the US in terms of the size of its economy. But in the history of Civilization there have probably been barely 200 years in the last 4000 that China hasn’t been the largest economy in the world. It probably only lost that title in the early 19th century and is just now getting it back. We are in some senses ending an unusual period, not starting one.

Warren Meyer, “It is Historically Unusual for China NOT to be the Largest Economy on Earth”, Coyote Blog, 2014-08-30.

August 18, 2014

Worstall confirms that “the UK would lose 3 million jobs in the year it left the European Union”

Filed under: Britain, Business, Economics, Europe — Tags: , , — Nicholas @ 09:15

There you go … proof positive that the UK cannot possibly, under any circumstances, leave the European Union. Except for the fact that the UK would lose 3 million jobs in the year even if it stayed with the EU, because that’s how many jobs it normally loses in a year:

UK Would Not Lose 3 Million Jobs If It Left The European Union

Well, of course, the UK would lose 3 million jobs in the year it left the European Union, because the UK loses 3 million jobs each and every year. Roughly 10% of all jobs are destroyed in a year, and the economy, generally, tends to create 3 million jobs a year as well. But that’s not the point of contention here, which is the oft-repeated claim that if we left the EU the UK economy would suddenly be bereft of 3 million jobs, that 10% of the workforce. And sadly this claim is a common one, and it just goes to show that there’s lies, damned lies and then there’s politics.

The way we’re supposed to understand the contention is that there’s three million who make their living making things that are then exported to our partners in the European Union. And we’re then to make the leap to the idea that if we did leave the EU then absolutely none of those jobs would exist: leaving the EU would be the same as never exporting another thing to the EU. This is of course entirely nonsense as any even random reading of our mutual histories would indicate: what became the UK has been exporting to the Continent ever since there’s actually been the technology to facilitate trade. Further too: there have been finds in shipwrecks in the Eastern Mediterranean of Cornish tin dating from 1,000 BC, so it’s not just bloodthirsty and drunken louts that we’ve been exporting all these years.
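Worstall’s churn point rests on one piece of arithmetic. The workforce figure below is a rough approximation supplied here for illustration; the 10% annual destruction rate is his:

```python
uk_workforce = 30_000_000        # rough approximation of total UK employment
annual_destruction_rate = 0.10   # ~10% of all jobs destroyed in a normal year

jobs_lost_per_year = int(uk_workforce * annual_destruction_rate)
print(jobs_lost_per_year)  # 3000000: three million jobs lost yearly, in or out of the EU
```

The headline number is thus indistinguishable from the background churn of a normal year, which is the whole joke.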

Salt studies and health outcomes – “all models need to be taken with a pinch of salt”

Filed under: Food, Health, Science — Tags: , , , — Nicholas @ 08:41

Colby Cosh linked to this rather interesting BMJ blog post by Richard Lehman, looking at studies of the impact of dietary salt reduction:

601 The usual wisdom about sodium chloride is that the more you take, the higher your blood pressure and hence your cardiovascular risk. We’ll begin, like the NEJM, with the PURE study. This was a massive undertaking. They recruited 102,216 adults from 18 countries and measured their 24 hour sodium and potassium excretion, using a single fasting morning urine specimen, and their blood pressure by using an automated device. In an ideal world, they would have carried on doing this every week for a month or two, but hey, this is still better than anyone has managed before now. Using these single point in time measurements, they found that people with elevated blood pressure seemed to be more sensitive to the effects of the cations sodium and potassium. Higher sodium raised their blood pressure more, and higher potassium lowered it more, than in individuals with normal blood pressure. In fact, if sodium is a cation, potassium should be called a dogion. And what I have described as effects are in fact associations: we cannot really know if they are causal.

612 But now comes the bombshell. In the PURE study, there was no simple linear relationship between sodium intake and the composite outcome of death and major cardiovascular events, over a mean follow-up period of 3.7 years. Quite the contrary, there was a sort of elongated U-shape distribution. The U begins high and is then splayed out: people who excreted less than 3 grams of salt daily were at much the highest risk of death and cardiovascular events. The lowest risk lay between 3 g and 5 g, with a slow and rather flat rise thereafter. On this evidence, trying to achieve a salt intake under 3 g is a bad idea, which will do you more harm than eating as much salt as you like. Moreover, if you eat plenty of potassium as well, you will have plenty of dogion to counter the cation. The true Mediterranean diet wins again. Eat salad and tomatoes with your anchovies, drink wine with your briny olives, sprinkle coarse salt on your grilled fish, lay it on a bed of cucumber, and follow it with ripe figs and apricots. Live long and live happily.

624 It was rather witty, if slightly unkind, of the NEJM to follow these PURE papers with a massive modelling study built on the assumption that sodium increases cardiovascular risk in linear fashion, mediated by blood pressure. Dariush Mozaffarian and his immensely hardworking team must be biting their lips, having trawled through all the data they could find about sodium excretion in 66 countries. They used a reference standard of 2 g sodium a day, assuming this was the point of optimal consumption and lowest risk. But from PURE, we now know it is associated with a higher cardiovascular risk than 13 grams a day. So they should now go through all their data again, having adjusted their statistical software to the observational curves of the PURE study. Even so, I would question the value of modelling studies on this scale: the human race is a complex thing to study, and all models need to be taken with a pinch of salt.

Update: Colby Cosh followed up the original link with this tweet. Ouch!

August 16, 2014

ESR on demilitarizing the police

Filed under: Law, Liberty, USA — Tags: , , , , , , — Nicholas @ 10:32

Eric S. Raymond is with most other libertarians about the problems with having your police become more like an occupying army:

I join my voice to those of Rand Paul and other prominent libertarians who are reacting to the violence in Ferguson, Mo. by calling for the demilitarization of the U.S.’s police. Beyond question, the local civil police in the U.S. are too heavily armed and in many places have developed an adversarial attitude towards the civilians they serve, one that makes police overreactions and civil violence almost inevitable.

But I publish this blog in part because I think it is my duty to speak taboo and unspeakable truths. And there’s another injustice being done here: the specific assumption, common among civil libertarians, that police overreactions are being driven by institutional racism. I believe this is dangerously untrue and actually impedes effective thinking about how to prevent future outrages.

There are some unwelcome statistics which at least partly explain why young black men are more likely to be stopped by the police:

… the percentage of black males 15-24 in the general population is about 1%. If you add “mixed”, which is reasonable in order to correspond to a policeman’s category of “nonwhite”, it goes to about 2%.

That 2% is responsible for almost all of 52% of U.S. homicides. Or, to put it differently, by these figures a young black or “mixed” male is roughly 26 times more likely to be a homicidal threat than a random person outside that category – older or younger blacks, whites, hispanics, females, whatever. If the young male is unambiguously black that figure goes up, about doubling.

26 times more likely. That’s a lot. It means that even given very forgiving assumptions about differential rates of conviction and other factors we probably still have a difference in propensity to homicide (and other violent crimes for which its rates are an index, including rape, armed robbery, and hot burglary) of around 20:1. That’s being very generous, assuming that cumulative errors have thrown my calculations off by up to a factor of 6 in the direction unfavorable to my argument.
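Raymond’s 26x figure is the ratio of the group’s share of homicides to its share of population, which gives that group’s homicide rate relative to the population-wide average. A sketch using the percentages from the excerpt:

```python
population_share_pct = 2   # young black/"mixed" males aged 15-24, per the excerpt
homicide_share_pct = 52    # share of U.S. homicides attributed to that group

# Share of homicides divided by share of population = rate relative to the average:
relative_rate = homicide_share_pct / population_share_pct
print(relative_rate)  # 26.0
```

Note that this compares the group to the average person; compared strictly to everyone outside the group (48% of homicides spread across 98% of the population), the ratio would come out higher still, so 26 is the conservative reading of his own numbers.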

[…]

Yeah, by all means let’s demilitarize the police. But let’s also stop screaming “racism” when, by the numbers, the bad shit that goes down with black male youths reflects a cop’s rational fear of that particular demographic – and not racism against blacks in general. Often the cops in these incidents are themselves black, a fact that media accounts tend to suppress.

What we can actually do about the implied problem is a larger question. (Decriminalizing drugs would be a good start.) But it’s one we can’t even begin to address rationally without seeing past the accusation of racism.
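The headline ratio in the quoted passage can be checked with a few lines of arithmetic. This is a minimal sketch using only the figures quoted above (a ~2% population share and a 52% homicide share, both taken as given, not independently verified):

```python
# Figures quoted in the excerpt above (assumptions, not independent data):
pop_share = 0.02   # young black/"mixed" males as a share of the U.S. population
hom_share = 0.52   # share of U.S. homicides attributed to that group

# Per-capita rate relative to the overall population average —
# this is the quoted "26 times more likely" figure.
rate_vs_average = hom_share / pop_share

# Relative to everyone *outside* the group, the ratio is larger still.
rate_vs_rest = (hom_share / pop_share) / ((1 - hom_share) / (1 - pop_share))

print(f"vs. population average: {rate_vs_average:.0f}x")   # 26x
print(f"vs. everyone outside the group: {rate_vs_rest:.0f}x")  # ~53x
```

Note that the quoted "26 times" compares the group to a random member of the whole population; comparing against only those outside the group roughly doubles it, which is consistent with the excerpt's own caveats.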

July 31, 2014

NFL to test player tracking RFID system this year

Filed under: Football, Media, Technology — Tags: , , — Nicholas @ 07:01

Tom Pelissero talks about the new system which will be installed at 17 NFL stadiums this season:

The NFL partnered with Zebra Technologies, which is applying the same radio-frequency identification (RFID) technology that it has used the past 15 years to monitor everything from supplies on automotive assembly lines to dairy cows’ milk production.

Work is underway to install receivers in 17 NFL stadiums, each connected with cables to a hub and server that logs players’ locations in real time. In less than a second, the server can spit out data that can be enhanced graphically for TV broadcasts with the press of a button.

[…]

TV networks have experimented in recent years with route maps and other visual enhancements of players’ movements. But league-wide deployment of the sensors and all the data they produce could be the most significant innovation since the yellow first-down line.

The data also will go to the NFL “cloud,” where it can be turned around in seconds for in-stadium use and, eventually, a variety of apps and other visual and second-screen experiences. Producing a set of proprietary statistics on players and teams is another goal, Shah said.

NFL teams — many already using GPS technology to track players’ movements, workload and efficiency in practice — won’t have access to the in-game information in 2014 because of competitive considerations while the league measures the sustainability and integrity of the data.

“But as you imagine, longer-term, that is the vision,” Shah said. “Ultimately, we’re going to have a whole bunch of location-based data that’s coming out of live-game environment, and we want teams to be able to marry that up to what they’re doing in practice facilities themselves.”

Zebra’s sensors are oblong, smaller in circumference than a quarter and installed under the top cup of the shoulder pad, Stelfox said. They blink out a signal 25 times a second and run on a watch battery. The San Francisco 49ers and Detroit Lions and their opponents wore them for each of the two teams’ home games last season as part of a trial run.

About 20 receivers will be placed around the bands between the upper and lower decks of the 17 stadiums that were selected for use this year. They’ll provide a cross-section of environments and make sure the technology is operational across competitive settings before full deployment.
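The 25 Hz blink rate quoted above implies a substantial data stream. A back-of-envelope sketch, where only the ping rate comes from the article and the rest (22 tagged players on the field, roughly three hours of real time per game) are illustrative assumptions:

```python
# Illustrative assumptions — only the 25 Hz ping rate comes from the article.
players_on_field = 22      # 11 per side; tagged coaches/officials would add more
pings_per_second = 25      # stated blink rate of each Zebra shoulder-pad tag
game_seconds = 3 * 3600    # ~3 hours of real time per NFL game

samples_per_game = players_on_field * pings_per_second * game_seconds
print(f"{samples_per_game:,} location samples per game")  # 5,940,000
```

Each of those samples still has to be resolved into a field position from the ~20 receivers' readings and timestamped, which is why the article emphasizes sub-second turnaround rather than raw volume.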

July 23, 2014

In statistical studies, the size of the data sample matters

Filed under: Food, Health, Science, USA — Tags: , , , , — Nicholas @ 08:49

In the ongoing investigation into why Westerners — especially North Americans — became obese, some of the early studies are being reconsidered. For example, I’ve mentioned the name of Dr. Ancel Keys a couple of times recently: he was the champion of the low-fat diet and his work was highly influential in persuading government health authorities to demonize fat in pursuit of better health outcomes. He was so successful as an advocate for this idea that his study became one of the most frequently cited in medical science. A brilliant success … that unfortunately flew far ahead of its statistical evidence:

So Keys had food records, although that coding and summarizing part sounds a little fishy. Then he followed the health of 13,000 men so he could find associations between diet and heart disease. So we can assume he had dietary records for all 13,000 of them, right?

Uh … no. That wouldn’t be the case.

The poster-boys for his hypothesis about dietary fat and heart disease were the men from the Greek island of Crete. They supposedly ate the diet Keys recommended: low-fat, olive oil instead of saturated animal fats and all that, you see. Keys tracked more than 300 middle-aged men from Crete as part of his study population, and lo and behold, few of them suffered heart attacks. Hypothesis supported, case closed.

So guess how many of those 300-plus men were actually surveyed about their eating habits? Go on, guess. I’ll wait …

And the answer is: 31.

Yup, 31. And that’s about the size of the dataset from each of the seven countries: somewhere between 25 and 50 men. It’s right there in the paper’s data tables. That’s a ridiculously small number of men to survey if the goal is to accurately compare diets and heart disease in seven countries.
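Why does n ≈ 31 matter so much? The standard error of a sample mean shrinks only with the square root of the sample size, so a survey of 31 men is roughly three times noisier than one covering all 300 Keys actually followed. A minimal sketch with those illustrative numbers:

```python
import math

# Standard error of a mean scales as sigma / sqrt(n),
# so halving the noise requires quadrupling the sample.
def se_ratio(n_small: int, n_large: int) -> float:
    """How much noisier a mean estimated from n_small is vs. n_large."""
    return math.sqrt(n_large / n_small)

print(f"{se_ratio(31, 300):.2f}x noisier")  # 3.11x noisier
```

And that is before any of the systematic problems (Lent, swapped-in data) described below, which no amount of sample size can fix.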

[…]

Getting the picture? Keys followed the health of more than 300 men from Crete. But he only surveyed 31 of them, with one of those surveys taken during the meat-abstinence month of Lent. Oh, and the original seven-day food-recall records weren’t available later, so he swapped in data from an earlier paper. Then to determine fruit and vegetable intake, he used data sheets about food availability in Greece during a four-year period.

And from this mess, he concluded that high-fat diets cause heart attacks and low-fat diets prevent them.

Keep in mind, this is one of the most-cited studies in all of medical science. It’s one of the pillars of the Diet-Heart hypothesis. It helped to convince the USDA, the AHA, doctors, nutritionists, media health writers, your parents, etc., that saturated fat clogs our arteries and kills us, so we all need to be on low-fat diets – even kids.

Yup, Ancel Keys had a tiny one … but he sure managed to screw a lot of people with it.

H/T to Amy Alkon for the link.
