Quotulatiousness

December 10, 2014

US child poverty is bad … but nowhere near as bad as they say

Filed under: Media, USA — Nicholas @ 00:04

Tim Worstall debunks a headline statistic from earlier this month:

We’ve a new report out from the Mailman School of Public Health telling us that in some urban parts of the US child poverty is up at the unbelievable rates of 40, even 50% or more. The problem with this claim is that it’s simply not true. Apparently the researchers aren’t quite au fait with how poverty is both defined and alleviated in the US. Which is, when you think about it, something of a problem for those who decide to present us with statistics about child poverty.

[…]

Everyone else [in the world] (as well as using a relative poverty standard, usually below 60% of median earnings adjusted for family size) measures poverty after the effects of the tax and benefits systems on alleviating poverty. So, in my native UK if you’re poor you might get some cash payments (say, unemployment pay), some tax credits, help with your housing costs (housing benefit we call it), reduced property taxes (council tax credit) and so on. Whether you are poor or not is defined as being whether you are still under that poverty level after the effects of all of those attempts to alleviate poverty.

In the US things are rather different. It’s an absolute standard of income (set in the 1960s and upgraded only for inflation, not median incomes, since) but it counts only market income plus direct cash transfers to the poor before measuring against that standard. Thus, when we measure the US poor we do not include the EITC (equivalent of those UK tax credits, indeed our UK ones were copied from the US), we do not include Section 8 vouchers (housing benefit), Medicaid, we don’t even include food stamps. Because the US measure of poverty simply doesn’t include the effects of benefits in kind and through the tax system.

The US measure therefore isn’t the number of children living in poverty. It’s the number of children who would be in poverty if there wasn’t this system of government alleviation of poverty. When we do actually take into account what is done to alleviate child poverty we find that it’s really some 2-3% of US children who live in poverty. Yes, that low: the US welfare state is very much child orientated.

(Emphasis mine)

November 14, 2014

Either kink is now pretty much mainstream … or Quebec is a hotbed of kinksters

Filed under: Cancon, Health — Nicholas @ 07:24

In Reason, Elizabeth Nolan Brown reviews the findings of a recent survey on what kind of kinks are no longer considered weird or unusual (because so many people fantasize about ‘em or are actively partaking of ‘em):

Being sexually dominated. Having sex with multiple people at once. Watching someone undress without their knowledge. These are just a few of the totally normal sexual fantasies uncovered by recent research published in the Journal of Sexual Medicine. The overarching takeaway from this survey of about 1,500 Canadian adults is that sexual kink is incredibly common.

While plenty of research has been conducted on sexual fetishes, less is known about the prevalence of particular sexual desires that don’t rise to the level of pathological (i.e., don’t harm others or interfere with normal life functioning and aren’t a requisite for getting off). “Our main objective was to specify norms in sexual fantasies,” said lead study author Christian Joyal. “We suspected there are a lot more common fantasies than atypical fantasies.”

Joyal’s team surveyed 717 Québécois men and 799 women, with a mean age of 30. Participants ranked 55 different sexual fantasies, as well as writing in their own. Each fantasy was then rated as statistically rare, unusual, common, or typical.

Of course, the statistics also show where men and women differ in some areas:

Notably, men were more likely than women to say they wanted their sexual fantasies to become sexual realities. “Approximately half of women with descriptions of submissive fantasies specified that they would not want the fantasy to materialize in real life,” the researchers note. “This result confirms the important distinction between sexual fantasies and sexual wishes, which is usually stronger among women than among men.”

The researchers also found a number of write-in “favorite” sexual fantasies that were common among men had no equivalent in women’s fantasies. These included having sex with a trans woman (included in 4.2 percent of write-in fantasies), being on the receiving end of strap-on/non-homosexual anal sex (6.1 percent), and watching a partner have sex with another man (8.4 percent).

Next up, the researchers plan to map subgroups of sexual fantasies that often go together (for instance, those who reported submissive fantasies were also more likely to report domination fantasies, and both were associated with higher levels of overall sexual satisfaction). For now, they caution that “care should be taken before labeling (a sexual fantasy) as unusual, let alone deviant.”

It would be interesting to see the results of this study replicated in other areas — Quebec may or may not be representative of the rest of western society.

Update, 28 November: Maggie McNeill is not impressed by the study at all.

But there’s a bigger problem, which as it turns out I’ve written on before when the titillation du jour was the claim that fewer men were paying for sex:

    … the General Social Survey … has one huge, massive flaw that was mentioned by my psychology professors way back in the Dark Ages of the 1980s, yet seems not to trouble those who rely upon it so heavily these days: it is conducted in person, face to face with the respondents. And that means that on sensitive topics carrying criminal penalties or heavy social stigma, the results are less than solid; negative opinions of its dependability on such matters range from “unreliable” to “useless”. The fact of the matter is that human beings want to look good to authority figures (like sociologists in white lab coats) even when they don’t know them from Adam, so they tend to deviate from strict veracity toward whatever answer they think the interviewer wants to hear…

So, what does this study say constitutes an “abnormal” fantasy?

    “Clinically, we know what pathological sexual fantasies are: they involve non-consenting partners, they induce pain, or they are absolutely necessary in deriving satisfaction,” Christian Joyal, the lead author of the study, said…The researchers found that only two sexual fantasies were…rare: Sexual activities with a child or an animal…only nine sexual fantasies were considered unusual…[including] “golden showers,” cross-dressing, [and] sex with a prostitute…

Joyal’s claim that sadistic and rape fantasies are innately “pathological” is both insulting and totally wrong; we “know” no such thing. And did you think it was a coincidence that pedophilia and bestiality were the only two fantasies to fall into the “rare” category during a time when those are the two most vilified kinks in the catalog, kinks which will result in permanent consignment to pariah status if discovered? Guess again; as recently as the 1980s it was acceptable to at least talk about both of these, and neither is as rare as this “study” pretends. But Man is a social animal, and even if someone is absolutely certain of his anonymity (which in the post-Snowden era would be a much rarer thing than either of those fantasies), few are willing to risk the disapproval of a lab-coated authority figure even if he isn’t sitting directly in front of them. What this study shows is not how common these fantasies actually are, but rather how safe people feel admitting to them. And while that’s an interesting thing in itself, it isn’t what everyone from researchers to reporters to readers is pretending the study measured.

October 16, 2014

Italian recession officially ends, thanks to drugs and prostitution

Filed under: Economics, Europe — Nicholas @ 10:21

As Kelly McParland put it, it’s “another reason to legalize everything nasty”:

Italy learnt it was no longer in a recession on Wednesday thanks to a change in data calculations across the European Union which includes illegal economic activities such as prostitution and drugs in the GDP measure.

Adding illegal revenue from hookers, narcotics and black market cigarettes and alcohol to the eurozone’s third-biggest economy boosted gross domestic product figures.

GDP rose slightly from a 0.1 percent decline for the first quarter to a flat reading, the national institute of statistics said.

Although ISTAT confirmed a 0.2 percent decline for the second quarter, the revision of the first quarter data meant Italy had escaped its third recession in the last six years.

The economy must contract for two consecutive quarters, from output in the previous quarter, for a country to be technically in recession.

It’s merely a change in the statistical measurement, not an actual increase in Italian economic activity. And, given that illegal revenue pretty much by definition isn’t (and can’t be) accurately tracked, it’s only an estimated value anyway.
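
The “technical recession” rule quoted above is simple enough to write down. A minimal sketch in Python, using the quarterly figures reported in the article (the function is just an illustration of the two-consecutive-quarters convention, not anyone’s official methodology):

    def in_technical_recession(quarterly_growth):
        """True if the two most recent quarters both show contraction,
        the usual 'technical recession' convention."""
        return len(quarterly_growth) >= 2 and all(q < 0 for q in quarterly_growth[-2:])

    # Italy in 2014: Q1 revised from a 0.1% decline to flat, Q2 at -0.2%
    print(in_technical_recession([-0.1, -0.2]))  # True  (pre-revision figures)
    print(in_technical_recession([0.0, -0.2]))   # False (post-revision: recession avoided)

A 0.1 point revision to a single quarter is all it takes to flip the answer, which is why the caveat about illegal revenue being only an estimate matters.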

October 15, 2014

The pay gap issue, again

Filed under: Business, Economics — Nicholas @ 09:28

There’s been a lot of moaning on about inequality recently — some are even predicting it will be the big issue in next year’s Canadian federal election — but the eye-popping figures being tossed around (CEOs being paid hundreds of times the average wage) are very much a case of statistical cherry-picking:

Before retiring to their districts for the fall, the House Democratic Caucus rallied behind the CEO/Employee Pay Fairness Act, which would prevent a public company from deducting executive compensation over $1 million unless it also gives rank-and-file employees raises that keep pace with the cost of living and labor productivity.

Meanwhile, the AFL-CIO and its aligned think tanks have made hay of the huge difference between the pay of CEOs and employees. One of the most widely cited measures of the “gap” comes from the AFL-CIO’s Executive Paywatch website.

  • The nation’s largest federation of unions laments that “corporate CEOs have been taking a greater share of the economic pie” while wages have stagnated for the rest of us.
  • As proof, it points to a 331-to-1 gap in compensation between America’s chief executives and the pay of the average worker.

That’s a sizable number. But don’t grab the pitchforks just yet, say Mark J. Perry, economics professor at the University of Michigan-Flint and resident scholar at the American Enterprise Institute, and Michael Saltsman, research director at the Employment Policies Institute.

The AFL-CIO calculated a pay gap based on a very small sample — 350 CEOs from the S&P 500. According to the Bureau of Labor Statistics, there were 248,760 chief executives in the U.S. in 2013.

  • The BLS reports that the average annual salary for these chief executives is $178,400, which we can compare to the $35,239-per-year salary the AFL-CIO uses for the average American worker.
  • That shrinks the executive pay gap from 331-to-1 down to a far less newsworthy number of roughly five-to-one.
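
The arithmetic behind the two ratios is worth making explicit. A quick sketch using the figures quoted above; the S&P 500 CEO average is backed out of the AFL-CIO’s own 331-to-1 claim, so treat it as an approximation:

    avg_worker_pay = 35_239       # AFL-CIO figure for the average American worker
    bls_avg_ceo_pay = 178_400     # BLS average salary across ~248,760 US chief executives

    # The AFL-CIO ratio uses ~350 S&P 500 CEOs; backing the implied average out of 331-to-1:
    sp500_avg_ceo_pay = 331 * avg_worker_pay          # roughly $11.7 million

    print(round(sp500_avg_ceo_pay / avg_worker_pay))  # 331 (the headline number)
    print(round(bls_avg_ceo_pay / avg_worker_pay, 1)) # 5.1 (the ratio using all CEOs)

Both numbers are “true”; they just answer different questions, which is the cherry-picking point being made here.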

October 13, 2014

Statistical sleight-of-hand on the dangers of texting while driving

Filed under: Health, Media, USA — Nicholas @ 10:15

Philip N. Cohen casts a skeptical eye at the frequently cited statistic on the dangers of texting, especially to teenage drivers. It’s another “epidemic” of bad statistics and panic-mongering headlines:

Recently, [author and journalist Matt] Richtel tweeted a link to this old news article that claims texting causes more fatal accidents for teenagers than alcohol. The article says some researcher estimates “more than 3,000 annual teen deaths from texting,” but there is no reference to a study or any source for the data used to make the estimate. As I previously noted, that’s not plausible.

In fact, 2,823 teens died in motor vehicle accidents in 2012 (only 2,228 of whom were vehicle occupants). So, my math gets me 7.7 teens per day dying in motor vehicle accidents, regardless of the cause. I’m no Pulitzer Prize-winning New York Times journalist, but I reckon that makes this giant factoid on Richtel’s website wrong, which doesn’t bode well for the book.

In fact, I suspect the 11-per-day meme comes from Mother Jones (or whoever someone there got it from) doing the math wrong on that Newsday number of 3,000 per year and calling it “nearly a dozen” (3,000 is 8.2 per day). And if you Google around looking for this 11-per-day statistic, you find sites like textinganddrivingsafety.com, which, like Richtel does in his website video, attributes the statistic to the “Institute for Highway Safety.” I think they mean the Insurance Institute for Highway Safety, which is the source I used for the 2,823 number above. (The fact that he gets the name wrong suggests he got the statistic second-hand.) IIHS has an extensive page of facts on distracted driving, which doesn’t have any fact like this (they actually express skepticism about inflated claims of cell phone effects).

[…]

I generally oppose scare-mongering manipulations of data that take advantage of common ignorance. The people selling mobile-phone panic don’t dwell on the fact that the roads are getting safer and safer, and just let you go on assuming they’re getting more and more dangerous. I reviewed all that here, showing the increase in mobile phone subscriptions relative to the decline in traffic accidents, injuries, and deaths.

That doesn’t mean texting and driving isn’t dangerous. I’m sure it is. Cell phone bans may be a good idea, although the evidence that they save lives is mixed. But the overall situation is surely more complicated than the TEXTING-WHILE-DRIVING EPIDEMIC suggests. The whole story doesn’t seem right — how can phones be so dangerous, and growing more and more pervasive, while accidents and injuries fall? At the very least, a powerful part of the explanation is being left out. (I wonder if phones displace other distractions, like eating and putting on make-up; or if some people drive more cautiously while they’re using their phones, to compensate for their distraction; or if distracted phone users were simply the worst drivers already.)
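
Cohen’s per-day arithmetic is easy to reproduce. A minimal sketch using the figures he cites above:

    iihs_teen_deaths_2012 = 2_823     # teen motor vehicle deaths from all causes (IIHS)
    newsday_texting_estimate = 3_000  # the unsourced "teen deaths from texting" claim

    print(round(iihs_teen_deaths_2012 / 365, 1))     # 7.7 per day, all causes combined
    print(round(newsday_texting_estimate / 365, 1))  # 8.2 per day, not "nearly a dozen"

Even taking the unsourced 3,000 figure at face value, the 11-per-day meme overstates its own source by about a third.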

October 8, 2014

Something is wrong when your “data adjustment” is to literally double the reported numbers

Filed under: Health, USA — Nicholas @ 10:32

In Forbes, Trevor Butterworth looks at an odd data analysis piece where the “fix” for a discrepancy in reported drinks per capita is to just assume everyone under-reported and to double that number:

“Think you drink a lot? This chart will tell you.”

The chart, reproduced below, breaks down the distribution of drinkers into deciles, and ends with the startling conclusion that 24 million American adults — 10 percent of the adult population over 18 — consume a staggering 74 drinks a week.

[Infographic: “Time for a stiff drink”]

The source for this figure is “Paying the Tab,” by Philip J. Cook, which was published in 2007. If we look at the section where he arrives at this calculation, and go to the footnote, we find that he used 2001-2002 data from NESARC (the National Epidemiologic Survey on Alcohol and Related Conditions, conducted by the National Institute on Alcohol Abuse and Alcoholism), which had a representative sample of 43,093 adults over the age of 18. But following this footnote, we find that Cook corrected these data for under-reporting by multiplying the number of drinks each respondent claimed they had drunk by 1.97 in order to comport with the previous year’s sales data for alcohol in the US. Why? It turns out that alcohol sales in the US in 2000 were double what NESARC’s respondents — a nationally representative sample, remember — claimed to have drunk.

While the mills of US dietary research rely on the great National Health and Nutrition Examination Survey to digest our diets and come up with numbers, we know, thanks to the recent work of Edward Archer, that recall-based survey data are highly unreliable: we misremember what we ate, we misjudge by how much; we lie. Were we to live on what we tell academics we eat, life for almost two thirds of Americans would be biologically implausible.

But Cook, who is trying to show that the distribution of drinking is uneven, ends up trying to solve an apparent recall problem by creating an aggregate multiplier to plug the sales data gap. And the problem is that this requires us to believe that every drinker misremembered by a factor of almost two. This might not be much of a stretch for moderate drinkers; but did everyone who drank, say, four or eight drinks per week systematically forget that they actually had eight or sixteen? That seems like a stretch.

We are also required to believe that just as those who drank consumed significantly more than they were willing to admit, those who claimed to be consistently teetotal never touched a drop. And, we must also forget that those who aren’t supposed to be drinking at all are also younger than 18, and their absence from Cook’s data may well constitute a greater error.
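
In effect, Cook’s correction is a single scaling factor, derived from the gap between survey totals and sales totals and then applied uniformly to every respondent. A minimal sketch of that procedure; every number here is invented except the 1.97 factor, and the 37.5 is simply the headline 74 drinks divided by 1.97:

    # Derive an aggregate under-reporting multiplier the way the footnote describes:
    # scale survey-reported consumption up until it matches the sales data.
    total_drinks_from_sales = 1_970.0   # hypothetical total implied by alcohol sales
    total_drinks_reported = 1_000.0     # hypothetical total respondents admitted to

    multiplier = total_drinks_from_sales / total_drinks_reported   # 1.97, as in Cook's case

    # The multiplier is then applied to every respondent, drinker or not:
    reported_weekly_drinks = [0, 2, 4, 8, 37.5]
    adjusted = [round(d * multiplier, 1) for d in reported_weekly_drinks]
    print(adjusted)   # [0.0, 3.9, 7.9, 15.8, 73.9]

Note what the uniform multiplier does: the self-reported teetotaller stays at zero, everyone else roughly doubles, and the top decile lands on the 74-drinks-a-week figure that made the headline.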

September 3, 2014

QotD: The relative size of the Chinese economy, historically speaking

Filed under: China, Economics, History, Quotations — Nicholas @ 00:01

People seem to want to get freaked out about China passing the US in terms of the size of its economy. But in the history of Civilization there have probably been barely 200 years in the last 4000 that China hasn’t been the largest economy in the world. It probably only lost that title in the early 19th century and is just now getting it back. We are in some senses ending an unusual period, not starting one.

Warren Meyer, “It is Historically Unusual for China NOT to be the Largest Economy on Earth”, Coyote Blog, 2014-08-30.

August 18, 2014

Worstall confirms that “the UK would lose 3 million jobs in the year it left the European Union”

Filed under: Britain, Business, Economics, Europe — Nicholas @ 09:15

There you go … proof positive that the UK cannot possibly, under any circumstances, leave the European Union. Except for the fact that the UK would lose 3 million jobs in the year even if it stayed with the EU, because that’s how many jobs it normally loses in a year:

UK Would Not Lose 3 Million Jobs If It Left The European Union

Well, of course, the UK would lose 3 million jobs in the year it left the European Union because the UK loses 3 million jobs each and every year. Roughly 10% of all jobs are destroyed in a year and the economy, generally, tends to create 3 million jobs a year as well. But that’s not the point of contention here, which is the oft-repeated claim that if we left the EU the UK economy would suddenly be bereft of 3 million jobs, that 10% of the workforce. And sadly this claim is a common one and it just goes to show that there’s lies, damned lies and then there’s politics.

The way we’re supposed to understand the contention is that there’s three million who make their living making things that are then exported to our partners in the European Union. And we’re then to make the leap to the idea that if we did leave the EU then absolutely none of those jobs would exist: leaving the EU would be the same as never exporting another thing to the EU. This is of course entirely nonsense as any even random reading of our mutual histories would indicate: what became the UK has been exporting to the Continent ever since there’s actually been the technology to facilitate trade. Further too: there have been finds in shipwrecks in the Eastern Mediterranean of Cornish tin dating from 1,000 BC, so it’s not just bloodthirsty and drunken louts that we’ve been exporting all these years.

Salt studies and health outcomes – “all models need to be taken with a pinch of salt”

Filed under: Health, Science — Nicholas @ 08:41

Colby Cosh linked to this rather interesting BMJ blog post by Richard Lehman, looking at studies of the impact of dietary salt reduction:

601 The usual wisdom about sodium chloride is that the more you take, the higher your blood pressure and hence your cardiovascular risk. We’ll begin, like the NEJM, with the PURE study. This was a massive undertaking. They recruited 102 216 adults from 18 countries and measured their 24 hour sodium and potassium excretion, using a single fasting morning urine specimen, and their blood pressure by using an automated device. In an ideal world, they would have carried on doing this every week for a month or two, but hey, this is still better than anyone has managed before now. Using these single point in time measurements, they found that people with elevated blood pressure seemed to be more sensitive to the effects of the cations sodium and potassium. Higher sodium raised their blood pressure more, and higher potassium lowered it more, than in individuals with normal blood pressure. In fact, if sodium is a cation, potassium should be called a dogion. And what I have described as effects are in fact associations: we cannot really know if they are causal.

612 But now comes the bombshell. In the PURE study, there was no simple linear relationship between sodium intake and the composite outcome of death and major cardiovascular events, over a mean follow-up period of 3.7 years. Quite the contrary, there was a sort of elongated U-shape distribution. The U begins high and is then splayed out: people who excreted less than 3 grams of salt daily were at much the highest risk of death and cardiovascular events. The lowest risk lay between 3 g and 5 g, with a slow and rather flat rise thereafter. On this evidence, trying to achieve a salt intake under 3 g is a bad idea, which will do you more harm than eating as much salt as you like. Moreover, if you eat plenty of potassium as well, you will have plenty of dogion to counter the cation. The true Mediterranean diet wins again. Eat salad and tomatoes with your anchovies, drink wine with your briny olives, sprinkle coarse salt on your grilled fish, lay it on a bed of cucumber, and follow it with ripe figs and apricots. Live long and live happily.

624 It was rather witty, if slightly unkind, of the NEJM to follow these PURE papers with a massive modelling study built on the assumption that sodium increases cardiovascular risk in linear fashion, mediated by blood pressure. Dariush Mozaffarian and his immensely hardworking team must be biting their lips, having trawled through all the data they could find about sodium excretion in 66 countries. They used a reference standard of 2 g sodium a day, assuming this was the point of optimal consumption and lowest risk. But from PURE, we now know it is associated with a higher cardiovascular risk than 13 grams a day. So they should now go through all their data again, having adjusted their statistical software to the observational curves of the PURE study. Even so, I would question the value of modelling studies on this scale: the human race is a complex thing to study, and all models need to be taken with a pinch of salt.

Update: Colby Cosh followed up the original link with this tweet. Ouch!

August 16, 2014

ESR on demilitarizing the police

Filed under: Law, Liberty, USA — Nicholas @ 10:32

Eric S. Raymond agrees with most other libertarians about the problems with having your police become more like an occupying army:

I join my voice to those of Rand Paul and other prominent libertarians who are reacting to the violence in Ferguson, Mo. by calling for the demilitarization of the U.S.’s police. Beyond question, the local civil police in the U.S. are too heavily armed and in many places have developed an adversarial attitude towards the civilians they serve, one that makes police overreactions and civil violence almost inevitable.

But I publish this blog in part because I think it is my duty to speak taboo and unspeakable truths. And there’s another injustice being done here: the specific assumption, common among civil libertarians, that police overreactions are being driven by institutional racism. I believe this is dangerously untrue and actually impedes effective thinking about how to prevent future outrages.

There are some unwelcome statistics which at least partly explain why young black men are more likely to be stopped by the police:

… the percentage of black males 15-24 in the general population is about 1%. If you add “mixed”, which is reasonable in order to correspond to a policeman’s category of “nonwhite”, it goes to about 2%.

That 2% is responsible for almost all of 52% of U.S. homicides. Or, to put it differently, by these figures a young black or “mixed” male is roughly 26 times more likely to be a homicidal threat than a random person outside that category – older or younger blacks, whites, hispanics, females, whatever. If the young male is unambiguously black that figure goes up, about doubling.

26 times more likely. That’s a lot. It means that even given very forgiving assumptions about differential rates of conviction and other factors we probably still have a difference in propensity to homicide (and other violent crimes for which its rates are an index, including rape, armed robbery, and hot burglary) of around 20:1. That’s being very generous, assuming that cumulative errors have thrown my calculations off by up to a factor of 6 in the direction unfavorable to my argument.

[…]

Yeah, by all means let’s demilitarize the police. But let’s also stop screaming “racism” when, by the numbers, the bad shit that goes down with black male youths reflects a cop’s rational fear of that particular demographic – and not racism against blacks in general. Often the cops in these incidents are themselves black, a fact that media accounts tend to suppress.

What we can actually do about the implied problem is a larger question. (Decriminalizing drugs would be a good start.) But it’s one we can’t even begin to address rationally without seeing past the accusation of racism.

July 31, 2014

NFL to test player tracking RFID system this year

Filed under: Football, Media, Technology — Nicholas @ 07:01

Tom Pelissero talks about the new system which will be installed at 17 NFL stadiums this season:

The NFL partnered with Zebra Technologies, which is applying the same radio-frequency identification (RFID) technology that it has used the past 15 years to monitor everything from supplies on automotive assembly lines to dairy cows’ milk production.

Work is underway to install receivers in 17 NFL stadiums, each connected with cables to a hub and server that logs players’ locations in real time. In less than a second, the server can spit out data that can be enhanced graphically for TV broadcasts with the press of a button.

[…]

TV networks have experimented in recent years with route maps and other visual enhancements of players’ movements. But league-wide deployment of the sensors and all the data they produce could be the most significant innovation since the yellow first-down line.

The data also will go to the NFL “cloud,” where it can be turned around in seconds for in-stadium use and, eventually, a variety of apps and other visual and second-screen experiences. Producing a set of proprietary statistics on players and teams is another goal, Shah said.

NFL teams — many already using GPS technology to track players’ movements, workload and efficiency in practice — won’t have access to the in-game information in 2014 because of competitive considerations while the league measures the sustainability and integrity of the data.

“But as you imagine, longer-term, that is the vision,” Shah said. “Ultimately, we’re going to have a whole bunch of location-based data that’s coming out of live-game environment, and we want teams to be able to marry that up to what they’re doing in practice facilities themselves.”

Zebra’s sensors are oblong, less than the circumference of a quarter and installed under the top cup of the shoulder pad, Stelfox said. They blink with a signal 25 times a second and run on a watch battery. The San Francisco 49ers and Detroit Lions and their opponents wore them for each of the two teams’ home games last season as part of a trial run.

About 20 receivers will be placed around the bands between the upper and lower decks of the 17 stadiums that were selected for use this year. They’ll provide a cross-section of environments and make sure the technology is operational across competitive settings before full deployment.
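
For a sense of the data volumes behind the “25 times a second” figure, here is a rough sketch; the TagReading structure and the 22-players assumption are mine, for illustration only, and are not details from Zebra or the NFL:

    from dataclasses import dataclass

    @dataclass
    class TagReading:
        """One position fix from a player's shoulder-pad tag (hypothetical format)."""
        tag_id: str
        timestamp_ms: int
        x_yards: float
        y_yards: float

    BLINKS_PER_SECOND = 25   # each tag signals 25 times a second, per the article
    PLAYERS_ON_FIELD = 22    # both teams; an assumption for the estimate

    fixes_per_second = BLINKS_PER_SECOND * PLAYERS_ON_FIELD
    print(fixes_per_second)              # 550 position fixes per second
    print(fixes_per_second * 60 * 60)    # ~2 million fixes per hour of tracking

Even at that rate the stream is modest by modern standards, which helps explain how a single on-site server can turn the data around “in less than a second.”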

July 23, 2014

In statistical studies, the size of the data sample matters

Filed under: Health, Science, USA — Nicholas @ 08:49

In the ongoing investigation into why Westerners — especially North Americans — became obese, some of the early studies are being reconsidered. For example, I’ve mentioned the name of Dr. Ancel Keys a couple of times recently: he was the champion of the low-fat diet and his work was highly influential in persuading government health authorities to demonize fat in pursuit of better health outcomes. He was so successful as an advocate for this idea that his study became one of the most frequently cited in medical science. A brilliant success … that unfortunately flew far ahead of its statistical evidence:

So Keys had food records, although that coding and summarizing part sounds a little fishy. Then he followed the health of 13,000 men so he could find associations between diet and heart disease. So we can assume he had dietary records for all 13,000 of them, right?

Uh … no. That wouldn’t be the case.

The poster-boys for his hypothesis about dietary fat and heart disease were the men from the Greek island of Crete. They supposedly ate the diet Keys recommended: low-fat, olive oil instead of saturated animal fats and all that, you see. Keys tracked more than 300 middle-aged men from Crete as part of his study population, and lo and behold, few of them suffered heart attacks. Hypothesis supported, case closed.

So guess how many of those 300-plus men were actually surveyed about their eating habits? Go on, guess. I’ll wait …

And the answer is: 31.

Yup, 31. And that’s about the size of the dataset from each of the seven countries: somewhere between 25 and 50 men. It’s right there in the paper’s data tables. That’s a ridiculously small number of men to survey if the goal is to accurately compare diets and heart disease in seven countries.

[…]

Getting the picture? Keys followed the health of more than 300 men from Crete. But he only surveyed 31 of them, with one of those surveys taken during the meat-abstinence month of Lent. Oh, and the original seven-day food-recall records weren’t available later, so he swapped in data from an earlier paper. Then to determine fruit and vegetable intake, he used data sheets about food availability in Greece during a four-year period.

And from this mess, he concluded that high-fat diets cause heart attacks and low-fat diets prevent them.

Keep in mind, this is one of the most-cited studies in all of medical science. It’s one of the pillars of the Diet-Heart hypothesis. It helped to convince the USDA, the AHA, doctors, nutritionists, media health writers, your parents, etc., that saturated fat clogs our arteries and kills us, so we all need to be on low-fat diets – even kids.

Yup, Ancel Keys had a tiny one … but he sure managed to screw a lot of people with it.

H/T to Amy Alkon for the link.
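
The post’s headline point, that the size of the sample matters, is easy to put numbers on. A minimal sketch of how the uncertainty of a sample mean shrinks with sample size; the 10-point standard deviation is a made-up figure, used only to show the scaling:

    import math

    # Standard error of a sample mean scales as 1/sqrt(n).
    sd = 10.0   # assumed spread of the measured quantity (illustrative)

    for n in (31, 300, 13_000):
        se = sd / math.sqrt(n)
        print(f"n = {n:>6}:  standard error = {se:.2f}, 95% CI roughly +/- {1.96 * se:.1f}")

    # n =     31:  standard error = 1.80, 95% CI roughly +/- 3.5
    # n =    300:  standard error = 0.58, 95% CI roughly +/- 1.1
    # n =  13000:  standard error = 0.09, 95% CI roughly +/- 0.2

Going from the 300-odd Cretan men Keys followed down to the 31 he actually surveyed roughly triples the uncertainty, before any of the other problems described above are even counted.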

July 22, 2014

23% of US children live in poverty … except that’s not actually true

Filed under: Government, USA — Nicholas @ 07:48

In Forbes, Tim Worstall explains why the shocking headline rate of child poverty in the US is not correct (and that’s a good thing):

The annual Kids Count, from the Annie E. Casey Foundation, is out and many are reporting that it shows that 23% of American children are living in poverty. I’m afraid that this isn’t quite true and the mistaken assumption depends on one little intricate detail of how US poverty statistics are constructed. This isn’t a snarl at Kids Count, they report the numbers impartially, it’s the interpretation that some are putting on those numbers that is in error. For the reality is that, by the way that the US measures poverty, it does a pretty good job in alleviating child poverty. The real rate of children actually living in poverty, after all the aid they get to not live in poverty, is more like 2 or 3% of US children. Which is pretty good for government work.

[…]

However, this is not the same thing as stating that 23% of US children are living in poverty. For there’s a twist in the way that US poverty statistics are compiled.

Everyone else measures poverty as being below 60% of median equivalised household disposable income. This is a measure of relative poverty, how much less do you have than the average? The US uses a different measure, based upon historical accident really, which is a measure of absolute poverty. How many people have less than $x to live upon? There’s also a second difference. Everyone else measures poverty after the influence of the tax and the benefits system upon those incomes. The US measures only cash income (both market income and also cash from the government). It does not measure the influence of benefits that people receive in kind (ie, in goods or services) nor through the tax system. And the problem with this is that the major poverty alleviation schemes in the US are, in rough order, Medicaid, the EITC, SNAP (or food stamps) and then Section 8 housing vouchers. Three of which are goods or services in kind and the fourth comes through the tax system.
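
The difference between the two conventions is easier to see with a toy household. A minimal sketch with entirely made-up numbers, contrasting the US-style absolute, cash-only test with the relative, post-transfer test used elsewhere:

    # Toy household; every figure is invented for illustration.
    market_income = 10_000        # cash from work
    cash_benefits = 3_000         # cash transfers (these ARE counted by the US measure)
    in_kind_and_tax = 12_000      # EITC, SNAP, Medicaid, Section 8 (NOT counted by the US measure)

    us_poverty_line = 20_000            # hypothetical absolute threshold for this family size
    median_disposable_income = 40_000   # hypothetical national median, equivalised

    # US-style measure: absolute line, cash income only
    poor_us_measure = (market_income + cash_benefits) < us_poverty_line

    # "Everyone else" measure: 60% of median, income after all transfers
    disposable_income = market_income + cash_benefits + in_kind_and_tax
    poor_relative_measure = disposable_income < 0.6 * median_disposable_income

    print(poor_us_measure, poor_relative_measure)   # True False

The same household is “poor” under one convention and not under the other, which is essentially the whole gap between the 23% headline and the 2-3% figure Worstall arrives at.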

July 21, 2014

The science of ballistics, the art of war, and the birth of the assault rifle

Filed under: History, Military, Technology — Nicholas @ 15:47

Defence With A “C” summarizes the tale of how we got to the current suite of modern military small arms. It’s a long story, but if you’re interested in firearms, it’s a fascinating one.

To understand why we’ve arrived where we are now with the NATO standard 5.56mm calibre round you have to go all the way back to the war of 1939-1945. Much study of this conflict would later inform decision making surrounding the adoption of the 5.56, but for now there was one major change that took place which would set the course for the future.

The German Sturmgewehr 44 is widely accepted as the world’s first true assault rifle. Combining the ability to hit targets out to around 500 yards with individual shots in a semi-automatic mode, as well as the ability to fire rapidly in fully automatic mode (almost 600 rounds per minute), the StG 44 represented a bridge between short ranged sub-machine guns and longer ranged bolt action rifles.

[…]

After the second world war the US army began conducting research to help it learn the lessons of its previous campaigns, as well as preparing it for potential future threats. As part of this effort it began to contract the services of the Operations Research Office (ORO) of Johns Hopkins University in Baltimore, Maryland, for help in conducting the scientific analysis of various aspects of ground warfare.

On October 1st, 1948, the ORO began Project ALCLAD, a study into the means of protecting soldiers from the “casualty producing hazards of warfare”. In order to determine how best to protect soldiers from harm, it was first necessary to investigate the major causes of casualties in war.

After studying large quantities of combat and casualty reports, ALCLAD concluded that first and foremost the main danger to combat soldiers was from high explosive weapons such as artillery shells, fragments from which accounted for the vast majority of combat casualties. It also determined that casualties inflicted by small arms fire were essentially random.

Allied troops in WW2 had been generally armed with full-sized bolt action rifles (while US troops were being issued the M1 Garand), optimized to be accurate out to 600 yards or more, yet most actual combat was at much shorter ranges than that. Accuracy is directly affected by the stress, tension, distraction, and all-around confusion of the battlefield: even at such short ranges, riflemen required many shots to be expended in hopes of inflicting a hit on an enemy. The ORO ran a series of tests to simulate battle conditions for both expert and ordinary riflemen and found some unexpected results:

A number of significant conclusions were thus drawn from these tests. Firstly, that accuracy — even for prone riflemen, some of them expert shots, shooting at large static targets — was poor beyond ranges of about 250 yards. Secondly, that under simulated conditions of combat shooting an expert level marksman was no more accurate than a regular shot. And finally that the capabilities of the individual shooters were far below the potential of the rifle itself.

This in turn — along with the analysis of missed shots caught by a screen behind the targets — led to three further conclusions.

First, that any effort to try and make the infantry’s general purpose weapon more accurate (such as expensive barrels) was largely a waste of time and money. The weapon was, and probably always would be, inherently capable of shooting much tighter groups than the human behind it.

Second, that there was a practical limit to the value of marksmanship training for regular infantry soldiers. Beyond a certain basic level of training any additional hours were of limited value*, and the number of hours required to achieve a high level of proficiency would be prohibitive. This was particularly of interest for planning in the event of another mass mobilisation for war.

July 15, 2014

The attraction (and danger) of computer-based models

Filed under: Environment, Science, Technology — Nicholas @ 00:02

Warren Meyer explains why computer models can be incredibly useful tools, but they are not the same thing as an actual proof:

    Among the objections, including one from Green Party politician Chit Chong, were that Lawson’s views were not supported by evidence from computer modeling.

I see this all the time. A lot of things astound me in the climate debate, but perhaps the most astounding has been to be accused of being “anti-science” by people who have such a poor grasp of the scientific process.

Computer models and their output are not evidence of anything. Computer models are extremely useful when we have hypotheses about complex, multi-variable systems. It may not be immediately obvious how to test these hypotheses, so computer models can take these hypothesized formulas and generate predicted values of measurable variables that can then be used to compare to actual physical observations.

[…]

The other problem with computer models, besides the fact that they are not and cannot constitute evidence in and of themselves, is that their results are often sensitive to small changes in tuning or setting of variables, and that these decisions about tuning are often totally opaque to outsiders.

I did computer modelling for years, though of markets and economics rather than climate. But the techniques are substantially the same. And the pitfalls.

Confession time. In my very early days as a consultant, I did something I am not proud of. I was responsible for a complex market model based on a lot of market research and customer service data. Less than a day before the big presentation, and with all the charts and conclusions made, I found a mistake that skewed the results. In later years I would have the moral courage and confidence to cry foul and halt the process, but at the time I ended up tweaking a few key variables to make the model continue to spit out results consistent with our conclusion. It is embarrassing enough I have trouble writing this for public consumption 25 years later.

But it was so easy. A few tweaks to assumptions and I could get the answer I wanted. And no one would ever know. Someone could stare at the model for an hour and not recognize the tuning.
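
The tuning sensitivity Meyer describes is easy to demonstrate with a toy model. This sketch has nothing to do with any real climate or market model; it simply shows how a single, opaque tuning parameter can move the “conclusion”:

    # Toy projection model: extrapolate the average historical step,
    # shrunk each period by a tunable "damping" factor.
    observations = [1.0, 1.2, 1.3, 1.5, 1.6]   # made-up historical series

    def project(obs, damping, steps=10):
        step = (obs[-1] - obs[0]) / (len(obs) - 1)
        value = obs[-1]
        for _ in range(steps):
            value += step
            step *= damping
        return value

    print(round(project(observations, damping=0.95), 2))  # 2.8  -- "strong continued growth"
    print(round(project(observations, damping=0.60), 2))  # 1.97 -- "growth levels off"

Neither damping value looks obviously wrong from the outside, and someone staring at the code for an hour might never think to question it, which is exactly the problem Meyer is confessing to.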
