Quotulatiousness

November 6, 2012

Encouraging and exhorting didn’t work, so now they’re trying to shame you into voting

Filed under: Politics, USA — Nicholas @ 09:54

At Techdirt, Mike Masnick reports on the latest attempt to get out the vote:

It’s election day. While your actual ballot is (supposed to be) secret, a lot of people don’t know that whether or not you voted at all is public information. A few weeks back, On the Media covered some ways that campaigns try to get out the vote and looked at research suggesting that sending people a “voter report card” showing when they’ve voted in the past was a somewhat effective way of shaming them into voting. An even more extreme example was given as well: a letter that specifically shows how often your neighbors have voted. In the piece, OTM producer Chris Neary noted that while such things were effective in the lab, people shouldn’t expect such letters for real, because, while they may be effective in getting out the vote, they also freak people out on privacy grounds, and no campaign wants to risk freaking people out:

    And, by the way Brooke, you’ll never get that last letter. Campaigns hate to send out anything that prompts virulent hate mail in return, and one of those researchers got some of that mail.

Except… Neary has now posted an apology blog post after some OTM listeners reached out to share exactly the kinds of mailers discussed. While campaigns might shy away from such tactics, third-party organizations apparently read the exact same research and took it to heart — they’re a lot less worried about hate mail.

All of the various political parties, pressure groups, “public interest” organizations and the rest are desperate to ramp up voter participation in the election. It’s frequently pointed out that voters are apathetic, and the declining turnout percentage over the last thirty-plus years is trotted out as evidence. However, as Katherine Mangu-Ward points out in this month’s issue of Reason, for the vast majority of Americans Your Vote Doesn’t Count:

Let’s start with the basics: Your vote will almost certainly not determine the outcome of any public election. I’m not talking about conspiracy theories regarding rigged elections or malfunctioning voting machines — although both of those things have happened and will happen again. I’m not talking about swing states or Supreme Court power grabs or the weirdness of the Electoral College. I’m talking about pure, raw math.

In all of American history, a single vote has never determined the outcome of a presidential election. And there are precious few examples of any other elections decided by a single vote. A 2001 National Bureau of Economic Research paper by economists Casey Mulligan and Charles Hunter looked at 56,613 contested congressional and state legislative races dating back to 1898. Of the 40,000 state legislative elections they examined, encompassing about 1 billion votes cast, only seven were decided by a single vote (two were tied). A 1910 Buffalo contest was the lone single-vote victory in a century’s worth of congressional races. In four of the 10 ultra-close campaigns flagged in the paper, further research by the authors turned up evidence that subsequent recounts unearthed margins larger than the official record initially suggested.

The numbers just get more ridiculous from there. In a 2012 Economic Inquiry article, Columbia University political scientist Andrew Gelman, statistician Nate Silver, and University of California, Berkeley, economist Aaron Edlin use poll results from the 2008 election cycle to calculate that the chance of a randomly selected vote determining the outcome of a presidential election is about one in 60 million. In a couple of key states, the chance that a random vote will be decisive creeps closer to one in 10 million, which drags voters into the dubious company of people gunning for the Mega-Lotto jackpot. The authors optimistically suggest that even with those terrible odds, you may still choose to vote because “the payoff is the chance to change national policy and improve (one hopes) the lives of hundreds of millions, compared to the alternative if the other candidate were to win.” But how big does that payoff have to be to make voting worthwhile?
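
The expected-value arithmetic behind that question is short enough to sketch in a few lines of Python. The one-in-60-million odds come from the article; the dollar figures below are mine, purely for illustration:

    # Back-of-the-envelope expected value of a single vote, using the
    # one-in-60-million odds cited above. Dollar figures are invented.
    P_DECISIVE = 1 / 60_000_000   # chance a random vote decides the election

    # Suppose you value your candidate winning at $1 billion of improved
    # national policy (however you personally judge that).
    payoff = 1_000_000_000

    print(f"Expected value of your vote: ${P_DECISIVE * payoff:.2f}")  # ~$16.67

    # For your vote to be worth even one hour at $20, the payoff you
    # assign to your candidate winning would have to exceed:
    print(f"Break-even payoff: ${20 * 60_000_000:,}")  # $1,200,000,000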

October 29, 2012

US 3rd quarter GDP number less substantial than it appears

Filed under: Economics, Media, USA — Nicholas @ 11:17

A bit of a downer for what would otherwise be good economic news:

Chart from the Mercatus Center at George Mason University.

Polls are less accurate thanks to a 9% response rate (and falling)

Filed under: Media, Politics — Nicholas @ 10:41

Iowahawk has been posting a daily Twitter update reminding people that the reliability of political polls is far lower than it used to be:

At Macleans, Colby Cosh digs a bit deeper to find out how polling organizations are responding to their approaching-flatline response rate:

The boffins are becoming increasingly reliant on “non-probability samples” like internet panel groups, which give only narrow pictures of biased subsets of the overall population. The good news is that they can take many such pictures and use modern computational techniques to combine them and make pretty decent population inferences. “Obama is at 90 per cent with black voters in Shelbyville; 54 per cent among auto workers; 48 per cent among California epileptics; 62 per cent with people whose surnames start with the letter Z…” Pile up enough subsets of this sort, combined with knowledge of their relative sizes and other characteristics, and you can build models which let you guess at the characteristics of the entire electorate (or, if you’re doing market research, the consumerate).

As a matter of truth in advertising, however, pollsters have concluded that they shouldn’t report the uncertainty of these guesses by using the traditional term “margin of error.” There is an extra layer of inference involved in the new techniques: they offer what one might call a “margin of error, given that the modelling assumptions are correct.” And there’s a philosophical problem, too. The new techniques are founded on what is called a “Bayesian” basis, meaning that sample data must be combined explicitly with a prior state of knowledge to derive both estimates of particular quantities and the uncertainty surrounding them.

[. . .]

Pollsters are trying very hard to appear as transparent and up-front about their methods as they were in the landline era. When it comes to communicating with journalists, who are by and large a gang of rampaging innumerates, I don’t really see much hope for this; polling firms may not want their methods to be some sort of mysterious “black box,” but the nuances of Bayesian multilevel modelling, even to fairly intense stat hobbyists, might as well be buried in about a mile of cognitive concrete. Our best hope is likely to be the advent of meta-analysts like (he said through tightly gritted teeth) Nate Silver, who are watching and evaluating polling agencies according to their past performance. That is, pretty much exactly as if they were “black boxes.” In the meantime, you will want to be on the lookout for that phrase “credibility interval.” As the American Association for Public Opinion Research says, it is, in effect, a “[news] consumer beware” reminder.
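
The “pile up enough subsets” trick Cosh describes is, at its simplest, post-stratification: weight each subgroup’s estimate by its known share of the electorate. Here is a minimal sketch; the real methods are Bayesian and multilevel, and every number below is invented:

    # Minimal post-stratification: combine estimates from biased
    # subsamples using known subgroup shares. All numbers are invented;
    # real pollsters use far richer (Bayesian, multilevel) models.
    subgroups = {
        # name: (estimated support, share of electorate)
        "urban":    (0.62, 0.30),
        "suburban": (0.51, 0.45),
        "rural":    (0.38, 0.25),
    }

    estimate = sum(support * share for support, share in subgroups.values())
    print(f"Modelled overall support: {estimate:.1%}")  # ~51%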

Twenty million broken windows

Filed under: Economics, Environment, Media — Nicholas @ 09:03

At Forbes, Tim Worstall patiently explains that the damage from Hurricane Sandy (or any major storm) will appear to boost GDP, because GDP only measures the money spent to repair the damage, not the wealth destroyed or the opportunities foregone because of it:

We know very well that Hurricane (or Frankenstorm as some are calling it) Sandy will leave a trail of destruction across parts of the US today. There will almost certainly be deaths, as there have been in the hurricane’s passage across the Caribbean. And there will also be a boost to the US economy. Which is really evidence of quite how wrong we are in the way that we measure the economy.

[. . .]

The problem with this is that it is only true because of the way that we calculate GDP. In our working of the numbers we assume that it’s final consumption at market prices: that is, the value that consumers put on everything. However, this is not true of government spending. It’s very difficult indeed to work out what government spending is actually worth: since we have no choice in it, there’s no market price nor accurate valuation from the people who actually get whatever is produced. Some government spending is most certainly worth more than the actual amount spent. The court system, say: a pre-requisite of our having a complex society at all. Other parts not so much: what is the true value of a diversity adviser, for example? So what we actually do is value all government spending, for GDP purposes, at the cost of that actual spending. Government spends $100, GDP goes up by $100. That’s just how we define it. This can cause amusement in measuring the success of welfare programs, for example. Even the Census Bureau admits that some of the people who receive Medicaid, or food stamps, value what they receive at less than the cost of providing it.

[. . .]

Now imagine that Hurricane Sandy does $10 billion of damage to that wealth (for our purposes it doesn’t matter whether it’s $100 billion or $1 trillion, although this obviously matters to everyone outside this example). The US is now worth $99.99 trillion. GDP might rise to $15.01 trillion as we repair that damage. But we’re not in fact any richer at all, despite the fact that GDP has gone up. What has actually happened is that some of our stock of wealth has been destroyed and we’re having to do more work in order to rebuild it. This is exactly the same as our pollution example. We’re measuring what we produce but not the capital stock of what we have (or had).

Yes, the rebound from Sandy may well provide a boost to the economy. But that’s a function of the way that we measure that economy, not a real boost in our general wealth.
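
Worstall’s stock-versus-flow distinction reduces to a few lines of arithmetic. A toy sketch using his round numbers:

    # Toy illustration: GDP measures the flow of production, not the
    # stock of wealth. Round numbers taken from the article.
    wealth = 100e12   # $100 trillion of accumulated US wealth (a stock)
    gdp    = 15e12    # $15 trillion of annual production (a flow)

    damage  = 10e9    # the storm destroys $10 billion of the stock
    rebuild = 10e9    # ...and we spend $10 billion repairing it

    wealth -= damage      # wealth falls immediately
    gdp    += rebuild     # measured GDP *rises* as repairs are counted
    wealth += rebuild     # rebuilding merely restores the old stock

    print(f"GDP:    ${gdp / 1e12:.2f} trillion (up)")
    print(f"Wealth: ${wealth / 1e12:.2f} trillion (exactly where it started)")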

October 13, 2012

Sometimes there’s a logical reason to discriminate in pricing

Filed under: Business, Europe — Nicholas @ 11:00

Tim Harford on the recent EU ruling that insurers will no longer be allowed to consider gender in setting insurance rates:

He: And not before time. It’s outrageous that I have to pay more for my car insurance than you do. I’m a perfectly safe driver.
She: Of course you are, dear. But you also drive a lot more than I do, which is not unusual for men. Since you drive more miles you are exposing yourself to the risk of more accidents.
He: Am I? Oh.
She: This is one of the reasons that men have more accidents than women. Another, of course, is that some young men are aggressive, overconfident idiots. But in any case you should probably put the money you save into your pension pot because you’re going to need it when you get stuck with the low annuity rates we women have had to put up with.
He: But my life expectancy is shorter. I deserve much higher annuity rates. That’s outrageous.
She: So you’re outraged that discrimination against you hasn’t ended earlier, and equally outraged that discrimination in your favour isn’t going to continue for ever?

[. . .]

She: We shouldn’t get too comfortable, though. Insurers will start looking at other correlates of risk. The obvious one is how far people drive: men tend to drive more than women. Then there are issues such as the choice of a sports car rather than a people carrier. Such distinctions may carry more weight in determining your premium than they do now. As for annuities, if they can’t pay any attention to your sex they might start paying more attention to your cholesterol.
He: I can see that this might get very intrusive.
She: It might. Or it might get very clumsy. Mortgage lenders used to be accused of using geography as a way of discriminating against minorities in the US, since ethnicity and postcode can be closely correlated. There are modern analogies: since women are on average smaller than men, perhaps in the future premiums will be proportionate to height. Stranger things have happened.

A timely reminder that economic statistics only paint part of the overall picture

Filed under: Economics, Technology — Nicholas @ 10:52

Tim Worstall at the Adam Smith Institute blog:

Almost at random from my RSS feed, two little bits of information that tell of the quite astonishing economic changes going on around us at present. The first, that the world is now pretty much wired:

    According to new figures published by the International Telecommunications Union on Thursday, the global population has purchased 6 billion cellphone subscriptions.

Note that this is not phones, this is actual subscriptions. It’s not quite everyone, because there are 7 billion humans and there’s always the occasional Italian with two phones, one for the wife and one for the mistress. But in a manner that has never before been true, almost all of the population of the planet are, in theory at least, able to speak to any other member of that population. The second:

    The most recent CTIA data, obtained by All Things D, shows that US carriers handled 1.16 trillion megabytes of data between July 2011 and June 2012, up 104 percent from the 568 billion megabytes used between July 2010 and June 2011.

Within that explosive growth of basic communications we’re also seeing the smartphone sector boom. Indeed, I’ve seen figures that suggest that over half of new activations are now smartphones, capable of fully interacting with the internet.

One matter to point to is how fast this all is. It really is only 30-odd years: from mobile telephony being the preserve of the rich, with a car battery to power it, to something that the rural peasant of India or China is more likely to own than not. Trickle-down economics might have a bad reputation, but trickle-down technology certainly seems to work.

October 4, 2012

Claim: more women die of domestic violence than cancer

Filed under: Law, USA — Nicholas @ 12:43

A friend of mine posted this claim on Twitter earlier today and it struck me as being incredibly unlikely. A quick Google search turns up the following numbers for causes of death in the United States in 2009 (total 2,437,163):

  • Heart disease: 599,413
  • Cancer: 567,628
  • Chronic lower respiratory diseases: 137,353
  • Stroke (cerebrovascular diseases): 128,842
  • Accidents (unintentional injuries): 118,021
  • Alzheimer’s disease: 79,003
  • Diabetes: 68,705
  • Influenza and Pneumonia: 53,692
  • Nephritis, nephrotic syndrome, and nephrosis: 48,935
  • Intentional self-harm (suicide): 36,909

If we assume that exactly half the reported deaths from cancer are women, that says 283,814 women died of various forms of cancer in 2009. How does that stack up against the murder statistics (which would include domestic violence along with all other killings)?

13,636

One of these numbers is not like the other (and of the reported 13,636 homicides, 77% of the victims were male).

This is not to diminish the dangers of domestic violence, but throwing out numbers as my friend did doesn’t actually help the situation.

September 22, 2012

Mismeasuring inequality

Filed under: Economics, Media, Politics, USA — Nicholas @ 10:20

If you haven’t encountered a journalist or an activist going on about the Gini Coefficient, you certainly will soon, as it’s become a common tool to promote certain kinds of political or economic action. It is also useful for pushing certain agendas because, while the numbers appear to show one thing clearly (the relative income inequality of a population), they hide nearly as much as they reveal:

The figures they use for a comparison are here. Looking at those you might think, well, if the US is at 0.475, Sweden is at 0.23 (yes, the number of 23.0 for Sweden is the same as 0.23 in this sense) then given that a lower number indicates less inequality then Sweden is a less unequal place than the US. You would of course be correct in your assumption: but not because of these numbers.

For the US number is before taxes and before benefits. The Swedish number is after all taxes and all benefits. So, the US number is what we call “market income”, before all the things we do to shift money around from rich to poor, and the Swedish number (in fact, the numbers for all other countries) is after all the things we do to reduce inequality.

[. . .]

The US is reporting market inequality, before the effects of taxes and benefits, the Europeans are reporting the inequality after the effect of taxes and benefits.

[. . .]

Which brings us to the 300 million people in the US. Is it really fair to compare income inequality among 300 million people with inequality among the 9 million of Sweden? Quite possibly a more interesting comparison would be between the 300 million of the US and the 500 million of the European Union. Or the smaller number in the EU 15, thus leaving out the ex-communist states with their own special problems. Not that it matters all that much, as the two numbers for the Gini are the same: 0.3. Note again that this is post-tax and post-benefit. On this measure the US is at 0.38. So, yes, the US is indeed more unequal than Europe. But by a lot smaller margin than people generally recognise: or than the numbers that are generally bandied about.

Which brings us to the second point. Even here the US number is (marginally) over-stated. For even in the post-tax and post-benefit numbers, the US is still an outlier in the statistical methods used. In looking at inequality and poverty in the US, we include the cash that poor people are given to alleviate their poverty. But we do not include the things that people are given in kind: Medicaid, SNAP, Section 8 and so on. It’s possible (I’m not sure, I’m afraid) that we don’t include the EITC either. We certainly don’t in the poverty statistics, but might in the inequality numbers. All of the other countries do include the effects of such policies, largely because they don’t offer benefits in kind: they just give the poor more money and tell them to buy it themselves. This obviously turns up in figures of how much money the poor have.
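
For anyone who wants to see what the coefficient actually measures, it can be computed in a few lines. A sketch with invented incomes; note how taxes and transfers move the number, which is exactly why the pre-tax/post-transfer mismatch above matters:

    # Gini coefficient: mean absolute difference between all pairs of
    # incomes, divided by twice the mean. Incomes below are invented.
    def gini(incomes):
        n = len(incomes)
        mean = sum(incomes) / n
        diff_sum = sum(abs(a - b) for a in incomes for b in incomes)
        return diff_sum / (2 * n * n * mean)

    market     = [10, 20, 30, 50, 90]   # pre-tax, pre-benefit incomes
    disposable = [22, 28, 34, 46, 70]   # same people, after taxes/transfers

    print(f"Market Gini:     {gini(market):.2f}")      # 0.38
    print(f"Disposable Gini: {gini(disposable):.2f}")  # 0.23 -- same society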

September 18, 2012

Canada ranks fifth in the world for economic freedom

Filed under: Australia, Cancon, Economics, Liberty, USA — Nicholas @ 12:19

The annual Fraser Institute report on world economic freedom may confirm what a lot of Canadians have been noticing: we’re now much more free than our American friends, at least by the measurements tracked in this series of rankings (PDF):

  • In the chain-linked index, average economic freedom rose from 5.30 (out of 10) in
    1980 to 6.88 in 2007. It then fell for two consecutive years, resulting in a score of
    6.79 in 2009 but has risen slightly to 6.83 in 2010, the most recent year available.
    It appears that responses to the economic crisis have reduced economic freedom
    in the short term and perhaps prosperity over the long term, but the upward
    movement this year is encouraging.
  • In this year’s index, Hong Kong retains the highest rating for economic freedom,
    8.90 out of 10. The other top 10 nations are: Singapore, 8.69; New Zealand, 8.36;
    Switzerland, 8.24; Australia, 7.97; Canada, 7.97; Bahrain, 7.94; Mauritius, 7.90;
    Finland, 7.88; and Chile, 7.84.
  • The rankings (and scores) of other large economies in this year’s index are the United
    Kingdom, 12th (7.75); the United States, 18th (7.69); Japan, 20th (7.64); Germany,
    31st (7.52); France, 47th (7.32); Italy, 83rd (6.77); Mexico, 91st, (6.66); Russia, 95th
    (6.56); Brazil, 105th (6.37); China, 107th (6.35); and India, 111th (6.26).
  • The scores of the bottom ten nations in this year’s index are: Venezuela, 4.07;
    Myanmar, 4.29; Zimbabwe, 4.35; Republic of the Congo, 4.86; Angola, 5.12;
    Democratic Republic of the Congo, 5.18; Guinea-Bissau, 5.23; Algeria, 5.34; Chad,
    5.41; and, tied for 10th worst, Mozambique and Burundi, 5.45.
  • The United States, long considered the standard bearer for economic freedom
    among large industrial nations, has experienced a substantial decline in economic
    freedom during the past decade. From 1980 to 2000, the United States was generally
    rated the third freest economy in the world, ranking behind only Hong Kong and
    Singapore. After increasing steadily during the period from 1980 to 2000, the chain-linked
    EFW rating of the United States fell from 8.65 in 2000 to 8.21 in 2005 and
    7.70 in 2010. The chain-linked ranking of the United States has fallen precipitously
    from second in 2000 to eighth in 2005 and 19th in 2010 (unadjusted ranking of 18th).

September 15, 2012

Our collective maladjusted attitude to small risks

Filed under: Economics, Europe, Italy — Nicholas @ 09:44

Tim Harford shows that you can learn a lot about economics by looking at the process of hiring a rental car:

Here’s a puzzle. If it costs €500 to hire a €25,000 car, how much should you expect to pay to hire a €50 child’s car seat to go with it? Arithmetic says €1; experience suggests you will pay 50 times that.

This was just one of a series of economics posers that raised their heads during my summer vacation – indeed, within a few minutes of clearing customs in Milan. One explanation is that the apparently extortionate price reflects some unexpected cost of cleaning, fitting or insuring the seat – possible but implausible. Or perhaps parents with young families are less sensitive to price than other travellers. This, again, is possible but unconvincing. In other contexts, such as package holidays and restaurants, families with children are often given discounts on the assumption that money is tight and bargains keenly sought.

[. . .]

After paying through the nose for the car seat we were alerted to a risk. “If your car is damaged or stolen, you are liable for the first €1,000 of any loss.” Gosh. I hadn’t really given the matter any thought but the danger suddenly felt very real. And for just €20 a day, or something like that, I could make that danger vanish.

[. . .]

What’s happening here? Behavioural economists have long known about “loss aversion”: we’re disproportionately anxious at the prospect of small but salient risks. The car hire clerk carefully created a very clear image of a loss, even though that loss was unlikely. I haven’t paid such fees for years and have saved enough cash to write off a couple of hire cars in future.
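
Harford’s implied calculation is worth making explicit. A sketch; the one number the clerk never mentions is the actual probability of a claim:

    # When is a €20/day damage waiver worth buying against a €1,000
    # excess? Only if your daily chance of a claim exceeds the ratio.
    excess = 1000   # euros you are liable for if the car is damaged
    waiver = 20     # euros per day to make that liability vanish

    print(f"Break-even claim probability: {waiver / excess:.0%} per day")

    # 2% per day is roughly one damage claim per 50 rental days, far
    # above any plausible real-world rate -- which is why declining the
    # waiver, as Harford does, wins on average.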

September 8, 2012

Gamers are not superstitious (all the time) about their “lucky dice”

Filed under: Gaming — Nicholas @ 00:19

Many gamers are highly protective of the “lucky D20” they use for certain die rolls. In some cases, that’s not superstition at all; it’s taking advantage of a manufacturing flaw in polyhedral dice:

One of the biggest manufacturers of RPG dice is a company called Chessex. They make a huge variety of dice, in all kinds of different colors and styles. These dice are put through rock tumblers that give them smooth edges and a shiny finish, so they look great. Like many RPG fans, I own a bunch of them.

I also own a set of GameScience dice. They’re not polished, painted or smoothed, so they’re supposed to roll better than Chessex dice, producing results closer to true random. I like them, but mostly because they don’t roll too far, and their sharp edges look cool. I couldn’t tell you if they truly produce more random results.

But the good folks over at the Awesome Dice Blog can. They recently completed a massive test between a Chessex d20 and a GameScience d20, rolling each over 10,000 times, by hand, to determine which rolls closer to true.
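
Their comparison is, at bottom, a goodness-of-fit question, and the standard tool is a chi-square test. A sketch of how one might run it (with simulated rolls, not their data):

    # Chi-square goodness-of-fit test for a d20: are all 20 faces
    # equally likely? Rolls are simulated here; the Awesome Dice test
    # used 10,000 hand-rolled results per die.
    import random
    from scipy.stats import chisquare

    rolls = [random.randint(1, 20) for _ in range(10_000)]
    observed = [rolls.count(face) for face in range(1, 21)]
    expected = [len(rolls) / 20] * 20   # 500 of each face if fair

    stat, p_value = chisquare(observed, expected)
    print(f"chi-square = {stat:.1f}, p = {p_value:.3f}")
    # A small p-value (say, below 0.05) suggests a biased die; a fair
    # die will usually produce a large, unremarkable p-value.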

In a video from a few years back, Lou Zocchi explains why his dice are the best quality in the business:

September 7, 2012

“When I discover something surprising in data, the most common explanation is that I made a mistake.”

Filed under: Business, Economics, Government, Media — Nicholas @ 08:20

John Kay suggests you always ask how a statistic was created before you consider what the presenter wants you to think:

Always ask yourself the question: “Where does that data come from?” “Long distance rail travel in Britain is expected to increase by 96 per cent by 2043.” Note how the passive voice “is expected” avoids personal responsibility for this statement. Who expects this? And what is the basis of their expectation? For all I know, we might be using flying platforms in 2043, or be stranded at home by oil shortages: where did the authors of the prediction acquire their insight?

“On average, men think about sex every seven seconds.” How did the researchers find this out? Did they ask men how often they thought about sex, or when they last thought about sex (3½ seconds ago, on average)? Did they give their subjects a buzzer to press every time they thought about sex? How did they confirm the validity of the responses? Is it possible that someone just made this statement up, and that it has been repeated frequently and without attribution ever since? Many of the numbers I hear at business conferences have that provenance.

[. . .]

Be careful of data defined by reference to other documents that you are expected not to have read. “These accounts have been compiled in accordance with generally accepted accounting principles”, or “these estimates are prepared in line with guidance given by HM Treasury and the Department of Transport”. Such statements are intended to give a false impression of authoritative endorsement. A data set compiled by a national statistics organisation or a respected international institution such as the Organisation for Economic Co-operation and Development or Eurostat will have been compiled conscientiously. That does not, however, imply that the numbers mean what the person using them thinks or asserts they mean.

September 4, 2012

True-but-misleading factoid: “7 kg Of Grain To Make 1 kg Of Beef”

Filed under: Economics, Environment, Food, Health — Nicholas @ 00:05

Tim Worstall on the misuse of a vegetarian-friendly data point:

I asked Larry Elliott where the number came from and was sent this from Fidelity Investments (not online so far as I know).

    The demand for more protein has a significant knock-on impact on grain demand. Livestock is reared on grain feed, making production heavily resource-intensive. Indeed, it takes 7 kilograms of grain to produce just 1 kilogram of meat. As demand for meat rises, this increases the demand for and prices of feedstock — these increased costs of production flow back to consumers in the form of higher meat prices. Adding to the upward pressure on feedstock prices, much to the dislike of livestock farmers, have been US environmental regulations (the Renewable Fuel Standard) that require a proportion of corn crops be used for the production of bio-fuel.

So, case closed, right? We all need to give up eating meat to save Mother Gaia? Not necessarily. The numbers given are accurate, but only in a particular context: that of raising meat for the US (and, probably, Canadian) market. The rest of the world doesn’t do it this way:

It is only in US or US-style feedlot operations that cattle are fed on this much grain. Thus the equation is useful if you want information about what is going to happen with US cattle and grain futures: for that’s the general production method feeding those cattle futures. But very little of the rest of the world uses these feedlots as their production methods. I’m not certain whether we have any at all in the UK, for example, and would be surprised if there were many in the EU. Around where I live in Portugal, pigs forage for acorns (yes, from the same oak trees that give us cork) or are fed on swill, and goats and sheep graze on fields that would support no form of arable farming at all (they can just about, sometimes, support low levels of almond, olive or carob growing). Much of the beef cattle in the UK is grass-fed, with perhaps hay or silage in the winters.

My point being that sure, it’s possible to grow a kilo of beef by using 7 kilos of grain. But it isn’t necessary. The number might be useful when looking at agricultural futures in the US, but it’s a hopelessly misleading one to use to try to determine anything at all about the global relationship between meat and grain production. And it is most certainly wrong in leading to the conclusion that we must all become vegetarians.

Which brings us to the lesson of this little screed. Sure, numbers are great and can be very informative. But you do have to make sure that you’re using the right numbers: numbers that are applicable to whatever problem it is that you want to illuminate. If you end up, just as a little example, comparing grain-to-meat ratios for a specific intensive method of farming really only used in the US, then you’re going to get very much the wrong answer when you try to apply them globally.

September 2, 2012

Institutionalizing income inequality

Filed under: Business, Cancon, Government — Nicholas @ 11:07

At the Worthwhile Canadian Initiative blog, Frances Woolley explains how a couple of data points will work to “bake in” income inequality:

If these are the rules used to determine wages, income inequality will prevail.

It’s impossible for all firms to pay their CEOs above the median salary — by definition, half of executives must be paid at or below the median. If the majority of firms adopt a compensation policy like the Bell Canada Enterprises one quoted above, CEO salaries will increase inexorably.

At the same time, allowing firms to bring in temporary workers at less than the prevailing market wage prevents the price of labour from being bid up in response to labour shortages, dampening salary growth for workers at the lower wage end of the labour market.

Inequality rules.
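
Woolley’s ratchet is easy to simulate. A toy sketch; the aim-10-per-cent-above-median target is my invented stand-in for compensation policies like the one she quotes:

    # Toy simulation of the median ratchet: if every board targets pay
    # above the current median, the median climbs without limit. The
    # 10%-above-median target is an invented stand-in for such policies.
    from statistics import median

    salaries = [1.0, 1.2, 1.5, 2.0, 3.0]   # CEO pay, $ millions (invented)

    for year in range(1, 6):
        target = 1.10 * median(salaries)            # aim above the median
        salaries = [max(s, target) for s in salaries]
        print(f"Year {year}: median = {median(salaries):.2f}")
    # The median rises every year, even though each board only ever
    # intended to pay "slightly above average".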

August 14, 2012

Anecdotes are not data: Demise of Guys based on anecdotal evidence

Filed under: Media, Randomness — Nicholas @ 09:15

Jacob Sullum on the recent ebook The Demise of Guys: Why Boys Are Struggling and What We Can Do About It, by Philip G. Zimbardo and Nikita Duncan:

Zimbardo’s thesis is that “boys are struggling” in school and in love because they play video games too much and watch too much porn. But he and his co-author, a recent University of Colorado graduate named Nikita Duncan, never establish that boys are struggling any more nowadays than they were when porn was harder to find and video games were limited to variations on Pong. The data they cite mostly show that girls are doing better than boys, not that boys are doing worse than they did before xvideos.com and Grand Theft Auto. Such an association would by no means be conclusive, but it’s the least you’d expect from a respected social scientist like Zimbardo, who oversaw the famous Stanford “prison experiment” that we all read about in Psych 101.

[. . .]

One source of evidence that Zimbardo and Duncan rely on heavily, an eight-question survey of people who watched Zimbardo’s TED talk online, is so dubious that anyone with a bachelor’s degree in psychology (such as Duncan), let alone a Ph.D. (such as Zimbardo), should be embarrassed to cite it without a litany of caveats. The most important one: It seems probable that people who are attracted to Zimbardo’s talk, watch it all the way through, and then take the time to fill out his online survey are especially likely to agree with his thesis and especially likely to report problems related to electronic diversions. This is not just a nonrepresentative sample; it’s a sample bound to confirm what Zimbardo thinks he already knows. “We wanted our personal views to be challenged or validated by others interested in the topic,” the authors claim. Mostly validated, to judge by their survey design.

[. . .]

Other sources of evidence cited by Zimbardo and Duncan are so weak that they have the paradoxical effect of undermining their argument rather than reinforcing it. How do Zimbardo and Duncan know about “the sense of total entitlement that some middle-aged guys feel within their relationships”? Because “a highly educated female colleague alerted us” to this “new phenomenon.” How do they know that “one consequence of teenage boys watching many hours of Internet pornography…is they are beginning to treat their girlfriends like sex objects”? Because of a theory propounded by Daily Mail columnist Penny Marshall. How do they know that “men are as good as their women require them to be”? Because that’s what “one 27-year-old guy we interviewed” said.

Even when more rigorous research is available, Zimbardo and Duncan do not necessarily bother to look it up. How do they know that teenagers “who spend their nights playing video games or texting their friends instead of sleeping are putting themselves at greater risk for gaining unhealthy amounts of weight and becoming obese”? Because an NPR correspondent said so. Likewise, the authors get their information about the drawbacks of the No Child Left Behind Act from a gloss of a RAND Corporation study in a San Francisco Chronicle editorial. This is the level of documentation you’d expect from a mediocre high school student, not a college graduate, let alone a tenured social scientist at a leading university.
