Quotulatiousness

October 13, 2012

A timely reminder that economic statistics only paint part of the overall picture

Filed under: Economics, Technology — Nicholas @ 10:52

Tim Worstall at the Adam Smith Institute blog:

Almost at random from my RSS feed two little bits of information that tell of the quite astonishing economic changes going on around us at present. The first, that the world is now pretty much wired:

    According to new figures published by the International Telecommunications Union on Thursday, the global population has purchased 6 billion cellphone subscriptions.

Note that this is not phones; this is actual subscriptions. It’s not quite everyone, because there are 7 billion humans and there’s always the occasional Italian with two phones, one for the wife and one for the mistress. But in a manner that has never before been true, almost all of the planet’s population is, in theory at least, able to speak to any other member of that population. The second:

    The most recent CTIA data, obtained by All Things D, shows that US carriers handled 1.16 trillion megabytes of data between July 2011 and June 2012, up 104 percent from the 568 billion megabytes used between July 2010 and June 2011.

Within that explosive growth of basic communications we’re also seeing the smartphone sector boom. Indeed, I’ve seen figures that suggest that over half of new activations are now smartphones, capable of fully interacting with the internet.
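
The 104 per cent figure in the CTIA quote is easy to sanity-check from the two totals given:

```python
# Both totals come straight from the CTIA excerpt above.
prev_year_mb = 568e9     # megabytes, July 2010 - June 2011
this_year_mb = 1.16e12   # megabytes, July 2011 - June 2012

growth_pct = 100 * (this_year_mb / prev_year_mb - 1)
print(round(growth_pct))  # 104 -- matches the "up 104 percent" claim
```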

One matter to point to is how fast this all is. It really is only 30-odd years: from mobile telephony being the preserve of the rich, with a car battery to power it, to something that the rural peasant of India or China is more likely to own than not. Trickle-down economics might have a bad reputation, but trickle-down technology certainly seems to work.

October 4, 2012

Claim: more women die of domestic violence than cancer

Filed under: Law, USA — Nicholas @ 12:43

A friend of mine posted this claim on Twitter earlier today and it struck me as being incredibly unlikely. A quick Google search turns up the following numbers for causes of death in the United States in 2009 (total 2,437,163):

  • Heart disease: 599,413
  • Cancer: 567,628
  • Chronic lower respiratory diseases: 137,353
  • Stroke (cerebrovascular diseases): 128,842
  • Accidents (unintentional injuries): 118,021
  • Alzheimer’s disease: 79,003
  • Diabetes: 68,705
  • Influenza and Pneumonia: 53,692
  • Nephritis, nephrotic syndrome, and nephrosis: 48,935
  • Intentional self-harm (suicide): 36,909

If we assume that exactly half the reported deaths from cancer are women, that says 283,814 women died of various forms of cancer in 2009. How does that stack up against the murder statistics (which would include domestic violence along with all other killings)?

13,636

One of these numbers is not like the other (and of the reported 13,636 homicides, 77% of the victims were male).
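
The back-of-the-envelope comparison is simple enough to script. The 23 per cent female share of homicide victims below just restates the 77 per cent male figure from the parenthetical:

```python
# Figures as quoted above (US, 2009).
cancer_deaths = 567_628
homicides = 13_636

female_cancer_deaths = cancer_deaths / 2       # assume an even male/female split
female_homicide_victims = homicides * 0.23     # 77% of victims were male

print(round(female_cancer_deaths))     # 283814
print(round(female_homicide_victims))  # 3136
```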

This is not to diminish the dangers of domestic violence, but throwing out numbers as my friend did doesn’t actually help the situation.

September 22, 2012

Mismeasuring inequality

Filed under: Economics, Media, Politics, USA — Nicholas @ 10:20

If you haven’t encountered a journalist or an activist going on about the Gini Coefficient, you certainly will soon, as it’s become a common tool to promote certain kinds of political or economic action. It is also useful for pushing certain agendas because, while the numbers appear to show one thing clearly (the relative income inequality of a population), the measure hides nearly as much as it reveals:

The figures they use for a comparison are here. Looking at those you might think: well, if the US is at 0.475 and Sweden is at 0.23 (yes, the number of 23.0 for Sweden is the same as 0.23 in this sense), then, given that a lower number indicates less inequality, Sweden is a less unequal place than the US. You would of course be correct in your assumption: but not because of these numbers.

For the US number is before taxes and before benefits. The Swedish number is after all taxes and all benefits. So, the US number is what we call “market income”, before all the things we do to shift money around from rich to poor, and the Swedish number (in fact, the numbers for all other countries) is after all the things we do to reduce inequality.

[. . .]

The US is reporting market inequality, before the effects of taxes and benefits, the Europeans are reporting the inequality after the effect of taxes and benefits.

[. . .]

Which brings us to the 300 million people in the US. Is it really fair to be comparing income inequality among 300 million people with inequality among the 9 million of Sweden? Quite possibly a more interesting comparison would be between the 300 million of the US and the 500 million of the European Union. Or the smaller number in the EU 15, thus leaving out the ex-communist states with their own special problems. Not that it matters all that much, as the two numbers for the Gini are the same: 0.3*. Note again that this is post-tax and post-benefit. On this measure the US is at 0.38. So, yes, the US is indeed more unequal than Europe. But by a far smaller margin than people generally recognise: or than the numbers that are generally bandied about.

Which brings us to the second point. Even here the US number is (marginally) over-stated. For even in the post-tax and post-benefit numbers, the US is still an outlier in the statistical methods used. In looking at inequality and poverty in the US, we include the cash that poor people are given to alleviate their poverty. But we do not include the things that people are given in kind: Medicaid, SNAP, Section 8 and so on. It’s possible (I’m not sure, I’m afraid) that we don’t include the EITC either. We certainly don’t in the poverty statistics, but might in the inequality figures. All of the other countries do include the effects of such policies, largely because they don’t offer benefits in kind: they just give the poor more money and tell them to buy it themselves. This obviously turns up in figures of how much money the poor have.
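
For readers who want to see what the coefficient actually measures, here is a minimal sketch. The incomes and the flat tax-and-transfer scheme are invented; the point is that the same underlying population yields two quite different Ginis depending on whether you measure before or after redistribution:

```python
def gini(incomes):
    """Gini coefficient via the sorted-form identity:
    G = 2 * sum(i * x_i) / (n * total) - (n + 1) / n, with i starting at 1."""
    xs = sorted(incomes)
    n, total = len(xs), sum(xs)
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return 2 * weighted / (n * total) - (n + 1) / n

market = [10, 20, 30, 60, 130]   # invented "market income" distribution

# A crude tax-and-transfer: flat 30% tax, pot shared out equally.
pot = sum(0.30 * x for x in market)
post = [0.70 * x + pot / len(market) for x in market]

print(round(gini(market), 3))  # 0.448 -- pre-tax, pre-benefit
print(round(gini(post), 3))    # 0.314 -- same people, after redistribution
```

Comparing the first number for one country against the second for another, as in the US-versus-Sweden example, is exactly the apples-to-oranges problem Worstall describes.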

September 18, 2012

Canada ranks fifth in the world for economic freedom

Filed under: Australia, Cancon, Economics, Liberty, USA — Nicholas @ 12:19

The annual Fraser Institute report on world economic freedom may confirm what a lot of Canadians have been noticing: we’re now much more free than our American friends, at least by the measurements tracked in this series of rankings (PDF):

  • In the chain-linked index, average economic freedom rose from 5.30 (out of 10) in
    1980 to 6.88 in 2007. It then fell for two consecutive years, resulting in a score of
    6.79 in 2009 but has risen slightly to 6.83 in 2010, the most recent year available.
    It appears that responses to the economic crisis have reduced economic freedom
    in the short term and perhaps prosperity over the long term, but the upward
    movement this year is encouraging.
  • In this year’s index, Hong Kong retains the highest rating for economic freedom,
    8.90 out of 10. The other top 10 nations are: Singapore, 8.69; New Zealand, 8.36;
    Switzerland, 8.24; Australia, 7.97; Canada, 7.97; Bahrain, 7.94; Mauritius, 7.90;
    Finland, 7.88; and Chile, 7.84.
  • The rankings (and scores) of other large economies in this year’s index are the United
    Kingdom, 12th (7.75); the United States, 18th (7.69); Japan, 20th (7.64); Germany,
    31st (7.52); France, 47th (7.32); Italy, 83rd (6.77); Mexico, 91st, (6.66); Russia, 95th
    (6.56); Brazil, 105th (6.37); China, 107th (6.35); and India, 111th (6.26).
  • The scores of the bottom ten nations in this year’s index are: Venezuela, 4.07;
    Myanmar, 4.29; Zimbabwe, 4.35; Republic of the Congo, 4.86; Angola, 5.12;
    Democratic Republic of the Congo, 5.18; Guinea-Bissau, 5.23; Algeria, 5.34; Chad,
    5.41; and, tied for 10th worst, Mozambique and Burundi, 5.45.
  • The United States, long considered the standard bearer for economic freedom
    among large industrial nations, has experienced a substantial decline in economic
    freedom during the past decade. From 1980 to 2000, the United States was generally
    rated the third freest economy in the world, ranking behind only Hong Kong and
    Singapore. After increasing steadily during the period from 1980 to 2000, the chain-linked
    EFW rating of the United States fell from 8.65 in 2000 to 8.21 in 2005 and
    7.70 in 2010. The chain-linked ranking of the United States has fallen precipitously
    from second in 2000 to eighth in 2005 and 19th in 2010 (unadjusted ranking of 18th).

September 15, 2012

Our collective maladjusted attitude to small risks

Filed under: Economics, Europe, Italy — Nicholas @ 09:44

Tim Harford shows that you can learn a lot about economics by looking at the process of hiring a rental car:

Here’s a puzzle. If it costs €500 to hire a €25,000 car, how much should you expect to pay to hire a €50 child’s car seat to go with it? Arithmetic says €1; experience suggests you will pay 50 times that.
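
The “arithmetic says €1” step, spelled out: the seat should hire for the same fraction of its value as the car does.

```python
# Both prices and the hire charge come from the column above.
car_price, car_hire = 25_000, 500   # euros
seat_price = 50

# Hiring the car costs 2% of its value; apply the same rate to the seat.
proportional_seat_hire = seat_price * car_hire / car_price
print(proportional_seat_hire)  # 1.0 -- versus the ~€50 actually charged
```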

This was just one of a series of economics posers that raised their heads during my summer vacation – indeed, within a few minutes of clearing customs in Milan. One explanation is that the apparently extortionate price reflects some unexpected cost of cleaning, fitting or insuring the seat – possible but implausible. Or perhaps parents with young families are less sensitive to price than other travellers. This, again, is possible but unconvincing. In other contexts, such as package holidays and restaurants, families with children are often given discounts on the assumption that money is tight and bargains keenly sought.

[. . .]

After paying through the nose for the car seat we were alerted to a risk. “If your car is damaged or stolen, you are liable for the first €1,000 of any loss.” Gosh. I hadn’t really given the matter any thought but the danger suddenly felt very real. And for just €20 a day, or something like that, I could make that danger vanish.

[. . .]

What’s happening here? Behavioural economists have long known about “loss aversion”: we’re disproportionately anxious at the prospect of small but salient risks. The car hire clerk carefully created a very clear image of a loss, even though that loss was unlikely. I haven’t paid such fees for years and have saved enough cash to write off a couple of hire cars in future.
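
A rough expected-value sketch shows why declining the waiver can pay off over a lifetime of rentals. Only the €20 daily waiver and the €1,000 excess come from the column; the rental frequency and claim probability are invented for illustration:

```python
# Invented assumptions: one two-week rental per year for 20 years,
# and a 1-in-20 chance per rental of a claim hitting the full excess.
waiver_per_day, excess = 20, 1_000          # euros, from the column
rental_days_per_year, years = 14, 20
rentals, claims_per_rental = 20, 0.05

waiver_cost = waiver_per_day * rental_days_per_year * years
expected_excess_paid = rentals * claims_per_rental * excess

print(waiver_cost)           # 5600 -- what the waivers would cost
print(expected_excess_paid)  # 1000.0 -- expected out-of-pocket without them
```

Under these (invented) numbers, self-insuring saves several thousand euros, which is Harford’s point about loss aversion: the salient €1,000 looms larger than the steady drip of €20 days.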

September 8, 2012

Gamers are not superstitious (all the time) about their “lucky dice”

Filed under: Gaming — Nicholas @ 00:19

Many gamers are highly protective of the “lucky D20” they use for certain die rolls. In some cases, that’s not superstition at all, it’s taking advantage of a manufacturing flaw in polyhedral dice:

One of the biggest manufacturers of RPG dice is a company called Chessex. They make a huge variety of dice, in all kinds of different colors and styles. These dice are put through rock tumblers that give them smooth edges and a shiny finish, so they look great. Like many RPG fans, I own a bunch of them.

I also own a set of GameScience dice. They’re not polished, painted or smoothed, so they’re supposed to roll better than Chessex dice, producing results closer to true random. I like them, but mostly because they don’t roll too far, and their sharp edges look cool. I couldn’t tell you if they truly produce more random results.

But the good folks over at the Awesome Dice Blog can. They recently completed a massive test between a Chessex d20 and a GameScience d20, rolling each over 10,000 times, by hand, to determine which rolls closer to true.
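
For anyone wanting to run a similar (if smaller) test on their own dice, a chi-square goodness-of-fit check against the fair-die hypothesis is the standard tool. This sketch uses simulated rolls rather than the blog’s hand-rolled data:

```python
import random
from collections import Counter

def chi_square_uniform(rolls, sides=20):
    """Pearson chi-square statistic against the fair-die hypothesis."""
    counts = Counter(rolls)
    expected = len(rolls) / sides
    return sum((counts.get(face, 0) - expected) ** 2 / expected
               for face in range(1, sides + 1))

# A perfectly balanced set of rolls scores exactly 0.
assert chi_square_uniform(list(range(1, 21)) * 500) == 0.0

# 10,000 simulated fair rolls: the statistic should usually fall below
# ~30.1, the 5% critical value for 19 degrees of freedom. A physical die
# that repeatedly lands well above that is plausibly biased.
rolls = [random.randint(1, 20) for _ in range(10_000)]
print(chi_square_uniform(rolls))
```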

In a video from a few years back, Lou Zocchi explains why his dice are the best quality in the business:

September 7, 2012

“When I discover something surprising in data, the most common explanation is that I made a mistake.”

Filed under: Business, Economics, Government, Media — Nicholas @ 08:20

John Kay suggests you always ask how a statistic was created before you consider what the presenter wants you to think:

Always ask yourself the question: “where does that data come from?”. “Long distance rail travel in Britain is expected to increase by 96 per cent by 2043.” Note how the passive voice “is expected” avoids personal responsibility for this statement. Who expects this? And what is the basis of their expectation? For all I know, we might be using flying platforms in 2043, or be stranded at home by oil shortages: where did the authors of the prediction acquire their insight?

“On average, men think about sex every seven seconds.” How did the researchers find this out? Did they ask men how often they thought about sex, or when they last thought about sex (3½ seconds ago, on average)? Did they give their subjects a buzzer to press every time they thought about sex? How did they confirm the validity of the responses? Is it possible that someone just made this statement up, and that it has been repeated frequently and without attribution ever since? Many of the numbers I hear at business conferences have that provenance.

[. . .]

Be careful of data defined by reference to other documents that you are expected not to have read. “These accounts have been compiled in accordance with generally accepted accounting principles”, or “these estimates are prepared in line with guidance given by HM Treasury and the Department of Transport”. Such statements are intended to give a false impression of authoritative endorsement. A data set compiled by a national statistics organisation or a respected international institution such as the Organisation for Economic Co-operation and Development or Eurostat will have been compiled conscientiously. That does not, however, imply that the numbers mean what the person using them thinks or asserts they mean.
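
Kay’s rail forecast is a good candidate for exactly this interrogation. One question worth asking: what annual growth rate does “96 per cent by 2043” actually imply, if we assume it is measured from 2012?

```python
# Assumption: the 96% rise is measured from the article's year, 2012.
years = 2043 - 2012
implied_annual = 1.96 ** (1 / years) - 1   # constant-growth rate implied

print(round(100 * implied_annual, 1))  # 2.2 -- per cent per year
```

A dramatic-sounding near-doubling turns out to assume only a couple of per cent of compound growth a year, which is a rather less arresting headline.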

September 4, 2012

True-but-misleading factoid: “7 kg Of Grain To Make 1 kg Of Beef”

Filed under: Economics, Environment, Food, Health — Nicholas @ 00:05

Tim Worstall on the misuse of a vegetarian-friendly data point:

I asked Larry Elliott where the number came from and was sent this from Fidelity Investments (not online so far as I know).

    The demand for more protein has a significant knock-on impact on grain demand. Livestock is reared on grain-feed, making production heavily resource intensive. Indeed, it takes 7 kilograms of grain to produce just 1 kilogram of meat. As demand for meat rises, this increases the demand for and prices of feedstock — these increased costs of production flow back to the consumers in the form of higher meat prices. Adding to the upward pressure on feedstock prices, and much to the dislike of livestock farmers, have been US environmental regulations (the Renewable Fuel Standard) that require a proportion of corn crops be used for the production of bio-fuel.

So, case closed, right? We all need to give up eating meat to save Mother Gaia? Not necessarily. The numbers given are accurate, but only in a particular context: that of raising meat for the US (and, probably, Canadian) market. The rest of the world doesn’t do it this way:

It is only in US or US-style feedlot operations that cattle are fed on this much grain. Thus the equation is useful if you want information about what is going to happen with US cattle and grain futures: for that’s the general production method feeding those cattle futures. But very little of the rest of the world uses these feedlots as their production methods. I’m not certain whether we have any at all in the UK, for example, and would be surprised if there were many in the EU. Around where I live in Portugal, pigs forage for acorns (yes, from the same oak trees that give us cork) or are fed on swill, while goats and sheep graze on fields that would support no form of arable farming at all (they can just about, sometimes, support low levels of almond, olive or carob growing). Much beef cattle in the UK is grass-fed, with perhaps hay or silage in the winters.

My point being that sure, it’s possible to grow a kilo of beef by using 7 kilos of grain. But it isn’t necessary. The number might be useful when looking at agricultural futures in the US, but it’s a hopelessly misleading one to use to try and determine anything at all about the global relationship between meat and grain production. And most certainly entirely wrong in leading to the conclusion that we must all become vegetarians.
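
A weighted average makes the point concrete. Only the 7:1 feedlot ratio comes from the excerpt; the other ratios and the production-method shares are invented for illustration:

```python
# Hypothetical grain inputs per kg of meat, by production method.
# Only the 7.0 feedlot figure is from the quoted material.
kg_grain_per_kg_meat = {"us_feedlot": 7.0, "grass_fed": 0.5, "forage_swill": 0.2}
share_of_output = {"us_feedlot": 0.2, "grass_fed": 0.5, "forage_swill": 0.3}

global_average = sum(kg_grain_per_kg_meat[m] * share_of_output[m]
                     for m in kg_grain_per_kg_meat)
print(round(global_average, 2))  # 1.71 -- far below 7 under these assumed shares
```

Whatever the true shares are, the global figure is a mix like this, not the feedlot number alone.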

Which brings us to the lesson of this little screed. Sure, numbers are great and can be very informative. But you do have to make sure that you’re using the right numbers: numbers that are applicable to whatever problem it is that you want to illuminate. If you end up, just as a little example, using grain-to-meat numbers for a specific intensive method of farming really only used in the US, then you’re going to get very much the wrong answer when you try to apply that globally.

September 2, 2012

Institutionalizing income inequality

Filed under: Business, Cancon, Government — Nicholas @ 11:07

At the Worthwhile Canadian Initiative blog, Frances Woolley explains how a couple of data points will work to “bake in” income inequality:

If these are the rules used to determine wages, income inequality will prevail.

It’s impossible for all firms to pay their CEOs above the median salary — by definition, half of executives must be paid below the median. If the majority of firms adopt a compensation policy like the Bell Canada Enterprises one quoted above, CEO salaries will increase inexorably.
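
A toy simulation of the ratchet (all salary figures invented): if every board re-benchmarks each year to pay above the current median, the median itself can only climb.

```python
import statistics

# Five hypothetical CEO salaries, in millions. Each round, every board
# raises any below-target salary to 10% above the current median.
salaries = [1.0, 1.2, 1.5, 2.0, 3.0]
start_median = statistics.median(salaries)

for _ in range(5):
    target = 1.10 * statistics.median(salaries)
    salaries = [max(s, target) for s in salaries]

print(statistics.median(salaries) > start_median)  # True -- it only ratchets up
```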

At the same time, allowing firms to bring in temporary workers at less than the prevailing market wage prevents the price of labour from being bid up in response to labour shortages, dampening salary growth for workers at the lower wage end of the labour market.

Inequality rules.

August 14, 2012

Anecdotes are not data: Demise of Guys based on anecdotal evidence

Filed under: Media, Randomness — Nicholas @ 09:15

Jacob Sullum on the recent ebook The Demise of Guys: Why Boys Are Struggling and What We Can Do About It, by Philip G. Zimbardo and Nikita Duncan.

Zimbardo’s thesis is that “boys are struggling” in school and in love because they play video games too much and watch too much porn. But he and his co-author, a recent University of Colorado graduate named Nikita Duncan, never establish that boys are struggling any more nowadays than they were when porn was harder to find and video games were limited to variations on Pong. The data they cite mostly show that girls are doing better than boys, not that boys are doing worse than they did before xvideos.com and Grand Theft Auto. Such an association would by no means be conclusive, but it’s the least you’d expect from a respected social scientist like Zimbardo, who oversaw the famous Stanford “prison experiment” that we all read about in Psych 101.

[. . .]

One source of evidence that Zimbardo and Duncan rely on heavily, an eight-question survey of people who watched Zimbardo’s TED talk online, is so dubious that anyone with a bachelor’s degree in psychology (such as Duncan), let alone a Ph.D. (such as Zimbardo), should be embarrassed to cite it without a litany of caveats. The most important one: It seems probable that people who are attracted to Zimbardo’s talk, watch it all the way through, and then take the time to fill out his online survey are especially likely to agree with his thesis and especially likely to report problems related to electronic diversions. This is not just a nonrepresentative sample; it’s a sample bound to confirm what Zimbardo thinks he already knows. “We wanted our personal views to be challenged or validated by others interested in the topic,” the authors claim. Mostly validated, to judge by their survey design.

[. . .]

Other sources of evidence cited by Zimbardo and Duncan are so weak that they have the paradoxical effect of undermining their argument rather than reinforcing it. How do Zimbardo and Duncan know about “the sense of total entitlement that some middle-aged guys feel within their relationships”? Because “a highly educated female colleague alerted us” to this “new phenomenon.” How do they know that “one consequence of teenage boys watching many hours of Internet pornography…is they are beginning to treat their girlfriends like sex objects”? Because of a theory propounded by Daily Mail columnist Penny Marshall. How do they know that “men are as good as their women require them to be”? Because that’s what “one 27-year-old guy we interviewed” said.

Even when more rigorous research is available, Zimbardo and Duncan do not necessarily bother to look it up. How do they know that teenagers “who spend their nights playing video games or texting their friends instead of sleeping are putting themselves at greater risk for gaining unhealthy amounts of weight and becoming obese”? Because an NPR correspondent said so. Likewise, the authors get their information about the drawbacks of the No Child Left Behind Act from a gloss of a RAND Corporation study in a San Francisco Chronicle editorial. This is the level of documentation you’d expect from a mediocre high school student, not a college graduate, let alone a tenured social scientist at a leading university.

August 12, 2012

Relative poverty is not a very useful measurement

Filed under: China, Economics — Nicholas @ 08:37

At the Adam Smith Institute blog, Tim Worstall explains why measuring relative poverty isn’t helpful:

Here actually is the problem with using relative poverty as a measure:

    Compared to the 1960s, China today has higher income inequality, but also incomparably lower levels of material poverty. By Brady’s definition, China was less impoverished in the near-starvation years of the 1960s than it is as an economic superpower today. According to the OECD, during the last three decades the share of Chinese living in absolute poverty dramatically declined from eight in ten to one in ten (Garroway and de Laiglesia 2011). During the same period, relative poverty, measured exactly as Brady measures it, roughly doubled. Although inequality and relative poverty are not irrelevant for measuring the well-being of a society, we should be apprehensive about a measure of poverty that is incapable of detecting the largest decline in material poverty in human history.

As pointed out, a measure of poverty that not just ignores, but actually gets the sign wrong on, the largest reduction in poverty in the history of our species is of limited value.

[. . .]

Which leads us to something of a conclusion: it’s fine to consider the distribution of incomes within a society. But we do it rather too much, with the constant political obsession over relative poverty. We need to be paying a lot more attention to absolute standards of living: most especially how these change over time. Most specifically, I’m thinking about how attempts to reduce relative poverty might affect our ability to increase absolute standards of living in the future.
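
The mechanics of the relative measure are easy to demonstrate on invented numbers: make everyone far richer, but unevenly, and measured relative poverty can rise even as absolute poverty vanishes.

```python
def relative_poverty_rate(incomes, cutoff=0.5):
    """Share of people earning below `cutoff` times the median income."""
    xs = sorted(incomes)
    median = xs[len(xs) // 2]    # adequate for odd-length toy data
    return sum(x < cutoff * median for x in xs) / len(xs)

# Invented incomes: everyone is many times richer in the second era.
subsistence_era = [100, 100, 110, 110, 120]
boom_era = [400, 450, 1000, 3000, 9000]

print(relative_poverty_rate(subsistence_era))  # 0.0 -- nobody below half the median
print(relative_poverty_rate(boom_era))         # 0.4 -- two of five below half of 1000
```

By this yardstick the uniformly poor society looks poverty-free and the uniformly richer one looks worse, which is exactly the China paradox in the quoted passage.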

August 10, 2012

Who’s more dangerous to a random American citizen, terrorists or police officers?

Filed under: Media, USA — Nicholas @ 14:56

According to Jim Harper at the Cato@Liberty blog, you’re eight times more likely to be shot by the police than killed by a terrorist:

It got a lot of attention this morning when I tweeted, “You’re Eight Times More Likely to be Killed by a Police Officer than a Terrorist.” It’s been quickly retweeted dozens of times, indicating that the idea is interesting to many people. So let’s discuss it in more than 140 characters.

In case it needs saying: Police officers are unlike terrorists in almost all respects. Crucially, the goal of the former, in the vast majority of cases, is to have a stable, peaceful, safe, law-abiding society, which is a goal we all share. The goal of the latter is … well, it’s complicated. I’ve cited my favorite expert on that, Audrey Kurth Cronin, here and here and here. Needless to say, the goal of terrorists is not that peaceful, safe, stable society.

I picked up the statistic from a blog post called: “Fear of Terror Makes People Stupid,” which in turn cites the National Safety Council for this and lots of other numbers reflecting likelihoods of dying from various causes. So dispute the number(s) with them, if you care to.

I take it as a given that your mileage may vary. If you dwell in the suburbs or a rural area, and especially if you’re wealthy, white, and well-spoken, your likelihood of death from these two sources probably converges somewhat (at very close to zero).

July 30, 2012

If your source data is flawed, your conclusions are useless

Filed under: Environment, Media, Science — Nicholas @ 10:30

James Delingpole on the recent paper from Anthony Watts and his co-authors:

Have a look at this chart. It tells you pretty much all you need to know about the much-anticipated scoop by Anthony Watts of Watts Up With That?

What it means, in a nutshell, is that the National Oceanic and Atmospheric Administration (NOAA), the US government body in charge of America’s temperature record, has systematically exaggerated the extent of late 20th century global warming. In fact, it has doubled it.

Is this a case of deliberate fraud by Warmist scientists hell bent on keeping their funding gravy train rolling? Well, after what we saw in Climategate anything is possible. (I mean it’s not like NOAA is run by hard-left eco activists, is it?) But I think more likely it is a case of confirmation bias. The Warmists who comprise the climate scientist establishment spend so much time communicating with other warmists and so little time paying attention to the views of dissenting scientists such as Henrik Svensmark — or Fred Singer or Richard Lindzen or indeed Anthony Watts — that it simply hasn’t occurred to them that their temperature records need adjusting downwards not upwards.

What Watts has conclusively demonstrated is that most of the weather stations in the US are so poorly sited that their temperature data is unreliable. Around 90 per cent have had their temperature readings skewed by the Urban Heat Island effect. While he has suspected this for some time, what he has been unable to do until his latest, landmark paper (co-authored with Evan Jones of New York, Stephen McIntyre of Toronto, Canada, and Dr. John R. Christy from the Department of Atmospheric Science, University of Alabama, Huntsville) is to put precise figures on the degree of distortion involved.

July 25, 2012

Michael Bloomberg’s call for a national police strike

Filed under: Law, Liberty, USA — Nicholas @ 10:45

At the Simple Justice blog, Scott H. Greenfield explains why New York City mayor Michael Bloomberg is so very, very wrong to call for a national police strike:

There are some virtues that come with having a billionaire mayor. He’s not easy to bribe, for example, so you know whatever comes out of his mouth does so honestly. And therein lies the downside when he says something like this:

    “I don’t understand why the police officers across this country don’t stand up collectively and say, ‘We’re going to go on strike. We’re not going to protect you. Unless you, the public, through your legislature, do what’s required to keep us safe,’” Bloomberg said on CNN Monday night.

Within this idiotic comment are two fallacious assumptions. The first is the “war on cops” tripe, that there is a trend against cops, putting them at increasing risk of harm from gun-toting criminals. Radley Balko has beaten that myth to death. Mike Riggs too. It’s a good myth to further a public agenda in favor of order at the expense of law, but it just doesn’t hold water.

The second, however, is the mayor’s encouragement to police to take the First Rule of Policing a step further than ever before, to use their singular authority to hold a nation hostage. This is perhaps the most dangerous idea Bloomberg could promote.

[. . .]

Ironically, the only means of staying this armed takeover, should the police ever come to recognize that they have the power if not the authority to seize control, would be guns in the hands of citizens. No rational person could want it to come to such a battle.

So while a billionaire mayor may be above the influences that drive mere mortals, he sometimes utters the most insanely foolish things, taking us to a place we must never go. The day the police, as a whole, think they can use their posts to take our government hostage is the day every citizen will need to dust off his arms. The day a billionaire mayor suggests that the police should use their power to influence our government is a day he’s been in office too long.

Update: Walter Olson at the Cato@Liberty blog:

It’s enough to make you wonder whether Bloomberg is secretly a passionate admirer of the Second Amendment and keeps saying things this outrageous from a covert intent to sabotage the case for gun control.

July 19, 2012

Choice: re-evaluating the notion that too much choice is a bad thing

Filed under: Economics, Liberty, Science — Nicholas @ 09:37

There was a famous study several years ago that supposedly “proved” that providing too many choices to consumers was worse than providing fewer choices. At the time, I thought there must have been something wrong with the study.

The study used free jam samples in a supermarket, varying between offering 24 samples and only six, to test whether people were more likely to purchase the products (they were given a discount coupon in both variants). The result was that people who sampled from the smaller selection were more likely to actually buy the jam than those who had the wider selection to choose from. This was taken to prove that too many choices were a bad thing (and became a regular part of anti-consumer-choice advocacy campaigns).
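
For what it’s worth, the gap reported in the original study was far too large to be sampling noise. A standard two-proportion z-test on counts approximating the commonly reported figures (roughly 30% of 104 tasters buying in the six-jam condition versus roughly 3% of 145 in the 24-jam condition) makes that clear:

```python
import math

def two_proportion_z(hits_a, n_a, hits_b, n_b):
    """Pooled two-proportion z-statistic."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    p = (hits_a + hits_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Illustrative counts only, chosen to match the rates usually reported
# for the original jam study.
z = two_proportion_z(31, 104, 4, 145)
print(z > 1.96)  # True -- comfortably past the usual 5% significance cutoff
```

The replication problem, in other words, is not that the original result was statistically marginal; it is that later experiments simply found a different pattern.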

Tim Harford explores more recent attempts to reproduce the study’s outcome:

But a more fundamental objection to the “choice is bad” thesis is that the psychological effect may not actually exist at all. It is hard to find much evidence that retailers are ferociously simplifying their offerings in an effort to boost sales. Starbucks boasts about its “87,000 drink combinations”; supermarkets are packed with options. This suggests that “choice demotivates” is not a universal human truth, but an effect that emerges under special circumstances.

Benjamin Scheibehenne, a psychologist at the University of Basel, was thinking along these lines when he decided (with Peter Todd and, later, Rainer Greifeneder) to design a range of experiments to figure out when choice demotivates, and when it does not.

But a curious thing happened almost immediately. They began by trying to replicate some classic experiments – such as the jam study, and a similar one with luxury chocolates. They couldn’t find any sign of the “choice is bad” effect. Neither the original Lepper-Iyengar experiments nor the new study appears to be at fault: the results are just different and we don’t know why.

After designing 10 different experiments in which participants were asked to make a choice, and finding very little evidence that variety caused any problems, Scheibehenne and his colleagues tried to assemble all the studies, published and unpublished, of the effect.

