Quotulatiousness

June 29, 2014

Maclean’s puts Canada on the map, sorta

Filed under: Cancon — Nicholas @ 11:39

For Canada Day, Maclean’s tries portraying the country in various ways:

Happy Canada Day! For a different perspective on the country this year, Maclean’s went to the maps. Drawing on a variety of sources, from government statistics to various online databases to tweets, here are some maps to illustrate Canada as you’ve never seen it before.

[…]

It’s always a surprise when people first learn that the very tip of southwestern Ontario is at a lower latitude than parts of California — which got us wondering: How do other parts of the country line up with the rest of the world? Here are the results, using Earthtools.org. Most of the cities on this map, and their global counterparts, lie within less than 50 km of each other, latitudinally speaking, of course. Only Quebec-Ulan Bator and Fort McMurray-Moscow are a full degree apart.

Canadian cities and other cities by latitude

[…]

What to say? Canada is a land of contrasts. It also offers up a bounty of clichés.

Canadian provinces by clichés
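Back to the latitude map for a second: the arithmetic is easy to sanity-check yourself, since one degree of latitude spans roughly 111 km anywhere on Earth, making the map’s 50 km threshold about half a degree. A minimal Python sketch, with approximate latitudes I’ve supplied for illustration (the article used Earthtools.org; these figures are mine):

    # One degree of latitude is roughly 111 km, so "within 50 km
    # latitudinally" means within about half a degree. The latitudes
    # below are approximate, for illustration only.
    KM_PER_DEGREE_LAT = 111.0

    pairs = [
        ("Windsor, ON", 42.3, "California/Oregon border", 42.0),
        ("Fort McMurray, AB", 56.7, "Moscow", 55.8),
    ]

    for city_a, lat_a, city_b, lat_b in pairs:
        gap_km = abs(lat_a - lat_b) * KM_PER_DEGREE_LAT
        print(f"{city_a} vs {city_b}: ~{gap_km:.0f} km apart, north-south")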

May 9, 2014

QotD: Real history and economic modelling

Filed under: Economics, History, Media, Quotations — Nicholas @ 08:23

I am not an economist. I am an economic historian. The economist seeks to simplify the world into mathematical models — in Krugman’s case models erected upon the intellectual foundations laid by John Maynard Keynes. But to the historian, who is trained to study the world “as it actually is”, the economist’s model, with its smooth curves on two axes, looks like an oversimplification. The historian’s world is a complex system, full of non-linear relationships, feedback loops and tipping points. There is more chaos than simple causation. There is more uncertainty than calculable risk. For that reason, there is simply no way that anyone — even Paul Krugman — can consistently make accurate predictions about the future. There is, indeed, no such thing as the future, just plausible futures, to which we can only attach rough probabilities. This is a caveat I would like ideally to attach to all forward-looking conjectural statements that I make. It is the reason I do not expect always to be right. Indeed, I expect often to be wrong. Success is about having the judgment and luck to be right more often than you are wrong.

Niall Ferguson, “Why Paul Krugman should never be taken seriously again”, The Spectator, 2013-10-13

May 4, 2014

Quarterback boom or bust metrics

Filed under: Football — Nicholas @ 11:31

At the Daily Norseman, CCNorseman has been working on developing a set of metrics for determining the chances of NFL success for prospective draft picks at the quarterback position:

This past off-season I have been scouring current and past scouting reports to try to develop a metric that we can use to evaluate quarterback prospects. I started by developing a metric to evaluate the traits of successful quarterbacks. I cataloged the traits found in pre-draft scouting reports of an elite list of 25 successful quarterbacks that have been drafted since 1998, and based the metric on those traits that were most common among that pool of players. In other words, I attempted to answer the question, “What common traits did the most successful quarterbacks in the NFL have coming out of college?”

Then I went back and re-evaluated the “success metric” based on excellent feedback from the readers here at the Daily Norseman. I also developed a second metric to evaluate the traits of quarterback busts. It was the same process, except that I catalogued the common traits of the 17 quarterback busts since 1998 and based the bust metric on those traits that were most common among those players.

That led me to the final Boom or Bust metric, which you can also find in that second link (and is listed below). The last step in this process is what you’ll find here: verifying the accuracy of the metric. I have gone back and run the metric on quarterbacks drafted in the 1st round of past drafts to see how successful it would have been at predicting the future successes of those players. The short of it is: it’s more accurate than a random guess. It’s not fool-proof, mind you, but over the course of seven drafts from 2004 through 2010, it would have accurately predicted which 1st round quarterbacks would bust and which would be serviceable or better 73% of the time.

Why did I only go back to 2004? Well, I really wanted to use at least two scouting reports for every quarterback when testing the metric to ensure better accuracy, but the farther back in time I went, the harder it was to find reliable scouting reports online. I wasn’t able to track down more than one reliable scouting report for the quarterbacks drafted in 2003 and earlier, so there really is no other reason than that. I stopped at 2010, because a quarterback needs at least 4 years in the league to qualify as a bust or not, and those quarterbacks drafted in 2011 and later haven’t had a full 4 years yet.

[…]

It’s worth pointing out that in this particular data set (2004-2010), the Bust Metric by itself was almost as accurate overall as the combined metric in predicting the future of these quarterbacks, and was 68% accurate by itself (although they each had slightly different results on a per quarterback basis). The success metric by itself was a little less accurate, correctly predicting the future only 61% of the time.

In any case, listed below are the 19 first round quarterbacks drafted between 2004 and 2010, with their metric scores from their pre-draft scouting reports and pre-draft prediction. I have taken some leeway in assigning the outcome score to this. My biggest concern in all of this is to ensure that if the metric predicts the quarterback to be in the bust category that they truly are a bust. After that, we can end up splitting hairs all day about what makes a quarterback “average” or “successful” or not. In other words, if the metric predicts that a quarterback will be merely league average, but he turns out to be a successful one, then I’ll still call it a win for the metric, because it didn’t predict that quarterback to bust. I think teams are more concerned with not having their 1st round quarterback bust (like JaMarcus Russell or Ryan Leaf) than with whether they get a Jason Campbell versus an Aaron Rodgers type.

I have given each quarterback an outcome label of “yes”, “maybe” or “no”. A “maybe” label essentially means that the player has performed reasonably well, but still has enough time left in their career to qualify for their prediction label. In those cases, the quarterback receives half-credit for their outcome.
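For those curious about the scoring mechanics, here is a minimal Python sketch of how a half-credit accuracy figure like that 73% could be computed. The quarterback records are invented placeholders, not CCNorseman’s data:

    # Hedged sketch of the accuracy scoring described above. "outcome"
    # records whether the pre-draft prediction came true ("yes"), failed
    # ("no"), or is too early to call ("maybe", worth half-credit).
    quarterbacks = [
        {"name": "QB A", "prediction": "bust", "outcome": "yes"},
        {"name": "QB B", "prediction": "serviceable or better", "outcome": "maybe"},
        {"name": "QB C", "prediction": "serviceable or better", "outcome": "no"},
    ]

    CREDIT = {"yes": 1.0, "maybe": 0.5, "no": 0.0}

    hits = sum(CREDIT[qb["outcome"]] for qb in quarterbacks)
    print(f"Metric accuracy: {hits / len(quarterbacks):.0%}")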

May 3, 2014

Fat’s negative health impact reconsidered

Filed under: Food, Health, Science — Nicholas @ 09:19

Hmm. Today seems to be health news day. In the Wall Street Journal, Nina Teicholz looks at the dubious science behind the saturated fat demonization we’ve all seen in so many health stories:

“Saturated fat does not cause heart disease” — or so concluded a big study published in March in the journal Annals of Internal Medicine. How could this be? The very cornerstone of dietary advice for generations has been that the saturated fats in butter, cheese and red meat should be avoided because they clog our arteries. For many diet-conscious Americans, it is simply second nature to opt for chicken over sirloin, canola oil over butter.

The new study’s conclusion shouldn’t surprise anyone familiar with modern nutritional science, however. The fact is, there has never been solid evidence for the idea that these fats cause disease. We only believe this to be the case because nutrition policy has been derailed over the past half-century by a mixture of personal ambition, bad science, politics and bias.

Our distrust of saturated fat can be traced back to the 1950s, to a man named Ancel Benjamin Keys, a scientist at the University of Minnesota. Dr. Keys was formidably persuasive and, through sheer force of will, rose to the top of the nutrition world — even gracing the cover of Time magazine — for relentlessly championing the idea that saturated fats raise cholesterol and, as a result, cause heart attacks.

[…]

Critics have pointed out that Dr. Keys violated several basic scientific norms in his study. For one, he didn’t choose countries randomly but instead selected only those likely to prove his beliefs, including Yugoslavia, Finland and Italy. Excluded were France, land of the famously healthy omelet eater, as well as other countries where people consumed a lot of fat yet didn’t suffer from high rates of heart disease, such as Switzerland, Sweden and West Germany. The study’s star subjects — upon whom much of our current understanding of the Mediterranean diet is based — were peasants from Crete, islanders who tilled their fields well into old age and who appeared to eat very little meat or cheese.

As it turns out, Dr. Keys visited Crete during an unrepresentative period of extreme hardship after World War II. Furthermore, he made the mistake of measuring the islanders’ diet partly during Lent, when they were forgoing meat and cheese. Dr. Keys therefore undercounted their consumption of saturated fat. Also, due to problems with the surveys, he ended up relying on data from just a few dozen men — far from the representative sample of 655 that he had initially selected. These flaws weren’t revealed until much later, in a 2002 paper by scientists investigating the work on Crete — but by then, the misimpression left by his erroneous data had become international dogma.

April 7, 2014

Big data’s promises and limitations

Filed under: Economics, Science, Technology — Nicholas @ 07:06

In the New York Times, Gary Marcus and Ernest Davis examine the big claims being made for the big data revolution:

Is big data really all it’s cracked up to be? There is no doubt that big data is a valuable tool that has already had a critical impact in certain areas. For instance, almost every successful artificial intelligence computer program in the last 20 years, from Google’s search engine to the I.B.M. Jeopardy! champion Watson, has involved the substantial crunching of large bodies of data. But precisely because of its newfound popularity and growing use, we need to be levelheaded about what big data can — and can’t — do.

The first thing to note is that although big data is very good at detecting correlations, especially subtle correlations that an analysis of smaller data sets might miss, it never tells us which correlations are meaningful. A big data analysis might reveal, for instance, that from 2006 to 2011 the United States murder rate was well correlated with the market share of Internet Explorer: Both went down sharply. But it’s hard to imagine there is any causal relationship between the two. Likewise, from 1998 to 2007 the number of new cases of autism diagnosed was extremely well correlated with sales of organic food (both went up sharply), but identifying the correlation won’t by itself tell us whether diet has anything to do with autism.

Second, big data can work well as an adjunct to scientific inquiry but rarely succeeds as a wholesale replacement. Molecular biologists, for example, would very much like to be able to infer the three-dimensional structure of proteins from their underlying DNA sequence, and scientists working on the problem use big data as one tool among many. But no scientist thinks you can solve this problem by crunching data alone, no matter how powerful the statistical analysis; you will always need to start with an analysis that relies on an understanding of physics and biochemistry.
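The Internet Explorer example is easy to reproduce in spirit: any two series that trend the same direction over a few years will correlate strongly, causation or no. A minimal sketch with invented numbers (not the real murder-rate or browser-share figures):

    # Two invented, steadily-declining series: the Pearson correlation
    # comes out near +1 even though nothing causal connects them.
    from statistics import correlation  # available in Python 3.10+

    murder_rate = [5.8, 5.7, 5.4, 5.0, 4.8, 4.7]        # per 100k, invented
    ie_share    = [80.0, 75.0, 68.0, 60.0, 52.0, 45.0]  # percent, invented

    print(f"r = {correlation(murder_rate, ie_share):.2f}")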

March 29, 2014

Nate Silver … in the shark tank

Filed under: Environment, Media — Nicholas @ 10:24

Statistician-to-the-stars Nate Silver can shrug off attacks from Republicans over his 2012 electoral forecast or from Democrats unhappy with his latest forecast for the 2014 mid-terms, but he’s finding himself under attack from an unexpected quarter right now:

Ever wondered how it would feel to be dropped from a helicopter into a swirling mass of crazed, genetically modified oceanic whitetip sharks in the middle of a USS-Indianapolis-style feeding frenzy?

Just ask Nate Silver. He’s been living the nightmare all week – ever since he had the temerity to appoint a half-way skeptical scientist as resident climate expert at his “data-driven” journalism site, FiveThirtyEight.

Silver has confessed to The Daily Show that he can handle the attacks from Paul Krugman (“frivolous”), from his ex-New York Times colleagues, and from Democrats disappointed with his Senate forecasts. But what has truly spooked this otherwise fearless seeker-after-truth, apparently, is the self-righteous rage from the True Believers in Al Gore’s Church of Climate Change.

“We don’t pay that much attention to what media critics say, but that was a piece where we had 80 percent of our commenters weigh in negatively, so we’re commissioning a rebuttal to that piece,” said Silver. “We listen to the people who actually give us legs.”

The piece in question was the debut by his resident climate expert, Roger Pielke, Jr., arguing that there was no evidence to support claims by alarmists that “extreme weather events” are on the increase and doing more damage than ever before. Pielke himself is a “luke-warmer” — that is, he believes that mankind is contributing to global warming but is not yet convinced that this contribution will be catastrophic. But neither his scientific bona fides (he was Director of the Center for Science and Technology Policy Research at the University of Colorado Boulder) nor his measured, fact-based delivery was enough to satisfy the ravening green-lust of FiveThirtyEight’s mainly liberal readership.

March 28, 2014

Opinions, statistics, and sex work

Filed under: Law, Liberty, Media — Nicholas @ 09:04

Maggie McNeill explains why the “sex trafficking” meme has been so relentlessly pushed in the media for the last few years:

Imagine a study of the alcohol industry which interviewed not a single brewer, wine expert, liquor store owner or drinker, but instead relied solely on the statements of ATF agents, dry-county politicians and members of Alcoholics Anonymous and Mothers Against Drunk Driving. Or how about a report on restaurants which treated the opinions of failed hot dog stand operators as the basis for broad statements about every kind of food business from convenience stores to food trucks to McDonald’s to five-star restaurants?

You’d probably surmise that this sort of research would be biased and one-sided to the point of being unreliable. And you’d be correct. But change the topic to sex work, and such methods are not only the norm, they’re accepted uncritically by the media and the majority of those who read the resulting studies. In fact, many of those who represent themselves as sex work researchers don’t even try to get good data. They simply present their opinions as fact, occasionally bolstered by pseudo-studies designed to produce pre-determined results. Well-known and easily-contacted sex workers are rarely consulted. There’s no peer review. And when sex workers are consulted at all, they’re recruited from jails and substance abuse programs, resulting in a sample skewed heavily toward the desperate, the disadvantaged and the marginalized.

This sort of statistical malpractice has always been typical of prostitution research. But the incentive to produce it has dramatically increased in the past decade, thanks to a media-fueled moral panic over sex trafficking. Sex-work prohibitionists have long seen trafficking and sex slavery as a useful Trojan horse. In its 2010 “national action plan,” for example, the activist group Demand Abolition writes, “Framing the Campaign’s key target as sexual slavery might garner more support and less resistance, while framing the Campaign as combating prostitution may be less likely to mobilize similar levels of support and to stimulate stronger opposition.”

March 24, 2014

Interpersonal communication in Shakespeare, or “Juliet and Her Nurse”

Filed under: Humour, Media — Nicholas @ 08:40

Emma Pierson does a bit of statistical analysis of some of Shakespeare’s plays and discovers that some of the play names are rather misleading, at least in terms of romantic dialogue:

More than 400 years after Shakespeare wrote it, we can now say that Romeo and Juliet has the wrong name. Perhaps the play should be called Juliet and Her Nurse, which isn’t nearly as sexy, or Romeo and Benvolio, which has a whole different connotation.

I discovered this by writing a computer program to count how many lines each pair of characters in Romeo and Juliet spoke to each other, with the expectation that the lovers in the greatest love story of all time would speak more than any other pair. I wanted Romeo and Juliet to end up together — if they couldn’t in the play, at least they could in my analysis — but the math paid no heed to my desires. Juliet speaks more to her nurse than she does to Romeo; Romeo speaks more to Benvolio than he does to Juliet. Romeo gets a larger share of attention from his friends (Benvolio and Mercutio) and even his enemies (Tybalt) than he does from Juliet; Juliet gets a larger share of attention from her nurse and her mother than she does from Romeo. The two appear together in only five scenes out of 25. We all knew that this wasn’t a play predicated on deep interactions between the two protagonists, but still.

I’m blaming Romeo for this lack of communication. Juliet speaks 155 lines to him, and he speaks only 101 to her. His reticence toward Juliet is particularly inexcusable when you consider that Romeo spends more time talking than anyone else in the play. (He spends only one-sixth of his time in conversation with the supposed love of his life.) One might be tempted to blame this on the nature of the plot; of course the lovers have no chance to converse, kept apart as they are by the loathing of their families! But when I analyzed the script of a modern adaptation of Romeo and Juliet — West Side Story — I found that Tony and Maria interacted more in the script than did any other pair.

All this got me thinking: Do any of Shakespeare’s lovers actually, you know, talk to each other? If Romeo and Juliet don’t, what hope do the rest of them have?
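The genuinely hard part of Pierson’s program is deciding who each speech is addressed to; once that attribution is done, the counting itself is a few lines. A minimal Python sketch, assuming the speeches are already tagged as (speaker, addressee, lines) records, with invented sample numbers rather than real counts from the play:

    # Count lines spoken between each (speaker, addressee) pair. The hard
    # part of the analysis, inferring the addressee of each speech, is
    # assumed done here; these records are invented placeholders.
    from collections import Counter

    speeches = [
        ("Juliet", "Nurse", 12),
        ("Juliet", "Romeo", 9),
        ("Romeo", "Benvolio", 15),
        ("Romeo", "Juliet", 7),
    ]

    directed = Counter()
    for speaker, addressee, n_lines in speeches:
        directed[(speaker, addressee)] += n_lines

    for (speaker, addressee), total in directed.most_common():
        print(f"{speaker} -> {addressee}: {total} lines")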

Update, 28 March: Chateau Heartiste says that this study shows that pick-up artists and “game” practitioners are right and also proves that “Everything important you need to know about men and women you can find in the works of Shakespeare”.

March 20, 2014

Today in misunderstood income inequality stats…

Filed under: Britain, Economics, Media — Nicholas @ 08:20

Tim Worstall pokes fun at a recent Oxfam report that claims that Britain’s five richest families own more than the bottom 20% of the population:

I read this and thought, “well, yes, this is obvious and what the hell’s it got to do with increasing inequality?” Of course Gerald Grosvenor (aka Duke of Westminster) has more wealth than the bottom 10 per cent of the country put together. It’s obvious that the top five families will have more than the bottom 20 per cent of all Britons. Do they think we all just got off the turnip truck or something?

They’ve also managed to entirely screw up the statistic they devised themselves by missing the point that if you’ve no debts and a £10 note then you’ve got more wealth than the bottom 10 or 20 per cent of the population has in aggregate. The bottom levels of our society have negative wealth.

[…]

Given what we classify as wealth, the poor have no assets at all. Property, financial assets (stocks, bonds etc), private sector pension plans, these are all pretty obviously wealth.

But then the state pension is also wealth: it’s a promise of a future stream of income. That is indeed wealth just as much as a share certificate or private pension is. But we don’t count that state pension as wealth in these sorts of calculations.

The right to live in a council house at a subsidised rent for the rest of your life is wealth, but that’s not counted either. Hell, the fact that we live in a country with a welfare system is a form of wealth — but we still don’t count that.

Doing this has been called (not by me, originally anyway) committing Worstall’s Fallacy. Failing to take account of the things we already do to correct a problem in arguing that more must be done to correct said problem. We already redistribute wealth by taxing the rich to provide pensions, housing, free education (only until 18 these days) and so on to people who could not otherwise afford them. But when bemoaning the amount of inequality that clearly cries out for more redistribution, we fail to note how much we’re already doing.

So Oxfam are improperly accounting for wealth and they’ve also missed the point that, given the existence of possible negative wealth, then of course one person or another in the UK will have more wealth than the entire lowest swathe.
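The £10-note point is just arithmetic on net wealth, which these surveys define as assets minus debts. A toy example with invented household figures:

    # Net wealth is assets minus debts, so a decile dominated by debtors
    # sums to a negative total, and a debt-free holder of a £10 note has
    # more wealth than the whole group combined. Figures are invented.
    bottom_decile = [-12_000, -3_500, -800, 0, 150]  # net wealth per household, £

    aggregate = sum(bottom_decile)
    print(f"Aggregate net wealth: £{aggregate:,}")           # £-16,150
    print(f"£10 beats the decile combined: {10 > aggregate}")  # True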

February 23, 2014

David Friedman looks at the “missing heat is going into the deep ocean” claim

Filed under: Environment — Nicholas @ 10:20

David Friedman is an economist, so of course he doesn’t claim to be a climate scientist. He can, however, do math and examine numerical evidence … which doesn’t seem to support the most recent explanation for the pause in global warming:

One claim I have repeatedly seen in online arguments about global warming is that it has not really paused, because the “missing heat” has gone into the ocean. Before asking whether that claim is true, it is worth first asking how anyone could know it is true. A simple calculation suggests that the answer is one couldn’t. As follows …

Part of the claim, which I assume is true, is that from 90% to 95% of global heat goes into the ocean, which implies that the heat capacity of the ocean is 10 to 20 times that of the rest of the system. If so, and if the pause in surface and atmosphere temperatures was due to heat for some reason going into the ocean instead, that should have warmed the ocean by 1/10 to 1/20th of the amount by which the rest of the system didn’t warm.

The global temperature trend in the IPCC projections is about .03°C/year. If surface and atmospheric temperature has been flat for 17 years, that would put it about .5° below trend. If the explanation is the heat going into the ocean, the average temperature of the ocean should have risen as a result above its trend by between .025° and .05°.

Would anyone like to claim that we have data on ocean temperature accurate enough to show a change that small? If not, then the claim is at this point not an observed fact, which is how it is routinely reported, but a conjecture, a way of explaining away the failure of past models to correctly predict current data.
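Friedman’s arithmetic is compact enough to check line by line. A sketch of the same back-of-the-envelope calculation:

    # Back-of-the-envelope version of the calculation above.
    trend_per_year = 0.03   # IPCC-projected surface trend, deg C per year
    flat_years = 17         # length of the "pause"
    missing = trend_per_year * flat_years   # ~0.5 deg C below trend

    # If 90-95% of heat enters the ocean, its heat capacity is roughly
    # 10-20x the rest of the system's (0.90/0.10 = 9, 0.95/0.05 = 19).
    for ratio in (10, 20):
        print(f"Ratio {ratio}: ocean warms ~{missing / ratio:.3f} deg C above trend")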

January 6, 2014

Police killed in line of duty – the good news and the not-so-good news

Filed under: Law, USA — Nicholas @ 10:32

The good news is that in the United States, the number of police officers killed in the performance of their duties dropped to a level last seen in 1959. The bad news is that the number of people killed by the police didn’t drop:

The go-to phrase deployed by police officers, district attorneys and other law enforcement-related entities to justify the use of excessive force or firing dozens of bullets into a single suspect is “the officer(s) feared for his/her safety.” There is no doubt being a police officer can be dangerous. But is it as dangerous as this oft-deployed justification makes it appear?

    The annual report from the nonprofit National Law Enforcement Officers Memorial Fund also found that deaths in the line of duty generally fell by 8 percent and were the fewest since 1959.

    According to the report, 111 federal, state, local, tribal and territorial officers were killed in the line of duty nationwide this past year, compared to 121 in 2012.

    Forty-six officers were killed in traffic related accidents, and 33 were killed by firearms. The number of firearms deaths fell 33 percent in 2013 and was the lowest since 1887.

This statistical evidence suggests being a cop is safer than it’s been since the days of Sheriff Andy Griffith. Back in 2007, the FBI put the number of justifiable homicides committed by officers in the line of duty at 391. That count only includes homicides that occurred during the commission of a felony. This total doesn’t include justifiable homicides committed by police officers against people not committing felonies and also doesn’t include homicides found to be not justifiable. But still, this severe undercount far outpaces the number of cops killed by civilians.

We should expect the number to always skew in favor of the police. After all, they are fighting crime and will run into dangerous criminals who may respond violently. But to continually claim that officers “fear for their safety” is to ignore the statistical evidence that says being a cop is the safest it’s been in years — and in more than a century when it comes to firearms-related deaths.

December 14, 2013

Canada edges ahead of the US in economic freedoms

Last week, the Fraser Institute published Economic Freedom of North America 2013 which illustrates the relative changes in economic freedom among US states and Canadian provinces:

Click to go to the full document

Reason‘s J.D. Tuccille says of the report, “Canadian Provinces Suck Slightly Less Than U.S. States at Economic Freedom”:

For readers of Reason, Fraser’s definition of economic freedom is unlikely to be controversial. Fundamentally, the report says, “Individuals have economic freedom when (a) property they acquire without the use of force, fraud, or theft is protected from physical invasions by others and (b) they are free to use, exchange, or give their property as long as their actions do not violate the identical rights of others.”

The report includes two rankings of economic freedom — one just comparing state and provincial policies, and the other incorporating the effects of national legal systems and property rights protections. Since people are subject to all aspects of the environment in which they operate, and not just locally decided rules and regulations, it’s that “world-adjusted all-government” score that matters most, and it has a big effect — especially since “gaps have widened between the scores of Canada and the United States in these areas.” The result is that:

    [I]n the world-adjusted index the top two jurisdictions are Canadian, with Alberta in first place and Saskatchewan in second. In fact, four of the top seven jurisdictions are Canadian, with the province of Newfoundland & Labrador in sixth and British Columbia in seventh. Delaware, in third spot, is the highest ranked US state, followed by Texas and Nevada. Nonetheless, Canadian jurisdictions, Prince Edward Island and Nova Scotia, still land in the bottom two spots, just behind New Mexico at 58th and West Virginia at 57th.

Before you assume that the nice folks at Fraser are gloating, or that you should pack your bags for a northern relocation, the authors caution that things aren’t necessarily getting better north of the border. Instead, “their economic freedom is declining more slowly than in the US states.”

December 10, 2013

Origins of the “infographic” plague

Filed under: Books, Media — Nicholas @ 09:28

As Tim Harford says, “So it’s HIS fault”:

In the 1930s, Austrian sociologist, philosopher and curator Otto Neurath and his wife Marie pioneered ISOTYPE — the International System Of TYpographic Picture Education, a new visual language for capturing quantitative information in pictograms, sparking the golden age of infographics in print.

The Transformer: Principles of Making Isotype Charts is the first English-language volume to capture the story of Isotype, an essential foundation for our modern visual language dominated by pictograms in everything from bathroom signage to computer interfaces to GOOD’s acclaimed Transparencies.

The real cherry on top is a previously unpublished essay by Marie Neurath, who was very much on par with Otto as Isotype’s co-inventor, written a year before her death in 1986 and telling the story of how she carried on the Isotype legacy after Otto’s death in 1946.

December 6, 2013

Mismeasuring inequality

Filed under: Economics, USA — Nicholas @ 15:20

Tim Worstall on a Wall Street Journal article which asks “how do we measure inequality”. Tim says “not that way, idiots” (although I might have imagined the “idiots” part):

The title of the piece is “How do you measure ‘inequality’?” to which a very good response is “Not that way”. For although all the numbers there are exact and accurate (well, as much as any economic statistic is such) the whole statement is entirely misleading. For the numbers that are being used for the USA are calculated on an entirely different basis to the way that the numbers for the other countries are. So much so that in this instance we have Wikipedia being more accurate than either the WSJ or the CIA itself. Which, while amusing, isn’t quite the world I think we’d all like to have.

Here’s what the problem is. Conceptually we can measure inequality in a number of different ways and this particular one, the Gini, looks at the spread of incomes across the society. OK, no need for the details of how we calculate it except for one. We again, conceptually, have two different incomes that can be measured.

So, the guy pulling down $1 million a year dealing bonds on Wall Street. Does he really have an income of $1 million a year? Or is it more true to say that he gets $600,000 a year after the Feds, NY State and NYC have all dipped their hands into his paycheck? And the guy at the other end, making $15,000 a year as a greeter at WalMart. Is he really making $15,000? Or should we add in the EITC, the State EITC (if there is one), Section 8 housing vouchers, Medicaid and all the rest to what he’s earning? He might be consuming as if he’s getting $25 k a year, even though his market income is only $15k.

What we actually do is we calculate both of these. The first is called the Gini at market incomes, the second the Gini after taxes and benefits. There’s nothing either right or wrong about either measure: they just are what they are. However, we do have to be clear about which we are using in any circumstance and similarly, very clear about not comparing inequality in one country by one measure with inequality in another by the other measure. Yet, sadly, that is exactly what is being done here.
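The cleanest way to see why the two bases can’t be mixed is to compute both Ginis on one toy population. A minimal sketch, with invented incomes and an invented tax-and-transfer adjustment:

    # Gini via the mean-absolute-difference formula:
    # G = sum_i sum_j |x_i - x_j| / (2 * n^2 * mean). Incomes are invented.
    def gini(incomes):
        n = len(incomes)
        mean = sum(incomes) / n
        mad = sum(abs(a - b) for a in incomes for b in incomes)
        return mad / (2 * n * n * mean)

    market = [15_000, 30_000, 50_000, 90_000, 1_000_000]
    # Taxes clip the top; EITC, Section 8 and the like lift the bottom:
    post_tax_transfer = [25_000, 33_000, 50_000, 75_000, 600_000]

    print(f"Gini at market incomes:        {gini(market):.2f}")
    print(f"Gini after taxes and benefits: {gini(post_tax_transfer):.2f}")  # lower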

December 4, 2013

Apple iPhone pricing in different markets

Filed under: Economics, Technology — Nicholas @ 08:06

In Forbes, Tim Worstall explains a misunderstanding of Ricardo’s Iron Law of One Price on the part of the Guardian:

This is a fun little bit of data calculation and visualisation. It’s a database and then mapping of the global price list for Apple’s iPhone 5s. And there are two interesting ways of using it. The first is simply to look at how prices differ around the world:

iPhone price map

You can do this in USD or GBP as you wish. And this can be used to explore the violations of Ricardo’s Iron Law of One Price. Which is where David Ricardo insisted that the prices of traded goods would inevitably move to being equal all over the world. Well, equal minus the transport costs of getting them around the world. And transport costs for an iPhone are trivial: it would be amazing if Apple were paying more than a couple of dollars to airfreight one to anywhere at all. So, we would expect prices to be the same everywhere: but they obviously are not.

[…]

However, when The Guardian reports on this something appears to go wrong. Not their fault I suppose, it’s about economics and lefties never really do get that subject. But here:

    Similar to the way the Economist tracks the cost of the ubiquitous McDonald’s burger across countries, nations and states, Mobile Unlocked tracked the price of the iPhone 5S across 47 countries in native currencies with native sales tax, and then converted those prices into US dollars (USD) or British pounds (GBP).

No … the Big Mac Index operates entirely and exactly the other way around. We need to make the distinction between traded goods and non-traded goods. The Iron Law only works on traded goods. What we’re trying to find out with PPP calculations is what are the price differentials of non-traded goods? Which is why the Big Mac is used. It is (supposedly at least) exactly the same all over the world. It is also made almost entirely from local produce bought at the local price in local markets. US Big Macs use American beef, Argentine ones Argentine and so on. So we get to see the impact of local prices on the same product worldwide. That’s what we’re actually attempting with that Big Mac Index. The Economist then goes on to compare the prices of this non-traded good with exchange rates and attempt to work out whether the exchange rates are correct or not.

This is entirely different from using the price of a traded good to measure local price variations. For what we’re going to be measuring here is what interventions there are into stopping the Iron Law working, not what local price levels are.
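The Big Mac arithmetic itself is a single division: the implied PPP rate is the local price over the US price, and comparing that with the market exchange rate gives the apparent over- or under-valuation. A sketch with invented prices and rates:

    # Implied PPP rate = local Big Mac price / US Big Mac price.
    # All prices and exchange rates below are invented placeholders.
    us_price = 4.62  # USD

    countries = {
        # name: (local Big Mac price, market FX rate in local units per USD)
        "Norway": (48.0, 6.0),
        "China": (16.6, 6.1),
    }

    for name, (local_price, market_rate) in countries.items():
        implied_ppp = local_price / us_price       # local units per USD
        valuation = implied_ppp / market_rate - 1  # + = overvalued vs USD
        print(f"{name}: implied PPP {implied_ppp:.2f}, {valuation:+.0%} vs market")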
