Quotulatiousness

May 2, 2013

A layman’s guide to evaluating statistical claims

Filed under: Media, Politics, Science — Nicholas @ 10:23

We’re awash with statistics, 43.2% of which seem to be made up on the spot (did you see what I did there?). Betsey Stevenson & Justin Wolfers offer some guidance on how non-statisticians should approach the numbers we’re presented with in the media:

So how can non-experts and policy makers separate the useful research from the dross? Allow us to offer six rules.

1. Focus on how robust a finding is, meaning that different ways of looking at the evidence point to the same conclusion. Do the same patterns repeat in many data sets, in different countries, industries or eras? Are the findings fragile, changing as one makes small changes in how phenomena are measured, and do the results depend on whether particularly influential observations are included? Thanks to Moore’s Law of increasing computing power, it has never been easier or cheaper to assess, test and retest an interesting finding. If the author hasn’t made a convincing case, then don’t be convinced. (A toy version of this kind of sensitivity check is sketched, in Python, after the list.)

2. Data mavens often make a big deal of their results being statistically significant, which is a statement that it’s unlikely their findings simply reflect chance. Don’t confuse this with something actually mattering. With huge data sets, almost everything is statistically significant. On the flip side, tests of statistical significance sometimes tell us that the evidence is weak, rather than that an effect is nonexistent. Remember, results can be useful even if they don’t meet significance tests. Sometimes questions are so important that we need to glean whatever meaning we can from available data. The best bad evidence is still more informative than no evidence. (A toy illustration of significance without importance follows the list.)

3. Be wary of scholars using high-powered statistical techniques as a bludgeon to silence critics who are not specialists. If the author can’t explain what they’re doing in terms you can understand, then you shouldn’t be convinced. You wouldn’t be convinced by an analysis just because it was written in ancient Latin, so why be impressed by an abundance of Greek letters? Sophisticated statistical methods can be helpful, but they can also hide more than they reveal.

4. Don’t fall into the trap of thinking about an empirical finding as “right” or “wrong.” At best, data provide an imperfect guide. Evidence should always shift your thinking on an issue; the question is how far. (One way to make “how far” concrete is sketched after the list.)

5. Don’t mistake correlation for causation. For instance, even after revisions and corrections, Reinhart and Rogoff have demonstrated that economic growth is typically slower when government debt is higher. But does high debt cause slow growth, or is slow growth in gross domestic product the cause of higher debt-to-GDP ratios? Or are there other important determinants, such as populist spending by a government looking to get re-elected, which is more likely when growth is slow and typically drives debt up? (A simulation of exactly this kind of confounding appears after the list.)

6. Always ask “so what?” Are the factors that drove the observed negative correlation between debt and GDP likely to exist today, in the U.S.? Does it even make sense to speak of “the” relationship between debt and economic growth, when there are surely many such relationships: Governments borrowing simply to fund their re-election are likely harming growth, while those investing in much-needed public works can provide the foundation for growth. The “so what” question is about moving beyond the internal validity of a finding to asking about its external usefulness.
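
To make a few of these rules concrete, here are some toy sketches in Python. Everything in them is invented for illustration, and none of it comes from Stevenson and Wolfers. First, rule 1’s worry about influential observations: refit a simple regression with each point left out in turn and see how far the slope moves.

```python
# Toy robustness check: how much does the estimated slope move when any
# single observation is dropped? All data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)
y[0] += 8.0  # plant one influential outlier

full_slope = np.polyfit(x, y, 1)[0]

# A robust finding barely moves when one point is dropped; a fragile
# one swings noticeably.
loo_slopes = [
    np.polyfit(np.delete(x, i), np.delete(y, i), 1)[0]
    for i in range(n)
]

print(f"slope on full sample:      {full_slope:.3f}")
print(f"leave-one-out slope range: {min(loo_slopes):.3f} to {max(loo_slopes):.3f}")
```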
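
Rule 2’s point that “with huge data sets, almost everything is statistically significant” is easy to demonstrate. Here a made-up effect of 0.1 points on a roughly 100-point scale, with a million observations per group, sails past any significance threshold while staying practically negligible.

```python
# Huge sample, trivial effect: statistically significant, practically
# negligible. The groups and effect size are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 1_000_000
control = rng.normal(loc=100.0, scale=15.0, size=n)
treated = rng.normal(loc=100.1, scale=15.0, size=n)  # 0.1-point effect

t_stat, p_value = stats.ttest_ind(treated, control)
print(f"p-value: {p_value:.1e}")  # far below 0.05
print(f"effect:  {treated.mean() - control.mean():.3f} points on a ~100-point scale")
```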
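
Rule 4’s “how far should evidence shift your thinking?” has a natural, if informal, formalization in Bayes’ rule. The authors don’t invoke it, and the probabilities below are pure invention, but the arithmetic shows what shifting without settling looks like.

```python
# Bayes' rule as a gloss on "how far should this shift my thinking?"
# All three probabilities are made up for the example.
prior = 0.30            # belief, before this study, that the effect is real
p_data_if_true = 0.80   # chance of seeing this evidence if it is real
p_data_if_false = 0.40  # chance of seeing it anyway (weak, noisy evidence)

posterior = (p_data_if_true * prior) / (
    p_data_if_true * prior + p_data_if_false * (1 - prior)
)
print(f"belief shifts from {prior:.0%} to {posterior:.0%}")  # moved, not settled
```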
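
Finally, rule 5’s confounding story, loosely in the Reinhart-Rogoff spirit: a simulated third factor drives both “debt” and “growth”, producing a strong negative correlation between two variables that, by construction, have no causal link to each other.

```python
# A lurking third variable drives both series; neither causes the other.
# Data are simulated, loosely echoing the debt-and-growth example.
import numpy as np

rng = np.random.default_rng(2)
n = 200
confounder = rng.normal(size=n)  # e.g. pre-election spending pressure
growth = -0.8 * confounder + rng.normal(scale=0.5, size=n)
debt = 0.8 * confounder + rng.normal(scale=0.5, size=n)

def residuals(y, z):
    """Residuals of y after a simple linear regression on z."""
    slope, intercept = np.polyfit(z, y, 1)
    return y - (slope * z + intercept)

raw = np.corrcoef(debt, growth)[0, 1]
partial = np.corrcoef(residuals(debt, confounder),
                      residuals(growth, confounder))[0, 1]

print(f"raw correlation:                {raw:+.2f}")      # strongly negative
print(f"controlling for the confounder: {partial:+.2f}")  # near zero
```

None of this settles any real debate, of course; it just shows how cheap these checks have become, which is exactly the authors’ point about computing power.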
