Quotulatiousness

March 5, 2020

“Maybe … Trump’s victory caused an unusual number of spontaneous abortions in Ontario”

Filed under: Cancon, Health, Politics, USA — Nicholas @ 05:00

Colby Cosh on the recently published findings of a likely p-hacked study claiming that the election of President Donald Trump was reflected in the sex ratio at birth in liberal-leaning areas of Ontario:

Front view of Toronto General Hospital in 2005; the new wing shown in the photograph was completed in 2002. Photo via Wikimedia Commons.

On Monday there came a surprising piece of science news from BMJ Open, an open-access title affiliated with the British Medical Journal. It seems two researchers from Mount Sinai Hospital in Toronto, an endocrinologist and a statistician, have convinced themselves that the election of Donald Trump to the American presidency in November 2016 had a nerve-shattering effect on Ontario. The province of Ontario, that is, not the Los Angeles suburb.

Trump’s victory, according to the researchers, was so awful that, like a war or a disaster, it briefly altered the sex ratio in live births in the province. This is, I should say, a fairly well-established effect of extreme social traumas. When mothers experience physiological stress, the uterine environment becomes less hospitable, and male fetuses, more vulnerable to such changes, become less likely to survive pregnancy. (This makes sense from a Darwinian standpoint, because girls are more valuable than boys in replacing population after a calamity.)

In 2020 nobody should need me to say that a cute, counterintuitive scientific “result” like this, appearing in the newspapers on literally the day of its publication, should be greeted with extreme skepticism. The sex ratio at birth, always expressed in medical literature as a ratio of boys to girls, tends to hover around 1.06 under natural circumstances. (Even in an advanced civilization, things even out within the age cohort over the next 20 years as the lads explore dirt bikes, rock fights, and roofs.)

The Mount Sinai researchers, Ravi Retnakaran and Chang Ye, had records of the sexes of all children born in Ontario from April 2010 to October 2017. Even in a place as large as Ontario, the ratio naturally bounces around randomly between 1.1 and 1.0, and there are seasonal effects that the duo corrected for.
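To get a feel for how much of that bounce is pure chance, here is a quick back-of-the-envelope simulation in Python. The monthly birth count is an assumed round figure for illustration, not Ontario's actual data:

import numpy as np

rng = np.random.default_rng(1)

births_per_month = 11_500   # assumed round figure, not Ontario's real count
p_boy = 0.515               # corresponds to a long-run boy/girl ratio of about 1.06

boys = rng.binomial(births_per_month, p_boy, size=12)
girls = births_per_month - boys
print(np.round(boys / girls, 3))  # a year of monthly ratios, typically spanning roughly 1.02 to 1.10

Even with the true probability held perfectly constant, binomial noise alone at that cohort size moves the monthly ratio across most of the 1.0-to-1.1 range.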

There is no obvious signature of a Trump effect in a scatterplot of the adjusted data, which serves as a warning that the effect being claimed may be an artifact of analysis. But when you apply “segmented regression” using the same parameters as Retnakaran and Ye, you find that the (unadjusted) ratio dipped to 1.03 in March 2017, the fifth month after Trump’s win, and then climbed to 1.08 in June and July before reverting to the long-term norm.
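For readers wondering what "segmented regression" actually involves, here is a minimal sketch in Python. The data are synthetic, the November 2016 breakpoint is an assumption for illustration, and the seasonal adjustment the authors applied is omitted — this shows the general technique, not their model:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

n_months = 91                  # April 2010 .. October 2017
t = np.arange(n_months)
break_idx = 79                 # November 2016 (assumed breakpoint)

# Synthetic monthly boy/girl ratios bouncing around the long-term norm.
ratio = 1.06 + rng.normal(0, 0.02, n_months)

# Design matrix for a basic segmented regression:
# pre-break trend, level shift at the break, and post-break change in slope.
post = (t >= break_idx).astype(float)
X = np.column_stack([t, post, post * (t - break_idx)])
X = sm.add_constant(X)

model = sm.OLS(ratio, X).fit()
print(model.summary())         # inspect the level-shift and slope-change terms

The trap is that the breakpoint terms will fit *something* in any noisy series; whether the fitted shift is a real effect or an artifact is exactly the question Cosh raises.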

April 30, 2017

[p-hacking] “is one of the many questionable research practices responsible for the replication crisis in the social sciences”

Filed under: Health, Media, Science — Nicholas @ 03:00

What happens when someone digs into the statistics of highly influential health studies and discovers oddities? We're in the process of finding out in the case of "rockstar researcher" Brian Wansink, several of whose studies are now under the statistical microscope:

Things began to go bad late last year when Wansink posted some advice for grad students on his blog. The post, which has subsequently been removed (although a cached copy is available), described a grad student who, on Wansink’s instruction, had delved into a data set to look for interesting results. The data came from a study that had sold people coupons for an all-you-can-eat buffet. One group had paid $4 for the coupon, and the other group had paid $8.

The hypothesis had been that people would eat more if they had paid more, but the study had not found that result. That’s not necessarily a bad thing. In fact, publishing null results like these is important — failure to do so leads to publication bias, which can lead to a skewed public record that shows (for example) three successful tests of a hypothesis but not the 18 failed ones. But instead of publishing the null result, Wansink wanted to get something more out of the data.

“When [the grad student] arrived,” Wansink wrote, “I gave her a data set of a self-funded, failed study which had null results… I said, ‘This cost us a lot of time and our own money to collect. There’s got to be something here we can salvage because it’s a cool (rich & unique) data set.’ I had three ideas for potential Plan B, C, & D directions (since Plan A had failed).”

The responses to Wansink’s blog post from other researchers were incredulous, because this kind of data analysis is considered an incredibly bad idea. As this very famous xkcd strip explains, trawling through data, running lots of statistical tests, and looking only for significant results is bound to turn up some false positives. This practice of “p-hacking” — hunting for significant p-values in statistical analyses — is one of the many questionable research practices responsible for the replication crisis in the social sciences.
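The mechanism is easy to demonstrate. The simulation below (my own toy example, not Wansink's data) runs twenty comparisons on data containing no real effect at all and counts how many come out "significant" anyway:

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_tests = 20        # e.g. twenty subgroup comparisons trawled from one data set
alpha = 0.05
false_positives = 0

for _ in range(n_tests):
    # Two groups drawn from the SAME distribution: the true effect is zero.
    a = rng.normal(0, 1, 50)
    b = rng.normal(0, 1, 50)
    _, p = stats.ttest_ind(a, b)
    if p < alpha:
        false_positives += 1

print(f"{false_positives} of {n_tests} null comparisons came out 'significant'")

With a threshold of p < 0.05, roughly one in twenty tests of a true null will clear the bar by chance alone — so hunting through a "rich & unique" data set for whatever happens to be significant is almost guaranteed to turn something up.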

H/T to Kate at Small Dead Animals for the link.
