Quotulatiousness

April 28, 2023

Use and misuse of the term “regression to the mean”

Filed under: Books, Business, Football, Sports, USA — Nicholas @ 03:00

I still follow my favourite pro football team, the Minnesota Vikings, and last year they hired a new General Manager who was unlike the previous GM in that not only was he a big believer in analytics, he actually had worked in the analytics area for years before moving into an executive position. The first NFL draft under the new GM and head coach was much more in line with what the public analytics fans wanted — although the result on the field is still undetermined as only one player in that draft class got significant playing time. Freddie deBoer is a fan of analytics, but he wants to help people understand what the frequently misunderstood term “regression to the mean” actually … means:

Kwesi Adofo-Mensah, General Manager of the Minnesota Vikings. Adofo-Mensah was hired in 2022 to replace Rick Spielman.
Photo from the team website – https://www.vikings.com/team/front-office-roster/kwesi-adofo-mensah

The sports analytics movement has been shown time and again to help teams win games, across sports and leagues, and so, unsurprisingly, essentially every team in every major sport employs an analytics department. I in fact find it very annoying that there are still statheads that act like they’re David and not Goliath for this reason. I also think that the impact of analytics on baseball has been a disaster from an entertainment standpoint. There’s a whole lot one could say about the general topic. (I frequently think about the fact that Moneyball helped advance the cause of analytics, and analytics is fundamentally correct in its claims, and yet the fundamental narrative of the book was wrong.*) But while the predictive claims of analytics continue to evolve, they’ve been wildly successful.

I want to address one particular bugaboo I have with the way analytical concepts are discussed. It was inevitable that popularizing these concepts was going to lead to some distortion. One topic that I see misused all the time is regression/reversion to the mean, or the tendency of outlier performances to be followed up by performances that are closer to the average (mean) performance for that player or league. (I may use reversion and regression interchangeably here, mostly because I’m too forgetful to keep one in my head at a time.) A guy plays pro baseball for five years, he hits around 10 or 12 home runs a year, then he has a year where he hits 30, then he goes back to hitting in the low 10s again in following seasons – that’s an example of regression to the mean. After deviation from trends we tend (tend) to see returns to trend. Similarly, if the NFL has a league average of about 4.3 yards per carry for a decade, and then the next year the league average is 4.8 without a rule change or other obvious candidate for differences in underlying conditions, that’s a good candidate for regression to the mean the next year, trending back towards that lower average. It certainly doesn’t have to happen, but it’s likely to happen for reasons we’ll talk about.

Intuitively, the actual tendency isn’t hard to understand. But I find that people talk about it in a way that suggests a misunderstanding of why regression to the mean happens, and I want to work through that here.

So. We have a system, like “major league baseball” or “K-12 public education in Baltimore” or “the world”. Within those systems we have quantitative phenomena (like on-base percentage, test scores, or the price of oil) that are explainable by multiple variables, AKA the conditions in which the observed phenomena occur. Over time, we observe trends in those phenomena, which can be in the system as a whole (leaguewide batting average), in subgroups (team batting average), or individuals (a player’s batting average). Those trends are the result of underlying variables/conditions, which include internal factors like an athlete’s level of ability, as well as elements of chance and unaccounted-for variability. (We could go into a big thing about what “chance” really refers to in a complex system, but … let’s not.) The more time goes on, and the more data is collected, the more confidently we can say that a trend is an accurate representation of some underlying reality, again like an athlete’s level of ability. When we say a baseball player is a good hitter, it’s because we’ve observed over time that he has produced good statistics in hitting, and we feel confident that this consistency is the product of his skill and attributes rather than exogenous factors.

However, we know that good hitters have bad games, just as bad hitters have good games. We know that good hitters have slumps where they have bad stretches of three or five or ten or more games. We even acknowledge that someone can be a good hitter and have a bad season, or at least a season that’s below their usual standards. However, if a hitter has two or three bad seasons, we’re likely to stop seeing poor performance as an outlier and change our overall perception of the player. The outlier becomes the trend. There is no certain or objective place where that transition happens.

Here’s the really essential point I want to make: outliers tend to revert to the mean because the initial outlier performance was statistically unlikely; a repeat of that outlier performance is statistically unlikely for the same reasons, but not because of the previous outlier. For ease of understanding, let’s pretend underlying conditions stay exactly the same, which of course will never happen in a real-world scenario. If that’s true, then the chance of an equally unlikely outcome is exactly the same as it was the first time; repetition of outliers is not made any less likely by the fact that the initial outlier happened. That is, there’s no inherent reason why a repetition of the outlier becomes more unlikely, given consistent underlying conditions. I think it’s really important to avoid the Gambler’s Fallacy here, thinking that a roulette wheel is somehow more likely to come up red because it’s come up black a hundred times in a row. Statistically unlikely outcomes in the past don’t make statistically unlikely outcomes any less likely in the future. The universe doesn’t “remember” that there’s been an outlier before. Reversion to the mean is not a force in the universe. It’s not a matter of results being bent back into the previous trend by the gods. Rather, if underlying conditions are similar (if a player is about as good as he was the previous year and the role of variability and chance remains the same), and he had an unlikely level of success/failure the prior year, he’s unlikely to repeat that performance because reaching that level of performance was unlikely in the first place.
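That distinction is easy to check with a quick simulation. The sketch below is my own illustration rather than anything from deBoer’s piece, and the numbers (a true talent of roughly 11 home runs a season, with a lot of year-to-year noise) are invented for the example. It models a hitter whose underlying conditions never change, picks out the seasons that came in unusually high, and then looks at what happened the following year:

    import random

    random.seed(0)

    TRUE_TALENT = 11.0   # hypothetical long-run average: about 11 home runs a season
    NOISE_SD = 5.0       # season-to-season variability (health, luck, pitching faced, etc.)
    SEASONS = 200_000

    def season():
        # underlying conditions never change: same talent, same amount of chance, every year
        return random.gauss(TRUE_TALENT, NOISE_SD)

    pairs = [(season(), season()) for _ in range(SEASONS)]

    # pick out the years that came in far above the player's true level ...
    outliers = [(a, b) for a, b in pairs if a >= 25]

    avg_outlier = sum(a for a, _ in outliers) / len(outliers)
    avg_next = sum(b for _, b in outliers) / len(outliers)

    print(f"average outlier season:   {avg_outlier:.1f} HR")
    print(f"average following season: {avg_next:.1f} HR")   # lands back near 11

The follow-up seasons average back near 11 not because anything corrects the outlier, but because a 25-plus home run season was an unlikely draw in the first place and remains exactly as unlikely the next year; the simulation has no memory of the previous season at all.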


    * – the A’s not only were not a uniquely bad franchise, they had won the most games of any team in major league baseball in the ten years prior to the Moneyball season
    – major league baseball had entered an era of unusual parity at that time, belying Michael Lewis’s implication that it was a game of haves and have-nots
    – readers come away from the book convinced that the A’s won so many games because of Scott Hatteberg and Chad Bradford, the players that epitomize the
    Moneyball ethos, but the numbers tell us they were so successful because of a remarkably effective rotation in Tim Hudson, Barry Zito, and Mark Mulder, and the offensive skill of shortstop Miguel Tejada – all of whom were very highly regarded players according to the old-school scouting approach that the book has such disdain for.
    – Art Howe was not an obstructionist asshole.

April 5, 2023

Justin Trudeau chooses the Argentinian model over the Canadian model

In The Line, Matt Gurney considers the proposition that “Canada is broken”:

To the growing list of articles grappling with the issue of whether Canada is broken — and how it’s broken, if it is — we can add this one, by the Globe and Mail‘s Tony Keller. I can say with all sincerity that Keller’s is one of the better, more thoughtful examples in this expanding oeuvre. Keller takes the issue seriously, which is more than can be said of some Canadian thought leaders, whose response to the question is often akin to the Bruce Ismay character from Titanic after being told the ship is doomed.

(Spoiler: it sank.)

But back to the Globe article. Specifically, Keller writes about how once upon a time, just over a century ago, Canada and Argentina seemed to be on about the same trajectory toward prosperity and stability. If anything, Argentina may have had the edge. Those with much grasp of 20th-century history will recall that that isn’t exactly how things panned out. I hope readers will indulge me a long quote from Keller’s piece, which summarizes the key points:

    By the last third of the 20th century, [Argentina] had performed a rare feat: it had gone backward, from one of the most developed countries to what the International Monetary Fund now classifies as a developing country. Argentina’s economic output is today far below Canada’s, and consequently the average Argentinian’s income is far below that of the average Canadian.

    Argentina was not flattened by a meteor or depopulated by a plague. It was not ground into rubble by warring armies. What happened to Argentina were bad choices, bad policies and bad government.

    It made no difference that these were often politically popular. If anything, it made things worse since the bad decisions – from protectionism to resources wasted on misguided industrial policies to meddling in markets to control prices – were all the more difficult to unwind. Over time the mistakes added up, or rather subtracted down. It was like compound interest in reverse.

And this, Keller warns, might be Canada’s future. As for the claim made by Pierre Poilievre that “Canada is broken”, Keller says this: “It’s not quite right, but it isn’t entirely wrong.”

I disagree with Keller on that, but I suspect that’s because we define “broken” differently. We at The Line have tried to make this point before, and it’s worth repeating here: we think a lot of the pushback against the suggestion that Canada might be broken is because Canada is still prosperous, comfy, generally safe, and all the rest. Many, old enough to live in paid-off homes that are suddenly worth a fortune, may be enjoying the best years of their lives, at least financially speaking. Suggesting that this is “broken” sometimes seems absurd.

But it’s not: it’s possible we are broken but enjoying a lag period, spared from feeling the full effects of the breakdown by our accumulated wealth and social capital. The engines have stopped, so to speak, but we still have enough momentum to keep sailing for a bit. Put more bluntly, “broken” isn’t a synonym for “destroyed”. A country can still be prosperous and stable and also be broken — especially if it was prosperous and stable for long enough before it broke. The question then becomes how long the prosperity and stability will last. Canada is probably rich enough to get away with being broken for a good long while. What’s already in the pantry will keep us fed and happy for years to come.

But not indefinitely.

March 26, 2023

Plandemic? Manufactured crisis? Mass formation psychosis?

In The Conservative Woman, James Delingpole lets his skeptic flag fly:

“Covid 19 Masks” by baldeaglebluff is licensed under CC BY-SA 2.0

Tell me about your personal experiences of Covid 19. Actually, wait, don’t. I think I may have heard it already, about a million times. You lost all sense of smell or taste – and just how weird was that? It floored you for days. It gave you a funny dry cough, the dryness and ticklishness of which was unprecedented in your entire coughing career. You’ve had flu a couple of times and, boy, when you’ve got real flu do you know it. But this definitely wasn’t flu. It was so completely different from anything you’ve ever known, why you wouldn’t be surprised to learn that it had been bioengineered in a lab with all manner of spike proteins and gain-of-function additives, perhaps even up to and including fragments of the Aids virus …

Yeah, right. Forgive me for treading on the sacred, personal domain of your lived experience. But might I cautiously suggest that none of what you went through necessarily validates lab-leak theory. Rather what it may demonstrate is the power of susceptibility, brainwashing and an overactive imagination. You lived – we all did – through a two-year period in which health-suffering anecdotes became valuable currency. Whereas in the years before the “pandemic”, no one had been much interested in the gory details of your nasty cold, suddenly everyone wanted to compare notes to see whether they’d had it as bad as you – or, preferably, for the sake of oneupmanship, even worse. This in turn created a self-reinforcing mechanism of Covid panic escalation: the more everyone talked about it, the more incontrovertible the “pandemic” became.

Meanwhile, in the real world, hard evidence – as opposed to anecdotal evidence – for this “pandemic” remained stubbornly non-existent. The clincher for me was a landmark article published in January 2021 by Simon Elmer at his website Architects For Social Housing. It was titled “Lies, Damned Lies and Statistics: Manufacturing the Crisis”.

In it Elmer asked the question every journalist should have asked but which almost none did: is this “pandemic” really as serious as all the experts, and government ministers and media outlets and medics are telling us it is? The answer was a very obvious No. As the Office for National Statistics data cited by Elmer clearly showed, 2020 – Year Zero for supposedly the biggest public health threat since “Spanish Flu” a century earlier – was one of the milder years for death in the lives of most people.

Let’s be clear about this point, because something you often hear people on the sceptical side of the argument say is, “Of course, no one is suggesting that Covid didn’t cause a horrific number of deaths.” But that’s exactly what they should be suggesting: because it’s true. Elmer was quoting the Age Standardised Mortality statistics for England and Wales dating back to 1941. What these show is that in every year up to and including 2008, more people died per head of population than in the deadly Covid outbreak year of 2020. Of the previous 79 years, 2020 had the 12th lowest mortality rate.

Covid, in other words, was a pandemic of the imagination, of anecdote, of emotion rather than of measured ill-health and death. Yet even now, when I draw someone’s attention to that ONS data, I find that the most common response I get is one of denial. That is, when presented with the clearest, most untainted (this was before ONS got politicised and began cooking the books), impossible-to-refute evidence that there was NO Covid pandemic in 2020, most people, even intelligent ones, still choose to go with their feelings rather with the hard data.

This natural tendency many of us have to choose emotive narratives over cool evidence makes us ripe for exploitation by the cynical and unscrupulous. We saw this during the pandemic when the majority fell for the exciting but mendacious story that they were living through a new Great Plague, and that only by observing bizarre rituals – putting strips of cloth over one’s face, dancing round one another in supermarkets, injecting unknown substances into one’s body – could one hope to save oneself and granny. And we’re seeing it now, in a slightly different variant, in which lots of people – even many who ought to know better – are falling for some similarly thrilling but erroneous nonsense about lab-leaked viruses.

It’s such a sexy story that I fell for it myself. In those early days when all the papers were still dutifully trotting out World Health Organisation-approved propaganda about pangolins and bats and the apparently notorious wet market (whatever the hell that is) in Wuhan, I was already well ahead of the game. I knew, I just knew, as all the edgy, fearless seekers of truth did that it was a lab leak wot done it. If you knew where to dig, there was a clear evidence trail to support it.

We edgy, fearless truth seekers knew all the names and facts. Dodgy Peter Daszak of the EcoHealth Alliance was in it up to the neck; so too, obviously, was the loathsomely chipper and smugly deceitful Anthony Fauci. We knew that all this crazy, Frankenvirus research had initially been conducted in Chapel Hill, North Carolina, but had been outsourced to China after President Obama changed the regulations and it became too much of a hot potato for US-based labs. And let’s not forget Ukraine – all those secret bio-research labs run on behalf of the US Deep State, but then exposed as the Russians unhelpfully overran territory such as Mariupol.

March 21, 2023

Few professions are as “optimism biased” as primary school teachers

Filed under: Books, Education, USA — Nicholas @ 03:00

Freddie deBoer was commissioned to write this for a professional publication for teachers, but it was “too hot” for the editors, so he’s posted it to his Substack instead:

There’s a bias that runs throughout our educational discourse, coming from our media, academia, and the think tanks and foundations that have such sway in education policy. It’s a bias that exists both because of a natural human desire to see every child succeed and because the structural incentives in the field make rejecting that bias professionally risky. The bias I’m talking about is optimism bias, the insistence that all problems in education are solvable and that we can fix them if only we want to badly enough. At least a half-century of research, spending, policy experimentation, and dogged effort has utterly failed to close the gaps that so vex our political class. But still we hear the same old song about how we could close those gaps tomorrow if we really wanted to, an attitude that has distorted education policy and analysis for decades.

My first book, The Cult of Smart, was a commercial failure. It was released during the height of the pandemic and thus my ability to promote it was limited, but by any measure the market rejected it. It’s tough to produce a labor of love like that and find that few people were interested in it.

But there was a silver lining: since publication in 2020 I’ve heard from dozens and dozens of teachers, thanking me for putting their thoughts to print. These educators come from public, private, and charter schools, from schools with affluent student bodies and schools that are mired in poverty, from big city school districts and from low-population regional rural schools. And again and again, these teachers shared the same perspective: they agreed with the book’s overall argument, and often had thought similar things themselves for years, but felt they could not express them publicly for fear of professional consequences.

The essential argument of the book is that overwhelming empirical evidence shows that students sort themselves into academic ability bands in the performance spectrum early in life, with remarkable consistency; that the most natural and simplest explanation for this tendency is that there is such a thing as individual academic potential; and that this individual academic potential is most likely influenced by genes. When we look at academic performance, what we see again and again is that students perform at a given level relative to peers early in schooling and maintain that level throughout formal education. (I make that case at considerable length here.) A vast number of interventions thought to influence relative performance have been revealed to make no difference in rigorous research, including truly dramatic changes to schooling and environment. Meta-analyses and literature reviews that assess the strength of many different educational interventions find effect sizes in the range of .01 to .3 standard deviations, small by any standards and subject to all sorts of questions about research quality and randomization. Even the most optimistic reading of the research literature suggests that almost nothing moves the needle in academic outcomes. Almost nothing we try works.

This implies that common sense is correct and that individual students have their own natural or intrinsic level of academic potential, which we have no reason to believe we can dramatically change. I believe that we can change large group disparities in education (such as the racial achievement gap) by addressing major socioeconomic inequalities through government policy. But even after we eliminate racial or gender gaps, there will be wide differences between individual students, regardless of pedagogy or policy. When Black students as a group score at parity with white students, there will still be large gaps within the population of Black students or white or any other group you can name, and we have no reliable interventions to make the weakest perform like the strongest.

My book’s argument is attractive to teachers because they’ve lived under an educational ideology that insists that every student is a budding genius whose potential waits to be unlocked by a dedicated teacher – and which holds teachers to that unachievable standard. From the right, they’re subject to “no excuses” culture, the constant insistence from the education reform movement that student failures are the result of lazy and feckless teachers; from the left, they’re subject to a misguided egalitarianism that mistakes the fact that every child is important and deserves to be nurtured for the idea that every child has perfectly equal potential. The result is a system that presses teachers to deliver learning outcomes in their classrooms that they can’t possibly achieve. But many of them feel that they can’t push back, for fear of professional consequences. If they speak frankly about the fact that different students have different levels of individual potential, they’ll likely be accused of shirking their duty.

January 20, 2023

Christopher Snowdon on our latest “Clown World” alcohol guidelines

Filed under: Cancon, Government, Health, Wine — Nicholas @ 03:00

At Velvet Glove, Iron Fist, Christopher Snowdon pokes gigantic holes in the stated justification for the latest Canadian drink consumption recommendations (also mentioned in this post yesterday):

Canada is on the brink of making itself an international laughing stock by cutting its drinking guidelines from two drinks a day to two drinks a week. The previous guidelines were only set in 2011 so Canadian drinkers can be forgiven for being suspicious about this dramatic change. The evidence base has not significantly changed in the interim. The evidence for the health benefits of moderate drinking has continued to pile up.

The only thing that has really changed is that neo-temperance zealots like Tim Stockwell have tightened their grip on alcohol research. Stockwell and his “no safe level” pal Tim Naimi both live in Canada and are both authors of the report that has made the ludicrous new recommendations.

I have been saying for over a decade that the “public health” plan is to get the guidelines down to zero so they can start regulating alcohol like tobacco. The evidence does not support this fundamentally ideological campaign and so the evidence has been dropped in favour of fantasy modelling and cherry-picking.

[…]

A Canadian “standard drink” contains 13.45 grams of alcohol. Three standard drinks equals 40 grams. Four standard drinks equals 53 grams. The meta-analysis has no data on people who drink so little, so the claim that colon cancer risk increases at three or more standard drinks is not supported even by the authors’ own preferred source.

As for breast cancer, which can only affect half the population and is partly why most countries have different guidelines for men and women, the report cites this meta-analysis of 22 studies, 13 of which found no statistically significant association with drinking. It pooled the studies and reported a 10 per cent increase in risk for people drinking 10 grams of alcohol a day. As with the colon cancer study, this was the minimum quantity studied so it tells us nothing about Canadians who drink 3-5 standard drinks.

In terms of mortality, another meta-analysis found that light drinking was not positively associated with any form of cancer, including breast cancer, and was negatively associated with cancer in a couple of instances […]

As countless studies have shown, heart disease and stroke risk is substantially reduced among light and moderate drinkers. For example, a meta-analysis of prospective cohort studies (which track people’s drinking habits and health status over a number of years and are the most reliable studies in observational epidemiology) found that drinkers were 25 per cent less likely to die from coronary heart disease than teetotallers. The evidence for strokes is similar.

This is the main reason why life expectancy is longer for moderate drinkers and the relationship between alcohol consumption and mortality is J-shaped.

The authors of the Canadian report essentially ignore all this evidence and instead focus on a cherry-picked meta-analysis written by Stockwell, Naimi and pals which massively adjusted the figures to arrive at their desired conclusion. This is inexcusable.

At The Line, Jen Gerson points out the utter absurdity of public health officials doing their best Carry Nation bar-smashing imitations while at the same time pushing for “harm reduction” policies for cocaine, heroin, and other illegal narcotics:

“Bayer Makes Heroin” by dog97209 is licensed under CC BY-NC-ND 2.0.

“The guidance is based on the principle of autonomy in harm reduction and the fundamental idea behind it that people living in Canada have a right to know that all alcohol use comes with risk,” noted the CCSU and, hey, yeah!

I like to understand my risks so that I can make informed decisions.

But you know what else poses significant risk?

Lots of morphine and cocaine.

I think this is generally known. But God help you if you want to engage in a conversation about the risks society might be courting with safe supply or even harm-reduction strategies, and have fun being labelled a Conservative troglodyte who just wants suffering addicts to die in the street. You’re probably just a rich, callous asshole who opposes all of these evidence-based policies and blows second-hand smoke into the faces of your children while drinking your sixth beer of the night at the local pub. Just shut up and pick up those discarded needles in your yard, you monster.

I was picking on Health Canada previously, but they’re hardly the only ones who display a bizarre split-personality on these issues. Any story by or on the CBC on the matter of alcohol use now sounds like something straight out of the Women’s Christian Temperance Union. Yet just try to find critical reporting on safe consumption sites or safe supply policies. Almost all of it is uniformly glowing.

[…]

Obviously, I don’t think that our public-health officials are telling Canadians that heroin takes the edge off a hard day better than a glass of red or a pint of beer. But did we learn nothing over the course of the pandemic about the importance of consistent and clear public-health communications? The target audience for this is not those who have carefully studied harm reduction and substance use disorders. It’s people who just like to have a drink with dinner.

If our governments want to maintain any credibility, they can’t be uptight about how many glasses of pinot noir we drink, and then appear to be loosey goosey on heroin. It’s just impossible to take that kind of suck-and-blow at face value, but that’s exactly how this messaging will come across to people who aren’t closely engaged with this issue. “The government wants to give free hard drugs to junkies but thinks my cocktail is a problem?”

January 3, 2023

1943 in Numbers – WW2 Special

World War Two
Published 1 Jan 2023

This war is massive. Our chronological coverage helps give us an understanding of it, but sometimes statistics help us understand the bigger picture.

December 29, 2022

Selection bias in polling

Filed under: Media, USA — Nicholas @ 03:00

At Astral Codex Ten, Scott Alexander points out that it’s impossible to do any kind of poll without some selection bias, so you can’t automatically dismiss any given poll on that basis. The kind of selection bias, however, may indicate whether the results will have any relationship to reality:

The real studies by professional scientists usually use Psych 101 students at the professional scientists’ university. Or sometimes they will put up a flyer on a bulletin board in town, saying “Earn $10 By Participating In A Study!” in which case their population will be selected for people who want $10 (poor people, bored people, etc). Sometimes the scientists will get really into cross-cultural research, and retest their hypothesis on various primitive tribes — in which case their population will be selected for the primitive tribes that don’t murder scientists who try to study them. As far as I know, nobody in history has ever done a psychology study on a truly representative sample of the world population.

This is fine. Why?

Selection bias is disastrous if you’re trying to do something like a poll or census. That is, if you want to know “What percent of Americans own smartphones?” then any selection at all limits your result. The percent of Psych 101 undergrads who own smartphones is different from the percent of poor people who want $10 who own smartphones, and both are different from the percent of Americans who own smartphones. The same is potentially true about “how many people oppose abortion?” or “what percent of people are color blind?” or anything else trying to find out how common something is in the population. The only good ways to do this are a) use a giant government dataset that literally includes everyone, b) hire a polling company like Gallup which has tried really hard to get a panel that includes the exact right number of Hispanic people and elderly people and homeless people and every other demographic, c) do a lot of statistical adjustments and pray.

Selection bias is fine-ish if you’re trying to do something like test a correlation. Does eating bananas make people smarter because something something potassium? Get a bunch of Psych 101 undergrads, test their IQs, and ask them how many bananas they eat per day (obviously there are many other problems with this study, like establishing causation — let’s ignore those for now). If you find that people who eat more bananas have higher IQ, then fine, that’s a finding. If you’re right about the mechanism (something something potassium), then probably it should generalize to groups other than Psych 101 undergrads. It might not! But it’s okay to publish a paper saying “Study Finds Eating Bananas Raises IQ” with a little asterisk at the bottom saying “like every study ever done, we only tested this in a specific population rather than everyone in the world, and for all we know maybe it isn’t true in other populations, whatever.” If there’s some reason why Psych 101 undergrads are a particularly bad population to test this in, and any other population is better, then you should use a different population. Otherwise, choose your poison.
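Alexander’s distinction is easy to demonstrate with a toy simulation. The sketch below is my own illustration, not something from the ACX post, and every number in it is invented: smartphone ownership is made to depend on age, a pretend “real” banana effect on IQ is baked in, and the “sample” is simply the youngest slice of the population, standing in for Psych 101 undergrads:

    import random

    random.seed(1)

    N = 100_000

    def person():
        age = random.uniform(18, 80)
        owns_phone = random.random() < (0.95 if age < 35 else 0.60)   # younger people own more phones
        bananas = random.uniform(0, 3)                                 # bananas eaten per day
        iq = 100 + 2.0 * bananas + random.gauss(0, 15)                 # a pretend "real" banana effect
        return age, owns_phone, bananas, iq

    population = [person() for _ in range(N)]
    sample = [p for p in population if p[0] < 25]                      # "Psych 101 undergrads"

    def ownership_rate(group):
        return sum(1 for _, owns, _, _ in group if owns) / len(group)

    def banana_iq_slope(group):
        # ordinary least-squares slope of IQ on bananas: covariance / variance
        xs = [b for _, _, b, _ in group]
        ys = [q for _, _, _, q in group]
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        var = sum((x - mx) ** 2 for x in xs)
        return cov / var

    print(f"phone ownership, whole population: {ownership_rate(population):.2f}")
    print(f"phone ownership, undergrad sample: {ownership_rate(sample):.2f}   (the poll is way off)")
    print(f"banana-IQ slope, whole population: {banana_iq_slope(population):.2f}")
    print(f"banana-IQ slope, undergrad sample: {banana_iq_slope(sample):.2f}  (the correlation survives)")

The poll number from the sample is badly wrong because the selection (being young) is correlated with the thing being counted (owning a phone), while the estimated banana-IQ slope barely moves because that selection has nothing to do with the relationship being measured. If it did (say, the banana effect worked differently in young people), the correlation would be distorted too, which is exactly Alexander’s caveat about choosing a population where there’s no particular reason to expect the effect to differ.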

December 9, 2022

QotD: Computer models of “the future”

Filed under: Economics, Media, Quotations, Technology — Nicholas @ 01:00

The problem with all “models of the world”, as the video puts it, is that they ignore two vitally important factors. First, models can only go so deep in terms of the scale of analysis to attempt. You can always add layers — and it is never clear when a layer that is completely unseen at one scale becomes vitally important at another. Predicting higher-order effects from lower scales is often impossible, and it is rarely clear when one can be discarded for another.

Second, the video ignores the fact that human behavior changes in response to circumstance, sometimes in radically unpredictable ways. I might predict that we will hit peak oil (or be extremely wealthy) if I extrapolate various trends. However, as oil becomes scarce, people discover new ways to obtain it or do without it. As people become wealthier, they become less interested in the pursuit of wealth and therefore become poorer. Both of those scenarios, however, assume that humanity will adopt a moral and optimistic stance. If humans become decadent and pessimistic, they might just start wars and end up feeding off the scraps.

So, interestingly, what the future looks like might be as much a function of the music we listen to, the books we read, and the movies we watch when we are young as of the resources that are available.

Note that the solution they propose to our problems is internationalization. The problem with internationalizing everything is that people have no one to appeal to. We are governed by a number of international laws, but when was the last time you voted in an international election? How do you effect change when international policies are not working out correctly? Who do you appeal to?

The importance of nationalism is that there are well-known and generally-accepted procedures for addressing grievances with the ruling class. These international clubs are generally impervious to the appeals (and common sense) of ordinary people and tend to promote virtue-signaling among the wealthy class over actual virtue or solutions to problems.

Jonathan Bartlett, quoted in “1973 Computer Program: The World Will End In 2040”, Mind Matters News, 2019-05-31.

October 28, 2022

QotD: “Cliodynamics”, aka “megahistory”

Filed under: Books, History, Quotations — Nicholas @ 01:00

For this week’s musing, I want to discuss, in a fairly brief way, my views of “megahistory” or “cliodynamics” – questions about which tend to come up a fair bit in the comments – and also Isaac Asimov, after a fashion. Fundamentally, the promise of these sorts of approaches is to apply the same kind of mathematical modeling in use in many of the STEM fields to history with the promise of uncovering clear rules or “laws” in the noise of history. It is no accident that the fellow who coined the term “cliodynamics”, Peter Turchin, has his training not in history or political science but in zoology; he is trying to apply the sort of population modeling methods he pioneered on Mexican Bean Beetles to human populations. One could also put Steven Pinker, trained as a psychologist, and his Better Angels in this category as well, and long-time readers will know how frequently I recommend that folks read Azar Gat, War in Human Civilization instead of The Better Angels of our Nature.1

Attentive readers will have already sensed that I have issues with these kinds of arguments; indeed, for all of my occasional frustrations with political science literature (much of which is perfectly fine, but it seems a frequent and honestly overall positive dynamic that historians tend to be highly critical of political scientists) I consider “cliodynamics” to generally take the worst parts of data-driven political science methodologies to apotheosis while discarding most of the virtues of data-driven poli-sci work.

As would any good historian, I have a host of nitpicks, but my objection to the idea of “cliodynamics” has to do with the way that it proposes to tear away the context of the historical data. I think it is worth noting at the outset the claim actually being made here because there is often a bit of motte-and-bailey that goes on, where these sorts of megahistories make extremely confident and very expansive claims and then, when challenged, retreat to much more restricted claims, but Turchin in particular is explicit in Secular Cycles (2009) that “a basic premise of our study is that historical societies can be studied with the same methods physicists and biologists used to study natural systems” in the pursuit of discovering “general laws” of history.

Fundamentally, the approach is set on the premise that the solution to the fact that the details of society are both so complex (imagine charting out the daily schedules of every person on earth for even a single day) and typically so poorly attested is to aggregate all of that data to generate general rules which could cover any population over a long enough period. To my mind, there are two major problems here: predictability and evidence. Let’s start with predictability.

And that’s where we get to Isaac Asimov, because this is essentially also how the “psychohistory” of the Foundation series functions (or, for the Star Trek fans, how the predictions in the DS9 episode “Statistical Probabilities”, itself an homage to the Foundation series, function). The explicit analogy offered is that of the laws that govern gasses: while no particular molecule of a gas can be modeled with precision, the entire body of gas can be modeled accurately. Statistical probability over a sufficiently large sample means that the individual behaviors of the individual gas molecules combine in the aggregate to form a predictable whole; the randomness of each molecule “comes out in the wash” when combined with the randomness of the rest.2

I should note that Turchin rejects comparisons to Asimov’s psychohistory (but also embraced the comparison back in 2013), but they are broadly embraced by his boosters. Moreover, Turchin’s claim at the end of that blog post that “prediction is overrated” is honestly a bit bizarre given how quick he is when talking with journalists to use his models to make predictions; Turchin has expressed some frustration with the tone of Graeme Wood’s piece on him, but “We are almost guaranteed” is a direct quote that hasn’t yet been removed and I can speak from experience: The Atlantic‘s fact-checking on such things is very vigorous. So I am going to assume those words escaped the barrier of his teeth and also I am going to suggest here that “We are almost guaranteed” is, in fact, a prediction and a fairly confident one at that.

The problem with applying something like the ideal gas law – or something like the population dynamics of beetles – to human societies is fundamentally interchangeability. Statistical models like these have to treat individual components (beetles, molecules) the way economists treat commodities: part of a larger group where the group has qualities, but the individuals merely function to increase the group size by 1. Raw metals are a classic example of a commodity used this way: add one ton of copper to five hundred tons of copper and you have 501 tons of copper; all of the copper is functionally interchangeable. But of course any economist worth their pencil-lead will be quick to remind you that not all goods are commodities. One unit of “car” is not the same as the next. We can go further, one unit of “Honda Civic” is not the same as the next. Heck, one unit of 2012 Silver Honda Civic LX with 83,513 miles driven on it is not the same as the next even if they are located in the same town and owned by the same person; they may well have wildly different maintenance and accident histories, for instance, which will impact performance and reliability.

Humans have this Honda Civic problem (that is, they are not commodities) but massively more so. Now of course these theories do not formally posit that all, say, human elites are the same, merely that the differences between humans of a given grouping (social status, ethnic group, what have you) “come out in the wash” at large scales with long time horizons. Except of course they don’t and it isn’t even terribly hard to think of good examples.

    1 Yes, I am aware that Gat was consulted for Better Angels and blurbed the book. This doesn’t change my opinion of the two books. My issue is fundamentally evidentiary: War is built on concrete, while Better Angels is built on sand when it comes to the data they propose to use. As we’ll see, that’s a frequent issue.

    2 Of course the predictions in the Foundation series are not quite flawlessly perfect. They fail in two cases I can think of: the emergence of a singular exceptional individual with psychic powers (the Mule) and situations in which the subjects of the predictions become aware of them. That said Seldon is able to predict things with preposterous accuracy, such that he is able to set up a series of obstacles for a society he knows they will overcome. The main problem is that these challenges frequently involve conflict or competition with other humans; Seldon is at leisure to assume such conflicts are predictable, which is to say they lack Clausewitzian (drink!) friction. But all conflicts have friction; competition between peers is always unpredictable.

Bret Devereaux, “Fireside Friday: October 15, 2021”, A Collection of Unmitigated Pedantry, 2021-10-15.

October 27, 2022

QotD: Life expectancy before the modern age

Filed under: Health, History, Quotations — Nicholas @ 01:00

Locke held the same assumptions about life, because life was almost inconceivably cheap in his day. Locke was born in 1632. A person born then would have a decent shot at making it to age 50 if he survived until age 5 … but only about half did. And “a decent shot” needs to be understood blackjack-style. Conventional wisdom says to hit if the dealer shows 16 and you’re holding 16 yourself — even though you’ve got a 62% chance of going bust, you’re all but certain to lose money if you stand. Living to what we Postmoderns call “middle age” was, in Locke’s world, hitting on a 16. We Postmoderns hear of a guy who dies at 50 and we assume he did it to himself — he was a grossly obese smoker with a drug problem or a race car driver or something. In Locke’s world, they’d wonder what his secret was to have made it so long.

Severian, “Overturning Locke: Life”, Rotten Chestnuts, 2019-09-11.

September 22, 2022

California’s Central Valley of despair

Filed under: Economics, USA — Nicholas @ 03:00

At Astral Codex Ten, Scott Alexander wonders why California’s Central Valley is in such terrible shape — far worse than you’d expect even if the rest of California is looking a bit curdled:

Here’s a topographic map of California (source):

You might notice it has a big valley in the center. This is called “The Central Valley”. Sometimes it also gets called the San Joaquin Valley in the south, or the Sacramento Valley in the north.

The Central Valley is mostly farms — a little piece of the Midwest in the middle of California. If the Midwest is flyover country, the Central Valley is drive-through country, with most Californians experiencing it only on their way between LA and SF.

Most, myself included, drive through as fast as possible. With a few provisional exceptions — Sacramento, Davis, some areas further north — the Central Valley is terrible. It’s not just the temperatures, which can reach 110°F (43°C) in the summer. Or the air pollution, which by all accounts is at crisis level. Or the smell, which I assume is fertilizer or cattle-related. It’s the cities and people and the whole situation. A short drive through is enough to notice poverty, decay, and homeless camps worse even than the rest of California.

But I didn’t realize how bad it was until reading this piece on the San Joaquin River. It claims that if the Central Valley were its own state, it would be the poorest in America, even worse than Mississippi.

This was kind of shocking. I always think of Mississippi as bad because of a history of racial violence, racial segregation, and getting burned down during the Civil War. But the Central Valley has none of those things, plus it has extremely fertile farmland, plus it’s in one of the richest states of the country and should at least get good subsidies and infrastructure. How did it get so bad?

First of all, is this claim true?

I can’t find official per capita income statistics for the Central Valley, separate from the rest of California, but you can find all the individual counties here. When you look at the ones in the Central Valley, you get a median per capita income of $21,729 (this is binned by counties, which might confuse things, but by good luck there are as many people in counties above the median-income county as below it, so probably not by very much). This is indeed lower than Mississippi’s per capita income of $25,444, although if you look by household or family income, the Central Valley does better again.

Of large Central Valley cities, Sacramento has a median income of $33,565 (but it’s the state capital, which inflates it with politicians and lobbyists), Fresno of $25,738, and Bakersfield of $30,144. Compare to Mississippi, where the state capital of Jackson has $23,714, and numbers 2 and 3 cities Gulfport and Southaven have $25,074 and $34,237. Overall Mississippi comes out worse here, and none of these seem horrible compared to eg Phoenix with $31,821. Given these numbers (from Google), urban salaries in the Central Valley don’t seem so bad. But when instead I look directly at this list of 280 US metropolitan areas by per capita income, numbers are much lower. Bakersfield at $15,760 is 260th/280, Fresno is 267th, and only Sacramento does okay at 22nd. Mississippi cities come in at 146, 202, and 251. Maybe the difference is because Google’s data is city proper and the list is metro area?

Still, it seems fair to say that the Central Valley is at least somewhat in the same league as Mississippi, even though exactly who outscores whom is inconsistent.

September 4, 2022

Who was reading during Plague Year Two, and what format did they prefer?

Filed under: Books, Business, Cancon — Nicholas @ 03:00

In the latest edition of the SHuSH newsletter, Kenneth Whyte summarizes some of the findings of the most recent Booknet survey:

H/T to Marc Adkins who shared this meme at the perfect moment for me to steal it, file off the serial number and repost it here.

A few months ago, BookNet Canada, which does a lot of valuable research into the book market, released the 2021 edition of its annual survey of Canadian leisure and reading habits. It’s always an interesting study. I was slow getting to it this year because there’s been so much else going on. Here are the ten most interesting findings.

  1. Canadians have had plenty of time on their hands: 81 per cent report having enough or more than enough leisure. Pre-COVID, about 25 per cent of respondents said they had more-than-enough; during COVID, that jumped to about 35 per cent. The pandemic wasn’t all bad.
  2. Canadians read books more than they listen to radio or play video games but less than they shop or cook; 42 per cent of us (led by 58 per cent of the 65-plus crowd) read books daily; 35 per cent of the 18-29 age group read daily and 57 per cent of that cohort read at least once a week. Young people are no more likely to read books less than once a month than any other under-65 segment, which bodes well for the future of the industry.
  3. The statement “books are for enjoyment, entertainment, or leisure” received a ‘yes’ from 62 per cent of respondents, and a ‘sometimes’ from 34 per cent; the statement “books are for learning or education” received a ‘yes’ from 41 per cent of respondents and a ‘sometimes’ from 50 per cent.
  4. The top reasons for selecting a book to read are the author (40 per cent), the book’s description (30 per cent), recommendations (25 per cent), the main character or the series (20 per cent), its bestselling status (14 per cent), and reference needs (13 per cent). “Recommendations and the impact of bestseller lists have trended down from 2019 to 2021.”
  5. The love affair with print continues with 68 per cent of readers citing hard copies as their preferred format. Ebooks came in at 16 per cent and audiobooks at 10 per cent. Interestingly, readers had a marked preference for paperbacks over hardcovers. Format preferences differ from age group to age group, with some evidence that the kids might not be keen on physical books:

I was interested in that apparent rejection of physical books by the 18-29 cohort so I looked back at the last two years of the survey […] The results are quite different, suggesting a methodological shortcoming (the survey sample is 1,282 adults so the margin of error will be large when you eliminate those who don’t read a given format, those without format preferences, and break the remainder down into five age groups).

August 26, 2022

QotD: The WARG rating – “Wins Above Replacement Goldstein”

No totalitarian regime has ever successfully solved what you might call the Emmanuel Goldstein Problem. They just can’t exist without some kind of existential threat to rally around; it’s their nature. In 1984, the Party simply created Goldstein out of whole cloth, but they seemed to believe this was just another temporary expedient — they were counting on technology to do all the heavy lifting of mass mind control, so they wouldn’t have to resort to things like Goldstein and MiniTrue.

Obviously that ain’t gonna work in Clown World, Cthulhuvious and Sasqueetchia being notso hotso on the STEM. They’ll always need a Goldstein, then, and that’s a real problem, because whatever else Bad Orange Man is, he’s also pushing 80 years old. How long would he have, even in a sane world? 10 more years, tops? And the candidates for Replacement BOM are generally a sorry lot … but even if they weren’t, they’re about to get purged, too. The WARG is already going negative …

For overseas readers (and those not conversant with “Moneyball”): Baseball has these weird “sabermetric” stats that purport to compare players from different teams and eras in terms of absolute value. It’s acronymed (it’s a word) WAR, Wins Above Replacement; “replacement” being an absolutely average player. Like all baseball “sabermetrics” it quickly gets ridiculous, but see here. According to this guy, then, Babe Ruth has a per-season WAR of 10.48 in right field. That means Babe Ruth, himself, personally, alone, was worth 10 and a half wins above your “average” player. If Ruth goes down for the season in a tragic Spring Training beer mishap, you can go ahead and take 10 wins off the Yankees’ record that season (assuming they replace the Bambino with some scrub just off the bus, which back then is what would’ve happened).

WARG, then, is Wins Above Replacement Goldstein. I’d say that Orange Man set the bar for Goldsteins, but that’s not statistically useful, since Trump Derangement Syndrome is so far the apex of liberal lunacy. To make statistical comparisons useful — to find a “replacement Goldstein,” as it were — we have to have someone the Left considered an existential enemy at the time, but who didn’t really do much in the grand scheme of things. So I nominate George W. Bush. If Bush is the “Replacement Goldstein,” then his WARG is a nice round zero. Trump would have a Babe Ruth-ian WARG.

This gives us a convenient measurement for looking at various Republicans, both current and historical. Richard Nixon would have a pretty high WARG — he drove them even more nuts than W. did — and Ronnie Raygun would be up there, too. Gerald Ford would have a slightly negative WARG, since not even the New York Times could pretend Gerald fucking Ford was a threat to the Progressive takeover. Your steeply negative WARGs would be those “Republicans” actively working with the opposition, like Bitch McConnell.

I’d argue that Ron DeSantis and maybe Greg Abbott still have positive WARGs … for now. But they’re going to get gulaged here in pretty short order. Who’s the next guy on the bench? Your Marjorie Taylor Greenes and whatnot drive certain segments of the Left insane, but she’s just too ludicrous to have a positive WARG. And then things start getting really pathetic …

The Law of Diminishing Returns makes the WARG problem even more acute. Freakout fatigue is a real thing. The Media will give it the old college try, of course, but you really just can’t convince people that a goof like Greene is some kind of existential threat to Our Democracy. Her WARG goes negative every time she opens her mouth.

Severian, “Salon Roundup”, Founding Questions, 2022-08-20.

July 29, 2022

“Shrinkflation” isn’t the only way companies try to sell you less for the same price

Filed under: Business, Economics — Nicholas @ 05:00

And, as Virginia Postrel points out, “shrinkflation” does get noticed for economic statistics, unlike some of the other changes many companies are making:

Original image from www.marpat.co.uk

My latest Bloomberg Opinion column is explained well in an excellent subhead (contrary to popular assumptions, writers don’t craft the headlines or subheads that appear on their work): “Packaging less stuff for the same price doesn’t fool consumers or economists. But diminishing quality imposes equally maddening extra costs that are almost impossible to measure.” Excerpt:

    If a 16-ounce box contracts to 14 ounces and the price stays the same, I asked Bureau of Labor Statistics economist Jonathan Church, how is that recorded? “Price increase”, he said quickly. You just divide the price by 14 instead of 16 and get the price per ounce. Correcting for shrinkflation is straightforward.

    New service charges for things that used to be included in the price, from rice at a Thai restaurant to delivery of topsoil, also rarely sneak past the inflation tallies any more than they fool consumers.

    But a stealthier shrinkflation is plaguing today’s economy: declines in quality rather than quantity. Often intangible, the lost value is difficult to capture in price indexes.

    Faced with labor shortages, for example, many hotels have eliminated daily housekeeping. For the same room price, guests get less service. It’s not conceptually different from shrinking a bag of potato chips. But would the consumer price index pick up the change?

    Probably not, Church said.

This phenomenon, which Doug Johnson aptly dubbed “disqualiflation” in a Facebook comment, is widespread. One example is the four-hour airport security line I chronicled in an earlier Substack post. Another is the barely trained newbie who screws up your sandwich order — a far more common experience today than four years ago. It’s the flip side of a phenomenon I wrote about in The Substance of Style and in economics columns in the early 2000s (see here and here).

    During the 2000s and 2010s, inflation was probably overstated because of unmeasured quality increases. Now there’s the opposite phenomenon. Quality reductions have become so pervasive that even today’s scary inflation numbers are almost certainly understated.

If you can read the column at Bloomberg, please do. But if you run into the paywall, which allows a few articles a month, you can use this link to the WaPo version, which doesn’t have links.
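Coming back to the measurable kind of shrinkflation for a moment: the correction Church describes in the excerpt above is just a per-unit price comparison, which a few lines of Python make concrete. The package sizes and prices below are hypothetical, not figures from the column:

    def per_ounce_increase(old_price, old_ounces, new_price, new_ounces):
        """Percent change in price per ounce when a package shrinks."""
        old_per_oz = old_price / old_ounces
        new_per_oz = new_price / new_ounces
        return (new_per_oz / old_per_oz - 1) * 100

    # A 16 oz box shrinks to 14 oz while the shelf price stays at $4.00:
    print(f"{per_ounce_increase(4.00, 16, 4.00, 14):.1f}% increase in price per ounce")
    # prints 14.3% -- recorded as a price increase even though the sticker price never moved

That denominator is the whole trick: quantity shrinkflation has an “ounces” to divide by, so it shows up in the index, while a hotel room with no housekeeping offers nothing comparable to divide by, which is why the quality version mostly escapes measurement.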

June 30, 2022

2020’s spike in homicides in the United States

Filed under: Law, Politics, USA — Nicholas @ 03:00

At Astral Codex Ten, Scott Alexander looks at the rapid rise in murders across the United States in early 2020, and considers the common explanation for the phenomenon:

In my review of San Fransicko, I mentioned that it was hard to separate the effect of San Francisco’s local policies from the general 2020 spike in homicides, which I attributed to the Black Lives Matter protests and subsequent police pullback.

Several people in the comments questioned my attribution, saying that they’d read news articles saying the homicide spike was because of the pandemic, or that nobody knew what was causing the spike. I agree there are many articles like that, but I disagree with them. Here’s why:

Timing

When exactly did the spike start? The nation shut down for the pandemic in mid-March 2020, but the BLM protests didn’t start until after George Floyd’s death in late May 2020. So did the homicide spike start in March, or May?

Let’s check in with the Council on Criminal Justice:

It very clearly started in late May, not mid-March. The months of March, April, and early May had the same number of homicides as usual.

[…]

Police Pullback

My specific claim is that the protests caused police to do less policing in predominantly black areas. This could be because of any of:

  • Police interpreted the protests as a demand for less policing, and complied.
  • Police felt angry and disrespected after the protests, and decided to police less in order to show everybody how much they needed them.
  • Police worried they would be punished so severely for any fatal mistake made during policing that they became less willing to take the risk.
  • The “Defund The Police” movement actually resulted in police being defunded, either of literal funds or political capital, and that made it harder for them to police.

I don’t want to speculate on which of these factors was most decisive, only to say that at least one of them must be true, and that police did in fact pull back.

[…]

Victims

Who is being targeted in these extra murders?

The 2020 homicide spike primarily targeted blacks.

(there also seems to be a much smaller spike for Native Americans, but there are so few Natives that I think this might be random, or unrelated).

Most violent crime is within a racial community, and there was no corresponding rise in hate crimes the way I would expect if this was whites targeting blacks, so I think the perpetrators were most likely also black. This was a rise in the level of violence within black communities.

A priori there’s no reason to expect lockdowns and “cabin fever” to hit blacks much harder than every other ethnic group. But there are lots of reasons to expect that the Black Lives Matter protests would cause police to pull back from black communities in particular. I think this is independent evidence that the homicide spike was because of the protests and not the pandemic.
