The prime fact of human nature which the wise statesman must take into account is that men will exert themselves for their own benefit, or for that of their families, regarded as an extension of themselves, as they will exert themselves for no one else; and, in particular, men are not prepared to work for the state or for any other collectivity as they will work for themselves or for their families … If it is made impossible for him to advance himself or his family by his exertions, the average man will cease to exert himself. No motive comparable in its effects … has yet been known in the history of the human race … Because socialism expects the average man to exert himself for the state as he would for himself, … the socialist is doomed to disappointment when he comes to put his ideas into practice.
Ivor Thomas, The Socialist Tragedy, 1951.
May 1, 2026
April 30, 2026
QotD: The terrible economics of (most) recycling efforts
New York City confidently predicted that it would save money by starting a mandatory recycling program in 1992, but it took so much extra labor to collect and process the recyclables that the city couldn’t recoup the costs from selling the materials. In fact, the recyclables often had so little value that the city had to pay still more money to get rid of them. The recycling program cost the city more than $500 million during its first seven years, and the losses have continued to mount. A new study by Howard Husock of the Manhattan Institute shows that eliminating the city’s recycling program and sending all its municipal trash to landfills could now save taxpayers hundreds of millions of dollars annually — enough money to increase the parks department’s budget by at least half.
Even those calculations underestimate the cost of recycling because they include only the direct outlays, chiefly the $686 per ton that the city spends to collect recyclables. But what about all the valuable time that New Yorkers spend sorting and rinsing their trash and delivering it to the recycling bin? For a New York Times Magazine article in 1996, I hired a Columbia University student to keep track of how much time he spent recycling cans and bottles and how much material he gathered in a week. Using those figures (eight minutes to gather four pounds), I calculated that if the city paid New Yorkers a typical janitor’s wage for their recycling labors, it would cost $792 per ton of recyclables — over $100 per ton more than what the city pays its sanitation workers to collect it.
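Tierney's back-of-the-envelope figure checks out. A minimal sketch of the arithmetic, with the caveat that the $11.88/hour wage below is an assumed value chosen to reproduce the article's $792 figure, not a number stated in the piece:

```python
# Reconstructing the household recycling-labor cost from the stated inputs:
# eight minutes of sorting/rinsing/delivering yields four pounds of recyclables.
MINUTES_PER_POUND = 8 / 4        # 2 minutes of labor per pound
POUNDS_PER_TON = 2000

hours_per_ton = MINUTES_PER_POUND * POUNDS_PER_TON / 60   # ~66.7 hours per ton

WAGE_PER_HOUR = 11.88            # assumed mid-1990s janitor's wage ($/hour)
cost_per_ton = hours_per_ton * WAGE_PER_HOUR

print(round(cost_per_ton))       # → 792
```

At roughly 67 hours of household labor per ton, any plausible 1990s wage puts the figure well above the $686 per ton the city paid its sanitation workers.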
As the economics of recycling worsened, cities in America and Europe found that the only viable markets for their recyclables were in poor countries, chiefly in China and other Asian nations, where processing recyclables was still profitable, thanks to lower wages and lower standards for worker safety and environmental quality. But as those countries have gotten wealthier, they’ve become reluctant to accept foreign trash. As bales of unwanted recyclables pile up in warehouses, towns have had to start sending them to landfills, and dozens of American municipalities have finally had the sense to cancel their recycling programs.
John Tierney, “Let’s Hold On to the Throwaway Society”, City Journal, 2020-09-13.
Update, 1 May: Welcome, Instapundit readers! Have a look around at some of my other posts you may find of interest. I send out a daily summary of posts here through my Substack – https://substack.com/@nicholasrusson – which you can subscribe to if you’d like to be informed of new posts in the future.
April 29, 2026
QotD: The battlefield role of the general in pre-modern battles
We have to start not with tactics or the physics of shouting orders, but with cultural expectations, which means establishing some foundations. First, in a pre-modern battle (arguably in any battle) morale is the most critical element of the battle; battles are not won by killing all of the enemies, but by making the enemies run away. They are thus won and lost in the minds of the soldiers (whose minds are, of course, heavily influenced by the likelihood that they will be killed or the battle lost, which is why all of the tactics still matter). Second, and we’ve actually discussed this before, it is important to remember that the average soldier in the army likely has no idea if the plan of battle is good or not or even if the battle is going well or not; he cannot see those things because his vision is likely blocked by all of his fellow soldiers all around him and because (as discussed last time) the battlefield is so large that even with unobstructed vision it would be hard to get a sense of it.
So instead of assessing a battle plan – which they cannot observe – soldiers tend to assess battle commanders. And they are going to assess commanders not against abstract first principles (nor can they just check their character sheet to see how many “stars” they have next to “command”), but against their idea of what a “good general” looks like. And that idea is – as we’re about to demonstrate – going to be pretty dependent on their culture because different cultures import very different assumptions about war. As I noted back in the Helm’s Deep series, “an American general who slaughtered a goat in front of his army before battle would not reassure his men; a Greek general who failed to do so might well panic them.” An extreme example to be sure, but not an absurd one. In essence then, a general who does the things his culture expects from him is effectively performing leadership as we’ve defined it above.
But the inverse of this expectation held by the soldiers is that generals are not generally free to command however they’d like, even if they wanted to (though of course most generals are going to have the same culturally embedded sense of what good generalship is as their soldiers). Precisely because a general knows his soldiers are watching him for signs that he is their idea of a “good general”, the general is under pressure to perform generalship, whatever that may look like in this cultural context. That is going to be particularly true because almost all of the common models of generalship demand that the general be conspicuous, be available to be seen and observed by his soldiers. As a result, cultural ideals are going to heavily constrain what the general can do on the battlefield, especially if they demand that the general engage personally in combat.
Different sorts of generals
We can actually get a sense of a good part of the range simply by detailing the different expectations for generalship in ancient Greek, Macedonian and Roman societies and how they evolved (which has the added benefit of sticking within my area of expertise!).
On one end, we have what we might call the “warrior-hero general”. This is, for instance, the style of leadership that shows up in Homer (particularly in the Iliad), but this model is common more broadly. For Homer, the leaders were among the promachoi – “fore-fighters”, who fought in the front ranks or even beyond them, skirmishing with the enemy in the space between their formations (which makes more sense, spatially, if you imagine Homeric armies mostly engaging in longer range missile exchanges in pitched battle like many “first system” armies).
The idea here is not (as with the heroes of Homer) that the warrior-hero general simply defeats the army on his own, but rather that he is motivating his soldiers by his own conspicuous bravery, “leading by example”. This kind of leadership, of course, isn’t limited to just Homer; you may recall Bertran de Born praising it as well:
And I am as well pleased by a lord
when he is first in the attack,
armed, upon his horse, unafraid,
so he makes his men take heart
by his own brave lordliness.

On the opposite end of the spectrum, there is the pure “general as commander” ideal, where the commanding general (who may have subordinates, of course, who may even in later armies have “general” in the name of their rank) is expected to stay well clear of the actual fighting and instead be a coordinating figure. This style […] is fairly rare in the pre-gunpowder era, but becomes common afterwards. Because in this model the general’s role is seen primarily in terms of coordinating various independently maneuvering elements of an army; a general that is “stuck in” personally cannot do this effectively. And it may seem strange, but violating these norms with excessive bravery can provoke a negative response in the army; Confederate general Robert E. Lee attempted to advance with an attack by the Texas Brigade at the Battle of the Wilderness (May 6, 1864) only to have his own soldiers refuse to advance until he retired to a more protected position. Of course this sort of pure coordination model is common in tactical video games which only infrequently put the player-as-general on the battlefield (or even if the “general” of the army is represented on the battlefield, the survival of that figure is in no way connected to the player’s ability to coordinate the army).
In practice, pre-modern (which is to say, pre-gunpowder) generals almost never adopt this pure coordination model of generalship. The issue here is that effective control of a gunpowder army both demands and allows for a lot more coordination. Because units are not in melee contact, engagements are less decisive (units advance, receive fire, break, fall back and then often reform to advance again; by contrast a formation defeated in a shock engagement tends not to reform because it is chased by the troops that defeated it), giving more space for units to maneuver in substantially longer battles. Moreover, units under fire can maneuver, whereas units in shock generally cannot, which is to say that a formation receiving musket or artillery fire can still be controlled and moved about the field, but a unit receiving sword strikes is largely beyond effective command except for “retreat!”
In between these two extremes sit variations on what Wheeler terms a “battle manager”, which is a bit more complex and we’ll return to it in a moment.
What I want to note here is that these expectations are going to impact where the general is on the battlefield and thus what he can do to exert command. A general in a culture which expects its leaders to be at the front leading the army has the advantage of being seen by at least some of his soldiers (indeed that is the point – they need to see him performing heroic leadership), but once engaged, he cannot go anywhere or command anyone. This is also true, by the by, in cultures where the general is expected to be on foot to show that they share in the difficulties and dangers of the infantry; this is fairly rare but for much of the Archaic and Classical periods, this was expected of Greek generals. Even if a general on foot isn’t in combat directly, their ability to see or move about the battlefield is going to be extremely limited.
On the flipside, a general who is following the “commander” ideal is likely to be in the rear, perhaps in an elevated position for observation. The obvious limitation here is that such a commander is going to struggle to display leadership because no one can see them (everyone is facing towards the enemy, after all). But that also impacts their ability to command – no one is looking at them so if they want to change their plans on the fly they need to send word somehow to subordinate officers who are with or in front of the battle line who can then use their visibility to communicate those orders to the troops.
Bret Devereaux, “Collections: Total Generalship: Commanding Pre-Modern Armies, Part II: Commands”, A Collection of Unmitigated Pedantry, 2022-06-03.
April 28, 2026
QotD: The cultural history of the Tidewater and Deep South regions of the United States
The first nation [as described in American Nations, by Colin Woodard] that struck my interest was Tidewater, earliest of the English nations. (El Norte and New France, as Woodard names them, are the remnants of colonial empires that predate English settlement in North America.) Founded on the shores of the Chesapeake Bay by gentlemen from southern England, and with a sizeable population influx a generation later from Royalists who had found themselves on the losing side of the English Civil War, Tidewater began with an aristocratic ethos. Its gentlemen wanted to recreate the rural manor life of the English landowners: ruling benevolently over their estates and the tenants who inhabited the associated villages, presiding over the courts and local churches, hunting and visiting their neighbors and paying for the weddings and funerals of the poor. To play the role of the peasantry in this semi-feudal system, they imported indentured servants from among the English poor. But unlike English villagers, who were engaged in a variety of subsistence farming endeavors or local forms of production in much the same way that their ancestors had been, the indentured servants of Tidewater were mostly put to work farming tobacco for export.
This may not seem like a huge difference — does it really matter if you’re growing wheat or tobacco, if you’re farming someone else’s land? — but it had profound implications for what happened after the indenture. In theory, the formerly-indentured should have taken on the role of either the English tenant farmer (think Emma’s Robert Martin) or yeoman/freeholder (a small-time landowner but not of the scale or social class to be a “gentleman”). In practice, though, the colony was a plantation economy exporting a cash crop: there was very little local manufacturing, since it was so easy for a ship from London or Bristol to sail right up to some great landowner’s dock on the river and unload whatever he might have ordered. Independent small-scale farmers simply couldn’t compete for tobacco export with their larger neighbors, and especially not if they also had to pay rent. But luckily for them, they had something no Englishman had had for centuries: empty land nearby. Or, you know, sort of empty. (Several of the rebellions in early Virginia were fought over the colonial government’s refusal to drive the Indians off the land former servants wanted to settle.) They could just leave.
The obvious solution for the Tidewater elites — the clear way for gentlemen to maintain an aristocratic lifestyle without a peasantry tied to the land — was African slaves. And here’s the important difference between Tidewater and its neighboring nation, the Deep South: Tidewater turned to slavery in the hopes of perpetuating their social structures, while the Deep South was envisioned from the first as a slave society.
The Deep South had been founded in the 1670s by Barbados sugar planters who ran out of room on their tiny island and were now exporting their particularly brutal combination of slave gangs and sugarcane to the coastal lowlands around Charleston Harbor. (Like the Tidewater gentry, the Barbadians had originally experimented with indentured servants from Britain, but they were worked to death so rapidly that the authorities objected.) The planter class quickly became phenomenally wealthy — by the American Revolution, per capita wealth in the Deep South was four times that of Tidewater and six times either New York or Philadelphia, and the money was much more concentrated than anywhere else in the colonies — but unlike the manorial idyll of Tidewater, with its genteel pursuits and colonial capitals all but abandoned when the legislature was out of session, the Deep South planters spent as much time as possible in the city.
Charles Town (later Charleston), South Carolina, modeled on the capital of Barbados, was filled with theaters, taverns, brothels, cockfighting rings, private clubs, and shops stocked with goods imported from London. Life in the city was a constant churn of social engagements, signalling, and status competition: in 1773, a pseudonymous correspondent wrote in the South Carolina Gazette that “if we observe the Behavior of the polite Part of this Country, we shall see, that their whole Lives are one continued Race; in which everyone is endeavouring to distance all behind him, and to overtake or pass by, all before him; everyone is flying from his Inferiors in Pursuit of his Superiors, who fly from him with equal Alacrity …” The planters of the Deep South had no interest in being lords of their estates, which were managed by overseers, or indeed in their land or the people who worked it. Certainly there existed poor whites in the colonies of the Deep South, but they never entered into the conversation: where Tidewater imagined agricultural labor performed by the English “salt of the earth” but had to fall back on slaves, the Deep South always planned on slaves.
This may not seem like an important difference, especially if you’re a slave,[1] but it matters a great deal for national character. Culture, after all, lives as much in a people’s values and ideals as in their daily routines: a culture that praises loyalty to clan and family will behave very differently from one that lauds fair dealing with strangers. And the Deep Southern ideal, the nation’s vision of how life ought to be, was more or less Periclean Athens: a tremendous efflorescence of wealth, art, and personal distinction for the great and the good, with no consideration whatsoever for the slaves and metics who made up the bulk of the population. A good life meant leisure and luxury, wealth and freedom, the full exploration of personal capacity for the few and who cares about the many. The Tidewater ideal, on the other hand, was basically the Shire: bucolic, rural, politically dominated by a cousinage of great families who shared a profound sense of noblesse oblige and populated by a virtuous, hardworking yeomanry who knew their place but were worthy of their betters’ respect.
Did that world actually exist? Of course not, neither here nor in its English model,[2] any more than the Puritans’ commonwealth in Massachusetts Bay was a new Zion inhabited by saints. But a culture’s picture of how life ought to be determines its reaction to changing circumstance, and Tidewater pictured an enlightened rural gentry ruling benevolently over lower orders who nevertheless mattered. In contrast to the aggressively middle class northern nations, the fiercely independent Appalachians, and the elite-centric Deep South, Tidewater imagined itself as an aristocracy. And it was the only one among the American nations.
Tidewater had a disproportionate influence on the early United States, contributing far more than its fair share of early statesmen and generals as well as a healthy dose of the philosophical underpinnings for many of our founding documents. Unfortunately for the lowland Virginia gentlemen, however, they were hemmed in to the west by the hill people of Greater Appalachia: when the other nations began to expand deeper into the continent after 1789, Tidewater was stuck in its starting position. Soon the nation that had been “the South” on the national stage was dwarfed by Greater Appalachia (which more than doubled between 1789 and 1840) and especially by the Deep South (ten times larger). When the young United States began to polarize over the issue of slavery, Tidewater — by then a minority in Maryland, Delaware, North Carolina, and even Virginia[3] — had to retreat to the political protection of the Deep South and began to lose its cultural distinctiveness. It never really emerged again as its own ideological force.
Jane Psmith, “REVIEW: American Nations, by Colin Woodard”, Mr. and Mrs. Psmith’s Bookshelf, 2024-02-19.
1. Though it actually mattered a great deal to slaves, who were imported to the Deep South in great waves only to be worked to death; the enslaved population of Tidewater, by contrast, increased steadily over the entire antebellum period.
2. Though I will point out that Akenfield suggests the total immiseration of the tenant farmers in the early 20th century has something to do with the land being owned by rich farmers and implies that the local gentry are more generous employers.
3. West Virginia’s eventual secession back to the Union would put Tidewater back in the majority there.
April 27, 2026
QotD: The false economy of reducing plastic packaging for food products
One morning in 1996, I sat with a class of fifth-graders in Manhattan as they gazed mournfully at a photo of a supermarket package of red apples. It was part of a slide presentation by the director of environmental education for the Environmental Action Coalition, the guest lecturer at that day’s science class.
“Look at the plastic, the Styrofoam or cardboard underneath,” she told the class. “Do you need this much wrapping when you buy things?”
“Noooo,” the fifth-graders replied.
It was all so obvious to them, the fifth-graders as well as their lecturer. She was barely out of college, but she thought that she knew more about selling produce than supermarket executives and packaging engineers who had spent their careers studying this question. She was sure that plastic wrap and Styrofoam were wasteful and harmful to the environment because she had never seriously considered the alternative or wondered why those products were introduced.
To merchants and shoppers in the late 1920s, there was nothing wasteful about the revolutionary packaging material introduced by DuPont. Cellophane seemed miraculous because it was not only moisture-proof but also transparent. “EYE IT before you BUY IT,” DuPont advertised, and shoppers welcomed this new feature enabling them to judge the quality of produce and meat before they paid up. Cellophane kept things fresh much longer, an advantage advertised to everyone from homemakers to soldiers. During World War II, a DuPont ad showed a German soldier looking on enviously as American prisoners of war opened packages of cigarettes from home that were wrapped in cellophane: “The prisoners who have better cigarettes than their guards.”
Soviet citizens in the 1980s were similarly envious of Westerners’ new plastic grocery bags, which sold for $5 apiece on the black market in Moscow. The bags were coveted partly as a status symbol (a hard-to-get imported product) and partly because they were so light and compact. In a shortage-plagued economy, Muscovites never knew when a scarce item would suddenly become available in a nearby store, so they wanted to have an empty bag with them, just in case.
American merchants and shoppers switched from paper to plastic packaging because it reduced waste. Plastic was cheaper because it required fewer resources to manufacture. It required less energy to transport because it was lighter. Plastic took up less space in landfills than paper, and it further reduced the volume of household trash because it preserved food longer. The typical household in Mexico City, for example, generated more garbage than an American household because it bought fewer packaged products and ended up discarding more food that had spoiled.
But activists eager to find some reason to oppose disposable products have ignored these advantages. They blame America’s throwaway society for polluting the oceans with plastic, though virtually all that pollution comes either from fishing vessels or from developing countries with primitive waste-management systems — mostly the Asian countries that were importing plastic recyclables from America. Instead of castigating American consumers, environmentalists should blame themselves for creating the recycling programs that sent plastic to countries where it was allowed to leak into rivers. The best way to protect marine life is to throw used plastic into the trash, not the recycling bin, so that it goes straight to a well-lined local landfill instead of ending up in the ocean.
And instead of campaigning to ban plastic grocery bags, green activists should be promoting their environmental advantages. Banning them results in higher carbon emissions because the substitutes are thicker and heavier, requiring more materials and energy to manufacture and transport, and these paper bags and tote bags typically aren’t reused often enough to offset their initial carbon footprint. Greens may feel virtuous lugging groceries home in a paper or tote bag, but the shoppers choosing plastic are actually doing more to combat global warming and reduce consumption of natural resources.
John Tierney, “Let’s Hold On to the Throwaway Society”, City Journal, 2020-09-13.
April 26, 2026
QotD: College Town, USA
Everything human changes, but Nature does not change. That’s “conservatism”, I guess, and for lack of a better term. And that’s what causes Noticing, I’m coming to believe. It’s not that we dislike “change” — that would be as absurd as disliking the seasons. We dislike change qua change; change for change’s sake, and that instinctive distaste for change qua change is why we Notice. We have that sense of Impermanent Permanence, so we can’t help but Notice that today’s Current Thing is the exact opposite of yesterday’s.
It’s not “change” in the sense we understand, and instinctively accept — it’s not “change” in the way the seasons change. It’s directed change — somebody decided to do it. And if it’s not immediately apparent who, or why, we are naturally suspicious. We are “based”, if you will, in the Permanent, so we are acutely aware of the deliberate aspects of the Impermanent.
City life gives you the opposite, indeed overwhelming, sense of Permanent Impermanence. Nothing stays the same; the only constant is change. I remember seeing it in College Town, which was not particularly large, population-wise, but had almost all the “amenities” you’d expect from a major metro. Bearing in mind, as always, that “College Town” is a composite of several different places … but they’re all basically the same, and that’s the point.
The first thing that struck me about College Town — that you see in every College Town, coast to coast — was how shabby it was. Even the brand-new apartment complexes (of which there were many, Higher Ed being a growth industry at that time) all looked dilapidated. The next thing I Noticed was the lack of institutions. College Town had every imaginable “amenity” — exotic cuisine, 24 hour everything — but no playgrounds, no ball fields, no churches. Hardly any schools, despite being pretty good size relative to the surrounding area, because why would there be? All that stuff is for people who actually live there, as opposed to the transients, or even the “permanent residents”, if you will, on the faculty (what an unconsciously telling phrase that is!).
Nobody’s from there, and nobody stays there. Not even the faculty — they always have one foot out the door, no matter if they’re Department Chairs with 30+ years’ seniority. It is crucial to their amour-propre to believe that they’re always about to get the call from Harvard, which in part explains the weird phenomenon of the “faculty ghetto”. They’ll spend a zillion dollars “restoring” a frankly tiny house in the “historic” district, by which is meant “gutting it, and making it as close to a Current Year McMansion as the physical infrastructure can bear”. Then they’ll spend a zillion more on yearly maintenance, when they could’ve gotten twice the house, with the latest and greatest everything, built to spec on the outskirts of town …
… which is five minutes away; it’s not like they’re facing some huge commute (and it’s not like they walk or even bike to campus, and God forbid they take the bus. No, they’d much rather gut or knock down another old building, just to have a garage in which to park the huge gas-guzzling SUV they drive the 45 linear feet to “work”, because how else would they show off how important they are, without parking in their designated space in the one fucking lot in the entire town?).
In other words, they don’t want to admit that they live there — they are, at most, “permanent residents”. There are no public playgrounds, because their one designer baby isn’t going to rub elbows with the children of the few greasy proles they grudgingly tolerate in the absolutely necessary service industries — you know, the mechanics and plumbers and snow plow drivers and such. There are no churches, just one or two Temples of the Current Thing, and only to the extent that a few of them have paraphilias involving clerical vestments. No ball fields, no Cub Scout packs or Elks Lodges or American Legion posts, because c’mon man. A town that size anywhere else would have a Walmart and a Minor League team and a big rivalry game between the local high schools; College Town has head shops and Egyptian-Thai fusion cuisine and DoorDash.
Permanent Impermanence, in other words. Deliberate impermanence. Nothing lasts, nothing can last, nothing should last. There are some people who find that attitude — which I would call straight-out, shit-flinging nihilism — deeply appealing, and … well … there it is.
Severian, “Transience”, Founding Questions, 2026-01-19.
April 25, 2026
QotD: Goethe, the lost German master
This was the atmosphere in which I discovered Germany. It was a minor act of defiance to choose German instead of Latin for O-level, but with hindsight I was extremely fortunate to have the choice. There were two German teachers in my grammar school of just 600 pupils. Today, even the best state schools seldom offer the subject; not one of our four children has had the opportunity that I had to study German language and, especially, literature up to the high standard that was then expected at A-level.
Today, the texts are almost all recent and appear to be chosen partly with the film of the book in mind. In particular, Goethe has disappeared from the syllabus, presumably because the language is considered too archaic. Yet I recall the immense pleasure and satisfaction of mastering a Goethe play — Egmont. The story of the dashing Dutchman and his martial defiance of the sinister Duke of Alba, the courage of his beloved, Klärchen, who fantasises in song about how wonderful it would be to be a man and fight the Spaniards — “ein Glück sondergleichen ein Mannsbild zu sein“. Somehow I even obtained an LP of Beethoven’s incidental music for Egmont: seldom heard apart from the overture, but brilliantly evoking the grandeur of the drama.
Like Homer, Dante and Shakespeare, Goethe belongs not just to German literature, but to world literature, Weltliteratur — a term he coined. I am told that even in German Gymnasien, Goethe is little studied now. He is certainly a rare bird in English schools — or even universities. It is tragic that educated people, including students of literature, so seldom encounter the greatest of Germans even in translation. We might get on better with Germany if we did.
Daniel Johnson, “How I discovered Germany”, The Critic, 2020-08-02.
April 24, 2026
QotD: Cant
How does one distinguish cant from real concern or real emotion? The authors rightly say that there is no fool-proof test. Some people are more sensitive than others to the wrongs of the world: as Mrs. Gummidge puts it, “I feel it more”. But if I were to say with a pained expression, “I am so concerned about the situation in the Southern Sudan that I cannot sleep at night”, and you knew perfectly well that I slept like a log the night before after dining well, and furthermore that I had no connections whatever with the Southern Sudan, you would know that I was canting.
As the authors point out, canting has an inherent positive feedback mechanism. For example, in the game of more-compassionate-than-thou it is always possible to be outflanked by someone who claims an even wider circle of concern, a deeper fellow-feeling with the downtrodden, than any that you have expressed, so that you feel obliged, in order to come out top in this competition, to go another step beyond your original claim, which was bogus to start with. Once you start canting, it is difficult to stop, at least in the short term.
Theodore Dalrymple, “The Expanding Tyranny of Cant”, The Iconoclast, 2020-08-26.
April 23, 2026
QotD: The problems of a “no first use” nuclear weapons policy
Now, you might ask at this point: why not defuse some of this tension with a “no first use” policy – openly declare that you won’t be the first to use nuclear weapons even in a non-nuclear conflict?
For the United States during the Cold War, the problem with declaring a “no first use” policy was the worry that it would essentially serve as a “green light” for conventional Soviet military action in Europe. Recall, after all, that the Soviet military was stronger in conventional forces in Europe during the Cold War and that episodes like the Berlin Blockade (and resultant Berlin Airlift) seemed to confirm Soviet interest in expanding their control over central Europe. At the same time, the Soviet use of military force to crush the Hungarian Revolution (1956) and the Prague Spring (1968) continued to reaffirm that the USSR had no intention of letting Central or Eastern Europe choose their own fates – this was an empire that ruled by domination and intended to expand if it could.
The solution to blocking that expansion was NATO, the North Atlantic Treaty Organization. Not because NATO collectively could defeat the USSR in a conventional war – the general assumption was that they probably couldn’t – but because NATO’s Article 5 clause pledging mutual defense essentially meant that the nuclear powers of NATO (Britain, the United States, and France) pledged to defend the territory of all NATO members with nuclear weapons. But just like deterrence, mutual defense alliances are based on the perception that all members will defend each other. Declaring that the United States wouldn’t use nuclear weapons first would essentially be telling the Germans, “we’ll fight for you, but we won’t use our most powerful weapons for you” in the event of a conventional war; it would be creating a giant unacceptable asterisk next to that mutual defense clause.
So the United States had to be committed to at least the possibility that it would respond to a conventional military assault on West Germany with nuclear retaliation (often envisaged as a “tactical” use of nuclear weapons – that is, using smaller nuclear weapons against enemy military formations). That said, even in the 1950s, Bernard Brodie was already warning that restraining the escalation to general use of nuclear weapons once a tactical nuclear weapon was used would be practically impossible.
Bret Devereaux, “Collections: Nuclear Deterrence 101”, A Collection of Unmitigated Pedantry, 2022-03-11.
April 22, 2026
QotD: Traditional Chinese approaches to science
Those of you who have studied physics know that the laws of motion are usually introduced through the mechanics and dynamics of point particles, or of simple objects acting under the influence of discrete and coherent forces. The reason for this is straightforward: even a tiny bit more complexity, and the system’s behaviour quickly dissolves into a morass that’s analytically intractable and computationally infeasible. The fact that the mutual gravitational influences of just three celestial objects result in chaotic dynamics has entered into popular culture as the “three-body problem”. But even a simple double pendulum is impossible to predict, even with all kinds of simplifying assumptions (massless rods, no friction, no air resistance, etc., etc.).
It’s not just physics. The central technique of modern science is that of boiling something down to its absolute simplest form, understanding the simplest non-trivial case as thoroughly as possible, and only then building back up to more familiar situations. In physics we start with contrived gedankenexperimenten: “what if two particles collided in a vacuum”, and build experimental apparatuses designed to mimic these ultra-simple cases. In economics we imagine markets with a single buyer and a single seller, both perfectly rational. In political philosophy we imagine human beings in a state of nature, or societies established by a primitive contract. In biology we try to understand the functions of organisms, organs, or other systems by recursively taking them apart and trying to figure out each part in isolation. In every case, what we’re engaging in is “analysis”, ἀνά-λυσις, literally a “thorough unravelling”, understanding the whole by first understanding its parts.
This approach is totally alien to the traditional Chinese understanding of reality, which held instead that no part of the world could be understood except in its relation to the rest of the universe. You can see this in the domains of science where they did maintain a lead. Is it really a coincidence that the Medieval Chinese got frighteningly far with the mathematics of wave mechanics? Or quickly deduced the causes of the tides? Or made great strides with magnetism? In each of these cases, the physical phenomenon in question was compatible with an “organicist conception in which every phenomenon was connected with every other according to a hierarchical order”. Indeed, in all of these cases real understanding was aided by the assumption that a universal harmony underlay all things and connected all things. The tides really are in harmony with the moon, and the lodestone with the earth.
This science, founded on holism rather than on analysis, made great strides in some fields but fell behind in others. It readily imbibed action at a distance, but it could not and would not tolerate the theory of atoms. In this way it serves as a strange mirror of Medieval European science, which also loved the theory of correspondences, also loved alchemy and disdained analysis. The difference is that the glorious intellectual synthesis of Neo-Confucianism was never seriously challenged, it survived the Mongol conquest, it survived the desolation of the civil wars that preceded the Ming founding, it survived everything until communism. In contrast, the eerily-similar Thomistic metaphysics of the High Middle Ages was broken apart by the Reformation, and sufficiently discredited that analytical methods could take their first tentative steps.
This is, to be clear, my own crazy theory, because Needham never really gave a solution to his own puzzle. I came up with it only as a sort of thought-experiment, because I wanted to see if I could find a solution to Needham’s puzzle that disdained material explanations in favour of intellectual tendencies, because I find such theories curiously underrated in our culture. I only half-believe this theory,[1] but I find it interesting because twentieth-century Western science has in some ways come back around to the holistic view of things: from Lagrangian methods in theoretical physics, to category theory in mathematics, to systems biology and ecology. It wouldn’t be the first time that a way of viewing the world useful to one age became an impediment to reaching the next one. The question is: what are we missing today?
John Psmith, “REVIEW: Science in Traditional China, by Joseph Needham”, Mr. and Mrs. Psmith’s Bookshelf, 2023-08-14.
[1] The thing about material conditions is they usually are dispositive!
April 21, 2026
QotD: “Bibliophiles are massive losers, why can’t we just admit that?”
It’s a conspiracy. Every piece of worthless advice I ever hear tells me I must, “Read, read, read”. I can’t even try to listen to music on YouTube without entrepreneurs, life coaches and other snake oil salesmen popping up on shouty adverts, posing alongside other people’s Lamborghinis and Learjets, asking me to guess how many books the world’s top fifty “Super Achievers” read each year. (It’s fifty-two, conveniently.) “The more you learn, the more you earn!” these morons confidently claim. As if reading books makes you a billionaire.
I don’t buy it. I bet billionaires don’t read at all. Not only because they don’t have the time, but because every big reader I know is broke. Without exception, books have overloaded their minds, and their lives are in total disarray. When they’re not consumed by tortuous examinations of Socialist Realism in the shallower subsections of the Baltic Canal between late October 1933 and early March 1934, they’re deconstructing turgid translations of 9th Century Glagolitic poetry from the White Carpathian territories of Great Moravia. On weekends, for light relief, they dip into obscure anthologies of critically-acclaimed feminist speculative fiction championing unsung writers born in the shadow of the Chappal Waddi in the Mambilla Plateau. What should have been their office hours are spent haggling with elderly volunteers in Oxfam bookshops over worthless, dog-eared volumes of Dietrich Bonhoeffer’s early letters or needlessly exhaustive histories of the Sepoy Mutiny of 1857 in the Ganges-Brahmaputra basin. They own vast stacks of surplus, dust-magnet books, but they never own art, or cars, or houses. Bibliophiles are massive losers — why can’t we just admit that? There’s a clear correlation between reading and underachievement. There’s a reason homeless vagabonds line their coat pockets with paperbacks and newspapers. Our children must be warned, before it happens to them.
Reading is even less helpful to writers. If you write, you are incurably influenced by whatever garbage you happen to be reading at the time. For example, if I’m reading Hemingway, I finish this sentence here. Whereas, in the rare, transcending moments that I am reading, say, Henry James, I find, to my eternal chagrin, that I write — if, indeed, “write” is the morpheme, or mot juste, for which I rightly delve — in my lasting endeavours — my contention, if you will, against the ordained — in a spirit of refined demonstration, or braggadocio, as the case may be, that … Where was I?
Then, of course, there’s the snobbery associated with reading. “Read a book!” command the enlightened few, should you dare disagree with them on any trendy subject. It’s ridiculous, but if you read — or, better still, opine pretentiously about what you read — the chattering classes will clamber to pressgang you into their fanatical ranks. Nobody cares if you write anything, so long as you describe the latest high-status books as “vital”, “necessary”, “required”, or “essential”. Trust me, you can get away for years with pretending that you are “working on something big that I’d rather not talk about for fear of jinxing it” while freely enjoying all the wine and canapés you can stomach. But suggest you don’t read, and people quickly get suspicious.
Dominic Hilton, “All Booked Up”, The Critic, 2020-08-17.
April 20, 2026
QotD: The quality of evidence problem for historians
The major problem isn’t with quantity of evidence, it’s quality of evidence. More fundamentally, it’s a question of the very nature of evidence. As far as I understand it — which is “not very” — contemporary accounts of the Battle of Crecy seem wildly implausible, even by medieval standards. And that’s the first indicator of the problem right there: By medieval standards. Medieval numbers, as we’ve noted probably ad nauseam, are Rachel Maddowesque — they’re there to augment The Narrative, nothing more. “We were opposed by fifty thousand Saracens” thus can mean anything from “bad guys as far as the eye could see” to “it just wasn’t our day, so we ran”.
And yet, you can’t entirely discount them, either. Crecy (along with of course Agincourt) is supposed to be the triumph of the English longbow, and that’s the thing: We’ve reconstructed English longbows, and put them through all kinds of trials. The results, as I understand it — which, again, ain’t much — were highly variable. A very strong, well-fed, highly trained longbowman, firing an ideally constructed and maintained bow under optimal conditions, really can put X number of arrows up a flea’s ass at Y range in Z time.
Or they could miss the broad side of a barn at twenty feet, depending.
So: What was the weather like in Northern France on 26 August 1346? That’s not an idle question. Rather, it’s the central question. Assume perfect shooting conditions, and you’ve got a far, far different picture of the battle than if you assume poor ones. And if that seems to be giving too much credit to the weather, watch a few baseball games — you’ll quickly discover that quite often, the difference between a home run and a long out is just a few percentage points of relative humidity.
Ultimately it comes down to judgment. More importantly, it’s a judgment on how any particular event fits into the larger argument you’re trying to make. In a way, then, the details really don’t matter very much on their own — the mechanics of how the English won are almost irrelevant, except insofar as they feed into an analysis of why they won. Why did the French king attack uphill, in the mud? Was he stupid? Overconfident? Did he feel he had to, because of political problems inside his host? Did he have faulty information? Did he have accurate information, but just made a bad call?
That’s the art of History, and why, despite what the Peter Turchin (and Karl Marx) crowd keeps insisting, it will always be an art, not a science. We can have a high degree of confidence, most times, in what happened — there really was a battle at Crecy, and the English really did win it. It’s the why that is susceptible to radical reinterpretation.
Severian, “Friday Mailbag”, Founding Questions, 2022-06-17.
April 19, 2026
QotD: The (only?) man who didn’t fear Margaret Thatcher
The late John Hoskyns, head of Margaret Thatcher’s policy unit from 1979 to 1982, was the last person on earth to be afraid of the Iron Lady. In the summer of 1981, he sent her a memo entitled “Your political survival”, which addressed her in terms other men would have flinched from. “You lack management competence,” he wrote, tearing into her style of leadership. “You break every rule of good man-management … you bully your weaker colleagues … You criticise colleagues in front of each other … You give little praise or credit, and you are too ready to blame others when things go wrong.”
Thatcher reacted with cold fury, but Hoskyns was unabashed. Another story tells of his arriving for a meeting with the PM only to be intercepted by Ian Gow, her personal secretary. “Our girl is tired,” said Gow, trying to bar his path. “I’m tired too,” muttered Hoskyns. “It goes with the bloody job. I’m going in.”
Such irreverence to the “Great She-Elephant”, “She who must be obeyed”, “Attila the Hen” is rare enough, but this isn’t the main thing to remember Hoskyns for. Far more interesting – and relevant to now – is the work he did with fellow conservative Norman Strauss in the mid-1970s to diagnose the exact sources of Britain’s economic malaise, and come up with clear policies about how the country could haul itself out of it.
Robin Ashenden, “Thatcher’s ‘Wiring Diagram’ and why we need it once again”, The Critic, 2020-08-25.
April 18, 2026
QotD: Democratic versus Republican Senators
A few years ago, Democratic Senator Kyrsten Sinema provoked the rage of progressive activists by occasionally having an independent thought, voting with her caucus on “party unity votes” only 96% of the time. Senator Joe Manchin caused comparable disgust by joining his party in those votes only 92% of the time. Sinema and Manchin were EVIL MONSTERS, and progressive activists showed up at their public events to scream at them.
Sinema eventually left the Democratic Party, declaring herself an independent, because a disgusting heretic couldn’t remain in a party that she only agreed with 96% of the time. Burn the witch, Democrats explained.
The descent of the Democratic Party into a state of increasingly obvious group psychosis is a product of the absence of internal debate, and of the degree to which Democratic legislators wholeheartedly believe that their job is to hold up their hands in unison whenever their party leaders tell them to. They can’t talk themselves out of their madness — they don’t know how. They don’t have the cultural habit of thought. Watch someone like Chris Murphy speak, and ask yourself if there’s a person inside there. I find myself unable to say yes to that question. He will read anything that is put on that teleprompter, and I mean an-y-th-ing.
By comparison, the Republican legislative habit of wandering around like the residents of a feline daycare center is a strength, and the presence of independent thought in the GOP’s legislative caucuses makes the party relatively sane. The presence of a Rand Paul, refusing to get on board, is more a good thing than a bad thing. The culture of debate is healthier than the behavior of the North Korean legislature or Tina Smith, soulless human robots.
Chris Bray, “The Greatest Republican Strength is the Greatest Republican Weakness, Again”, Tell Me How This Ends, 2026-01-15.
April 17, 2026
QotD: The decline of cities in the late western Roman Empire
The ancient Mediterranean was a world of cities and in the eastern Mediterranean at least, it had been long before the Roman period. By the beginning of the Roman Republic (509 BC), the pattern of organization was broadly similar in Italy, Sicily, coastal North Africa, Egypt, the Levant, Mesopotamia, Anatolia and Greece: agricultural land was broken up into the territory of cities (so that each city consisted of both its urban core but also its agricultural hinterland). Those cities might then either be independent, as with the poleis of Greece and the various communities of pre-Roman Italy, or be the basic administrative units of larger empires, as in the Persian Empire (or later Roman Italy). And so, while most people still lived in the countryside, most of that countryside was in turn attached to an urban center which was the center of political, economic, religious and cultural life.
This was the world the Romans knew and the world they were most comfortable governing. Consequently, while the Romans were utterly uninterested in “civilizing” anyone, when they conquered areas which weren’t urbanized, they tended to found cities or encourage local urbanization in order to create the administrative structures through which the Romans could extract revenue most efficiently.
As mentioned above, the Romans generally wanted these cities to be mostly self-governing. While at conquest, the Romans found themselves managing a bewildering array of different styles of local urban government, over time a mix of Roman administrative preference and cultural diffusion tended to produce a fairly similar set of civic institutions. City governments, which also administered their rural countryside, were run by a town council which consisted of the wealthiest notables of the town – the curiales – in much the same way that the Roman upper-class had dominated the running of the city during the Republic. Roman authority generally protected the curiales and their wealth from the sorts of popular uprisings that tempered many Greek oligarchies in the classical period and in return the curiales managed the population and the collection of taxes for the Romans.
The curiales both managed the town affairs and were also expected to use their own wealth to fund public activity and works: maintain temples and baths, fund religious rituals and festivals, and so on. Through the first and second century, that process was mostly responsible for providing the cities of the Roman Empire with the impressive collection of often still-visible public works they boasted: baths, theaters, amphitheaters, aqueducts, temples, courthouses, public spaces and so on. While some of these structures were little more than the public posturing of the elites, many of them were open to the general public and will have represented, in as much as anything before the industrial revolution could, meaningful improvements in the lives of regular people.
While most of the wealth of any of these cities was derived from the rents and taxes extracted from their agricultural hinterlands, these cities also substantially lived off of trade and markets. Because the local city typically housed the local market, they were the obvious point for local products to enter the stream of provincial-wide or empire-wide trade or for distant imports to reach their final customers. We’ll come back to this next time when we discuss trade and the economy, but for now I want to note that this trade provided a fair bit of the economic vitality of these cities but also that it did in fact reach down beyond mere luxury goods into the basic staples that even the relatively poor might buy.
The decline and fall of these Roman cities is most extensively described in J.H.W.G. Liebeschuetz’ aptly titled Decline and Fall of the Roman City (2001). Given his title, as you might imagine, Liebeschuetz is in the “decline and fall” camp, arguing that the classical city which defined the Roman world largely did not survive it. Regional patterns differ, with Liebeschuetz identifying three “patterns”: (I) Western and Central Anatolia, (II) Syria, Palestine and Arabia, and (III) the west, including North Africa.
We’ll deal with the situation in the east in just a moment, so let’s focus here on the cities of the west, which were at the start generally smaller, less wealthy and generally far younger than those of the east (with some exceptions in Italy). Decline sets in fastest and is most severe in Britain, with the final collapse of the cities coming as early as the 360s, whereas in North Africa, the classical city doesn’t seem to tip into decline until after 400.
While each individual region and indeed each city will have been subject to its own unique conditions, a few basic causes seem to have been active everywhere to some degree. First, the crisis of the third century seems to have fundamentally disrupted empire-wide Roman trade, which then stabilized at a lower level for the fourth century, before declining precipitously in the fifth. That first decline seems to have been somewhat offset by the increased demands of imperial administration and in particular the centralized taxation in-kind and movement of goods which had to move through cities. Peter Brown describes the late Roman state as, “the crude but vigorous pump which had ensured the circulation of goods in an otherwise primitive economy” (The Rise of Western Christendom, 2nd ed., 13). We’ll return to this when we discuss the shape of the economy next time, but for now it works as a crude, but vigorous description of that facet of the late Roman economy.
At the same time, as Liebeschuetz describes, the role of the curiales steadily atrophies in the fourth century. On the one hand, much of the authority and power of being on the council was steadily eroded as those functions were pulled upwards into the imperial bureaucracy. At the same time, members of the curial class who sought imperial office could get immunities from the progressively more severe taxation which otherwise often fell on the curiales and so the imperial elite often crowded out the curiales when it came to wealth and prestige in the community. As they lost both control and responsibility for their cities, the curiales’ investment in public works and monumental architecture also ceased (though local elites do invest in church-building and monastic foundations), leading to the decay of the physical urban centers.
Finally, the warfare of the fifth century had its impact, though as Liebeschuetz notes, it cannot be presented as a sole cause simply because many urban areas were already clearly in decline when conflict hit. In the case of Britain, the cities were gone by 420, decades before the arrival of any invaders. Nevertheless, political instability and violence in the fifth century seems to have delivered death-blows to ailing communities, especially in the Balkans and along the Rhine.
The end result was that in the West, urbanism declined severely between the fourth and sixth centuries. Rome, once a city of a million people, collapsed down to a population of just 80,000. Arles, which had been a thriving Roman city with an amphitheater, an aqueduct, a chariot-racing track, a theater and full city walls, shrank so severely that the remains of the city moved inside its amphitheater, repurposing it as a new set of city walls, with the town square in the middle and houses built in the stands. While many towns survived in their new, shrunken and impoverished form, urbanism in Europe outside of the Eastern Roman Empire would largely have to be reinvented during the High Middle Ages (though with some key institutional survivals from the Roman era and often rising out of the diminished remains of Roman cities). Instead, the society of the early Middle Ages was overwhelmingly rural in both population and focus. If on politics we have a bit of a mix between decline and continuity, when it comes to the cities that made up the old political system, the “decline and fall” knight strikes a clear blow: the system of social organization that characterized the ancient world practically vanished and would have to be redeveloped centuries later. The institutions that had maintained it (like the curiales) largely vanished, replaced in some cases by local “notables” and in other cases by ruralization.
Bret Devereaux, “Collections: Rome: Decline and Fall? Part II: Institutions”, A Collection of Unmitigated Pedantry, 2022-01-28.