Now I should note at the outset that our sources for the Gracchi are not what we might like. Tiberius Gracchus’ year as tribune was in 133, and the late second century is a period where our best sources largely cut out. Polybius, of course, was writing in the 140s and so is unavailable for later events. Livy, always useful, did write the history of this period, but it is lost save for extremely brief summaries of his books known as the Periochae. Instead, we’re reliant primarily on Plutarch and Appian. Both are writing much later, in the second century AD, and in a context where we might question whether we’re getting an entirely straight narrative. As I’ve noted before, Plutarch’s biographies in his Parallel Lives (of which there is one for Tiberius Gracchus and one for Gaius Gracchus) are intended to be moralizing essays rather than straight historical accounts, and Plutarch is not above bending the truth to fit his narrative, or leaving out details that don’t fit it.
Meanwhile, as D.J. Gargola has noted, Appian is also bending his account of Tiberius Gracchus’ reforms, in particular by presenting the Lex Sempronia Agraria as an entirely traditional, conventional response to a pressing crisis.1 But in fact, the provisions of the Lex Sempronia Agraria were not traditional: no similar law (save for a re-enactment by Gaius Gracchus) had ever been or would ever be passed in Rome, and the legal precedent that Appian presents as providing the foundation for Tiberius’ law appears to be at least substantially an anachronistic invention. Meanwhile, the crisis Appian thinks Tiberius Gracchus thought he was addressing probably didn’t exist in the form he understood it.
But that’s what we have, so it is what we must work with. And we should note that both Plutarch and Appian are quite favorable to the Gracchi, even though both men were clearly very controversial in their day. So in a sense this is a reverse of the situation we had with Cleopatra, where we had to contend with relentlessly negative sources: here the sources are broadly positive.
So, on with what we know.
Tiberius Gracchus was elected tribune in 133. His election was already unusual in that he seems to have run on something like a program (land reform, which we’ll get to); Romans generally ran on character and background rather than promising specific political actions if elected. Part of the reason for it was doubtless that Tiberius Gracchus’ political fortunes were in difficulties. Now we should note here that while Tiberius Gracchus was a plebeian (that is, not a patrician), that doesn’t make him a political outsider: Tiberius Gracchus was not remotely a political outsider or poor man or lacking in influence. His father (also Ti. Sempronius Gracchus) had been consul in 177 and 163 and censor in 169; his grandfather (or great-grandfather) was consul in 215 and 213. Our Tiberius Gracchus’ mother, Cornelia, was the daughter of P. Cornelius Scipio Africanus, the man who defeated Hannibal. Tiberius Gracchus was born into substantial wealth and influence, the sort of man whose eventual political ascent was almost guaranteed.
(Indeed, it was so guaranteed that he gets to bend the rules and hold many of his offices early. He’s quaestor at just 26, which implies that he started his military service at 15 or 16 instead of the normal 17, doing so as a military tribune, not a common soldier. I do think this is relevant to understanding Tiberius Gracchus: this was a man born with a silver spoon and a carefully paved, flat-and-easy road to power and influence laid out for him by his family and his political backers, the most notable among whom was his key supporter Scipio Aemilianus (destroyer of Carthage and shortly Numantia).)
Except. Except he got wrapped up in something of a nasty foreign policy scandal during his year as quaestor, when he was assigned to the amazingly named but less amazingly capable C. Hostilius Mancinus who as consul in 137 was supposed to deal with Numantia in Spain. Mancinus blew it and got his army effectively trapped and sent Tiberius – his quaestor and the next highest ranking Roman present – to negotiate to get his army out. Tiberius did this, but the whole thing caused a great stink and a scandal at Rome (Roman armies are supposed to go down fighting, not negotiate shameful retreats!). Indeed, the Senate was so enraged they rejected the treaty and instead sent Mancinus, bound in chains, to the Numantines as part of a ritual process by which his treaty was disowned. Tiberius doesn’t get packed off to Numantia, but some of the political stink does rub off on him, so while he’s connected enough to get elected as a plebeian tribune in 133, he must know he needs a big second act to get his political career back on track, or he may never reach the consulship. That context – a political insider who had a golden ticket but must now win it back, rather than an outsider without connections – is important for understanding the reaction he is going to get.
Bret Devereaux, “Collections: On the Gracchi, Part I: Tiberius Gracchus”, A Collection of Unmitigated Pedantry, 2025-01-17.
- 1. D.J. Gargola, “The Gracchan Reform and Appian’s Representation of an Agrarian Crisis” in People, Land and Politics, eds. L. De Ligt and S.J. Northwood (2008).
August 23, 2025
QotD: The background of Tiberius Gracchus
August 22, 2025
QotD: “White fragility”
White fragility is the sort of powerful notion that, once articulated, becomes easily recognizable and widely applicable … But stare at it a little longer and one realizes how slippery it is, too. As defined by [White Fragility author Robin] DiAngelo, white fragility is irrefutable; any alternative perspective or counterargument is defeated by the concept itself. Either white people admit their inherent and unending racism and vow to work on their white fragility, in which case DiAngelo was correct in her assessment, or they resist such categorizations or question the interpretation of a particular incident, in which case they are only proving her point. Any dissent from “White Fragility” is itself white fragility. From such circular logic do thought leaders and bestsellers arise. This book exists for white readers. “I am white and am addressing a common white dynamic,” DiAngelo explains. “I am mainly writing to a white audience; when I use the terms us and we, I am referring to the white collective”. It is always a collective, because DiAngelo regards individualism as an insidious ideology. “White people do not exist outside the system of white supremacy,” DiAngelo writes, a system “we either are unaware of or can never admit to ourselves”. … Progressive whites, those who consider themselves attuned to racial justice, are not exempt from DiAngelo’s analysis. If anything, they are more susceptible to it. “I believe that white progressives cause the most daily damage to people of color,” she writes. “[T]o the degree that we think we have arrived, we will put our energy into making sure that others see us as having arrived …” … It is a bleak view, one in which all political and moral beliefs are reduced to posturing and hypocrisy.
Carlos Lozada, “White fragility is real. But ‘White Fragility’ is flawed,” Washington Post, quoted by Ann Althouse, 2020-06-19.
August 18, 2025
QotD: Dostoevsky’s Demons can be read as “one long, savage parody of Fathers and Sons”
To understand what happens next [in Dostoevsky’s Demons], it helps to have read some Turgenev. His most famous work, Fathers and Sons, is of a piece with the most lurid boomer fantasies. The basic plot is that there are some genteel Russian liberals, good New York Times readers, people with all the right views. Their kids come back from college and are espousing all this weird stuff: stuff about white fragility and transgenderism and boycotting Israel, stuff that makes their nice liberal parents extremely uncomfortable. But it’s okay, you see? The kids magnanimously realize that their parents were once cool revolutionaries too, and the parents make peace with the fact that the kids are just further out ahead than they are, and everybody feels good about themselves because if the kids have seen far, it’s only by standing on the shoulders of giants. The important thing to understand is that everything about this plot is identity validation wish-fulfillment for the boomer liberal parents (like Turgenev himself). It’s the political equivalent of that YouTube genre where Gen Z Afro-American kids rock out to Phil Collins.
The macro-structure of Demons mirrors this so closely, you can almost read the book as one long, savage parody of Fathers and Sons.1 The sunny opening section is a satire of the boomer liberals, and the big vibe shift part way in is their kids coming back from college. But that’s where things go off the rails. In this book, the next generation shares their parents’ anti-religious and anti-monarchist attitudes, but unlike in Fathers and Sons, the kids in Demons are disgusted by the hypocrisy and cowardice of their genteel liberal parents, and eager to plunge Russia into a hyper-totalitarian nightmare. The exact contours of that nightmare are something they frequently argue about and change their minds over, but they can all agree that it will need to begin with an enormous mountain of skulls, and that their town is as good a place as any to start.
Dostoevsky’s other works put individuals front and center; his stories have unbelievably rich characterization (Nietzsche once said that Dostoevsky was the greatest psychologist to ever live), because for Dostoevsky the very highest stakes, the most important questions in the world, were about the damnation or salvation of individual souls. But Demons is different: here the characters all blur together, their names are disgorged to you in a never-ending torrent, and only a few of them are distinctive in any way.2 How could Dostoevsky think these people don’t matter? It’s because they aren’t real people anymore. It’s because they’re possessed. Their brains have been scooped out and all you can see in their eyes is a writhing mass of worms. Their ideas and ideologies have hollowed them out and are wearing their skins as suits.
But what if the ideas don’t matter either? It’s easy to interpret the second half of Demons as a novel of ideas, but it really isn’t. Your first clue is that the ideas are just so goofy. There’s one guy who thinks that by killing himself he will become God (don’t ask, it’s Dostoevsky, man). Another has written a book with ten chapters, explaining how “Beginning with the principle of unlimited freedom I arrive at unlimited despotism”, and proposing a method of brainwashing for reducing ninety percent of humanity to a mindless “herd”. Yet another thinks that everything can be solved by killing one hundred million people, but laments that even with very efficient methods of execution this will take at least thirty years.3 My own favorite might be the guy who refuses to explain what his system is, but just smugly declares that since everybody is going to end up following it eventually, it’s pointless for him to explain it.
John Psmith, “REVIEW: Demons, by Fyodor Dostoevsky”, Mr. and Mrs. Psmith’s Bookshelf, 2023-07-17.
- 1. Further evidence for this reading: the book contains a character, the great writer “Karmazinov”, who is a straightforward expy of Turgenev himself.
- 2. That said, if you do need to keep track of them, this alignment chart made by some genius on the internet is a pretty handy guide: link.
- 3. This one probably seems less funny after the 20th century than it did when Dostoevsky wrote it.
August 16, 2025
QotD: Rich anarchists
So you talk about mobs and the working classes as if they were the question. You’ve got that eternal idiotic idea that if anarchy came it would come from the poor. Why should it? The poor have been rebels, but they have never been anarchists; they have more interest than anyone else in there being some decent government. The poor man really has a stake in the country. The rich man hasn’t; he can go away to New Guinea in a yacht. The poor have sometimes objected to being governed badly; the rich have always objected to being governed at all. Aristocrats were always anarchists …
G.K. Chesterton, The Man Who Was Thursday, 1908.
August 15, 2025
Ted Gioia on Hunter S. Thompson
I must admit that I got hooked on Hunter S. Thompson’s writing very early. I read Fear and Loathing in Las Vegas in my mid-teens and it blew my mind. I couldn’t actually believe everything he wrote, but I couldn’t completely discount it either. I certainly haven’t read everything he wrote … especially his later sports commentary, but I have read most of the best-known books. On his Substack, Ted Gioia is running a three-part series on the writer and his work:
That’s Hunter Thompson. There’s always someone in control behind the wheel — even when he seems most out of control.
This hidden discipline showed up in other ways. Years later, when he ran for sheriff in Aspen or showed up in Washington, D.C. to cover an election for Rolling Stone, savvy observers soon grasped that Thompson had better instincts and organizational skills than some of the most high-powered political operatives. People rallied around him — he was always the ringleader, even going back to his rowdy childhood. And hidden behind the stoned Gonzo exterior was an ambitious strategist who could play a long term game even as he wagered extravagantly on each spin of the roulette wheel that was his life.
“I don’t think you have any idea who Hunter S. Thompson is when he drops the role of court jester,” he wrote to Kraig Juenger, a 34-year-old married woman with whom he had an affair at age 18. “First, I do not live from orgy to orgy, as I might have made you believe. I drink much less than most people think, and I think much more than most people believe.”
That wasn’t just posturing. It had to be true, merely judging by how well-read and au courant Thompson became long before his rise to fame. “His bedroom was lined with books,” later recalled his friend Ralston Steenrod, who went on to major in English at Princeton. “Where I would go home and go to sleep, Hunter would go home and read.” Another friend who went to Yale admitted that Thompson “was probably better read than any of us”.
Did he really come home from drinking binges, and open up a book? It’s hard to believe, but somehow he gave himself a world class education even while living on the bleeding edge. And in later years, Thompson proved it. When it came to literary matters, he simply knew more than most of his editors, who could boast of illustrious degrees Thompson lacked. And when covering some new subject he didn’t know, he learned fast and without slowing down a beat.
But Thompson had another unusual source of inspiration he used in creating his unique prose style. It came from writing letters, which he did constantly and crazily — sending them to friends, lovers, famous people, and total strangers. Almost from the start, he knew this was the engine room for his career; that’s why he always kept copies, even in the early days when that required messy carbon paper in the typewriter. Here in the epistolary medium he found his true authorial voice, as well as his favorite and only subject: himself.
But putting so much sound and fury into his letters came at a cost. For years, Thompson submitted articles that got rejected by newspapers and magazines — and the unhinged, brutally honest cover letters that accompanied them didn’t help. He would insult the editor, and even himself, pointing out the flaws in his own writing and character as part of his pitch.
What was he thinking? You can’t get writing gigs, or any gigs, with that kind of attitude. Except if those cover letters are so brilliant that the editor can’t put them down. And over time, his articles started resembling those feverish cover letters — a process unique in the history of literature, as far as I can tell.
When Thompson finally got his breakout job as Latin American correspondent for the National Observer (a sister publication to the Wall Street Journal in those days), he would always submit articles to editor Clifford Ridley along with a profane and unexpurgated cover letter that was often more entertaining than the story. In an extraordinary move, the newspaper actually published extracts from these cover letters as a newspaper feature.
If you’re looking for a turning point, this is it. Thompson now had the recipe, and it involved three conceptual breakthroughs:
- The story behind the story is the real story.
- The writer is now the hero of each episode.
- All this gets written in the style of a personal communication to the reader of the real, dirty inside stuff — straight, with no holds barred.
Why can’t you write journalism like this? In fact, a whole generation learned to do just that, mostly by imitating Hunter S. Thompson …
August 13, 2025
The Dispossessed: State Happens
Feral Historian
Published 21 Mar 2025

Ursula K. Le Guin’s The Dispossessed is one of the most in-depth examinations of how a large anarchist society might function, addressing both the problems it solves and those it creates for itself. It’s a must-read for anyone interested in the communist-leaning variants of anarchism in particular.
00:00 Intro
01:58 Anarres is not an Island
04:45 Shevek goes to Urras
07:00 Abolition of Property
08:30 Social Pressures and Pravic
12:30 Necessity and Ossification
14:45 Necessity of Conflict
15:45 Shevek’s Wild Ride

This video is in part a companion to this one — Cloak of Anarchy: Gradations of Stat… from a few weeks ago. The original cut of that one had a brief mention of a couple of details from The Dispossessed, but it really needed its own video.
August 8, 2025
Debunking the idea that Japan was about to surrender anyway
Dr. Robert Lyman on the common misunderstanding of Japan’s situation in July and August of 1945 — no, they weren’t “on the brink of surrender so atomic bombing was unjustified” … instead, they were intending to make the assault on the Home Islands the biggest bloodbath ever:

Atomic cloud over Hiroshima, taken from “Enola Gay” flying over Matsuyama, Shikoku, 6 August, 1945.
US Army Air Force photo via Wikimedia Commons.
It’s the anniversary of Hiroshima again today. I wasn’t going to write anything to mark the event (more coming next week on VJ Day), but I’ve been triggered already by nonsense on the radio which suggests that the atomic bombs on Hiroshima and Nagasaki were unnecessary, because Japan was about to surrender.
Nonsense. There is not a shred of real evidence to support this idea. In fact, the evidence that Japan wanted to keep on fighting is irrefutable. And yet this lie persists, despite the deluge of scholarly work demonstrating Japan’s commitment to the ritual suicide of its entire nation right until the end, when Hirohito pulled the plug. If you are in any doubt about the facts of the case, as opposed to the propaganda, read Toland’s Rising Sun (1970), Frank’s Downfall (2001), Spector’s In The Ruins of Empire (2007), Pike’s Hirohito’s War and, more recently, Stewart Binns’s Japan’s War (2025). All are excellent, clear, analytical and well researched. There are lots more, too.
Why does this canard keep on popping up? Is it because people don’t read? Or is it that they just don’t want to believe in the necessity of such a dramatic event to force Japan to surrender and thus bring about an end to the greatest man-made tragedy the world has ever suffered? The origins of this wishful myth in fact derive from hard-right nationalist propaganda in post-war Japan (driven by Admiral Suzuki himself), quickly lapped up by the gullible and wishful thinkers in the West. It’s one of the most enduring of the Hiroshima and Nagasaki myths, in part because it seems palatable to many, and because it is inherently anti-American.
What is the real story? In short, the Allies tried hard to persuade Japan to surrender. They demonstrated unequivocally to Japan that it was going to lose the war by defeating its armies and by beginning the long, slow and painful crawl towards the Japanese home islands. All the books I’ve mentioned note the extreme chaos of Japanese decision-making before and during the war. Who really was in charge? Who could one talk to, to secure a commitment to negotiate? In any case, the chaotic government under Koiso, which replaced that of General Tojo following the fall of Saipan to the Americans in 1944, made not a single effort to engage with the Allies to seek terms. This government also collapsed on 5 April 1945. The replacement prime minister was Admiral Suzuki, and it was from this man that the myth seems to have arisen, after the war, that Japan was considering surrender and that the A-bombs were unnecessary. This is not true. During his entire time as Prime Minister he resolutely refused to do anything but continue to fight, unless the ending of the war could be secured on Japan’s terms. There were some initiatives to persuade the Suzuki government to surrender, but none of them amounted to much, because they didn’t engage directly with the government in Tokyo, and they didn’t derive from the Allied powers. The evidence that peace-feelers were being put out by various sources (such as the Vatican) in 1944 and 1945 is evidence only that the Japanese government ignored them. None were taken seriously in Tokyo.
Indeed, throughout the period of the Suzuki government, the war parties were dominant. In early June the military Supreme Command submitted a paper entitled The Fundamental Policy to be Followed Henceforth in the Conduct of the War, in which it demanded that the government confirm that Japan would fight to the very last Japanese in an act of national suicide leading to the “honourable death of the hundred million”:
With a faith born of eternal loyalty as our inspiration, we shall – thanks to the advantages of our terrain and the unity of our nation, prosecute the war to the bitter end in order to uphold our national essence, protect the Imperial land and achieve our goals of conquest.
The proposition was passed, not unanimously, but overwhelmingly nonetheless.
There were some in the government – interestingly including Tojo himself – who saw that this was self-defeating, and that Japan must negotiate to secure acceptable peace terms. Naively, it was hoped that this would enable it to retain parts of its empire. Suzuki was part of this group, which thought that Japan could negotiate favourable terms to end the war, in the form of a negotiated settlement such as the one that had brought about the end of the Russo-Japanese war in 1905, but when he suggested this in parliament on 13 June he was shouted down by the warmongers. Hirohito then endorsed an approach to the Soviets in late June. Bizarrely – though Moscow was neutral in the Far Eastern war at this point – Tokyo’s emissaries suggested that the USSR and Japan join forces to rule the world. It was yet more evidence of how Tokyo fundamentally misunderstood the world, and its enemies, and the way the war would have to end: complete and utter surrender by Japan.
Moscow, of course, scorned these “negotiations” as meaningless.
August 7, 2025
QotD: The lost-then-found-again Hittite civilization
… Mycenaean Greece was as much an outlier as sub-Roman Britain: the civilizational collapse in the Aegean was unusually prolonged and severe compared to the fates of many of the other peoples of the Late Bronze Age. Here I have helpfully reformatted Cline’s chart of how resilient the various societies proved:
Let’s take a brief tour through the various fates of these societies. I’ll come back to the Phoenicians at the end, because their example raises interesting questions when considered in contrast with the Mycenaeans. For the moment, though, let’s begin like civilization itself: in Mesopotamia.
Before the Late Bronze Age Collapse, the Assyrian and Babylonian empires had numbered among the Great Powers of the age: linked by marriage, politics, war, and trade to the other mighty kings, they spent much of their time conducting high-level diplomacy and warfare. As far as we can tell, they did well in the initial collapse: there’s a brief hiatus in Assyrian royal inscriptions running from about 1208 to 1132 BC, but records resume again with the reign of Aššur-reša-iši I and his repeated battles with his neighbor to the south, the Babylonian king Nebuchadnezzar I (no relation). But although the kings of the late twelfth century continued much as their Bronze Age predecessors had — waging war, building palaces, going hunting, accepting tribute, collecting taxes, and ordering it all recorded in stone and clay — the world had changed around them. No longer were there huge royal gifts sent to and from fellow great kings, “My Majesty’s brother”1 overseas; now their diplomatic world consisted of tiny petty kings of nearby cities who could be looted or extorted at will.
Mesopotamia didn’t escape unscathed forever: beginning around 1080 BC, texts begin to record severe droughts, invading Aramaeans, and total crop failures. There was a major drought in 1060 BC, and then both the Assyrian and Babylonian records record further drought every ten years like clockwork — sometimes accompanied by plague, sometimes by “troubles and disorder” — until the end of the eleventh century BC. Most of the tenth century was equally dire, with chronicles recording grain shortages, invasions, and a cessation of regular offerings to the gods.
But unlike the Mycenaeans, and in spite of real suffering (ancient Babylonia is estimated to have lost up to 75% of its population in the three hundred years after the Collapse), both Mesopotamian empires were able to hang on to civilization. There were still kings, there were still scribes, and there were still boundary stones on which to record things like “distress and famine under King Kaššu-nadin-ahhe”. And when conditions finally improved, Assyria and Babylonia were both able to bounce back. When at last the Assyrian recovery began under Aššur-dan II (934-912 BC), for example, he (or more realistically, his scribe) was able to write: “I brought back the exhausted people of Assyria who had abandoned their cities and houses in the face of want, hunger, and famine, and had gone up to other lands. I settled them in cities and homes which were suitable and they dwelt in peace”. Clearly, Assyria still retained enough statehood to effect the sort of mass population transfer that had long been a feature of Mesopotamian polities.2
Over the next few centuries, the Neo-Assyrian Empire would come to dominate the Near East, regularly warring with (and eventually conquering) Babylon and collecting tribute from smaller states all over the region. At its peak, it was the largest empire history had ever known, covering a geographic extent unsurpassed until the Achaemenids. The Babylonians had to wait a little longer for their moment in the sun, but near the end of the seventh century they overthrew their Assyrian overlords and ushered in the Neo-Babylonian Empire. (Less than a century later, Cyrus showed up.)
So how did Babylon and Assyria hold on to civilization — statehood, literacy, monumental architecture, and so forth — when the Greeks lost it and had to rebuild almost from scratch? Unfortunately, Cline doesn’t really answer this. He offers extensive descriptions of all the historical and archaeological evidence for the diverse fates of various Late Bronze Age societies, but only at the very end of the book does he briefly run through the theories (and even then it’s pretty lackluster). He does have a suggestion about the timing — the ninth century Assyrian resurgence lines up almost perfectly with the abnormally wet conditions during the Assyrian megapluvial — but why was it the Assyrians who found themselves particularly well-positioned to take advantage of the shift in the climate? Why not, say, the Hittites?
Sometime around 1225 BC, the Hittite king Tudhaliya IV wrote to his brother-in-law and vassal, Shaushgamuwa of Amurru, that only the rulers of Egypt, Babylonia, and Assyria were “Kings who are my equals in rank”.3 A mere thirty years later, though, his capital city of Hattusa would lie abandoned and destroyed. Modern excavators describe ruins reduced to “ash, charred wood, mudbricks and slag formed when mud-bricks melted from the intense heat of the conflagration”.
And with that, the Hittites essentially vanished from history.
They were so thoroughly forgotten, in fact, that when nineteenth-century archaeologists discovered the ruins of their civilization in Anatolia, they had no idea who these people were. (Eventually they identified the new sites with the Hittites of the Bible, who lived hundreds of years later and hundreds of miles to the south, out of sheer ¯\_(ツ)_/¯.)4
What happened to the Hittites? Well, Cline suggests the usual mélange of drought, famine, and interruption of international trade routes, as well as a potential usurpation attempt from Tudhaliya’s cousin Kurunta, but the actual answer is that we’re not sure. Given the timing, they may have been the first of the Late Bronze Age dominos to fall; given the lack of major rivers in central Anatolia, they may have been uniquely susceptible to drought. Hattusa may have been abandoned before the fire — its palaces and temples show little sign of looting, suggesting they [may] already have been emptied out — but many other sites in the Hittites’ central Anatolian heartland were destroyed around the same time, and some of those have bodies in the destruction layer. But whatever the order of events, Hittite civilization collapsed as thoroughly and dramatically as the Mycenaeans’ had done, and with a similar pattern of depopulation and squatters in the ruins. Unlike the Mycenaeans, though, the Hittites would never be followed by successors who inherited their culture; the next civilization of Anatolia was the Phrygians, who probably arrived from Europe in the vacuum following the Hittites’ fall.
There was one exception: in the Late Bronze Age, cadet branches of the Hittite royal family had ruled a few small satellite statelets in what is now northern Syria, and many of these “Neo-Hittite” polities managed to survive the Collapse. A tiny, far-flung corner of a much greater civilization, they nevertheless outlasted the destruction of their metropole and maintained Hittite-style architecture and hieroglyphic inscriptions well into the Iron Age.5 (They would be swallowed up by the Neo-Assyrian Empire in the late eighth century BC.) And though the Neo-Hittite kings ruled over tiny rump states, we’re now able to translate inscriptions in which they referred to themselves by the same titles the Bronze Age Hittite “Great Kings” had employed. The records of their larger neighbors, which had a much greater historical impact, seem to have followed suit: the Neo-Hittites in Syria probably actually were the Hittites of the Bible! Chalk up another one for nineteenth century archaeology.
Jane Psmith, “REVIEW: After 1177 B.C., by Eric H. Cline”, Mr. and Mrs. Psmith’s Bookshelf, 2024-07-08.
1. I really think we should bring back monarchs referring to themselves as “my Majesty”. So much cooler than the royal “we”. Or combine them: “our Majesty”!
2. The Babylonian Captivity, much later in the Iron Age, was far from historically unique.
3. The list actually reads, “the King of Egypt, the King of Babylonia, the King of Assyria, ~~and the King of Ahhiyawa~~” — the strikethrough appears in the original clay tablet! A generation earlier, under Tudhaliya’s father Hattusili III, the Hittite texts had consistently referred to the king of Ahhiyawa as a “great King” and a “brother”, but apparently the geostrategic position of the Mycenaean ruler had degraded substantially.
4. We now know that the Hittites spoke an Indo-European language and referred to themselves as “Neshites”, but the name has stuck.
5. I went looking for a good historical analogy for the Neo-Hittite kingdoms and discovered, to my delight, the Kingdom of Soissons, which preserved Romanitas for a few decades after the fall of the Western Roman Empire. The Neo-Hittites lasted a lot longer.
August 4, 2025
TERF Island
At Spiked, Jo Bartosch reviews Fiona McAnena’s TERF Island: How the UK Resisted Trans Ideology:
The truth is, before they are revered, history-makers are almost always reviled. From universal suffrage to the abolition of the slave trade, the freedoms we take for granted today began as the unpopular obsessions of the awkward and bloody-minded. Fiona McAnena’s TERF Island: How the UK Resisted Trans Ideology charts how just such a small group of determined women – mocked, maligned and misrepresented – dragged sex-based rights back from the brink, often at huge personal cost. It’s the story of how they were hated before they became feted.
Part battle manual and part whodunnit, TERF Island is an insider’s chronicle of how a scrappy, unfunded grassroots movement of mostly middle-aged women outmanoeuvred a lobby bankrolled by billionaires and cheered on by multinational corporations and well-intentioned human-resources departments.
I have been involved in the TERF wars for a decade, and I know McAnena herself is no bystander. Formerly a volunteer at Fair Play for Women and now director of campaigns at Sex Matters, she has done her time in the trenches, too. Each chapter is a vivid, accurate and compelling profile of a key figure in the movement, including Transgender Trend’s Stephanie Davies-Arai, Fair Play for Women’s Nicola Williams, Let Women Speak founder Kellie-Jay Keen and Maya Forstater, whose case against her employer established gender-critical beliefs as protected in UK law – all women I’m proud to know.
It’s almost hard to remember how recently it was considered heresy to say, to use the words popularised by Keen, that “a woman is an adult human female”. In April, the Supreme Court confirmed this truth in law. The BBC may still choke on it, but the legal precedent stands. Yet only a few years ago, saying this out loud could land you in a police station, on the dole queue or even in hospital.
McAnena captures the febrile atmosphere of those early days, when stating a biological fact was enough to have you smeared as a fascist. She takes us inside the campaigns that exposed the lunacy of housing violent male offenders in women’s prisons, the cruelty of sterilising confused children and the institutional capture of sporting organisations. Now, a decade after Davies-Arai launched Transgender Trend, barely a week passes without a professional body or council quietly reversing a discriminatory “trans inclusive” policy. That didn’t happen by accident.
What makes TERF Island so readable is that it doesn’t just document the headline moments. McAnena records the unglamorous grind: women lobbying MPs, poring over policy documents and calmly dismantling pseudoscience from stalls in the high streets of British towns. As McAnena puts it, the campaign against gender self-identification, which galvanised the resistance, brought “hundreds of women on to the streets and thousands more online to defend their sex-based rights”. “It was the catalyst for greater awareness, resistance and campaigning for the rights of women and children in the face of the demands of transgender ideology.”
July 28, 2025
QotD: The technology ecosystem
A lot of thinkfluencers will describe technology as an “ecosystem” without grappling with the full implications of that term. Most often when they say it they’re referring to a cluster of consumer-facing businesses that rent space or other capabilities from a “platform” provider, like apps on an App Store. But that isn’t an ecosystem, that’s a shopping mall. Real ecosystems have energy and nutrient flow both up and down the food chain, as well as laterally; they have vast swarms of bottom feeders, fungi, and other detritivores that recycle matter through decomposition and make its constituents bioavailable once more; they also have a constant source of energy input (usually the sun) to make up for the constant entropic drag that would otherwise grind things to a halt. One of the great discoveries of modern ecology is that apex predators, macrofauna, the plants and animals we notice and admire are perched precariously atop a vast network of invisible supports. A tiger is the temporary result of too many worms gathering in one place.
Technology is also an ecosystem, not the way bluechecks talk about it, but in this more profound sense. A Boeing or a Google is like a tiger: the highly-visible culmination of a vast subterranean drama. Turn over a spade and you’ll find them — the suppliers and subcontractors, investor networks, tooling manufacturers, feeder universities, advisors, researchers, shipping and packaging experts, friendly bankers and government officials, producers of upstream technological inputs, and a vast collection of lower-tier companies in related markets that act like an economic flywheel, absorbing and releasing excess labor as the economy shudders through its fits and starts.
In nature, it’s energy and nutrients that move through the food webs. Here their analogues are capital and knowledge. It’s hard to miss the money sloshing back and forth — world-changing companies are nurtured through their awkward adolescence by sophisticated and patient pools of capital, and the high-flying champions of those companies become the next generation’s venture investors after cashing out. Harder to see but even more influential is the vast economic dark matter made up of professionals who struck it rich enough to live comfortably but not rich enough to fly private. These unobtrusive capitalists are the first to hear through professional whispernets that so-and-so has quit his job to work on such-and-such. Since they’re still in the rat-race, they can have an informed opinion on the caliber both of the idea and of the team around it, and are usually the early champions of the most unusual and speculative ventures. And finally, money sloshes around between the companies themselves through a complicated network of deals, joint ventures, and strategic investments.
The money is more visible, but the way knowledge moves is more important. Part of it is academic, propositional knowledge or technical data whose discovery is accelerated when a dozen different teams are on its scent, sometimes racing each other to the prize, sometimes egging each other on and celebrating each other’s victories. But the bulk of what makes this ecosystem hum, the true currency that drives nearly every barter or exchange, is practical, process knowledge of the sort that Zhuangzi (莊子) first described and Michael Oakeshott later re-popularized for our benighted and ignorant age. What makes process knowledge unusual is that by its very nature it cannot be separated from people, cannot be digitized or divorced or attached to an email. It is at once the nous of a technological ecosystem and the thing that makes it fundamentally illegible — an immaterial, intangible essence that inheres only in individuals, like a mind or a soul.
John Psmith, “REVIEW: Flying Blind by Peter Robison”, Mr. and Mrs. Psmith’s Bookshelf, 2023-02-06.
July 27, 2025
QotD: London coffeehouses and Paris salons of the Ancien Régime
Marie Antoinette arrived in Paris at the end of this era of strict censorship, which helps explain why her honeymoon with French public opinion was short-lived. The official press, notably the Mercure and Gazette, continued churning out fawning snippets of society news about the royal couple. But the scandal-mongering libelles and pamphlets had their own paragraph men, called nouvellistes, who picked up “news” from well-informed sources posted on benches in the Tuileries, Luxembourg Gardens, and, of course, under the tree of Cracow. Police efforts to repress nouvellistes’ gossip proved futile in the face of high demand. One famous libelle of the era, Le Gazetier cuirassé, promised “scandalous anecdotes about the French court”. (It was printed in London, out of reach of official French censors.) Another publication printed in London starting in the 1760s was the famous Mémoires secrets, an anonymous chronicle of insider gossip and anecdotes from Parisian high society. A scurrilous book about Louis XV’s mistress, Madame du Barry, also appeared as a collection of gossip that nouvellistes had picked up around Paris.
Despite the libelles circulating in Paris, the Bourbon monarchy was still relatively protected compared with the hurly-burly across the channel in London, where coffeehouses buzzed with political innuendo and intrigue. Some French philosophes, it is true, attempted to replicate London’s coffeehouse culture at Parisian cafés, such as the Procope on the Left Bank. (Voltaire frequented the place, where he liked to add chocolate to his coffee.) Other regulars at the Procope — named after the Byzantine writer Procopius, famous for his Secret History — were Rousseau, Danton, and Robespierre, as well as Americans Benjamin Franklin and Thomas Jefferson.
The Parisian equivalent of the coffeehouse was the salon, which differed from London coffeehouses in both ambience and function. Whereas London coffeehouses were boisterously public, salons were essentially closed spaces, usually held in private homes. Most were by invitation only. Many were hosted by women, usually titled or wealthy ladies with an interest in culture and politics — such as Madame de Rambouillet, Madame Necker, Madame Geoffrin, and Mademoiselle Lespinasse. There was also the Marquise du Deffand, a friend to Voltaire and the English man of letters Horace Walpole, to whom she bequeathed not only her papers, but also her pet dog, Tonton.
As access to these rarefied spaces increasingly became a symbol of social success, admission got more tightly controlled. (Madame Geoffrin expelled Diderot from her salon because she found his conversation “quite beyond control”.) Still, those who frequented salons represented a great diversity within the elites — from rising young writers and established authors to powerful politicians and eccentric aristocrats. The tacit rule was, as in London coffeehouses, that wit was more important than rank. Many great French writers launched their careers thanks to their admittance. One was the philosopher Montesquieu, who found success at the salon of Madame Lambert.
Matthew Fraser, “Marie Antoinette: Figure of Myth, Magnet for Lies”, Quillette, 2020-06-24.
July 19, 2025
QotD: William Wilberforce and the anti-slavery movement
“What Wilberforce vanquished was something even worse than slavery,” says [Eric] Metaxas [in Amazing Grace], “something that was much more fundamental and can hardly be seen from where we stand today: he vanquished the very mindset that made slavery acceptable and allowed it to survive and thrive for millennia. He destroyed an entire way of seeing the world, one that had held sway from the beginning of history, and he replaced it with another way of seeing the world.” Ownership of existing slaves continued in the British West Indies for another quarter-century, and in the United States for another 60 years, and slave trading continued in Turkey until Atatürk abolished it in the Twenties and in Saudi Arabia until it was (officially) banned in the Sixties, and it persists in Africa and other pockets of the world to this day. But not as a broadly accepted “human good”.
There was some hard-muscle enforcement that accompanied the new law: the Royal Navy announced that it would regard all slave ships as pirates, and thus they were liable to sinking and their crews to execution. There had been some important court decisions: in the reign of William and Mary, Justice Holt had ruled that “one may be a villeyn in England, but not a slave,” and in 1803 William Osgoode, Chief Justice of Lower Canada, ruled that the institution was not compatible with the principles of British law. But what was decisive was the way Wilberforce “murdered” (in Metaxas’ word) the old acceptance of slavery by the wider society. As he wrote in 1787, “God almighty has set before me two great objects: the suppression of the slave trade and the reformation of manners”.
The latter goal we would now formulate as “changing the culture” — which is what he did. The film of Amazing Grace shows the Duke of Clarence and other effete toffs reeling under a lot of lame bromides hurled by Wilberforce on behalf of “the people”. But, in fact, “the people” were a large part of the problem. Then as now, citizens of advanced democracies are easily distracted. The 18th-century Church of England preached “a tepid kind of moralism” disconnected both from any serious faith and from the great questions facing the nation. It was a sensualist culture amusing itself to death: Wilberforce goes to a performance of Don Juan, is shocked by a provocative dance, and is then further shocked to discover the rest of the audience is too blasé even to be shocked. The Paris Hilton of the age, the Prince of Wales, was celebrated for having bedded 7,000 women and snipped from each a keepsake hair. Twenty-five per cent of all unmarried females in London were whores; the average age of a prostitute was 16; and many brothels prided themselves on offering only girls under the age of 14. Many of these features — weedy faint-hearted mainstream churches, skanky celebs, weary provocations for jaded debauchees — will strike a chord in our own time.
“There is a great deal of ruin in a nation,” remarked Adam Smith. England survived the 18th century, and maybe we will survive the 21st. But the life of William Wilberforce and the bicentennial of his extraordinary achievement remind us that great men don’t shirk things because the focus-group numbers look unpromising. What we think of as “the Victorian era” was, in large part, an invention of Wilberforce which he succeeded in selling to his compatriots. We, children of the 20th century, mock our 19th-century forebears as uptight prudes, moralists and do-gooders. If they were, it’s because of Wilberforce. His legacy includes the very notion of a “social conscience”: in the 1790s, a good man could stroll past an 11-year-old prostitute on a London street without feeling a twinge of disgust or outrage; he accepted her as merely a feature of the landscape, like an ugly hill. By the 1890s, there were still child prostitutes, but there were also charities and improvement societies and orphanages. It is amazing to read a letter from Wilberforce and realize that he is, in fact, articulating precisely 220 years ago what New Yorkers came to know in the Nineties as the “broken windows” theory: “The most effectual way to prevent greater crimes is by punishing the smaller.”
Mark Steyn, The [Un]documented Mark Steyn, 2014.
July 18, 2025
The Napoleonic-era Royal Navy
David Friedman on some of the aspects of Britain’s Royal Navy in the late eighteenth and early nineteenth century which may be incomprehensible to modern readers who encounter it in works of historical fiction (like the books of C.S. Forester or Patrick O’Brian):

“HMS Victory in Portsmouth Harbour”
Painting by Charles Edward Dixon (1872-1934) via Wikimedia Commons.
I have read and enjoyed several series of novels set in the British navy during the Napoleonic wars, most recently one by Naomi Novik that departs a little further from history than its predecessors by providing the British and their enemies with dragons. The internal structure and the associated rules and customs of the navy seem very strange to a modern eye, yet it was a strikingly successful institution.
One feature likely to catch an economist’s eye was prize money. If a naval vessel captured a legitimate prize, an enemy warship or merchantman, and brought it back to port, the vessel and its contents were sold and the money distributed among those responsible. One large chunk went to the captain, another was distributed among his officers, a third among the crew, a fourth to the admiral under whose orders he was operating.
Another feature of the system was the role of patronage, political influence both within the navy and outside it in the career of an officer, especially a young officer. The critical step of promotion from lieutenant to captain depended in part on performance, in particular on the opinion of the captain under whom a lieutenant was serving. But it depended also on things that seem, to us, irrelevant.
One of Patrick O’Brian’s novels contains a conversation between Maturin, one of his protagonists, and a friend, a young officer of aristocratic birth. The officer has been having an affair with the separated wife of a high naval official and wants to know whether he should live openly with her. Maturin’s response is that, moral issues aside, it might be imprudent for him to offend a powerful official and so risk his future career. His friend replies that he has considered that matter but his family controls a significant number of seats in both houses of parliament and he thinks their influence will be sufficient to balance that of the man he will be offending.
Neither party sees anything strange in the assumption that giving personal offense to someone within the bureaucracy will make it harder for a competent officer to be promoted, or that having a politically influential family will make it easier; that is just part of how the system works. It was a system that produced extraordinarily successful results, a navy that, from the late 18th century to the early 20th, won almost every ship-to-ship or fleet-to-fleet battle it fought at anything close to even odds.1
A third feature was the seniority system. Once a lieutenant was promoted to captain, his future rank depended only on how long he survived. His name was on the list of captains, the list was ordered by strict seniority, and the next captain to be promoted to admiral would be the one at the top of the list. When two or more captains were working together it was the senior who commanded. That provided an unambiguous rule for allocating command, since every captain knew where he was on the list and knew, or could readily find out, where any other captain was. But it was a rule that had nothing to do with the relative competence of two officers of the same nominal rank.
Promotion beyond captain was entirely determined by seniority; what the officer got to do with his rank was not. An insufficiently competent captain who made it to admiral would end up as an admiral of the yellow, an admiral without a fleet, effectively retired on half pay. A sufficiently competent captain could be assigned particularly important duties, including the command of a group of ships with the temporary position of commodore — provided none of the other captains in the squadron was senior to him. A sufficiently incompetent captain could end up without a ship, on half pay with no chance of prize money. In peacetime, when there was no shortage of competent captains, a minor failing might do it.
[…]
Consider the case of the pre-modern British navy. Prize money was a property solution. The admiralty wanted captains to have an incentive to capture enemy merchant ships, defeat and capture enemy warships, even at risk to their lives. Most of the relevant decisions were made by the captain, so he got the largest part of the reward, but other people, including the admiral whose orders determined what opportunities the captain had to earn prize money, got some of it. A pattern that shows up in the novels, and presumably in the real history, is an admiral who puts an unusually competent and aggressive captain in places where he is likely to encounter enemy warships not because he likes the captain but because he hopes to profit from successful encounters.
Allen argues that prize money was an imperfect property solution because capturing a warship was much riskier, more likely to get the captain killed, than capturing a merchant ship, but prize money was awarded for both. One puzzle he does not consider is why the navy did not solve the problem of misaligned incentives by lowering the prize money awarded for merchant ships or raising it for warships, which should have been easy enough to do.
Allen offers the imperfect alignment of incentives, such as the temptation for a captain in a fleet action to hang back and let other ships and their captains take the risk, as a reason why the property solution had to be supplemented with elements of the other two systems. The admiralty had detailed information on what a captain did through a system of three different logs, one by the captain, one by his first lieutenant, one by the sailing master, the ship’s senior warrant officer. A captain whose career showed him to be incompetent or too inclined to go after merchant ships and avoid warships might end up spending the rest of his career on shore with no ship, hence no opportunity for prize money. A captain who declined a clear opportunity for combat with a ship of the same class was subject to trial by court martial; one admiral ended up convicted and executed for failing to pursue the enemy fleet after an engagement.
1. With the possible exception of the War of 1812.
July 15, 2025
American (religious) exceptionalism
Christianity has been in retreat across the western world for decades, with the United States being the laggard in abandoning the faith; Canada is closer to the western European rate of secularization. On Substack, responding to a query on X about people turning to various neopagan faiths, Fortissax explains why believing Christians have become so uncommon outside the US:
First, I believe there are two factors at play. One is the divide between the United States and the rest of the Western world. The United States still has the highest percentage of weekly churchgoers in the West, at around 24 percent. In the U.S., Christianity remains a living tradition. Millions still attend church, or at least try to. Many people share a common faith, believe in God, and are familiar with Christian references in public life, politics, and law. In contrast, in countries like Canada and much of Europe, regular church attendance is closer to 5 percent. That number often includes recent immigrants who tend to be more socially conservative. Among native-born Canadians and Europeans, especially in urban areas, church attendance is even lower. Religion in these places is often kept alive only by older or rural populations. Among the youth, it has largely faded. Second, many Western countries have experienced secularization for much longer.
I believe this first one is not obvious to a majority of people. There are significant cultural differences and experiences within the United States and outside of it. I believe it would be appropriate to say that the U.S. is still a Christian country, and not just nominally, regardless of whether or not it was established on Lockean principles and the Greco-Roman inspiration (some would say revision) of the liberal enlightenment. Sure, the faith is not what it used to be, but probably the majority of Americans at least understand Christian references in common parlance.
I can share a personal anecdote that I believe is fairly typical.
I was born and raised in a region where Christianity had long disappeared from everyday life, following a slow process of state secularization. My great-grandfather was Catholic, but he changed denominations to marry my great-grandmother in the 1930s. It was a utilitarian choice. He believed in God but didn’t care for the petty tyrannies of ethnic and cultural association by denomination. His son, my grandfather, saw hypocrisy in both Catholic and Protestant institutions. As a boy, the Catholic priest of his best friend told him he could not be friends with a Protestant, and he was kicked out of the house by his mother for attending Catholic mass with his girlfriend, even though his father had once been Catholic. My parents were irreligious agnostics. They were not hostile to Christianity, just indifferent, because they were not raised in it. As for me, I grew up in a post-liberal, post-Christian society. I believe in the divine and understand the importance of religion to civilization, but I have no living connection to what came before. In my country of Canada and among my people, Christianity is no longer part of the cultural fabric. I believe this to be the case in Western Europe as well.
There is a common joke that if someone likes paganism so much, they should try the most pagan tradition of all: converting to Christianity. But the unfortunate reality is that secular liberalism has exercised a longer and deeper influence in the modern West than many realize. In response, one could just as easily say that the most Christian tradition of all is converting to secular liberalism, which has formally shaped the cultural and institutional framework of the West for more than 275 years.
For people raised in multi-generational secularized liberal contexts, there is nothing to return to. Christianity is not a living tradition. They cannot come home to Jesus the way many Americans still can, and they cannot undo the liberal Enlightenment. They can only move forward through it. At best, something new might be reinterpreted or reformed from its remnants. But Christianity was never part of their lived experience. It was not seen, heard, or practiced. Churches were never attended. Christmas and Easter functioned as civic holidays focused on family rather than faith. Christianity resembled a historical artifact, something like a beautiful mantelpiece in an old house. It had aesthetic and historical value, but no emotional, cultural, or spiritual presence. This situation is common in much of the non-American West.
This is why many contemporary efforts at Christian revival often feel disconnected. They are built on the assumption that secular individuals are lapsed believers who simply need to be reminded of what they once knew. But these individuals are not returning exiles. They are cultural natives of a secular world. They did not lose the faith, it was never given to them. There were no prayers at the dinner table, no hymns embedded in childhood memory, no sacred calendar shaping the flow of life. Organized religion belonged to the past, replaced with secular civic cults they’re largely unaware of. It was something other people had, something no longer meant for them. This group is not necessarily hostile to Christianity. In many cases, they admire it. They recognize its role in shaping art, architecture, law, and moral tradition. When foreigners attack these, they defend them. They understand its civilizational significance. But the faith speaks a language they do not understand. Its metaphors do not resonate. Its moral claims appear without context. Its stories feel distant.
A useful comparison can be found in the Heliand, a ninth-century Old Saxon gospel poem that re-imagined the life of Christ using the language and imagination of Germanic warrior culture. In that version, Christ is not a wandering teacher from a distant land, but a noble chieftain surrounded by loyal retainers. His mission is framed in terms of honor, loyalty, kinship, fealty, and sacred duty. The gospel message is not altered in its substance, but it is reshaped so that it resonates with the values, social structures, and poetic traditions of a people for whom neither Scripture nor Roman religious order had any living relevance.
This work was part of a broader process of the Germanization of Christianity, a phenomenon that has been studied in detail by scholars like James C. Russell and Fr. G. Ronald Murphy, SJ. Russell, in The Germanization of Early Medieval Christianity, argues that the conversion of the Germanic peoples did not consist merely in the passive reception of Christian doctrine, but in a complex synthesis between Germanic folk-religious consciousness and Christian metaphysics. The resulting Christianities of the early medieval West were distinct, rooted in local mythic frameworks, and expressed through tribal loyalty, sacrificial kingship, and heroic virtue. Murphy, in works such as The Heliand: The Saxon Gospel, explores how the Heliand uses alliteration, formulaic verse, and martial imagery to make Christ intelligible to a newly converted warrior society. He shows how the gospel was not just translated into the Saxon tongue, but into the Saxon soul.
This is the historical precedent that today’s Church must study carefully. The peoples of early medieval Europe were not apostates. They were unbaptized, uncatechized, and culturally alien to Christianity. They were brought into the faith through cultural immersion. Christianity did not ask them to surrender their world entirely. It entered their world, dignified their heroic values, and redirected them toward the divine. Only then did conversion become possible.
Even those outside the Church understand that this work is urgent. The crisis of meaning in secular liberal societies is visible. The desire for transcendence, rootedness, and spiritual structure has not disappeared. It has been redirected into political identity, consumer behavior, and digital escapism.
If Christianity is to succeed, the same kind of work is needed today. Christianity must once again become a missionary faith. This time, the mission field is not a remote foreign land, but the secularized cities and postmodern suburbs of the Western world. The people being addressed are cultural outsiders. Many were born into environments where the gospel was never lived, never spoken, never embodied. Christianity was not abandoned. It was never truly encountered.
A future for Christianity in the West will not be built on appeals to lost memory or civilizational guilt. It will not be recovered through progressive accommodation or through aesthetic traditionalism that treats churches, vestments, and relics as ornaments of cultural decline. It will only re-emerge through an act of deep cultural translation. That act must begin with an honest assessment of what has been lost, and a willingness to reframe the sacred in terms that can again be understood.
The alternative is a continued descent into spiritual confusion and civilizational forgetfulness. Christianity may continue to grow in the Global South. It may endure as a global religion. But in the West, it will only live again if it learns how to speak, once more, to those who were never taught how to listen.
Looking in from the outside, it seems to me that the majority of Christian priests and ministers have already made their peace with the inevitable extinction of their faith and far too many of them are actively working toward that end. Feminist and progressive currents move far more local Christian leaders than the message of the faith itself, hence any hopes of western Christianity reforging itself depend on a tiny minority of the clergy.
“A Cloak of Anarchy”: Gradations of Statelessness
Feral Historian
Published 21 Feb 2025

“A Cloak of Anarchy”, written by Larry Niven and published in 1972, is a simple story. But it offers us an entry to examine the basic ideas of Anarchism without diving head-first into the political theorizing of the big anarchist philosophers. This one is a 3-minute look at a simple short story wrapped in a 20-minute attempt to cast aside the most basic misunderstandings of what anarchism is.
I don’t consider myself to be an anarchist, but by most standards I’m damn close. Take what I say here in that light.
00:00 Intro
01:00 Cops-Eyes
03:09 Absence of Rulers
06:37 AnComs and AnCaps
10:07 Is Anarchism Leftist?
11:30 Practical Considerations
16:17 Anarchism and Environmentalism
19:00 Closing Ramble