September 14, 2022
QotD: The Wars of Religion and the (eventual) Peace of Westphalia
Thomas Hobbes blamed the English Civil War on “ghostly authority”. Where the Bible is unclear, the crowd of simple believers will follow the most charismatic preacher. This means that religious wars are both inevitable and impossible to end. Hobbes was born in 1588 — right in the middle of the Wars of Religion — and lived another 30 years after the Peace of Westphalia, so he knew what he was talking about.
There’s simply no possible compromise with an opponent who thinks you’re in league with the Devil, if not the literal Antichrist. Nothing Charles I could have done would’ve satisfied the Puritans sufficiently for him to remain their king, because even if he did everything they demanded — divorced his Catholic wife, basically turned the Church of England into the Presbyterian Kirk, gave up all but his personal feudal revenues — the very act of doing these things would’ve made his “kingship” meaningless. No English king can turn over one of the fundamental duties of state to Scottish churchwardens and still remain King of England.
This was the basic problem confronting all the combatants in the various Wars of Religion, from the Peasants’ War to the Thirty Years’ War. No matter what the guy with the crown does, he’s illegitimate. It took an entirely new theory of state power, developed over more than 100 years, to finally end the Wars of Religion. In case your Early Modern history is a little rusty, that was the Peace of Westphalia (1648), and it established the modern(-ish) sovereign nation-state. The king is the king because he’s the king; matters of religious conscience are not a sufficient casus belli between states, or for rebellion within states. Cuius regio, eius religio, as the Peace of Augsburg put it — the prince’s religion is the official state religion — and if you don’t like it, move. But since the Peace of Westphalia also made heads of state responsible for the actions of their nationals abroad, the prince had a vested interest in keeping private consciences private.
I wrote “a new theory of state power”, and it’s true, the philosophy behind the Peace of Westphalia was new, but that’s not what ended the violence. What did, quite simply, was exhaustion. The Thirty Years’ War was as devastating to “Germany” as World War I, and all combatants in all nations took tremendous losses. Sweden’s king died in combat, France saw huge swathes of its territory laid waste (after entering the war on the Protestant side), Spain’s power was permanently broken, and the Holy Roman Empire all but ceased to exist. In short, it was one of the most devastating conflicts in human history. They didn’t stop fighting because they finally wised up; they stopped fighting because they were physically incapable of continuing.
The problem, though, is that the idea of cuius regio, eius religio was never repudiated. European powers didn’t fight each other over different strands of Christianity anymore, but they replaced confessional conflict with an even more virulent religion: nationalism.
Severian, “Arguing with God”, Rotten Chestnuts, 2020-01-20.
September 13, 2022
QotD: J.R.R. Tolkien’s childhood and schooling
One reason highbrow people dislike The Lord of the Rings is that it is so backward-looking. But it could never have been otherwise. For good personal reasons, Tolkien was a fundamentally backward-looking person. He was born to English parents in the Orange Free State in 1892, but was taken back to the village of Sarehole, north Worcestershire, by his mother when he was three. His father was meant to join them later, but was killed by rheumatic fever before he boarded ship.
For a time, the fatherless Tolkien enjoyed a happy childhood, devouring children’s classics and exploring the local countryside. But in 1904 his mother died of diabetes, leaving the 12-year-old an orphan. Now he and his brother went to live with an aunt in Edgbaston, near what is now Birmingham’s Five Ways roundabout. In effect, he had moved from the city’s rural fringes to its industrial heart: when he looked out of the window, he saw not trees and hills, but “almost unbroken rooftops with the factory chimneys beyond”. No wonder that from the moment he put pen to paper, his fiction was dominated by a heartfelt nostalgia.
Nostalgia was in the air anyway in the 1890s and 1900s, part of a wider reaction against industrial, urban, capitalist modernity. As a boy, Tolkien was addicted to the imperial adventure stories of H. Rider Haggard, and it’s easy to see The Lord of the Rings as a belated Boy’s Own adventure. An even bigger influence, though, was that Victorian one-man industry, William Morris, inspiration for generations of wallpaper salesmen. Tolkien first read him at King Edward’s, the Birmingham boys’ school that had previously educated Morris’s friend Edward Burne-Jones. And what Tolkien and his friends adored in Morris was the same thing you see in Burne-Jones’s paintings: a fantasy of a lost medieval paradise, a world of chivalry and romance that threw the harsh realities of industrial Britain into stark relief.
It was through Morris that Tolkien first encountered the Icelandic sagas, which the Victorian textile-fancier had adapted into an epic poem in 1876. And while other boys grew out of their obsession with the legends of the North, Tolkien’s fascination only deepened. After going up to Oxford in 1911, he began writing his own version of the Finnish national epic, the Kalevala. When his college, Exeter, awarded him a prize, he spent the money on a pile of Morris books, such as the proto-fantasy novel The House of the Wolfings and his translation of the Icelandic Volsunga Saga. And for the rest of his life, Tolkien wrote in a style heavily influenced by Morris, deliberately imitating the vocabulary and rhythms of the medieval epic.
Dominic Sandbrook, “This is Tolkien’s world”, UnHerd.com, 2021-12-10.
September 12, 2022
QotD: On the nature of our evidence of the ancient world
As folks are generally aware, the amount of historical evidence available to historians decreases the further back you go in history. This has a real impact on how historians are trained; my go-to metaphor in explaining this to students is that a historian of the modern world has to learn how to sip from a firehose of evidence, while the historian of the ancient world must learn how to find water in the desert. That decline in the amount of evidence as one goes backwards in history is not even or uniform; it is distorted by accidents of preservation, particularly of written records. In a real sense, we often mark the beginning of “history” (as compared to pre-history) with the invention or arrival of writing in an area, and this is no accident.
So let’s take a look at the sort of sources an ancient historian has to work with and what their limits are and what that means for what it is possible to know and what must be merely guessed.
The most important body of sources is what we term literary sources, which is to say long-form written texts. While a few of these texts survive on tablets or preserved papyrus, for most of the ancient world they survive because they were laboriously copied over the centuries. As an aside, it is common for students to fault this or that later society (mostly medieval Europe) for failing to copy this or that work, but given the vast labor and expense of copying and preserving ancient literature, it is better to be glad that we have any of it at all (as we’ll see, the evidence situation for societies that did not benefit from such copying and preservation is much worse!).
The big problem with literary evidence is that for the most part, for most ancient societies, it represents a closed corpus: we have about as much of it as we ever will. And what we have isn’t much. The entire corpus of Greek and Latin literature fits in just 523 small volumes. You may find various pictures of libraries and even individuals showing off, for instance, their complete set of Loebs on just a few bookshelves, which represents nearly the entire corpus of ancient Greek and Latin literature (including facing English translation!). While every so often a new papyrus find might add a couple of fragments or very rarely a significant chunk to this corpus, such additions are very rare. The last really full work (although it has gaps) to be added to the canon was Aristotle’s Athenaion Politeia (“Constitution of the Athenians”) discovered on papyrus in 1879 (other smaller but still important finds, like fragments of Sappho, have turned up as recently as the last decade, but these are often very short fragments).
In practice that means that, if you have a research question, the literary corpus is what it is. You are not likely to benefit from a new fragment or other text “turning up” to help you. The tricky thing is, for a lot of research questions, it is in essence literary evidence or bust. […] for a lot of the things people want to know, our other forms of evidence just aren’t very good at filling in the gaps. Most information about discrete events – battles, wars, individual biographies – is (with some exceptions) literary-or-bust. Likewise, charting complex political systems generally requires literary evidence, as does understanding the philosophy or social values of past societies.
Now in a lot of cases, these are topics where, if you have literary evidence, then you can supplement that evidence with other forms […], but if you do not have the literary evidence, the other kinds of evidence often become difficult or impossible to interpret. And since we’re not getting new texts generally, if it isn’t there, it isn’t there. This is why I keep stressing in posts how difficult it can be to talk about topics that our (mostly elite male) authors didn’t care about; if they didn’t write it down, for the most part, we don’t have it.
Bret Devereaux, “Fireside Friday: March 26, 2021 (On the Nature of Ancient Evidence)”, A Collection of Unmitigated Pedantry, 2021-03-26.
September 11, 2022
QotD: De-institutionalization
[In Desperate Remedies: Psychiatry’s Turbulent Quest to Cure Mental Illness, Andrew] Scull stresses the degree to which external pressures have shaped psychiatry. “Community psychiatry” supplanted “institutional psychiatry” in part because of professional insecurity. Psychiatrists needed a new model for dealing with mental diseases to keep pace with the advances that mainstream health care was making with other diseases. Fiscal conservatives viewed the practice of confining hundreds of thousands of Americans to long-term commitment as overly expensive, and civil libertarians viewed it as unjust.
Deinstitutionalization began slowly at first, in the 1950s, but the pace accelerated around 1970, despite signs that all was not going according to plan. On the ground, psychiatrists noticed earlier than anyone else that the most obvious question — where are these people going to go when they leave the mental institutions? — had no clear answers. Whatever misgivings psychiatrists voiced over the system’s abandonment of the mentally ill to streets, slums, and jails were too little and too late.
That modern psychiatry is mostly practiced outside of mental institutions is not its only difference from premodern psychiatry. Scull devotes extensive coverage to two equally decisive developments: the rise and fall of Freudianism, and psychopharmacology.
The Freudians normalized therapy in America and provided crucial intellectual support for the idea that mental health care is for everyone, not just the deranged. Around the same time as deinstitutionalization, Freud’s reputation, especially in elite circles, was on a level with Newton and Copernicus. Since then, Freudianism has mostly gone the way of phlogiston and leeches. That happened not just because people decided the psychoanalysts’ approach to therapy didn’t work but also because insurance wouldn’t pay for it. Insurance would, however, pay for modes of therapy that were less open-ended than the “reconstruction of personality that psychoanalysis proclaimed as its mission”, more targeted to a specific psychological symptom, and, most crucially of all, performed by non-M.D.s. Therapy was on the rise, but psychiatrists found themselves doing less and less of it.
As psychiatry cast aside Freudian concepts such as the “refrigerator mother”, which rooted mental illness in psychodynamic tensions, it increasingly trained its focus on biology. Drugs contributed to, and gained a boost from, this reorientation. Scull loathes the drug industry and only grudgingly allows that it has made improvements in the lives of mentally ill Americans. He divides up the vast American drug-taking public into three groups: those for whom they work, those for whom they don’t work, and those for whom they may work, but not enough to counter the unpleasant side effects. He argues that the last two groups are insupportably large.
Stephen Eide, “Soul Doctors”, City Journal, 2022-05-18.
September 10, 2022
QotD: “Working toward the Führer”
Sir Ian Kershaw was broadly right about how the Third Reich operated. He says Nazi functionaries were “working towards the Führer”. In other words, the Führer — the idealized, mythologized leader, not Adolf Hitler the individual — made it known that “National Socialism stands for X”. Hitler was famously averse to giving direct orders, so that’s often the only thing big, important parts of the government had to work from — the Führer’s* pronouncement that “National Socialism means X”. It was up to them to put it into practice as best they could.
This had several big advantages. First, it’s in line with Nazi philosophy. The Nazis were Social Darwinists. Social Darwinists hold that “survival of the fittest” applies not only to humans as a whole, but to human social groups as well. Any given organization, then, must exist to do something, to advance some cause, to reach some goal. Ruthless competition between groups, and inside each group, is how the goal works itself out (you should be hearing echoes of Hegel here). The struggle refines and clarifies what the group’s goal is, even as the individual group members compete to reach it. The end result gets forced back up the system to the Führer, such that, dialectically (again, Hegel), “National Socialism means X” now encompasses the result of the previous struggle.
[…]
As with philosophy, “working towards the Führer” fit well with German military culture. Auftragstaktik is a fun word that means “mission-type tactics”. In practice, it delegates authority to the lowest possible level. Each subordinate commander is given an objective, a force, and a due date. High command doesn’t care how the objective gets taken; it only cares that the objective gets taken. Done right, it’s a wonderfully efficient system. It’s the reason the Wehrmacht could keep fighting for so long, and so well, despite being overpowered in every conceivable way by the Allies. The Allies, too, were constantly flabbergasted by their opponents’ low rank — corporals and sergeants in the Wehrmacht were doing the work of an entire Allied company command staff (and often doing it better).
Consider the career of Adolf Eichmann. In the deepest, darkest part of the war, this man pretty much ran the Reich’s rail network. Say what you will about the Nazis’ plate-of-spaghetti org chart, that’s some serious power. He was a lieutenant colonel.
The final great advantage of “working towards the Führer” is “plausible deniability”. Let’s stipulate Atrocity X. Let’s further stipulate that we’re in the professional historian’s fantasy world, where every conceivable document exists, and they’re all clear and unambiguous. It’s a piece of cake to pin Atrocity X on someone … and that someone would, in all probability, be a corporal or a sergeant. Maybe a lieutenant. What you wouldn’t be able to do is trace it up the chain any higher. Everyone from the captain to Hitler himself could / would give you the “Who, me?” routine. “I didn’t tell Sergeant Schultz to execute those prisoners. All I said was to go secure that objective / defeat that army / that National Socialism means fighting with an iron will.”
*I’m deliberately conflating them here — to make it clearer how confusing this could be — but in talking about this stuff the terminology is crucial. Adolf Hitler, the man, played the role of The Führer. What Hitler the man wanted was often in line with what the Führer role required, of course, but not always. This is one of the footholds Holocaust deniers have. Did Hitler-the-man actually put his name to a liquidation order? No. Did Hitler-the-man actually want it to happen? Unquestionably yes, but like all men, Hitler-the-man vacillated, had second thoughts, doubted himself, etc., and you can find documented instances of that. But The Führer very obviously wanted it to happen, and it was The Führer that motivated the rank-and-file. The man created the role, but very soon the role started playing the man …
Severian, “Working Towards the Deep State”, Rotten Chestnuts, 2020-01-06.
September 9, 2022
QotD: The BBC behind-the-scenes in 1983
By 10pm on the night of 9th June 1983 BBC Television Centre was humming. In Studio Two, amid a beige version of the set from Alien, David Dimbleby and Robin Day were about to start the election results show, though everybody already knew Thatcher was going to walk it.
I was in the studio next door, which had been transformed into a vast Green Room, tables stacked high with food and booze. Us trainees had been brought in to help organise the guests and manage the hospitality.
And that party was only getting started. As the night wore on and the politicians, academics and journalists came and went, but mostly came and stayed, the whole place, and the labyrinth of corridors, scenery docks and stage lifts surrounding it, began to resemble something between a University May Ball and the last days of Rome.
People were being sick in corridors, being discovered “in flagrante” in lifts or sneaking off into unlocked offices. Some, bearing an uncanny resemblance to their Spitting Image puppets, became far too slurry and unsteady on their feet to go before Dimbleby and co at the appointed time.
Back then, juniors like me were often sent to pick up politicians and other public figures, because if they were not physically guided they’d forget to turn up altogether or go to the wrong place. We’d arrive rather sheepishly outside clubs, parties and private homes — sometimes not the private homes that they were supposed to be living in. We’d gently lead them away from whatever drunken dinner they were at and take them to the studio where more free alcohol was always available. And everyone was smoking.
For politicians and journalists alike, it was an especially louche time. And secrets, by and large, were kept along an arc of tolerated misbehaviour that ran from Westminster through St James’ to Notting Hill and White City. Albertines Wine Bar and Julie’s restaurant both had booths you could dissolve into during lunches that slipped toward early evening, and the “cinq a sept” trade in the local hotels was always healthy.
There was a BBC chauffeur driving company run by a man called Niven, and a late night “Niven Car” was the ultimate perk when the White City and Lime Grove bars finally closed. I’m not the only BBC veteran who’ll remember when a certain public figure left an item of intimate female clothing on the back seat of her “Niven” after an over-enthusiastic snog on her way home. It was duly recovered, popped into a plastic bag and discreetly couriered back to its proper owner.
I’m making it sound more fun than it was. There was a lot of awful behaviour that went unremarked and unpunished, especially the leering, groping and grabbing that my female colleagues had to put up with endlessly, some of which would today rightly be called sexual assault. And, of course, this permissive culture was the ideal environment for celebrity predators, the Jimmy Saviles, Stuart Halls and Cyril Smiths (one of David Dimbleby’s guests that very election night). We all heard the gossip, but nobody made a challenge.
But if I could have any part of that world back it would be this: we didn’t expect, need or want our MPs, ministers or their shadows to be plaster saints.
Phil Craig, “I’m done with po faced politicians”, The Critic, 2022-05-18.
September 8, 2022
QotD: Pre-modern armies could not march much faster than 8-12 miles per day … on good days
Well, getting started ate quite a few hours, but at least we’re going to move at a constant speed all day right? Of course not. These are humans – they need to eat (lunch), drink and relieve themselves. Men will fall out of line because they are sick or because they sprained an ankle or because they’re tired of marching and faking it (many army guidelines put the medics at the back of the marching column for this purpose). To add to this, wagons get stuck in the mud, mules and horses get stubborn or lame (that chance may seem low, but remember we’re dealing with thousands of animals – small percentages add up fast when you have a few thousand of something).
For reference on how much time this can eat up, 1950s US Army marching regulations (this is again FM21-18 “Foot Marches”) suggest that “battle groups or smaller” (800 men or less, generally – so small, fast-moving infantry) can “under favorable conditions” (read: good, modern paved roads in good weather) make 15-20 miles in a continuous eight hour march. A forced march – marching longer than 8 hours and at a higher than normal pace – can cover more ground (c. 35 miles in a day in some cases) but such a pace will wear out an infantry force fast.
At the end of the day, the army needs to arrive at its planned camp site [early] enough to make camp. Cooking needs to be done. Food that was foraged by flanking units needs to get to the camp, be recorded and stored (or processed and eaten) – speaking of which, note that we haven’t even discussed flankers, scouts and foraging parties. Wages may need to be paid, paperwork needs to be done. In many armies, the camp will need to be fortified – the Romans built a wood-palisade fortified camp every night on the march. And then everyone goes to sleep around 9pm. And that, to be clear, is when everything works like clockwork – which it never does.
For a large army, breaking camp, waiting to begin marching, waiting for the last man to arrive, and dealing with pack animals and wagons slices a few hours off of that eight-hour march routine. All of which is why a normal large body of infantry moves something closer to 8-12 miles per day than the 24 miles (8 hours x 3.1mph) per day implied by Wikipedia’s Average Human Walking Speed.
Historians doing studies of campaigns thus tend to use these sorts of rule-of-thumb speeds without much feeling the need to explain why armies move so slowly, because I think they expect that most of their readers are either fellow historians or former soldiers and, in either case, already know. These rules of thumb, in turn, derive from staff planning in the age when armies still mostly walked to war (especially the 1800s and early 1900s): those staff office planners would have had (and presumably still do have) elaborate tables of how many men can move how fast over what sort of roads in what kind of weather — because bad staff work multiplied over massive armies can mean catastrophic logistics and timing failures (see: Frontiers, Battle of the (1914) for examples).
If anything, for a medieval army of conscripts, fresh from a successful battle, with a long supply-train moving off of the main roads, 12 miles per day is actually quite fast. Large armies with lots of wagons often strayed into single-digit marching speeds. And, to be clear, marching speeds are highly variable based on terrain and the rest.
Bret Devereaux, “New Acquisitions: How Fast Do Armies Move?”, A Collection of Unmitigated Pedantry, 2019-10-06.
September 7, 2022
QotD: Gender Dysphoria
My lifelong gender dysphoria has certainly been a primary inspiration for my entire career as a researcher and writer. I have never for a moment felt female — but neither have I ever felt male either. I regard my ambiguous position between the sexes as a privilege that has given me special access to and insight into a broad range of human thought and response. If a third gender option (“Other”) were ever added to government documents, I would be happy to check it. However, I have never believed, and do not now, that society has any obligation to bend over backwards to accommodate my particular singularity of identity. I am very concerned about current gender theory rhetoric that convinces young people that if they feel uneasy about or alienated from their assignment to one sex, then they must take concrete steps, from hormone therapy to alarmingly irreversible surgery, to become the other sex. I find this an oddly simplistic and indeed reactionary response to what should be regarded as a golden opportunity for flexibility and fluidity. Furthermore, it is scientifically impossible to change sex. Except for very rare cases of intersex, which are developmental anomalies, every cell of the human body remains coded with one’s birth sex for life.
Beyond that, I believe that my art-based theory of “sexual personae” is far more expansive and truthful about human psychology than is current campus ideology: who we are or want to be exceeds mere gender, because every experimental persona that we devise contains elements of gesture, dress, and attitude rich with historical and cultural associations. (For Halloween in childhood, for example, I defiantly dressed as Robin Hood, a Roman soldier, a matador, Napoleon, and Hamlet.) Because of my own personal odyssey, I am horrified by the escalating prescription of puberty-blockers to children with gender dysphoria like my own: I consider this practice to be a criminal violation of human rights. Have the adults gone mad? Children are now being callously used for fashionable medical experiments with unknown long-term results.
In regard to the vexed issue of toilets and locker rooms, if private unisex facilities can be conveniently provided through simple relabeling, it would be humane to do so, but I fail to see why any school district, restaurant, or business should be legally obligated to go to excess expense (which ultimately penalizes the public) to serve such a minuscule proportion of the population, however loud their voices. And speaking of voices: as a libertarian, I oppose all intrusion by government into the realm of language, which belongs to the people and which evolves organically over time. Thus the term “Ms.” eventually became standard English, but another 1970s feminist hybrid, “womyn”, did not: the populace as a whole made that decision, as it always does with argot or slang filtering up from ethnic or avant-garde subgroups. The same principle applies to preferred transgender pronouns: they are a courtesy that we may choose to defer to, but in a modern democracy, no authority has the right to compel their usage.
Camille Paglia, “Prominent Democratic Feminist Camille Paglia Says Hillary Clinton ‘Exploits Feminism’”, Washington Free Beacon, 2017-05-15.
September 6, 2022
QotD: The fitness club
Yesterday we looked at what happens when a cult becomes a movement. I said there are two fundamental, structural problems that arise. The first is that the leadership’s goals start diverging from, and eventually run counter to, the cult’s dogma. That’s where the eco-scam finds itself these days. It doesn’t bother the Green True Believers that their leadership flies around in private jets — see yesterday’s discussion of disconfirmation — but it does put a damper on recruiting. We’re a stupid, spoiled, star-struck generation, but even we expect our leaders to walk the walk for a mile or two every now and again.
The second problem, though, is: What to do with the True Believers?
Let’s return to the metaphor of the fitness club. As we noted yesterday, the real money isn’t in the hardcore people who actually do the exercising. It’s in all the lardasses who sign up, and keep paying the membership fee, but never actually go. This leads to the perverse-seeming conclusion that the best gym, from the gym-owner’s perspective, is one that stands empty — gleaming, never-used equipment that just sits there, one mute inglorious depreciation tax writeoff, maintained by no paid staff. See what I mean? The whole point of owning a gym — the cult dogma, as it were — is to get people in shape, but the optimal gym from the cult leader’s perspective is a group of perpetual fatasses, buying themselves workout indulgences at $75 a month.
I trust that the analogues in the eco-scam are obvious, so let’s move on. Even the most optimal-for-the-owner gym, though, is going to have a few True Believers who are in there day after day, grinding out sets and jogging on treadmills and doing whatever those CrossFit freaks do.* If you let them, they’ll take over everything. Ever been in a gym and seen a piece of equipment designed to isolate one muscle that you’d never think could be worked out in the first place? Congrats, your gym’s got a True Believer. Just stake out the Urethra-cizer for a few hours; you’ll see her; she’s unmistakable. She’s pushing 50 but has the body of a 20-year-old, except made out of beef jerky …
… anyway, the point is, savvy gym owners know how to handle True Believers. You don’t buy ’em off with new equipment; you buy ’em off with new exercises. P90X is for pussies. Do Ultra-Kegels, and in just 60 days you’ll be able to lift an entire can of paint with your …
* Obviously I can’t write about gyms and cults without taking a cheap shot at CrossFit. They’re probably chock full of lessons on how to business-optimize your cult without letting it go mainstream, but I’m too terrified to look. Honest to God, there are some days where the only exercise I get is dodging and weaving away from the CrossFit cultists at the office.
Severian, “If the UFO Actually Comes, Part II”, Rotten Chestnuts, 2019-09-26.
September 5, 2022
QotD: Why bureaucracies are inherently slow
It is important to remember that all government law enforcement agencies are bureaucracies. And all bureaucracies have certain behavioral tendencies owing to their institutional structure and the incentives that structure generates.
The great economist Ludwig von Mises analyzed these tendencies and incentives in his 1944 book Bureaucracy.
In that book, Mises identified “slowness and slackness” as among the inherent features of government bureaucracy that no reform can remove.
We have all experienced the “slowness and slackness” of government bureaucracy: with the post office, the DMV, the public school system, etc. That’s why the animated movie Zootopia had sloths working at the DMV and everyone got the joke. And police bureaucracies are no exception to this reputation.
Why is this so? In part, it is due to another indelible feature of bureaucracy: that it is, as Mises wrote, “bound to comply with detailed rules and regulations fixed by the authority of a superior body. The task of the bureaucrat is to perform what these rules and regulations order him to do. His discretion to act according to his own best conviction is seriously restricted by them.”
Sometimes a delay is simply due to the fact that the government employee is too tied up in red tape to respond in a timely manner. The timely response may be outright prohibited by the rules. Or the delay may be owing to Kafkaesque procedural mazes that first must be navigated or chains of command that must be climbed for permission.
[…]
Again, Mises considered such features of bureaucracy to be unreformable. Why? He argued that it is the only way that a government bureaucracy can be made at all accountable to the public. A bureaucrat with a free hand is even more dangerous than a bureaucrat with his hands tied.
“If one assigns to the authorities the power to imprison or even to kill people,” Mises wrote, “one must restrict and clearly circumscribe this power. Otherwise the officeholder or judge would turn into an irresponsible despot.”
Dan Sanchez, “How Bureaucracy May Have Cost Lives in Uvalde”, Foundation for Economic Education, 2022-05-31.
September 4, 2022
QotD: Sparta’s fatal problem – oliganthropia
The consequence of the Spartan system – the mess contributions, the inheritance, the diminishing number of kleroi in circulation and the apparently rising numbers of mothakes and hypomeiones – was catastrophic, and once the downhill spiral started, it picked up speed very fast. From the ideal of 8,000 male spartiates in 480, the number fell to 3,500 by 418 (Thuc. 5.68) – there would be no recovery from the great earthquake. The drop continued to just 2,500 in 394 (Xen. Hell. 4.2.16). Cinadon – the leader of the above quoted conspiracy against the spartiates – supposedly brought a man to the market square in the center of the village of Sparta and asked him to count – out of a crowd of 4,000! – the number of spartiates, probably c. 390. The man counted the kings, the gerontes and ephors (that’s around 35 men) and 40 more homoioi besides (Xen. Hell. 3.3.5). The decline continued – just 1,500 in 371 (Xen. Hell. 6.1.1; 4.15.17) and finally just around 700 with only 100 families with full citizen status and a kleros, according to Plutarch by 254 B.C. (Plut. Agis. 5.4).
This is the problem of oliganthropia (“people-shortage” – literally “too-few-people-ness”) in Sparta: the decline of the spartiate population. This is a huge and contentious area of scholarship – no surprise, since it directly concerns the decline of one of the more powerful states in Classical Greece – with a fair bit of debate to it (there’s a decent rundown by Figueira of the demography behind it available online here). What I want to note here is that a phrase like “oliganthropia” makes it sound like there was an absolute decline in population, but the evidence argues against that. At two junctures in the third century, under Agis IV and then later Cleomenes III (so around 241 and 227), attempts were made to revive Sparta by pulling thousands of members of the underclass back up into the spartiates (the first effort failed and the second came around a century too late to matter). That, of course, means that there were thousands of individuals – presumably mostly hypomeiones, but perhaps some mothakes or perioikoi – around to be so considered.
Xenophon says as much with Cinadon’s observation about the market at Sparta. Now obviously, we can’t take that statement as a demographic survey, but as a general sense, 40 homoioi, plus a handful of higher figures, in a crowd of 4,000 speaks volumes about the growth of Sparta’s underclass. And that is in Laconia, the region of the Spartan state (in contrast to Messenia, the other half of Sparta’s territory), where the Spartans live and where the density of helots is lowest.
This isn’t a decline in the population of Sparta, merely a decline in the population of spartiates – the tiny, closed class of citizen-elites at the top.
So we come back to the standard assertion about Sparta: its system lasted a long time, maintaining very high cohesion – at least among the citizen class and its descendants. This is a terribly low bar – a society cohesive only among its tiny aristocracy. And yet, as low a bar as this is, Sparta still manages to slink below it. Economic cohesion was a mirage created by the exclusion of any individual who fell below it. Sparta maintained the illusion of cohesion by systematically removing anyone who was not wealthy from the citizen body.
If we really want to gauge this society’s cohesion, we ought to track households, one generation after the next, regardless of changes in status. If we do that, what do we find? A society with an increasingly tiny elite – and a majority which, I will again quote Xenophon, “would eat them raw“. Hardly a model of social cohesion.
Moreover, this system wasn’t that stable. The core labor force – the enslaved helots – are brutally subjugated by Sparta no earlier than 680 (even this is overly generous – the consolidation process in Messenia seems to have continued into the 500s). The austerity which supposedly underpinned cohesion among the spartiates by banishing overt displays of wealth is only visible archaeologically beginning in 550, which may mark the real beginning of the Spartan system as a complete unit with all of its parts functioning. And by 464 – scarcely a century later – terminal and irreversible decline had set in. Spartan power at last breaks permanently and irretrievably in 371 when Messenia is lost to them […]
This is a system that, on the most generous possible reading, lasted three centuries. In practice, we are probably better off saying it lasted just 170 years or so – from c. 550 (the completion of the consolidation of Messenia, and the beginning of both the Peloponnesian League and the famed Spartan austerity) to 371.
To modern ears, 170 years still sounds impressive. Compared to the remarkably unstable internal politics of Greek poleis, it probably seemed so. But we are not ancient Greeks – we have a wider frame of reference. The Roman Republic ticked on, making one compromise after another, for four centuries (509 to 133; Roman enthusiasts will note that I have cut that ending date quite early) before it even began its spiral into violence. Carthage’s republic was about as long-lived as Rome’s. We might date constitutional monarchy in Britain as beginning in 1688 or perhaps 1721 – that system has managed around 300 years.
While we’re here – although it was interrupted briefly, the bracket dates for the notoriously unstable Athenian democracy, usually dated from the Cleisthenic reforms of 508/7 to the suppression of the fourth-century democracy in 322, are actually longer, 185 years, give or take, with just two major breaks, consisting of just four months and one year. Sparta had more years with major, active helot revolts controlling significant territory than Athens had oligarchic coups. And yet Athens – rightly, I’d argue – has a reputation for chronic instability, while Sparta has a reputation for placid regularity. Might I suggest that stable regimes do not suffer repeated, existential slave revolts?
In short, the Spartan social system ought not be described as cohesive, and while it was relatively stable by Greek standards (not a high bar!) it is hardly exceptionally stable and certainly not uniquely so. So much for cohesion and stability.
Bret Devereaux, “Collections: This. Isn’t. Sparta. Part IV: Spartan Wealth”, A Collection of Unmitigated Pedantry, 2019-08-29.
September 3, 2022
QotD: The Severian Corollary to Hanlon’s Razor
Y’all are doubtless familiar with “Hanlon’s Razor”:
Never attribute to malice that which is adequately explained by stupidity.
A while back I wrote that this needs an update, and being a humble guy, I named it the “Severian Corollary”:
There’s a level of stupidity so profound, you actually hope it’s malice.
Severian, “Quick Take: The Severian Corollary”, Rotten Chestnuts, 2019-07-31.
September 2, 2022
QotD: Historical parallels between the British and American empires
… let us compare the US imperial experience to its British model. A whimsical exercise in comparative dates.
England was colonised by the Norman Empire (a tribe that spread across France, Britain, Italy, and the Middle East can, I believe, be referred to as an empire), in 1066. After some initial fierce resistance, they settled well, integrated with the local economy, and started developing a more advanced economic society.
North America was colonised by the British Empire (and Spanish and French of course), in the sixteenth century. After some initial fierce resistance, they settled well, integrated with the local economy, and started developing a more advanced economic society.
Norman England spent the next few centuries gradually taking out its neighbours. Wales, Ireland, and eventually Scotland (though the fact that the Scottish King James I & VI actually inherited England confuses this concept a bit). The process was fairly violent.
The North American “English” colonies spent the next few centuries taking out their neighbours. Indian tribes, Dutch, Spanish and French colonists, etc. The process was fairly violent.
England fought a number of wars over peripheral areas, particularly the Hundred Years war over claims to lands in France.
The North American colonies enthusiastically joined (if not blatantly incited) the early world wars, with the desire of taking over nearby French and Spanish colonies.
The English fought a civil war in the 1640s to 50s over the issue of how to share power between the executive government, the oligarchs, and the commons. It appears that the oligarchs incited the commons (which was not very common in those days anyway). It was extremely bloody, and those on the periphery — particularly the Scots and Irish — came out badly (and with a long term bad taste for their over-mighty neighbour).
The Colonies fought their first civil war over the issue of how to share power between the executive, the oligarchs and the commons in the 1770s to 80s. It is clear that the oligarchs incited the commons (who in the US were still not very common — every male except those Yellow, Red or Black. An improvement? Certainly not considering the theoretical philosophical base of the so-called Revolution!). It was not really so bloody, but those on the periphery — particularly the Indians and slaves (both of which were pro-British), and the Loyalists and Canadians — came out badly. (60-100,000 “citizens” were expelled or forced to flee for being “loyalists”, let alone Indians and ex-slaves). Naturally the Canadians and their new refugee citizens developed a long term bad taste for their over-mighty neighbour — who attempted to attack them at the drop of a hat thereafter.
The British spent the next century and a half accumulating bits of empire — the Dominions, the Crown Colonies, and the Protectorates — in a haphazard fashion. Usually, but not always, troops followed traders and settlers.
The United States spent the next century and a half accumulating bits of empire — conquests from the Indians, purchases from France and Russia, conquests from Mexico and Spain, annexations of places like Hawaii, etc. — in a haphazard fashion. Usually, but not always, troops followed traders and settlers.
Nigel Davies, “The Empires of Britain and the United States – Toying with Historical Analogy”, rethinking history, 2009-01-10.
September 1, 2022
QotD: The origins of the Bloody Mary
“Yes, yes please dear boy. You can prepare me a small rhesus negative Bloody Mary. And you must tell me all the news. I haven’t seen you since you finished your last film.” Uncle Monty, the lubricious booby in Bruce Robinson’s wonderful Withnail & I, selects his pre-prandial from a drinks table pregnant with possibilities with all the care one would expect from a seasoned practitioner. And so he might. For a Bloody Mary is perfect at almost any time of day and in every kind of weather.
All the best bars serve Bloody Marys. But the best Bloody Mary is served in The Grenadier, the one-time officers’ mess of the Foot Guards and the long-time public house in Wilton Row, Belgravia. Pass beneath the low lintel of this tiny tavern, pick your way through the crowded tap room and press up against the burr mahogany bar to order the world’s premier pick-me-up.
[…]
The Grenadier is rightly proud of its reputation for restorative remedies and particularly this one. Into the glass go a sizeable slug of vodka (preferably Stolichnaya), a stirrup of sherry (the secret weapon), tomato juice, then spices (Tabasco and Worcestershire sauce), citron (lemon or lime) and salt to personal taste. The addition of a celery stick is entirely superfluous, but for those in search of their hair of the dog, a little light salad is, I suppose, a help — of sorts.
Yet this pub, though an historic watering hole, cannot claim to be the home of the Mary. That right is retained across the Channel in Paris by Harry’s New York Bar. There, exactly a century ago in the City of Light, Fernand Petiot perfected his “bucket of blood”. The name was soon changed to something a little less Madame la Guillotine and the Mary as we know it was born.
Christopher Pincher, “Hail Mary”, The Critic, 2022-05-30.
August 31, 2022
QotD: John Keegan’s The Face of Battle
The Face of Battle (1976) is in some ways an oddly titled book. The title implies there is a singular face to battle that the author, John Keegan, is going to discover (and indeed, to take his foreword at face value, that is certainly the question he looked to answer). But that plan doesn’t survive contact with the table of contents, which makes it quite clear that Keegan is going to present not one face of battle, but the faces of three different battles, and they will look rather different. Rather than reinventing the wheel, I am going to follow Keegan’s examples to make my point here (although I should note that of course The Face of Battle is a book not without its flaws, as is true with any work of history).
Keegan’s first battle is Agincourt (1415). While famous for the place of the English longbow in it, at Agincourt the French advance (both mounted and dismounted) did reach the English lines; of this the sources for the battle are quite clear. And so the terror we are discussing is the terror of shock; not shock in the sense of a sudden shock or in the sense of a jolt of electricity, rather shock as the opposite of fire. Shock combat is the combat when two bodies of soldiers press into each other in mass hand-to-hand combat (which is, contrary to Hollywood, not so much a disorganized melee as a series of combats along the line of contact where the two formations meet). The advancing French had to will themselves forward into a terrifying shock encounter, while the English had to (like our hoplites above) hold themselves in place while watching the terrifying prospect of a shock engagement walk steadily towards them.
There is actually quite a bit of evidence that the terror of a shock engagement is something different from the other terrors of war (to be clear, not “better” or “worse”, merely different in important ways). There are numerous examples of units which could stand for extended periods under fire but which collapsed almost immediately at the potential of a shock engagement. To draw a much more recent example, at Bai Beche in 2001, a force of Taliban withstood two days of heavy bombing and had repulsed an infantry assault besides, but collapsed almost immediately when successfully surprised by a cavalry charge (yes, in 2001) in their rear (an incident noted in S. Biddle, “Afghanistan and the Future of Warfare”, Foreign Affairs 82.2 (2003)).
And so our sources for state-on-state pre-gunpowder warfare (which is where you tend to find more fully “shock” oriented combat systems) stress similar sequences of fear: the dread inspired by the sight of the enemy army drawing up before you (Greek literature is particularly replete with descriptions of teeth-chattering and trembling in those moments and it is not hard to imagine why), followed by the steady dread-anticipation as the armies advanced, each step bringing that moment of collision closer. Often in such engagements one side might break before contact as the fear not of what was happening, but what was about to happen built up. And only then the long anticipated not-so-sudden shock of the formations coming together – rarely for long given the overpowering human urge not to be near an enemy trying to stab you with a sharp stick. There is something, I think, quite fundamental in the human psyche that understands another human with a sharp point, or a huge horse rapidly closing on a deeper level than it understands bullets or arrows.
Which brings us to Keegan’s second battle, Waterloo (1815), defined in part by the ability of the British to manage to hold firm under extended fire from artillery and infantry. The French artillery in an 80-gun grand battery opened fire at 11:50am and kept it up for hours until the French cavalry advanced (hoping that the British troops were suitably “softened” by the guns to be dislodged) at 4pm. In contrast to Agincourt (or a hoplite battle) which may have ended in just a couple of hours and consisted mostly of grim anticipation, soldiers (on both sides) at Waterloo were forced to experience a rather different sort of terror: forced to stand in active harm for hours on end, as bullets and cannon shot whizzed overhead.
The difference is perhaps most extreme if we move still further forward to the Somme (1916) and its bombardment. The British had prepared for their assault with a week-long artillery barrage, in which British guns fired 1.5 million shells (that is about 148 shells fired a minute, every minute, for a week). At the first sound of the guns, soldiers (in this case, the Germans, though it had been the French’s turn just that February to be on the receiving end of a bombardment at Verdun) rushed into their dug-out bomb shelters at the base of their trench and then waited. Unlike the British at Waterloo, who might content themselves that, one way or another, the terror of fire would not last a day, the soldiers of WWI had no way of knowing when the barrage would cease and the battle proper begin. Indeed, they could not see the battlefield at all, only sit under the ground as it shook around them and try to be ready, at whatever moment the barrage stopped, to rush back up to the lip of the trench to set up the machine guns – because if they were late to do it, they’d arrive to find British grenades and bayonets instead.
We will get into wounds, both physical and mental, next week, but it is striking to me that repeatedly there are reports after such barrages of soldiers so mentally broken by the strain of it that they wandered as if dazed or mindless, apparently driven mad by the bombardment. Reports of such immediate combat trauma are vanishingly rare in the pre-modern corpus (Hdt. 6.117 being the rare example). And it is not hard to see why the constant threat of sudden, unavoidable death hanging over you, day and night, for days or in some cases weeks on end produces a wholly different kind of terror.
And yet, to extend beyond Keegan’s three studies, in talking to contemporary veterans, it seems to me this terror of fire – being forced to stand (or hide) under long continuous fire – is not always quite the same as the terror of the modern battlefield. Of course I can only speak to this second hand (but what else can a historian generally do?), but there seems to be something different about a battlefield where everything might seem peaceful and fine and even a bit boring until suddenly the mortar siren sounds or a roadside IED goes off and the peril is immediate. The experience of such fear sometimes expresses itself in a sort of hypervigilance which seems entirely unknown to Greek or Roman writers (who in most cases could hardly have needed such vigilance; true surprise attacks were quite rare as it is extremely hard to sneak one entire army up on another) and doesn’t seem particularly prominent in the descriptions of “shell-shock” (which today we’d call PTSD) from the First World War, compared to the prominence of intense fatigue, the thousand-yard-stare and raw emotional exhaustion. I do wonder though if we might find something quite analogous looking into the trauma of having a village raided by surprise under the first system of war.
Bret Devereaux, “Collections: The Universal Warrior, Part IIa: The Many Faces of Battle”, A Collection of Unmitigated Pedantry, 2021-02-05.