As for conditions on the eve of coal’s rapid rise in the late sixteenth century, they were actually even less intense. Following the Black Death, London’s population took centuries to recover, and by 1550 was still below its estimated medieval peak. Having once held some 70-80,000 souls, by 1550 it had recovered only to about 50,000. And the woodlands fuelling London were clearly still intact. Foreign visitors in the 1550s, who mostly stayed close to the city, described the English countryside as “all enclosed with hedges, oaks, and many other sorts of trees, so that in travelling you seem to be in one continued wood”, and remarked that the country had “an abundance of firewood”.1 Even in the 1570s, when London’s population had likely at last begun to push past its medieval peak, the city seems to have drawn its wood from a much smaller radius than before. Whereas in the crunch of the 1300s it seemingly needed to draw firewood from as far as 17 miles away over land, in the 1570s even a London MP, with every interest in exaggerating the city’s demands, complained only that it sometimes had to source wood from as far away as 12 miles.2
And not far along the coast from the city lay the huge woodlands of the Weald, which stretched across the southeastern counties of Sussex, Surrey and Kent, and which did not even send much of their wood to London at all. Firewood from the Weald was instead exported to the Low Countries and the northern coast of France, and those exports more than tripled between 1490 and the early 1530s, from some 1.5 million billets per year to over 4.7 million. That level was still being reached in 1550, when not interrupted by on-and-off war with France, but by then the Weald was also meeting yet another new demand: making iron.3
Ironmaking was extremely wood-hungry. In the Weald of the 1550s, making just a single ton of “pig” or cast iron, fit only for cannon or cooking pots, required almost 4 tons of charcoal, which in turn required roughly another 28 tons of seasoned wood. England in the early sixteenth century had imported the vast majority of its iron from Spain, but between 1530 and 1550 Wealden pig iron production increased eightfold. The expansion would have demanded, on a very conservative estimate, the sustained annual output of at least 50,000 acres of woodland — an area roughly sixty times the size of New York’s Central Park. Yet even this hugely understates the true scale of the expansion, as pig iron needed to be refined into bar or wrought iron to be fit for most uses, which required twice as much charcoal again — or in other words, a total of 86 tons of seasoned wood had to be first baked and then burned, just to make one ton of bar iron from the ore. And all this was just the beginning. By the 1590s the output of the Wealden ironworks had more than tripled again, for pig iron alone (though the efficiency of charcoal usage had also halved — a story for another time, perhaps).4
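For readers who want to check the fuel arithmetic, here is a rough back-of-the-envelope sketch. The conversion ratios are the rounded figures quoted above (not Hammersley's exact, date-specific estimates), and the small shortfall against the quoted ~86 tons presumably reflects losses in refining, since somewhat more than a ton of pig iron went into each ton of bar:

```python
# Back-of-the-envelope check of the fuel chain described above, using the
# article's rounded ratios as illustrative assumptions.

CHARCOAL_PER_TON_PIG = 4.0    # tons of charcoal to smelt 1 ton of pig iron
WOOD_PER_TON_CHARCOAL = 7.0   # tons of seasoned wood per ton of charcoal (28 / 4)

# Wood consumed in smelting one ton of pig iron:
wood_for_pig = CHARCOAL_PER_TON_PIG * WOOD_PER_TON_CHARCOAL   # 28 tons

# Refining pig into bar iron took "twice as much charcoal again":
wood_for_refining = 2 * wood_for_pig                          # 56 tons

total_wood_per_ton_bar = wood_for_pig + wood_for_refining
print(total_wood_per_ton_bar)   # 84.0 — in the neighbourhood of the ~86 tons quoted
```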
Given the rapidity of these changes, it will come as no surprise that there were complaints from the locals about how much the ironworks had increased the price of fuel for their homes. No doubt the wood being exported was having a similar effect as well. But the 1540s and 50s were also a time of rapid general inflation, caused by a dramatic debasement of the currency initiated by Henry VIII to pay for his wars. Debasement not only made imports significantly more expensive, and so likely spurred much of the activity in the Weald to replace increasingly unaffordable iron from Spain, but also made exports significantly cheaper for buyers abroad — and thus increasingly unaffordable for the English themselves.
In 1548-9, in a desperate bid to keep prices down, royal proclamations repeatedly and futilely banned the export of English wheat, malt, oats, barley, butter, cheese, bacon, beef, tallow, hides, and leather, to which the following year were added — like a game of inflation whack-a-mole — rye, peas, beans, bread, biscuits, mutton, veal, lamb, pork, ale, beer, wool, and candles. And of course charcoal and wood.5 For us to have records of the Weald exporting large quantities of wood in 1550, then, those cargoes must either have been sold under special royal licence, or have all been shipped out before the ban came into force halfway through the year, in May. Presumably a great deal more than was recorded was also smuggled out. In 1555, parliament saw the need to put the ban on exporting victuals and wood into law, adding severe penalties. Transgressing merchants would lose their ship and have to pay a fine worth double the value of the contraband goods, while the ship’s mariners would see all their worldly possessions seized, and be imprisoned for at least a year without bail.6
It’s perhaps no wonder that the Weald’s ironworks continued to expand at such a rapid pace: the export ban would have freed up a great deal of woodland for their use. And ironmaking soon spread to other parts of England too, to places where it did not have to compete with people’s homes for fuel. Given that iron was significantly more valuable than wood by both weight and volume, it could easily bear the cost of transport from further away, and so could be made much further inland, away from the coasts and rivers whose woodlands served cities. In the early seventeenth century, iron ore and pig iron from the southwest of England was sometimes shipped all the way to well-wooded Ireland for smelting or refining into bar.7 In the early eighteenth century scrap iron from as far away as the Netherlands was being recycled in the forested valleys of southwestern Scotland.8
Whenever ironmaking hit the limits of what could be sustainably grown in an area, it simply expanded into the next place where wood was cheap. And there was almost always another place. England, having had to import some three quarters of its iron from Spain in the 1530s, by the 1580s was almost entirely self-sufficient, after which the total amount of iron it produced using charcoal continued to grow, reaching its peak another two hundred years later in the 1750s.9 Had ironmaking not been able to find sustainable supplies of fuel within England, it would have disappeared within just a few years rather than experiencing almost two centuries of expansion.10
And that’s just iron. The late sixteenth century also saw the rapid rise in England of a charcoal-hungry glass-making industry. Green glass for small bottles had long been made in some of England’s forests in small quantities, but large quantities of glass for windows had had to be imported from the Low Countries and France. Just as with iron, however, the effect of debasement was to make the imports unaffordable for the English, and so French workers were enticed over in the 1550s and 60s to make window glass in the Weald. Soon afterwards, Venetian-style crystal-clear drinking glasses were being made there too.
What makes glass even more interesting than iron, however, is that its breakability meant it could not be made too far away from the cities in which it would be sold, and so had to compete directly with people’s homes for its fuel. Yet by the 1570s crystal glass was being made even within London itself. Despite charcoal supplies being by far the largest cost of production, over the course of the late sixteenth century the price of glass in England remained stable, making it increasingly common and affordable while the price of pretty much everything else rose.11
What we have then is not evidence of a mid-sixteenth-century shortage of wood for fuel, and certainly not of those demands causing deforestation. It is instead evidence of truly unprecedented demands being generally and sustainably met.
And despite these unprecedented demands, the intensity with which under-woods were exploited for fuel seems to have actually decreased. During the medieval population peaks, the woods and hedges that supplied London had been squeezed for more fuel by simply cropping the trunks and branches more often, cutting them away every six or seven years rather than waiting for them to grow into larger poles or logs. After the Black Death killed off half the population, the cropping cycle could lengthen again to about eleven years. But under-woods in the mid-sixteenth century were being cropped on average only every twelve or so years — about twice as long a cycle as before the Black Death — which by the nineteenth century had lengthened still further to fourteen or fifteen.12
The lengthening of the cropping cycle can imply a number of things, and we’ll get to them all. But one possibility is that in order to meet unprecedented demands, more firewood was being collected at the expense of the other major use of trees: for timber.
Anton Howes, “The Coal Conquest”, Age of Invention, 2024-10-04.
1. Estienne Perlin, “A description of England and Scotland” [1558], in The Antiquarian Repertory, vol.1 (1775), p.231. Perlin must have visited Britain in early 1553, as he mentions the arrival of a new French ambassador, which occurred in April 1553, as well as the wedding of Lady Jane Grey, which occurred in May of that year. Also Daniele Barbaro, “Report (May 1551)” in Calendar of State Papers Relating to English Affairs in the Archives of Venice, Vol. 5: 1534-1554 (Her Majesty’s Stationery Office, 1873). And: Paul Warde and Tom Williamson, “Fuel Supply and Agriculture in Post-Medieval England”, The Agricultural History Review 62, no. 1 (2014), p.71
2. Galloway et al., p.457 for the estimate of 17.4 miles overland as the outer limit of London’s firewood supply; Proceedings in the Parliaments of Elizabeth I, Vol I: 1558-1581, ed. T.E. Hartley (Leicester University Press, 1981), p.370: specifically, the London MP Rowland Hayward complained of the cost of firewood billets and charcoal having increased in price over the previous 30 years (which would encompass the period of debasement-induced inflation), before noting that “Sometimes the want of wood has driven the City to make provision in such places as they have been driven to carry it 12 miles by land”.
3. Mavis E. Mate, Trade and Economic Developments, 1450-1550: The Experience of Kent, Surrey and Sussex (Boydell Press, 2006), pp.83, 92, 101.
4. These statistics are derived from a combination of Peter King, “The Production and Consumption of Bar Iron in Early Modern England and Wales”, The Economic History Review 58, no. 1 (1 February 2005), pp.1–33 for the iron production estimates, and G. Hammersley, “The Charcoal Iron Industry and Its Fuel, 1540-1750”, The Economic History Review 26, no. 4 (1973), pp.593–613 for the estimates of how much charcoal, wood, and land was required at a given date to produce a given quantity of pig or bar iron.
5. Paul L. Hughes and James F. Larkin, eds., Tudor Royal Proclamations., Vol. I: The Early Tudors (1485-1553) (Yale University Press, 1964), proclamations nos. 304, 310, 318, 319, 345, 357, 361, 365, 366.
6. 1 & 2 Philip & Mary, c.5 (1555)
7. William Brereton, Travels in Holland, the United Provinces, England, Scotland and Ireland 1634-1635, ed. Edward Hawkins (The Chetham Society, 1844), p.147
8. T. C. Smout, ed., “Journal of Henry Kalmeter’s Travels in Scotland, 1719-20”, in Scottish Industrial History: A Miscellany, vol. 14, 4 (Scottish History Society, 1978), p.19
9. See King. Note that there was an interruption to this growth in the mid-seventeenth century, for reasons I mention later on.
10. There was a period in the early-to-mid seventeenth century when English ironmaking stagnated, but this was due to the growth of a competitive ironmaking industry in Sweden.
11. D. W. Crossley, “The Performance of the Glass Industry in Sixteenth-Century England”, The Economic History Review 25, no. 3 (1972), pp.421–33
12. Galloway et al. On cropping cycles in particular, see pp.454-5: they note how the average cropping of wood in their sample c.1300 was about every seven years, but by 1375-1400 — once population pressures had receded due to the Black Death — the average had increased to every eleven. See also Rackham, pp.140-1. John Worlidge, Systema agriculturæ (1675), p.96 mentions that coppice “of twelve or fifteen years are esteemed fit for the axe. But those of twenty years’ standing are better, and far advance the price. Seventeen years’ growth affords a tolerable fell”.
January 13, 2025
January 12, 2025
QotD: Kaiser Wilhelm II
Following the all-too-brief reign of Frederick III, his son Wilhelm II, grandson of the first German Emperor, took power in 1888 (known as the “year of the three emperors”). From the start, the young Wilhelm was determined not to be the reserved figure his grandfather had been, and still less the liberal reformer that his ill-fated father had wished to be. Instead, Wilhelm believed it was his right and duty to be directly involved in governing the country.
This was completely incompatible with Bismarck’s system, which had centralized power in his own person. With uncharacteristic focus and subtlety, Wilhelm sought to reclaim the power that his grandfather had ceded to the chancellor. This was not to prove especially difficult; Bismarck’s position had always relied upon his indispensability to the emperor. Thus, when Bismarck offered his resignation (as he often did during disputes) Wilhelm merely accepted it. The last great man of the wars of unification had now disappeared from the scene.
While the German Empire never became a true autocracy, Wilhelm succeeded in creating what the historian and biographer of the Kaiser John C. Röhl called a “personalist” system.1 The Kaiser had significant power over personnel. Promotions in the officer corps required his assent. Advancement within the civil service (from which civilian ministers were appointed) was also dependent on his favor. By exercising this power, Wilhelm was able to ensure that the highest levels of the German government were staffed by men agreeable to his point of view. Though they were not mere “yes men”, Wilhelm ensured that they were knowingly dependent on his favor for their positions. The Kaiser — even to the end of the monarchy — exercised considerable “negative power” (as Röhl termed it).2 While Wilhelm’s ability to actively make policy was limited, anything he disapproved of was simply not proposed.
Wilhelm II’s reign marked a departure from the more restrained leadership of his predecessors, as he sought to assert direct influence over the German Empire’s governance and military affairs. This shift toward a more “personalist” system, where loyalty to the Kaiser outweighed true statesmanship, weakened the effectiveness of German leadership and contributed to its eventual strategic missteps. The rigid adherence to the Schlieffen Plan and the technocratic focus on material advantages, such as firepower and mobility, overshadowed the need for adaptable strategic thinking. These failures in both leadership and military planning set the stage for Germany’s disastrous involvement in World War I, where an empire led by personalities rather than policies was ill-prepared for the complexities of modern warfare. Ultimately, Wilhelm’s influence and the culture of sycophancy he fostered played a pivotal role in leading Germany down the path of ruin.
Kiran Pfitzner and Secretary of Defense Rock, “The Kaiser and His Men: Civil-Military Relations in Wilhelmine Germany”, Dead Carl and You, 2024-10-02.
1. John C. G. Röhl, Kaiser Wilhelm II: A Concise Life (Cambridge: Cambridge University Press, 2014).
2. Negative power refers to the ability of an actor or group to block, veto, or prevent actions, decisions, or policies from being implemented, rather than directly initiating or shaping outcomes.
January 11, 2025
QotD: “Composite” pre-gunpowder infantry units
I should be clear that I am making this term up (at least as far as I know) to draw a contrast between what Total War has, namely single units made up of soldiers with identical equipment loadouts that serve a dual function (hybrid infantry), and what it doesn’t have: units composed of two or more different kinds of infantry working in concert as part of a single unit, which I am going to call composite infantry.
This is actually a very old concept. The Neo-Assyrian Empire (911-609 BC) is one of the earliest states for which we have pretty good evidence of how its infantry functioned – there was of course infantry earlier than this, but Bronze Age royal records from Egypt, Mesopotamia or Anatolia tend to focus on the role of elites who, by the late Bronze Age, are increasingly on chariots. But for the early Iron Age Neo-Assyrian empire, the fearsome effectiveness of its regular (probably professional) infantry, especially in sieges, was a key component of its foreign-policy-by-intimidation strategy, so we see a lot more of them.
That infantry was split between archers and spear-and-shield troops, called variously spearmen (nas asmare) or shield-bearers (sab ariti). In Assyrian artwork, they are almost always shown in matched pairs, each spearman paired off with a single archer, physically shielding the archer from attack while the archer shoots. The spearmen are shown with one-handed thrusting spears (of a fairly typical design: iron blade, around 7 feet long) and a shield, either a smaller round shield or a larger “tower” shield. Assyrian records, meanwhile, reinforce the sense that these troops were paired off, since the numbers of archers and spearmen typically match perfectly (although the spearmen might have subtypes, particularly the “Qurreans”, who may have been a specialist type of spearman recruited from a particular ethnic group; where the Qurreans show up, if you add Qurrean spearmen to Assyrian spearmen, you get the number of archers). From the artwork, these troops seem to have generally worked together, probably lined up in lines (in some cases perhaps several pairs deep).
The tactical value of this kind of composite formation is obvious: the archers can develop fire, while the spearmen provide moving cover (in the form of their shields) and protection against sudden enemy attack by chariot or cavalry with their spears. The formation could also engage in shock combat when necessary; the archers were at least sometimes armored and carried swords for use in close combat and of course could benefit (at least initially) from the shields of the front rank of spearmen.
The result was self-shielding, shock-capable foot archer formations. Total War: Warhammer also flirts with this idea with foot archers who have their own shields, but often simply adopts the nonsense solution of having those archers carry their shields on their backs and still gain the benefit of their protection when firing, which is not how shields work (somewhat better are the handful of units that use their shields as a firing rest for crossbows, akin to a medieval pavise).
We see a more complex version of this kind of composite infantry organization in the armies of the Warring States (476-221 BC) and Han Dynasty (202 BC – 220 AD) periods in China. Chinese infantry in this period used a mix of weapons, chiefly swords (used with shields), crossbows and a polearm, the ji, which had not only a long spearpoint but also a hook and a striking blade. In Total War: Three Kingdoms, which represents the late Han military, these troop-types are represented in distinct units: you have a regiment of ji-polearm armed troops, or a regiment of sword-and-shield troops, or a regiment of crossbowmen, which maneuver separately. So you can have a block of polearms or a block of crossbowmen, but you cannot have a mixed formation of both.
Except that there is a significant amount of evidence suggesting that this is exactly how the armies of the Han Dynasty used these troops! What seems to have been common is that infantry were organized into five-man squads with different weapon-types, which would shift their position based on the enemy’s proximity. So against a cavalry or chariot charge, the ji might take the front rank with heavier crossbows in support, while the sword-armed infantry moved to the back (getting them out of the way of the crossbows while still providing mass to the formation). Against infantry or arrow attack, of course, the swordsmen might be moved forward, or the crossbowmen, and so on (sometimes there were also spearmen or archers in these squads as well). These squads could then be lined up next to each other to make a larger infantry formation, presenting a solid line to the enemy.
(For more on both of these military systems – as well as more specialist bibliography on them – see Lee, Waging War (2016), 89-99, 137-141.)
Bret Devereaux, “Collection: Total War‘s Missing Infantry-Type”, A Collection of Unmitigated Pedantry, 2022-04-01.
January 10, 2025
QotD: The “bottle service” model as a “douchebag Potlatch”
Gabriel: So far we’ve been discussing bottle service from the consumer’s point of view as a potlatch, but the core of the book is that it requires an enormous amount of extremely convoluted work to mobilize models as a sort of rent-an-entourage to be guests at the potlatch. Veblen observed that one of the functions of dependents, and indeed the primary function of dependents with little or no functional purpose, is to consume beyond what a rich man could consume himself and thereby demonstrate the rich man’s wealth and power. A Wall Street bro can probably consume a lot more alcohol than a woman with a body mass index of 18, but several underweight women at the bro’s table can considerably expand the amount of alcohol that the table can collectively consume. The distinctive feature of bottle service is that rather than the guests being either the host’s long-standing dependents or the host’s frenemy and the frenemy’s long-standing dependents as in a classic potlatch, the models are strangers to the host and their presence is arranged by the club, which subcontracts this to party promoters. I suppose this isn’t totally unprecedented since the synoptic gospels’ parable of the feast (Matthew 22 and Luke 14) also involves mobilizing a bunch of randos to benefit from the host’s largesse; but (a) the host in the parable relied on randos as a substitute when his regular dependents blew off his invitation, and (b) the parables aren’t intended to be realistic stories, so they aren’t good evidence that an actual 1st century AD host would behave this way.
As to whether I’ve gotten a grant to pay for bottle service, mercifully no. I have had dinner with a (former) model, but it was Ashley herself at the kind of restaurant that occasionally has a hedge fund Powerpoint deck critiquing its management go viral. It was a decent hour, both of us were completely sober, there was no party promoter arranging the meeting, and there was no EDM played at OSHA-violation decibel levels. My idea of a good time is a lucid conversation with a smart friend, such as both that occasion and this email exchange, whereas I’d pay a good amount of money to avoid getting extremely drunk and staying out until dawn in an environment too loud for conversation.1 However I salute Ashley for doing so and thereby providing us with this book. The most I’ve had to suffer for my scholarship is writing response memos to annoying reviewer questions, or struggling with merge errors whilst munging data files.
And yet, contrary to my own taste, the people at the night clubs are paying a lot of money and/or waiting in line to get in, so obviously they seem to think it is appealing. I don’t think we can call this false consciousness either. Ashley is very clear that part of the reason the models go to the clubs is as a favor to the promoters (much more on that later), but part of it is that a lot of models think going to a famous night club is really glamorous and cool. I’m tempted to say this is just de gustibus non disputandum, but just shrugging at taste is kind of a cop-out for a sociologist, since one of our mandates is to explain socially patterned taste.
I think a key explanation for what’s going on is Girard’s mimetic desire. The club is glamorous because there’s a long line of people outside waiting to get past the velvet rope. The women are beautiful because everyone agrees that tall skinny women are beautiful, even though in other contexts a lot of men (including the promoters) are more attracted to the kind of shorter curvier women who are barred entry to the club as “midgets”. Everyone and everything in the world of models and bottles that is desirable is desirable primarily because they or it are desired by others.
John: I’ve never read a fashion magazine or watched a runway show, so I just naively assumed that models were stunningly attractive and feminine. But as Mears points out, the models are not actually to most men’s tastes. They tend to have boyish figures and to be unusually tall.2 Is this because the fashion industry is dominated by gay men, who gravitate towards women who look like teen boys? Whatever the origins of it, there is a model “look”, and the industry has slowly optimized for a more and more extreme version of it, like a runaway neural network, or like those tribes with the rings that stretch their necks or the boards that flatten their skulls. There’s actually a somewhat uncanny or even posthuman look to many of the models. The club promoters denigrate women who lack the model look as “civilians”, but freely admit that they’d rather sleep with a “good civilian” than with a model. The model’s function, as you say, is as a locus of mimetic desire. They’re wanted because they’re wanted, in a perfectly tautological self-bootstrapping cycle; and because, in the words of one promoter: “They really pop in da club because they seven feet tall”.
The men don’t want to sleep with the models, and by and large they don’t. This leads directly to one of the most jaw-dropping insights in the entire book: the models are a potlatch of sorts too. The men are buying thousand-dollar bottles of champagne and dumping them out on the floor, destroying economic value just to show that they can. And likewise, they’re surrounding themselves with dozens of beautiful women and then not sleeping with them. A potlatch of female beauty, sexuality, and reproductive potential — flaunting their wealth by hoarding women and conspicuously declining to enjoy their company, but at the same time denying them to every other man. An anti-harem.
Actually, you know what else the models remind me of? Medieval jesters. There was a point in the 15th century when every ruler in Europe had to have a dwarf in his entourage — not because there was anything intrinsic or valuable about very short men, but just because it was rare. The first guy did it to show that he was a big enough deal to have something expensive and hard to find, and then everybody else started doing it because it was the thing to do. (When dwarfs became too commonplace, the status symbols got weirder.) I think model phenotype is a little bit like that — desirable because it is rare, and because gathering and showcasing all these rare objects is a way to demonstrate your wealth and power.
John Psmith and Gabriel Rossman, “GUEST JOINT REVIEW: Very Important People, by Ashley Mears”, Mr. and Mrs. Psmith’s Bookshelf, 2024-03-04.
1. Another interesting work of scholarship on partying is Minjae Kim’s work on Korean work team binge drinking. Minjae shows that people go binge drinking because most of them hate it and thus it serves as a costly signal of loyalty.
2. In fact, an unusually high proportion of models are intersex individuals with a Y-chromosome and androgen insensitivity syndrome.
January 9, 2025
QotD: Teenagers
At some point I stopped wanting to go to the farm on Sundays; I was suffering from Sudden Onset Self-Addled Sullen Disengagement Syndrome, which strikes when you blow out 14 candles.
James Lileks, The Bleat, 2005-08-10.
January 8, 2025
January 7, 2025
QotD: Most people hate their jobs
A Gallup poll found that 85 percent of people hate their jobs. Business schools would say that this is due to poor strategy, poor leadership, or poor innovation. Nothing that cannot be fixed with an MBA degree. The real explanation is much simpler, however: 85 percent of people hate their jobs because, given the choice, they would never do them in the first place.

Twenty years ago, I applied to a business school. A good one. Actually, one of the best. When the acceptance letter arrived, I was convinced that I’d been admitted under some female quota, as my abilities are perfectly average. Then I started the course and realised that so were everyone else’s. No one in my class was especially bright. Or if they were, it was of the topical, tactical sort of intelligence — one that allows a person to see the different angles but somehow totally miss the point. The course itself was akin to vocational training: two months of accounting, two months of strategy, two months of marketing, and then off you go, ripe and ready for the office. Sorry, for leadership — which is telling other people in the office what to do.
We do, of course, have a choice. If you don’t like office work, you can become a PE teacher. If you are bad with authority, start your own business. The corporate sector is too greedy for you? Join an NGO. How glorious our life would be if things were so simple. Regrettably, they are not. Nicolai Berdyaev, a Russian religious philosopher during the first half of the twentieth century, argued — quite convincingly — that this choice to which we habitually refer is not really a choice at all. There is no freedom in it. It is a decision to adjust, adapt, and fit in. It is not a choice to create. At best, it is the choice of an animal looking for food and shelter, not of a human agent created in God’s image. He was right. As we leave childhood and the need to earn a living becomes increasingly urgent, our dreams start getting trimmed and trampled and squashed, until there comes a day when we no longer remember them. We begin by seeking the sublime. We end up resigned to the ordinary.
Elena Shalneva, “Work — the Tragedy of Our Age”, Quillette, 2020-01-29.
January 6, 2025
QotD: The right to bear arms
Thomas Jefferson’s question, posed in his inaugural address of 1801, still stings. If a man cannot be trusted with the government of himself, how can he be trusted with the government of others? And this is where history and politics circle back to ethics and psychology: because “the dignity of a free (wo)man” consists in being competent to govern one’s self, and in knowing, down to the core of one’s self, that one is so competent.
And that is where ethics and psychology bring us back to the bearing of arms. For causality runs both ways here; the dignity of a free man is what makes one ethically competent to bear arms, and the act of bearing arms promotes (by teaching its hard and subtle lessons) the inner qualities that compose the dignity of a free man.
It is not always so, of course. There are the 3% or so of psychotics, drug addicts, and criminal deviants who are incapable of the dignity of free men. Arms in the hands of such as these do not promote virtue, but are merely instruments of tragedy and destruction. But so, too, are cars. And kitchen knives. And bricks. The ethically incompetent readily (and effectively) find other means to destroy and terrorize when denied arms. And when civilian arms are banned, they more readily find helpless victims.
But for the other 97%, the bearing of arms functions not merely as an assertion of power but as a fierce and redemptive discipline. When sudden death hangs inches from your right hand, you become much more careful, more mindful, and much more peaceful in your heart — because you know that if you are thoughtless or sloppy in your actions or succumb to bad temper, people will die.
Too many of us have come to believe ourselves incapable of this discipline. We fall prey to the sick belief that we are all psychopaths or incompetents under the skin. We have been taught to imagine ourselves armed only as villains, doomed to succumb to our own worst nature and kill a loved one in a moment of carelessness or rage. Or to end our days holed up in a mall listening to police bullhorns as some SWAT sniper draws a bead …
But it’s not so. To believe this is to ignore the actual statistics and generative patterns of weapons crimes. “Virtually never”, writes criminologist Don B. Kates, “are murderers the ordinary, law-abiding people against whom gun bans are aimed. Almost without exception, murderers are extreme aberrants with lifelong histories of crime, substance abuse, psychopathology, mental retardation and/or irrational violence against those around them, as well as other hazardous behavior, e.g., automobile and gun accidents.”
To believe one is incompetent to bear arms is, therefore, to live in corroding and almost always needless fear of the self — in fact, to affirm oneself a moral coward. A state further from “the dignity of a free man” would be rather hard to imagine. It is as a way of exorcising this demon, of reclaiming for ourselves the dignity and courage and ethical self-confidence of free (wo)men, that the bearing of personal arms is, ultimately, most important.
This is the final ethical lesson of bearing arms: that right choices are possible, and the ordinary judgement of ordinary (wo)men is sufficient to make them.
We can, truly, embrace our power and our responsibility to make life-or-death decisions, rather than fearing both. We can accept our ultimate responsibility for our own actions. We can know (not just intellectually, but in the sinew of experience) that we are fit to choose.
Eric S. Raymond, “Ethics from the Barrel of a Gun”.
January 5, 2025
QotD: The customary Dictatorship in the Roman Republic before 82BC
It’s important to note at the outset that the Romans had no written constitution and indeed most of the rules for how the Roman Republic functioned were, well, customary. The Roman term for this was the mos maiorum, the “custom of the ancestors”, but Roman practice here isn’t that different from how common law and precedent guide the functioning of something like the British government (which also lacks a written constitution). Later Roman writers, particularly Cicero, occasionally offer theoretical commentary on the “rules” of the Republic (as a retrojected, ideal version), but just as often their observations do not actually conform to the practice we can observe from earlier periods. In practice, the idea here was that the “constitution” of the Republic consisted in doing things as they had always been done, or at least as they were understood to have always been done.
Consequently, as historians, we adopt the formulation that the Republic is what the Republic does – that is, that one determines the rules of offices and laws based on how they are implemented, not through a hard-and-fast legal framework. Thus “how does the dictatorship work?” is less a question of formal rules and more a question of, “how did the eighty-odd Roman dictatorships work?”
The basic idea behind the office was that the dictator was a special official, appointed only in times of crisis (typically a military crisis), who could direct the immediate solution to that crisis. Rome’s government was in many ways unlike a modern government; in most modern governments the activities of the government are carried out by a large professional bureaucracy which typically reports to a single executive, be that a Prime Minister or a President or what have you. By contrast, the Roman Republic divided the various major tasks between a bunch of different magistrates, each of whom was directly elected and notionally had full authority to carry out their duties within that sphere, independent of any of the other magistrates. In crude analogy, it would be as if every member of the United States cabinet was directly elected and none of them reported to any of the rest of them but instead all of them were advised by Congress (but in a non-binding manner). Notionally, the more senior magistrates (particularly the consuls) could command more junior magistrates, but this wasn’t a “direct-report” sort of relationship, but rather an unusual imposition of a more senior magistrate on a less senior one, governed as much by the informal auctoritas of the consul as by law.
In that context, you can see the value, when rapid action was required, of consolidating the direction of a given crisis into a single individual. This is, after all, why we have single executive magistrates or officials in most countries. So, assuming you have a crisis, how does this process work?
The typical first step is that the Senate would issue its non-binding advice, a senatus consultum, suggesting that one or both of the consuls appoint a dictator. The consuls could ignore this advice, but almost never did (save once in 431, Liv. 4.26.5-7). The consuls would then have to nominate someone; they might agree on the choice (which would make things simple) or one of them might be indisposed (out of the city, etc.), which would leave the choice to the one that remained. If both consuls were present and did not agree, they’d draw lots to determine who got to pick (which happens in the aforementioned instance in 431 after the tribunes got the consuls of that year to relent and pick someone, Liv. 4.26.11).
The nominating consul could pick anyone except himself; if you, as consul, wanted to be dictator, you would need your co-consul to so nominate you. There were no formal requirements; of course nominations tended to go to experienced commanders, which tended to mean former consuls, but this was not a requirement. Publius Claudius Pulcher (cos. 249), enraged when the Senate directed him to appoint a dictator (because of his own bungled military command) infamously nominated his own freedman, Claudius Glicia, as dictator (Liv. Per. 19.2; Suet. Tib. 2.2), which was apparently a bridge too far; Glicia was forced to abdicate but his name was duly entered onto the Fasti because the appointment was valid, if ill-advised (despite the fact that, as a freedman, Glicia would have been ineligible to run for any [office]). Nevertheless, dictators were usually former consuls.
Once the name was picked, in at least some cases the appointment may have been confirmed by a vote of the Comitia Curiata, Rome’s oldest voting assembly, which was responsible for conferring imperium (the power to command armies and organize law courts; essentially “the power of the kings”) on magistrates; not all magistrates had imperium (consuls, praetors, proconsuls, propraetors, dictators and their magistri equitum did; quaestors, aediles, tribunes, both plebeian and military, and censors did not). We do not know of any instance where the Comitia Curiata put the kibosh on the appointment of a dictator, so this step was little more than a rubber-stamp, and may have been entirely optional (Lintott, op. cit., 110, n. 75), but it may have also reflected the notion that all imperium had to be conferred by the people through a voting assembly. It is often hard to know with clarity about pro forma elements of Roman politics because the sources rarely report such things.
The dictator was appointed to respond to a specific issue or causa, the formulae for which are occasionally recorded in our sources. The most common was rei gerundae causa, “for the business to be done”, which in practice meant a military campaign or crisis. In cases where the consuls were absent (out on campaign), a dictator might also be nominated comitiorum habendorum causa, “for having an assembly”, that is, to preside over elections for the next year’s consuls, so that neither of the current consuls had to rush back to the city to do it. Dictators might also be appointed to do a few religious tasks which required someone with imperium. Less commonly but still significantly, a dictator might be appointed seditionis sedandae causa, “to quell sedition”; only one instance clearly under this causa is known, P. Manlius Capitolinus in 368, but several other instances, e.g. L. Quinctius Cincinnatus in 439, also dealt with internal matters. Finally, once in 216, Marcus Fabius Buteo held the office of dictator senatus legendi causa, “to enroll the Senate”, as the Battle of Cannae, earlier that year, had killed so many Senators that new inductions were needed (Liv. 23.23).
The dictator then named a subordinate, the magister equitum (“master of the horse”). The magister equitum was a lieutenant, not a colleague, but interestingly, once selected by a dictator he could not be unselected or removed, though his office ended when the dictator laid down his powers. We should note Marcus Minucius, magister equitum for Q. Fabius Maximus in 217, as an exception; his selection was forced by the people via a law and his powers were later made equal to Fabius’ powers. This turned out to be a substantial mistake, with Fabius having to bail the less prudent Minucius out at Geronium – the general undermining of Fabius during 217 was, in retrospect, viewed as a disaster, since the abandonment of his strategy led directly to the crushing defeat at Cannae in 216.
One of the ways that legal power was visually communicated in Rome was through lictors, attendants to the magistrates who carried the fasces, a bundle of rods (with an axe inserted when outside the sacred bounds of the city, called the pomerium). More lictors generally indicated a greater power of imperium (consuls, for instance, could in theory give orders to the praetors). Praetors were accompanied by six lictors; consuls by 12. The dictator had 24 lictors when outside of the pomerium to indicate his absolute power in that sphere (that is, in war), but only 12 inside the city. The magister equitum, as the dictator’s subordinate, got only six, like the praetors.
It also seems fairly clear that while dictators had almost complete power within their causa, those powers didn’t necessarily extend beyond it (e.g. Liv. 2.31.9-11, the dictator Manius Valerius, having been made dictator to resolve a military problem, insists to the Senate that he cannot resolve internal strife through his dictatorial powers and instead lays down his office early). The appointment of a dictator did not abolish the other offices (Cicero thinks it did, but he is clearly mistaken on the matter, Cic. De Leg. 3.9, see Lintott, 111). In essence then, the dictator was both a supreme military commander and also expected to coordinate the other magistrates with his greater degree of imperium, though of course in practice the ability to do that is going to substantially depend on the individual dictator’s ability to get cooperation from the other magistrates (but then, on the flip side, the dictator has just been designated as the leader in the crisis, so the social pressure to conform to his vision must have been intense). Notably, dictators could not make a law (a lex) on their own power or legislate by fiat outside of their causa; they could and did call assemblies which could by vote approve laws proposed by a dictator, however.
The dictator served for six months or until the task for which he was appointed was resolved, whichever came first. There is a tendency in teaching Roman history to represent a figure like Cincinnatus, who laid down his dictatorship after just fifteen days in 458, as exceptional but while the extreme shortness of term was exceptional, laying down power was not. Indeed, Cincinnatus (or perhaps a relative) served as dictator again in 439 and again laid down his power, this time in merely 21 days. In practice, the time-limited nature of the dictatorship meant there were few incentives to “run out the clock” on the office since it was so short anyway – better, politically, to solve the crisis quickly and lay down power ostentatiously early and “bank” the political capital than try to run out the period of power, accomplish relatively little and squander a reputation for being public-spirited.
Bret Devereaux, “Collections: The Roman Dictatorship: How Did It Work? Did It Work?”, A Collection of Unmitigated Pedantry, 2022-03-18.
January 4, 2025
QotD: The “show pillows”
The female need to pile a bed with useless pillows is an old and not particularly novel observation. It mystifies men. It’s like serving a meal where the plate is loaded with Show Potatoes, and you have to remove ten tubers before you can start. It’s like having a workbench in the garage with Show Hammers. Don’t pound with that! That’s the nice hammer we want company to see! It’ll get nicked and dinged. Or like going to someone’s house and finding out they have a Show Dog. No, no, don’t pat him on the head. Here, use this dog. And there’s some panting happy mutt they pull out of a closet. This is the company dog.
It reminds me of the bathrooms of my childhood, which were stocked with forbidden things: decorative soap in a nice dish engraved with intricate patterns that evaporated on contact with water, and decorative towels. You ended up drying your hands on the curtains, or patting them dry on the inevitable polyester shag toilet-seat cover.
Anyway. You’re wondering how I recovered from this grotesque embarrassment. I fetched two pillows from Daughter’s unoccupied room, apologized on behalf of the male side of the species, and figured the matter was closed. Oh no. Ohhhh, no. The next morning my wife made up the guest room, and emerged with an expression of despair.
The pillowcases did not match.
One was white. The other — and I tremble with shame to write these words — was ivory.
Well, an apology was in order. But how? Maybe bring it up in a roundabout way at breakfast.
“So … how’d you sleep?”
“Oh … okay, I guess. Weird dreams. I was in a paint store, looking at those strips with the different hues, and two of the shades of white looked different but I couldn’t really tell if they were and then I started crying tears in two different shades of white and when the tears hit the floor they burned like acid, and then horrible off-white slugs oozed out of the hole and started singing ABBA songs in two different keys.”
“Huh. And you?”
“I had weird dreams too. There were two philosophers who agreed on everything except for one minor, obscure point, but instead of focusing on their agreement they argued about the small difference until they decided to have a duel, but the guns didn’t fire.”
“Ah, those would be the Show Pistols. Freud had something to say about those. Well, that’s on me. The pillowcase hues were not in sync. I hope we can get past this and enjoy the day.”
James Lileks, “Show Me the Pillows”, LILEKS (James), 2024-09-30.
January 3, 2025
QotD: Whimsy
Whimsy is an aesthetic category for cultural artifacts that do not quite conform to, but do not fully violate, the rules of contemporary culture. Whimsy is licensed departure. It makes free with cultural conventions in a way we find charming, funny, winsome and sometimes freeing. Whimsy is chaos on a leash, departure that may not stray.
Grant McCracken, “Discontinuous innovation and the mysteries of Roger Ebert”, This Blog Sits at the, 2005-08-03.
January 2, 2025
QotD: Sincerity
“… in the ’90s, the human spirit was alive and free. And that’s the vibe that resonates with me.”
This is what the French call le horse pucky. If we may be so bold as to speak of “the human spirit” — which is pretty heavy for a column starting with a professional wrestler — the ’90s killed it stone cold dead. The human spirit can flourish in the most awful situations, but one indispensable requirement is: Sincerity. You just can’t be snarky about the “Ode to Joy” or ironic about the Sistine Chapel. If you do, then there really is no difference between Beethoven and MC Funetik Spelyn, nothing to choose between Michelangelo and a dog turd on the sidewalk — someone placed them there intentionally, which is the only distinguishing characteristic of “art” possible in a world overrun by Postmodernists and Deconstructionists.
Severian, “Why the 90s Was the Worst Decade Ever”, Rotten Chestnuts, 2021-07-04.
January 1, 2025
QotD: The OG internet moguls
The OG internet moguls were legit Mountain Dew-addled asocial coding savant t-shirt slobs, and that style quickly became a way to intimidate tie-clad IBMers and VCs in meetings. “Man, these guys must be geniuses, they don’t even GAF”
The schlubby chic look was part of the con. Made him seem like someone who didn’t care about the money, which sends a subtle anti-capitalist, anti-consumerist, anti-finance bro NLP type message.
https://twitter.com/esaagar/status/1603015654206066688
Posted by Noam Blum (neontaster), Thursday, December 15th, 2022, 1:25pm
Then it became sort of a cosplay thing for vaporware charlatans targeting FOMO investors.
“psst, Bob, should we really give $20 million to this guy? He’s picking his nose and wiping it on his cargo shorts rn”
“But remember that last nosepicker we passed on? He made $10 billion”
*fun fact: 8 years ago nosepicking guy was a Ralph Lauren-wearing chairman of the Delta Chi party committee, majoring in Entrepreneurship. And then he took the Silicon Valley Dress Down for Success seminar
There are other stylistic variations on pure slovenliness; the Black Turtleneck Next Steve Jobs gambit, and the coffee-clutching Patagonia Vest & Untuckits Next Steve Bezos thing
Oh, and the hoodies, so many hoodies.
For everybody thinking “tech people need to start wearing IBM suits again”: if you showed up to work or a VC meeting in one you would be thought deranged, building security, or a lunch caterer
For me the peak of Silicon Valley style will always be the pre-internet Assembly Language programmer polyester clip-on tie & short sleeve Sears Towncraft dress shirt look. The effortless nonchalance told you “I can trust this person not to run to the Caribbean with my money”
David Burge (@Iowahawk), Twitter, 2022-12-15.
December 31, 2024
QotD: Pre-revolution Russia satirized by Dostoevsky
The opening of Demons tries to fool you into thinking it’s a comedy of manners about liberal, cosmopolitan Russian aristocrats in the 1840s. The vibe is that of a Jane Austen novel, but hidden within the comforting shell of a society tale, there’s something dark and spiky. Dostoevsky pokes fun at his characters in ways that translate alarmingly well into 2020s America. Everybody wants to #DefundTheOkhrana and free the serfs, but is terrified that the serfs might move in next door. Characters move to Brookl … I mean to St. Petersburg to start a left-wing magazine and promptly get canceled by other leftists for it. Academics endlessly posture as the #resistance to a tyrannical sovereign (who is unaware of their existence), and try to get exiled so they can cash in on that sweet exile clout. There are polycules.1
As the book unfolds, the satire gets more and more brutal. The real Dostoevsky knew this scene well — remember he spent his early years as a St. Petersburg hipster literary magazine guy himself — and he roasts it with exquisite savagery. As a friend who read the book with me put it: the men are fatuous, deluded about their importance, lazy, their liberal politics a mere extension of their narcissism. The women are bitchy, incurious about the world except as far as it’s relevant to their status-chasing, viewing everyone and everything instrumentally. Nobody has any actual beliefs, and everybody is motivated solely by pretension and by the desire to sneer at their country.
But this is no conservative apologia for the system these people are rebelling against either, Dostoevsky’s poison pen is omnidirectional. Many right-wing satirists are good at showing us the debased preening and backbiting, like crabs in a bucket, that surplus elites fall into when there’s a vacuum of authority. But Dostoevsky admits what too many conservatives won’t, that the libs can only do this stuff because the society they despise is actually everything that they say it is: rotting from the inside, unjust, corrupt, and worst of all ridiculous. Thus he introduces representatives of the old order, like the conceited and slow-witted general who constantly misses the point and gets offended by imagined slights. Or like the governor of the podunk town where the action takes place, who instead of addressing the various looming disasters, sublimates his anxiety over them into constructing little cardboard models.2 If there’s a vacuum of authority, it’s because men like these are undeserving of it, failing to exercise it, allowing it to slip through their fingers.
All of this is very fun,3 and yet not exactly what I expect from a Dostoevsky novel. It’s a little … frivolous? Where are the agonizingly complex psychological portraits, the weighty metaphysical debates, the surreal stroboscopic fever-dreams culminating in murder, the 3am vodka-fueled conversations about damnation? Don’t worry, it’s coming, he’s just lulling you into a false sense of security. After a few hundred pages a thunderbolt falls, the book takes a screaming swerve into darkness, and you realize that the whole first third of this novel is like the scenes at the beginning of a horror movie where everybody is walking around in the daylight, acting like stuff is normal and ignoring the ever-growing threat around them.
John Psmith, “REVIEW: Demons, by Fyodor Dostoevsky”, Mr. and Mrs. Psmith’s Bookshelf, 2023-07-17.
1. In an incredible bit of translation-enabled nominative determinism, the main cuckold is a character named Virginsky. I kept waiting for a “Chadsky” to show up, but alas he never did.
2. Look, the fact that he’s sitting there painting minis while the world burns makes the guy undeniably relatable. If you transported him to the present day he would obviously be an autistic gamer, and some of my best friends, etc., etc. Nevertheless, though, he should not be the governor.
3. For some reason, there are people who are surprised that Demons is funny. I don’t know why they’re surprised, Dostoevsky is frequently funny. The Brothers Karamazov is hilarious!
December 30, 2024
QotD: The auxilia troops of the Imperial Roman armies
As we’ve seen, there had always been non-Romans fighting alongside Roman citizens in the army, for as long as we have reliable records to judge the point. In the Republic (until the 80s BC) these had consisted mostly of the socii, Rome’s Italian allies. These were supplemented by troops from whatever allies Rome might have at the time, but there was a key difference in that the socii were integrated permanently into the Roman army’s structure, with an established place in the “org. chart”, compared to the forces of allies who might fight under their own leaders with an ad hoc relationship to the Roman army they were fighting with. The end of the Social War (91-87BC) brought the Italians into the Roman citizen body and thus their soldiers into the legions themselves; it marked the effective end of the socii system, which hadn’t been expanded outside of Italy in any case.
But almost immediately we see the emergence of a new system for incorporating non-Romans, this time provincial non-Romans, into the Roman army. These troops, called auxilia (literally, “helpers”) first appear in the Civil Wars, particularly with Caesar’s heavy reliance on Gallic cavalry to support his legions (which at this time seem not to have featured their own integrated cavalry support, as they had earlier in the republic and as they would later in the empire). The system is at this point very ad hoc and the auxiliaries here are a fairly small part of Roman armies. But when Augustus sets out to institutionalize and stabilize the Roman army after the Battle of Actium (31BC) and the end of the civil wars, the auxilia emerge as a permanent, institutional part of the Roman army. Clearly, they were vastly expanded; by 23 AD they made up half of the total strength of the Roman army (Tac. Ann. 4.5), a rough equivalence that seems to persist at least as far as the Constitutio Antoniniana in 212.
Of course it was no particular new thing for the Romans to attempt to use their imperial subjects as part of their army. The Achaemenid army had incorporated a bewildering array of subject peoples with their own distinctive fighting styles, a fact that Achaemenid rulers liked to commemorate […] The Seleucid army at Magnesia (189) which the Romans defeated also had numerous non-Macedonian supporting troops: Cappadocians, Galatians, Carians, Cilicians, Illyrians, Dahae, Mysians, Arabs, Cyrtians and Elamites. At Raphia (217) the Ptolemaic army incorporated Egyptian troops into the phalanx for the first time, but also included Cretans, Greek mercenaries, Thracians, Gauls and Libyans, inter alia. Most empires try to do this.
The difference here is the relative performance that Rome gets out of these subject-troops (both the socii and the auxilia). Take those examples. Quite a number of the ethnicities on Xerxes’ monument first served in the armies of Darius III fighting against Alexander but then swiftly switched sides to Alexander after he won the battles – the Ionians, Egypt, and Babylon greeted Alexander as a liberator (at least initially), which is part of why the Achaemenid Empire could crumble so fast so long as Alexander kept winning battles. Apart from Tyre and Gaza, the tough sieges and guerilla resistance didn’t start until he reached the Persian homeland. The auxiliaries in the Seleucid army at Magnesia famously fell apart under pressure, whereas the Roman socii stuck in the fight as well as the legions; our sources give us no sense at any point that the socii were ever meaningfully weaker fighters than the legions (if anything, Livy sometimes represents them as more spirited, though he has an agenda here, as discussed). And the Ptolemaic decision to arm their Egyptian troops in the Macedonian manner won the battle (turns out, Egyptians could fight just as well as Greeks and Macedonians with the right organization and training), but their subsequent apparent decision not to pay or respect those troops as well as their Macedonians seems to have led quite directly to the “Great Revolt” which crippled the kingdom (there is some scholarly argument about this last point, but while I think Polybius’ pro-Greek, anti-Egyptian bias creeps into his analysis, he is fundamentally right to see the connection (Plb. 5.107). Polybius thinks it was foolish to arm non-Greeks, but the solution to saving the Ptolemaic kingdom would have been arming the Egyptians and then incorporating them into the system of rule, rather than attempting to keep up the ethnic hierarchy with a now-armed, angry and underpaid underclass. The Greek-speakers-only-club system of Ptolemaic rule was unsustainable in either case, especially with Rome on the horizon).
By contrast, the auxilia were mostly very reliable. The one major exception comes from 69 AD – the “Year of the Four Emperors”, to give some sense of its chaos – when the Batavian chieftain Julius Civilis (himself an auxiliary veteran and a Roman citizen) revolted and brought one ala and eight cohorts drawn from the Batavi (probably around 4,500 men or so) with him, out of an empire-wide total of c. 150,000 auxilia (so maybe something like 3% of the total auxilia). Indeed, the legions had worse mutinies – the mutiny on the Rhine (Tac. Ann. 1.16ff in 14AD) had involved six legions (c. 30,000 troops, nearly a quarter of Rome’s 25 legions at the time). This despite the fact that the auxilia were often deployed away from the legions, sometimes in their own forts (you’ll see older works of scholarship suggest that the auxilia were kept logistically dependent on the legions, but more recent archaeology on exactly where they were has tended to push against this view). Moreover, the auxilia were often the only military forces (albeit in small detachments) in the otherwise demilitarized “senatorial” provinces (which comprised most of the wealthy, populous “core” of the empire); they could be trusted with the job, provided they weren’t the only forces in their own home provinces (and after 69, they never were). And the auxilia fought hard and quite well. The Romans occasionally won battles with nothing but the auxilia, as with the Battle of Mons Graupius (83 AD, Tac. Agricola 35ff) where the legions were held in reserve and never committed, the auxilia winning the battle effectively on their own. Viewers of the Column of Trajan’s spiral frieze have long noted that the auxilia on the monument (the troop-types are recognizable by their equipment) do most of the fighting, while the legions mostly perform support and combat engineering tasks.
We aren’t well informed about the training the auxilia went through, but what we do know points to long-service professionals who were drilled every bit as hard as the famously well-drilled legions. Consequently, they had exactly the sort of professional cohesion that we’ve already discussed.
Why this difference in effectiveness and reliability? The answer is to be found in the difference in the terms under which they served. Rather than being treated as the disposable native auxiliaries of other empires, the Romans acted like the auxilia mattered … because they did.
First of all, the auxilia were paid. Our evidence here is imperfect and still much argued about, but it seems that auxilia were paid 5/6ths of the wages of their legionary counterparts, with the cavalry auxilia actually paid more than the infantry legionaries. While it might sound frustrating to be systematically paid 1/6th less than your legionary equivalent, the legions were paid fairly well. The auxilia probably made in wages about as much as a normal day-laborer, but the wage was guaranteed (something very much not the case for civilian laborers) and while the cost of their rations was deducted from their pay, that deduction was a fixed amount that seems to have been set substantially below the market value of their rations, building in another subsidy. Most auxiliaries seem to have been volunteers, because the deal in being an auxiliary was good enough to attract volunteers looking to serve a full tour of duty (around 20 years; this was a long-service professional army now, so joining it meant making a career out of it).
And most importantly, eventually (perhaps under Tiberius or shortly thereafter) the auxilia began to receive a special grant of citizenship on finishing that tour of duty, one which covered the soldier, and any children he might have had by his subsequent spouse (including children had, it seems, before he left the army; Roman soldiers in this period were legally barred from contracting legal marriages while serving, so the grant is framed so that it retroactively legitimizes any children produced in a quasi-marriage when the tour of service is completed). Consequently, whereas a soldier being dragooned or hired as a mercenary into other multi-ethnic imperial armies might end his service and go back to being an oppressed subject, the Roman auxiliary, by virtue of his service, became Roman and thus essentially joined the ruling class at least in ethnic status. Auxiliaries also clearly got a share of the loot when offensive warfare happened and while there is a lot of debate as to if they also received the praemia (the large retirement bonus legionaries got), epigraphically it is pretty clear that auxiliaries who were careful with their money could establish themselves fairly well after their service. I should also note that what we see of auxiliaries suggests they were generally well armed (with some exceptions, which may have more to do with stereotyped depictions of certain kinds of “barbarians” than anything else): metal helmets, mail shirts (an expensive and high quality armor for the period), oval shields, a spear and the spatha – a Roman version of the classic Gallic one-handed cutting sword – are the standard visual indicator in Roman artwork for generic “auxiliaries”. That is actually a fairly high-end kit; it is no surprise that the auxilia could win battles with it.
The attentive should already be noting many of the components of the old socii system now in a new form: the non-Roman troops serve under similar conditions with the Romans, get similar pay and rations (forts occupied by the auxilia show no deviation from the standard Roman military diet), a share of loot and glory and can finally be rewarded for loyal service by being inducted into the Roman citizen body itself (which could mean their sons might well enroll in the legions, a thing which does seem to have happened, as we do see a fair bit of evidence for “military families” over multiple generations).
(For those looking for more detail on the auxilia, a lot of this is drawn from a book I have already recommended, Ian Haynes, Blood of the Provinces: The Roman auxilia and the Making of Provincial Society from Augustus to the Severans (2013). Also still useful for the history of the development of the auxilia is D.B. Saddington, The Development of the Roman auxiliary Forces from Caesar to Vespasian (1982); this is, alas, not an easy book to find as it is – to my knowledge – long out of print, but your library may be able to track down a copy.)
Bret Devereaux, “Collections: The Queen’s Latin or Who Were the Romans, Part V: Saving and Losing an Empire”, A Collection of Unmitigated Pedantry, 2021-07-30.