Tenant labor of one form or another may be the single most common form of labor we see on big estates, and it could fill both the fixed labor component and the flexible one. Typically, tenant labor (also sometimes called sharecropping) meant dividing up some portion of the estate into subsistence-style small farms (although with the labor perhaps more evenly distributed); while the largest share of the crop would go to the tenant or sharecropper, some of it was extracted by the landlord as rent. How much went each way could vary a lot, depending on which party was providing seed, labor, animals and so on, but 50/50 splits are not uncommon. As you might imagine, that extreme split (compared to the often standard c. 10-20% extraction frequent in taxation, or the 1/11th or 1/17th shares that appear frequently in medieval documents for serfs) compels the tenants to more completely utilize household labor (which is to say, “farm more land”). At the same time, setting up a bunch of subsistence tenant farms like this creates a rural small-farmer labor pool for the periods of maximum demand, so any spare labor can be soaked up by the main estate (or by other tenant farmers on the same estate). That is, the high rents force the tenants to do more labor – labor that, conveniently, their landlord, who charges them those high rents, is prepared to profit from by offering them the opportunity to also work on the estate proper.
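The arithmetic behind that pressure is easy to sketch. The toy calculation below uses the 50/50 split and a c. 15% tax-style extraction from the passage above; the yield figure and subsistence target are purely illustrative assumptions, not numbers from the source:

```python
# Toy model: acres a household must work so that its *retained* share of the
# crop still meets a fixed subsistence requirement, at different extraction
# rates. All specific numbers here are illustrative assumptions.

SUBSISTENCE = 100.0  # units of grain the household must keep to survive (assumed)

def land_needed(yield_per_acre: float, extraction_rate: float) -> float:
    """Acres required for the household's post-rent share to meet subsistence."""
    retained_per_acre = yield_per_acre * (1.0 - extraction_rate)
    return SUBSISTENCE / retained_per_acre

# A freeholder taxed at c. 15% vs. a sharecropper on a 50/50 split,
# assuming the same (made-up) yield of 10 units per acre.
freeholder = land_needed(10.0, 0.15)    # ≈ 11.8 acres
sharecropper = land_needed(10.0, 0.50)  # 20.0 acres
print(freeholder, sharecropper)
```

The sharecropping household must work roughly 70% more land than the lightly taxed freeholder to end up with the same amount of food, which is exactly the "farm more land" pressure the passage describes.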
In many cases, small freeholders might also work as tenants on a nearby large estate as well. There are many good reasons for a small free-holding peasant to want this sort of arrangement […]. So a given area of countryside might have free-holding subsistence farmers who do flexible sharecropping labor on the big estate from time to time alongside full-time tenants who worked land entirely or almost entirely owned by the large landholder. Now, as you might imagine, the situation of tenants – open to eviction and owing their landlords considerable rent – makes them very vulnerable to the landlord compared to neighboring freeholders.
That said, tenants in this sense were generally considered free persons who had the right to leave (even if, as a matter of survival, it was rarely an option, leaving them under the control of their landlords), in contrast to non-free laborers, an umbrella-category covering a wide range of individuals and statuses. I should be clear on one point: nearly every pre-modern complex agrarian society had some form of non-free labor, though the specifics of those systems varied significantly from place to place. Slavery of some form tends to be the rule, rather than the exception for these pre-modern agrarian societies. Two of the largest categories of note here are chattel slavery and debt bondage (also called “debt-peonage”), which in some cases could also shade into each other, but were often considered separate (many ancient societies abolished debt bondage but not chattel slavery for instance and debt-bondsmen often couldn’t be freely sold, unlike chattel slaves). Chattel slaves could be bought, sold and freely traded by their slave masters. In many societies these people were enslaved through warfare with captured soldiers and civilians alike reduced to bondage; the heritability of that status varies quite a lot from one society to the next, as does the likelihood of manumission (that is, becoming free).
Under debt bondage, people who fell into debt might sell (or be forced to sell) dependent family members (selling children is fairly common) or their own person to repay the debt; that bonded status might be permanent, or might hold only till the debt is repaid. In the latter case, as remains true in a depressing amount of the world, it was often trivially easy for powerful landlord/slave-holders to ensure that the debt was never paid, and in some systems this debt-peon status was heritable. Needless to say, the situation of both of these groups could be and often was quite terrible. The abolition of debt-bondage in Athens and Rome in the sixth and fourth centuries B.C. respectively is generally taken as a strong marker of the rising importance and political influence of the class of rural, poorer citizens, and you can readily see why this is a reform they would press for.
Bret Devereaux, “Collections: Bread, How Did They Make It? Part II: Big Farms”, A Collection of Unmitigated Pedantry, 2020-07-31.
April 17, 2023
QotD: Tenant-farming (aka “sharecropping”) in pre-modern societies
April 14, 2023
QotD: The three great strategic sins
The first sin is the sin of not having a strategy in the first place, what we might call “emotive” strategy. As Clausewitz notes, policy (again, note above how what we’re calling strategy is closest to policy in Clausewitz’ sense) is “subject to reason alone”, whereas the “primordial violence, hatred and enmity” is provided for in another part of the trinity (“will” or “passion”). To replace policy with passion is to invert their proper relationship and court destruction.
The second sin is the elevation of operational concerns over strategic ones, the usurpation of strategy with operations, which we have discussed before. This is, by the by, also an error in managing the relationship of the trinity, allowing the general’s role in managing friction to usurp the state’s role in managing politics.
Perhaps the greatest example of this is the Japanese attack on Pearl Harbor; an operational consideration (the destruction of the US Pacific Fleet) and even the tactics necessary to achieve that operational objective, were elevated above the strategic consideration of “should Japan, in the midst of an endless, probably unwinnable war against a third-rate power (the Republic of China) also go to war with a first-rate power (the United States) in order to free up oil-supplies for the first war”. Hara Tadaichi’s pithy summary is always worth quoting, “We won a great tactical victory at Pearl Harbor and thereby lost the war.”
How does this error happen? It tends to come from two main sources. First, it usually occurs most dramatically in military systems where the military leadership – which has been trained for operations and tactics, not strategy, which you will recall is the province of kings, ministers and presidents – usurps the leadership of the state. Second, it tends to occur when those military leaders – influenced by their operational training – take the operational conditions of their planning as assumed constants. “What do we do if we go to war with the United States” becomes “What do we do when we go to war with the United States” which elides out the strategic question “should we go to war with the United States?” entirely – and catastrophically, as for Imperial Japan, the answer to that unasked question of should we do this was clearly Oh my, NO.
(Bibliography note: It would hardly be fitting for me to declare these errors common and not provide examples. Two of the best case-studies I have read in this kind of strategic-thinking-failure-as-organizational-culture-failure are I. Hull, Absolute Destruction: Military Culture and the Practices of War in Imperial Germany (2005) and Parshall and Tully, Shattered Sword: The Untold Story of the Battle of Midway (2005). Also worth checking out, Daddis, “Chasing the Austerlitz Ideal: The Enduring Quest for Decisive Battle” in Armed Forces Journal (2006): 38-41. The same themes naturally come up in Daddis, Withdrawal: Reassessing America’s Final Years in Vietnam (2017)).
The third and final sin is easy to understand: a failure to update the strategy as conditions change. Quite often this happens in conjunction with the second sin, as once those operational concerns take over the place of strategy, it becomes difficult for leaders to consider new strategy as opposed to simply new operations in the pursuit of strategic goals which are often already lost beyond all retrieval. But this can happen without a subordination failure, due to sunk-costs and the different incentives faced by the state and its leaders. The classic example being functionally every major power in the First World War: by 1915 or 1916, it ought to have been obvious that no gains made as a result of the war could possibly be worth its continuance. Yet it was continued, both because having lost so much it seemed wrong to give up without “victory” and also because, for the politicians who had initially supported the war, to admit it was a useless waste was political suicide.
Bret Devereaux, “Collections: The Battle of Helm’s Deep, Part VIII: The Mind of Saruman”, A Collection of Unmitigated Pedantry, 2020-06-19.
April 10, 2023
QotD: Interaction between “big” farmers and subsistence farmers in pre-modern societies
What our little farmers generally have […] is labor – they have excess household labor because the household is generally “too large” for its farm. Now keep in mind, they’re not looking to maximize the usage of that labor – farming work is hard and one wants to do as little of it as possible. But a family that is too large for the land (a frequent occurrence) is going to be looking at ways to either get more out of their farmland or out of their labor, or both, especially because they otherwise exist on a razor’s edge of subsistence.
And then just over the way, you have the large manor estate, or the Roman villa, or the lands owned by a monastery (because yes, large landholders were sometimes organizations; in medieval Europe, monasteries filled this function in some places) or even just a very rich, successful peasant household. Something of that sort. They have the capital (plow-teams, manure, storage, processing) to more intensively farm the little land our small farmers have, but also, where the small farmer has more labor than land, the large landholder has more land than labor.
The other basic reality that is going to shape our large farmers is their different goals. By and large our small farmers were subsistence farmers – they were trying to farm enough to survive. Subsistence and a little bit more. But most large landholders are looking to use the surplus from their large holdings to support some other activity – typically the lifestyle of wealthy elites, which in turn requires supporting many non-farmers as domestic servants, retainers (including military retainers), merchants and craftsmen (who provide the status-signalling luxuries). They may even need the surplus to support political activities (warfare, electioneering, royal patronage, and so on). Consequently, our large landholders want a lot of surplus, which can be readily converted into other things.
The space for a transactional relationship is pretty obvious, though as we will see, the power imbalances here are extreme, so this relationship tends to be quite exploitative in most cases. Let’s start with the labor component. The fact that our large landholders are looking mainly to produce a large surplus (they are still not, as a rule, profit maximizing, by the by, because often social status and political ends are more important than raw economic profit for maintaining their position in society) means that instead of having a farm to support a family unit, they are seeking labor to support the farm, trying to tailor their labor to the minimum requirements of their holdings.
[…]
The tricky thing for the large landholder is that labor needs throughout the year are not constant. The window for the planting season is generally very narrow and fairly labor intensive: a lot needs to get done in a fairly short time. But harvest is even narrower and more labor intensive. In between those, there is still a fair lot of work to do, but it is not so urgent nor does it require so much labor.
You can readily imagine then that the ideal labor arrangement would be to have a permanent labor supply that meets only the low-ebb labor demands of the off-seasons, and then to supplement that labor supply during the peak seasons (harvest and, to a lesser extent, planting) with temporary labor for just those seasons. Roman latifundia may have actually come close to realizing this ideal; enslaved workers (put into bondage as part of Rome’s many wars of conquest) composed the villa’s primary year-round work force, but the owner (or more likely the villa’s overseer, the vilicus, who might himself be an enslaved person) could contract in sharecroppers or wage labor to cover the needs of the peak labor periods. Those temporary laborers are going to come from the surrounding rural population (again, households with too much labor and too little land who need more work to survive). Some Roman estates may have actually leased out land to tenant farmers on marginal parts of the estate’s own grounds for the purpose of creating that “flexible” local labor supply. Consequently, the large estates of the very wealthy required the impoverished many, the subsistence farmers, in order to function.
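The staffing logic described here – a permanent core sized to the off-season trough, plus temporary hires for the planting and harvest peaks – can be sketched as a toy model. Every figure below is invented purely for illustration:

```python
# Toy model of the estate staffing logic: size the permanent (year-round)
# workforce to the lowest monthly demand, and cover everything above that
# with temporary seasonal labor. All monthly figures are invented.

monthly_demand = {  # worker-equivalents needed each month (assumed values)
    "Jan": 4, "Feb": 4, "Mar": 10, "Apr": 10, "May": 6, "Jun": 6,
    "Jul": 5, "Aug": 14, "Sep": 14, "Oct": 6, "Nov": 4, "Dec": 4,
}

permanent = min(monthly_demand.values())  # the year-round core workforce
seasonal = {m: d - permanent for m, d in monthly_demand.items() if d > permanent}

print(f"permanent core: {permanent}")
print(f"peak temporary hires needed: {max(seasonal.values())}")
```

The point of the sketch is the gap: a permanent staff of four covers most of the year, but the (assumed) harvest months need ten extra hands, and those extra hands have to come from somewhere nearby – i.e. the surrounding small-farmer households.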
Bret Devereaux, “Collections: Bread, How Did They Make It? Part II: Big Farms”, A Collection of Unmitigated Pedantry, 2020-07-31.
April 6, 2023
QotD: The general’s pre-battle speech to the army
The modern pre-battle general’s speech is quite old. We can actually be very precise: it originates in a specific work, Thucydides’ Histories [of the Peloponnesian War] (written c. 400 B.C.). Prior to this, looking at Homer or Herodotus, commanders give very brief remarks to their troops before a fight, but the fully developed form of the speech, often presented in pairs (one for each army) contrasting the two sides, is all Thucydides. It’s fairly clear that a few of Thucydides’ speeches went on to define the standard form, and ancient authors after Thucydides functionally mix and match their components (we’ll talk about them in a moment). This is not a particularly creative genre.
Now, there is tremendous debate as to if these speeches were ever delivered and if so, how they were delivered (see the bibliography note below; none of this is really original to me). For my part, while I think we need to be alive to the fact that what we see in our textual sources are dressed up literary compressions of the tradition of the pre-battle speech, I suspect that, particularly by the Roman period, yes, such speeches were a part of the standard practice of generalship. Onasander, writing about the duties of a general in the first century CE, tells us as much, writing, “For if a general is drawing up his men before battle, the encouragement of his words makes them despise the danger and covet the honour; and a trumpet-call resounding in the ears does not so effectively awaken the soul to the conflict of battle as a speech that urges to strenuous valour rouses the martial spirit to confront danger.” Onasander is a philosopher, not a military man, but his work became a standard handbook for military leaders in Antiquity; one assumes he is not entirely baseless.
And of course, we have the body of literature that records these speeches. They must be, in many cases, invented or polished versions; in many cases the author would have no way of knowing the real words actually said. And many of them are quite obviously too long and complex for the situations into which they are placed. And yet I think they probably do represent some of what was often said; in many cases there are good indications that they may reflect the general sentiments expressed at a given point. Crucially, pre-battle speeches, alone among the standard kinds of rhetoric, refuse to follow the standard formulas of Greek and Roman rhetoric. There is generally no exordium (meaning introduction; except when there is an apology for the lack of one, in the form of, “I have no need to tell you …”) or narratio (the narrated account), no clear divisio (the division of the argument, an outline in speech form) and so on. Greek and Roman oratory was, by the first century or so, quite well developed and relatively formulaic, even rigid, in structure. The temptation to adapt these speeches, when committing them to a written history, to the forms of every other kind of oratory must have been intense, and yet they remain clearly distinct. It is certainly not because the genre of the battle speech was more interesting in a literary sense than other forms of rhetoric, because oh my it wasn’t. The most logical explanation to me has always been that they continue to remain distinct because however artificial the versions of battle speeches we get in literature are, they are tethered to the “real thing” in fundamental ways.
Finally, the mere existence of the genre. As I’ve noted elsewhere, we want to keep in mind that Greek and Roman literature were produced in extremely militarized societies, especially during the Roman Republic. And unlike many modern societies, where military service is more common among poorer citizens, in these societies military service was the pride of the elite, meaning that the literate were more likely both to know what a battle actually looked like and to have their expectations shaped by war literature than the commons. And that second point is forceful; even if battle speeches were not standard before Thucydides, it is hard to see how generals in the centuries after him could resist giving them once they became a standard trope of “what good generals do.”
So did generals give speeches? Yes, probably. Among other reasons, we can be sure because our sources criticize generals who fail to give speeches. Did they give these speeches? No, probably not; Plutarch says as much (Mor. 803b), though I will caution that Plutarch is not always the best when it comes to the reality of the battlefield (unlike many other ancient authors, Plutarch was a life-long civilian in a decidedly demilitarized province – Achaea – who also often wrote at great chronological distance from his subjects; his sense of military matters is generally weak compared to Thucydides, Polybius or Caesar, for instance). Probably the actual speeches were a bit more roughly cut and more compartmentalized; a set of quick remarks that might be delivered to one unit after another as the general rode along the line before a battle (e.g. Thuc. 4.96.1). There are also all sorts of technical considerations: how do you give a speech to so many people, and so on (and before you all rush to the comments to give me an explanation of how you think it was done, please read the works cited below; I promise you someone has thought of it, noted every time it is mentioned or implied in the sources and tested its feasibility already; exhaustive does not begin to describe the scholarship on oratory and crowd size), which we’ll never have perfect answers for. But they did give them and they did seem to think they were important.
Why does that matter for us? Because those very same classical texts formed the foundation for officer training and culture in much of Europe until relatively recently. Learning to read Greek and Latin by marinating in these specific texts was a standard part of schooling and intellectual development for elite men in early modern and modern Europe (and the United States) through the Second World War. Napoleon famously advised his officers to study Caesar, Hannibal and Alexander the Great (along with Frederick II and the best general you’ve never heard of, Gustavus Adolphus of Sweden). Reading the classical accounts of these battles (or, in some cases, modern adaptations of them) was a standard part of elite schooling as well as officer training. Any student that did so was bound to run into these speeches, their formulas quite different from other forms of rhetoric but no less rigid (borrowing from a handful of exemplars in Thucydides) and imbibe the view of generalship they contained. Consequently, later European commanders tended for quite some time to replicate these tropes.
(Bibliography notes: There is a ton written on ancient battle speeches, nearly all of it in journal articles that are difficult to acquire for the general public, and much of it not in English. I think the best possible place to begin (in English) is J.E. Lendon, “Battle Description in the Ancient Historians Part II: Speeches, Results and Sea Battles” Greece & Rome 64.1 (2017). The other standard article on the topic is E. Anson “The General’s Pre-Battle Exohortation in Graeco-Roman Warfare” Greece & Rome 57.2 (2010). In terms of structure, note J.C. Iglesias Zoida, “The Battle Exhortation in Ancient Rhetoric” Rhetorica 25 (2007). For those with wider language skills, those articles can point you to the non-English scholarship. They can also serve as compendia of nearly all of the ancient battle speeches; there is little substitute for simply reading a bunch of them on your own.)
Bret Devereaux, “Collections: The Battle of Helm’s Deep, Part VII: Hanging by a Thread”, A Collection of Unmitigated Pedantry, 2020-06-12.
April 2, 2023
QotD: The (in-)effectiveness of chemical weapons against “Modern System” armies
… it is far easier to protect against chemical munitions than against an equivalent amount of high explosives, a point made by Matthew Meselson. Let’s unpack that, because I think folks generally have an unrealistic assessment of the power of a chemical weapon attack, imagining tiny amounts to be capable of producing mass casualties. Now chemical munition agents have a wide range of lethalities and concentrations, but let’s use sarin – one of the more lethal common agents – as an example. Sarin is an extremely lethal agent, evaporating rapidly into the air from a liquid form. It has an LD50 (the dose at which half of the humans exposed will be killed) of less than 40mg per cubic meter over two minutes of exposure. Dangerous stuff – as a nerve agent, it is one of the more lethal chemical munitions; for comparison it is something like 30 times more lethal than mustard gas.
But let’s put that in a real-world context. Five Japanese doomsday cultists used about five liters of sarin in a terror attack on the Tokyo subway in 1995, deployed, in this case, in a contained area packed to the brim with people – a worst-case situation from our point of view (“best” case from the attackers’ point of view). But the attack killed only 12 people and injured about a thousand. Those are tragic, horrible numbers to be sure – but statistically insignificant in a battlefield situation. And no army could count on being handed a high-vulnerability environment like a subway station in an actual war.
In order to produce mass casualties in battlefield conditions, a chemical attacker has to deploy tons – and I mean that word literally – of this stuff. Chemical weapons barrages in the First World War involved thousands and tens of thousands of shells – and still didn’t produce a high fatality rate (though the deaths that did occur were terrible). But once you are talking about producing tens of thousands of tons of this stuff and distributing it to front-line combat units in the event of a war, you have introduced all sorts of other problems. One of the biggest is shelf-life: most nerve gases (which tend to have very high lethality) are not only very expensive to produce in quantity, they have very short shelf-lives. The other option is mustard gas – cheaper, with a long shelf-life, but required in vast quantities (during WWII, when just about every power stockpiled the stuff, the stockpiles were typically in the many tens of thousands of tons range, to give a sense of how much it was thought would be required – and then think about delivering those munitions).
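A rough back-of-envelope makes the "tons" claim concrete. It uses the c. 40mg per cubic meter figure quoted above; the target area and effective cloud height are purely illustrative assumptions:

```python
# Back-of-envelope: mass of agent needed to reach the quoted lethal
# concentration (~40 mg per cubic metre) over a patch of battlefield.
# Area and cloud height are assumptions for illustration only.

concentration_g_per_m3 = 40e-3   # 40 mg/m^3, from the figure quoted above
area_m2 = 1_000_000              # one square kilometre of front (assumed)
cloud_height_m = 10              # effective height of the gas cloud (assumed)

grams = concentration_g_per_m3 * area_m2 * cloud_height_m
tonnes = grams / 1e6
print(f"{tonnes} tonnes per square km")
```

Even under these generous assumptions, a single idealized, instantaneous blanketing of one square kilometre takes on the order of half a tonne of agent – and real conditions (wind dispersal, evaporation, protected troops, the need to sustain the concentration and cover a whole front repeatedly) multiply that enormously, which is why actual barrages ran to thousands of shells.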
[…]
But that’s not the only problem – the other problem is doctrine. Remember that the modern system is all about fast movement. I don’t want to get too deep into maneuver-warfare doctrine (one of these days!) but in most of its modern forms (e.g. AirLand Battle, Deep Battle, etc.) it aims to avoid the stalemate of static warfare by accelerating the tempo of the battle beyond the defender’s ability to cope, eventually (it is hoped) leading the front to decompose as command and control breaks down.
And chemical weapons are just not great for this. Active use of chemical weapons – even by your own side – poses all sorts of issues to an army that is trying to move fast and break things. This problem actually emerged back in WWI: even if your chemical attack breaks the enemy front lines, the residue of the attack is now an obstruction for you. […] A modern system army, even if it is on the defensive operationally, is going to want to make a lot of tactical offensives (counterattacks, spoiling attacks). Turning the battle into a slow-moving mush of long-lasting chemical munitions (like mustard gas!) is counterproductive.
But that leaves the fast-dispersing nerve agents, like sarin. Which are very expensive, hard to store, hard to provision in quantity and – oh yes – still less effective than high explosives when facing another expensive, modern system army, which is likely to be very well protected against such munitions (for instance, most modern armored vehicles are designed to be functionally immune to chemical munitions assuming they are buttoned up).
This impression is borne out by the history of chemical weapons; for top-tier armies, just over a century of being a solution in search of a problem. The stalemate of WWI produced a frantic search for solutions – far from being stupidly complacent (as the pop-history version of WWI often has it), many commanders were desperately searching for something, anything to break the bloody stalemate and restore mobility. We tend to remember the successful innovations – armor, infiltration tactics, airpower – because they shaped subsequent warfare. But at the time, there were a host of efforts: highly planned bite-and-hold assaults, drawn-out brutal et continu (“brutal and continuous”) efforts, dirigibles, mining and sapping, ultra-massive artillery barrages (trying a wide variety of shell-types and weights). And, of course, gas. Gas sits in the second category: one more innovation which failed to break the trench stalemate. In the end, even in WWI, it wasn’t any more effective than an equivalent amount of high explosives (as the relative casualty figures attest). Tanks and infiltration tactics – that is to say, the modern system – succeeded where gas failed in breaking the trench stalemate, with its superiority in the role demonstrated vividly in WWII.
Bret Devereaux, “Collections: Why Don’t We Use Chemical Weapons Anymore?”, A Collection of Unmitigated Pedantry, 2020-03-20.
March 29, 2023
QotD: Sacrifice
As a terminology note: we typically call a living thing killed and given to the gods a sacrificial victim, while objects are votive offerings. All of these terms have useful Latin roots: the word “victim” – which now means anyone who suffers something – originally meant only the animal used in a sacrifice as the Latin victima; the assistant in a sacrifice who handled the animal was the victimarius. Sacrifice comes from the Latin sacrificium, with the literal meaning of “the thing made sacred”, since the sacrificed thing becomes sacer (sacred) as it now belongs to a god, a concept we’ll link back to later. A votivus in Latin is an object promised as part of a vow, often deposited in a temple or sanctuary; such an item, once handed over, belonged to the god and was also sacer.
There is some concern for the place and directionality of the gods in question. Sacrifices for gods that live above are often burnt so that the smoke wafts up to where the gods are (you see this in Greek and Roman practice, as well in Mesopotamian religion, e.g. in Atrahasis, where the gods “gather like flies” about a sacrifice; it seems worth noting that in Temple Judaism, YHWH (generally thought to dwell “up”) gets burnt offerings too), while sacrifices to gods in the earth (often gods of death) often go down, through things like libations (a sacrifice of liquid poured out).
There is also concern for the right animals and the time of day. Most gods receive ritual during the day, but there are variations – Roman underworld and childbirth deities (oddly connected) seem to have received sacrifices by night. Different animals might be offered, in accordance with what the god preferred, the scale of the request, and the scale of the god. Big gods, like Jupiter, tend to demand prestige, high-value animals (Jupiter’s normal sacrifice in Rome was a white ox). The color of the animal would also matter – in Roman practice, while the gods above typically received white-colored victims, the gods below (the di inferi but also the di Manes) received darkly colored animals. That knowledge we talked about was important in knowing what to sacrifice and how.
Now, why do the gods want these things? That differs, religion to religion. In some polytheistic systems, it is made clear that the gods require sacrifice and might be diminished, or even perish, without it. That seems to have been true of Aztec religion, particularly sacrifices to Quetzalcoatl; it is also suggested for Mesopotamian religion in the Atrahasis, where the gods become hungry and diminished when they wipe out most of humanity and thus most of the sacrifices taking place. Unlike Mesopotamian gods, who can be killed, Greek and Roman gods are truly immortal – no more capable of dying than I am able to spontaneously become a potted plant – but the implication instead is that they enjoy sacrifices, possibly the taste or even simply the honor it brings them (e.g. Homeric Hymn to Demeter 310-315).
We’ll come back to this idea later, but I want to note it here: the thing being sacrificed becomes sacred. That means it doesn’t belong to people anymore, but to the god themselves. That can impose special rules for handling, depositing and storing, since the item in question doesn’t belong to you anymore – you have to be extra-special-careful with things that belong to a god. But I do want to note the basic idea here: gods can own property, including things and even land – the temple belongs not to the city but to the god, for instance. Interestingly, living things, including people can also belong to a god, but that is a topic for a later post. We’re still working on the basics here.
Bret Devereaux, “Collections: Practical Polytheism, Part II: Practice”, A Collection of Unmitigated Pedantry, 2019-11-01.
March 25, 2023
QotD: Sparta’s fate
What becomes of Sparta after its hegemony shatters in 371, after Philip II humiliates it in 338 and after Antipater crushes it in 330? This is a part of Spartan history we don’t much discuss, but it provides a useful coda on the Sparta of the fifth and fourth centuries. Athens, after all, remained a major and important city in Greece through the Roman period – a center for commerce and culture. Corinth – though burned by the Romans – was rebuilt and remained a crucial and wealthy port under the Romans.
What became of Sparta?
In short, Sparta became a theme-park. A quaint tourist get-away where wealthy Greeks and Romans could come to look and stare at the quaint Spartans and their silly rituals. It developed a tourism industry and the markets even catered to the needs of the elite Roman tourists who went (Plutarch and Cicero both did so, Plut. Lyc. 18.1; Tusc. 5.77).
In terms of civic organization, after Cleomenes III’s last-gasp effort to make Sparta relevant – an effort that nearly wiped out the entire remaining Spartiate class (Plut. Cleom. 28.5) – Sparta increasingly resembled any other Hellenistic Greek polis, albeit a relatively famous and also poor one. Its material and literary culture seem to converge with the rest of the Greeks, with the only distinctively Spartan elements of the society being essentially Potemkin rituals for boys, put on for the tourists who seem to be keeping the economy running and keeping what is left of Sparta in the good graces of their Roman overlords.
Thus ended Sparta: not with a brave last stand. Not with mighty deeds of valor. Or any great cultural contribution at all. A tourist trap for rich and bored Romans.
Bret Devereaux, “Collections: This. Isn’t. Sparta. Part VII: Spartan Ends”, A Collection of Unmitigated Pedantry, 2019-09-27.
March 21, 2023
QotD: The elephant as a weapon of war
The pop-culture image of elephants in battle is an awe-inspiring one: massive animals smashing forward through infantry, while men on elephant-back rain missiles down on the hapless enemy. And for once I can surprise you by saying: this isn’t an entirely inaccurate picture. But, as always, we’re also going to introduce some complications into this picture.
Elephants are – all on their own – dangerous animals. Elephants account for several hundred fatalities per year in India even today and even captured elephants are never quite as domesticated as, say, dogs or horses. Whereas a horse is mostly a conveyance in battle (although medieval European knights greatly valued the combativeness of certain breeds of destrier warhorses), a war elephant is a combatant in his own right. When enraged, elephants will gore with tusks and crush with feet, along with using their trunks as weapons to smash, throw or even rip opponents apart (by pinning with the feet). Against other elephants, they will generally lock tusks and attempt to topple their opponent over, with the winner of the contest fatally goring the loser in the exposed belly (Polybius actually describes this behavior, Plb. 5.84.3-4). Dumbo, it turns out, can do some serious damage if prompted.
Elephants were selected for combativeness, which typically meant that the ideal war elephant was an adult male, around 40 years of age (we’ll come back to that). Male elephants enter a state called “musth” once a year, where they show heightened aggressiveness and increased interest in mating. Trautmann (2015) notes a combination of diet, straight up intoxication and training used by war elephant handlers to induce musth in war elephants about to go into battle, because that aggression was prized (given that the signs of musth are observable from the outside, it seems likely to me that these methods worked).
(Note: In the ancient Mediterranean, female elephants seem to have also been used, but it is unclear how often. Cassius Dio (Dio 10.6.48) seems to think some of Pyrrhus’s elephants were female, and my elephant plate shows a mother elephant with her calf, apparently on campaign. It is possible that the difficulty of getting large numbers of elephants outside of India caused the use of female elephants in battle; it’s also possible that our sources and artists – far less familiar with the animals than Indian sources – are themselves confused.)
Thus, whereas I have stressed before that horses are not battering rams, in some sense a good war elephant is. Indeed, sometimes in a very literal sense – as Trautmann notes, “tearing down fortifications” was one of the key functions of Indian war elephants, spelled out in contemporary (to the war elephants) military literature there. A mature Asian elephant male is around 2.75m tall, masses around 4 tons and is much more sturdily built than any horse. Against poorly prepared infantry, a charge of war elephants could simply shock them out of position a lot of the time – though we will deal with some of the psychological aspects there in a moment.
A word on size: film and video-game portrayals often oversize their elephants – sometimes, like the Mumakil of Lord of the Rings, this is clearly a fantasy creature, but often that distinction isn’t made. As noted, male Asian (Indian) elephants are around 2.75m (9ft) tall; modern African bush elephants are larger (c. 10-13ft) but were not used for war. The African elephant which was trained for war was probably either an extinct North African species or the African forest elephant (c. 8ft tall normally) – in either case, ancient sources are clear that African war elephants were smaller than Asian ones.
Thus realistic war elephants should be about 1.5 times the size of an infantryman at the shoulders (assuming an average male height in the premodern world of around 5'6"), but are often shown to be around twice as tall if not even larger. I think this leads into a somewhat unrealistic assumption of how the creatures might function in battle, for people not familiar with how large actual elephants really are.
The elephant as firing platform is also a staple of the pop-culture depiction – often more strongly emphasized because it is easier to film. This is true to their use, but seems to have always been a secondary role from a tactical standpoint – the elephant itself was always more dangerous than anything someone riding it could carry.
There is a social status issue at play here which we’ll come back to […] The driver of the elephant, called a mahout, seems to have typically been a lower-status individual and is left out of a lot of heroic descriptions of elephant-riding (but not driving) aristocrats (much like Egyptian pharaohs tend to erase their chariot drivers when they recount their great victories). Of course, the mahout is the fellow who actually knows how to control the elephant, and was a highly skilled specialist. The elephant is controlled via iron hooks called ankusa. These are no joke – often with a sharp hook and a spear-like point – because elephants selected for combativeness are, unsurprisingly, hard to control. That said, they were not permanent ear-piercings or anything of the sort – the sort of setup in Lord of the Rings is rather unlike the hooks used.
In terms of the riders, we reach a critical distinction. In western media, war elephants almost always appear with great towers on their backs – often very elaborate towers, like those in Lord of the Rings or the film Alexander (2004). Alexander, at least, has it wrong. The howdah – the rigid seat or tower on an elephant’s back – was not an Indian innovation and doesn’t appear in India until the twelfth century (Trautmann supposes, based on the etymology of howdah (originally an Arabic word) that this may have been carried back into India by Islamic armies). Instead, the tower was a Hellenistic idea (called a thorakion in Greek) which post-dates Alexander (but probably not by much).
This is relevant because while the bowmen riding atop elephants in the armies of Alexander’s successors seem to be lower-status military professionals, in India this is where the military aristocrat fights. […] this is a big distinction, so keep it in mind. It also illustrates neatly how the elephant itself was the primary weapon – the society that used these animals the most never really got around to creating a protected firing position on their back because that just wasn’t very important.
In all cases, elephants needed to be supported by infantry (something Alexander (2004) gets right!). Cavalry typically cannot effectively support elephants for reasons we’ll get to in a moment. The standard deployment position for war elephants was directly in front of an infantry force (heavy or light) – when heavy infantry was used, the gap between the two was generally larger, so that the elephants didn’t foul the infantry’s formation.
Infantry support covers for some of the main weaknesses elephants face, keeping the elephants from being isolated and taken down one by one. It also places an effective exploitation force which can take advantage of the havoc the elephants wreak on opposing forces. The “elephants advancing alone and unsupported” formation from Peter Jackson’s Return of the King, by contrast, allows the elephants to be isolated and annihilated (as they subsequently are in the film).
Bret Devereaux, “Collections: War Elephants, Part I: Battle Pachyderms”, A Collection of Unmitigated Pedantry, 2019-07-26.
March 17, 2023
QotD: The unique nature of Roman Egypt
I’ve mentioned quite a few times here that Roman Egypt is a perplexing part of understanding the Roman Empire because on the one hand it provides a lot of really valuable evidence for daily life concerns (mortality, nuptiality, military pay, customs and tax systems, etc.) but on the other hand it is always very difficult to know to what degree that information can be generalized because Roman Egypt is such an atypical Roman province. So this week we’re going to look in quite general terms at what makes Egypt such an unusual place in the Roman world. As we’ll see, some of the ways in which Egypt is unusual are Roman creations, but many of them stretch back before the Roman period in Egypt or indeed before the Roman period anywhere.
[…]
… what makes Roman Egypt’s uniqueness so important is one of the unique things about it: Roman Egypt preserves a much larger slice of our evidence than any other place in the ancient world. This comes down to climate (as do most things); Egypt is a climatically extreme place. On the one hand, most of the country is desert and here I mean hard desert, with absolutely minuscule amounts of precipitation. On the other hand, the Nile River creates a fertile, at points almost lush, band cutting through the country running to the coast. The change between these two environments is extremely stark; it is, I have been told (I haven’t yet been to Egypt), entirely possible in many places to stand with one foot in the “green” and another foot in the hard desert.
That in turn matters because while Egypt was hardly the only arid region Rome controlled, it was the only place you were likely to find very many large settlements and lots of people living in such close proximity to such extremely arid environments (other large North African settlements tend to be coastal). And that in turn matters for preservation. When objects are deposited – lost, thrown away, carefully placed in a sanctuary, whatever – they begin to degrade. Organic objects (textile, leather, paper, wood) rot as microorganisms use them as food, while metal objects oxidize (that is, rust). Aridity arrests (at least somewhat) both processes. Consequently things survive from the Roman period (or indeed, from even more ancient periods) in Egypt that simply wouldn’t survive almost anywhere else.
By far the most important of those things is paper, particularly papyrus paper. The Romans actually had a number of writing solutions. For short-term documents, they used wax writing tablets, an ancient sort of “dry erase board” which could be scraped smooth to write a new text when needed; these only survive under very unusual circumstances. For more permanent documents, wood and papyrus were used. Wood tablets, such as those famously recovered from the Roman fort at Vindolanda, are fairly simple: thin wooden slats are smoothed so they can be written on with ink and a pen, creating a rigid but workable and cheap writing surface; when we find these tablets they have tended to be short documents like letters or temporary lists, presumably in part because storing lots of wood tablets would be hard so more serious records would go on the easier to store papyrus paper.
Papyrus paper was a lighter, more portable, more storeable option. Papyrus paper is produced by taking the pith of the papyrus plant, which is sticky, and placing it in two layers at right angles to each other, before compressing (or crushing) those layers together to produce a single sheet, which is then dried, creating a sheet of paper (albeit a very fibery sort of paper). Papyrus paper originated in Egypt and the papyrus plant is native to Egypt, but by the Roman period we generally suppose papyrus paper to have been used widely over much of the Roman Empire; it is sometimes supposed that papyrus was cheaper and more commonly used in Egypt than elsewhere, but it is hard to be sure.
Now within the typical European and Mediterranean humidity, papyrus doesn’t last forever (unlike the parchment produced in the Middle Ages, which was far more expensive but also lasts much longer); papyrus paper will degrade over anything from a few decades to a couple hundred years – the more humidity, the faster the decay. Of course wood tablets and wax tablets fare no better. What that means is that in most parts of the Roman Empire, very little casual writing survives; what does survive were the sorts of important official documents which might be inscribed on stone (along with the literary works that were worth painstakingly copying over and over again by hand through the Middle Ages). But letters, receipts, tax returns, census records, shopping lists, school assignments – these sorts of documents were all written on less durable materials which don’t survive except in a few exceptional sites like Vindolanda.
Or Egypt. Not individual places in Egypt; pretty much the whole province.
In the extremely dry conditions of the Egyptian desert, papyrus can survive (albeit typically in damaged scraps rather than complete scrolls) from antiquity to the present. Now the coverage of these surviving papyri is not even. The Roman period is far better represented in the surviving papyri than the Ptolemaic period (much less the preceding “late” period or the New Kingdom before that). It’s also not evenly distributed geographically; the Arsinoite nome (what is today el-Fayyum, an oasis basin to the west of the Nile) and the Oxyrhynchus nome (roughly in the middle of Egypt, on the Nile) are both substantially overrepresented, while the Nile Delta itself has fewer (but by no means zero) finds. Consequently, we need to be worried not only about the degree to which Egypt might be representative of the larger Roman world, but also the degree to which these two nomes (a nome is an administrative district within Egypt, we’ll talk about them more in a bit) are representative of Egypt. That’s complicated in turn by the fact that the Arsinoite nome is not a normal nome; extensive cultivation there only really begins under Ptolemaic rule, which raises questions about how typical it was. It also means we lack a really good trove of papyri from a nome in Lower Egypt proper (the northern part of the country, covering the delta of the Nile) which, because of its different terrain, we might imagine was in some ways different.
Nevertheless, it is difficult to overstate the value of the papyri we do recover from Egypt. Documents containing census and tax information can give us important clues about the structure of ancient households (revealing, for instance, a lot of complex composite households). Tax receipts (particularly for customs taxes) can illuminate a lot about how Roman customs taxes (portoria) were assessed and conducted. Military pay stubs from Roman Egypt also provide the foundation for our understanding of how Roman soldiers were paid, recording for instance, pay deductions for rations, clothes and gear. We also occasionally recover fragments of literary works that we know existed but which otherwise did not survive to the present. And there is so much of this material. Whereas new additions to the corpus of ancient literary texts are extremely infrequent (the last very long such text was the recovery of the Athenaion Politeia or Constitution of the Athenians, from a papyrus discovered in the Fayyum (of course), published in 1891), the quantity of unpublished papyri from Egypt remains vast and there is frankly a real shortage of trained Egyptologists who can work through and publish this material (to the point that the vast troves of unpublished material have created deeply unfortunate opportunities for theft and fraud).
And so that is the first way in which Egypt is unusual: we know a lot more about daily life in Roman Egypt, especially when it comes to affairs below the upper-tier of society. Recovered papyrological evidence makes petty government officials, regular soldiers, small farming households, affluent “middle class” families and so on much more visible to us. But of course that immediately raises debates over how typical those people we can see are, because we’d like to be able to generalize information we learn about small farmers or petty government officials more broadly around the empire, to use that information to “fill in” regions where the evidence just does not survive. But of course the natural rejoinder is to point out the ways in which Egypt may be unusual beyond merely the survival of evidence (to include the possibility that cheaper papyrus in Egypt may have meant that more things were committed to paper here than elsewhere).
Consequently the debate about how strange a place Roman Egypt was is also a fairly important and active area of scholarship. We can divide those arguments into two large categories: the ways in which Roman rule itself in Egypt was unusual and the ways in which Egypt was already a potentially unusual place in comparison to the rest of the Roman world.
Bret Devereaux, “Collections: Why Roman Egypt Was Such a Strange Province”, A Collection of Unmitigated Pedantry, 2022-12-02.
March 13, 2023
QotD: The components of an oath in pre-modern cultures
Which brings us to the question how does an oath work? In most of modern life, we have drained much of the meaning out of the few oaths that we still take, in part because we tend to be very secular and so don’t regularly consider the religious aspects of the oaths – even for people who are themselves religious. Consider it this way: when someone lies in court on a TV show, we think, “ooh, he’s going to get in trouble with the law for perjury”. We do not generally think, “Ah yes, this man’s soul will burn in hell for all eternity, for he has (literally!) damned himself.” But that is the theological implication of a broken oath!
So when thinking about oaths, we want to think about them the way people in the past did: as things that work – that is they do something. In particular, we should understand these oaths as effective – by which I mean that the oath itself actually does something more than just the words alone. They trigger some actual, functional supernatural mechanisms. In essence, we want to treat these oaths as real in order to understand them.
So what is an oath? To borrow Richard Janko’s (The Iliad: A Commentary (1992), in turn quoted by Sommerstein [in Horkos: The Oath in Greek Society (2007)]) formulation, “to take an oath is in effect to invoke powers greater than oneself to uphold the truth of a declaration, by putting a curse upon oneself if it is false”. Following Sommerstein, an oath has three key components:
First: A declaration, which may be either something about the present or past or a promise for the future.
Second: The specific powers greater than oneself who are invoked as witnesses and who will enforce the penalty if the oath is false. In Christian oaths, this is typically God, although it can also include saints. For the Greeks, Zeus Horkios (Zeus the Oath-Keeper) is the most common witness for oaths. This is almost never omitted, even when it is obvious.
Third: A curse, by the swearers, called down on themselves, should they be false. This third part is often omitted or left implied, where the cultural context makes it clear what the curse ought to be. Particularly, in Christian contexts, the curse is theologically obvious (damnation, delivered at judgment) and so is often omitted.
While some of these components (especially the last) may be implied in the form of an oath, all three are necessary for the oath to be effective – that is, for the oath to work.
A fantastic example of the basic formula comes from the Anglo-Saxon Chronicle (656 – that’s a section number, not a date), where the promise in question is the construction of a new monastery, which runs thusly (Anne Savage’s translation):
These are the witnesses that were there, who signed on Christ’s cross with their fingers and agreed with their tongues … “I, king Wulfhere, with these king’s eorls, war-leaders and thanes, witness of my gift, before archbishop Deusdedit, confirm with Christ’s cross” … they laid God’s curse, and the curse of all the saints and all God’s people on anyone who undid anything of what was done, so be it, say we all. Amen.” [Emphasis mine]
So we have the promise (building a monastery and respecting the donation of land to it), the specific power invoked as witness, both by name and through the connection to a specific object (the cross – I’ve omitted the oaths of all of Wulfhere’s subordinates, but each and every one of them assented “with Christ’s cross”, which they are touching) and then the curse to be laid on anyone who should break the oath.
Of the Medieval oaths I’ve seen, this one is somewhat odd in that the penalty is spelled out. That’s much more common in ancient oaths, where the range of possible penalties and curses was much wider. The dikasts’ oath (the oath sworn by Athenian jurors), as reconstructed by Max Fränkel, also provides an example of the whole formula from the ancient world:
I will vote according to the laws and the votes of the Demos of the Athenians and the Council of the Five Hundred … I swear these things by Zeus, Apollo and Demeter, and may I have many good things if I swear well, but destruction for me and my family if I forswear.
Again, each of the three working components is clear: the promise being made (to judge fairly – I have shortened this part, it goes on a bit), the enforcing entity (Zeus, Apollo and Demeter) and the penalty for forswearing (in this case, a curse of destruction). The penalty here is appropriately ruinous, given that the jurors themselves have the power to ruin others (they might be judging cases involving very serious crimes, after all).
Bret Devereaux, “Collections: Oaths! How do they Work?”, A Collection of Unmitigated Pedantry, 2019-06-28.
March 9, 2023
QotD: Iron ore mining before the Industrial Revolution
Finding ore in the pre-modern period was generally a matter of visual prospecting, looking for ore outcrops or looking for bits of ore in stream-beds where the stream could then be followed back to the primary mineral vein. It’s also clear that superstition and divination often played a role; as late as 1556, Georgius Agricola feels the need to include dowsing in his description of ore prospecting techniques, though he has the good sense to reject it.
As with many ancient technologies, there is a triumph of practice over understanding in all of this; the workers have mastered the how but not the why. Lacking an understanding of geology, for instance, meant that pre-modern miners, if the ore vein hit a fault line (which might displace the vein, making it impossible to follow directly), had to resort to sinking shafts and exploratory mining in an effort to “find” it again. In many cases ancient miners seem to have simply abandoned the works when the vein had moved only a short distance because they couldn’t manage to find it again. Likewise, there was a common belief (e.g. Plin. 34.49) that ore deposits, if just left alone for a period of years (often thirty) would replenish themselves, a belief that continues to appear in works on mining as late as the 18th century (and lest anyone be confused, they clearly believe this about underground deposits; they don’t mean bog iron). And so like many pre-modern industries, this was often a matter of knowing how without knowing why.
Once the ore was located, mining tended to follow the ore, assuming whatever shape the ore-formation was in. For ore deposits in veins, that typically means digging shafts and galleries (or trenches, if the deposit was shallow) that follow the often irregular, curving patterns of the veins themselves. For “bedded” ore (where the ore isn’t in a vein, but instead an entire layer, typically created by erosion and sedimentation), this might mean “bell pitting” where a shaft was dug down to the ore layer, which was then extracted out in a cylinder until the roof became unstable, at which point the works were back-filled or collapsed and the process begun again nearby.
All of this digging had to be done by hand, of course. Iron-age mining tools (picks, chisels, hammers) fairly strongly resemble their modern counterparts and work the same way (interestingly, in contrast to things like bronze-age picks which were bronze sheaths around a wooden core, instead of a metal pick on a wooden haft).
For rock that was too tough for simple muscle-power and iron tools to remove, the typical expedient was “fire-setting”, which remained a standard technique for removing tough rocks until the introduction of explosives in the modern period. Fire-setting involves constructing a fuel-pile (typically wood) up against the exposed rock and then letting it burn (typically overnight); the heat splinters, cracks and softens the rock. The problem of course is that the fire is going to consume all of the oxygen and let out a ton of smoke, preventing work close to an active fire (or even in the mine at all while it was happening). Note that this is all about the cracking and splintering effect, along with chemical changes from roasting, not melting the rock – by the time the air-quality had improved to the point where the fire-set rock could be worked, it would be quite cool. Ancient sources regularly recommend dousing these fires with vinegar, not water, and there seems to be some evidence that this would, in fact, render the rock easier to extract afterwards.
By the beginning of the iron age in Europe (which varies by place, but tends to start between c. 1000 and c. 600 BC), the level of mining sophistication that we see in preserved mines is actually quite considerable. While Bronze Age mines tend to stay above the water-table, iron-age mines often run much deeper, which raises all sorts of exciting engineering problems in ventilation and drainage. Deep mines could be drained using simple bucket-lines, but we also see more sophisticated methods of drainage, from the Roman use of screw-pumps and water-wheels to Chinese use of chain-pumps from at least the Song Dynasty. Ventilation was also crucial to prevent the air becoming foul; ventilation shafts were often dug, with the use of either cloth fans or lit fires at the exits to force circulation. So mining could get very sophisticated when there was a reason to delve deep. Water might also be used to aid in mining, by leading water over a deposit and into a sluice box where the minerals were then separated out. This seems to have been done mostly for mining gold and tin.
Bret Devereaux, “Iron, How Did They Make It? Part I, Mining”, A Collection of Unmitigated Pedantry, 2020-09-18.
March 5, 2023
QotD: The role of the “big” landowners in pre-modern farming societies
What generally defines our large landholders is their greater access to capital. Now we don’t want to think of capital in the sort of money-denominated, fungible sense of modern finance, but in a very concrete sense: land, infrastructure, animals, and equipment. As we’ll see, it isn’t just that the big men hold more of this capital, but that they hold fundamentally different sorts of capital and often use it very differently.
Of course this begins with land. The thing to keep in mind is that prior to the modern period […] the vast majority of economic activity was the production of the land. That meant that land was both the primary form of holding wealth but also the main income-producing asset. Consequently, larger land holdings are the assets that enable the accumulation of all of the other kinds of capital we’re discussing. By having more land – typically much more land – than is required to feed a single household, these larger farmers can […] produce for markets and trade, enabling them to afford to acquire labor, animals, equipment and so on. Our subsistence farmers of the last post, focused on producing for survival, would be hard-pressed to acquire much further in the way of substantial capital.
The next most important category is generally animals, particularly a plow team […] while our small subsistence farmers may keep chickens or pigs on some small part of the pasture they have access to, they probably do not have a complete plow-team for their own farm […]. Oxen and horses are hideously expensive, both to acquire but also to feed and for a family barely surviving one year to the next, they simply cannot afford them. They also do not have herds of animals (because their small farms absolutely cannot support acres of pasturage) and they probably have limited access to herdsmen generally (that is, transhumant pastoralists moving around the countryside) because those fellows will tend to want to interact with the community leaders who are, as noted above, the large landholders. All of which is to say that while the small farmers may keep a few animals, they do not have access to significantly large numbers of animals (or humans), which matters.
The first impact of having a plow-team is fairly obvious: a plow drawn by a couple of oxen is more effective than a plow pushed by a single human. That means that a plow-team lets the same amount of farming labor sow a larger area of land […]. It also allows for a larger, deeper plow, which in turn plows at a greater depth, which can improve yields […]. You can easily see why, for a landholder with a large farm, having a plow-team is so useful: whereas the subsistence farmer struggles by having too much labor (and too many mouths to feed) and too little land, the big landholder has a lot of land they are trying to get farmed with as little labor as possible. And of course, more to the point, the large landholder has the wealth and acreage necessary to buy and then pasture the animals in the plow-team.
The second major impact is manure. Remember that our farmers live before the time of artificial fertilizer. Crops, especially bulk cereal crops, wear out the nutrients in the soil quite rapidly after repeated harvests, which leaves the farmer two options. The first, standard option, is that the farmer can fallow the field (which also has the advantage of disrupting certain pest life-cycles); depending on the farming method, fallowing may mean planting specific plants to renew the soil’s nutrients when those plants are uprooted and left to return to the soil in the field or it may mean simply turning the field over to wild plants with a similar effect. The second option is using fertilizer, which in this case means manure. Quite a lot of it. Aggressive manuring, particularly on rich soils which have good access to moisture (because cropping also dries out the soil; fallowing can restore that moisture) allows the field to be fallowed less frequently and thus farmed more intensively. In some cases it allowed rich farmland to be continuously cropped, with fairly dramatic increases in returns-to-acreage as a result. And by increasing the nutrients in the soil, it also produces higher yields in a given season.
Now the humans in a farming household aren’t going to generate enough manure on their own to make a meaningful contribution to soil fertility. But the larger landholders generally have two advantages in this sense. First, because their landholdings are large, they can afford to turn over marginal farming ground to pasture for horses, cattle, sheep and so on; these animals not only generate animal products (or prestige, in the case of horses), they also eat the grass and generate manure which can be used on the main farm. The second way to get manure is cities; unlike farming households, cities do produce sufficient quantities of human waste for manuring fields. And where small subsistence farmers are unlikely to be able to buy that supply, large landholders are likely to be politically well-connected enough and wealthy enough to arrange for human waste to be used on their lands, especially for market oriented farms close to cities. And if you just stopped and said, “wait – these guys were paying for human waste?” … yes, yes they sometimes did (and not just for farming! Check out how saltpeter was made, or what a fuller did!).
Finally, there’s the question of infrastructure: tools, machines and storage. The large landholder is the one likely to be able to afford to build things like granaries, mills and so on. Now there is, I want to note, a lot of variation from place to place about exactly how this sort of infrastructure is handled. It might be privately owned, it might be owned by the village, but frequently, the “village mill” was actually owned by the large landholder whose big manor overlooked the village (who may also be the local political authority). And while we’re looking at grain, other agricultural products which don’t store as well or as easily might need to be aggregated for transport to market and sale, a process where the large landholder’s storage facilities, political standing and market contacts are likely to make him the ideal middleman. I don’t want to get too in the weeds (pardon the pun) on all the different kinds of infrastructure (mills for grains, presses for olives, casks for wine) except to note that in many cases the large landholder is the one likely to be able to afford these investments and that smaller farmers growing the same crops nearby might well want to use them.
Bret Devereaux, “Collections: Bread, How Did They Make It? Part II: Big Farms”, A Collection of Unmitigated Pedantry, 2020-07-31.
March 1, 2023
QotD: What do we mean by “the humanities”?
First, just to define my terms, what are the humanities? Broadly, they are the disciplines that study human society (that is, that are concerned with humanity): language study, literature, philosophy, history, art history, archaeology, anthropology, and so on. It is necessarily a bit of a fuzzy set. But what I think defines the humanities more than subject matter is method; the humanities study things which (we argue) cannot be subjected to the rigors of the scientific method or strictly mathematical approaches. You cannot perform a controlled trial in beauty, mathematical certainty in history is almost always impossible, and there is no way to know how much stress a society can bear except to see it fail. Some things cannot be reduced to numbers, at least not by the powers of the technology-aided human mind.
By way of example, that methodological difference is why there’s a division between political science and history, despite the two disciplines historically being concerned with many of the same subjects and the same questions (to the point that Thucydides is sometimes produced as the founder of both): they use different methods. History is a humanities discipline through and through, whereas political science attempts to hybridize humanities and STEM (Science, Technology, Engineering and Mathematics) approaches; that’s not to say historians never use statistical approaches (I do, actually, quite a lot) but that there are very real differences in methodology. As you might imagine, that difference leads to some competition and conflict between the disciplines as to whose methodology best answers those key questions or equips students to think about them. Given that I have a doctorate in history and self-identify as a historian, you will have no trouble guessing which side of this I come down on, although that might be a bit self-interested on my part.
So if the STEM fields are, at some level, fundamentally about numbers, the humanities are fundamentally about language. The universe may be made of numbers, but the human mind and human societies are constructed out of language. Unlike computers, we do not think in numbers, but in words, and consequently the study of humans as thinking creatures is mostly about those words (yes, yes, I see you there, economics and psychology; there are edge cases, of course). Our laws are written in words because our thoughts form in our heads as words; we naturally reason with words and we even feel with words. Humans are linguistic creatures in a mathematical universe; consequently, while the study of the universe is mediated through math, the study of humans and human minds is fundamentally linguistic in nature.
Thus, the humanities.
Bret Devereaux, “Collections: The Practical Case on Why We Need the Humanities”, A Collection of Unmitigated Pedantry, 2020-07-03.
February 25, 2023
QotD: Feudalism versus “Manorialism”
… the economic system in much of medieval Europe is better understood under this term, manorialism, rather than “feudalism”. Feudalism, as a term, has been generally going out of style among medievalists for a long time, but it is especially inapt here. In a lot of popular discourse (and high school classrooms), feudalism gets used as a catch-all to mean both the political relationships between aristocrats and other aristocrats, and the economic relationships between peasants and aristocrats, but these were very different relationships. Peasants did not have fiefs, and they did not enter into vassalage agreements (the feodum of feudalism). Thus in practice my impression is that the experts in medieval European economics and politics tend to eschew “feudalism” as an unhelpful term, preferring “manorialism” to describe the economic system (including the political subordination of the peasantry) and “vassalage” to describe the system of aristocratic political relationships.
Bret Devereaux, “Collections: Bread, How Did They Make It? Part IV: Markets, Merchants and the Tax Man”, A Collection of Unmitigated Pedantry, 2020-08-21.
February 21, 2023
QotD: The Gods as (literal) machines
So we have the basic rules in place: in order to achieve a concrete, earthly result, we need to offer something to the appropriate god and in exchange, they’ll use their divine power to see that things turn out our way.
But what do we offer? What do we ask for? How do we ask? This isn’t write-your-own-religion, after all: you can’t just offer whatever you feel like (or more correctly, you can, and the god’s silent disapproval will be the response). After all, if your plan is to get me to do something, and you show up at my door with awful, nasty Cherry Pepsi, you are bound to be disappointed; if you show up with some delicious Dr. Pepper, you may have better luck. That’s how people work – why would the gods be any different?
So different gods prefer different things, delivered in different ways, with different words, at different times. There are so many possible details and permutations – but this is important, it matters and you must get it right! So how can you be sure that you are offering the right thing, at the right time, in the right way, to the right god, for the right result?
And that’s where our knowledge from last week comes in. You aren’t left trying to figure this out on your own from scratch, because you can draw on the long history and memory of your community and thus perform a ritual which worked in the past, for the same sort of thing.
The thing to understand about that kind of knowledge is that it’s a form of black box tech; the practitioner doesn’t know why it works, only that it works because – as we discussed – the ritual wasn’t derived from some abstract first-principles understanding of the gods, but by trial and error. Thinking about the ritual as a form of functional, but not understood, technology can help us understand the ancient attitude towards ritual.
Let’s say we discovered a functioning alien spaceship with faster-than-light propulsion, but no aliens and no manual. We don’t understand anything about how it works. What would we do? We might try to copy the ship, but remember: we don’t know what parts are functional and what parts are just cosmetic or what does what. So we’d have to copy the ship exactly, bolt for bolt, to be sure that it would work when we turned it on.
Ritual in ancient polytheistic religions is typically treated the same way: given an unknowable, but functional system, exactitude is prized over understanding. After all, understanding why the ritual works does not help it work any better – only performing it correctly. An error in performance might offend the god, or create confusion about what effect is desired, or for whom. But an error in understanding causes no problems, so long as the ritual was performed exactly anyway. Just as it doesn’t matter what you think is happening when you, say, turn on your TV – it turns on anyway – it doesn’t matter what you think is happening in the ritual. It happens anyway.
Bret Devereaux, “Collections: Practical Polytheism, Part II: Practice”, A Collection of Unmitigated Pedantry, 2019-11-01.