Quotulatiousness

May 11, 2023

QotD: Divination

Divination is often casually defined in English as “seeing into the future”, but the root of the word gives a sense of its true meaning: divinare shares the same root as the word “divine” (divinus, meaning “something of, pertaining or belonging to a god”); divination is more rightly the act of channeling the divine. If that gives a glimpse of the future, it is because the gods are thought to see that future more clearly.

But that distinction is crucial, because what you are actually doing in a ritual involving divination is not asking questions about the future, but asking questions of the gods. Divination is not an exercise in seeing, but in hearing – that is, it is a communication, a conversation, with the divine. […]

Many current religions – especially monotheistic ones – tend to view God or the gods as a fundamentally distant, even alien being, decidedly outside of creation. The common metaphor is one where God is like a painter or an architect who creates a painting or a building, but cannot be in or part of that creation; the painter can paint himself, but cannot himself be in the painting and the architect may walk in the building but she cannot be a wall. Indeed, one of the mysteries – in the theological sense […] – of the Christian faith is how exactly a transcendent God made Himself part of creation, because this ought otherwise be inconceivable.

Polytheistic gods do not work this way. They exist within the world, and are typically created with it (as an aside: this is one point where, to get a sense of the religion, one must break with the philosophers; Plato waxes philosophic about his eternal demiurge, an ultimate creator-god, but no one in Greece actually practiced any kind of religion to the demiurge. Fundamentally, the demiurge, like so much fine Greek writing about the gods, was a philosophical construct rather than a religious reality). As we’ll get to next week, this makes the line between humans and gods a lot more fuzzy in really interesting ways. But for now, I want to focus on this basic idea: that the gods exist within creation and consequently can exist within communities of humans.

(Terminology sidenote: we’ve actually approached this distinction before, when we talked about polytheistic gods being immanent, meaning that they were active in shaping creation in a direct, observable way. In contrast, monotheistic God is often portrayed as transcendent, meaning that He sits fundamentally outside of creation, even if He still shapes it. Now, I don’t want to drive down the rabbit hole of the theological implications of these terms for modern faith (though I should note that while transcendence and immanence are typically presented as being opposed qualities, some gods are both transcendent and immanent; the resolution of an apparent contradiction of this sort in a divine act or being like this is what we call a mystery in the religious sense – “this should be impossible, but it becomes possible because of divine action”). But I do want to note the broad contrast between gods that exist within creation and the more common modern conception of a God whose existence supersedes the universe we know.)

Thus, to the polytheistic practitioner, the gods don’t exist outside of creation, or even outside of the community, but as very powerful – and sometimes inscrutable – members of the community. The exact nature of that membership varies culture to culture (for instance, the Roman view of the gods tends towards temperamental but generally benevolent guardians and partners of the state, whereas the Mesopotamian gods seem to have been more the harsh rulers set above human society; that distinction is reflected in the religious structure: in Rome, the final deciding body on religious matters was the Senate, whereas Mesopotamian cities had established, professional priesthoods). But gods do a lot of the things other powerful members of the community do: they own land (and even enslaved persons) within the community, they have homes in the community (this is how temples are typically imagined, as literal homes-away-from-home for the gods, when they’re not chilling in their normal digs), they may take part in civic or political life in their own unique way. […] some of these gods are even more tightly bound to a specific place within the community – a river, stream, hill, field.

And, like any other full member of the community (however “full membership” is defined by a society), the gods expect to be consulted about important decisions.

Bret Devereaux, “Collections: Practical Polytheism, Part III: Polling the Gods”, A Collection of Unmitigated Pedantry, 2019-11-08.

May 7, 2023

QotD: The long-term instability of bison hunting on the Great Plains

Unlike in Mongolia, where there were large numbers of wild horses available for capture, it seems that most Native Americans on the Plains were reliant on trade or horse-raiding (that is, stealing horses from their neighbors) to maintain good horse stocks initially. In the southern plains (particularly areas under the Comanches and Kiowas), the warm year-round temperature and relatively infrequent snowfall allowed those tribes to eventually raise large herds of their own horses for hunting and as a trade good. While Mongolian horses know to dig in the snow to get the grass underneath, western horses generally do not do this, meaning that they have to be stall-fed in the winter. Consequently, in the northern plains, horses remained a valuable trade good and a frequent object of warfare. In both cases, horses were too valuable to be casually eaten; instead, Isenberg notes that guarding horses carefully against theft and raiding was one of the key and most time-demanding tasks of life for those tribes which had them.

So to be clear, the Great Plains Native Americans are not living off of their horses, they are using their horses to live off of the bison. The subsistence system isn’t horse-based, but bison-based.

At the same time, Isenberg (op. cit. 70ff) makes clear that this pure-hunting nomadism still existed on a narrow edge of subsistence. From his description, it is hard not to conclude that the margin of survival was quite a bit narrower than in the Eurasian Steppe subsistence system, and it is also clear that group size and population density were quite a bit lower. It’s also not clear that this system was fully sustainable in the long run; Pekka Hämäläinen argues in The Comanche Empire (2008) that Comanche bison hunting was potentially already unsustainable in the very long term by the 1830s. It worked well enough in wet years, but an extended drought (to which the Plains are subjected every so often) could cause a catastrophic decline in bison numbers, as seems to have happened in the 1840s and 1850s. A sequence of such events might have created a receding-wave phenomenon in bison numbers – recovering after each dry spell, but a little less each time. Isenberg (op. cit., 83ff) also hints at this, pointing out that once one factors in things like natural predators, illness and so on, estimates of Native American bison hunting look to come dangerously close to tipping over sustainability, although Isenberg does not offer an opinion as to whether they did tip over that line. Remember: complete reliance on bison hunting was new, not a centuries-tested form of subsistence – if there was an equilibrium to be reached, it had not yet been reached.

In any event, the arrival of commercial bison hunting along with increasing markets for bison goods drove the entire system into a tailspin much faster than the Plains population would have alone. Bison numbers begin to collapse in the 1860s, wrecking the entire system about a century and a half after it had started. I find myself wondering if, given a longer time frame to experiment and adapt the new horses to the Great Plains, Native American society on the plains would have increasingly resembled the pastoral societies of the Eurasian Steppe, perhaps even domesticating and herding bison (as is now sometimes done!) or other animals. In any event, the westward expansion of the United States did not leave time for that system to emerge.

Consequently, the Native Americans of the plains make a bad match for the Dothraki in a lot of ways. They don’t maintain population density of the necessary scale. Isenberg (op. cit., 59) presents a chart of this, to assess the impact of the 1780s smallpox epidemics, noting that even before the epidemic, most of the Plains Native American groups numbered in the single-digit thousands, with just a couple over 10,000 individuals. The largest, the Sioux at 20,000, were far fewer than what we see on the Eurasian Steppe and also fewer than the 40,000 warriors – and presumably the c. 120-150,000 individuals that implies – that Khal Drogo alone supposedly has [in Game of Thrones]. They haven’t had access to the horse for nearly as long, nor do they have access to a vast supply of them, nor do they live in a part of the world where there are simply large herds of wild horses available. They haven’t had long-term direct trade access to major settled cities and their market goods (which expresses itself particularly in relatively low access to metal products). It is also clear that the Dothraki Sea lacks large herds of animals for the Dothraki to hunt as the Native Americans could hunt bison; there are the rare large predators like the hrakkar, but that is it. Most importantly, the Plains Native American subsistence system was still sharply in flux and may not have been sustainable in the long term, whereas the Dothraki have been living as they do, apparently, for many centuries.

Bret Devereaux, “That Dothraki Horde, Part II: Subsistence on the Hoof”, A Collection of Unmitigated Pedantry, 2020-12-11.

May 3, 2023

QotD: The first system of war

Filed under: History, Quotations — Nicholas @ 01:00

The oldest way of war was what Native North Americans called – evocatively – the “cutting off” way of war (a phrase I am borrowing from W. Lee, “The Military Revolution of Native North America” in Empires and Indigines, ed. W. Lee (2011)), but which was common among non-state peoples everywhere in the world for the vast stretch of human history (and one may easily argue much of modern insurgency and terrorism is merely this same toolkit, updated with modern weapons). The goal of such warfare was not to subjugate a population but to drive them off, forcing them to vacate resource-rich land which could then be exploited by your group. To do this, you wanted to inflict maximum damage (casualties inflicted, animals rustled, goods stolen, people captured) at minimum risk, until the lopsided balance of pain you inflicted forced the enemy to simply move away from you to get out of your operational range.

The main tool of this form of warfare (detailed more extensively in A. Gat, War in Human Civilization (2006) and L. Keeley, War Before Civilization (1996)) was the raid. Rather than announcing your movements, a war party would attempt to advance into enemy territory in secret, hoping (in the best case) to catch an enemy village or camp unawares (typically by night) so that the population could be killed or captured (mostly killed; these are mostly non-specialized societies with limited ability to incorporate large numbers of subjugated captives) safely. Then you quickly get out of enemy territory before villages or camps allied to your target can retaliate. If you detected an incoming raid, you might rally up your allied villages or camps and ambush the ambusher in an equally lopsided engagement.

Only rarely in this did a battle result – typically when both the surprise of the raid and the surprise of the counter-raid ambush failed. At that point, with the chance for surprise utterly lost, both sides might line up and exchange missile fire (arrows, javelins) at fairly long range. Casualties in these battles were generally very low – instead the battle served both as a display of valor and a signal of resolve by both sides to continue the conflict. That isn’t to say these wars were bloodless – indeed the overall level of military mortality was much higher than in “pitched battle” cultures, but the killing was done almost entirely in the ambush and the raid.

We may call this the first system of war. It is the oldest, but as noted above, never entirely goes away. We tend to call this style “asymmetric” or “unconventional” war, but it is the most conventional war – it was the first convention, after all. It is also sometimes denigrated as primitive, but should not be judged so quickly – first system armies have managed to frustrate far stronger opponents when terrain and politics were favorable.

Bret Devereaux, “Collections: The Universal Warrior, Part IIa: The Many Faces of Battle”, A Collection of Unmitigated Pedantry, 2021-02-05.

April 29, 2023

QotD: The problem of war-elephants

The interest in war elephants, at least in the ancient Mediterranean, is caught in a bit of a conundrum. On the one hand, war elephants are undeniably cool, and so feature heavily in pop-culture (especially video games). In Total War games, elephants are shatteringly powerful units that demand specialized responses. In Paradox’s recent Imperator, elephant units are extremely powerful army components. Film gets in on the act too: Alexander (2004) presents Alexander’s final battle at Hydaspes (326) as a debacle, nearing defeat, at the hands of Porus’ elephants (the historical battle was a far more clear-cut victory, according to the sources). So elephants are awesome.

On the other hand, the Romans spend about 200 years (from c. 264 to 46 B.C.) mopping the floor with armies supported by war elephants – Carthaginian, Seleucid, even Roman ones during the civil wars (Thapsus, 46 B.C.). And before someone asks about Hannibal, remember that while the army Hannibal won with in Italy had almost no war elephants (nearly all of them having been lost in the Alps), the army he lost with at Zama had 80 of them. Romans looking back from the later Imperial period seemed to classify war elephants with scythed chariots and other failed Hellenistic “gimmick” weapons (e.g. Q. Curtius Rufus 9.2.19). Arrian (a Roman general writing in the second century A.D.) dismisses the entire branch as obsolete (Arr. Tact. 19.6) and leaves it out of his tactical manual entirely on those grounds.

This negative opinion in turn seeps into the scholarship on the matter. This is in no small part because the study of Indian history (where war elephants remained common) is so under-served in western academia compared to the study of the Greek and Roman world (where the Romans functionally ended the use of war elephants on the conclusion that they were useless). Trautmann (2015) notes the almost pathetic under-engagement of classical scholars with this fighting system. Scullard’s The Elephant in the Greek and Roman World (1974) remains the standard text in English on the topic some 45 years later, despite fairly huge changes in the study of the Achaemenids, Seleucids, and Carthaginians in that period.

All of which actually makes finding good information on war elephants quite difficult – the cheap sensational stuff often fills in the gaps left by a lack of scholarship. The handful of books on the topic vary significantly in terms of seriousness and reliability.

Bret Devereaux, “Collections: War Elephants, Part I: Battle Pachyderms”, A Collection of Unmitigated Pedantry, 2019-07-26.

April 25, 2023

QotD: What is military history?

Filed under: Books, History, Military, Quotations — Nicholas @ 01:00

The popular conception of military history – indeed, the conception sometimes shared even by other historians – is that it is fundamentally a field about charting the course of armies, describing “great battles” and praising the “strategic genius” of this or that “great general”. One of the more obvious examples of this assumption – and the contempt it brings – comes out of the popular CrashCourse YouTube series. When asked by their audience to cover military history related to their coverage of the American Civil War, the response was this video listing battles and reflecting on the pointlessness of the exercise, as if a list of battles was all that military history was (the same series would later say that military historians don’t talk about food, a truly baffling statement given the importance of logistics studies to the field; certainly in my own subfield, military historians tend to talk about food more than any other kind of historian except for dedicated food historians).

The term for works of history in this narrow mold – all battles, campaigns and generals – is “drums and trumpets” history, a term generally used derisively. The study of battles and campaigns emerged initially as a form of training for literate aristocrats preparing to be officers and generals; it is little surprise that they focused on aristocratic leadership as the primary cause for success or failure. Consequently, the old “drums and trumpets” histories also had a tendency to glory in war and to glorify commanders for their “genius”, although this was by no means universal, and works of history on conflict as far back as Thucydides and Herodotus (which is to say, as far back as there have been any) have reflected on the destructiveness and tragedy of war. But military history, like any field, matured over time; I should note that it is hardly the only field of history to have less respectable roots in its quite recent past. Nevertheless, as the field matured and moved beyond military aristocrats working to emulate older, more successful military aristocrats into a field of scholarly inquiry (still often motivated by the very real concern that officers and political leaders be prepared to lead in the event of conflict), it has become far more sophisticated and its gaze has broadened to include not merely non-aristocratic soldiers, but non-soldiers more generally.

Instead of the “great generals” orientation of “drums and trumpets”, the field has moved in the direction of three major analytical lenses, laid out quite ably by Jeremy Black in “Military Organisations and Military Change in Historical Perspective” (JMH, 1998). He sets out the three basic lenses as technological, social and organizational, which speak both to the questions being asked of the historical evidence and to the answers that are likely to be provided. I should note that these lenses are mostly (though not entirely) about academic military history; much of the amateur work that is done is still very much “drums and trumpets” (as is the occasional deeply frustrating book from some older historians we need not discuss here), although that is of course not to say that there isn’t good military history being written by amateurs or that all good military history narrowly follows these schools. This is a classification system, not a straitjacket, and I am giving it here because it is a useful way to present the complexity and sophistication of the field as it is, rather than how it is imagined by those who do not engage with it.

[…]

The technological approach is perhaps the least in fashion these days, but Geoffrey Parker’s The Military Revolution (2nd ed., 1996) provides an almost pure example of the lens. This approach tends to see changing technology – not merely military technologies, but often also civilian technologies – as the main motivator of military change (and also success or failure for states caught in conflict against a technological gradient). Consequently, historians with this focus are often asking questions about how technologies developed, why they developed in certain places, and what their impacts were. Another good example of the field, for instance, is the debate about the impact of rifled muskets in the American Civil War. While there has been a real drift away from seeing technologies themselves as decisive on their own (and thus a drift away from mostly “pure” technological military history) in recent decades, this sort of history is very often paired with the others, looking at the ways that social structures, organizational structures and technologies interact.

Perhaps the most popular lens for military historians these days is the social one, which used to go by the “new military history” (decades ago – it was the standard form even back in the 1990s) but by this point comprises probably the bulk of academic work on military history. In its narrow sense, the social perspective of military history seeks to understand the army (or navy or other service branch) as an extension of the society that created it. We have, you may note, done a bit of that here. Rather than understanding the army as a pure instrument of a general’s “genius” it imagines it as a socially embedded institution – which is fancy historian speech for an institution that, because it crops up out of a society, cannot help but share that society’s structures, values and assumptions.

The broader version of this lens often now goes under the moniker “war and society”. While the narrow version of social military history might be very focused on how the structure of a society influences the performance of the militaries that created it, the “war and society” lens turns that focus into a two-way street, looking at both how societies shape armies, but also how armies shape societies. This is both the lens where you will find inspection of the impacts of conflict on the civilian population (for instance, the study of trauma among survivors of conflict or genocide, something we got just a bit of with our brief touch on child soldiers) and also the way that military institutions shape civilian life at peace. This is the super-category for discussing, for instance, how conflict plays a role in state formation, or how highly militarized societies (like Rome, for instance) are reshaped by the fact of processing entire generations through their military. The “war and society” lens is almost infinitely broad (something occasionally complained about), but that broadness can be very useful to chart the ways that conflict’s impacts ripple out through a society.

Finally, the youngest of Black’s categories is organizational military history. If social military history (especially of the war and society kind) understands a military as deeply embedded in a broader society, organizational military history generally seeks to interrogate that military as a society to itself, with its own hierarchy, organizational structures and values. Often this is framed in terms of discussions of “organizational culture” (sometimes in the military context rendered as “strategic culture”) or “doctrine” as ways of getting at the patterns of thought and human interaction which typify and shape a given military. Isabel Hull’s Absolute Destruction: Military Culture and the Practices of War in Imperial Germany (2006) is a good example of this kind of military history.

Of course these three lenses are by no means mutually exclusive. These days they are very often used in conjunction with each other (last week’s recommendation, Parshall and Tully’s Shattered Sword (2007) is actually an excellent example of these three approaches being wielded together, as the argument finds technological explanations – at certain points, the options available to commanders in the battle were simply constrained by their available technology and equipment – and social explanations – certain cultural patterns particular to 1940s Japan made, for instance, communication of important information difficult – and organizational explanations – most notably flawed doctrine – to explain the battle).

Inside of these lenses, you will see historians using all of the tools and methodological frameworks common in history: you’ll see microhistories (for instance, someone tracing the experience of a single small unit through a larger conflict) or macrohistories (e.g. Azar Gat, War in Human Civilization (2006)), gender history (especially since what a society views as a “good soldier” is often deeply wrapped up in how it views gender), intellectual history, environmental history (Chase’s Firearms does a fair bit of this from the environment’s-effect-on-warfare direction), economic history (uh … almost everything I do?) and so on.

In short, these days the field of military history, as practiced by academic military historians, contains just as much sophistication in approach as history more broadly. And it benefits by also being adjacent to or in conversation with entire other fields: military historians will tend (depending on the period they work in) to interact a lot with anthropologists, archaeologists, and political scientists. We also tend to interact a lot with what we might term the “military science” literature of strategic thinking, leadership and policy-making, often in the form of critical observers (there is often, for instance, a bit of predictable tension between political scientists and historians, especially military historians, as the former want to make large data-driven claims that can serve as the basis of policy and the latter raise objections to those claims; this is, I think, on the whole a beneficial interaction for everyone involved, even if I have obviously picked my side of it).

Bret Devereaux, “Collections: Why Military History?”, A Collection of Unmitigated Pedantry, 2020-11-13.

April 21, 2023

QotD: In ancient Greek armies, soldiers were classified by the shields they carried

Filed under: Europe, Greece, History, Military, Quotations, Weapons — Nicholas @ 01:00

Plutarch reports this Spartan saying (trans. Bernadotte Perrin):

    When someone asked why they visited disgrace upon those among them who lost their shields, but did not do the same thing to those who lost their helmets or their breastplates, he said, “Because these they put on for their own sake, but the shield for the common good of the whole line.” (Plut. Mor. 220A)

This relates to how hoplites generally – not merely Spartans – fought in the phalanx. Plutarch, writing at a distance (long after hoplite warfare had stopped being a regular reality of Greek life), seems unaware that he is representing as distinctly Spartan something that was common to most Greek poleis (indeed, harsh punishments for tossing aside a shield in battle seem to have existed in every Greek polis).

When pulled into a tight formation, each hoplite‘s shield overlapped, protecting not only his own body, but also blocking off the potentially vulnerable right-hand side of the man to his left. A hoplite‘s armor protected only himself. That’s not to say it wasn’t important! Hoplites wore quite heavy armor for the time-period; the typical late-fifth/fourth century kit included a bronze helmet and the linothorax, a laminated, layered textile defense that was relatively inexpensive, but fairly heavy and quite robust. Wealthier hoplites might enhance this defense by substituting a bronze breastplate for the linothorax, or by adding bronze greaves (essentially a shin-and-lower-leg-guard); ankle and arm protections were rarer, but not unknown.

But the shield – without the shield one could not be a hoplite. The Greeks generally classified soldiers by the shield they carried, in fact. Light troops were called peltasts because they carried the pelta – a smaller, circular shield with a cutout that was much lighter and cheaper. Later medium-infantry were thureophoroi because they carried the thureos, a shield design copied from the Gauls. But the highest-status infantrymen were the hoplites, called such because the singular hoplon (ὅπλον) could be used to mean the aspis (while the plural hopla (ὅπλα) meant all of the hoplite‘s equipment, a complete set).

(Sidenote: this doesn’t stop in the Hellenistic period. In addition to the thureophoroi, who are a Hellenistic troop-type, we also have Macedonian soldiers classified as chalkaspides (“bronze-shields” – they seem to be the standard sarissa pike-infantry) or argyraspides (“silver-shields”, an elite guard derived from Alexander’s hypaspides, which again note – means “aspis-bearers”!), chrysaspides (“gold-shields”, a little-known elite unit in the Seleucid army c. 166) and the poorly understood leukaspides (“white-shields”) of the Antigonid army. All of the –aspides seem to have carried the Macedonian-style aspis with the extra satchel-style neck-strap, the ochane.)

(Second aside: it is also possible to overstate the degree to which the aspis was tied to the hoplite‘s formation. I remain convinced, given the shape and weight of the shield, that it was designed for the phalanx, but like many pieces of military equipment, the aspis was versatile. It was far from an ideal shield for solo combat, but it would serve fairly well, and we know it was used that way some of the time.)

Bret Devereaux, “New Acquisitions: Hoplite-Style Disease Control”, A Collection of Unmitigated Pedantry, 2020-03-17.

April 17, 2023

QotD: Tenant-farming (aka “sharecropping”) in pre-modern societies

Filed under: Economics, Europe, History, Quotations — Nicholas @ 01:00

Tenant labor of one form or another may be the single most common form of labor we see on big estates, and it could fill both the fixed labor component and the flexible one. Typically tenant labor (also sometimes called sharecropping) meant dividing up some portion of the estate into subsistence-style small farms (although with the labor perhaps more evenly distributed); while the largest share of the crop would go to the tenant or sharecropper, some of it was extracted by the landlord as rent. How much went each way could vary a lot, depending on which party was providing seed, labor, animals and so on, but 50/50 splits are not uncommon. As you might imagine, that extreme split (compared to the often standard c. 10-20% extraction frequent in taxation, or the 1/11th or 1/17th shares that appear frequently in medieval documents for serfs) compels the tenants to more completely utilize household labor (which is to say “farm more land”). At the same time, setting up a bunch of subsistence tenant farms like this creates a rural small-farmer labor pool for the periods of maximum demand, so any spare labor can be soaked up by the main estate (or by other tenant farmers on the same estate). That is, the high rents force the tenants to do more labor – labor that, conveniently, the landlord charging them those high rents is prepared to profit from by offering them the opportunity to also work on the estate proper.

In many cases, small freeholders might also work as tenants on a nearby large estate as well. There are many good reasons for a small free-holding peasant to want this sort of arrangement […]. So a given area of countryside might have free-holding subsistence farmers who do flexible sharecropping labor on the big estate from time to time alongside full-time tenants who worked land entirely or almost entirely owned by the large landholder. Now, as you might imagine, the situation of tenants – open to eviction and owing their landlords considerable rent – makes them very vulnerable to the landlord compared to neighboring freeholders.

That said, tenants in this sense were generally considered free persons who had the right to leave (even if, as a matter of survival, it was rarely an option, leaving them under the control of their landlords), in contrast to non-free laborers, an umbrella-category covering a wide range of individuals and statuses. I should be clear on one point: nearly every pre-modern complex agrarian society had some form of non-free labor, though the specifics of those systems varied significantly from place to place. Slavery of some form tends to be the rule, rather than the exception for these pre-modern agrarian societies. Two of the largest categories of note here are chattel slavery and debt bondage (also called “debt-peonage”), which in some cases could also shade into each other, but were often considered separate (many ancient societies abolished debt bondage but not chattel slavery for instance and debt-bondsmen often couldn’t be freely sold, unlike chattel slaves). Chattel slaves could be bought, sold and freely traded by their slave masters. In many societies these people were enslaved through warfare with captured soldiers and civilians alike reduced to bondage; the heritability of that status varies quite a lot from one society to the next, as does the likelihood of manumission (that is, becoming free).

Under debt bondage, people who fell into debt might sell (or be forced to sell) dependent family members (selling children is fairly common) or their own person to repay the debt; that bonded status might be permanent, or might hold only until the debt is repaid. In the latter case, as remains true in a depressing amount of the world, it was often trivially easy for powerful landlord/slave-holders to ensure that the debt was never paid, and in some systems this debt-peon status was heritable. Needless to say, the situation of both of these groups could be and often was quite terrible. The abolition of debt-bondage in Athens and Rome in the sixth and fourth centuries B.C. respectively is generally taken as a strong marker of the rising importance and political influence of the class of rural, poorer citizens, and you can readily see why this is a reform they would press for.

Bret Devereaux, “Collections: Bread, How Did They Make It? Part II: Big Farms”, A Collection of Unmitigated Pedantry, 2020-07-31.

April 14, 2023

QotD: The three great strategic sins

Filed under: History, Japan, Military, Pacific, Quotations, USA, WW2 — Nicholas @ 01:00

The first sin is the sin of not having a strategy in the first place, what we might call “emotive” strategy. As Clausewitz notes, policy (again, note above how what we’re calling strategy is closest to policy in Clausewitz’ sense) is “subject to reason alone” whereas the “primordial violence, hatred and enmity” is provided for in another part of the trinity (“will” or “passion”). To replace policy with passion is to invert their proper relationship and court destruction.

The second sin is the elevation of operational concerns over strategic ones, the usurpation of strategy with operations, which we have discussed before. This is, by the by, also an error in managing the relationship of the trinity, allowing the general’s role in managing friction to usurp the state’s role in managing politics.

Perhaps the greatest example of this is the Japanese attack on Pearl Harbor; an operational consideration (the destruction of the US Pacific Fleet) and even the tactics necessary to achieve that operational objective, were elevated above the strategic consideration of “should Japan, in the midst of an endless, probably unwinnable war against a third-rate power (the Republic of China) also go to war with a first-rate power (the United States) in order to free up oil-supplies for the first war”. Hara Tadaichi’s pithy summary is always worth quoting, “We won a great tactical victory at Pearl Harbor and thereby lost the war.”

How does this error happen? It tends to come from two main sources. First, it usually occurs most dramatically in military systems where the military leadership – which has been trained for operations and tactics, not strategy, which you will recall is the province of kings, ministers and presidents – usurps the leadership of the state. Second, it tends to occur when those military leaders – influenced by their operational training – take the operational conditions of their planning as assumed constants. “What do we do if we go to war with the United States” becomes “What do we do when we go to war with the United States” which elides out the strategic question “should we go to war with the United States?” entirely – and catastrophically, as for Imperial Japan, the answer to that unasked question of should we do this was clearly Oh my, NO.

(Bibliography note: It would hardly be fitting for me to declare these errors common and not provide examples. Two of the best case-studies I have read in this kind of strategic-thinking-failure-as-organizational-culture-failure are I. Hull, Absolute Destruction: Military Culture and the Practices of War in Imperial Germany (2005) and Parshall and Tully, Shattered Sword: The Untold Story of the Battle of Midway (2005). Also worth checking out, Daddis, “Chasing the Austerlitz Ideal: The Enduring Quest for Decisive Battle” in Armed Forces Journal (2006): 38-41. The same themes naturally come up in Daddis, Withdrawal: Reassessing America’s Final Years in Vietnam (2017)).

The third and final sin is easy to understand: a failure to update the strategy as conditions change. Quite often this happens in conjunction with the second sin, as once those operational concerns take over the place of strategy, it becomes difficult for leaders to consider new strategy as opposed to simply new operations in the pursuit of strategic goals which are often already lost beyond all retrieval. But this can happen without a subordination failure, due to sunk costs and the different incentives faced by the state and its leaders. The classic example is functionally every major power in the First World War: by 1915 or 1916, it ought to have been obvious that no gains made as a result of the war could possibly be worth its continuance. Yet it was continued, both because having lost so much it seemed wrong to give up without “victory” and also because, for the politicians who had initially supported the war, to admit it was a useless waste was political suicide.

Bret Devereaux, “Collections: The Battle of Helm’s Deep, Part VIII: The Mind of Saruman”, A Collection of Unmitigated Pedantry, 2020-06-19.

April 10, 2023

QotD: Interaction between “big” farmers and subsistence farmers in pre-modern societies

Filed under: Economics, Food, History, Quotations — Nicholas @ 01:00

What our little farmers generally have […] is labor – they have excess household labor because the household is generally “too large” for its farm. Now keep in mind, they’re not looking to maximize the usage of that labor – farming work is hard and one wants to do as little of it as possible. But a family that is too large for the land (a frequent occurrence) is going to be looking at ways to either get more out of their farmland or out of their labor, or both, especially because they otherwise exist on a razor’s edge of subsistence.

And then just over the way, you have the large manor estate, or the Roman villa, or the lands owned by a monastery (because yes, large landholders were sometimes organizations; in medieval Europe, monasteries filled this function in some places) or even just a very rich, successful peasant household. Something of that sort. They have the capital (plow-teams, manure, storage, processing) to more intensively farm the little land our small farmers have, but also, where the small farmer has more labor than land, the large landholder has more land than labor.

The other basic reality that is going to shape our large farmers is their different goals. By and large our small farmers were subsistence farmers – they were trying to farm enough to survive. Subsistence and a little bit more. But most large landholders are looking to use the surplus from their large holdings to support some other activity – typically the lifestyle of wealthy elites, which in turn requires supporting many non-farmers as domestic servants, retainers (including military retainers), merchants and craftsmen (who provide the status-signalling luxuries). They may even need the surplus to support political activities (warfare, electioneering, royal patronage, and so on). Consequently, our large landholders want a lot of surplus, which can be readily converted into other things.

The space for a transactional relationship is pretty obvious, though as we will see, the power imbalances here are extreme, so this relationship tends to be quite exploitative in most cases. Let’s start with the labor component. But the fact that our large landholders are looking mainly to produce a large surplus (they are still not, as a rule, profit maximizing, by the by, because often social status and political ends are more important than raw economic profit for maintaining their position in society) means that instead of having a farm to support a family unit, they are seeking labor to support the farm, trying to tailor their labor to the minimum requirements of their holdings.

[…]

The tricky thing for the large landholder is that labor needs throughout the year are not constant. The window for the planting season is generally very narrow and fairly labor intensive: a lot needs to get done in a fairly short time. But harvest is even narrower and more labor intensive. In between those, there is still a fair lot of work to do, but it is not so urgent nor does it require so much labor.

You can readily imagine then the ideal labor arrangement would be to have a permanent labor supply that meets only the low-ebb labor demands of the off-seasons and then supplement that labor supply during the peak seasons (harvest and to a lesser extent planting) with just temporary labor for those seasons. Roman latifundia may have actually come close to realizing this theory; enslaved workers (put into bondage as part of Rome’s many wars of conquest) composed the villa’s primary year-round work force, but the owner (or more likely the villa’s overseer, the vilicus, who might himself be an enslaved person) could contract in sharecroppers or wage labor to cover the needs of the peak labor periods. Those temporary laborers are going to come from the surrounding rural population (again, households with too much labor and too little land who need more work to survive). Some Roman estates may have actually leased out land to tenant farmers for the purpose of creating that “flexible” local labor supply on marginal parts of the estate’s own grounds. Consequently, the large estates of the very wealthy required the many impoverished subsistence farmers in order to function.

Bret Devereaux, “Collections: Bread, How Did They Make It? Part II: Big Farms”, A Collection of Unmitigated Pedantry, 2020-07-31.

April 6, 2023

QotD: The general’s pre-battle speech to the army

The modern pre-battle general’s speech is quite old. We can actually be very specific: it originates in a specific work: Thucydides’ Histories [of the Peloponnesian War] (written c. 400 B.C.). Prior to this, looking at Homer or Herodotus, commanders give very brief remarks to their troops before a fight, but the fully developed form of the speech, often presented in pairs (one for each army) contrasting the two sides, is all Thucydides. It’s fairly clear that a few of Thucydides’ speeches seem to have gone on to define the standard form and ancient authors after Thucydides functionally mix and match their components (we’ll talk about them in a moment). This is not a particularly creative genre.

Now, there is tremendous debate as to if these speeches were ever delivered and if so, how they were delivered (see the bibliography note below; none of this is really original to me). For my part, while I think we need to be alive to the fact that what we see in our textual sources are dressed up literary compressions of the tradition of the pre-battle speech, I suspect that, particularly by the Roman period, yes, such speeches were a part of the standard practice of generalship. Onasander, writing about the duties of a general in the first century CE, tells us as much, writing, “For if a general is drawing up his men before battle, the encouragement of his words makes them despise the danger and covet the honour; and a trumpet-call resounding in the ears does not so effectively awaken the soul to the conflict of battle as a speech that urges to strenuous valour rouses the martial spirit to confront danger.” Onasander is a philosopher, not a military man, but his work became a standard handbook for military leaders in Antiquity; one assumes he is not entirely baseless.

And of course, we have the body of literature that records these speeches. They must be, in many cases, invented or polished versions; in many cases the author would have no way of knowing the real words actually said. And many of them are quite obviously too long and complex for the situations into which they are placed. And yet I think they probably do represent some of what was often said; in many cases there are good indications that they may reflect the general sentiments expressed at a given point. Crucially, pre-battle speeches, alone among the standard kinds of rhetoric, refuse to follow the standard formulas of Greek and Roman rhetoric. There is generally no exordium (meaning introduction; except if there is an apology for the lack of one, in the form of, “I have no need to tell you …”) or narratio (the narrated account), no clear divisio (the division of the argument, an outline in speech form) and so on. Greek and Roman oratory was, by the first century or so, quite well developed and relatively formulaic, even rigid, in structure. The temptation to adapt these speeches, when committing them to a written history, to the forms of every other kind of oratory must have been intense, and yet they remain clearly distinct. It is certainly not because the genre of the battle speech was more interesting in a literary sense than other forms of rhetoric, because oh my it wasn’t. The most logical explanation to me has always been that they continue to remain distinct because however artificial the versions of battle speeches we get in literature are, they are tethered to the “real thing” in fundamental ways.

Finally, the mere existence of the genre. As I’ve noted elsewhere, we want to keep in mind that Greek and Roman literature were produced in extremely militarized societies, especially during the Roman Republic. And unlike many modern societies, where military service is more common among poorer citizens, in these societies military service was the pride of the elite, meaning that the literate were more likely both to know what a battle actually looked like and to have their expectations shaped by war literature than the commons. And that second point is forceful; even if battle speeches were not standard before Thucydides, it is hard to see how generals in the centuries after him could resist giving them once they became a standard trope of “what good generals do.”

So did generals give speeches? Yes, probably. Among other reasons, we can be sure because our sources criticize generals who fail to give speeches. Did they give these speeches? No, probably not; Plutarch says as much (Mor. 803b) though I will caution that Plutarch is not always the best when it comes to the reality of the battlefield (unlike many other ancient authors, Plutarch was a life-long civilian in a decidedly demilitarized province – Achaea – who also often wrote at great chronological distance from his subjects; his sense of military matters is generally weak compared to Thucydides, Polybius or Caesar, for instance). Probably the actual speeches were a bit more roughly cut and more compartmentalized; a set of quick remarks that might be delivered to one unit after another as the general rode along the line before a battle (e.g. Thuc. 4.96.1). There are also all sorts of technical considerations: how do you give a speech to so many people, and so on (and before you all rush to the comments to give me an explanation of how you think it was done, please read the works cited below, I promise you someone has thought of it, noted every time it is mentioned or implied in the sources and tested its feasibility already; exhaustive does not begin to describe the scholarship on oratory and crowd size), which we’ll never have perfect answers for. But they did give them and they did seem to think they were important.

Why does that matter for us? Because those very same classical texts formed the foundation for officer training and culture in much of Europe until relatively recently. Learning to read Greek and Latin by marinating in these specific texts was a standard part of schooling and intellectual development for elite men in early modern and modern Europe (and the United States) through the Second World War. Napoleon famously advised his officers to study Caesar, Hannibal and Alexander the Great (along with Frederick II and the best general you’ve never heard of, Gustavus Adolphus of Sweden). Reading the classical accounts of these battles (or, in some cases, modern adaptations of them) was a standard part of elite schooling as well as officer training. Any student who did so was bound to run into these speeches – their formulas quite different from other forms of rhetoric but no less rigid (borrowing from a handful of exemplars in Thucydides) – and imbibe the view of generalship they contained. Consequently, later European commanders tended for quite some time to replicate these tropes.

(Bibliography notes: There is a ton written on ancient battle speeches, nearly all of it in journal articles that are difficult to acquire for the general public, and much of it not in English. I think the best possible place to begin (in English) is J.E. Lendon, “Battle Description in the Ancient Historians Part II: Speeches, Results and Sea Battles” Greece & Rome 64.1 (2017). The other standard article on the topic is E. Anson “The General’s Pre-Battle Exhortation in Graeco-Roman Warfare” Greece & Rome 57.2 (2010). In terms of structure, note J.C. Iglesias Zoida, “The Battle Exhortation in Ancient Rhetoric” Rhetorica 25 (2007). For those with wider language skills, those articles can point you to the non-English scholarship. They can also serve as compendia of nearly all of the ancient battle speeches; there is little substitute for simply reading a bunch of them on your own.)

Bret Devereaux, “Collections: The Battle of Helm’s Deep, Part VII: Hanging by a Thread”, A Collection of Unmitigated Pedantry, 2020-06-12.

April 2, 2023

QotD: The (in-)effectiveness of chemical weapons against “Modern System” armies

Filed under: History, Military, Quotations, Weapons, WW1, WW2 — Nicholas @ 01:00

It is far easier to protect against chemical munitions than against an equivalent amount of high explosives, a point made by Matthew Meselson. Let’s unpack that, because I think folks generally have an unrealistic assessment of the power of a chemical weapon attack, imagining tiny amounts to be capable of producing mass casualties. Now, chemical munition agents have a wide range of lethalities and concentrations, but let’s use Sarin – one of the more lethal common agents – as an example. Sarin gas is an extremely lethal agent, evaporating rapidly into the air from a liquid form. It has an LD50 (the dose at which half of humans in contact will be killed) of less than 40mg per cubic meter (over 2 minutes of exposure). Dangerous stuff – as a nerve agent, one of the more lethal chemical munitions; for comparison it is something like 30 times more lethal than mustard gas.

But let’s put that in a real-world context. Five Japanese doomsday cultists used about five liters of sarin in a terror attack on a Tokyo Subway in 1995, deployed, in this case, in a contained area, packed full to the brim with people – a potential worst-case (from our point of view; “best” case from the attackers’ point of view) situation. But the attack killed only 12 people and injured about a thousand. Those are tragic, horrible numbers to be sure – but statistically insignificant in a battlefield situation. And no army could count on ever being given the kind of high-vulnerability environment like a subway station in an actual war.

In order to produce mass casualties in battlefield conditions, a chemical attacker has to deploy tons – and I mean that word literally – of this stuff. Chemical weapons barrages in the First World War involved thousands and tens of thousands of shells – and still didn’t produce a high fatality rate (though the deaths that did occur were terrible). But once you are talking about producing tens of thousands of tons of this stuff and distributing it to front-line combat units in the event of a war, you have introduced all sorts of other problems. One of the biggest is shelf-life: most nerve gases (which tend to have very high lethality) are not only very expensive to produce in quantity but also have very short shelf-lives. The other option is mustard gas – cheaper, with a long shelf-life, but required in vast quantities (during WWII, when just about every power stockpiled the stuff, the stockpiles were typically in the many tens of thousands of tons range, to give a sense of how much it was thought would be required – and then think about delivering those munitions).
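A back-of-envelope sketch shows why “tons” is the operative word. The cloud height and area below are assumptions of mine, not figures from the text; even taking the sarin concentration quoted earlier at face value, merely filling one square kilometre of open air to that concentration takes hundreds of kilograms, before accounting for wind, evaporation, re-dosing or delivery losses:

```python
# Rough illustration (assumed area and cloud depth are mine): mass of agent
# needed just to reach the cited lethal concentration once over a section
# of front, ignoring all real-world dispersal losses.

LETHAL_CONC_MG_M3 = 40   # lethal sarin concentration cited above (mg/m^3)
AREA_M2 = 1_000_000      # assumed: one square kilometre of front
CLOUD_HEIGHT_M = 5       # assumed: effective depth of the agent cloud

volume_m3 = AREA_M2 * CLOUD_HEIGHT_M
mass_kg = volume_m3 * LETHAL_CONC_MG_M3 / 1e6  # convert mg to kg

print(f"{mass_kg:.0f} kg to dose 1 km^2 once under ideal conditions")
```

That idealized figure is a floor, not an estimate; wind and evaporation scatter most of the agent, which is how actual WWI barrages ran into the tens of thousands of shells.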

[…]

But that’s not the only problem – the other problem is doctrine. Remember that the modern system is all about fast movement. I don’t want to get too deep into maneuver-warfare doctrine (one of these days!) but in most of its modern forms (e.g. AirLand Battle, Deep Battle, etc) it aims to avoid the stalemate of static warfare by accelerating the tempo of the battle beyond the defender’s ability to cope with, eventually (it is hoped) leading the front to decompose as command and control breaks down.

And chemical weapons are just not great for this. Active use of chemical weapons – even by your own side – poses all sorts of issues to an army that is trying to move fast and break things. This problem actually emerged back in WWI: even if your chemical attack breaks the enemy front lines, the residue of the attack is now an obstruction for you. […] A modern system army, even if it is on the defensive operationally, is going to want to make a lot of tactical offensives (counterattacks, spoiling attacks). Turning the battle into a slow-moving mush of long-lasting chemical munitions (like mustard gas!) is counterproductive.

But that leaves the fast-dispersing nerve agents, like sarin. Which are very expensive, hard to store, hard to provision in quantity and – oh yes – still less effective than high explosives when facing another expensive, modern system army, which is likely to be very well protected against such munitions (for instance, most modern armored vehicles are designed to be functionally immune to chemical munitions assuming they are buttoned up).

This impression is borne out by the history of chemical weapons; for top-tier armies, just over a century of being a solution in search of a problem. The stalemate of WWI produced a frantic search for solutions – far from being stupidly complacent (as is often the pop-history version of WWI), many commanders were desperately searching for something, anything to break the bloody stalemate and restore mobility. We tend to remember the successful innovations – armor, infiltration tactics, airpower – because they shape subsequent warfare. But at the time, there were a host of efforts: highly planned bite-and-hold assaults, drawn out brutal et continu efforts, dirigibles, mining and sapping, ultra-massive artillery barrages (trying a wide variety of shell-types and weights). And, of course, gas. Gas sits in the second category: one more innovation which failed to break the trench stalemate. In the end, even in WWI, it wasn’t any more effective than an equivalent amount of high explosives (as the relative casualty figures attest). Tanks and infiltration tactics – that is to say, the modern system – succeeded where gas failed, in breaking the trench stalemate, with its superiority at the role demonstrated vividly in WWII.

Bret Devereaux, “Collections: Why Don’t We Use Chemical Weapons Anymore?”, A Collection of Unmitigated Pedantry, 2020-03-20.

March 29, 2023

QotD: Sacrifice

As a terminology note: we typically call a living thing killed and given to the gods a sacrificial victim, while objects are votive offerings. All of these terms have useful Latin roots: the word “victim” – which now means anyone who suffers something – originally meant only the animal used in a sacrifice as the Latin victima; the assistant in a sacrifice who handled the animal was the victimarius. Sacrifice comes from the Latin sacrificium, with the literal meaning of “the thing made sacred”, since the sacrificed thing becomes sacer (sacred) as it now belongs to a god, a concept we’ll link back to later. A votivus in Latin is an object promised as part of a vow, often deposited in a temple or sanctuary; such an item, once handed over, belonged to the god and was also sacer.

There is some concern for the place and directionality of the gods in question. Sacrifices for gods that live above are often burnt so that the smoke wafts up to where the gods are (you see this in Greek and Roman practice, as well in Mesopotamian religion, e.g. in Atrahasis, where the gods “gather like flies” about a sacrifice; it seems worth noting that in Temple Judaism, YHWH (generally thought to dwell “up”) gets burnt offerings too), while sacrifices to gods in the earth (often gods of death) often go down, through things like libations (a sacrifice of liquid poured out).

There is also concern for the right animals and the time of day. Most gods receive ritual during the day, but there are variations – Roman underworld and childbirth deities (oddly connected) seem to have received sacrifices by night. Different animals might be offered, in accordance with what the god preferred, the scale of the request, and the scale of the god. Big gods, like Jupiter, tend to demand prestige, high value animals (Jupiter’s normal sacrifice in Rome was a white ox). The color of the animal would also matter – in Roman practice, while the gods above typically received white colored victims, the gods below (the di inferi but also the di Manes) received darkly colored animals. That knowledge we talked about was important in knowing what to sacrifice and how.

Now, why do the gods want these things? That differs, religion to religion. In some polytheistic systems, it is made clear that the gods require sacrifice and might be diminished, or even perish, without it. That seems to have been true of Aztec religion, particularly sacrifices to Quetzalcoatl; it is also suggested for Mesopotamian religion in the Atrahasis, where the gods become hungry and diminished when they wipe out most of humanity and thus most of the sacrifices taking place. Unlike Mesopotamian gods, who can be killed, Greek and Roman gods are truly immortal – no more capable of dying than I am able to spontaneously become a potted plant – but the implication instead is that they enjoy sacrifices, possibly the taste or even simply the honor it brings them (e.g. Homeric Hymn to Demeter 310-315).

We’ll come back to this idea later, but I want to note it here: the thing being sacrificed becomes sacred. That means it doesn’t belong to people anymore, but to the god themselves. That can impose special rules for handling, depositing and storing, since the item in question doesn’t belong to you anymore – you have to be extra-special-careful with things that belong to a god. But I do want to note the basic idea here: gods can own property, including things and even land – the temple belongs not to the city but to the god, for instance. Interestingly, living things, including people, can also belong to a god, but that is a topic for a later post. We’re still working on the basics here.

Bret Devereaux, “Collections: Practical Polytheism, Part II: Practice”, A Collection of Unmitigated Pedantry, 2019-11-01.

March 25, 2023

QotD: Sparta’s fate

Filed under: Europe, Greece, History, Quotations — Nicholas @ 01:00

What becomes of Sparta after its hegemony shatters in 371, after Philip II humiliates it in 338 and after Antipater crushes it in 330? This is a part of Spartan history we don’t much discuss, but it provides a useful coda on the Sparta of the fifth and fourth century. Athens, after all, remained a major and important city in Greece through the Roman period – a center for commerce and culture. Corinth – though burned by the Romans – was rebuilt and remained a crucial and wealthy port under the Romans.

What became of Sparta?

In short, Sparta became a theme-park. A quaint tourist get-away where wealthy Greeks and Romans could come to look and stare at the quaint Spartans and their silly rituals. It developed a tourism industry and the markets even catered to the needs of the elite Roman tourists who went (Plutarch and Cicero both did so, Plut. Lyc. 18.1; Tusc. 5.77).

In terms of civic organization, after Cleomenes III’s last gasp effort to make Sparta relevant – an effort that nearly wiped out the entire remaining Spartiate class (Plut. Cleom. 28.5) – Sparta increasingly resembled any other Hellenistic Greek polis, albeit a relatively famous and also poor one. Its material and literary culture seem to converge with the rest of the Greeks, with the only distinctively Spartan elements of the society being essentially Potemkin rituals for boys put on for the tourists who seem to be keeping the economy running and keeping what is left of Sparta in the good graces of their Roman overlords.

Thus ended Sparta: not with a brave last stand. Not with mighty deeds of valor. Or any great cultural contribution at all. A tourist trap for rich and bored Romans.

Bret Devereaux, “Collections: This. Isn’t. Sparta. Part VII: Spartan Ends”, A Collection of Unmitigated Pedantry, 2019-09-27.

March 21, 2023

QotD: The elephant as a weapon of war

The pop-culture image of elephants in battle is an awe-inspiring one: massive animals smashing forward through infantry, while men on elephant-back rain missiles down on the hapless enemy. And for once I can surprise you by saying: this isn’t an entirely inaccurate picture. But, as always, we’re also going to introduce some complications into this picture.

Elephants are – all on their own – dangerous animals. Elephants account for several hundred fatalities per year in India even today and even captured elephants are never quite as domesticated as, say, dogs or horses. Whereas a horse is mostly a conveyance in battle (although medieval European knights greatly valued the combativeness of certain breeds of destrier warhorses), a war elephant is a combatant in his own right. When enraged, elephants will gore with tusks and crush with feet, along with using their trunks as weapons to smash, throw or even rip opponents apart (by pinning with the feet). Against other elephants, they will generally lock tusks and attempt to topple their opponent over, with the winner of the contest fatally goring the loser in the exposed belly (Polybius actually describes this behavior, Plb. 5.84.3-4). Dumbo, it turns out, can do some serious damage if prompted.

Elephants were selected for combativeness, which typically meant that the ideal war elephant was an adult male, around 40 years of age (we’ll come back to that). Male elephants enter a state called “musth” once a year, where they show heightened aggressiveness and increased interest in mating. Trautmann (2015) notes a combination of diet, straight-up intoxication and training used by war elephant handlers to induce musth in war elephants about to go into battle, because that aggression was prized (given that the signs of musth are observable from the outside, it seems likely to me that these methods worked).

(Note: In the ancient Mediterranean, female elephants seem to have also been used, but it is unclear how often. Cassius Dio (Dio 10.6.48) seems to think some of Pyrrhus’s elephants were female, and my elephant plate shows a mother elephant with her calf, apparently on campaign. It is possible that the difficulty of getting large numbers of elephants outside of India caused the use of female elephants in battle; it’s also possible that our sources and artists – far less familiar with the animals than Indian sources – are themselves confused.)

Thus, whereas I have stressed before that horses are not battering rams, in some sense a good war elephant is. Indeed, sometimes in a very literal sense – as Trautmann notes, “tearing down fortifications” was one of the key functions of Indian war elephants, spelled out in contemporary (to the war elephants) military literature there. A mature Asian elephant male is around 2.75m tall, masses around 4 tons and is much more sturdily built than any horse. Against poorly prepared infantry, a charge of war elephants could simply shock them out of position a lot of the time – though we will deal with some of the psychological aspects there in a moment.

A word on size: film and video-game portrayals often oversize their elephants – sometimes, like the Mûmakil of Lord of the Rings, this is clearly a fantasy creature, but often that distinction isn’t made. As noted above, male Asian (Indian) elephants are around 2.75m (9ft) tall; modern African bush elephants are larger (c. 10-13ft) but were not used for war. The African elephant which was trained for war was probably either an extinct North African species or the African forest elephant (c. 8ft tall normally) – in either case, ancient sources are clear that African war elephants were smaller than Asian ones.

Thus realistic war elephants should be about 1.5 times the size of an infantryman at the shoulders (assuming an average male height in the premodern world of around 5′6″), but are often shown to be around twice as tall if not even larger. I think this leads into a somewhat unrealistic assumption of how the creatures might function in battle, for people not familiar with how large actual elephants really are.

The elephant as firing platform is also a staple of the pop-culture depiction – often more strongly emphasized because it is easier to film. This is true to their use, but seems to have always been a secondary role from a tactical standpoint – the elephant itself was always more dangerous than anything someone riding it could carry.

There is a social status issue at play here which we’ll come back to […] The driver of the elephant, called a mahout, seems to have typically been a lower-status individual and is left out of a lot of heroic descriptions of elephant-riding (but not driving) aristocrats (much like Egyptian pharaohs tend to erase their chariot drivers when they recount their great victories). Of course, the mahout is the fellow who actually knows how to control the elephant, and was a highly skilled specialist. The elephant is controlled via iron hooks called ankusa. These are no joke – often with a sharp hook and a spear-like point – because elephants selected for combativeness are, unsurprisingly, hard to control. That said, they were not permanent ear-piercings or anything of the sort – the sort of setup in Lord of the Rings is rather unlike the hooks used.

In terms of the riders, we reach a critical distinction. In western media, war elephants almost always appear with great towers on their backs – often very elaborate towers, like those in Lord of the Rings or the film Alexander (2004). Alexander, at least, has it wrong. The howdah – the rigid seat or tower on an elephant’s back – was not an Indian innovation and doesn’t appear in India until the twelfth century (Trautmann supposes, based on the etymology of howdah (originally an Arabic word) that this may have been carried back into India by Islamic armies). Instead, the tower was a Hellenistic idea (called a thorakion in Greek) which post-dates Alexander (but probably not by much).

This is relevant because while the bowmen riding atop elephants in the armies of Alexander’s successors seem to be lower-status military professionals, in India this is where the military aristocrat fights. […] this is a big distinction, so keep it in mind. It also illustrates neatly how the elephant itself was the primary weapon – the society that used these animals the most never really got around to creating a protected firing position on their back because that just wasn’t very important.

In all cases, elephants needed to be supported by infantry (something Alexander (2004) gets right!). Cavalry typically cannot effectively support elephants for reasons we’ll get to in a moment. The standard deployment position for war elephants was directly in front of an infantry force (heavy or light) – when heavy infantry was used, the gap between the two was generally larger, so that the elephants didn’t foul the infantry’s formation.

Infantry support covers for some of the main weaknesses elephants face, keeping the elephants from being isolated and taken down one by one. It also provides an effective exploitation force which can take advantage of the havoc the elephants wreak on opposing forces. The “elephants advancing alone and unsupported” formation from Peter Jackson’s Return of the King, by contrast, allows the elephants to be isolated and annihilated (as they subsequently are in the film).

Bret Devereaux, “Collections: War Elephants, Part I: Battle Pachyderms”, A Collection of Unmitigated Pedantry, 2019-07-26.

March 17, 2023

QotD: The unique nature of Roman Egypt

Filed under: Africa, History, Quotations — Nicholas @ 01:00

I’ve mentioned quite a few times here that Roman Egypt is a perplexing part of understanding the Roman Empire because on the one hand it provides a lot of really valuable evidence for daily life concerns (mortality, nuptiality, military pay, customs and tax systems, etc.) but on the other hand it is always very difficult to know to what degree that information can be generalized because Roman Egypt is such an atypical Roman province. So this week we’re going to look in quite general terms at what makes Egypt such an unusual place in the Roman world. As we’ll see, some of the ways in which Egypt is unusual are Roman creations, but many of them stretch back before the Roman period in Egypt or indeed before the Roman period anywhere.

[…]

what makes Roman Egypt’s uniqueness so important is one of the unique things about it: Roman Egypt preserves a much larger slice of our evidence than any other place in the ancient world. This comes down to climate (as do most things); Egypt is a climatically extreme place. On the one hand, most of the country is desert and here I mean hard desert, with absolutely minuscule amounts of precipitation. On the other hand, the Nile River creates a fertile, at points almost lush, band cutting through the country running to the coast. The change between these two environments is extremely stark; it is, I have been told (I haven’t yet been to Egypt), entirely possible in many places to stand with one foot in the “green” and another foot in the hard desert.

That in turn matters because while Egypt was hardly the only arid region Rome controlled, it was the only place you were likely to find very many large settlements and lots of people living in such close proximity to such extremely arid environments (other large North African settlements tend to be coastal). And that in turn matters for preservation. When objects are deposited – lost, thrown away, carefully placed in a sanctuary, whatever – they begin to degrade. Organic objects (textile, leather, paper, wood) rot as microorganisms use them as food, while metal objects oxidize (that is, rust). Aridity arrests (at least somewhat) both processes. Consequently things survive from the Roman period (or indeed, from even more ancient periods) in Egypt that simply wouldn’t survive almost anywhere else.

By far the most important of those things is paper, particularly papyrus paper. The Romans actually had a number of writing solutions. For short-term documents, they used wax writing tablets, an ancient sort of “dry erase board” which could be scraped smooth to write a new text when needed; these only survive under very unusual circumstances. For more permanent documents, wood and papyrus were used. Wood tablets, such as those famously recovered from the Roman fort at Vindolanda, are fairly simple: thin wooden slats are smoothed so they can be written on with ink and a pen, creating a rigid but workable and cheap writing surface; when we find these tablets they have tended to be short documents like letters or temporary lists, presumably in part because storing lots of wood tablets would be hard, so more serious records would go on the easier-to-store papyrus paper.

Papyrus paper was a lighter, more portable, more storable option. Papyrus paper is produced by taking the pith of the papyrus plant, which is sticky, and placing it in two layers at right angles to each other, before compressing (or crushing) those layers together to produce a single sheet, which is then dried, creating a sheet of paper (albeit a very fibrous sort of paper). Papyrus paper originated in Egypt and the papyrus plant is native to Egypt, but by the Roman period we generally suppose papyrus paper to have been used widely over much of the Roman Empire; it is sometimes supposed that papyrus was cheaper and more commonly used in Egypt than elsewhere, but it is hard to be sure.

Now within the typical European and Mediterranean humidity, papyrus doesn’t last forever (unlike the parchment produced in the Middle Ages, which was far more expensive but also lasts much longer); papyrus paper will degrade over anything from a few decades to a couple hundred years – the more humidity, the faster the decay. Of course wood tablets and wax tablets fare no better. What that means is that in most parts of the Roman Empire, very little casual writing survives; what does survive were the sorts of important official documents which might be inscribed on stone (along with the literary works that were worth painstakingly copying over and over again by hand through the Middle Ages). But letters, receipts, tax returns, census records, shopping lists, school assignments – these sorts of documents were all written on less durable materials which don’t survive except in a few exceptional sites like Vindolanda.

Or Egypt. Not individual places in Egypt; pretty much the whole province.

In the extremely dry conditions of the Egyptian desert, papyrus can survive (albeit typically in damaged scraps rather than complete scrolls) from antiquity to the present. Now the coverage of these surviving papyri is not even. The Roman period is far better represented in the surviving papyri than the Ptolemaic period (much less the preceding “late” period or the New Kingdom before that). It’s also not evenly distributed geographically; the Arsinoite nome (what is today el-Fayyum, an oasis basin to the west of the Nile) and the Oxyrhynchus nome (roughly in the middle of Egypt, on the Nile) are both substantially overrepresented, while the Nile Delta itself has fewer (but by no means zero) finds. Consequently, we need to be worried not only about the degree to which Egypt might be representative of the larger Roman world, but also the degree to which these two nomes (a nome is an administrative district within Egypt, we’ll talk about them more in a bit) are representative of Egypt. That’s complicated in turn by the fact that the Arsinoite nome is not a normal nome; extensive cultivation there only really begins under Ptolemaic rule, which raises questions about how typical it was. It also means we lack a really good trove of papyri from a nome in Lower Egypt proper (the northern part of the country, covering the delta of the Nile) which, because of its different terrain, we might imagine was in some ways different.

Nevertheless, it is difficult to overstate the value of the papyri we do recover from Egypt. Documents containing census and tax information can give us important clues about the structure of ancient households (revealing, for instance, a lot of complex composite households). Tax receipts (particularly for customs taxes) can illuminate a lot about how Roman customs taxes (portoria) were assessed and conducted. Military pay stubs from Roman Egypt also provide the foundation for our understanding of how Roman soldiers were paid, recording, for instance, pay deductions for rations, clothes and gear. We also occasionally recover fragments of literary works that we know existed but which otherwise did not survive to the present. And there is so much of this material. Whereas new additions to the corpus of ancient literary texts are extremely infrequent (the last very long such text was the recovery of the Athenaion Politeia or Constitution of the Athenians, from a papyrus discovered in the Fayyum (of course), published in 1891), the quantity of unpublished papyri from Egypt remains vast and there is frankly a real shortage of trained Egyptologists who can work through and publish this material (to the point that the vast troves of unpublished material have created deeply unfortunate opportunities for theft and fraud).

And so that is the first way in which Egypt is unusual: we know a lot more about daily life in Roman Egypt, especially when it comes to affairs below the upper tier of society. Recovered papyrological evidence makes petty government officials, regular soldiers, small farming households, affluent “middle class” families and so on much more visible to us. But of course that immediately raises debates over how typical those people we can see are, because we’d like to be able to generalize information we learn about small farmers or petty government officials more broadly around the empire, to use that information to “fill in” regions where the evidence just does not survive. But of course the natural rejoinder is to point out the ways in which Egypt may be unusual beyond merely the survival of evidence (to include the possibility that cheaper papyrus in Egypt may have meant that more things were committed to paper here than elsewhere).

Consequently the debate about how strange a place Roman Egypt was is also a fairly important and active area of scholarship. We can divide those arguments into two large categories: the ways in which Roman rule itself in Egypt was unusual and the ways in which Egypt was already a potentially unusual place in comparison to the rest of the Roman world.

Bret Devereaux, “Collections: Why Roman Egypt Was Such a Strange Province”, A Collection of Unmitigated Pedantry, 2022-12-02.
