Quotulatiousness

November 22, 2023

“[T]he Tudors were indeed pretty awful, and that the writers who lived under this dynasty did serve as propagandists”

Filed under: Books, Britain, History — Nicholas @ 03:00

I quite like a lot of what Ed West covers at Wrong Side of History, but I’m not convinced by his summary of the character of King Richard III nor do I believe him guilty of murdering his nephews, the famed “Princes in the Tower”:

As Robert Tombs put it in The English and their History, no other country but England turned its national history into a popular drama before the age of cinema. This was largely thanks to William Shakespeare’s series of plays, eight histories charting the country’s dynastic conflict from 1399 to 1485, starting with the overthrow of the paranoid Richard II and climaxing with the Wars of the Roses.

This second part of the Henriad covered a 30-year period with an absurdly high body count – three kings died violently, seven royal princes were killed in battle, and five more executed or murdered; 31 peers or their heirs also fell in the field, and 20 others were put to death.

And in this epic national story, the role of the greatest villain is reserved for the last of the Plantagenets, Richard III, the hunchbacked child-killer whose defeat at Bosworth in 1485 ended the conflict (sort of).

Yet despite this, no monarch in English history retains such a fan base, a devoted band of followers who continue to proclaim his innocence, despite all the evidence to the contrary — the Ricardians.

One of the most furious responses I ever provoked as a writer was a piece I wrote for the Catholic Herald calling Richard III fans “medieval 9/11 truthers”. This led to a couple of blogposts and several emails, and even an angry phone call from a historian who said I had maligned the monarch.

This was in the lead-up to Richard III’s reburial in Leicester Cathedral, two and a half years after the former king’s skeleton was found in a car park in the city, in part thanks to the work of historian Philippa Langley. It was a huge event for Ricardians, many of whom managed to get seats at the service, broadcast on Channel 4.

Apparently Philippa Langley’s latest project — which is what I assume raised Ed’s ire again — is a new book and Channel 4 documentary in which she makes the case for the Princes’ survival after Richard’s reign, although (not having read the book) I’d be wary of accepting that they each attempted to re-take the throne in the guises of Lambert Simnel and Perkin Warbeck.

The Ricardian movement dates back to Sir George Buck’s revisionist The History of King Richard the Third, written in the early 17th century. Buck had been an envoy for Elizabeth I but did not publish his work in his lifetime, the book only seeing the light of day a few decades later.

Certainly, Richard had his fans. Jane Austen wrote in her The History of England that “The Character of this Prince has been in general very severely treated by Historians, but as he was a York, I am rather inclined to suppose him a very respectable Man”.

But the movement really began in the early 20th century with the Fellowship of the White Boar, named after the king’s emblem, now the Richard III Society.

It received a huge boost with Josephine Tey’s bestselling 1951 novel The Daughter of Time, in which a modern detective manages to prove Richard’s innocence. Paul Murray Kendall’s Richard the Third, published four years later, was probably the most influential non-fiction account to take a sympathetic view, although there are numerous others.

One reason for Richard’s bizarre popularity is that the Tudors were indeed pretty awful, and that the writers who lived under this dynasty did serve as propagandists.

Writers tend to serve the interests of the ruling class. In the years following Richard III’s death John Rous said of the previous king that “Richard spent two whole years in his mother’s womb and came out with a full set of teeth and hair streaming to his shoulders”. Rous called him “monster and tyrant, born under a hostile star and perishing like Antichrist”.

However, when Richard was alive the same John Rous was writing glowing stuff about him, reporting that “at Woodstock … Richard graciously eased the sore hearts of the inhabitants” by giving back common lands that had been taken by his brother, and that the king, when offered money, said he would rather have their hearts.

Certainly, there was propaganda. As well as the death of Clarence, William Shakespeare — under the patronage of Henry Tudor’s granddaughter — also implicated Richard in the killing of the Duke of Somerset at St. Albans, when Richard was a two-year-old. The playwright has him telling his father: “Heart, be wrathful still: Priests pray for enemies, but princes kill”. So it’s understandable why historians might not believe everything the Bard wrote about him.

I must admit to a bias here, as I wrote back in 2011:

In the interests of full disclosure, I should point out that I portrayed the Earl of Northumberland in the 1983 re-enactment of the coronation of Richard III (at the Cathedral Church of St. James in Toronto) on local TV, and I portrayed the Earl of Lincoln in the (non-televised) version on the actual anniversary date. You could say I’m biased in favour of the revisionist view of the character of good King Richard.

November 20, 2023

QotD: Flax and linen in the ancient and medieval world

Linen fabrics are produced from the fibers of the flax plant, Linum usitatissimum. This common flax plant is the domesticated version of the wild Linum bienne, domesticated in the northern part of the Fertile Crescent no later than 7,000 BC, although wild flax fibers were being used to produce textiles even earlier than that. Consequently the use of linen fibers goes way back. In fact, the oldest known textiles are made from flax, including finds of fibers at Nahal Hemar (7th millennium BC), Çayönü (c. 7000 BC), and Çatalhöyük (c. 6000 BC). Evidence for the cultivation of flax goes back even further, with linseed from Tell Aswad in Syria dating to the 8th millennium BC. Flax was being cultivated in Central Europe no later than the second half of the 7th millennium BC.

Flax is a productive little plant that produces two main products: flax seeds, which are used to produce linseed oil, and the bast of the flax plant, which is used to make linen. The latter is our focus here, so I am not going to go into linseed oil’s uses, but it should be noted that there is an alternative product. That said, my impression is that flax grown for its seeds is generally grown differently (spaced out, rather than packed together) and generally from different varieties. Still, flax cultivated for one purpose might produce some of the other product (Pliny notes this, NH 19.16-17).

Flax was a cultivated plant (which is to say, it was farmed); fortunately we have discussed quite a bit about farming in general already and so we can really focus in on the peculiarities of the flax plant itself; if you are interested in the activities and social status of farmers, well, we have a post for that. Flax farming by and large seems to have involved mostly the same sorts of farmers as cereal farming; I get no sense in the Greco-Roman agronomists, for instance, that this was done by different folks. Flax farming changed relatively little prior to mechanization; my impression reading on it is that flax was farmed and gathered much the same in 1900 BC as it was in 1900 AD. In terms of soil, flax requires quite a lot of moisture and so grows best in either deep loam or (more commonly used in the ancient world, it seems) alluvial soils; in both cases, it should be loose, unconsolidated “sandy” (that is, small particle-sized) soil. Alluvium is loose, often sandy soil that is the product of erosion (that is to say, it is soil composed of the bits that have been eroded off of larger rocks by the action of water); the most common place to see lots of alluvial soil is in the floodplains of rivers, where it is deposited as the river floods, forming what is called an alluvial plain.

Thus Pliny (NH 19.7ff) when listing the best flax-growing regions names places like Tarragona, Spain (with the seasonally flooding Francoli river) or the Po River Basin in Italy (with its large alluvial plain) and of course Egypt (with the regular flooding of the Nile). Pliny notes that linen from Sætabis in Spain was the best in Europe, followed by linens produced in the Po River Valley, though it seems clear that the rider here “made in Europe” in his text is meant to exclude Egypt, which would have otherwise dominated the list – Pliny openly admits that Egyptian flax, while making the least durable kind of linen (see below on harvesting times) was the most valuable (though he also treats Egyptian cotton which, by his time, was being cultivated in limited amounts in the Nile delta, as a form of flax, which obviously it isn’t). Flax is fairly resistant to short bursts of mild freezing temperatures, but prolonged freezes will kill the plants; it seems little accident that most flax production seems to have happened in fairly warm or at least temperate climes.

Flax is (as Pliny notes) a very fast growing plant – indeed, the fastest growing crop he knew of. Modern flax grown for fibers is generally ready for harvesting in roughly 100 days and this accords broadly with what the ancient agronomists suggest; Pliny says that flax is sown in spring and harvested in summer, while the other agronomists, likely reflecting practice further south suggest sowing in late fall and early winter and likewise harvesting relatively quickly. Flax that is going to be harvested for fibers tended to be planted in dense bunches or rows (Columella notes this method but does not endorse it, De Rust. 2.10.17). The reason for this is that when placed close together, the plants compete for sunlight by growing taller and thinner and with fewer flowers, which maximizes the amount of stalk per plant. By contrast, flax planted for linseed oil is more spaced out to maximize the number of flowers (and thus the amount of seed) per plant.

Once the flax was considered ready for harvest, it was pulled up out of the ground (including the root system) in handfuls rather than as individual plants […] and then hung to dry. Both Pliny and Columella (De Rust. 2.10.17) note that this pulling method tended to tear up the soil and regarded this as very damaging; they are on to something: since none of the flax plant is left to be plowed under, flax cultivation does seem to be fairly tough on the soil (for this reason Columella advises only growing flax in regions with ideal soil for it and where it brings a good profit). The exact time of harvest varies based on the use intended for the flax fibers; harvesting the flax later results in stronger, but rougher, fibers. Late-pulled flax is called “yellow” flax (for the same reason that blond hair is called “flaxen” – it’s yellow!) and was used for more work-a-day fabrics and ropes.

Bret Devereaux, “Collections: Clothing, How Did They Make It? Part I: High Fiber”, A Collection of Unmitigated Pedantry, 2021-03-05.

November 7, 2023

QotD: As we all know, medieval peasants wore ill-fitting clothes of grey and brown, exclusively

Filed under: Europe, History, Quotations — Nicholas @ 01:00

[T]he popular image of most ancient and medieval clothing is typically a rather drab affair, with the poor peasantry wearing mostly dirty, drab brown clothes (often ill-fitting ones), and so it might be imagined that regular folks had little need for involved textile finishing processes or dyeing; this is quite wrong. We have in essence already dispensed with the ill-fitting notion; the clothes of poor farmers, being often homespun and home-sewn, could be made quite exactly for their wearers (indeed, loose-fitting clothing, with lots of extra fabric, was often how one showed off wealth; lots of pleating, for instance, displayed that one could afford to waste expensive fabric on ornamentation). So it will not be a surprise that people in the past also liked to dress in pleasing colors, and that this preference extended even to relatively humble peasants. Moreover, the simplest dyes and bleaching methods were often well within reach even for relatively humble people.

What we see in ancient and medieval artwork is that even the lower classes of society wore clothes that were bleached or dyed, often in bright, bold colors (in as much as dyes were available). At Rome, this extended even to enslaved persons; Seneca comments that legislation mandating a “uniform” for enslaved persons at Rome was abandoned for fear that they might realize their numbers, the clear implication being that it was often impossible to tell an enslaved person apart from a free person on the street in normal conditions (Sen. Clem. 1.24.1). Consequently, fulling and dyeing were not merely processes for the extremely wealthy, but important steps in producing the textiles that would have been worn even by every-day people.

That said, fulling and dyeing (though not bleaching) were fundamentally different from the tasks that we’ve discussed so far because they generally could not be done in the home. Instead they often required space, special tools and equipment, particular (often quite bad-smelling) chemicals, and specialized skills in order to practice. Consequently, these tasks tended to be done by specialist workers for whom textile production was a trade, rather than merely a household task.

Bret Devereaux, “Collections: Clothing, How Did They Make It? Part IVa: Dyed in the Wool”, A Collection of Unmitigated Pedantry, 2021-04-02.

October 31, 2023

Why Vampires Hate Garlic – A Transylvanian Recipe from 1580

Filed under: Europe, Food, History — Nicholas @ 02:00

Tasting History with Max Miller
Published 19 Oct 2021

October 24, 2023

The English language, who did what to it and when

Filed under: Books, Britain, Europe, History — Nicholas @ 04:00

The latest book review from Mr. and Mrs. Psmith’s Bookshelf is John McWhorter’s Our Magnificent Bastard Tongue: The Untold History of English. I’m afraid I often find myself feeling cut adrift in discussions of the evolution of languages, as if I’m floating out of control in a maelstrom of what was, what is, and what might be, linguistically speaking. It’s an uncomfortable feeling and in retrospect explains why I did so poorly in formal grammar classes. When Jane Psmith gets around to discussing actual historical dates, I find my metaphorical feet again:

Shakespeare wrote about four hundred years ago, and even aside from the frequency of meaningless “do” in normal sentences, it’s clear that our language has changed since his day. But it hasn’t changed that much. Much less, for example, than English changed between Beowulf (probably written in the 890s AD)1 and The Canterbury Tales (completed by 1400), a gap of five hundred years. Just compare this:

    Hwæt. We Gardena in geardagum, þeodcyninga, þrym gefrunon, hu ða æþelingas ellen fremedon.2

to this:

    Whan that Aprille with his shoures soote,
    The droghte of March hath perced to the roote,
    And bathed every veyne in swich licóur
    Of which vertú engendred is the flour…3

That’s a huge change! That’s way more than some extraneous verbs, the loss of a second person singular pronoun (thou knowest what I’m talking about), or a shift in some words’ definition.4 That’s practically unrecognizable! Why did English change so much between Beowulf and Chaucer, and so little between Shakespeare and me?

There’s a two-part answer to this, and I’ll get to the real one in a minute (the changes between Old English and Middle English really are very interesting), but actually I must first confess that it was a trick question, because my dates are way off: even if people wrote lovely, fancy, highly-inflected Old English in the late 9th century, there’s no real reason to think that’s how they spoke.

On one level we know this must be true: after all, there were four dialects of Old English (Northumbrian, Mercian, Kentish, and West Saxon) and almost all our written sources are in West Saxon, even the ones from regions where that can’t have been the lingua franca.5 But it goes well beyond that: in societies where literacy is not widespread, written language tends to be highly conservative, formal, and ritualized. Take, for example, the pre-Reformation West, where all educated people used Latin for elite pursuits like philosophical disputatio or composing treatises on political theory but spoke French or Italian or German or English in their daily lives. It wasn’t quite Cicero’s Latin (though really whose is), but it was intentionally constructed so that it could have been intelligible to a Roman. Similarly, until quite recently Sanskrit was the written language of India even though it hadn’t been spoken for centuries. This happens in more modern and broadly literate societies as well: before the 1976 linguistic reforms, Greeks were deeply divided over “the language question” of whether to use the vernacular (dimotiki) or the elevated literary language (Katharevousa).6 And modern Arabic-speaking countries have an especially dramatic case of this: the written language is kept as close to the language of the Quran as possible, but the spoken language has diverged to the point that Moroccan Arabic and Saudi Arabic are mutually unintelligible.

Linguists call this phenomenon “diglossia”. It can seem counter-intuitive to English speakers, because we’ve had an unusually long tradition of literature in the vernacular, but even for those of us who use only “standard” English there are still notable differences between the way we speak and the way we write: McWhorter points out, for example, that if all you had was the corpus of Time magazine, you would never know people say “whole nother”. Obviously the situation is far more pronounced for people who speak non-standard dialects, whether AAVE or Hawaiian Pidgin (actually a creole) or Cajun English. (Even a hundred years ago, the English-speaking world had many more local dialects than it does today, so the experience of diglossia would have been far more widespread.)7

Anyway, McWhorter suggests that Old English seems to have changed very little because all we have is the writing, and the way you wrote wasn’t supposed to change. That’s why it’s so hard to date Beowulf from linguistic features: the written language of 600 is very similar to the written language of 1000! But despite all those centuries that the written language remained the perfectly normal Germanic language the Anglo-Saxons had brought to Britain, the spoken language was changing behind the scenes. As an increasing number of wealhs adopted it (because we now have the aDNA proof that the Anglo-Saxons didn’t displace the Celts), English gradually accumulated all sorts of Celtic-style “do” and “-ing” … which, obviously, no one would bother writing down, any more than the New York Times would publish an article written the way a TikTok rapper talks.

And then the Normans showed up.

The Norman Conquest had remarkably little impact on the grammar of modern English (though it brought a great deal of new vocabulary),8 but the replacement of the Anglo-Saxon ruling class more or less destroyed English literary culture. All of a sudden anything important enough to be written down in the first place was put into Latin or French, and by the time people began writing in English again two centuries later nothing remained of the traditional education in the conservative “high” Old English register. There was no one left who could teach you to write like the Beowulf poet; the only way to write English was “as she is spoke“, which was Chaucer’s Middle English.

So that’s one reason we don’t see the Celtic influence, with all its “do” and “-ing”, until nearly a thousand years after the Anglo-Saxons encountered the Celts. But there are a whole lot of other differences between Old English and Middle English, too, which are harder to lay at the Celtic languages’ door, and for those we have to look to another set of Germanic-speaking newcomers to the British Isles: the Vikings.

Grammatically, English is by far the simplest of the Germanic languages. It’s the only Indo-European language in Europe where nouns don’t get a gender — la table vs. le banc, for instance — and unlike many other languages it has very few endings. It’s most obvious with verbs: in English everyone except he/she/it (who gets an S) has a perfectly bare verb to deal with. None of this amō, amās, amat rigamarole: I, you, we, youse guys, and they all just “love”. (In the past, even he/she/it loses all distinction and we simply “loved”.) In many languages, too, you indicate a word’s role in the sentence by changing its form, which linguists call case. Modern English really only does this with our possessive (the word’s role) and our pronouns9 (“I see him” vs. “he sees me”); we generally indicate grammatical function with word order and helpful little words like “to” and “for”. But anyone learning Latin, or German, or Russian — probably the languages with case markings most commonly studied by English-speakers — has to contend with a handful of grammatical cases. And then, of course, there’s Hungarian.

As I keep saying, Old English was once a bog-standard Germanic language: it had grammatical gender, inflected verbs, and five cases (the familiar nominative, genitive, dative, and accusative, plus an instrumental case), each indicated by suffixes. Now it has none. Then, too, in many European languages, and all the other Germanic ones, when I do something that concerns only me — typically verbs concerning moving and feeling — I do it to myself. When I think about the past, I remember myself. If I err in German, I mistake myself. When I am ashamed in Frisian, I shame me, and if I go somewhere in Dutch I move myself. English preserves this in a few archaic constructions (I pride myself on the fact that my children can behave themselves in public, though I now run the risk of having perjured myself by saying so …), but Old English used it all the time, as in Beseah he hine to anum his manna (“Besaw he himself to one of his men”).

Another notable loss is in our direction words: in modern English we talk about “here”, “there”, or “where”, but not so long ago we could also discuss someone coming hither (“to here”) or ask whence (“from where”) they had gone. Every other Germanic language still has its full complement of directional adverbs. And most have a useful impersonal pronoun, like the German or Swedish man: Hier spricht man Deutsch.10 We could translate that as “one speaks German here” if we’re feeling pretentious, or perhaps employ the parental “we” (as in “we don’t put our feet in our mouths”), but English mostly forces this role on poor overused “you” (as in “you can’t be too careful”) because, again, we’ve lost our Old English man.

In many languages — including, again, all the other Germanic languages — you use the verb “be” to form the past perfect for words having to do with state or movement: “I had heard you speak”, but “I was come downstairs”. (This is the bane of many a beginning French student who has to memorize whether each verb uses avoir or être in the passé composée.) Once again, Old English did this, Middle English was dropping it, and modern English does it not at all. And there’s more, but I am taken pity on you …


    1. This is extremely contentious. The poem is known to us from only one manuscript, which was produced sometime near the turn of the tenth/eleventh century, and scholars disagree vehemently both about whether its composition was contemporary with the manuscript or much earlier and about whether it was passed down through oral tradition before being written. J.R.R. Tolkien (who also had a day job, in his case as a scholar of Old English — the Rohirrim are more or less the Anglo-Saxons) was a strong proponent of the 8th century view. Personally I don’t have a strong opinion; my rhetorical point here could be just as clearly made with an Old English document of unimpeachably eleventh century composition, but Beowulf is more fun.

    2. Old English orthography is not always obvious to a modern reader, so you can find a nice video of this being read aloud here. It’s a little more recognizable out loud, but not very.

    3. Here‘s the corresponding video for Middle English, which I think is actually harder to understand out loud.

    4. Of course words shift their meanings all the time. I’m presently reading Mansfield Park and giggling every time Fanny gets “knocked up” by a long walk.

    5. Curiously, modern English derives much more from Mercian and Northumbrian (collectively referred to as “Anglian”) than from the West Saxon dialect that was politically dominant in the Anglo-Saxon period. Meanwhile Scots (the Germanic language, not to be confused with the Celtic language of Scots Gaelic or whatever thing that kid wrote Wikipedia in) has its roots in the Northumbrian dialect.

    6. This is a more interesting and complicated case, because when the Greeks were beginning to emerge from under the Ottoman yoke it seemed obvious that they needed their own language (do you even nationalism, bro?) but spoken Greek was full of borrowings from Italian and Latin and Turkish, as well as degenerate vocabulary like ψάρι for “fish” when the perfectly good ιχθύς was right there. Many educated Greeks wanted to return to the ancient language but recognized that it was impractical, so Katharevousa (lit. “purifying”, from the same Greek root as “Cathar”) was invented as a compromise between dimotiki and “proper” Ancient Greek. Among other things, it was once envisioned as a political tool to entice the newly independent country’s Orthodox neighbors, who used Greek for their liturgies, to sign on to the Megali Idea. It didn’t work.

    The word ψάρι, by the way, derives from the Ancient Greek ὀψάριον, meaning any sort of little dish eaten with your bread but often containing fish; see Courtesans and Fishcakes: The Consuming Passions of Classical Athens for more. Most of the places modern Greek uses different vocabulary than the ancient tongue have equally fascinating etymologies. I think my favorite is άλογο, which replaced ίππος as the word for horse. See here for more.

    7. Diglossia is such a big deal in so many societies that I’ve always thought it would be fun to include in my favorite genre, fantasy fiction, but it would be hard to represent in English. Anyone who’s bounced off Dickon’s dialogue in The Secret Garden or Edgar’s West Country English in King Lear knows how difficult it is to understand most of the actually-existing nonstandard dialects; probably the only one that’s sufficiently familiar to enough readers would be AAVE — but that would produce a very specific impression, and probably not the one you want. So I think the best alternative would be to render the “low” dialect in Anglish, a constructed vocabulary that uses Germanic roots in place of English’s many borrowings from Latin and French. (“So I think the best other way would be to give over the ‘low’ street-talk in Anglish, a built wordhoard that uses Germanic roots in spot of English’s many borrowings …”) It turns out Poul Anderson did something similar, because of course he did.

    8. My favorite is food, because of course it is: our words for kinds of meat all derive from the French name for the animal (beef is boeuf, pork is porc, mutton is mouton), while our words for the animal itself have good Germanic roots: cow, pig, sheep. Why? Well, think about who was raising the animal and who was eating it …

    9. And even this is endangered; how many people do you know, besides me, who say “whom” aloud?

    10. Yes, this is where Heidegger gets das Man.

October 18, 2023

QotD: The role of violence in historical societies

Filed under: Europe, History, Quotations — Nicholas @ 01:00

Reading almost any social history of actual historical societies reveals complex webs of authority, some of which rely on violence and most of which don’t. Trying to reduce all forms of authority in a society to violence or the threat of violence is a “boy’s sociology”, unfit for serious adults.

This is true even in historical societies that glorified war! Taking, for instance, medieval mounted warrior-aristocrats (read: knights), we find a far more complex set of values and social bonds. Military excellence was a key value among the medieval knightly aristocracy, but so was Christian religious belief and observance, so were expectations about courtly conduct, and so were bonds between family and oath-bound aristocrats. In short there were many forms of authority beyond violence even among military aristocrats. Consequently individuals could be – and often were! – lionized for exceptional success in these other domains, often even when their military performance was at best lackluster.

Roman political speech, meanwhile, is full of words to express authority without violence. Most obvious is the word auctoritas, from which we get authority. J.E. Lendon (in Empire of Honor: The Art of Government in the Roman World (1997)) expresses the complex interaction whereby the past performance of virtus (“strength, worth, bravery, excellence, skill, capacity”, which might be military, but it might also be virtus demonstrated in civilian fields like speaking, writing, court-room excellence, etc.) produced honor, which in turn invested an individual with dignitas (“worth, merit”), a legitimate claim to certain forms of deferential behavior from others (including peers; two individuals both with dignitas might owe mutual deference to each other). Such an individual, when acting or especially speaking, was said to have gravitas (“weight”), an effort by the Romans to describe the feeling of emotional pressure that the dignitas of such a person demanded; a person speaking who had dignitas must be listened to seriously and respected, even if disagreed with in the end. An individual with tremendous honor might be described as having a super-charged dignitas, such that not merely polite but serious deference but active compliance was owed, such was the force of their considerable honor; this was called auctoritas. As documented by Carlin Barton (in Roman Honor: Fire in the Bones (2001)), the Romans felt these weights keenly and had a robust language describing the emotional impact such feelings had.

Note that there is no necessary violence here. These things cannot be enforced through violence, they are emotional responses that the Romans report having (because their culture has conditioned them to have them) in the presence of individuals with dignitas. And such dignitas might also not be connected to violence. Cicero clearly at points in his career commanded such deference and he was at best an indifferent soldier. Instead, it was his excellence in speaking and his clear service to the Republic that commanded such respect. Other individuals might command particular auctoritas because of their role as priests, their reputation for piety or wisdom, or their history of service to the community. And of course beyond that were bonds of family, religion, social group, and so on.

And these are, to be clear, two societies run by military aristocrats as described by those same military aristocrats. If anyone was likely to represent these societies as being entirely about the commission of violence, it would be these fellows. And they simply don’t.

Bret Devereaux, “Collections: The Universal Warrior, Part III: The Cult of the Badass”, A Collection of Unmitigated Pedantry, 2021-02-05.

October 16, 2023

QotD: Differentials of “information velocity” in a feudal society

Filed under: Britain, Government, History, Quotations — Tags: , , , , — Nicholas @ 01:00

[News of the wider world travels very slowly from the Royal court to the outskirts, but] information velocity within the sticks […] is very high. Nobody cares much who this “Richard II” cat was, or knows anything about ol’ Whatzisface – Henry Something-or-other – who might’ve replaced him, but everyone knows when the local knight of the shire dies, and everything about his successor, because that matters. So, too, is information velocity high at court – the lords who backed Henry Bolingbroke over Richard II did so because Richard’s incompetence had their asses in a sling. They were the ones who had to depose a king for incompetence, without admitting, even for a second, that

    a) competence is a criterion of legitimacy, and
    b) someone other than the king is qualified to judge a king’s competence.

Because admitting either, of course, opens the door to deposing the new guy on the same grounds, so unless you want civil war every time a king annoys one of his powerful magnates, you’d best find a way to square that circle …

… which they did, but not completely successfully, because within two generations they were back to deposing kings for incompetence. Turns out that’s a hard habit to break, especially when said kings are as incompetent as Henry VI always was, and Edward IV became. Only the fact that the eventual winner of the Wars of the Roses, Henry VII, was as competent as he was ruthless kept the whole cycle from repeating.

Severian, “Inertia and Incompetence”, Founding Questions, 2020-12-25.

October 15, 2023

History Summarized: The Castles of Wales

Overly Sarcastic Productions
Published 23 Jun 2023

Every castle tells a story, but when one small country has over 600 castles, the collective story they tell is something like “holy heck ouch, ow, oh god, why are there so many arrows, ouch, good lord ow” – And that’s Wales for you.
(more…)

October 11, 2023

QotD: A rational army would run away …

Filed under: Economics, History, Military, Quotations — Tags: , , , , — Nicholas @ 01:00

It is a thousand years ago somewhere in Europe; you are one of a line of ten thousand men with spears. Coming at you are another ten thousand men with spears, on horseback. You do a very fast cost-benefit calculation.

    If all of us plant our spears and hold them steady, with luck we can break their charge; some of us will die but most of us will live. If we run, horses run faster than we do. I should stand.

Oops.

I made a mistake; I said “we”. I don’t control the other men. If everybody else stands and I run, I will not be one of the ones who gets killed; with 10,000 men in the line, whether I run has very little effect on whether we stop their charge. If everybody else runs I had better run too, since otherwise I’m dead.

Everybody makes the same calculation. We all run, most of us die.

Welcome to the dark side of rationality.

This is one example of what economists call market failure — a situation where individual rationality does not lead to group rationality. Each person correctly calculates how it is in his interest to act and everyone is worse off as a result.

David D. Friedman, “Making Economics Fun: Part I”, David Friedman’s Substack, 2023-04-02.
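Friedman’s fast cost-benefit calculation is, in game-theory terms, a dominant-strategy argument, and it can be sketched in a few lines of Python. The death-risk numbers below are invented for illustration; only the structure of the incentives matters:

```python
# Sketch of the spearman's dilemma: whatever the other 9,999 men do,
# running yields a lower personal death risk, so everyone runs,
# and the group outcome is worse than if everyone had stood.
def death_risk(i_stand: bool, others_stand: bool) -> float:
    """My chance of dying, given my choice and the line's choice (illustrative numbers)."""
    if others_stand:
        return 0.10 if i_stand else 0.02   # line holds; a runner avoids the melee
    return 0.95 if i_stand else 0.60       # line breaks; standing alone is fatal

for others in (True, False):
    stand, run = death_risk(True, others), death_risk(False, others)
    print(f"others stand={others}: risk if I stand={stand:.2f}, if I run={run:.2f}")
    assert run < stand   # running dominates in both cases

# Everyone reasons the same way, so all run: a 0.60 risk each,
# versus 0.10 each had the line held. Individual rationality, group failure.
```

Because running is strictly better for each man regardless of what the others do, no appeal to individual self-interest can hold the line; that is exactly the market-failure structure Friedman describes.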

October 10, 2023

QotD: The production of charcoal in pre-industrial societies

Filed under: Europe, History, Quotations, Technology — Tags: , , , , — Nicholas @ 01:00

Wood, even when dried, contains quite a bit of water and volatile compounds; the former slows the rate of combustion and absorbs the energy, while the latter combusts incompletely, throwing off soot and smoke which contains carbon which would burn, if it had still been in the fire. All of that limits the burning temperature of wood; common woods often burn at most around 800-900°C, which isn’t enough for the tasks we are going to put it to.

Charcoaling solves this problem. By heating the wood in conditions where there isn’t enough air for it to actually ignite and burn, the water is all boiled off and the remaining solid material reduced to lumps of pure carbon, which will burn much hotter (in excess of 1,150°C, which is the target for a bloomery). Moreover, as more or less pure carbon lumps, the charcoal doesn’t have bunches of impurities which might foul our iron (like the sulfur common in mineral coal).

That said, this is a tricky process. The wood needs to be heated to around 300-350°C, well above its ignition temperature, but mostly kept from actually burning by lack of oxygen (if you let oxygen in, the wood is going to burn away all of its carbon to CO2, which will, among other things, cause you to miss your emissions target and also remove all of the carbon you need to actually have charcoal), which in practice means the pile needs some oxygen to maintain enough combustion to keep the heat correct, but not so much that it bursts into flame, nor so little that it is totally extinguished. The method for doing this changed little from the ancient world to the medieval period; the system described by Pliny (NH 16.8.23) and Theophrastus (HP 5.9.4) is the same method we see used in the early modern period.

First, the wood is cut and sawn into logs of fairly moderate size. Branches are removed; the logs need to be straight and smooth because they need to be packed very densely. They are then assembled into a conical pile, with a hollow center shaft; the pile is sometimes dug down into the ground, sometimes assembled at ground-level (as a fun quirk of the ancient evidence, the Latin-language sources generally think of above-ground charcoaling, whereas the Greek-language sources tend to assume a shallow pit is used). The wood pile is then covered in a clay structure referred to as a charcoal kiln; this is not a permanent structure, but is instead reconstructed for each charcoal burning. Finally, the hollow center is filled with brushwood or wood-chips to provide the fuel for the actual combustion; this fuel is lit and the shaft almost entirely sealed by an air-tight layer of earth.

The fuel ignites and begins consuming the oxygen from the interior of the kiln, both heating the wood and stealing the oxygen the wood needs to combust itself. The charcoal burner (often called a collier; before that term meant “coal miner”, it meant “charcoal burner”) manages the charcoal pile through the process by watching the smoke it emits and using its color to gauge the level of combustion (dark, sooty smoke would indicate that the process wasn’t yet done, while white smoke meant that the combustion was now happening “clean”, indicating that the carbonization was finished). The burner can then influence the process by either puncturing or sealing holes in the kiln to increase or decrease airflow, working to achieve a balance where there is just enough oxygen to keep the fuel burning, but not enough that the wood catches fire in earnest. A decent-sized kiln typically took about six to eight days to complete the carbonization process. Once it cooled, the kiln could be broken open and the pile of effectively pure carbon extracted.

Raw charcoal generally has to be made fairly close to the point of use, because the mass of carbon is so friable that it is difficult to transport it very far. Modern charcoal (like the cooking charcoal one may get for a grill) is pressed into briquettes using binders, originally using wet clay and later tar or pitch, to make compact, non-friable bricks. This kind of packing seems to have originated with coal-mining; I can find no evidence of its use in the ancient or medieval period with charcoal. As a result, smelting operations, which require truly prodigious amounts of charcoal, had to take place near supplies of wood; Sim and Ridge (op cit.) note that transport beyond 5-6km would degrade the charcoal so badly as to make it worthless; distances below 4km seem to have been more typical. Moving the pre-burned wood was also undesirable because so much material was lost in the charcoaling process, making moving green wood grossly inefficient. Consequently, for instance, we know that when Roman iron-working operations on Elba exhausted the wood supplies there, the iron ore was moved by ship to Populonia, on the coast of Italy to be smelted closer to the wood supply.

It is worth getting a sense of the overall efficiency of this process. Modern charcoaling is more efficient and can often get yields (that is, the mass of the charcoal when compared to the mass of the wood) as high as 40%, but ancient and medieval charcoaling was far less efficient. Sim and Ridge (op cit.) note ratios of initial-mass to the final charcoal ranging from 4:1 to 12:1 (or 25% to 8.3% efficiency), with 7:1 being a typical average (14%).

We can actually get a sense of the labor intensity of this job. Sim and Ridge (op cit.) note that a skilled wood-cutter can cut about a cord of wood in a day, in optimal conditions; a cord is a volume measure, but most woods mass around 4,000lbs (1,814kg) per cord. Constructing the kiln and moving the wood is also likely to take time and while more than one charcoal kiln can be running at once, the operator has to stay with them (and thus cannot be cutting any wood, though a larger operation with multiple assistants might). A single-man operation thus might need 8-10 days to charcoal a cord of wood, which would in turn produce something like 560lbs (253.96kg) of charcoal. A larger operation which has both dedicated wood-cutters and colliers running multiple kilns might be able to cut the man-days-per-cord down to something like 3 or 4, potentially doubling or tripling output (but requiring several more workers). In short, by and large our sources suggest this was a fairly labor-intensive job in order to produce sufficient amounts of charcoal for iron production of any scale.

Bret Devereaux, “Iron, How Did They Make It? Part II, Trees for Blooms”, A Collection of Unmitigated Pedantry, 2020-09-25.
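The yield and labor figures Devereaux quotes reduce to simple arithmetic; here is a quick back-of-the-envelope check using the numbers from the text (the 4,000 lb cord, the 4:1 to 12:1 mass ratios, and the 8-10 day solo operation):

```python
# Back-of-the-envelope check of the charcoal figures quoted above.
CORD_LBS = 4000          # approximate mass of a cord of wood (varies by species)

def charcoal_yield_lbs(wood_lbs: float, ratio: float) -> float:
    """Charcoal produced from wood at a given wood:charcoal mass ratio."""
    return wood_lbs / ratio

for ratio in (4, 7, 12):
    pct = 100 / ratio
    print(f"{ratio}:1 ratio -> {pct:.1f}% yield, "
          f"{charcoal_yield_lbs(CORD_LBS, ratio):.0f} lbs of charcoal per cord")

# At the typical 7:1 ratio, one cord yields roughly 570 lbs of charcoal,
# close to the ~560 lbs figure in the text. At 8-10 days per cord, a solo
# collier produces on the order of 60-70 lbs of charcoal per man-day.
per_day = charcoal_yield_lbs(CORD_LBS, 7) / 9   # mid-point of 8-10 days
print(f"roughly {per_day:.0f} lbs of charcoal per man-day for a solo operation")
```

The per-man-day figure makes the labor-intensity point concrete: feeding a smelting operation that consumes hundreds of pounds of charcoal per bloom means many colliers working continuously.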

September 26, 2023

QotD: Bad kings, mad kings, and bad, mad kings

Filed under: Britain, Government, History, Quotations — Tags: , , , — Nicholas @ 01:00

An incompetent king doesn’t invalidate the very notion of monarchy, as monarchs are men and men are fallible. A bad, mad king (or a minor child) would surely find himself sidelined, or suffering an unfortunate hunting accident, or in extreme cases deposed, but the process of replacing X with Y on the throne didn’t invalidate monarchy per se. Deposing a king for incompetence was a very dangerous maneuver for lots of reasons, but it could be, and was, recast as a kind of “mandate of heaven” thing. Though they of course didn’t say that, the notion wasn’t a particularly tough sell in the age of Avignon and Antipopes.

But notice the implied question here: Sold to whom?

That’s where the idea of “information velocity” comes in. Exaggerating only a little for effect: Most subjects of most monarchs in the Medieval period had only the vaguest idea of who the king even was. Yeah, sure, theoretically you know that your lord’s lord’s lord owes homage to some guy called “Edward II” – that whole “feudal pyramid” thing – but as to who he might be, who cares? You’ll never lay eyes on the guy, except maybe as a face on a coin … and when will you ever even see one of those? So when you finally hear, weeks or months or years after the fact, that “Richard II” has been deposed, well … vive le roi, I guess. Meet the new boss, same as the old boss, and meanwhile life goes on the same as it ever did.

Information velocity out to the sticks, in other words, was very low. By the time you find out what the great and the good are up to, it’s already over. And, of course, the reverse – so long as the taxes come in on time, on the rare occasions they’re levied (imagine that!), the king doesn’t much care what his vassal’s vassals’ vassals’ vassals are up to.

Severian, “Inertia and Incompetence”, Founding Questions, 2020-12-25.

September 19, 2023

The end of the Western Roman Empire

Filed under: Books, Europe, History, Military — Tags: , , , , , — Nicholas @ 04:00

Theophilus Chilton updates a review from several years ago with a few minor changes:

British archaeologist and historian Bryan Ward-Perkins's excellent 2005 work The Fall of Rome and the End of Civilization is a text that is designed to be a corrective for the type of bad academic trends that seem to entrench themselves in even the most innocuous of subjects. In this case, Ward-Perkins, along with fellow Oxfordian Peter Heather in his book The Fall of the Roman Empire: A New History of Rome and the Barbarians, sets out to fix a glaring error which has come to dominate much of the scholarly study of the 4th and 5th centuries in the western Empire for the past few decades.

This error is the view that the western Empire did not actually “fall”. Instead, so say many latter-day historical revisionists, what happened between the Gothic victory at Adrianople in 378 AD and the abdication of Romulus Augustulus, the last western Emperor, in 476 was more of an accident, an unintended consequence of a few boisterous but well-meaning neighbors getting a bit out of hand. Challenged is the very notion that the Germanic tribes (who cannot be termed “barbarians” any longer) actually “invaded”. Certainly, these immigrants did not cause harm to the western Empire — for the western empire wasn’t actually “destroyed”, but merely “transitioned” seamlessly into the era we term the Middle Ages. Ward-Perkins cites one American scholar who goes so far as to term the resettlement of Germans onto land that formerly belonged to Italians, Hispanians, Britons, and Gallo-Romans as taking place “in a natural, organic, and generally eirenic manner”. Certainly, it is gauche among many modern academics in this field to maintain that violent barbarian invasions forcibly ended high civilization and reduced the living standards in these regions to those found a thousand years before during the Iron Age.

Ward-Perkins points out the “whys” of this historical revision. Much of it simply has to do with political correctness (which he names as such) — the notion that we cannot really say that one culture is “higher” or “better” than others. Hence, when the one replaces the other, we cannot speculate as to how this replacement made things worse for all involved. In a similar vein, many continental scholars appear to be uncomfortable with the implications that the story of mass barbarian migrations and subsequent destruction and decivilization has in the ongoing discussion about the European Union’s own immigration policy — a discussion in which many of these same academics fall on the left side of the aisle.

Yet, all of this revisionism is bosh and bunkum, as Ward-Perkins so thoroughly points out. He does this by bringing to the table a perspective that many other academics in this field of study don’t have — that of a field archaeologist who is used to digging in the dirt, finding artifacts, drawing logical conclusions from the empirical evidence, and then using that evidence to decide “what really happened”, rather than relying on literary sources and speculative theories alone. Indeed, as the author shows, across the period of the Germanic invasions, the standard of living all across Roman western Europe declined, in many cases quite precipitously, from what it had been in the 3rd century. The quality and number of manufactured goods declined. Evidence for the large-scale integrative trade network that bound the western Empire together and with the rest of the Roman world disappears. In its place we find that trade goods travelled much smaller distances to their buyers — evidence for the breakdown of the commercial world of the West. Indeed, the economic activity of the West disappeared to the point that the volume of trade in western Europe would not be matched again until the 17th century. Evidence for the decline of food production suggests that populations fell all across the region. Ward-Perkins’ discussion of the decline in the size of cattle is enlightening evidence that the degeneration of the region was not merely economic. Economic prosperity, the access of the common citizen to a high standard of living with a wide range of creature comforts, disappeared during this period.

The author, however, is not negligent in pointing out the literary and documentary evidence for the horrors of the barbarian invasions that so many contemporary scholars seem to ignore. Indeed, the picture painted by the sum total of these evidences is one of harrowing destruction caused by aggressive, ruthless invaders seeking to help themselves to more than just a piece of the Roman pie. Despite the recent scholarly reconsiderations, the Germans, instead of settling on the land given to them by various Emperors and becoming good Romans, ended up taking more and more until there was nothing left to take. As Ward-Perkins puts it,

    Some of the recent literature on the Germanic settlements reads like an account of a tea party at the Roman vicarage. A shy newcomer to the village, who is a useful prospect for the cricket team, is invited in. There is a brief moment of awkwardness, while the host finds an empty chair and pours a fresh cup of tea; but the conversation, and village life, soon flow on. The accommodation that was reached between invaders and invaded in the fifth- and sixth- century West was very much more difficult, and more interesting, than this. The new arrival had not been invited, and he brought with him a large family; they ignored the bread and butter, and headed straight for the cake stand. Invader and invaded did eventually settle down together, and did adjust to each other’s ways — but the process of mutual accommodation was painful for the natives, was to take a very long time, and, as we shall see …left the vicarage in very poor shape. (pp. 82-83)

Professor Bret Devereaux discussed the long fifth century on his blog last year:

… it is not the case that the Roman Empire in the west was swept over by some destructive military tide. Instead the process here is one in which the parts of the western Roman Empire steadily fragment apart as central control weakens: the empire isn’t destroy[ed] from outside, but comes apart from within. While many of the key actors in that are the “barbarian” foederati generals and kings, many are Romans and indeed […] there were Romans on both sides of those fissures. Guy Halsall, in Barbarian Migrations and the Roman West (2007) makes this point, that the western Empire is taken apart by actors within the empire, who are largely committed to the empire, acting to enhance their own position within a system the end of which they could not imagine.

It is perhaps too much to suggest the Roman Empire merely drifted apart peacefully – there was quite a bit of violence here and actors in the old Roman “center” clearly recognized that something was coming apart and made violent efforts to put it back together (as Halsall notes, “The West did not drift hopelessly towards its inevitable fate. It went down kicking, gouging and screaming”) – but it tore apart from the inside rather than being violently overrun from the outside by wholly alien forces.

September 12, 2023

QotD: The largest input for producing iron in pre-industrial societies

Filed under: Europe, History, Quotations, Technology — Tags: , , , — Nicholas @ 01:00

… let’s start with the single largest input for our entire process, measured in either mass or volume – quite literally the largest input resource by an order of magnitude. That’s right, it’s … Trees

The reader may be pardoned for having gotten to this point expecting to begin with exciting furnaces, bellowing roaring flames and melting all and sundry. The thing is, all of that energy has to come from somewhere and that somewhere is, by and large, wood. Now it is absolutely true that there are other common fuels which were probably frequently experimented with and sometimes used, but don’t seem to have been used widely. Manure, used as cooking and heating fuel in many areas of the world where trees were scarce, doesn’t – to my understanding – reach sufficient temperatures for use in iron-working. Peat seems to have similar problems, although my understanding is it can be reduced to charcoal like wood; I haven’t seen any clear evidence this was often done, although one assumes it must have been tried.

Instead, the fuel I gather most people assume was used (to the point that it is what many video-game crafting systems set for) was coal. The problem with coal is that it has to go through a process of coking in order to create a pure mass of carbon (called “coke”) which is suitable for use. Without that conversion, the coal itself not only does not burn hot enough but is also apt to contain lots of sulfur, which will ruin the metal being made with it, as the iron will absorb the sulfur and produce an inferior alloy (sulfur makes the metal brittle, causing it to break rather than bend, and makes it harder to weld too). Indeed, the reason we know that the Romans in Britain experimented with using local coal this way is that analysis of iron produced at Wilderspool, Cheshire during the Roman period revealed the presence of sulfur in the metal which was likely from the coal on the site.

We have records of early experiments with methods of coking coal in Europe beginning in the late 1500s, but the first truly successful effort was that of Abraham Darby in 1709. Prior to that, it seems that the use of coal in iron-production in Europe was minimal (though coal might be used as a fuel for other things like cooking and home heating). In China, development was more rapid and there is evidence that iron-working was being done with coke as early as the eleventh century. But apart from that, by and large the fuel to create all of the heat we’re going to need is going to come from trees.

And, as we’ll see, really quite a lot of trees. Indeed, a staggering number of trees, if iron production is to be done on a major scale. The good news is we needn’t be too picky about what trees we use; ancient writers go on at length about the very specific best woods for ships, spears, shields, or pikes (fir, cornel, poplar or willow, and ash respectively, for the curious), but are far less picky about fuel-woods. Pinewood seems to have been a consistent preference, both Pliny (NH 33.30) and Theophrastus (HP 5.9.1-3) note it as the easiest to use and Buchwald (op cit.) notes its use in medieval Scandinavia as well. But we are also told that chestnut and fir work well, and we see a fair bit of birch in the archaeological record. So we have our trees, more or less.

Bret Devereaux, “Iron, How Did They Make It? Part II, Trees for Blooms”, A Collection of Unmitigated Pedantry, 2020-09-25.

September 8, 2023

QotD: Rents and taxes in pre-modern societies

In most ways […] we can treat rent and taxes together because their economic impacts are actually pretty similar: they force the farmer to farm more in order to supply some of his production to people who are not the farming household.

There are two major ways this can work: in kind and in coin and they have rather different implications. The oldest – and in pre-modern societies, by far the most common – form of rent/tax extraction is extraction in kind, where the farmer pays their rents and taxes with agricultural products directly. Since grain (threshed and winnowed) is a compact, relatively transportable commodity (that is, one sack of grain is as good as the next, in theory), it is ideal for these sorts of transactions, although perusing medieval manorial contacts shows a bewildering array of payments in all sorts of agricultural goods. In some cases, payment in kind might also come in the form of labor, typically called corvée labor, either on public works or even just farming on lands owned by the state.

The advantage of extraction in kind is that it is simple and the initial overhead is low. The state or large landholders can use the agricultural goods they bring in in rents and taxes to directly sustain specialists: soldiers, craftsmen, servants, and so on. Of course the problem is that this system makes the state (or the large landholder) responsible for moving, storing and cataloging all of those agricultural goods. We get some sense of how much of a burden this can be from the prominence of what seem to be records of these sorts of transactions in the surviving writing from the Bronze Age Near East (although I should note that many archaeologists working on the ancient Near Eastern economy are pushing for a somewhat larger, if not very large, space for market interactions outside of the “temple economy” model which has dominated the field for quite some time). This creates a “catch” we’ll get back to: taxation in kind is easy to set up and easier to maintain when infrastructure and administration is poor, but in the long term it involves heavier administrative burdens and makes it harder to move tax revenues over long distances.

Taxation in coin offers potentially greater efficiency, but requires more particular conditions to set up and maintain. First, of course, you have to have coinage. That is not a given! Much of the social interactions and mechanics of farming I’ve presented here stayed fairly constant (but consult your local primary sources for variations!) from the beginnings of written historical records (c. 3,400 BC in Mesopotamia; varies place to place) down to at least the second agricultural revolution (c. 1700 AD in Europe; later elsewhere) if not the industrial revolution (c. 1800 AD). But money (here meaning coinage) only appears in Anatolia in the seventh century BC (and probably independently invented in China in the fourth century BC). Prior to that, we see that big transactions, like long-distance trade in luxuries, might be done with standard weights of bullion, but that was hardly practical for a farmer to be paying their taxes in.

Coinage actually takes even longer to really influence these systems. The first place coinage gets used is where bullion was used – as exchange for big long-distance trade transactions. Indeed, coinage seemed to have started essentially as pre-measured bullion – “here is a hunk of silver, stamped by the king to affirm that it is exactly one shekel of weight”. Which is why, by the by, so many “money words” (pounds, talents, shekels, drachmae, etc.) are actually units of weight. But if you want to collect taxes in money, you need the small farmers to have money. Which means you need markets for them to sell their grain for money and then those merchants need to be able to sell that grain themselves for money, which means you need urban bread-eaters who are buying bread with money, which means those urban workers need to be paid in money. And you can only get any of these people to use money if they can exchange that money for things they want, which creates a nasty first-mover problem.

We refer to that entire process as monetization – when I talk about economies being “monetized” or “incompletely monetized” that’s what I mean: how completely has the use of money penetrated through this society. It isn’t a one-way street, either. Early and High Imperial Rome seem to have been more completely monetized than the Late Roman Western Empire or the early Middle Ages (though monetization increases rapidly in the later Middle Ages).

Extraction, paradoxically, can solve the first mover problem in monetization, by making the state the first mover. If the state insists on raising taxes in money, it forces the farmers to sell their grain for money to pay the tax-man; the state can then take that money and use it to pay soldiers (almost always the largest budget-item in an ancient or medieval state budget), who then use the money to buy the grain the farmers sold to the merchants, creating that self-sustaining feedback loop which steadily monetizes the society. For instance, Alexander the Great’s armies – who expected to be paid in coin – seem to have played a major role in monetizing many of the areas they marched through (along with breaking things and killing people; the image of Alexander the Great’s conquests in popular imagination tend to be a lot more sanitized).

Bret Devereaux, “Collections: Bread, How Did They Make It? Part IV: Markets, Merchants and the Tax Man”, A Collection of Unmitigated Pedantry, 2020-08-21.
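The state-as-first-mover feedback loop in that last paragraph can be sketched as a toy circular flow. All quantities are invented for illustration, and a real economy would leak coin to merchants, hoards, and foreign trade along the way:

```python
# Toy sketch of state-driven monetization: coin minted by the state is paid
# to soldiers, spent on grain, and returns as tax. The same coins circulate,
# pulling farmers (and the merchants between them) into the money economy.
holdings = {"state": 100, "soldiers": 0, "farmers": 0}

def transfer(payer: str, payee: str) -> None:
    """Move a party's whole coin balance to another party."""
    holdings[payee] += holdings[payer]
    holdings[payer] = 0

for year in range(1, 4):
    transfer("state", "soldiers")    # the army is paid in coin
    transfer("soldiers", "farmers")  # soldiers buy grain for coin
    transfer("farmers", "state")     # farmers pay taxes in coin
    print(f"year {year}: state ends the year with {holdings['state']} coins")
```

The point of the sketch is that the state never needs fresh coin each year: once the loop closes, demanding taxes in money is itself what keeps money moving through every hand in the chain.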

August 17, 2023

“… the Chinese invented gunpowder and had it for six hundred years, but couldn’t see its military applications and only used it for fireworks”

Filed under: China, History, Military, Science, Weapons — Tags: , , , , , — Nicholas @ 05:00

John Psmith would like to debunk the claim in the headline here:

An illustration of a fireworks display from the Ming dynasty book Jin Ping Mei (1628-1643 edition).
Reproduced in Joseph Needham (1986). Science and Civilisation in China, Volume 5: Chemistry and Chemical Technology, Part 7: Military Technology: The Gunpowder Epic. Cambridge University Press. Page 142.

There’s an old trope that the Chinese invented gunpowder and had it for six hundred years, but couldn’t see its military applications and only used it for fireworks. I still see this claim made all over the place, which surprises me because it’s more than just wrong, it’s implausible to anybody with any understanding of human nature.

Long before the discovery of gunpowder, the ancient Chinese were adept at the production of toxic smoke for insecticidal, fumigation, and military purposes. Siege engines containing vast pumps and furnaces for smoking out defenders are well attested as early as the 4th century. These preparations often contained lime or arsenic to make them extra nasty, and there’s a good chance that frequent use of the latter substance was what enabled early recognition of the properties of saltpetre, since arsenic can heighten the incendiary effects of potassium nitrate.

By the 9th century, there are Taoist alchemical manuals warning not to combine charcoal, saltpetre, and sulphur, especially in the presence of arsenic. Nevertheless the temptation to burn the stuff was high — saltpetre is effective as a flux in smelting, and can liberate nitric acid, which was of extreme importance to sages pursuing the secret of longevity by dissolving diamonds, religious charms, and body parts into potions. Yes, the quest for the elixir of life brought about the powder that deals death.

And so the Chinese invented gunpowder, and then things immediately began moving very fast. In the early 10th century, we see it used in a primitive flame-thrower. By the year 1000, it’s incorporated into small grenades and into giant barrel bombs lobbed by trebuchets. By the middle of the 13th century, as the Song Dynasty was buckling under the Mongol onslaught, Chinese engineers had figured out that raising the nitrate content of a gunpowder mixture resulted in a much greater explosive effect. Shortly thereafter you begin seeing accounts of truly destructive explosions that bring down city walls or flatten buildings. All of this still at least a hundred years before the first mention of gunpowder in Europe.

Meanwhile, they had also been developing guns. Way back in the 950s (when the gunpowder formula was much weaker, and produced deflagrative sparks and flames rather than true explosions), people had already thought to mount containers of gunpowder onto the ends of spears and shove them in people’s faces. This invention was called the “fire lance”, and it was quickly refined and improved into a single-use, hand-held flamethrower that stuck around until the early 20th century.1 But some other inventive Chinese took the fire lances and made them much bigger, stuck them on tripods, and eventually started filling their mouths with bits of iron, broken pottery, glass, and other shrapnel. This happened right around when the formula for gunpowder was getting less deflagrative and more explosive, and pretty soon somebody put the two together and the cannon was born.

All told it’s about three and a half centuries from the first sage singeing his eyebrows, to guns and cannons dominating the battlefield.2 Along the way what we see is not a gaggle of childlike orientals marvelling over fireworks and unable to conceive of military applications. We also don’t see an omnipotent despotism resisting technological change, or a hidebound bureaucracy maintaining an engineered stagnation. No, what we see is pretty much the opposite of these Western stereotypes of ancient Chinese society. We see a thriving ecosystem of opportunistic inventors and tacticians, striving to outcompete each other and producing a steady pace of technological change far beyond what Medieval Europe could accomplish.

Yet despite all of that, when in 1841 the iron-sided HMS Nemesis sailed into the First Opium War, the Chinese were utterly outclassed. For most of human history, the civilization cradled by the Yellow and the Yangtze was the most advanced on earth, but then in a period of just a century or two it was totally eclipsed by the upstart Europeans. This is the central paradox of the history of Chinese science and technology. So … why did it happen?


    1. Needham says he heard of one used by pirates in the South China Sea in the 1920s to set rigging alight on the ships that they boarded.

    2. I’ve left out a ton of weird gunpowder-based weaponry and evolutionary dead ends that happened along the way, but Needham’s book does a great job of covering them.
