Quotulatiousness

December 3, 2025

QotD: Origins of the Mediterranean “omni-spear”

Filed under: Europe, History, Military, Weapons — Tags: , , , — Nicholas @ 01:00

From the Museo Arqueológico Nacional in Madrid, a bronze spearhead (inv. 10212) from Italy, c. 1300-900 BC, identified as Villanovan or proto-Villanovan.

Why are so many early iron spearheads shaped this way? Well, the easy answer to the question is that it is because even earlier bronze spearheads were shaped this way. In every culture I’ve studied with the omni-spear, you can find bronze spearheads with the same basic shape – the strong mid-ridge, leaf-shaped blades and circular socket – preceding them. There are differences; the bronze spearheads of this type tend to be shorter and as a result somewhat “stubbier” (that is, they’re just as wide, but not as long) compared to the later iron spearheads which borrow their shape. That seems like it is probably a concession to metallurgy and possibly production. On the production side, bronze artifacts were generally cast to shape and, depending on the temperature of the cast and the type of casting method, that can place upper limits on the size of the final artifact. Certainly ancient bronze-smiths were capable of managing very large casts with high quality metals – the heaviest recovered naval ram (the Athlit ram, as far as I know) is absolutely massive at 465kg, cast in a single piece.

That said, I suspect the real issue that limits the size of bronze spearheads is the metal itself. Weapons generally tend to push their materials to the outer edges of what they are capable of, because of the demand to keep weight low: the smith is looking to hit the absolute minimum amount of metal which will handle the strains of impact. And the strains of impact here are considerable! Bronze under stress tends to undergo plastic deformation, which is to say that it bends and doesn’t bend back; it isn’t “springy”. As a result bronze weapons – swords, spearheads, arrow-heads, etc. – tend to be quite a bit shorter than later iron weapons, so that they can withstand the rigors of combat without permanently deforming in a way that would render them useless. Iron, by contrast, deforms elastically under mild stress, which is to say it is “springy”: when the force of stress is removed it bends back to its original shape (adding carbon to make a high-carbon “spring steel” can improve this quality). So even though iron isn’t any harder than bronze (though steel most certainly is), an iron weapon can take a bigger hit and not end up hopelessly bent. And that is even more true once you begin adding really any amount of carbon to make even very mild steels.

Consequently, you can push an iron sword to be longer for the same weight because you can count on it withstanding a hit, bending a bit to absorb the force and bending back when the force is removed, better than bronze. I suspect the same thing is happening as bronze spearhead designs shift to iron: smiths are realizing they can get a somewhat longer point, with a longer more deadly taper, without an unacceptable increase in weight.

But then why keep the shape? The question is worth asking because a lot of Bronze Age sword shapes drop out or are extensively modified fairly quickly in the shift to iron, even in places where the omni-spearhead remains the standard shape, albeit somewhat larger than its bronze variant.

Well, the answer, to me, seems to be that it’s a pretty useful shape, at least in a particular combat environment.

The round socket, of course, is to fit the round haft of the spear. These sockets are, as noted, generally round, which suggests that these spears are almost entirely being used to thrust. You probably could cut with the edges of these blades, but if that was how you expected to use the spear, you’d want a different haft shape so that you could feel the alignment of the edges of the blades. Interestingly, octagonal or rhombic sockets are a minority type that appears in a lot of places (both Gaul and Spain, for instance), but they remain really rare, as opposed to, say, medieval polearms, where non-circular hafts become common over time so that the wielder can feel that edge-alignment.

Extending the socket to make the mid-ridge also makes a lot of sense, as it provides a nice, thick, stout element of the spear to resist the forces of impact, which is going to be a mix of compression and bending. In an ideal, perfect impact, it’d be all compression, but in the real world, your target isn’t standing still and your hit probably isn’t dead-on, so you want some part of the spearhead that can resist that impact and hold its shape, transmitting the force instead into the shaft. The mid-ridge, being nice and thick (and generally not hollow past the socket) accomplishes this neatly.

Meanwhile, those wide, thin blades ensure a wide wound that is going to slice through a lot of the target. You want that too, because the fellow striking with the weapon wants a wound which will disable their opponent as quickly as possible. After all, all of the time between the delivery of a wound and it becoming disabling is, definitionally, a period where you are in range of their counter-attack and they are not disabled and so able to give it. If you ever wondered why a lot of really narrow, quick, effective piercing weapons like rapiers were less common on the battlefield, this is a big part of it: those penetrating wounds are really lethal, but often not very quickly, and on a battlefield (where you may not be able to quickly back off after having delivered a fatal wound) you want a wound that, fatal or not, is going to disable fast.

Wide slicing wounds do that for you, because they cut across blood vessels, muscles and tendons. The former leads to rapid blood-loss, which can be disabling (and of course, fatal, but again, you care about disabling; fatal or not is a problem to consider once you are out of danger), while the latter can instantly render limbs useless. It doesn’t matter how much adrenaline or willpower an enemy has, if a blow has sliced the muscles they would use to move their limbs, they cannot physically move those limbs.

The shape of the blades also seems intentional. While we do see neatly “oval” shaped blades, the most common shapes are “teardrop” or “leaf” shapes, which are widest close to the base. That probably helps in preventing over-penetration, because you need to be able to pull the spear back after delivering a strike; you do not want it stuck in the target. Likewise, I think that’s why truly “arrow” shaped spearheads tend to be both early and relatively rare. Instead the base curves back into the socket rather than having barbs, to make it easier to get that spear back out of an opponent after you strike them.

At the same time, spears are by no means immune to weight considerations. Ideally a combatant wants the longest spear they can manage easily in a single hand. That in turn is going to place a hard limit on the weight of the spearhead; every gram in the spearhead shifts the center of balance forward, making the weapon harder to handle. Shifting that center of balance back means adding a gram to the spear butt. Spearheads are pretty much always heavier than spear butts (often several times over), but the basic interaction holds: adding mass to the tip of the spear imposes weight costs which limit length. The trade-off is actually quite clear in medieval spears, where winged and “hewing” spears with larger spear-tips do, in fact, tend to be shorter and may have often been intended for use in two hands.

And because the humans in these systems don’t differ all that much, everyone more or less hits the same set of tradeoffs at basically the same point and so ends up developing spears with very similar weight and length characteristics. This should, I hope, help to dispel any myths that this or that group of ancient agrarian people were super-strong supermen; Greek, Roman, Spanish, Gallic, and Persian spears are all of the same basic length and weight and modern enthusiasts, reenactors and experimental archaeologists can wield those spears just fine. The basic limits of an average warrior haven’t changed all that much.

What you are left with is a spear with a 2-2.5m haft (probably just under 1kg), with spear-tips ranging from 150-450g, mostly clustered toward the lower end of that range around 200g, and spear-butts typically very light, less than 100g and very simple in design (with an exception here for the elaborate Greek sauroter). A simple, no-frills design that would have been very effective on foot or on horseback.1
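The weight-and-balance tradeoff described above can be put in rough numbers. A minimal sketch (mine, not the author’s): model the haft as a uniform rod with point masses at each end, using the approximate figures quoted in the passage (a ~2.2m haft just under 1kg, a 200g head, a sub-100g butt).

```python
def balance_point(haft_len_m=2.2, haft_g=950.0, head_g=200.0, butt_g=80.0):
    """Centre of balance, in metres from the butt end.

    Crude model: the haft is a uniform rod; the spearhead and the
    butt-spike are point masses at the ends. Default figures are the
    rough ones quoted in the passage, not measurements.
    """
    total = haft_g + head_g + butt_g
    # Moments about the butt end: the haft's mass acts at its midpoint,
    # the head at the far tip, the butt at zero distance.
    moment = haft_g * (haft_len_m / 2) + head_g * haft_len_m
    return moment / total

typical = balance_point()               # ~1.21 m from the butt
heavy_head = balance_point(head_g=450)  # ~1.38 m: a 250 g heavier tip
                                        # pushes the balance ~17 cm forward
```

Even on this crude model the essay’s point is visible: mass at the tip moves the balance point forward far more than the same mass anywhere else on the weapon, which is why heavier-headed spears tended to be shorter or two-handed.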

But as a basic design, the typical omni-spear provides a very effective balance of capabilities: the longest infantry spear that is easy enough to handle, with a tip that is suitably deadly against lightly armored or unarmored targets, and typically a spear-butt which both encloses the base of the spear (preventing it from delaminating) and provides a point which can be used both to brace the weapon and as an emergency back-up weapon, without adding unacceptable amounts of weight. Note, of course, that I’ve said unarmored or lightly armored: the wide blade on that spearhead is going to cause a strike to have to move aside quite a bit of armor if your opponent is wearing some, greatly limiting the depth of a strike if you have to move the weapon through, say, thick textile armor or mail. But assuming you only expect to strike unarmored targets, or the unarmored portions of armored targets, the shape is very effective.

Bret Devereaux, “Collections: The Mediterranean Iron Omni-Spear”, A Collection of Unmitigated Pedantry,


  1. Though the Greek cavalry spear of the late-Classical and Hellenistic periods, the xyston – pronounced ksuston, not zystin – is structured differently from this.

October 21, 2025

It was on EVERY work bench … until everyone forgot it

Filed under: Tools, Woodworking — Tags: , , , , — Nicholas @ 02:00

Rex Krueger
Published 12 Jun 2025

Compass Rose Toolworks: https://www.compassrosetools.com/
Check out my Courses: https://rexkrueger.retrieve.com/
Patrons saw this video early: patreon.com/rexkrueger
Follow me on Instagram: @rexkrueger

October 14, 2025

Carthaginian or Roman America?

Filed under: Africa, Americas, History, Technology — Tags: , , , , , — Nicholas @ 03:00

On the social media site formerly known as Twitter:

    Alaric The Barbarian @0xAlaric

    There’s a handful of evidence for this. Most of it’s a little fringe or circumstantial, but it exists.

    – 500s BC Carthaginian navigator Himilco described the Sargasso Sea; the original work is now lost but it was quoted in Ora Maritima a century later. If you can make it there and back, you know trade winds well enough to take Columbus’ route.

    – There’s quite a lot of copper “missing” from the Great Lakes area, and there was more bronze in the Old World than could have possibly been supported by the known copper mining infrastructure there … despite 7,000-year-old copper mines in that region, the local natives didn’t seem to really use copper for much aside from odd pictographic disks.

    – The Tecaxic-Calixtlahuaca head, discovered in 1933, was a bearded terracotta head made before European contact with modern-day Mexico. Its features and style don’t match local populations or material cultures, and it’s been dated to centuries prior. Roman experts ID it as 200s AD Roman art. Even the archaeological community isn’t sure what to make of this one; their best (non-)explanation is “it was a prank”.

    – Numerous odd discoveries were made of Old World artifacts in the American West throughout the 19th century. Alleged Roman coins, weapons, tools, etc. Some of these were hoaxes; others have been lost to time; others seem almost covered up. The wildest example is Kincaid’s alleged 1909 discovery of an ancient Egyptian-style tunnel annex hidden in the walls of the Grand Canyon, full of artifacts; and, a similar alleged discovery around Death Valley. The former was reported to have been investigated (maybe covered up?) by the Smithsonian, though they deny this; the latter is on government land now.

    – Various Old World artifacts seem to show New World goods or maps; there is a depiction of a pineapple at Pompeii, for example, and c. 350 BC Carthaginian coins show a map of the Mediterranean including the Americas to the west. Certain of Ptolemy’s odd geographic ideas are “corrected” (such as his earth-size estimate) by placing the Antilles as the Fortunate Isles. The Piri Reis Map, compiled in 1513 but surely copied from much older sources, shows a fairly accurate east coast of the Americas, as well as Antarctica. Diodorus Siculus may have even described the Americas as found by the Phoenicians, then kept secret …

    – This of course predates Rome and Carthage, but a wide swathe of native cultures had extraordinarily similar oral histories of being visited by ethnically distinct people from the east who taught them aspects of civilization … “But that’s probably nothing, right?”

    The field awaits its smoking gun, its Rosetta Stone. But I believe something is out there, just waiting for an enterprising follower of Schliemann to discover it. There’s *something* there.

And in response:

    John Ringo SF Author @Jringo1508

    The part that does it for me (that there was pre-Viking contact) is just studying the development of pottery and metallurgy in the Old World vs New.

    Old World: Burnt bits of clay with markings on them. Poorly formed “pottery” charms. Better made pottery charms. Pottery dishes. Metal ore based glazes. Simple, low temperature, metals.

    New World: POTTERY FULLY FORMED AND GOLD AND SILVER EXTRACTION BECAUSE NATIVE AMERICANS ARE AWESOME!

    The Carthaginians had a process of going to less advanced areas, teaching them some simple “advanced” technologies that in some way helped out the Carthaginians then trading with them for “stuff”. They’d teach pottery or better pottery techniques so that they (the Carthaginians) didn’t have to load themselves down with empty pots to pick up “stuff”.

    They’d then trade stuff like bronze daggers for gold, silver and spices.

    So, it entirely makes sense (if you understand the currents) that a Carthaginian/Phoenician (they’re the same) trading/exploring fleet would make it across the Atlantic in one direction (probably in winter), set up a trading center somewhere and start trading wares. They’d leave a few behind to build up a store then sail back.

    If they went over in winter and sailed back in summer, good chance they were wiped out by hurricanes.

    It could have happened multiple times with a small group of colonists left behind. Their genes would disappear in the wash.

    But going from zero to FULLY FORMED POTTERY has always been my reason to know that there was early contact.

September 5, 2025

Are replacement blades a ripoff?

Filed under: Tools, Woodworking — Tags: , , , , , — Nicholas @ 04:00

Rex Krueger
Published 4 Sept 2025

April 30, 2025

How Is Ammunition Made? A Tour of Sellier & Bellot’s Factory

Filed under: Business, Europe, Military, Weapons — Tags: , , , , — Nicholas @ 02:00

Forgotten Weapons
Published 30 Dec 2024

Today I am in Vlašim in the Czech Republic, where Sellier & Bellot has allowed me to film a tour of their ammunition plant. This is one of the largest ammo manufacturers in the world, and they start with basic raw material like lead, copper, and brass and ship out complete case ammunition. The machines involved in this process are really interesting — let’s have a look!

January 13, 2025

QotD: The rise of coal in England

As for conditions on the eve of coal’s rapid rise in the late sixteenth century, they were actually even less intense. Following the Black Death, London’s population took centuries to recover, and by 1550 was still below its estimated medieval peak. Having once held some 70-80,000 souls, by 1550 it had only recovered to about 50,000. And the woodlands fuelling London were clearly still intact. Foreign visitors in the 1550s, who mostly stayed close to the city, described the English countryside as “all enclosed with hedges, oaks, and many other sorts of trees, so that in travelling you seem to be in one continued wood”, and remarked that the country had “an abundance of firewood”.1 Even in the 1570s, when London’s population had likely begun to finally push past its medieval peak, the city seems to have drawn its wood from a much smaller radius than before. Whereas in the crunch of the 1300s it seemingly needed to draw firewood from as far as 17 miles away over land, in the 1570s even a London MP, with every interest in exaggerating the city’s demands, complained that it only sometimes had to source wood from as far away as just 12 miles.2

And not far along the coast from the city were also the huge woodlands of the Weald, which stretched across the southeastern counties of Sussex, Surrey and Kent, and which did not even send much of their wood to London at all. Firewood from the Weald was not only exported to the Low Countries and the northern coast of France; those exports more than tripled between 1490 and the early 1530s, from some 1.5 million billets per year to over 4.7 million. That level was still being reached in 1550, when not interrupted by on-and-off war with France, but by then the Weald was also meeting yet another new demand: for making iron.3

Ironmaking was extremely wood-hungry. In the 1550s Weald, making just a single ton of “pig” or cast iron, fit only for cannon or cooking pots, required almost 4 tons of charcoal, which in turn required roughly another 28 tons or so of seasoned wood. England in the early sixteenth century had imported the vast majority of its iron from Spain, but between 1530 and 1550 Wealden pig iron production increased eightfold. The expansion would have demanded, on a very conservative estimate, the sustained annual output of at least 50,000 acres of woodland — an area over sixty times the size of New York’s Central Park. Yet even this hugely understates the true scale of the expansion, as pig iron needed to be refined into bar or wrought iron in order to be fit for most uses, which required twice as much charcoal again — or in other words, a total of 86 tons of seasoned wood had to be first baked and then burned, just to make one ton of bar iron from the ore. And all this was just the beginning. By the 1590s the output of the Wealden ironworks had more than tripled again, for pig iron alone (though the efficiency of charcoal usage had also halved — a story for another time, perhaps).4
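The fuel arithmetic above can be restated as a quick back-of-the-envelope computation. All the ratios below are the essay’s own figures; the closing note about refining losses is my inference to square its 86-ton total, not a figure from the text.

```python
# Fuel chain for 1550s Wealden iron, using the essay's stated ratios.
charcoal_per_ton_pig = 4.0                    # tons of charcoal per ton of pig iron
wood_per_ton_charcoal = 28.0 / 4.0            # ~7 tons of seasoned wood per ton of charcoal
wood_per_ton_pig = charcoal_per_ton_pig * wood_per_ton_charcoal    # 28 tons

# Refining pig into bar iron "required twice as much charcoal again":
charcoal_refining = 2 * charcoal_per_ton_pig                       # 8 more tons of charcoal
wood_refining = charcoal_refining * wood_per_ton_charcoal          # 56 more tons of wood

naive_total = wood_per_ton_pig + wood_refining                     # 84 tons on these ratios alone
# The essay's total of 86 tons per ton of bar iron is slightly higher,
# presumably because somewhat more than one ton of pig iron was
# consumed in making each ton of finished bar.
```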

Given the rapidity of these changes, it will come as no surprise that there were complaints from the locals about how much the ironworks had increased the price of fuel for their homes. No doubt the wood being exported was having a similar effect as well. But the 1540s and 50s were also a time of rapid general inflation, because of a dramatic debasement of the currency initiated by Henry VIII to pay for his wars. This not only made imports significantly more expensive, and so likely spurred much of the activity in the Weald to replace increasingly unaffordable iron from Spain, but also made exports significantly cheaper for buyers abroad — and thus unaffordable for the English themselves.

In 1548-9, in a desperate bid to keep prices down, royal proclamations repeatedly and futilely banned the export of English wheat, malt, oats, barley, butter, cheese, bacon, beef, tallow, hides, and leather, to which the following year were added — like a game of inflation whack-a-mole — rye, peas, beans, bread, biscuits, mutton, veal, lamb, pork, ale, beer, wool, and candles. And of course charcoal and wood.5 For us to have records of the Weald exporting large quantities of wood in 1550, then, the wood must either have been sold through special royal licence, or have all been shipped out before the ban came into force just halfway through the year in May. Presumably a great deal more than recorded was also smuggled out. In 1555, parliament saw the need to put the ban on exporting victuals and wood into law, adding severe penalties. Transgressing merchants would lose their ship and have to pay a fine worth double the value of the contraband goods, while the ship’s mariners would see all their worldly possessions seized, and be imprisoned for at least a year without bail.6

It’s perhaps no wonder that the Weald’s ironworks continued to expand at such a rapid pace: the export ban would have freed up a great deal of woodland for their use. And ironmaking soon spread to other parts of England too, to where it did not have to compete for fuel with people’s homes. Given iron was significantly more valuable by both weight and volume than wood, it could easily bear the cost of transporting it from further away, and so could be made much further inland, away from the coasts and rivers whose woodlands served cities. In the early seventeenth century, iron ore and pig iron from the southwest of England was sometimes shipped all the way to well-wooded Ireland for smelting or refining into bar.7 In the early eighteenth century scrap iron from as far away as even the Netherlands was being recycled in the forested valleys of southwestern Scotland.8

Whenever ironmaking hit the limits of what could be sustainably grown in an area, it simply expanded into the next place where wood was cheap. And there was almost always another place. England, having had to import some three quarters of its iron from Spain in the 1530s, by the 1580s was almost entirely self-sufficient, after which the total amount of iron it produced using charcoal continued to grow, reaching its peak another two hundred years later in the 1750s.9 Had iron-making not been able to find sustainable supplies of fuel within England, it would have disappeared within just a few years rather than experiencing almost two centuries of expansion.10

And that’s just iron. The late sixteenth century also saw the rapid rise in England of a charcoal-hungry glass-making industry too. Green glass for small bottles had long been made in some of England’s forests in small quantities, but large quantities of glass for windows had had to be imported from the Low Countries and France. Just as with iron, however, the effect of debasement was to make the imports unaffordable for the English, and so French workers were enticed over in the 1550s and 60s to make window glass in the Weald. Soon afterwards, Venetian-style crystal-clear drinking glasses were being made there too.

What makes glass even more interesting than iron, however, is that its breakability meant it could not be made too far away from the cities in which it would be sold, and so had to compete directly with people’s homes for its fuel. Yet by the 1570s crystal glass was being made even within London itself. Despite charcoal supplies being by far the largest cost of production, over the course of the late sixteenth century the price of glass in England remained stable, making it increasingly common and affordable while the price of pretty much everything else rose.11

What we have then is not evidence of a mid-sixteenth-century shortage of wood for fuel, and certainly not of those demands causing deforestation. It is instead evidence of truly unprecedented demands being generally and sustainably met.

And despite these unprecedented demands, the intensity with which under-woods were exploited for fuel seems to have actually decreased. During the medieval population peaks, the woods and hedges that supplied London had been squeezed for more fuel by simply cropping the trunks and branches more often, cutting them away every six or seven years rather than waiting for them to grow into larger poles or logs. After the Black Death killed off half the population, the cropping cycle could again lengthen to about eleven years. But under-woods in the mid-sixteenth century were being cropped on average only every twelve or so years — about twice as long a cycle as before the Black Death — which by the nineteenth century had lengthened still further to fourteen or fifteen.12

The lengthening of the cropping cycle can imply a number of things, and we’ll get to them all. But one possibility is that in order to meet unprecedented demands, more firewood was being collected at the expense of the other major use of trees: for timber.

Anton Howes, “The Coal Conquest”, Age of Invention, 2024-10-04.


    1. Estienne Perlin, “A description of England and Scotland” [1558], in The Antiquarian Repertory, vol.1 (1775), p.231. Perlin must have visited Britain in early 1553, as he mentions the arrival of a new French ambassador, which occurred in April 1553, as well as the wedding of Lady Jane Grey, which occurred in May of that year. Also Danielo Barbaro, “Report (May 1551)” in Calendar of State Papers Relating to English Affairs in the Archives of Venice, Vol 5: 1534-1554 (Her Majesty’s Stationery Office, 1873). And: Paul Warde and Tom Williamson, “Fuel Supply and Agriculture in Post-Medieval England”, The Agricultural History Review 62, no. 1 (2014), p.71

    2. Galloway et al., p.457 for the estimate of 17.4 miles overland as the outer limit of London’s firewood supply; Proceedings in the Parliaments of Elizabeth I, Vol I: 1558-1581, ed. T.E. Hartley (Leicester University Press, 1981), p.370: specifically, the London MP Rowland Hayward complained of the cost of firewood billets and charcoal having increased in price over the previous 30 years (which would encompass the period of debasement-induced inflation), before noting that “Sometimes the want of wood has driven the City to make provision in such places as they have been driven to carry it 12 miles by land”.

    3. Mavis E. Mate, Trade and Economic Developments, 1450-1550: The Experience of Kent, Surrey and Sussex (Boydell Press, 2006), pp.83, 92, 101.

    4. These statistics are derived from a combination of Peter King, “The Production and Consumption of Bar Iron in Early Modern England and Wales”, The Economic History Review 58, no. 1 (1 February 2005), pp.1–33 for the iron production estimates, and G. Hammersley, “The Charcoal Iron Industry and Its Fuel, 1540-1750”, The Economic History Review 26, no. 4 (1973), pp.593–613 for the estimates of how much charcoal, wood, and land was required at a given date to produce a given quantity of pig or bar iron.

    5. Paul L. Hughes and James F. Larkin, eds., Tudor Royal Proclamations., Vol. I: The Early Tudors (1485-1553) (Yale University Press, 1964), proclamations nos. 304, 310, 318, 319, 345, 357, 361, 365, 366.

    6. 1 & 2 Philip & Mary, c.5 (1555)

    7. William Brereton, Travels in Holland, the United Provinces, England, Scotland and Ireland 1634-1635, ed. Edward Hawkins (The Chetham Society, 1844), p.147

    8. T. C. Smout, ed., “Journal of Henry Kalmeter’s Travels in Scotland, 1719-20”, in Scottish Industrial History: A Miscellany, vol. 14, 4 (Scottish History Society, 1978), p.19

    9. See King. Note that there was an interruption to this growth in the mid-seventeenth century, for reasons I mention later on.

    10. There was a period in the early-to-mid seventeenth century when English ironmaking stagnated, but this was due to the growth of a competitive ironmaking industry in Sweden.

    11. D. W. Crossley, “The Performance of the Glass Industry in Sixteenth-Century England”, The Economic History Review 25, no. 3 (1972), pp.421–33

    12. Galloway et al. On cropping cycles in particular, see pp.454-5: they note how the average cropping of wood in their sample c.1300 was about every seven years, but by 1375-1400 — once population pressures had receded due to the Black Death — the average had increased to every eleven. See also Rackham, pp.140-1. John Worlidge, Systema agriculturæ (1675), p.96 mentions that coppice “of twelve or fifteen years are esteemed fit for the axe. But those of twenty years’ standing are better, and far advance the price. Seventeen years’ growth affords a tolerable fell”.

June 1, 2024

QotD: When the chimneys rose in London

A coal fire also burns much hotter, and with more acidic fumes, than a wood fire. Pots that worked well enough for wood — typically brass, either thin beaten-brass or thicker cast-brass — degrade rapidly over coal, and people increasingly switched to iron, which takes longer to heat but lasts much better. At the beginning of the shift to coal, the only option for pots was wrought iron — nearly pure elemental iron, wrought (archaic past tense of “worked”, as in “what hath God wrought”) with hammer and anvil, a labor-intensive process. But since the advent of the blast furnace in the late fifteenth century, there was a better, cheaper material available: cast iron.1 It was already being used for firebacks, rollers for crushing malt, and so forth, but English foundries were substantially behind those of the continent when it came to casting techniques in brass and were entirely unprepared to make iron pots with any sort of efficiency. The innovator here was Abraham Darby, who in 1707 filed a patent for a dramatically improved method of casting metal for pots — and also, incidentally, used a coal-fired blast furnace to smelt the iron. This turned out to be the key: a charcoal-fueled blast furnace, which is what people had been using up to then, makes white cast iron, a metal too brittle to be cast into nicely curved shapes like a pot. Smelting with coal produces gray cast iron, which includes silicon in the metal’s structure and works much better for casting complicated shapes like, say, parts for a steam engine. Coal-smelted iron would be the key material of the Industrial Revolution, but the economic incentive for its original development was the early modern market for pots, kettles, and grates suitable for cooking over the heat and fumes of a coal fire.2

In Ruth Goodman’s telling, though, the greatest difference between coal and wood fires is the smoke. Smoke isn’t something we think much about these days: on the rare occasions I’m around a fire at all, I’m either outdoors (where the smoke dissipates rapidly except for a pleasant lingering aroma on my jacket) or in front of a fireplace with a good chimney that draws the smoke up and out of the house. However, a chimney also draws about 70% of the fire’s heat — not a problem if you’re in a centrally-heated modern home and enjoying the fire for ✨ambience✨, but a serious issue if it’s the main thing between your family and the Little Ice Age outdoors. Accordingly, premodern English homes didn’t have chimneys: the fire sat in a central hearth in the middle of the room, radiating heat in all directions, and the smoke slowly dissipated out of the unglazed windows and through the thatch of the roof. Goodman describes practical considerations of living with woodsmoke that never occurred to me:

    In the relatively still milieu of an interior space, wood smoke creates a distinct and visible horizon, below which the air is fairly clear and above which asphyxiation is a real possibility. The height of this horizon line is critical to living without a chimney. The exact dynamics vary from building to building and from hour to hour as the weather outside changes. Winds can cause cross-draughts that stir things up; doors and shutters opening and closing can buffet smoke in various directions. … From my experiences managing fires in a multitude of buildings in many different weather conditions, I can attest to the annoyance of a small change in the angle of a propped-open door, the opening of a shutter or the shifting of a piece of furniture that you had placed just so to quiet the air. And as for people standing in doorways, don’t get me started.

One obvious adaptation was to live life low to the ground. On a warm day the smoke horizon might be relatively high, but on a cold damp one (of which, you may be aware, England has quite a lot) smoke hovers low enough that even sitting in a tall chair might well put your head right up into it. Far better to sit on a low stool, or, better yet, a nice soft insulating layer of rushes on the floor.

Chimneys did exist before the transition to coal, but given the cost of masonry and the additional fuel expenses, they were typically found only in the very wealthiest homes. Everyone else lived with a central hearth and, if they could afford it, added smoke management systems to their homes piecemeal. Among the available solutions were the reredos (a short half-height wall against which the fire was built and which would counteract drafts from doorways), the smoke hood (rather like our modern cooktop vent hood but without the fan, allowing some of the smoke to rise out of the living space without creating a draw on the heat), or the smoke bay (a method of constructing an upstairs room over only part of the downstairs that still allowed smoke to rise and dissipate through the roof). Wood smoke management was mostly a question of avoiding too great a concentration in places you wanted your face to be. The switch to coal changed this, though, because coal smoke is frankly foul stuff. It hangs lower than wood smoke, in part because it cools faster, and it’s full of sulfur compounds that combine with the water in your eyes and lungs to create a mild sulfuric acid; when your eyes water from the irritation, the stinging only gets worse. Burning coal in an unvented central hearth would have been painful and choking. If you already had one of the interim smoke management techniques of the wood-burning period — especially the smoke hood — you would have found adopting coal more appealing, but really, if you burned coal, you wanted a chimney. You probably already wanted a chimney, though; they had been a status symbol for centuries.

And indeed, chimneys went up all over London; their main disadvantage, aside from the cost of a major home renovation, had been the way they drew away the heat along with the smoke, but a coal fire’s greater energy output made that less of an issue. The other downside of the chimney’s draw, though, is the draft it creates at ground level. Again, this isn’t terribly noticeable today because most of us don’t spend a lot of time sitting in front of the fireplace (or indeed, sitting on the floor at all, unless we have small children), but pay attention next time you’re by an indoor wood fire and you will notice a flow of cold air for the first inch or two off the ground. All of a sudden, instead of putting your mattress directly on the drafty floor, you wanted a bedstead to lift it up — and a nice tall chair to sit on, and a table to pull your chair up to as well. There were further practical differences, too: because a chimney has to be built into a wall, it can’t heat as large an area as a central fire. This incentivized smaller rooms, which were further enabled by the fact that a coal fire can burn much longer without tending than a wood fire. A gentleman doesn’t have much use for a small study where he can retreat to be alone with his books and papers if a servant is popping in every ten minutes to stir up the fire, but if the coals in the grate will burn for an hour or two untended he can have some real privacy. The premodern wood-burning home was a large open space where many members of the household, both masters and servants, went about their daily tasks; the coal-burning home gradually became a collection of smaller, furniture-filled spaces that individuals or small groups used for specific purposes. Nowhere is this shift more evident than in the word “hall”, which transitions from referring to something like Heorot to being a mere corridor between rooms.

Jane Psmith, “REVIEW: The Domestic Revolution by Ruth Goodman”, Mr. and Mrs. Psmith’s Bookshelf, 2023-05-22.


    1. Brief ferrous metallurgy digression: aside from the rare, relatively pure iron found in meteors, all iron found in nature is in the form of ores like haematite, where the iron is bound up with oxygen and other impurities like silicon and phosphorus (“slag”). Getting the iron out of the ore requires adding carbon (for the oxygen to bond with) and heat (to fuel the chemical reaction): 2 Fe2O3 + 3 C + slag → 4 Fe + 3 CO2 + slag. Before the adoption of the blast furnace, European iron came from bloomeries: basically a chimney full of fuel hot enough to cause a reduction reaction when ore is added to the top, removing the oxygen from the ore but leaving behind a mass of mixed iron and slag called a bloom. The bloom would then be heated and beaten and heated and beaten — the hot metal sticks together while the slag crumbles and breaks off — to leave behind a lump of nearly pure iron. (If you managed the temperature of your bloomery just right you could incorporate some of the carbon into the iron itself, producing steel, but this was difficult to manage and carbon was usually added to the iron afterwards to make things like armor and swords.) In a blast furnace, by contrast, the fuel and ore were mixed together and powerful blasts of air were forced through as the material moved down the furnace and the molten iron dripped out the bottom. From there it could be poured directly into molds and cast into the desired shape. This is obviously much faster and easier! But cast iron has much more carbon, which makes it very hard, lowers its melting point, and leaves it extremely brittle — you would never want a cast iron sword. (The behavior of various ferrous metals is determined by the way the non-metal atoms, especially carbon, interrupt the crystal structure of the iron. Wrought iron has less than .08% carbon by weight, modern “low carbon” steel between .05% and .3%, “high carbon” steel up to about 1.7%, and cast iron more than 3%.)
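The reduction reaction in the footnote balances as 2 Fe2O3 + 3 C → 4 Fe + 3 CO2. As a back-of-the-envelope sketch (assuming pure haematite and standard atomic weights, and ignoring the slag entirely — real ores yield considerably less), the theoretical mass balance works out like this:

```python
# Molar masses in g/mol (standard atomic weights)
FE, O, C = 55.845, 15.999, 12.011
FE2O3 = 2 * FE + 3 * O          # haematite, ~159.69 g/mol

# Balanced reduction: 2 Fe2O3 + 3 C -> 4 Fe + 3 CO2
# Theoretical yield per kilogram of pure haematite.
mol_ore = 1000.0 / FE2O3        # moles of Fe2O3 in 1 kg of ore
iron_kg = mol_ore * 2 * FE / 1000.0       # 4 Fe per 2 Fe2O3
carbon_kg = mol_ore * 1.5 * C / 1000.0    # 3 C per 2 Fe2O3

print(f"iron: {iron_kg:.3f} kg, carbon consumed: {carbon_kg:.3f} kg")
```

Pure haematite is thus about 70% iron by mass; everything beyond that in a real bloom is slag and impurities, which is why all that heating and beating is needed.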

    2. The sales of those cooking implements went on to provide the capital for further innovation: Darby’s son and grandson, two more Abrahams, also played important roles in the Industrial Revolution.

May 11, 2024

Blacksmithing Basics for Woodworkers: Launching Your First Forging Project

Filed under: Tools — Tags: , , , — Nicholas @ 02:00

Rex Krueger
Published Jan 31, 2024

What to expect your first night at a blacksmithing club.

Project plans: rexkrueger.com/store
Patrons saw this video early: patreon.com/rexkrueger

Abana National Organization: https://abana.org/
My review of 2 affordable hammers: https://www.youtube.com/watch?v=q0mGz…
I Forge Iron Forum: https://www.iforgeiron.com/

April 16, 2024

Making a Lie-Nielsen Plane From Start to Finish

Filed under: Tools, Woodworking — Tags: , , , — Nicholas @ 02:00

Lie-Nielsen Toolworks
Published Dec 8, 2023

See the process of making the Lie-Nielsen No. 62 Low Angle Jack Plane: initial pattern making; pouring the castings at the foundry; machining, grinding, and finishing all the parts; final assembly, inspection, and wrapping.

January 3, 2024

QotD: Iron and steel

Filed under: History, Quotations, Science, Technology — Tags: , , , , , — Nicholas @ 01:00

I don’t want to get too bogged down in the exact chemistry of how the introduction of carbon changes the metallic matrix of the iron; you are welcome to read about it. As the carbon content of the iron increases, the iron’s basic characteristics – its ductility and hardness (among others) – change. Pure iron, when it takes a heavy impact, tends to deform (bend) to absorb that impact (it is ductile and soft). Increasing the carbon content makes the iron harder, causing it both to resist bending more and to hold an edge better (hardness is the key characteristic for holding an edge through use). In the right amount, the steel is springy, bending to absorb impacts but rapidly returning to its original shape. But too much carbon and the steel becomes too hard and not ductile enough, causing it to become brittle.

Compared to the other materials available for tools and weapons, high carbon “spring steel” was essentially the super-material of the pre-modern world. High carbon steel is dramatically harder than iron, such that a good steel blade will bite – often surprisingly deeply – into an iron blade without much damage to itself. Moreover, good steel can take fairly high energy impacts and simply bend to absorb the energy before springing back into its original shape (rather than, as with iron, undergoing plastic deformation, where it bends, but doesn’t bend back – which is still better than breaking, but not much). And for armor, you may recall from our previous look at arrow penetration, a steel plate’s ability to resist puncture is much higher than the same plate made of iron (bronze, by the by, performs about as well as iron, assuming both are work hardened). Of course, different applications still prefer different carbon contents; armor, for instance, tended to benefit from somewhat lower carbon content than a sword blade.

It is sometimes contended that the ancients did not know the difference between iron and steel. This is mostly a philological argument based on the infrequency of a technical distinction between the two in ancient languages. Latin authors will frequently use ferrum (iron) to mean both iron and steel; Greek will use σίδηρος (sideros, “iron”) much the same way. The problem here is that high literature in the ancient world – which is almost all of the literature we have – has a strong aversion to technical terms in general; it would do no good for an elite writer to display knowledge more becoming to a tradesman than a senator. That said, in a handful of spots, Latin authors use chalybs (from the Greek χάλυψ) to mean steel, as distinct from iron.

More to the point, while our elite authors – who are, at most, dilettantish observers of metallurgy, never active participants – may or may not know the difference, ancient artisans clearly did. As Tylecote (op. cit.) notes, we see surface carburization on tools as early as 1000 B.C. in the Levant and Egypt, although the extent of its use and intentionality is hard to gauge due to rust and damage. There is no such problem with Gallic metallurgy from at least the La Tène period (450–50 B.C.) or Roman metallurgy from c. 200 B.C., because we see evidence of smiths quite deliberately varying carbon content over the different parts of sword-blades (more carbon in the edges, less in the core) through pattern welding, which itself can leave a tell-tale “streaky” appearance to the blade (these streaks can be faked, but there’s little point in faking them if they are not already understood to signify a better weapon). There can be little doubt that the smith who welds a steel edge to an iron core to make a sword blade understands that there is something different about that edge (especially since he cannot, as we can, precisely test the hardness of the two every time – he must know a method that generally produces harder metal and be working from that assumption; high carbon steel, properly produced, can be much harder than iron, as we’ll see).

That said, our ancient – or even medieval – smiths do not understand the chemistry of all of this, of course. Understanding the effects of carburization and how to harness that to make better tools must have been something learned through experience and experimentation, not from theoretical knowledge – a thing passed from master to apprentice, with only slight modification in each generation (though it is equally clear that techniques could move quite quickly over cultural boundaries, since smiths with an inferior technique need only imitate a superior one).

Bret Devereaux, “Collections: Iron, How Did They Make It, Part IVa: Steel Yourself”, A Collection of Unmitigated Pedantry, 2020-10-09.

November 15, 2023

“If you cannot make your own pig iron, you are just LARP’n as a real power”

Filed under: Britain, History, Technology — Tags: , , , , — Nicholas @ 04:00

CDR Salamander talks about the importance of an old industry to a modern industrial economy:

We probably need to start this out by explaining exactly what a blast furnace is and why it is important if you want to be a sovereign nation.

First of all, what it does;

    The purpose of a blast furnace is to chemically reduce and physically convert iron oxide into liquid iron called “hot metal”. The blast furnace is a huge, steel stack lined with refractory brick where iron ore, coke and limestone are charged into the top and preheated air is blown into the bottom. The raw materials require 6 to 8 hours to descend to the bottom of the furnace where they become the final product of liquid slag and liquid iron. These liquid products are drained from the furnace at regular intervals. The hot air that was blown into the bottom of the furnace ascends to the top in 6 to 8 seconds after going through numerous chemical reactions. Once the blast furnace is started it continuously runs for four to ten years with only short stops to perform planned maintenance.

Why are blast furnaces so important? Remember the middle part of Billy Joel’s “Iron, coke, chromium steel?”

“Coke” is in essence purified coal, almost pure carbon. It is about the only thing that can, at scale, make “new” or raw iron, aka “pig iron”. Only coke in a blast furnace can make enough heat to turn iron ore into iron. You can’t get that heat with an electric furnace.

Pig iron is the foundation of everything that follows that makes an industrial power. If you cannot make your own pig iron, you are just LARP’n as a real power.

It takes a semester at least to understand this, but here is all you really need to know;

    Primary differences

    While the end product from each of these is comparable, there are clearly differences between their capabilities and process. Comparing each type of furnace, the major distinctions are:

    Material source – blast furnaces can melt raw iron ore as well as recycled metal, while electric arc furnaces only melt recycled or scrap metal.

    Power supply – blast furnaces primarily use coke to supply the energy needed to heat up the metal, while EAFs use electricity to accomplish this.

    Environmental impact – because of the fuels used for each, EAFs can produce up to 85% less carbon dioxide than blast furnaces.

    Cost – EAFs cost less than blast furnaces and take up less space in a factory.

    Efficiency – EAFs also reach higher temperatures much faster and can melt and produce products more quickly, as well as having more precise control over the temperature compared to blast furnaces.

We’ll get to that environmental impact later, but the “Material source” section is your money quote.

Without a blast furnace, all you can do is recycle scrap iron.

You cannot fight wars at scale if all you have is scrap iron. You cannot be an industrial hub off of just scrap iron. If you are a nation of any size, you then become economically and security vulnerable at an existential level. I don’t care how much science fiction you get nakid and roll in; wars are won by steel, ungodly amounts of steel.

Where do you get the steel to build your warships? Your tanks? Your factories? Your buildings?

If you can only use scrap, then you are simply a scavenger living off the hard work of previous generations. Eventually you run out. You will wind up like the cypress mills of old Florida where, once they ran out of cypress trees, they simply sold off the cypress lumber their mills were constructed of … and then went bankrupt.

October 26, 2023

QotD: Making steel

Filed under: History, Quotations, Science, Technology — Tags: , , — Nicholas @ 01:00

Let’s start with the absolute basics: what is steel? Fundamentally, steel is an alloy of iron and carbon. We can, for the most part, dispense with many modern varieties of steel that involve more complex alloys; things like stainless steel (which adds chromium to the mix) were unknown to pre-modern smiths and produced only by accident. Natural alloys of this sort (particularly with manganese) might have been produced by accident where local ores had trace amounts of other metals. This may have led to the common belief among ancient and medieval writers that iron from certain areas was superior to others (steel from Noricum in the Roman period, for instance, had this reputation, note Buchwald, op. cit. for the evidence of this), though I have not seen this proved with chemical studies.

So we are going to limit ourselves here to just carbon and iron. Now in video-game logic, that means you take one “unit” of carbon and one “unit” of iron and bash them together in a fire to make steel. As we’ll see, the process is at least moderately more complicated than that. But more to the point: those proportions are totally wrong. Steel is a combination of iron and carbon, but not equal parts or anything close to it. Instead, the general division goes this way (there are several classification systems but they all have the same general grades):

Below 0.05% carbon or so, we just refer to that as iron. There is going to be some small amount of carbon in most iron objects, picked up in the smelting or forging process.
From 0.05% carbon to 0.25% carbon is mild or low carbon steel.
From about 0.3% to about 0.6%, we might call medium carbon steel, although I see this classification only infrequently.
From 0.6% to around 1.25% carbon is high-carbon steel, also known as spring steel. For most armor, weapons and tools, this is the “good stuff” (but see below on pattern welding).
From 1.25% to 2% are “ultra-high-carbon steels” which, as far as I can tell, didn’t see much use in the ancient or medieval world.
Above 2%, you have cast iron or pig iron; excessive carbon makes the steel much too hard and brittle, making it unsuitable for most purposes.
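The grade bands above can be summarized in a short lookup. This is a sketch following the quotation’s approximate figures; the small gaps between the named ranges are in the source, so the boundaries below are necessarily rough:

```python
def steel_grade(pct_carbon: float) -> str:
    """Classify a ferrous metal by carbon content (% by weight),
    using the approximate bands from the passage above."""
    if pct_carbon < 0.05:
        return "iron"
    if pct_carbon <= 0.25:
        return "mild (low carbon) steel"
    if pct_carbon <= 0.6:
        return "medium carbon steel"
    if pct_carbon <= 1.25:
        return "high carbon (spring) steel"
    if pct_carbon <= 2.0:
        return "ultra-high-carbon steel"
    return "cast (pig) iron"

for pct in (0.02, 0.15, 0.45, 0.9, 1.5, 3.0):
    print(f"{pct:>4}% C -> {steel_grade(pct)}")
```

Note how narrow the “good stuff” window is: a smith working by eye and feel had to land inside roughly half a percentage point of carbon to get spring steel rather than something too soft or too brittle.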

Bret Devereaux, “Collections: Iron, How Did They Make It, Part IVa: Steel Yourself”, A Collection of Unmitigated Pedantry, 2020-10-09.

October 21, 2023

Magic In Metal (1969)

Filed under: Britain, Business, History — Tags: , , , , — Nicholas @ 02:00

PauliosVids
Published 15 Dec 2018

From the British Motor Corporation Ltd (BMC).

October 10, 2023

QotD: The production of charcoal in pre-industrial societies

Filed under: Europe, History, Quotations, Technology — Tags: , , , , — Nicholas @ 01:00

Wood, even when dried, contains quite a bit of water and volatile compounds; the former slows the rate of combustion and absorbs the energy, while the latter combusts incompletely, throwing off soot and smoke which contains carbon which would burn, if it had still been in the fire. All of that limits the burning temperature of wood; common woods often burn at most around 800-900°C, which isn’t enough for the tasks we are going to put it to.

Charcoaling solves this problem. By heating the wood in conditions where there isn’t enough air for it to actually ignite and burn, the water is all boiled off and the remaining solid material reduced to lumps of pure carbon, which will burn much hotter (in excess of 1,150°C, which is the target for a bloomery). Moreover, as more or less pure carbon lumps, the charcoal doesn’t have bunches of impurities which might foul our iron (like the sulfur common in mineral coal).

That said, this is a tricky process. The wood needs to be heated to around 300-350°C, well above its ignition temperature, but mostly kept from actually burning by lack of oxygen (if you let oxygen in, the wood is going to burn away all of its carbon to CO2, which will, among other things, cause you to miss your emissions target and also remove all of the carbon you need to actually have charcoal), which in practice means the pile needs some oxygen to maintain enough combustion to keep the heat correct, but not so much that it bursts into flame, nor so little that it is totally extinguished. The method for doing this changed little from the ancient world to the medieval period; the systems described by Pliny (NH 16.8.23) and Theophrastus (HP 5.9.4) are the same method we see used in the early modern period.

First, the wood is cut and sawn into logs of fairly moderate size. Branches are removed; the logs need to be straight and smooth because they need to be packed very densely. They are then assembled into a conical pile, with a hollow center shaft; the pile is sometimes dug down into the ground, sometimes assembled at ground-level (as a fun quirk of the ancient evidence, the Latin-language sources generally think of above-ground charcoaling, whereas the Greek-language sources tend to assume a shallow pit is used). The wood pile is then covered in a clay structure referred to as a charcoal kiln; this is not a permanent structure, but is instead reconstructed for each charcoal burning. Finally, the hollow center is filled with brushwood or wood-chips to provide the fuel for the actual combustion; this fuel is lit and the shaft almost entirely sealed by an air-tight layer of earth.

The fuel ignites and begins consuming the oxygen from the interior of the kiln, heating the wood while also stealing the oxygen the wood needs to combust itself. The charcoal burner (often called a collier; before that term meant “coal miner” it meant “charcoal burner”) manages the charcoal pile through the process by watching the smoke it emits and using its color to gauge the level of combustion (dark, sooty smoke would indicate that the process wasn’t yet done, while white smoke meant that the combustion was now happening “clean”, indicating that the carbonization was finished). The burner can then influence the process by either puncturing or sealing holes in the kiln to increase or decrease airflow, working to achieve a balance where there is just enough oxygen to keep the fuel burning, but not enough that the wood catches fire in earnest. A decent sized kiln typically took about six to eight days to complete the carbonization process. Once it cooled, the kiln could be broken open and the pile of effectively pure carbon extracted.

Raw charcoal generally has to be made fairly close to the point of use, because the mass of carbon is so friable that it is difficult to transport it very far. Modern charcoal (like the cooking charcoal one may get for a grill) is pressed into briquettes using binders, originally wet clay and later tar or pitch, to make compact, non-friable bricks. This kind of packing seems to have originated with coal-mining; I can find no evidence of its use in the ancient or medieval period with charcoal. As a result, smelting operations, which require truly prodigious amounts of charcoal, had to take place near supplies of wood; Sim and Ridge (op cit.) note that transport beyond 5-6km would degrade the charcoal so badly as to make it worthless; distances below 4km seem to have been more typical. Moving the pre-burned wood was also undesirable because so much material was lost in the charcoaling process, making moving green wood grossly inefficient. Consequently, for instance, we know that when Roman iron-working operations on Elba exhausted the wood supplies there, the iron ore was moved by ship to Populonia, on the coast of Italy, to be smelted closer to the wood supply.

It is worth getting a sense of the overall efficiency of this process. Modern charcoaling is more efficient and can often get yields (that is, the mass of the charcoal when compared to the mass of the wood) as high as 40%, but ancient and medieval charcoaling was far less efficient. Sim and Ridge (op cit.) note ratios of initial wood mass to final charcoal mass ranging from 4:1 to 12:1 (or 25% to 8.3% efficiency), with 7:1 being a typical average (14%).

We can actually get a sense of the labor intensity of this job. Sim and Ridge (op cit.) note that a skilled wood-cutter can cut about a cord of wood in a day, in optimal conditions; a cord is a volume measure, but most woods mass around 4,000lbs (1,814kg) per cord. Constructing the kiln and moving the wood is also likely to take time and while more than one charcoal kiln can be running at once, the operator has to stay with them (and thus cannot be cutting any wood, though a larger operation with multiple assistants might). A single-man operation thus might need 8-10 days to charcoal a cord of wood, which would in turn produce something like 560lbs (253.96kg) of charcoal. A larger operation which has both dedicated wood-cutters and colliers running multiple kilns might be able to cut the man-days-per-cord down to something like 3 or 4, potentially doubling or tripling output (but requiring a number of additional workers). In short, by and large our sources suggest this was a fairly labor intensive job in order to produce sufficient amounts of charcoal for iron production of any scale.
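The yield arithmetic above is easy to check. This sketch reproduces the quoted figures, taking a cord at roughly 1,814 kg (the small difference from the quoted ~254 kg per cord is just rounding in the source):

```python
CORD_KG = 1814              # ~4,000 lbs, typical mass of a cord of wood

def charcoal_from(wood_kg: float, ratio: float) -> float:
    """Charcoal mass at a given wood-to-charcoal ratio (7 means 7:1)."""
    return wood_kg / ratio

# Efficiency across the reported range of ratios
for ratio in (4, 7, 12):
    print(f"{ratio}:1 ratio -> {100 / ratio:.1f}% yield by mass")

# One cord of wood at the typical 7:1 ratio
print(f"one cord -> {charcoal_from(CORD_KG, 7):.0f} kg of charcoal")
```

At roughly a man-week of labor per 260 kg of charcoal, and with smelting consuming charcoal by the ton, the claim that fuel production dominated the labor budget of pre-modern ironmaking follows directly from these numbers.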

Bret Devereaux, “Iron, How Did They Make It? Part II, Trees for Blooms”, A Collection of Unmitigated Pedantry, 2020-09-25.

September 5, 2023

A Tool Nerd’s Dream – Lee Valley & Veritas Manufacturing Plant Tour

Filed under: Business, Cancon, Tools, Woodworking — Tags: , , , — Nicholas @ 02:00

Bat Cave Creations
Published 29 Apr 2023

In this video we tour the Lee Valley & Veritas Manufacturing Plant. We get to see how Planes, Chisels, Tenon Cutters, and Drill Bits are made. This tour made me appreciate these amazing tools and hand planes even more!
