Quotulatiousness

December 17, 2024

The rejection-in-advance of Bovaer as a “climate-friendly” “solution” to the “problem” of climate change

At Watts Up With That?, Charles Rotter documents yet another imposed-from-above bright idea that consumers are already eager to reject:

When global elites and bureaucrats decide they must “fix” the world, the results often speak for themselves. Take the latest technocratic debacle: Bovaer, a feed additive designed to reduce methane emissions from cows, marketed as a “climate-friendly” solution. It’s now being shelved by Norwegian dairy producer Q-Meieriene after consumers flatly rejected its so-called “climate milk”.

This is more than a simple story of market rejection. It’s a cautionary tale of what happens when governments, corporations, and globalists push policies and products that tamper with the food supply to address a problem that may not even exist.

The Quest to Solve a “Crisis”

Bovaer, developed by DSM-Firmenich, has been touted as a game-changer in the fight against methane emissions — a major target of climate policies. The additive is said to suppress a key enzyme in the cow’s digestive process, reducing methane emissions by up to 30%. Regulatory bodies in over 68 countries, including the EU, Australia, and the U.S., have approved its use.

But let’s step back for a moment. Why are we targeting cow burps and farts in the first place? Methane is indeed a greenhouse gas, but it’s also a short-lived one that breaks down in the atmosphere within about a decade. Moreover, cows and bison have been emitting methane for millennia without triggering apocalyptic climate shifts. Yet suddenly, livestock emissions are treated as a planetary emergency demanding immediate action.

This myopic focus on cow methane is a prime example of how climate zealotry warps priorities. Rather than addressing real and immediate issues — like the energy crises their own policies create — governments and globalists have decided to micromanage how your milk is produced, all to reduce emissions by an imperceptible fraction of a percentage point.

Consumer Rebellion

The backlash against Bovaer has been swift and fierce. In Norway, Q-Meieriene began using the additive in 2023, branding the resulting product as “climate milk”. The response? Consumers overwhelmingly rejected it, leaving supermarket shelves stocked with unsold cartons while Bovaer-free milk flew off the shelves.

Facing dismal sales, Q-Meieriene recently announced it would discontinue the use of Bovaer, stating:

This is not merely a marketing failure. It reflects a broader consumer revolt against the technocratic imposition of “solutions” no one asked for. People are increasingly skeptical of being told that their daily choices — what they eat, how they travel, how they heat their homes — must be sacrificed on the altar of climate orthodoxy.

December 12, 2024

CHEVROLET with Cartoonist Rube Goldberg: Something for Nothing (1940)

Filed under: History, Technology, USA — Nicholas @ 02:00

Charlie Dean Archives
Published Aug 27, 2013

Cartoonist Rube Goldberg creates a little animation to explain how fuel is converted to power in the modern automobile engine.

CharlieDeanArchives – Archive footage from the 20th century making history come alive!

September 16, 2024

QotD: The origins of Marmite

Filed under: Britain, Business, Food, Quotations — Nicholas @ 01:00

The story of Marmite begins in the late 19th century when a German scientist, Justus Freiherr von Liebig, discovered that the waste product from yeast used in brewing beer could be made into a meaty-flavoured paste which was completely vegetarian. He also produced bouillon, a meat extract which kept well in jars without needing refrigeration. This eventually became the product known as Oxo.

In 1902 the Marmite Food Extract Company was formed in Burton upon Trent, two miles from the Bass brewery which had been there since 1777. Yeast is a single-cell fungus originally isolated from the skin of grapes, used in brewing, winemaking and baking since ancient times. I have read somewhere that the yeast Bass used was descended from the original batch employed since its inception, endlessly reproducing itself right up to the present time.

The waste product from brewing was transported to the Marmite factory, where salt, enzymes and water were added to the slurry before it was simmered for several hours then poured into vats ready for bottling.

The product was an instant hit and within five years a second factory had to be built in Camberwell Green, south London. Marmite was given a huge boost with the discovery of vitamins. It was found to be a rich source of vitamin B, deficiency of which was responsible for the condition beriberi which afflicted British troops during the Great War. They were subsequently issued with Marmite as part of their rations. In the 1930s the folic-acid-rich product was used to treat anaemia in Bombay mill workers, and malnutrition during a malaria epidemic in Ceylon, now Sri Lanka.

Alan Ashworth, “That Reminds Me: My mate Marmite”, The Conservative Woman, 2024-06-05.

May 7, 2024

But Carbon Dioxide is scary, m’kay?

Last week, Chris Morrison shared some charts that show atmospheric carbon dioxide to be nowhere near high enough to be a concern … in fact, compared to ancient atmospheric conditions, CO2 may be at a potentially concerning low point:

Last year, Chris Packham hosted a five-part series on the BBC called Earth, which compared a mass extinction event 252 million years ago to the small rise in atmospheric carbon dioxide seen in the last 150 years. He said he hoped the “terror factor” generated by his programme would “spur us to do something about the environment crisis”. But as we shall see, the only terror factor is having to sit through an hour-long film consisting of cherry-picked science data and unproven assertions in the hope of persuading us that the increase in global temperatures in the last 150 years or so is comparable to the rise in temperatures over a considerable swath of geological time. Great play was made of a 12°C rise in average global temperatures 252 million years ago as CO2 levels started to rise, although Packham fails to report that CO2 levels were already at least four times higher back then than in modern times. The “science” that Packham cloaks himself with on every occasion is hardly served by terrorising the viewer with what is little more than a highly personal political message.

Think of all that suffering and wastage, he says about the fourth great mass extinction. I don’t think we want a comparable extinction to the one that happened 252 million years ago on our conscience, he adds. Of course, Packham is not the first person to politicise the end-Permian extinction when most plant and animal life disappeared to be replaced eventually with what became known as the age of the dinosaurs. As we can see from the graph below, even though that extinction event coincided with an uptick in CO2 levels, the general trend over a 600-million-year period was downwards ending in the near denudation currently experienced today. But scientists note that the rise started some time before the extinction event, with most of the Permian characterised by very low levels of CO2.

It is obvious why the three other great extinctions are of little interest to modern day climate alarmists. The Ordovician extinction 445 million years ago occurred when CO2 levels were 12 times higher than today, the Devonian wipe-out happened 372 million years ago when CO2 levels were falling, while the later Triassic/Jurassic event 201 million years ago occurred at a time of stable CO2. Hard to see a pattern there suggesting rising CO2 levels equals a mass extinction event. The disappearance of the dinosaurs 66 million years ago is generally attributed to the impact of a giant meteorite, while the current sixth mass extinction exists only inside the head of the Swedish doom goblin, and need not detain us at this point.

Since Packham was essentially making a BBC political film promoting Net Zero, he inevitably started with the fixed view that all our current environmental problems are the fault of CO2. An intense period of volcanic eruptions that led to huge coal deposits catching fire increased CO2 levels and almost instantly sent temperatures soaring at the end of the Permian period. About 20 million years of rain subsequently followed, he observed, taking some of the CO2 out of the atmosphere, and order, it seems, was restored. Certainly, CO2 resumed a small descent but levels remained almost as high as, or for some periods higher than, those at the end of the Permian period for another 120 million years. Packham does not provide an explanation of what happened to the average global temperature at this time.

The graph above shows why he avoided the subject. Temperatures did rise at the end of the Permian period after a long decline, but only as far as previous highs recorded 200 million years earlier. They then stayed at those levels for most of the next 200 million years, throughout the age of the dinosaurs. Helped by the increased levels of CO2, this is considered one of the most verdant periods in Earth’s history.

March 9, 2024

Salt – mundane, boring … and utterly essential

Filed under: Books, Economics, Food, Health, History — Nicholas @ 05:00

In the latest Age of Invention newsletter, Anton Howes looks at the importance of salt in history:

There was a product in the seventeenth century that was universally considered a necessity as important as grain and fuel. Controlling the source of this product was one of the first priorities for many a military campaign, and sometimes even a motivation for starting a war. Improvements to the preparation and uses of this product would have increased population size and would have had a general and noticeable impact on people’s living standards. And this product underwent dramatic changes in the seventeenth and eighteenth centuries, becoming an obsession for many inventors and industrialists, while seemingly not featuring in many estimates of historical economic output or growth at all.

The product is salt.

Making salt does not seem, at first glance, all that interesting as an industry. Even ninety years ago, when salt was proportionately a much larger industry in terms of employment, consumption, and economic output, the author of a book on the history of salt-making noted how a friend had advised keeping the word salt out of the title, “for people won’t believe it can ever have been important”.1 The bestselling Salt: A World History by Mark Kurlansky, published over twenty years ago, actively leaned into the idea that salt was boring, becoming so popular because it created such a surprisingly compelling narrative around an article that most people consider commonplace. (Kurlansky, it turns out, is behind essentially all of those one-word titles on the seemingly prosaic: cod, milk, paper, and even oysters).

But salt used to be important in a way that’s almost impossible to fully appreciate today.

Try to consider what life was like just a few hundred years ago, when food and drink alone accounted for 75-85% of the typical household’s spending — compared to just 10-15%, in much of the developed world today, and under 50% in all but a handful of even the very poorest countries. Anything that improved food and drink, even a little bit, was thus a very big deal. This might be said for all sorts of things — sugar, spices, herbs, new cooking methods — but salt was more like a general-purpose technology: something that enhances the natural flavours of all and any foods. Using salt, and using it well, is what makes all the difference to cooking, whether that’s judging the perfect amount for pasta water, or remembering to massage it into the turkey the night before Christmas. As chef Samin Nosrat puts it, “salt has a greater impact on flavour than any other ingredient. Learn to use it well, and food will taste good”. Or to quote the anonymous 1612 author of A Theological and Philosophical Treatise of the Nature and Goodness of Salt, salt is that which “gives all things their own true taste and perfect relish”. Salt is not just salty, like sugar is sweet or lemon is sour. Salt is the universal flavour enhancer, or as our 1612 author put it, “the seasoner of all things”.

Making food taste better was thus an especially big deal for people’s living standards, but I’ve never seen any attempt to chart salt’s historical effects on them. To put it in unsentimental economic terms, better access to salt effectively increased the productivity of agriculture — adding salt improved the eventual value of farmers’ and fishers’ produce — at a time when agriculture made up the vast majority of economic activity and employment. Before 1600, agriculture alone employed about two thirds of the English workforce, not to mention the millers, butchers, bakers, brewers and assorted others who transformed seeds into sustenance. Any improvements to the treatment or processing of food and drink would have been hugely significant — something difficult to fathom when agriculture accounts for barely 1% of economic activity in most developed economies today. (Where are all the innovative bakers in our history books?! They existed, but have been largely forgotten.)

And so far we’ve only mentioned salt’s direct effects on the tongue. It also increased the efficiency of agriculture by making food last longer. Properly salted flesh and fish could last for many months, sometimes even years. Salting reduced food waste — again consider just how much bigger a deal this used to be — and extended the range at which food could be transported, providing a whole host of other advantages. Salted provisions allowed sailors to cross oceans, cities to outlast sieges, and armies to go on longer campaigns. Salt’s preservative properties bordered on the necromantic: “it delivers dead bodies from corruption, and as a second soul enters into them and preserves them … from putrefaction, as the soul did when they were alive”.2

Because of salt’s preservative properties, many believed that salt had a crucial connection with life itself. The fluids associated with life — blood, sweat and tears — are all salty. And nowhere seemed to be more teeming with life than the open ocean. At a time when many believed in the spontaneous generation of many animals from inanimate matter, like mice from wheat or maggots from meat, this seemed a more convincing point. No house was said to generate as many rats as a ship passing over the salty sea, while no ship was said to have more rats than one whose cargo was salt.3 Salt seemed to have a kind of multiplying effect on life: something that could be applied not only to seasoning and preserving food, but to growing it.

Livestock, for example, were often fed salt: in Poland, thanks to the Wieliczka salt mines, great stones of salt lay all through the streets of Krakow and the surrounding villages so that “the cattle, passing to and fro, lick of those salt-stones”.4 Cheshire in north-west England, with salt springs at Nantwich, Middlewich and Northwich, has been known for at least half a millennium for its cheese: salt was an essential dietary supplement for the milch cows, also making it (less famously) one of the major production centres for England’s butter, too. In 1790s Bengal, where the East India Company monopolised salt and thereby suppressed its supply, one of the company’s own officials commented on the major effect this had on the region’s agricultural output: “I know nothing in which the rural economy of this country appears more defective than in the care and breed of cattle destined for tillage. Were the people able to give them a proper quantity of salt, they would … probably acquire greater strength and a larger size.”5 And for anyone keeping pigeons, great lumps of baked salt were placed in dovecotes to attract them and keep them coming back, while the dung of salt-eating pigeons, chickens, and other kept birds was considered an excellent fertiliser.6


    1. Edward Hughes, Studies in Administration and Finance 1558 – 1825, with Special Reference to the History of Salt Taxation in England (Manchester University Press, 1934), p.2

    2. Anon., Theological and philosophical treatise of the nature and goodness of salt (1612), p.12

    3. Blaise de Vigenère (trans. Edward Stephens), A Discovrse of Fire and Salt, discovering many secret mysteries, as well philosophical, as theological (1649), p.161

    4. “A relation, concerning the Sal-Gemme-Mines in Poland”, Philosophical Transactions of the Royal Society of London 5, 61 (July 1670), p.2001

    5. Quoted in H. R. C. Wright, “Reforms in the Bengal Salt Monopoly, 1786-95”, Studies in Romanticism 1, no. 3 (1962), p.151

    6. Gervase Markham, Markhams farwell to husbandry or, The inriching of all sorts of barren and sterill grounds in our kingdome (1620), p.22

February 7, 2024

A disturbing proportion of scientific publishing is … bullshit

Tim Worstall on a few of the more upsetting details of how much we’ve been able to depend on truth and testability in the scientific community and how badly that’s been undermined in recent years:

The Observer tells us that science itself is becoming polluted by journal mills. Fools — intellectual thieves perhaps — are publishing nonsense in scientific journals, this then pollutes the conclusions reached by people surveying science to see what’s what.

This is true and is a problem. But it’s what people publish as supposedly real science that is the real problem here, not just those obvious cases they’re complaining about:

    The startling rise in the publication of sham science papers has its roots in China, where young doctors and scientists seeking promotion were required to have published scientific papers. Shadow organisations – known as “paper mills” – began to supply fabricated work for publication in journals there.

    The practice has since spread to India, Iran, Russia, former Soviet Union states and eastern Europe, with paper mills supplying fabricated studies to more and more journals as increasing numbers of young scientists try to boost their careers by claiming false research experience. In some cases, journal editors have been bribed to accept articles, while paper mills have managed to establish their own agents as guest editors who then allow reams of falsified work to be published.

Indeed, an actual and real problem:

    The products of paper mills often look like regular articles but are based on templates in which names of genes or diseases are slotted in at random among fictitious tables and figures. Worryingly, these articles can then get incorporated into large databases used by those working on drug discovery.

    Others are more bizarre and include research unrelated to a journal’s field, making it clear that no peer review has taken place in relation to that article. An example is a paper on Marxist ideology that appeared in the journal Computational and Mathematical Methods in Medicine. Others are distinctive because of the strange language they use, including references to “bosom peril” rather than breast cancer and “Parkinson’s ailment” rather than Parkinson’s disease.

Quite. But the problem is worse, much, much, worse.

Let us turn to something we all can agree is of some importance. Those critical minerals things. We all agree that we’re going to be using more of them in the future. Largely because the whole renewables thing is changing the minerals we use to power the world. We’re — to some extent, perhaps enough, perhaps not enough — moving from using fossil fuels to power the world to using rare earths, silicon, copper and so on to power the world. How much there is, how much useable, of those minerals is important. Because that’s what we’re doing, we’re changing which minerals — from fossil to metallic elements — we use to power the world.

Those estimates of how much there is out there are therefore important. The European Union, for example, has innumerable reports and task forces working on the basis that there’s not that much out there and therefore we’ve got to recycle everything. One of those foundational blocks of the circular economy is that we’ve got to do it anyway. Because there’s simply not enough to be able to power society without the circular economy.

This argument is nads*. The circular economy might be desirable for other reasons. At least in part it’s very sensible too – if it’s cheaper to recycle than to dig up new then of course we should recycle. But the claim that we must recycle, regardless of the cost, because otherwise supply will vanish is just nads*.

But, folk will and do say, if we look at the actual science here we are short of these minerals and metals. Therefore etc. But it’s the science that has become infected. Wrongly infected, infested even.

Here’s the Royal Society of Chemistry and their periodic table. You need to click around a bit to see this but they have hafnium supply risk as “unknown”. That’s at least an advance from their previous insistence that it was at high supply risk. It isn’t, there’s more hafnium out there than we can shake a stick at. At current consumption rates — and assuming no recycling at all which, with hafnium, isn’t all that odd an idea — we’re going to run out sometime around the expected date for the heat death of the universe. No, not run out of the universe’s hafnium, run out of what we’ve got in the lithosphere of our own Earth. To a reasonable and rough measure the entirety of Cornwall is 0.01% hafnium. We happily mine for gold at 0.0001% concentrations and we use less hafnium annually than we do gold.
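
As a rough check on the arithmetic behind that claim, here is a minimal sketch in Python. The two concentrations are the ones quoted above (0.01% hafnium in Cornish rock, 0.0001% gold in ore mined profitably); everything else is illustrative and not a resource estimate:

    # Comparing the two ore grades quoted above. The percentages come from
    # the text; the comparison is illustrative arithmetic, not a resource estimate.
    hafnium_grade = 0.01 / 100     # 0.01% hafnium by mass (Cornwall, as quoted)
    gold_grade    = 0.0001 / 100   # 0.0001% gold by mass (profitably mined ore, as quoted)

    ratio = hafnium_grade / gold_grade
    print(f"Hafnium in Cornish rock is ~{ratio:.0f}x more concentrated "
          "than the gold in ores we already mine at a profit.")
    # -> ~100x

On those quoted numbers alone, hafnium sits in ordinary Cornish rock at a hundred times the concentration at which we already find it worthwhile to mine gold.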

The RSC also says that gallium and germanium have a high supply risk. Can you guess which bodily part(s) such a claim should be associated with? For gallium we already have a thousand year supply booked to pass through the plants we normally use to extract our gallium for us. For germanium I — although someone competent would be a better choice — could build you a plant to supply 2 to 4% of global annual germanium demand/supply. Take about 6 months and cost $10 million even at government contracting rates to do it too. The raw material would be fly ash from coal burning and there’s no shortage of that — hundreds of such plants could be constructed that is.

The idea that humanity is, in anything like the likely timespan of our species, going to run short in absolute terms of Hf, Ga or Ge is just the utmost nads*.

But the American Chemical Society says the same thing:


    * As ever, we are polite around here. Therefore we use the English euphemism “nads”, a shortening of “nadgers”, for the real meaning of “bollocks”.

January 3, 2024

QotD: Iron and steel

Filed under: History, Quotations, Science, Technology — Nicholas @ 01:00

I don’t want to get too bogged down in the exact chemistry of how the introduction of carbon changes the metallic matrix of the iron; you are welcome to read about it. As the carbon content of the iron increases, the iron’s basic characteristics – its ductility and hardness (among others) – change. Pure iron, when it takes a heavy impact, tends to deform (bend) to absorb that impact (it is ductile and soft). Increasing the carbon-content makes the iron harder, causing it to both resist bending more and also to hold an edge better (hardness is the key characteristic for holding an edge through use). In the right amount, the steel is springy, bending to absorb impacts but rapidly returning to its original shape. But too much carbon and the steel becomes too hard and not ductile enough, causing it to become brittle.

Compared to the other materials available for tools and weapons, high carbon “spring steel” was essentially the super-material of the pre-modern world. High carbon steel is dramatically harder than iron, such that a good steel blade will bite – often surprisingly deeply – into an iron blade without much damage to itself. Moreover, good steel can take fairly high energy impacts and simply bend to absorb the energy before springing back into its original shape (rather than, as with iron, having plastic deformation, where it bends, but doesn’t bend back – which is still better than breaking, but not much). And for armor, you may recall from our previous look at arrow penetration, a steel plate’s ability to resist puncture is much higher than the same plate made of iron (bronze, by the by, performs about as well as iron, assuming both are work hardened). Of course, different applications still prefer different carbon contents; armor, for instance, tended to benefit from somewhat lower carbon content than a sword blade.

It is sometimes contended that the ancients did not know the difference between iron and steel. This is mostly a philological argument based on the infrequency of a technical distinction between the two in ancient languages. Latin authors will frequently use ferrum (iron) to mean both iron and steel; Greek will use σίδηρος (sideros, “iron”) much the same way. The problem here is that high literature in the ancient world – which is almost all of the literature we have – has a strong aversion to technical terms in general; it would do no good for an elite writer to display knowledge more becoming to a tradesman than a senator. That said in a handful of spots, Latin authors use chalybs (from the Greek χάλυψ) to mean steel, as distinct from iron.

More to the point, while our elite authors – who are, at most, dilettantish observers of metallurgy, never active participants – may or may not know the difference, ancient artisans clearly did. As Tylecote (op. cit.) notes, we see surface carburization on tools as early as 1000 B.C. in the Levant and Egypt, although the extent of its use and intentionality is hard to gauge due to rust and damage. There is no such problem with Gallic metallurgy from at least the La Tène period (450 BCE – 50 B.C.) or Roman metallurgy from c. 200 B.C., because we see evidence of smiths quite deliberately varying carbon content over the different parts of sword-blades (more carbon in the edges, less in the core) through pattern welding, which itself can leave a tell-tale “streaky” appearance to the blade (these streaks can be faked, but there’s little point in faking them if they are not already understood to signify a better weapon). There can be little doubt that the smith who welds a steel edge to an iron core to make a sword blade understands that there is something different about that edge (especially since he cannot, as we can, precisely test the hardness of the two every time – he must know a method that generally produces harder metal and be working from that assumption; high carbon steel, properly produced, can be much harder than iron, as we’ll see).

That said, our ancient – or even medieval – smiths do not understand the chemistry of all of this, of course. Understanding the effects of carburization and how to harness that to make better tools must have been something learned through experience and experimentation, not from theoretical knowledge – a thing passed from master to apprentice, with only slight modification in each generation (though it is equally clear that techniques could move quite quickly over cultural boundaries, since smiths with an inferior technique need only imitate a superior one).

Bret Devereaux, “Collections: Iron, How Did They Make It, Part IVa: Steel Yourself”, A Collection of Unmitigated Pedantry, 2020-10-09.

November 9, 2023

How they saved the holes in Swiss cheese

Filed under: Europe, Food, Science — Nicholas @ 02:00

Tom Scott
Published 1 May 2023

Agroscope is a Swiss government-backed agricultural research lab. It’s got a lot of other research projects too, but it also keeps a backup of the Swiss cheese bacterial cultures… just in käse.

October 10, 2023

QotD: The production of charcoal in pre-industrial societies

Filed under: Europe, History, Quotations, Technology — Nicholas @ 01:00

Wood, even when dried, contains quite a bit of water and volatile compounds; the former slows the rate of combustion and absorbs the energy, while the latter combusts incompletely, throwing off soot and smoke which contains carbon which would burn, if it had still been in the fire. All of that limits the burning temperature of wood; common woods often burn at most around 800-900°C, which isn’t enough for the tasks we are going to put it to.

Charcoaling solves this problem. By heating the wood in conditions where there isn’t enough air for it to actually ignite and burn, the water is all boiled off and the remaining solid material reduced to lumps of pure carbon, which will burn much hotter (in excess of 1,150°C, which is the target for a bloomery). Moreover, as more or less pure carbon lumps, the charcoal doesn’t have bunches of impurities which might foul our iron (like the sulfur common in mineral coal).

That said, this is a tricky process. The wood needs to be heated around 300-350°C, well above its ignition temperature, but mostly kept from actually burning by lack of oxygen (if you let oxygen in, the wood is going to burn away all of its carbon to CO2, which will, among other things, cause you to miss your emissions target and also remove all of the carbon you need to actually have charcoal), which in practice means the pile needs some oxygen to maintain enough combustion to keep the heat correct, but not so much that it bursts into flame, nor so little that it is totally extinguished. The method for doing this changed little from the ancient world to the medieval period; the systems described by Pliny (NH 16.8.23) and Theophrastus (HP 5.9.4) are the same method we see used in the early modern period.

First, the wood is cut and sawn into logs of fairly moderate size. Branches are removed; the logs need to be straight and smooth because they need to be packed very densely. They are then assembled into a conical pile, with a hollow center shaft; the pile is sometimes dug down into the ground, sometimes assembled at ground-level (as a fun quirk of the ancient evidence, the Latin-language sources generally think of above-ground charcoaling, whereas the Greek-language sources tend to assume a shallow pit is used). The wood pile is then covered in a clay structure referred to as a charcoal kiln; this is not a permanent structure, but is instead reconstructed for each charcoal burning. Finally, the hollow center is filled with brushwood or wood-chips to provide the fuel for the actual combustion; this fuel is lit and the shaft almost entirely sealed by an air-tight layer of earth.

The fuel ignites and begins consuming the oxygen from the interior of the kiln, both heating the wood but also stealing the oxygen the wood needs to combust itself. The charcoal burner (often called a collier; before that term meant “coal miner” it meant “charcoal burner”) manages the charcoal pile through the process by watching the smoke it emits and using its color to gauge the level of combustion (dark, sooty smoke would indicate that the process wasn’t yet done, while white smoke meant that the combustion was now happening “clean”, indicating that the carbonization was finished). The burner can then influence the process by either puncturing or sealing holes in the kiln to increase or decrease airflow, working to achieve a balance where there is just enough oxygen to keep the fuel burning, but not enough that the wood catches fire in earnest. A decent sized kiln typically took about six to eight days to complete the carbonization process. Once it cooled, the kiln could be broken open and the pile of effectively pure carbon extracted.

Raw charcoal generally has to be made fairly close to the point of use, because the mass of carbon is so friable that it is difficult to transport it very far. Modern charcoal (like the cooking charcoal one may get for a grill) is pressed into briquettes using binders, originally using wet clay and later tar or pitch, to make compact, non-friable bricks. This kind of packing seems to have originated with coal-mining; I can find no evidence of its use in the ancient or medieval period with charcoal. As a result, smelting operations, which require truly prodigious amounts of charcoal, had to take place near supplies of wood; Sim and Ridge (op cit.) note that transport beyond 5-6km would degrade the charcoal so badly as to make it worthless; distances below 4km seem to have been more typical. Moving the pre-burned wood was also undesirable because so much material was lost in the charcoaling process, making moving green wood grossly inefficient. Consequently, for instance, we know that when Roman iron-working operations on Elba exhausted the wood supplies there, the iron ore was moved by ship to Populonia, on the coast of Italy to be smelted closer to the wood supply.

It is worth getting a sense of the overall efficiency of this process. Modern charcoaling is more efficient and can often get yields (that is, the mass of the charcoal when compared to the mass of the wood) as high as 40%, but ancient and medieval charcoaling was far less efficient. Sim and Ridge (op cit.) note ratios of initial-mass to the final charcoal ranging from 4:1 to 12:1 (or 25% to 8.3% efficiency), with 7:1 being a typical average (14%).

We can actually get a sense of the labor intensity of this job. Sim and Ridge (op cit.) note that a skilled wood-cutter can cut about a cord of wood in a day, in optimal conditions; a cord is a volume measure, but most woods mass around 4,000lbs (1,814kg) per cord. Constructing the kiln and moving the wood is also likely to take time and while more than one charcoal kiln can be running at once, the operator has to stay with them (and thus cannot be cutting any wood, though a larger operation with multiple assistants might). A single-man operation thus might need 8-10 days to charcoal a cord of wood, which would in turn produce something like 560lbs (253.96kg) of charcoal. A larger operation which has both dedicated wood-cutters and colliers running multiple kilns might be able to cut the man-days-per-cord down to something like 3 or 4, potentially doubling or tripling output (but requiring a number more workers). In short, by and large our sources suggest this was a fairly labor intensive job in order to produce sufficient amounts of charcoal for iron production of any scale.

Bret Devereaux, “Iron, How Did They Make It? Part II, Trees for Blooms”, A Collection of Unmitigated Pedantry, 2020-09-25.
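
The arithmetic in that last passage is easy to check. A minimal sketch in Python, using only the figures Devereaux quotes from Sim and Ridge (a roughly 4,000 lb cord, a typical 7:1 wood-to-charcoal ratio, 8-10 man-days per cord for a lone collier, 3-4 for a larger operation):

    # Reproducing the charcoal arithmetic above, using only the quoted figures.
    LB_TO_KG = 0.4536

    cord_mass_lb = 4000      # mass of a cord of wood, as quoted
    yield_ratio  = 7         # typical wood-to-charcoal mass ratio (7:1)

    charcoal_lb = cord_mass_lb / yield_ratio
    print(f"Charcoal per cord: ~{charcoal_lb:.0f} lb (~{charcoal_lb * LB_TO_KG:.0f} kg)")
    # -> ~571 lb (~259 kg); the passage rounds this to ~560 lb, i.e. 14% of 4,000 lb

    # Output per man-day: a lone collier needs 8-10 days per cord, a larger
    # operation gets down to 3-4 man-days per cord.
    for man_days in (10, 8, 4, 3):
        print(f"  {man_days} man-days/cord -> ~{charcoal_lb / man_days:.0f} lb charcoal per man-day")

On those figures the yield works out to somewhere between roughly 60 and 190 lb of charcoal per man-day, which is why, as the passage concludes, supplying iron production of any scale was such a labour-hungry business.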

October 3, 2023

“Just play safe” is difficult when the definition of “safe” is uncertain

Filed under: Food, Health — Nicholas @ 04:00

David Friedman on the difficulty of “playing safe”:

It’s a no brainer. Just play safe

It is a common argument in many different contexts. In its strongest form, the claim is that the choice being argued for is unambiguously right, eliminating the possibility of a bad outcome at no cost. More plausibly, the claim is that one can trade the risk of something very bad for a certainty of something only a little bad. By agreeing to pay the insurance company a hundred dollars a year now you can make sure that if your house burns down you will have the money to replace it.

Doing that is sometimes possible but, in an uncertain world, often not; you do not, cannot, know all the consequences of what you are doing. You may be exchanging the known risk of one bad outcome for the unknown risk of another.

Some examples:

Erythritol

Erythritol was the best of the sugar alcohols: it substitutes tolerably well for sugar in cooking and has almost zero calories or glycemic load. For anyone worried about diabetes or obesity, using it instead of sugar is an obvious win. Diabetes and obesity are dangerous, sometimes life threatening.

Just play safe.

I did. Until research came out offering evidence that it was not the best sugar alcohol but the worst:

    People with the highest erythritol levels (top 25%) were about twice as likely to have cardiovascular events over three years of follow-up as those with the lowest (bottom 25%). (Erythritol and cardiovascular events, NIH)

A single article might turn out to be wrong, of course; to be confident that erythritol is dangerous requires more research. But a single article was enough to tell me that using erythritol was not playing safe. I threw out the erythritol I had, then discovered that all the brands of “keto ice cream” — I was on a low glycemic diet and foods low in carbohydrates are also low in glycemic load — used erythritol as their sugar substitute.

Frozen bananas, put through a food processor or super blender along with a couple of ice cubes and some milk, cream, or yogurt, make a pretty good ice cream substitute.1 Or eat ice cream and keep down your weight or glycemic load by eating less of something else.

It’s safer.

Lethal Caution: The Butter/Margarine Story

For quite a long time the standard nutritional advice was to replace butter with margarine, eliminating the saturated fat that caused high cholesterol and hence heart attacks. It turned out to be very bad advice. Saturated fats may be bad for you — the jury is still out on that, with one recent survey of the evidence concluding that they have no effect on overall mortality — but transfats are much worse. The margarine we were told to switch to was largely transfats.2

“Consumption of trans unsaturated fatty acids, however, was associated with a 34% increase in all cause mortality”3

If that figure is correct, the nutritional advice we were given for decades killed several million people.
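
To see how a 34% relative increase could add up to “several million people”, here is a deliberately crude back-of-envelope in Python. Only the 34% figure comes from the quoted study; the population, baseline mortality and duration are round-number assumptions of mine, for illustration only:

    # Illustration only: every input except the 34% figure is an assumption.
    exposed_population = 100_000_000  # adults who followed the margarine advice (assumed)
    baseline_mortality = 0.01         # ~1% annual all-cause mortality in that group (assumed)
    relative_increase  = 0.34         # figure quoted from the BMJ meta-analysis
    years_of_advice    = 30           # rough duration of the advice (assumed)

    # Crude cumulative excess deaths; ignores cohort attrition and varying intake.
    excess = exposed_population * baseline_mortality * relative_increase * years_of_advice
    print(f"~{excess / 1e6:.0f} million excess deaths under these rough assumptions")
    # -> ~10 million

Change any of the assumed inputs and the total moves a long way, but it does show that “several million” is not an outlandish order of magnitude if the 34% figure is right.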


    1. Bananas get sweeter as they get riper so for either a keto or low glycemic diet, freeze them before they get too ripe.

    2. Some more recent margarines contain neither saturated fats nor transfats.

    3. “Intake of saturated and trans unsaturated fatty acids and risk of all cause mortality, cardiovascular disease, and type 2 diabetes: systematic review and meta-analysis of observational studies”, BMJ 2015; 351 doi: https://doi.org/10.1136/bmj.h3978 (Published 12 August 2015)

August 17, 2023

“… the Chinese invented gunpowder and had it for six hundred years, but couldn’t see its military applications and only used it for fireworks”

Filed under: China, History, Military, Science, Weapons — Nicholas @ 05:00

John Psmith would like to debunk the claim in the headline here:

An illustration of a fireworks display from the Ming dynasty book Jin Ping Mei (1628-1643 edition).
Reproduced in Joseph Needham (1986). Science and Civilisation in China, Volume 5: Chemistry and Chemical Technology, Part 7: Military Technology: The Gunpowder Epic. Cambridge University Press. Page 142.

There’s an old trope that the Chinese invented gunpowder and had it for six hundred years, but couldn’t see its military applications and only used it for fireworks. I still see this claim made all over the place, which surprises me because it’s more than just wrong, it’s implausible to anybody with any understanding of human nature.

Long before the discovery of gunpowder, the ancient Chinese were adept at the production of toxic smoke for insecticidal, fumigation, and military purposes. Siege engines containing vast pumps and furnaces for smoking out defenders are well attested as early as the 4th century. These preparations often contained lime or arsenic to make them extra nasty, and there’s a good chance that frequent use of the latter substance was what enabled early recognition of the properties of saltpetre, since arsenic can heighten the incendiary effects of potassium nitrate.

By the 9th century, there are Taoist alchemical manuals warning not to combine charcoal, saltpetre, and sulphur, especially in the presence of arsenic. Nevertheless the temptation to burn the stuff was high — saltpetre is effective as a flux in smelting, and can liberate nitric acid, which was of extreme importance to sages pursuing the secret of longevity by dissolving diamonds, religious charms, and body parts into potions. Yes, the quest for the elixir of life brought about the powder that deals death.

And so the Chinese invented gunpowder, and then things immediately began moving very fast. In the early 10th century, we see it used in a primitive flame-thrower. By the year 1000, it’s incorporated into small grenades and into giant barrel bombs lobbed by trebuchets. By the middle of the 13th century, as the Song Dynasty was buckling under the Mongol onslaught, Chinese engineers had figured out that raising the nitrate content of a gunpowder mixture resulted in a much greater explosive effect. Shortly thereafter you begin seeing accounts of truly destructive explosions that bring down city walls or flatten buildings. All of this still at least a hundred years before the first mention of gunpowder in Europe.

Meanwhile, they had also been developing guns. Way back in the 950s (when the gunpowder formula was much weaker, and produced deflagrative sparks and flames rather than true explosions), people had already thought to mount containers of gunpowder onto the ends of spears and shove them in people’s faces. This invention was called the “fire lance”, and it was quickly refined and improved into a single-use, hand-held flamethrower that stuck around until the early 20th century.1 But some other inventive Chinese took the fire lances and made them much bigger, stuck them on tripods, and eventually started filling their mouths with bits of iron, broken pottery, glass, and other shrapnel. This happened right around when the formula for gunpowder was getting less deflagrative and more explosive, and pretty soon somebody put the two together and the cannon was born.

All told it’s about three and a half centuries from the first sage singeing his eyebrows, to guns and cannons dominating the battlefield.2 Along the way what we see is not a gaggle of childlike orientals marvelling over fireworks and unable to conceive of military applications. We also don’t see an omnipotent despotism resisting technological change, or a hidebound bureaucracy maintaining an engineered stagnation. No, what we see is pretty much the opposite of these Western stereotypes of ancient Chinese society. We see a thriving ecosystem of opportunistic inventors and tacticians, striving to outcompete each other and producing a steady pace of technological change far beyond what Medieval Europe could accomplish.

Yet despite all of that, when in 1841 the iron-sided HMS Nemesis sailed into the First Opium War, the Chinese were utterly outclassed. For most of human history, the civilization cradled by the Yellow and the Yangtze was the most advanced on earth, but then in a period of just a century or two it was totally eclipsed by the upstart Europeans. This is the central paradox of the history of Chinese science and technology. So … why did it happen?


    1. Needham says he heard of one used by pirates in the South China Sea in the 1920s to set rigging alight on the ships that they boarded.

    2. I’ve left out a ton of weird gunpowder-based weaponry and evolutionary dead ends that happened along the way, but Needham’s book does a great job of covering them.

August 10, 2023

“Ultra-Processed Food” is so bad that we need extra scare-quotes!!

Filed under: Books, Food, Health, Politics — Nicholas @ 03:00

Christopher Snowdon seems, for some inexplicable reason, to be skeptical about the hysterical warnings of people like Chris van Tulleken in his recent book Ultra-Processed People: Why Do We All Eat Stuff That Isn’t Food … and Why Can’t We Stop?

If Jamie Oliver is the fun police, Chris van Tulleken is the Taliban. The selling point of books like Ultra-Processed People is the idea that everything you know is wrong. Van Tulleken, an infectious diseases doctor and television presenter, takes this to extremes. In this book, almost everybody is wrong, many of them are corrupt and almost no one is to be trusted. Only Dr. van Tulleken, a handful of researchers and anyone who pays £25 to read this book knows the real truth. The problem is not sugar. The problem is not carbs. Artificial sweeteners don’t work. Exercise doesn’t work. Willpower doesn’t work. Every scientist who has published research contradicting his theory is in the pay of the food industry or — how’s this for an ad hominem argument? — has cited studies by people who are. The British Nutrition Foundation, the Academy of Nutrition and Dietetics, the British Dietetic Association, the Centre for Social Justice, the Institute of Economic Affairs, Tortoise Media, Diabetes UK, Cancer Research UK and the British Heart Foundation are all tainted by food industry funding. Even Jamie Oliver – Saint Jamie, the Sage of Essex — is guilty by his association with Tesco and Deliveroo, and because he makes ultra-processed food (“albeit fairly marginal items”).

It is this ultra-processed food (UPF), argues van Tulleken, that is the real cause of obesity and diet-related diseases in the world today. Food is classified as UPF if it is wrapped in plastic and contains an ingredient you don’t have in your kitchen. This includes everything from mustard to Magnums but, counter-intuitively, doesn’t include sugar, salt or fat. Van Tulleken doesn’t quite put it like this but, in effect, anything you make at home is healthy while nearly anything you buy in a supermarket, aside from raw ingredients, is bad for you.

The evidence for this striking proposition can be briefly outlined, and van Tulleken deals with it swiftly in a single chapter. Firstly, there are a number of studies using observational epidemiology which find a correlation between diets which are high in UPF and various ailments, including not only obesity, heart disease and type 2 diabetes, but also dementia, depression, cancer and more. Secondly, there is a randomised controlled trial which gave a small group of volunteers a two-week diet of either ultra-processed food or minimally processed food. The nutritional profile of each diet was similar (the same levels of salt, sugar, etc.) and the volunteers were offered twice as much as they needed to maintain a healthy weight. The people on the ultra-processed diet ended up eating 500 calories more than the people on the minimally processed diet and put on nearly a kilogram of weight.
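
The headline numbers from that trial are at least internally consistent. A quick sanity check in Python, using the common rule of thumb of roughly 7,700 kcal per kilogram of body fat (an assumption of mine, not something stated in the review):

    # Sanity check on the quoted trial figures: 500 extra kcal/day for two weeks.
    extra_kcal_per_day = 500
    days               = 14
    kcal_per_kg_fat    = 7700   # rule-of-thumb energy density of body fat (assumed)

    surplus_kcal = extra_kcal_per_day * days
    print(f"Cumulative surplus: {surplus_kcal} kcal "
          f"-> ~{surplus_kcal / kcal_per_kg_fat:.1f} kg predicted gain")
    # -> ~0.9 kg, in line with the "nearly a kilogram" reported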

The randomised controlled trial was published in 2019 and already has over 1,200 academic citations. Van Tulleken considers it to be extraordinarily robust, but it only really stands out because the general standard of dietary research is so poor. The volunteers were not given ultra-processed versions of the same meals. They were given totally different meals, plus very different snacks, and they could eat as much as they wanted for free. What does it actually demonstrate? Arguably, all it shows is that if you give people unlimited quantities of tasty food, they will eat more of it than if you give them blander food. Van Tulleken assures us that “the two diets were equally delicious”, but this would seem to contradict his claims elsewhere that UPF is “hyper-palatable”, delicious and irresistible.

As for the epidemiological correlations, what is it that actually correlates? UPF is an incredibly broad category encompassing most foods that are known as HFSS (high in fat, sugar or salt) and many more besides. People who eat a lot of UPF tend to have lower incomes, which correlates with all sorts of health conditions. In the study van Tulleken cites to demonstrate that UPF causes cancer, the people who ate the most UPF had the highest smoking rate and were least likely to be physically active. Epidemiologists attempt to control for such factors, but with so much going on in the data, it is an heroic assumption to think that the effect of food processing can be teased out from the effects of fat, sugar, salt, obesity, smoking, stress, exercise and numerous socio-economic influences.

June 28, 2023

“I’ll forgive Dartnell for not writing ‘Lest Darkness Fall’ For Dummies”

Filed under: Books, Europe, History, Science, Technology — Nicholas @ 09:25

Jane Psmith reviews The Knowledge by Lewis Dartnell, despite it not being quite what she was hoping it would be:

This is not the book I wanted to read.

The book I wanted to read was a detailed guide to bootstrapping your way to industrial civilization (or at least antibiotics) if you should happen to be dumped back in, say, the late Bronze Age.1 After all, there are plenty of technologies that didn’t make it big for centuries or millennia after their material preconditions were met, and with our 20/20 hindsight we could skip a lot of the dead ends that accompanied real-world technological progress.

Off the top of my head, for example, there’s no reason you couldn’t do double-entry bookkeeping with Arabic numerals as soon as you have something to write on, and it would probably have been useful at any point in history — just not useful enough that anyone got really motivated to invent it. Or, here, another one: the wheelbarrow is just two simple machines stuck together, is substantially more efficient than carrying things yourself, and yet somehow didn’t make it to Europe until the twelfth or thirteenth century AD. Or switching to women’s work, I’ve always taken comfort in the fact that with my arcane knowledge of purling I could revolutionize any medieval market.2 And while the full Green Revolution package depends on tremendous quantities of fertilizer to fuel the grains’ high yields, you could get some way along that path with just knowledge of plant genetics, painstaking record-keeping, and a lot of hand pollination. In fact, with a couple latifundia at your disposal in 100 BC, you could probably do it faster than Norman Borlaug did. But speaking of fertilizer, the Italian peninsula is full of niter deposits, and while your revolutio viridis is running through those you could be figuring out whether it’s faster to spin up a chemical industry to the point you could do the Haber-Bosch process at scale or to get to the Peruvian guano islands. (After about thirty seconds of consideration my money’s on Peru, though it’s a shame we’re trying to do this with the Romans since they were never a notably nautical bunch and 100 BC was a low point even for them; you’ll have to wipe out the Mediterranean pirates early and find Greek or Egyptian shipwrights.) And another question: can you go straight from the Antikythera mechanism to the Jacquard machine, and if not what do you need in between? Inquiring minds want to know.3

But I’ll forgive Dartnell for not writing “Lest Darkness Fall” For Dummies, which I’ll admit is a pretty niche pitch, because The Knowledge is doing something almost as cool.4 Like my imaginary book, it employs a familiar fictional conceit to explain how practical things work. Instead of time travel, though, Dartnell takes as his premise the sudden disappearance (probably plague, definitely not zombies) of almost all of humanity, leaving behind a few survivors but all the incredible complexity of our technological civilization. How would you survive? And more importantly, how would you rebuild?


    1. I read the Nantucket Trilogy at an impressionable age.

    2. Knitting came to Europe in the thirteenth century, but the complementary purl stitch, which is necessary to create stretchy ribbing, didn’t. If you’ve ever wondered why medieval hosen were made of woven fabric and fit the leg relatively poorly, that’s why. When purling came to England, Elizabeth I paid an exorbitant amount of money for her first pair of silk stockings and refused to go back to cloth.

    3. Obviously you would also need to motivate people to actually do any of these things, which is its own set of complications — Jason Crawford at Roots of Progress has a great review of Robert Allen’s classic The British Industrial Revolution in Global Perspective that gets much deeper into why no one actually cared about automation and mechanization — but please allow me to imagine here.

    4. Please do not recommend How To Invent Everything, which purports to do something like this. It doesn’t go nearly deep enough to be interesting, let alone useful. You know, in the hypothetical that I’m sent back in time.

May 23, 2023

Mustard: A Spicy History

Filed under: Cancon, Europe, Food, France, Greece, Health, History, India, USA — Nicholas @ 02:00

The History Guy: History Deserves to be Remembered
Published 15 Feb 2023

In 2018 The Atlantic observed “For some Americans, a trip to the ballpark isn’t complete without the bright-yellow squiggle of French’s mustard atop a hot dog … Yet few realize that this condiment has been equally essential — maybe more so — for the past 6,000 years.”

May 12, 2023

The crusade against (insert scare quotes here) Ultra-Processed Food

Filed under: Europe, Food, Health, Politics — Nicholas @ 03:00

In The Critic, Christopher Snowdon traces the orthorexist journey from warning about the dangers of saturated fat, to protesting against sugar content in foods, through an anti-carbohydrate phase to today’s crusade against “Ultra-Processed Food”:

It is barely ten years since Denmark repealed its infamous “fat tax”. It was supposed to be a world-leading intervention to tackle obesity but it proved to be hugely unpopular and lasted just 15 months.

It seems almost strange now that it targeted saturated fat. In hindsight, it was the last gasp of the crusade against fat before all eyes turned to sugar. The anti-sugar crusade seemed to come out of nowhere in 2014 with the emergence of the tiny but phenomenally successful pressure group Action on Sugar. Within three years, the British government had announced a tax on sugary drinks, but by then the anti-sugar movement was morphing into a campaign against carbohydrates. That began to run out of steam a couple of years ago when many of the leading anti-carb personalities found that they could get more attention — and, dare I say, money — from being “sceptical” about COVID-19 vaccines.

They come and go, these food fads, but they all rely on the belief that there is something in the food supply that is uniquely dangerous, something hitherto unknown that only independent free thinkers can see is the cause of all our problems.

The new dietary villain is “ultra-processed food” (UPF), a concept that didn’t even exist until a few years ago but is now everywhere. There have been two books about UPF published in recent weeks and a third — Henry Dimbleby’s Ravenous — dedicated a lot of space to it.

The simple definition of ultra-processed food as used by those who are concerned about them (I am not making this up to make them sound silly) is anything that is “wrapped in plastic and has at least one ingredient you wouldn’t find in a home kitchen”. Since you probably don’t have emulsifiers, preservatives and artificial sweeteners in your kitchen, this rules out a lot of products.

The argument is that these products make you fat and should be avoided. The evidence for this comes from a study published in 2019. In a randomised controlled trial, ten people were given an ultra-processed diet and ten other people were given an unprocessed diet. Both diets were similar in their overall sugar, fat, protein and salt content, although the meals themselves were very different.

The participants were given all the food for free and they could eat as much as they wanted. The people on the ultra-processed diet ended up eating 500 calories per day more than the other group and, after two weeks, had put on nearly a kilogram of weight. By contrast, the people on the unprocessed diet lost weight.

If you look at the food that was offered to the two groups, the explanation is obvious. The meals and snacks available to the UPF group were delicious whereas the food given to the other group was rather Spartan and was unlikely to make anybody ask for a second helping. If you give people tasty food for free, they will tend to eat more of it.

