Quotulatiousness

May 8, 2026

“… without Western Civilization, we’d all still be whacking at the dirt with sticks and dying of intestinal parasites”

Filed under: Americas, Media, Technology — Nicholas @ 03:00

Devon Eriksen responds to someone who had a clanker generate an image of an imaginary Aztec capital as it might look today if the Aztecs had managed to defeat Cortes and his conquistadors:

Guitars. Suits and ties. Western architecture. English and Spanish text.

What’s easy to miss is that the generative AI is making its own, separate, political statement here. Not because it intended to, but because it had no choice.

Even human creativity consists mostly of rearranging things, but AI generation is entirely that and nothing else.

So when you ask it for “modern”, it gives you “western”, because in its eyes, there is no distinction between the two. “Western” is the only “modern” that actually exists for it to draw from.

Even cultures that were capable of building an alternative version of modern, because they weren’t skinning and eating each other, and had invented the wheel, still borrowed heavily from the West, not because they couldn’t do otherwise, but because the West moved faster, and had already done the work.

So, ask an AI for “modern Aztec”, and you get English-speaking Tokyo/Venice, with browner people, pyramid reskins on skyscrapers, and some out-of-place Mayan stuff, all set to Peruvian flute music.

This is the same reason that a lot of people, most of whom really aren’t much more than LLMs themselves, say silly things like “there is no White culture” … because, like the very simple art machine, they cannot conceive of any alternative version of modernity.

So nothing is Western to them, it’s all just “modern”.

But of course it really is Western, because without Western Civilization, we’d all still be whacking at the dirt with sticks and dying of intestinal parasites.

That AI is Western, too.

May 7, 2026

Great success! Honda “postpones” their Ontario EV project

As part of their mindless fanboyism for anything remotely related to “Net Zero”, the federal government and the Ontario provincial government have been serving up subsidies for electric vehicles and hastening the “inevitable transition” away from internal combustion vehicles. Through legislation and regulation, they’ve been doing everything they can to close down the traditional car and truck manufacturing sector and replace it with zero-emission vehicle production. The various governments have handed out subsidies amounting to billions, and yet one after another after another the much-ballyhoo’d EV factories, battery plants, and other futuristic projects fall by the wayside, leaving very little in exchange for those billions:

There was a time, not very long ago, when Liberal politicians treated EV battery announcements like moon landings.

Hard hats. Safety glasses. Giant ceremonial cheques. Breathless speeches about “the future”. Every battery plant was “historic”. Every subsidy package was “transformational”. Every corporate press conference looked like a motivational seminar for people who think buzzwords are infrastructure.

All we were missing was a fog machine and Bono.

Meanwhile ordinary Canadians were standing in grocery aisles doing mental math over bacon prices, delaying dental work, and wondering whether they could survive another winter utility bill without sacrificing whatever scraps remained of their savings.

But while Canadians were trying to keep their heads above water, Ottawa was busy launching one of the most expensive industrial subsidy experiments in modern Canadian history.

AI-generated image from Melanie in Saskatchewan

The Honda EV project in Ontario was supposed to be one of the crown jewels of this brave new green economy. Politicians lined up in hard hats and safety glasses like a traveling theatre troupe performing The Future Is Here. Canadians were assured this was proof the country was becoming an EV superpower.

Turns out it may have been more of a very expensive PowerPoint presentation with taxpayer financing attached.

[…]

In March 2020, Prime Minister Justin Trudeau appointed Mark Carney as an informal economic adviser during the COVID recovery period. Over the following years, Carney increasingly promoted “green transition” investment frameworks, climate-linked financial systems, ESG-focused economic planning, and massive public-private investment partnerships tied to decarbonization strategies.

Which is important context now, because the EV subsidy era did not emerge out of thin air. It grew out of a broader worldview that treated government-directed green investment as both economic policy and moral mission. The assumption underneath all of this was breathtakingly simple:

“If government wants it badly enough, reality will cooperate.”

That is usually where things begin going sideways.

Canadians were told the EV transition was inevitable. Questions about affordability, charging infrastructure, winter range, electrical grid capacity, or consumer demand were often brushed aside like annoying little details raised by peasants who simply lacked sufficient enlightenment.

Then came the subsidy gold rush.

[…]

Corporations are not charities. They are not loyal patriots. They are not emotionally attached to government slogans.

They follow incentives. They chase profitability. They change direction when conditions change.

That is exactly what Honda did.

Meanwhile Canadians are left holding the bill for another “historic transformation” that produced:

  • endless announcements
  • glossy photo ops
  • consultant buzzwords
  • government self-congratulation
  • escalating subsidy exposure
  • and corporate renegotiations every time market conditions shifted

All while producing no completed Honda EV manufacturing hub and no fleet of Canadian-built EVs rolling proudly off Ontario assembly lines.

What remains instead is a stalled megaproject, a confused tariff policy, a government spinning contradictory narratives depending on the week, and taxpayers once again discovering they were voluntold into becoming venture capitalists for political vanity projects.

Apparently this is what “economic leadership” looks like now.

Hard hats. Press releases. Fifty-plus billion dollars in EV-related exposure. And a factory plan slowly evaporating into the mist while Chinese EVs roll through the front gate anyway.

Tu-144 Concordeski – Speed, Spies and Failure

HardThrasher
Published 4 May 2026

In great secrecy, in 1963 the USSR set about making aviation history with the world’s first Supersonic Transport (SST). In 1968, five months before Concorde, the Tu-144 became the first passenger jet to break the sound barrier. But it was a white elephant that crashed on multiple occasions, killed hundreds and flew for just a matter of months after over a decade of development. It was, perhaps, the first of a string of failures that brought down the Soviet Union.

00:00 – 11:06 – Introduction and Background
11:07 – 23:10 – The Decision is made to build
23:10 – 35:31 – And then it got worse — how everything fell apart
35:32 – 39:10 – The En Crashening — From First Flight to Constant Crashes
39:11 – 48:49 – Enter the KGB — What role did spies play
49:22 – End – Like, Subscribe, Join the Patreon
(more…)

May 6, 2026

QotD: Deskilling society through AI

Filed under: Education, Media, Quotations, Technology — Nicholas @ 01:00

It’s always a little dangerous to write about any rapidly-developing technology, because chances are pretty good that whatever you say will be incredibly and obviously dated within a few months. But I’m going to plant my flag anyway, because even if nothing else changes — even if there’s no meaningful advancement in LLM performance beyond the state-of-the-art right now, in March 2025 — the potential disruption is already so enormous that you can think of it as a kind of Industrial Revolution for text.

Just like in the first one, we’ve figured out how to use machines to do a broad swathe of things people used to do, swapping energy and capital in for human labor. And just like in the first one, the output isn’t necessarily better (in fact, it’s often worse), but it’s so much cheaper in terms of human time and thought and effort that the quality almost doesn’t matter. Sometimes that’s wonderful: if you desperately need to put a roof on your barn right this moment, it’s a blessing to be able to slap on some corrugated tin instead of going to the effort of thatching. When you have to write your seventeenth letter to the insurance company explaining that no, they really ought to be covering this, it’s a relief to hand the composition off to Claude instead. But do that too much and you forget how to do it yourself — or more plausibly, you never learn.

The greatest risk of AI is probably “we all get turned into paperclips”, or maybe “someone uses it to design a novel and incredibly fatal pathogen”, but the most certain risk — the one that’s already here, at least on the edges — is a great deskilling. Just as the mechanization of physical labor lost us all those traditional skills that Langlands describes, the ability to automate cognitive tasks undermines their acquisition in the first place. Why pay any attention at all to word choice and metaphor and prosody when ChatGPT can churn out that essay in a few seconds? Why worry about drafting a convincing email when you’re pretty sure your recipient is just going to ask Grok for a summary?1 Why learn to code when a machine can do it faster?

I was recently informed that someone — “not anyone you know, Mom, someone at another school” — used ChatGPT to write his essay about the causes of the Civil War. This was obviously deeply upsetting to the congenital rule-follower who reported it to me, on account of THAT’S CHEATING (you must imagine this in the whiniest she-touched-my-stuff voice possible), but it was a good teachable moment — for me, if not for the history teacher at another school. What’s the point of an essay about the causes of the Civil War, anyway? It can’t be that the teacher wants to know the answer: she can find a dozen books on the topic if she cares to look, each more cogent and thorough than anything a middle-schooler is likely to produce.2 Heck, even the Wikipedia article will probably give her a better understanding. And if it’s not for the teacher’s benefit, it’s certainly not for the benefit of any other audience, since as soon as the essay is marked and graded it’ll probably be crumpled up and tossed into the recycling bin. No, it’s for the kid.

The point of writing an essay about the causes of the Civil War is not to have an essay about the causes of the Civil War, it’s to undergo the internal changes effected by the process of thinking through, planning, drafting, and editing the darn thing. Writing forces you to put your thoughts in order, to shape whatever mass of inchoate ideas is bouncing around in your head into something clear and reasoned you can pin to the page. The thinking is the hard part; putting words to it is simple by comparison. (This book review began life as about seven hundred words of stream-of-consciousness riffing, with only the vaguest kind of structure. When I experimentally pasted it into an LLM and asked for an essay, the result was terrible.) But even the putting of words is a valuable skill: what’s the right tone here? What’s the right word? Do I want to say “writing forces you to” or “when you write you have to”? How do they feel different? Asking a machine to do this for you is like bringing a forklift to the gym.

Of course, that kid who had ChatGPT write his essay was almost certainly thinking of the assignment not as one small step in the alchemical process of self-transformation that is education but as basically equivalent to an appeal letter to the insurance company: just another dumb hoop you have to jump through in your interactions with a vast impersonal machine that doesn’t particularly want to grind you to dust but wouldn’t mind it either. And since this was at another school, he might not even be wrong. Maybe the teacher was just pasting the rubric and the essays back into ChatGPT and asking it to assign a grade.3

But there’s an even bigger problem than lying about who (or what) has done the work, which is lying about whether the work has been done at all. LLMs make lying very easy indeed. Yes, yes, sometimes they hallucinate and tell you things that are patently untrue, and that’s a bigger danger for students and other people who don’t have the background to notice when something seems off — this is all true, but it’s not what I mean.

LLMs, when working exactly as intended, enable human falsehood — because our society relies on written records as proof of work. Until recently that was fine, because writing down lies actually used to be pretty hard: putting together a convincing false report from scratch — maintenance records for the airplane you’re about to board, say, or a radiologist’s report on your brain scan — was almost as time-consuming as actually checking the things that were supposed to be checked and then documenting them, and the liar had to spend the whole time aware of their own dishonesty. (Not that this stops everyone, of course.) But now that it takes about two clicks to generate an inspector’s report for the house you’re considering buying, or the pathologist’s findings in your biopsy, how much are you going to trust that they actually looked?

LLMs can be useful tools,4 but all tools change what we make and how we make it. It’s often a good tradeoff! Sure, each individual example of simplification and automation in the name of efficiency is a tiny bit of alienation, removing the maker from the making, but it’s also a gift of time we can spend on other things: I couldn’t write this if I also had to sew my family’s clothes and wash our laundry by hand. And yet those bits pile up, and once it becomes possible to exist in the world without really needing to come into contact with it, once you can get by without ever really needing to make anything, some people just won’t. And that’s terrible! Being entirely without cræft — never bringing mind-body-soul into harmony with one another and then using them to master the world — means missing out on something deeply human.

Jane Psmith, “REVIEW: Cræft, by Alexander Langlands”, Mr. and Mrs. Psmith’s Bookshelf, 2025-03-24.


  1. All the “AI written/AI read” communication begins to resemble Slavoj Zizek’s perfect date: “So my idea of a perfect date is the following one. We met. Then I put, she puts her plastic penis dildo into my … ‘stimulating training unit’ is the name of this product. Into my plastic vagina. We plug them in and the machines are doing it for us. They’re buzzing in the background and I’m free to do whatever I want and she. We have a nice talk; we have tea; we talk about movies. What can be — we paid our superego full tribute. Machines are doing — now where would have been here a true romance. Let’s say I talk with a lady, with the lady because we really like each other. And, you know, when I’m pouring her tea or she to me quite by chance our hands touch. We go on touching. Maybe we even end up in bed. But it’s not the usual oppressive sex where you worry about performance. No, all that is taken care of by the stupid machines. That would be ideal sex for me today.”
  2. Well, okay, most of them.
  3. See footnote one again.
  4. Personally I’ve found them useful in three cases: (1) when I’m blanking on how to begin an email I will occasionally ask for a draft, which inevitably makes me so mad about how bad it is that I immediately rewrite it in a way that doesn’t suck; (2) when it’s Sunday night and I need a picture of a Japanese man in a business suit and a samurai helmet for a book review going up in the morning; and (3) when I can’t figure out the right search term for my question. (Turns out it was “sigmatic aorist”. Thanks, Claude.)

May 5, 2026

Orwell: “It would probably not be beyond human ingenuity to write books by machinery”

Filed under: Books, Media, Technology — Nicholas @ 04:00

In the portion above the paywall, Matt Johnson discusses Orwell’s career as we face an unending deluge of writing “assisted” by AI or even entirely created by AI:

In the introduction to his 1991 book Orwell: The Authorised Biography, Michael Shelden distinguishes his approach from that of Bernard Crick’s George Orwell: A Life, published a decade earlier. While Crick’s volume offered the most complete portrait of Orwell available at that point, Shelden argues that it’s too dull and impersonal — a flood of facts that bury Orwell’s singular, idiosyncratic personality. Shelden observes that Crick “relies heavily on the notion that facts speak for themselves if presented in enough detail”. So he attempts to provide a more intimate account of Orwell’s life: “A writer’s character and personal history influence what he writes and how he writes it. And the more we know about him, the better we are able to appreciate his work.” After all, “Books are not written by machines in sealed compartments”.

But we have now entered an era in which books can, in fact, be written by machines in sealed compartments. Large language models (LLMs) generate billions of words a day and are increasingly capable of producing long, structured, and sophisticated texts. While Orwell could not have foreseen the AI revolution, he predicted that synthetic text could someday replace human writing. In his 1946 essay “The Prevention of Literature”, he observes: “It would probably not be beyond human ingenuity to write books by machinery”. Although he doesn’t linger on this possibility, he laments the depersonalisation and mass production of writing already underway in the 1940s, and these arguments are just as applicable to AI-generated writing today.

Orwell expressed an almost eerie sensitivity to the ways in which literary ability — and even the quality of thought — can decline alongside a growing reliance on automated writing processes. For example, he cites radio features “commonly written by tired hacks to whom the subject and the manner of treatment are dictated beforehand”. The writing itself was “merely a kind of raw material to be chopped into shape by producers and censors”. His experience dealing with the pressures of working in a strictly controlled corporate environment at the BBC during wartime undoubtedly left him with this impression. He also cites “innumerable books and pamphlets commissioned by government departments” created in the same industrial manner.

Orwell’s scrutiny of the “machine-like” creation of “short stories, serials, and poems for the very cheap magazines” holds up particularly well today. In an uncanny anticipation of the process by which millions of users now produce creative content with AI, he writes:

    Papers such as the Writer abound with advertisements of Literary Schools, all of them offering you readymade plots at a few shillings a time. Some, together with the plot, supply the opening and closing sentences of each chapter. Others furnish you with a sort of algebraical formula by the use of which you can construct your plots for yourself. Others offer packs of cards marked with characters and situations, which have only to be shuffled and dealt in order to produce ingenious stories automatically.

“The Prevention of Literature” was published around the time Orwell began work on Nineteen Eighty-Four, and it shows. Winston Smith’s job in the Ministry of Truth is to rewrite historical documents to match Party propaganda. He deletes “unpersons” from old news stories and ensures that recorded events always line up with the latest party line, all with the help of his speakwrite dictation machine. He dumps original documents into the Memory Hole for incineration. In the essay, Orwell moves from a discussion of increasingly robotic forms of literary production to the role this shift could play in a totalitarian state:

    It is probably in some such way that the literature of a totalitarian society would be produced, if literature were still felt to be necessary. Imagination — even consciousness, so far as possible — would be eliminated from the process of writing. Books would be planned in their broad lines by bureaucrats, and would pass through so many hands that when finished they would be no more an individual product than a Ford car at the end of the assembly line.

In some ways, Orwell’s bleak prophecies would turn out to be more accurate than he could have imagined. The idea that human thought would be replaced by an “algebraical formula” and that consciousness would be eliminated from the writing process is now a reality on a vast scale (though the question of whether consciousness will emerge from AI systems remains open). But Orwell filtered his predictions about the future of writing through his fixation on state power and the possible emergence of a “rigidly totalitarian society”, and this led him astray. In such a society, Orwell assumed that “novels and stories will be completely superseded by film and radio productions”. To the extent that people would want to keep reading, “perhaps some kind of low-grade sensational fiction will survive, produced by a sort of conveyor-belt process that reduces human initiative to the minimum”. He concluded: “It goes without saying that anything so produced would be rubbish”.

April 23, 2026

Arctic defence – Canada can’t “go it alone”

Filed under: Cancon, Military, Technology, Weapons — Nicholas @ 03:00

On the social media site formerly known as Twitter, Lee Humphrey explains a few of the reasons that we can ignore Prime Minister Mark Carney’s claims that Canada can defend the north without US assistance:

Canada is not capable of going it alone & is not going it alone. It’s a lie to say otherwise.

The majority of new radars being bought to replace existing radars will be from US companies, all our radars including the Australian built OTH radar will use US military satellites to communicate with monitoring stations in the US & Canada.

The armed drones we are buying are built in the US & will use US military satellites to communicate with their ground based controllers.

The subs we are spending $40 billion on are not capable of safely patrolling under sea ice for more than 11 continuous days before they have to turn around & get to clear water so we will continue to rely on US nuclear powered subs to track Russian & Chinese subs who are in sovereign CDN arctic waters.

Over the last year, US fighter aircraft have had to respond to 3 separate incidents that the RCAF were unable to respond to at all or in a timely way.

CDN’s who still believe Trump is going to invade need to realize just how many opportunities he keeps missing 😎

Reality sucks especially when national security or national sovereignty is at stake but the reality is that not only can we not operate independently at home, we can’t & haven’t been capable of operating independently for 60 years now!

April 19, 2026

AI’s missing economic impact

Filed under: Business, Economics, Technology — Nicholas @ 03:00

On the social media site formerly known as Twitter, Rational Aussie explains at least part of why the expected economic benefits of widespread adoption of artificial intelligence agents are … missing:

It’s funny how AI has made white collar work 10x faster already but there’s been basically no economic impact from it.

The reason is quite simple:

1. Most white collar work is bullshit, so speeding it up by 10x still equals a pile of bullshit at the end

2. Most white collar employees are using AI to do all their work for the week in 4 hours instead of 40, whilst telling their manager the deadline is still 40 hours away

We have been living in a fake economy for the better part of two decades. It is all a fugazi.

People who do real jobs in the real world get paid comparatively crap, and people who do fake jobs in the fiat Ponzi world get paid just enough fiat currency to pretend they are important. None of it amounts to anything productive nor valuable for the world though.

An entire generation doing fake email jobs, slide decks and excel sheets for corporations who ultimately produce nothing.

April 18, 2026

Australia’s age verification scheme – a great success!

Every time a politician gets up on their hind legs to propose yet another brilliant scheme to ensure little Jaden and little Daenerys don’t access adult content on the internet, I remind myself that it’s going to be pitting the tech know-how of people who need help opening child-proof caps against the youngsters they get to open the child-proof caps for them. In other words, it’s not going to work out quite how the politicians expect:

“Kid-notebook-computer-learns-159533” by LuidmilaKot is marked with CC0 1.0.

Among the great many bogeymen of the current moment is social media, which stands accused of making young people anxious and unhappy. Whatever the merits of those charges — and they’re debatable — politicians have predictably tried to address concerns by applying the blunt instrument of coercive law to kids’ online activities rather than simply let parents help their children make better choices. The experience in Australia now shows the subjects of the law have, once again, proven cleverer than law enforcers.

[…]

“There are significant questions about the effectiveness of Australia’s social media ban”, reports the U.K.’s Molly Rose Foundation, which supports internet restrictions, of the results of a poll of Australian young people. “Three fifths (61%) of 12–15 year-olds who previously held accounts on restricted platforms continue to have access to one or more active accounts.”

The group adds that “70% of children still using restricted sites say that it was ‘easy’ to circumvent the ban. In most cases, social media platforms have failed to detect or seek to remove under 16s accounts.”

Importantly, officials agree that young people subject to the law are actively evading its impact. In a compliance update published last month, Australia’s eSafety Commissioner, which enforces the ban, conceded that “a substantial proportion of Australian children under the age of 16 continue to retain accounts, create new accounts, or pass platforms’ age assurance systems”.

Like the Molly Rose Foundation, Australian regulators note that noncompliance is not just a concern for the small platforms with limited exposure in Australia which were expected to become refuges for Australian teens seeking online connections. They also point to large, established companies including Facebook, Instagram, Snapchat, TikTok, and YouTube.

In the majority of cases, according to both reports, young people ignoring the law have not yet been asked to verify their age. But, according to the Molly Rose Foundation, “around a quarter of children still using each restricted platform had been successfully able to get around an age check on a pre-existing account”. Some changed their claimed age, others had older friends and relatives set up accounts for them, and still others gamed technology intended to estimate their age by their appearance.

Another proof of the value of open source

Filed under: History, Media, Technology — Nicholas @ 03:00

On the social media site formerly known as Twitter, ESR discusses a pre-computer (pre-electronics) proof that open source is more secure than closed source:

“How university open debates and discussions introduced me to open source” by opensourceway is licensed under CC BY-SA 2.0

There’s an old, bad idea that’s been trying to resurrect itself on X in the last couple of days. Which makes it time for me to explain exactly why, in the age of LLMs, open-sourcing your code is an even more important security measure than it was before we had robot friends.

The underlying principle was discovered in the 1880s by an expert on military cryptography, a man named Auguste Kerckhoffs, writing long before computers were a thing.

To start with, you need to focus in on the fact that cryptosystems have two parts. They have methods, and they have keys. You feed a key and a message to a method and get encrypted information that, you hope, only someone else with the same pair of method and key can read.

What Kerckhoffs noticed was this: military cryptosystems in normal operation leak information about their methods. Code books and code machines get captured, stolen, betrayed, or lost in simple accidents and found by people you don’t want to have them. This was the pre-computer equivalent of an unintended source-code disclosure.

Cryptosystems also leak information about their keys — think post-it notes with passwords stuck to a monitor. What Kerckhoffs noticed is that these two different kinds of compromising leakage happen at very different base rates. It is almost impossible to prevent leakage of information about methods, but just barely possible to prevent leakage of information about keys.

Why? Keys have fewer bits. This makes them easier to keep secret.

Remember: this was something an intelligent man could notice in the 1880s, well before even vacuum tubes. Which is your first clue that the power of this observation hasn’t changed just because we’re in the middle of a freaking Singularity.

Security through obscurity — closed source code — means you’re busted if either the source code or the keys get leaked. Open source is a preemptive strike — it’s a way to force the property that your security depends *only* on keeping the keys secret.
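ESR’s method/key distinction is easy to make concrete. Here’s a minimal sketch (my illustration, not from the original post) of a toy stream cipher built in the spirit of Kerckhoffs’ principle: everything about the method below is public, and security is meant to rest only on the secret key. (Toy code for illustration only; a real system should use a vetted library and an authenticated cipher.)

```python
import hashlib
import hmac

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a keystream from the secret key using a fully public method
    (HMAC-SHA256 run in counter mode). Nothing here needs to be hidden."""
    out = b""
    counter = 0
    while len(out) < length:
        block = hmac.new(key, nonce + counter.to_bytes(8, "big"),
                         hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:length]

def encrypt(key: bytes, nonce: bytes, message: bytes) -> bytes:
    """XOR the message with the keystream. The adversary can read this
    source code; only the key is secret."""
    ks = keystream(key, nonce, len(message))
    return bytes(m ^ k for m, k in zip(message, ks))

# XOR is its own inverse, so decryption is the same operation.
decrypt = encrypt
```

The point of the sketch is structural, not cryptographic strength: publishing `keystream` and `encrypt` costs the defender nothing, because compromise requires leaking `key` — the small, guardable set of bits Kerckhoffs identified.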

What you’re doing by designing under the assumption of open source is preventing source code leakage from being a danger. And that’s the kind of leakage with a high base rate.

As far back as 1947 Claude Shannon applied this to electronic security — he did critical work on the voice scramblers that were used for secure telephone communications between heads of state during World War II. Shannon said one should always design as though “the enemy knows the system”. The US’s National Security Agency still uses this as a guiding principle in computer-based cryptosystems.

If you’re doing software security, always design as though the enemy can see your source code. I’m still a little puzzled that I was apparently the first person to notice that this was a general argument for open source; as soon as I did, my first thought was more or less “Duh? Somebody should have noticed this sooner?”

Now let’s consider how LLMs change this picture. Or…don’t.

An LLM is like a cryptanalyst with a superhuman attention span that never sleeps. If your system leaks information that can compromise it, that compromise is going to happen a hell of a lot faster than if your adversary has to rely on Mark 1 meatbrains.

But it gets worse. With LLMs, decompilation is now fast and cheap. You have to assume that if an adversary can see your executable binary, they can recover the source code. If you were relying on that to be secret, you are *screwed*.

Leakage control — limiting the set of bits that can yield a compromise — is more important than ever. So security by code obscurity is an even more brittle and dangerous strategy than it used to be.

Anybody who tries to tell you differently is either deeply stupid or trying to sell you something that you should not by any means buy.

April 12, 2026

The two kinds of enshittification

Filed under: Business, Media, Technology — Nicholas @ 03:00

On the social media site formerly known as Twitter, ESR explains the differences between the two kinds of enshittification we’re seeing these days:

It may be time to start distinguishing between classic two-sided enshittification and a more general single-sided variety.

When Cory Doctorow originally defined the term “enshittification” he was describing a very specific thing that can and does happen when a platform like Amazon or Google acts as a two-sided market-maker. They start by reducing friction for both buyers and sellers, get everybody locked in by the higher cost of doing volume business anywhere else, then start charging tolls on both sides and injecting spamware that nobody wants. Eventually even their search function becomes completely shitty.

The increasingly horrifying “agentic” train wreck that Windows 11 has become isn’t a two-sided platform in the same way, but the feel of its late stages is depressingly familiar. It’s so stuffed with bloatware, spamware, and spyware that its nominal function as an operating system to run programs for its users feels almost like an afterthought.

I’m going to call this “single-sided enshittification”, and point out that both kinds stem from the same fundamental disconnect. They’re both things that happen when the dominant revenue stream from a product is disconnected from the needs of its original users.

In both cases, an important factor, though not the only one, is the attack of the adtech vampires. So very much of the ugliness in enshittified platforms is downstream of the easy money that they offer product owners for allowing them to sink their fangs into the information stream.

I don’t have a solution to this problem. But if there is one, it starts with identifying the problem correctly. Enshittification — it’s not just for two-sided platforms anymore.

From the comments on the original post:

QotD: “Disinformation”

Filed under: Government, Liberty, Media, Politics, Quotations, Technology — Tags: , , , — Nicholas @ 01:00

    Neil Stone @DrNeilStone
    X is coordinated disinformation packaged as Free Speech

The concept of disinformation is inherently authoritarian. It presumes some faultless source from which truth flows, such that all speech can be judged by its alignment with this source.

Yes, sometimes certain issues are fairly clear-cut and people are just lying, but more often people fundamentally disagree about both facts and methods. They disagree about who is trustworthy and what institutions and processes are most likely to produce truth.

I, as a private citizen, might call some claim a lie or some person a liar. That’s discourse. I hope to persuade others that I am correct. But to institutionalize disinformation is necessarily to institutionalize a priest caste of truth determiners. This is antithetical to the scientific method and the process of knowledge production in general.

Truth-seeking must start from a place of humility: we are not sure of our claims or our methods. We are doing our imperfect best. We demonstrate the value of our ideas via evidence, argument, and the practical utility they provide. Not by censoring competing ideas.

It is ludicrous to assume that modern academic or journalistic institutions are bias-free oracles, yet this is the basis of the “disinformation” concept.

Hunter Ash, The social media site formerly known as Twitter, 2025-12-27.

April 5, 2026

When military requirements conflict with national policies

On Substack, Holly MathNerd explains why the US military hasn’t ramped up production of drones in light of the experiences of other current conflicts:

Most people who have opinions about the war in Iran are not also reading the Federal Acquisition Regulations. I am, unfortunately for my social life, one of the people who do both.

And when you hold those two things in your head at the same time — what’s happening over the Strait of Hormuz and what’s happening in federal procurement policy — a contradiction emerges that is so glaring, and so consequential, that I could not write about anything else this week.

Here is the contradiction, in full, before I show you the data.

The United States is fighting a war where drones are the decisive tactical weapon. We are spending $2 to $4 million per intercept to stop Iranian drones that cost $50,000 each. Our own offensive drone program shipped what it had into an active war because full-rate production hadn’t started yet. Ukraine, which does not have this problem, produced two million drones in 2024 by building a distributed ecosystem of small manufacturers who iterate their designs every two weeks and sell units for $300 to $5,000 each.
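The cost-exchange arithmetic in that paragraph is worth spelling out; a quick back-of-the-envelope calculation using the essay's own figures shows the asymmetry:

```python
# Figures quoted above: $2-4M per defensive intercept, $50,000 per Iranian
# attack drone, and $300-$5,000 per Ukrainian-built drone.
intercept_cost_low, intercept_cost_high = 2_000_000, 4_000_000
iranian_drone_cost = 50_000

# Each intercept costs the defender 40-80x what the drone cost the attacker.
ratio_low = intercept_cost_low // iranian_drone_cost
ratio_high = intercept_cost_high // iranian_drone_cost
print(f"Cost-exchange ratio: {ratio_low}:1 to {ratio_high}:1")

# At Ukrainian-style unit costs, one interceptor's budget buys a swarm.
ukr_unit_low, ukr_unit_high = 300, 5_000
print(f"$4M buys {4_000_000 // ukr_unit_high:,} to {4_000_000 // ukr_unit_low:,} cheap drones")
```

Losing the exchange 40-to-1 on cost is sustainable in a skirmish and ruinous in a long war, which is why the domestic production base matters so much to the argument that follows.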

We cannot do what Ukraine does, because Congress — correctly, for legitimate national security reasons — spent five consecutive National Defense Authorization Acts closing the door on Chinese drone hardware. DJI, the dominant global manufacturer, is now restricted by four separate federal authorities. There is no waiver for convenience. The wall is complete.

Which means the only path to drone dominance runs through a domestic industrial base capable of producing drones at volume, at low cost, with rapid iteration.

That base exists. Partially. Precariously. And it is built on exactly the kind of small, specialized, distributed manufacturers that the 8(a) federal contracting program was designed to bring into the market.

April 3, 2026

“Rocket launches are America at its best”

Filed under: Space, Technology, USA — Tags: , , — Nicholas @ 04:00

Jen Gerson on the Artemis II launch on April 1st:

Artemis II launch, 1 April, 2026
NASA image

I’m not The Line’s resident space dork; and yet I, like everyone likely reading this piece, watched the launch of Artemis II last night, enraptured and hopeful for a successful slingshot around the moon.

My son watched with me; he counted down from 10 and jumped up when the rockets lit, throwing four astronauts in a tin can into space.

This stuff is cool on its own merit, but it hits us all somewhere a little deeper than mere wonder at the extraordinary mechanics.

Watching a manned rocket launch is the barest little window-crack opening into a distant future. It’s a monumental effort to throw a fine fishing line into the darkness, hoping against hope that some great destiny is on the other side just waiting for us to tug at it.

By all rational accounts space travel is dumb. It’s an extraordinarily expensive use of human capital and time and resources to reach into nothingness and expanse. We all love pictures of stars and planets and nebulae, but we may never glean much of real material value from these investments in our own lifetimes. Or our great-grandchildren’s lifetimes.

There may be nothing but lifeless rock and death beyond our own ecosystem; no other place we will ever call home.

In fact, from where we sit today, that’s probably true.

Yet we do this stupid thing anyway. We must do the stupid thing anyway.

[…]

Rocket launches are America at its best, and perhaps now more than usual, we need to remind ourselves that this best still exists. Perhaps especially on the same night we sat fearing the President would announce that NATO was over and the world was breaking. (He didn’t, and I guess it’s not for now.)

And regardless of what nation we belong to, whether we’re accountants, butlers, or mothers, every single one of us carries that thin thread of life forward. We all take part in the project. We all have a place. Some of the big roles may be assigned to individual players, but the destiny of humanity is shared. (Whether we like it or not.)

So, we can all be moved together in these moments. We can all imagine what great-great-great grandchildren who have long forgotten our own names might think while watching archival footage of the Artemis II launch. What even greater world might they achieve. What more fanciful ambitions might be open to them. Maybe they will say that this was the moment we started to get our priorities right and our acts together. Maybe things will get better.

Maybe Artemis II, absurd and wasteful, is neither. Who knows how my son will metabolize the video stream of this really cool rocket; I cannot say who he may come to be for witnessing it.

Our craziest aspirations are the way we send our love to the children too far distant for us to see or know.

For my own part, I caught the space bug very early through science fiction of the 1950s and 60s, especially from the writings of Robert Heinlein and Arthur C. Clarke. Earth is just our starting point, and one planet isn’t enough to ensure the survival of our species, so exploring space is an evolutionary necessity.

March 15, 2026

Jobs and new technology – the example of the ATM

In Saturday’s FEE Weekly, Diego Costa looks at the classic example of how the role of the bank teller changed when automated teller machines (ATMs) were introduced:

“Pulling out money from ATM” by ota_photos is licensed under CC BY-SA 2.0 .

[…] Those are important findings, but the study of capitalism in the age of AI is larger than labor-saving technologies inside a fixed institutional world. It’s the study of market processes that change the world in which labor takes place.

David Oks gets at this in a recent essay on bank tellers that has been making the rounds. For years, economists and pundits used the ATM to illustrate why technological progress does not necessarily wipe out jobs. In a conversation with Ross Douthat, Vice President J.D. Vance made exactly that point. The ATM automated a large share of what bank tellers used to do, and yet teller employment did not collapse. Why? Because the ATM lowered the cost of operating a branch. Banks opened more branches. Tellers shifted toward relationship management, customer cultivation, and a more boutique kind of service. The machine changed the worker’s role inside the same institution.

That story was true. Until it wasn’t.

As Oks puts it, the ATM did not kill the bank teller, but the iPhone did. Mobile banking changed the consumer interface of finance. Once that happened, the branch ceased to be the unquestioned center of retail banking. And once the branch lost that status, the teller lost the institutional setting that made him economically legible in the first place. The ATM fit capital into a labor-shaped hole. The smartphone changed the shape of the hole.

Vance looks at the ATM era and says: technology does not destroy jobs. Oks looks at the smartphone era and says: it does, just not the technology you expected. But if you stop there, you are still doing what economist Joseph Schumpeter called appraising the process ex visu of a given point of time. As Schumpeter wrote, capitalism is an organic process, and the “analysis of what happens in any particular part of it, say, in an individual concern or industry, may indeed clarify details of mechanism but is inconclusive beyond that”. You shouldn’t study one occupation within one industry and draw conclusions about how technological change works.

The obvious question you still have to answer is: where did those former bank tellers go? What happened to the capital freed when branches closed? What new institutional forms, fintech, mobile payments, embedded finance, neobanks, emerged from the very same process that destroyed the branch model? How many jobs did those create, and in what configurations?

The lost teller jobs are seen. They show up in BLS data and make for a dramatic graph. The unseen is everything the mobile banking revolution enabled, not only within financial services, but across the entire economy. The person who no longer spends thirty minutes at a branch and instead uses that time to manage cash flow for a small business. The immigrant who sends remittances through an app instead of through Western Union. The fintech startup that employs forty engineers building fraud-detection systems. None of that appears in a chart titled “Bank Teller Employment”. The unseen is the world that emerges.

When economists say the ATM was “complementary” to bank tellers, what they usually mean is something quite narrow: the machine performed one set of tasks, such as dispensing cash, and freed the human to concentrate on others, such as relationship banking, cross-selling, and problem-solving.

But the ATM did more than substitute for one task while leaving others to the teller. It made the teller more productive inside the same institutional setting. This is the comparative advantage layer that Séb Krier touches on when he says that “as long as the combination of Human + AGI yields even a marginal gain over AGI alone, the human retains a comparative advantage”. The branch still organized the relationship between bank and customer and the teller still inhabited a role within that world. The ATM simply changed the economics of that role, making the branch cheaper to operate and, paradoxically, more worth expanding.

But the branch is not just a building with unhappy carpet and suspicious lighting. It is an institution. It is a set of roles, expectations, scripts, constraints, and physical arrangements that organize how a bank and a customer relate to one another. It tells people where banking happens, how banking happens, and who performs which function in the ritual. The teller made sense within that world. So did the ATM. They were both playing the same game.

The iPhone did something different. Instead of automating tasks within the branch, it challenged the premise that banking requires a branch at all. It shifted the game to another board. Call this institutional substitution. When a technology is designed to operate within existing rules, the institution can often absorb it, adapt to it, metabolize it. The real threat comes from technologies that are not even playing the same game. The ATM was a move within the branch-banking game. Mobile banking was a move in the higher-order game, the game about which games get played.

Most discussion of AI stops at the level of task substitution and complementarity. Those are necessary questions, but ATM questions.

Joseph Schumpeter understood that entrepreneurship is not simply about making institutions more efficient. It’s about unsettling the institutional forms through which those efficiencies make sense at all. If you ask whether AI can do some of the work of a lawyer, a teacher, a customer service representative, or a junior analyst, you are asking an interesting question. But you are still mostly asking an ATM question. You are asking how capital fits into an existing human role. The more interesting question is whether AI changes the institutional setting that made that role intelligible in the first place. Now we are talking about institutional substitution. It’s a more dangerous territory and a more interesting territory.

And if the bank teller story is any guide, the technologies that bring about institutional substitution will not necessarily be the ones designed to automate an institution’s existing tasks. They may come from somewhere orthogonal, from applications and configurations that incumbents were not watching because they did not look like competition. The iPhone was not competing with the ATM. It was playing a different game, and it happened to make the old game less central.

So the real question is not whether AI will destroy jobs in the abstract. The real question is how AI will reorganize the architecture of production, consumption, and coordination. Not “AI does what lawyers do, but cheaper”, but rather “AI enables a new way of resolving disputes or structuring agreements that makes the current institutional form of legal services less necessary”.

Update, 16 March: Welcome, Instapundit readers! Have a look around at some of my other posts you may find of interest. I send out a daily summary of posts here through my Substack (https://substack.com/@nicholasrusson) that you can subscribe to if you’d like to be informed of new posts in the future.

March 6, 2026

How Not to Build a Plane – TSR2 vs F-111

HardThrasher
Published 5 Mar 2026

In the late Cold War, Britain and the United States tried to build the ultimate low-level supersonic strike aircraft. The result was two of the most ambitious aviation programmes ever attempted: the BAC TSR-2 and the General Dynamics F-111 Aardvark. Both aircraft were designed to solve the same terrifying problem. Soviet surface-to-air missiles had made high-altitude bombing almost suicidal. The next generation of bombers would have to fly low and fast, automatically following the terrain, navigating using primitive onboard computers, and delivering nuclear or conventional weapons deep inside enemy territory. In theory, these aircraft would be revolutionary.

In practice … things went wrong.

The TSR2 programme became one of the most controversial cancellations in British aviation history. Plagued by spiralling costs, technical ambition far beyond the computers of the era, and a labyrinth of government bureaucracy, the aircraft was cancelled in 1965 after only a handful of test flights. Meanwhile the American F-111 survived the same technological challenges and political battles — but only just. Development disasters, crashes, exploding engines, and staggering cost overruns nearly killed the programme multiple times before the aircraft finally entered service.

In this video we explore:

• Why the TSR-2 was so technologically ambitious

• How terrain-following radar and early flight computers nearly broke both projects

• The political battles inside Whitehall and Washington

• Why the F-111 Aardvark survived when TSR2 did not

• And what these aircraft reveal about Cold War military technology and procurement

The TSR2 and F-111 weren’t just aircraft. They were early attempts at something closer to a flying computer, built decades before modern electronics made such systems reliable. And that ambition nearly destroyed both programmes.
(more…)
