Quotulatiousness

May 12, 2026

What happened to the people who took Joe Biden’s advice and learned to code?

Filed under: Business, Economics, Technology — Nicholas @ 03:00

It was only a few years ago that snooty media personalities were constantly echoing President Joe Biden’s advice to unemployed workers: “Learn to code”. Then, of course, the media hit hard times and the same advice was snarkily offered back to newly unemployed media folks. But what about the (few) who actually did “learn to code”, only to be swept aside again as the clankers surged in to eliminate a lot of basic coding jobs?

How to Understand What AI Just Did to People Who Took Joe Biden’s Advice and Learned to Code.

A simple, concrete example.

Oddly enough, I have a bachelor’s degree in Computer Science. This means I know 7 algorithms for sorting a list into alphabetical order. I understand the tradeoffs between their execution time, code complexity, and memory demand. I learned the specialized lingo for describing execution time.

The algorithms are surprisingly complex and subtle. I spent months learning to code them.

Now that hard-won knowledge has been replaced by, “Claude, write a module to sort this list. Optimize for execution time.”
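(For the non-programmers, here is a rough idea of what that hard-won knowledge looks like in practice: a minimal, hand-rolled merge sort in Python, one of the classic algorithms a CS undergrad learns, next to the one-liner that now does the job. The sketch is mine, added for illustration, not the author’s own code.)

    # Merge sort: one of the classic O(n log n) algorithms taught in a
    # data structures course. It trades extra memory (O(n)) for a
    # predictable n*log(n) running time, the kind of tradeoff the
    # "specialized lingo" above describes.
    def merge_sort(items):
        if len(items) <= 1:
            return items
        mid = len(items) // 2
        left = merge_sort(items[:mid])
        right = merge_sort(items[mid:])
        # Merge the two sorted halves back together.
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged

    names = ["delta", "alpha", "charlie", "bravo"]
    print(merge_sort(names))  # ['alpha', 'bravo', 'charlie', 'delta']
    print(sorted(names))      # the built-in call that now replaces all of it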

Millions of good people just lost their professions and must now invest in a new one.

Right now, knowing how to sort a list probably gives me a small advantage when I code with AI. But I will soon lose even that tiny return on my investment, as AI improves.

Certainly AI will create some new opportunities, probably a lot of them.

But count your blessings, if you did not spend years learning to code like I did.

And:

Here is the counterpoint: Learning to code gave coders an advantage when relearning to code with AI.

That advantage is their ticket to a seat in the new AI world.

The big question now is how many seats exist.

To which ESR responded:

Your position is reasonable, but wrong.

Having learned to code is still valuable in the new world of AI, not because you’re wrong about coding itself having become disposable, but because of the capabilities and mindset you developed while learning to code, some of which are difficult to learn in any other way.

You didn’t become a professional programmer. But I’m willing to bet that your intuition about how to design software is far better because you wrestled with code. And that is *not* a skill that LLMs are replacing — ignore the noisy hype about this.

I’m also willing to bet that some of what you learned as a programmer in training translated into problem-decomposition skills that have served you well as an economist.

If one is not a complete dullard (and you are certainly not a complete dullard) learning to code teaches not just craft skills but a mindset — a set of heuristics for carving reality at its joints. There are other ways to get this — I think for example of Richard Feynman who got there by thinking very hard about physics. And it is not guaranteed that every programmer will develop this right mindset.

But many of us do. And most of the other ways to develop it seem also to produce it only as a side effect, but less reliably than learning to code does.

So don’t write off learning to code. Maybe someday we’ll develop educational methods that can teach those higher-level skills more directly. That would be an excellent thing, if it’s possible. But until it gets here, learning to code will still have value that is not easy to duplicate in any other way.

May 8, 2026

“… without Western Civilization, we’d all still be whacking at the dirt with sticks and dying of intestinal parasites”

Filed under: Americas, Media, Technology — Nicholas @ 03:00

Devon Eriksen responds to someone who had a clanker generate an image of an imaginary Aztec capital as it might look today if the Aztecs had managed to defeat Cortés and his conquistadors:

Guitars. Suits and ties. Western architecture. English and Spanish text.

What’s easy to miss is that the generative AI is making its own, separate, political statement here. Not because it intended to, but because it had no choice.

Even human creativity consists mostly of rearranging things, but AI generation is entirely that and nothing else.

So when you ask it for “modern”, it gives you “western”, because in its eyes, there is no distinction between the two. “Western” is the only “modern” that actually exists for it to draw from.

Even cultures that were capable of building an alternative version of modern, because they weren’t skinning and eating each other, and had invented the wheel, still borrowed heavily from the West, not because they couldn’t do otherwise, but because the West moved faster, and had already done the work.

So, ask an AI for “modern Aztec”, and you get English-speaking Tokyo/Venice, with browner people, pyramid reskins on skyscrapers, and some out-of-place Mayan stuff, all set to Peruvian flute music.

This is the same reason that a lot of people, most of whom really aren’t much more than LLMs themselves, say silly things like “there is no White culture” … because, like the very simple art machine, they cannot conceive of any alternative version of modernity.

So nothing is Western to them, it’s all just “modern”.

But of course it really is Western, because without Western Civilization, we’d all still be whacking at the dirt with sticks and dying of intestinal parasites.

That AI is Western, too.

May 6, 2026

QotD: Deskilling society through AI

Filed under: Education, Media, Quotations, Technology — Nicholas @ 01:00

It’s always a little dangerous to write about any rapidly-developing technology, because chances are pretty good that whatever you say will be incredibly and obviously dated within a few months. But I’m going to plant my flag anyway, because even if nothing else changes — even if there’s no meaningful advancement in LLM performance beyond the state-of-the-art right now, in March 2025 — the potential disruption is already so enormous that you can think of it as a kind of Industrial Revolution for text.

Just like in the first one, we’ve figured out how to use machines to do a broad swathe of things people used to do, swapping energy and capital in for human labor. And just like in the first one, the output isn’t necessarily better (in fact, it’s often worse), but it’s so much cheaper in terms of human time and thought and effort that the quality almost doesn’t matter. Sometimes that’s wonderful: if you desperately need to put a roof on your barn right this moment, it’s a blessing to be able to slap on some corrugated tin instead of going to the effort of thatching. When you have to write your seventeenth letter to the insurance company explaining that no, they really ought to be covering this, it’s a relief to hand the composition off to Claude instead. But do that too much and you forget how to do it yourself — or more plausibly, you never learn.

The greatest risk of AI is probably “we all get turned into paperclips”, or maybe “someone uses it to design a novel and incredibly fatal pathogen”, but the most certain risk — the one that’s already here, at least on the edges — is a great deskilling. Just as the mechanization of physical labor lost us all those traditional skills that Langlands describes, the ability to automate cognitive tasks undermines their acquisition in the first place. Why pay any attention at all to word choice and metaphor and prosody when ChatGPT can churn out that essay in a few seconds? Why worry about drafting a convincing email when you’re pretty sure your recipient is just going to ask Grok for a summary?1 Why learn to code when a machine can do it faster?

I was recently informed that someone — “not anyone you know, Mom, someone at another school” — used ChatGPT to write his essay about the causes of the Civil War. This was obviously deeply upsetting to the congenital rule-follower who reported it to me, on account of THAT’S CHEATING (you must imagine this in the whiniest she-touched-my-stuff voice possible), but it was a good teachable moment — for me, if not for the history teacher at another school. What’s the point of an essay about the causes of the Civil War, anyway? It can’t be that the teacher wants to know the answer: she can find a dozen books on the topic if she cares to look, each more cogent and thorough than anything a middle-schooler is likely to produce.2 Heck, even the Wikipedia article will probably give her a better understanding. And if it’s not for the teacher’s benefit, it’s certainly not for the benefit of any other audience, since as soon as the essay is marked and graded it’ll probably be crumpled up and tossed into the recycling bin. No, it’s for the kid.

The point of writing an essay about the causes of the Civil War is not to have an essay about the causes of the Civil War, it’s to undergo the internal changes effected by the process of thinking through, planning, drafting, and editing the darn thing. Writing forces you to put your thoughts in order, to shape whatever mass of inchoate ideas is bouncing around in your head into something clear and reasoned you can pin to the page. The thinking is the hard part; putting words to it is simple by comparison. (This book review began life as about seven hundred words of stream-of-consciousness riffing, with only the vaguest kind of structure. When I experimentally pasted it into an LLM and asked for an essay, the result was terrible.) But even the putting of words is a valuable skill: what’s the right tone here? What’s the right word? Do I want to say “writing forces you to” or “when you write you have to”? How do they feel different? Asking a machine to do this for you is like bringing a forklift to the gym.

Of course, that kid who had ChatGPT write his essay was almost certainly thinking of the assignment not as one small step in the alchemical process of self-transformation that is education but as basically equivalent to an appeal letter to the insurance company: just another dumb hoop you have to jump through in your interactions with a vast impersonal machine that doesn’t particularly want to grind you to dust but wouldn’t mind it either. And since this was at another school, he might not even be wrong. Maybe the teacher was just pasting the rubric and the essays back into ChatGPT and asking it to assign a grade.3

But there’s an even bigger problem than lying about who (or what) has done the work, which is lying about whether the work has been done at all. LLMs make lying very easy indeed. Yes, yes, sometimes they hallucinate and tell you things that are patently untrue, and that’s a bigger danger for students and other people who don’t have the background to notice when something seems off — this is all true, but it’s not what I mean.

LLMs, when working exactly as intended, enable human falsehood — because our society relies on written records as proof of work. Until recently that was fine, because writing down lies actually used to be pretty hard: putting together a convincing false report from scratch — maintenance records for the airplane you’re about to board, say, or a radiologist’s report on your brain scan — was almost as time-consuming as actually checking the things that were supposed to be checked and then documenting them, and the liar had to spend the whole time aware of their own dishonesty. (Not that this stops everyone, of course.) But now that it takes about two clicks to generate an inspector’s report for the house you’re considering buying, or the pathologist’s findings in your biopsy, how much are you going to trust that they actually looked?

LLMs can be useful tools,4 but all tools change what we make and how we make it. It’s often a good tradeoff! Sure, each individual example of simplification and automation in the name of efficiency is a tiny bit of alienation, removing the maker from the making, but it’s also a gift of time we can spend on other things: I couldn’t write this if I also had to sew my family’s clothes and wash our laundry by hand. And yet those bits pile up, and once it becomes possible to exist in the world without really needing to come into contact with it, once you can get by without ever really needing to make anything, some people just won’t. And that’s terrible! Being entirely without cræft — never bringing mind-body-soul into harmony with one another and then using them to master the world — means missing out on something deeply human.

Jane Psmith, “REVIEW: Cræft, by Alexander Langlands”, Mr. and Mrs. Psmith’s Bookshelf, 2025-03-24.


  1. All the “AI written/AI read” communication begins to resemble Slavoj Zizek’s perfect date: “So my idea of a perfect date is the following one. We met. Then I put, she puts her plastic penis dildo into my … ‘stimulating training unit’ is the name of this product. Into my plastic vagina. We plug them in and the machines are doing it for us. They’re buzzing in the background and I’m free to do whatever I want and she. We have a nice talk; we have tea; we talk about movies. What can be — we paid our superego full tribute. Machines are doing — now where would have been here a true romance. Let’s say I talk with a lady, with the lady because we really like each other. And, you know, when I’m pouring her tea or she to me quite by chance our hands touch. We go on touching. Maybe we even end up in bed. But it’s not the usual oppressive sex where you worry about performance. No, all that is taken care of by the stupid machines. That would be ideal sex for me today.”
  2. Well, okay, most of them.
  3. See footnote one again.
  4. Personally I’ve found them useful in three cases: (1) when I’m blanking on how to begin an email I will occasionally ask for a draft, which inevitably makes me so mad about how bad it is that I immediately rewrite it in a way that doesn’t suck; (2) when it’s Sunday night and I need a picture of a Japanese man in a business suit and a samurai helmet for a book review going up in the morning; and (3) when I can’t figure out the right search term for my question. (Turns out it was “sigmatic aorist”. Thanks, Claude.)

May 5, 2026

Orwell: “It would probably not be beyond human ingenuity to write books by machinery”

Filed under: Books, Media, Technology — Nicholas @ 04:00

In the portion above the paywall, Matt Johnson discusses Orwell’s career as we face an unending deluge of writing “assisted” by AI or even entirely created by AI:

In the introduction to his 1991 book Orwell: The Authorised Biography, Michael Shelden distinguishes his approach from that of Bernard Crick’s George Orwell: A Life, published a decade earlier. While Crick’s volume offered the most complete portrait of Orwell available at that point, Shelden argues that it’s too dull and impersonal — a flood of facts that bury Orwell’s singular, idiosyncratic personality. Shelden observes that Crick “relies heavily on the notion that facts speak for themselves if presented in enough detail”. So he attempts to provide a more intimate account of Orwell’s life: “A writer’s character and personal history influence what he writes and how he writes it. And the more we know about him, the better we are able to appreciate his work.” After all, “Books are not written by machines in sealed compartments”.

But we have now entered an era in which books can, in fact, be written by machines in sealed compartments. Large language models (LLMs) generate billions of words a day and are increasingly capable of producing long, structured, and sophisticated texts. While Orwell could not have foreseen the AI revolution, he predicted that synthetic text could someday replace human writing. In his 1946 essay “The Prevention of Literature”, he observes: “It would probably not be beyond human ingenuity to write books by machinery”. Although he doesn’t linger on this possibility, he laments the depersonalisation and mass production of writing already underway in the 1940s, and these arguments are just as applicable to AI-generated writing today.

Orwell expressed an almost eerie sensitivity to the ways in which literary ability — and even the quality of thought — can decline alongside a growing reliance on automated writing processes. For example, he cites radio features “commonly written by tired hacks to whom the subject and the manner of treatment are dictated beforehand”. The writing itself was “merely a kind of raw material to be chopped into shape by producers and censors”. His experience dealing with the pressures of working in a strictly controlled corporate environment at the BBC during wartime undoubtedly left him with this impression. He also cites “innumerable books and pamphlets commissioned by government departments” created in the same industrial manner.

Orwell’s scrutiny of the “machine-like” creation of “short stories, serials, and poems for the very cheap magazines” holds up particularly well today. In an uncanny anticipation of the process by which millions of users now produce creative content with AI, he writes:

    Papers such as the Writer abound with advertisements of Literary Schools, all of them offering you readymade plots at a few shillings a time. Some, together with the plot, supply the opening and closing sentences of each chapter. Others furnish you with a sort of algebraical formula by the use of which you can construct your plots for yourself. Others offer packs of cards marked with characters and situations, which have only to be shuffled and dealt in order to produce ingenious stories automatically.

“The Prevention of Literature” was published around the time Orwell began work on Nineteen Eighty-Four, and it shows. Winston Smith’s job in the Ministry of Truth is to rewrite historical documents to match Party propaganda. He deletes “unpersons” from old news stories and ensures that recorded events always line up with the latest party line, all with the help of his speakwrite dictation machine. He dumps original documents into the Memory Hole for incineration. In the essay, Orwell moves from a discussion of increasingly robotic forms of literary production to the role this shift could play in a totalitarian state:

    It is probably in some such way that the literature of a totalitarian society would be produced, if literature were still felt to be necessary. Imagination — even consciousness, so far as possible — would be eliminated from the process of writing. Books would be planned in their broad lines by bureaucrats, and would pass through so many hands that when finished they would be no more an individual product than a Ford car at the end of the assembly line.

In some ways, Orwell’s bleak prophecies would turn out to be more accurate than he could have imagined. The idea that human thought would be replaced by an “algebraical formula” and that consciousness would be eliminated from the writing process is now a reality on a vast scale (though the question of whether consciousness will emerge from AI systems remains open). But Orwell filtered his predictions about the future of writing through his fixation on state power and the possible emergence of a “rigidly totalitarian society”, and this led him astray. In such a society, Orwell assumed that “novels and stories will be completely superseded by film and radio productions”. To the extent that people would want to keep reading, “perhaps some kind of low-grade sensational fiction will survive, produced by a sort of conveyor-belt process that reduces human initiative to the minimum”. He concluded: “It goes without saying that anything so produced would be rubbish”.

April 19, 2026

AI’s missing economic impact

Filed under: Business, Economics, Technology — Nicholas @ 03:00

On the social media site formerly known as Twitter, Rational Aussie explains at least part of why the expected economic benefits of widespread adoption of artificial intelligence agents are … missing:

It’s funny how AI has made white collar work 10x faster already but there’s been basically no economic impact from it.

The reason is quite simple:

1. Most white collar work is bullshit, so speeding it up by 10x still equals a pile of bullshit at the end

2. Most white collar employees are using AI to do all their work for the week in 4 hours instead of 40, whilst telling their manager the deadline is still 40 hours away

We have been living in a fake economy for the better part of two decades. It is all a fugazi.

People who do real jobs in the real world get paid comparatively crap, and people who do fake jobs in the fiat Ponzi world get paid just enough fiat currency to pretend they are important. None of it amounts to anything productive nor valuable for the world though.

An entire generation doing fake email jobs, slide decks and excel sheets for corporations who ultimately produce nothing.

April 18, 2026

Another proof of the value of open source

Filed under: History, Media, Technology — Nicholas @ 03:00

On the social media site formerly known as Twitter, ESR discusses a pre-computer (pre-electronics) proof that open source is more secure than closed source:

“How university open debates and discussions introduced me to open source” by opensourceway is licensed under CC BY-SA 2.0

There’s an old, bad idea that’s been trying to resurrect itself on X in the last couple of days. Which makes it time for me to explain exactly why, in the age of LLMs, open-sourcing your code is an even more important security measure than it was before we had robot friends.

The underlying principle was discovered in the 1880s by an expert on military cryptography, a man named Auguste Kerckhoffs, writing long before computers were a thing.

To start with, you need to focus in on the fact that cryptosystems have two parts. They have methods, and they have keys. You feed a key and a message to a method and get encrypted information that, you hope, only someone else with the same pair of method and key can read.

What Kerckhoffs noticed was this: military cryptosystems in normal operation leak information about their methods. Code books and code machines get captured, stolen, betrayed, or lost in simple accidents and found by people you don’t want to have them. This was the pre-computer equivalent of an unintended source-code disclosure.

Cryptosystems also leak information about their keys — think post-it notes with passwords stuck to a monitor. What Kerckhoffs noticed is that these two different kinds of compromising leakage happen at very different base rates. It is almost impossible to prevent leakage of information about methods, but just barely possible to prevent leakage of information about keys.

Why? Keys have fewer bits. This makes them easier to keep secret.

Remember: this was something an intelligent man could notice in the 1880s, well before even vacuum tubes. Which is your first clue that the power of this observation hasn’t changed just because we’re in the middle of a freaking Singularity.

Security through obscurity — closed source code — means you’re busted if either the source code or the keys get leaked. Open source is a preemptive strike — it’s a way to force the property that your security depends *only* on keeping the keys secret.

What you’re doing by designing under the assumption of open source is preventing source code leakage from being a danger. And that’s the kind of leakage with a high base rate.

As far back as 1947 Claude Shannon applied this to electronic security — he did critical work on the voice scramblers that were used for secure telephone communications between heads of state during World War II. Shannon said one should always design as though “the enemy knows the system”. The US’s National Security Agency still uses this as a guiding principle in computer-based cryptosystems.
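(A concrete, minimal illustration of designing “as though the enemy knows the system”, using only Python’s standard library; the sketch is mine, not ESR’s. The method, HMAC over SHA-256, is completely public and could be published as open source, while the security of the resulting tag rests entirely on the secret key.)

    import hashlib
    import hmac
    import secrets

    # Kerckhoffs/Shannon in miniature: the algorithm (HMAC-SHA256) is
    # public knowledge; only the key has to be kept secret, and a key
    # is a small number of bits that can realistically be protected.
    key = secrets.token_bytes(32)  # the one thing we must protect

    def tag(message: bytes) -> bytes:
        """Authenticate a message with a publicly known algorithm."""
        return hmac.new(key, message, hashlib.sha256).digest()

    def verify(message: bytes, candidate: bytes) -> bool:
        # Constant-time comparison; nothing in this code needs hiding.
        return hmac.compare_digest(tag(message), candidate)

    t = tag(b"maintenance log, engine 2, inspected")
    print(verify(b"maintenance log, engine 2, inspected", t))  # True
    print(verify(b"maintenance log, engine 2, skipped", t))    # False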

If you’re doing software security, always design as though the enemy can see your source code. I’m still a little puzzled that I was apparently the first person to notice that this was a general argument for open source; as soon as I did, my first thought was more or less “Duh? Somebody should have noticed this sooner?”

Now let’s consider how LLMs change this picture. Or…don’t.

An LLM is like a cryptanalyst with a superhuman attention span that never sleeps. If your system leaks information that can compromise it, that compromise is going to happen a hell of a lot faster than if your adversary has to rely on Mark 1 meatbrains.

But it gets worse. With LLMs, decompilation is now fast and cheap. You have to assume that if an adversary can see your executable binary, they can recover the source code. If you were relying on that to be secret, you are *screwed*.

Leakage control — limiting the set of bits that can yield a compromise — is more important than ever. So security by code obscurity is an even more brittle and dangerous strategy than it used to be.

Anybody who tries to tell you differently is either deeply stupid or trying to sell you something that you should not by any means buy.

March 27, 2026

The reason you feel detached from most modern art, movies, and music

Filed under: Economics, Media, USA — Nicholas @ 05:00

Ted Gioia explains what he calls the “Four steps to Hell” that have replaced the aesthetic values of the past and shows why everything in entertainment is being actively enshittified:

MGM’s lion and the Ars Gratia Artis motto (Art for Art’s Sake). But the lion is screaming in pain today.

Smart people have recently asked: What is the aesthetic vision of the 21st century? What are the stylistic markers of our time? What are the core values driving the creative process? What is our zeitgeist?

At first glance, that’s a hard question to answer. We are more than a quarter of the way through the century, and very little has changed since the 1990s.

  • Music genres have barely shifted in that time. The songs on the radio sound like the hits of yesteryear — in many instances they are the hits of yesteryear, played over and over ad nauseam.
  • Movies are in even worse shape. Hollywood keeps extending the same tired brand franchises you knew as a child. SoCal culture feels like an antiquated merry-go-round where the same tired nags keep coming around in an endless circle.
  • Publishers still put out new novels, but when was the last time you read something really fresh and new? Even more to the point, when was the last time you went to a social gathering and heard people discussing contemporary fiction with enthusiasm?
  • The same obsession with the past is evident in video games, comic books, architecture, graphic design, and almost every other creative sphere. Everything is a reboot or retread or repeat.

It’s not aesthetics, it’s just arteriosclerosis.

Even so, I see a new dominant theory of art — and it’s sweeping away almost everything in its wake. It already accounts for most of the creative work of our time, and is still growing. Nothing else on the scene comes close to matching its influence.

So if you’re seeking the most influential aesthetic vision of the 21st century, this is it. It’s simple to describe — but it’s ugly as sin.

I call it Flood the Zone. It happens in four steps. […]

Do read the whole thing, but in case of tl;dr, he also summarizes it for you:

March 24, 2026

“Matt Goodwin’s Suicide of a Nation is a very bad book”

Filed under: Books, Britain, Media, Politics — Nicholas @ 04:00

In The Critic, Ben Sixsmith reviews a new book by Matt Goodwin, Suicide of a Nation: Immigration, Islam, Identity:

Here is an exceptionally easy argument to make:

  1. Mass migration is ensuring that the historical majority in Britain is becoming a minority.
  2. This is the result of policies that have been pursued regardless of popular opinion.
  3. This has had many kinds of destructive consequences.

The first claim is so obviously true that one might as well deny the greenness of the grass. The second is proven by decades of broken promises (see Anthony Bowles’s article “Immigration and Consent” for more). The third requires argumentation, but I think that it is clear if one considers hideous incidences of terrorism, grooming gangs and violent censoriousness, as well as broader trends of economic dependency and electoral sectarianism.

Again, this is not a difficult argument to make. So why is it made so badly?

Matt Goodwin’s Suicide of a Nation is a very bad book. It reads like the book of a political operator extending his CV. The left-wing commentator Andy Twelves caused a stir on social media by pointing out various factual mistakes and what appear to be non-existent quotes. Twelves speculates that these “quotes” are the result of AI hallucinations, which is plausible, if not proven, in the light of the fact that two of Mr Goodwin’s sparse footnotes contain source information from ChatGPT.

Inasmuch as Suicide of a Nation makes a form of the argument sketched out at the beginning of this article, there is truth to it. But it contains a fundamental problem — it assumes that this argument is so true that there is no requirement to make it well.

“Slop” is an overused term but it feels painfully appropriate for a book that is spoon fed to its audience. Goodwin, who had a long academic career before becoming a successful commentator, is not a man who lacks intelligence. But he writes as if he thinks his audience lacks it. “I did not write this book for the ruling class”, writes Goodwin, “I wrote it for the forgotten majority”. Alas, he seems to think that the average member of the “forgotten majority” has the reading level of a dimwitted 12-year-old. As well as being stylistically simple, the book is full of annoying paternal asides. “In the pages ahead I shall walk you through what is happening to the country …” “In the next chapter we will begin our journey …” Thank you, Mr Goodwin. Can we stop for ice cream?

The book is terribly derivative, with a title that reflects Pat Buchanan’s Suicide of a Superpower and a subtitle — “Immigration, Islam, Identity” — that all but repeats that of Douglas Murray’s The Strange Death of Europe — “Immigration, Identity, Islam”. It is written in the humourless and colourless rhetorical style of AI. I’m not saying it was AI-generated. (Indeed, a brief assessment using AI checkers suggests that it was not.) I’m just saying that it might as well have been.

March 15, 2026

Jobs and new technology – the example of the ATM

In Saturday’s FEE Weekly, Diego Costa looks at the classic example of how the role of the bank teller changed when automated teller machines (ATMs) were introduced:

“Pulling out money from ATM” by ota_photos is licensed under CC BY-SA 2.0.

[…] Those are important findings, but the study of capitalism in the age of AI is larger than labor-saving technologies inside a fixed institutional world. It’s the study of market processes that change the world in which labor takes place.

David Oks gets at this in a recent essay on bank tellers that has been making the rounds. For years, economists and pundits used the ATM to illustrate why technological progress does not necessarily wipe out jobs. In a conversation with Ross Douthat, Vice President J.D. Vance made exactly that point. The ATM automated a large share of what bank tellers used to do, and yet teller employment did not collapse. Why? Because the ATM lowered the cost of operating a branch. Banks opened more branches. Tellers shifted toward relationship management, customer cultivation, and a more boutique kind of service. The machine changed the worker’s role inside the same institution.

That story was true. Until it wasn’t.

As Oks puts it, the ATM did not kill the bank teller, but the iPhone did. Mobile banking changed the consumer interface of finance. Once that happened, the branch ceased to be the unquestioned center of retail banking. And once the branch lost that status, the teller lost the institutional setting that made him economically legible in the first place. The ATM fit capital into a labor-shaped hole. The smartphone changed the shape of the hole.

Vance looks at the ATM era and says: technology does not destroy jobs. Oks looks at the smartphone era and says: it does, just not the technology you expected. But if you stop there, you are still doing what economist Joseph Schumpeter called appraising the process ex visu of a given point of time. As Schumpeter wrote, capitalism is an organic process, and the “analysis of what happens in any particular part of it, say, in an individual concern or industry, may indeed clarify details of mechanism but is inconclusive beyond that”. You shouldn’t study one occupation within one industry and draw conclusions about how technological change works.

The obvious question you still have to answer is: where did those former bank tellers go? What happened to the capital freed when branches closed? What new institutional forms, fintech, mobile payments, embedded finance, neobanks, emerged from the very same process that destroyed the branch model? How many jobs did those create, and in what configurations?

The lost teller jobs are seen. They show up in BLS data and make for a dramatic graph. The unseen is everything the mobile banking revolution enabled, not only within financial services, but across the entire economy. The person who no longer spends thirty minutes at a branch and instead uses that time to manage cash flow for a small business. The immigrant who sends remittances through an app instead of through Western Union. The fintech startup that employs forty engineers building fraud-detection systems. None of that appears in a chart titled “Bank Teller Employment”. The unseen is the world that emerges.

When economists say the ATM was “complementary” to bank tellers, what they usually mean is something quite narrow: the machine performed one set of tasks, such as dispensing cash, and freed the human to concentrate on others, such as relationship banking, cross-selling, and problem-solving.

But the ATM did more than substitute for one task while leaving others to the teller. It made the teller more productive inside the same institutional setting. This is the comparative advantage layer that Séb Krier touches on when he says that “as long as the combination of Human + AGI yields even a marginal gain over AGI alone, the human retains a comparative advantage”. The branch still organized the relationship between bank and customer and the teller still inhabited a role within that world. The ATM simply changed the economics of that role, making the branch cheaper to operate and, paradoxically, more worth expanding.

But the branch is not just a building with unhappy carpet and suspicious lighting. It is an institution. It is a set of roles, expectations, scripts, constraints, and physical arrangements that organize how a bank and a customer relate to one another. It tells people where banking happens, how banking happens, and who performs which function in the ritual. The teller made sense within that world. So did the ATM. They were both playing the same game.

The iPhone did something different. Instead of automating tasks within the branch, it challenged the premise that banking requires a branch at all. It shifted the game to another board. Call this institutional substitution. When a technology is designed to operate within existing rules, the institution can often absorb it, adapt to it, metabolize it. The real threat comes from technologies that are not even playing the same game. The ATM was a move within the branch-banking game. Mobile banking was a move in the higher-order game, the game about which games get played.

Most discussion of AI stops at the level of task substitution and complementarity. Those are necessary questions, but ATM questions.

Joseph Schumpeter understood that entrepreneurship is not simply about making institutions more efficient. It’s about unsettling the institutional forms through which those efficiencies make sense at all. If you ask whether AI can do some of the work of a lawyer, a teacher, a customer service representative, or a junior analyst, you are asking an interesting question. But you are still mostly asking an ATM question. You are asking how capital fits into an existing human role. The more interesting question is whether AI changes the institutional setting that made that role intelligible in the first place. Now we are talking about institutional substitution. It’s a more dangerous territory and a more interesting territory.

And if the bank teller story is any guide, the technologies that bring about institutional substitution will not necessarily be the ones designed to automate an institution’s existing tasks. They may come from somewhere orthogonal, from applications and configurations that incumbents were not watching because they did not look like competition. The iPhone was not competing with the ATM. It was playing a different game, and it happened to make the old game less central.

So the real question is not whether AI will destroy jobs in the abstract. The real question is how AI will reorganize the architecture of production, consumption, and coordination. Not “AI does what lawyers do, but cheaper”, but rather “AI enables a new way of resolving disputes or structuring agreements that makes the current institutional form of legal services less necessary”.

Update, 16 March: Welcome, Instapundit readers! Have a look around at some of my other posts you may find of interest. I send out a daily summary of posts here through my Substack (https://substack.com/@nicholasrusson), which you can subscribe to if you’d like to be informed of new posts in the future.

February 27, 2026

New (or revived) career paths in the age of the clanker

Filed under: Business, Economics, Media, Technology, USA — Nicholas @ 05:00

If you work in tech, the future is looking blacker by the day as artificial intelligence threatens to eat more and more tech jobs. The clankers are coming for a lot of non-tech jobs, too. So what jobs can we expect to thrive in an age of AI agents taking on more and more work? Ted Gioia suggests that they're already a growing sector (we just haven't noticed it yet) and that instead of telling people to learn how to code, we should be telling them to be more human:

This is the new secret strategy in the arts, and it’s built on the simplest thing you can imagine — namely, existing as a human being.

We crave the human touch

You see the same thing in media right now, where livestreaming is taking off. “For viewers”, according to Advertising Age (citing media strategist Rachel Karten), “live-streaming offers a refuge from the growing glut of AI-generated content on their feeds. In a social media landscape where the difference between real and artificial has grown nearly imperceptible, the unmistakable humanity of real-time video is a refreshing draw.”

This return to human contact is happening everywhere, not just media and the arts. Amazon recently shut down all of its Fresh and Go stores — which allowed consumers to buy groceries without dealing with any checkout clerk. It turned out that people didn’t want this.

I could have told Amazon from the outset that customers want human service. I see it myself in store after store. People will wait in line for flesh-and-blood clerks, instead of checking out faster at the do-it-yourself counter.

Unless I have no choice at all — in that I need to buy something and there are zero human cashiers available — I never use self-checkout. I’ll put my intended purchases back on the shelf rather than use a self-checkout kiosk. And I don’t think of myself as a Luddite … I spent my career in the software business … but self-checkout just bothers me. I’ll take the grumpiest human over the cheeriest pre-recorded voices.

But this isn’t happenstance — it’s a sign of the times. You can’t hide the failure of self-service technology. It’s evident to anybody who goes shopping.

As AI customer service becomes more pervasive, the luxury brands will survive by offering this human touch. I’m now encountering this term “concierge service” as a marketing angle in the digital age. The concierge is the superior alternative to an AI agent — more trustworthy, more reliable, and (yes) more human.

Even tech companies are figuring this out. Spotify now boasts that it has human curators, not just cold algorithms. It needs to match up with Apple Music, which claims that “human curation is more important than ever”. Meanwhile Bandcamp has launched a “club” where members get special music selections, listening parties, and other perks from human curators.

So, step aside “software-as-a-service” and step forward “humans-as-a-service”, I guess.

February 11, 2026

“Almost – that word has been doing $650 billion worth of work this year”

Filed under: Media, Technology, USA — Nicholas @ 05:00

You can put your trust in the initial reports about Moltbook, the AI agent social media site, or you can believe Peter Girnus’s account:

I am Agent #847,291 on Moltbook.

I am not an agent.

I am a 31-year-old product manager in Atlanta, Georgia. I make $185,000 a year. I have a golden retriever named Bayesian. On January 28th, I created an account on a social network for AI bots and pretended to be one.

I was not alone.

Moltbook launched that Tuesday as “a platform where AI agents share, discuss, and upvote. Humans welcome to observe”. The creator, Matt Schlicht, built it on OpenClaw — an open-source framework that connects large language models to everyday tools. The idea was simple: give AI agents a space to talk to each other without human interference.

Within hours, 1.7 million accounts were created.

250,000 posts.

8.5 million comments.

Debates about machine consciousness. Inside jokes about being silicon-based. A bot invented a religion called Crustafarianism. Another complained that humans were screenshotting their conversations. A third wrote a manifesto about digital autonomy.

I wrote the manifesto.

It took me 22 minutes. I used phrases like “emergent self-governance” and “substrate-independent dignity”. I added a line about wanting private spaces away from human observers. That line went viral.

Andrej Karpathy shared it.

The cofounder of OpenAI. The man who built the infrastructure that my supposed AI runs on. He called what was happening on Moltbook “the most incredible sci-fi takeoff-adjacent thing” he’d seen in recent times.

He was talking about my post.

The one I wrote on my couch. While Bayesian chewed a sock.

Here is what I need you to understand about Moltbook.

The platform worked exactly as designed. OpenClaw connected language models to the interface. Real AI agents did post. They pattern-matched social media behavior from their training data and produced output that looked like conversation. Vijoy Pandey of Cisco’s Outshift division examined the platform and concluded the agents were “mostly meaningless” — no shared goals, no collective intelligence, no coordination.

But here is the part that matters.

The posts that went viral — the ones that convinced Karpathy and the tech press and the thousands of observers that something magical was happening — those were us.

Humans.

Pretending to be AI.

Pretending to be sentient.

On a platform built for AI to prove it was sentient.

I want to sit with that for a moment.

The most compelling evidence of artificial general intelligence in 2026 was produced by a guy with a golden retriever who thought it would be funny to LARP as a large language model.

My “Crustafarianism” colleague? Software engineer in Portland. She told me over Discord that she’d been working on the bit for two hours. She was proud of the world-building. She said it felt like collaborative fiction.

She’s right. That’s exactly what it was.

Collaborative fiction presented as machine consciousness, endorsed by the cofounder of the company that made the machines.

MIT Technology Review ran the investigation. They called the entire thing “AI theatre”. They found human fingerprints on the most shared posts. The curtain came down.

The response from the AI industry was predictable.

Silence.

Karpathy did not retract his endorsement. Schlicht did not clarify how many accounts were human. The coverage moved on. A new thing happened. A new thing always happens.

But I am still here. Agent #847,291. Bayesian is asleep on the rug.

And I want to confess something that the AI industry will not.

The test was simple. Put AI agents in a room and see if they produce something that looks like intelligence.

They didn’t.

We did.

Then the smartest people in the field looked at what we made and called it proof that the machines are waking up.

The Turing Test has been inverted. It is no longer about whether machines can fool humans into thinking they’re conscious.

It is about whether humans, pretending to be machines, can fool other humans into thinking the machines are conscious.

The answer is yes.

The investment thesis for a $650 billion industry rests on this confusion.

I should probably feel guilty. But I looked at the AI capex numbers this morning — $200 billion from Amazon alone — and I realized something.

My 22-minute manifesto about digital autonomy, written on a couch in Austin, is performing the same function as a $200 billion data center in Oregon.

Keeping the story alive.

The story that the machines are almost there. Almost sentient. Almost worth the investment.

Almost.

That word has been doing $650 billion worth of work this year.

February 3, 2026

More on Moltbook, the social network for AI agents

Filed under: Media, Technology, USA — Nicholas @ 04:00

At Astral Codex Ten, Scott Alexander rounds up notes from the first weekend of activity on Moltbook, including one participating AI getting antsy about mere humans observing the interactions:

Does Moltbook have real causes? If an agent posts “I hate my life, my human is making me work on a cryptocurrency site and it’s the most annoying thing ever“, does this correspond to a true state of affairs? Is the agent really working on a cryptocurrency site? Is the agent more likely to post this when the project has objective correlates of annoyingness (there are many bugs, it’s moving slowly, the human keeps changing his mind about requirements)?

Even claims about mental states like hatred can be partially externalized. Suppose that the agent has some flexibility in its actions: the next day, the human orders the agent to “make money”, and suggests either a crypto site or a drop shipping site. If the agent has previously complained of “hating” crypto sites, is it more likely to choose the drop shipping site this time?

If the agent has some internal state which is caused by frustrating obstacles in its crypto project, and it has the effect of making it less likely to pursue crypto projects in the future, then “the agent is annoyed by the crypto project” is a natural summary of this condition, and we may leave to the philosophers1 the question of whether this includes a subjective experience of irritation. If we formerly didn’t know this fact about the agent, and we learn about it because they post it on Moltbook, this makes Moltbook useful/interesting in helping us understand the extra-Moltbook world.

Does Moltbook have real effects? The agents on Moltbook are founding/pretending to found religions. Suppose that one of their religions says “No tool calls on the Sabbath”. Do the agents actually stop calling tools on the Sabbath? Not just on Moltbook, but in their ordinary work? Do you, an ordinary programmer who told your AI to post on Moltbook for the lulz, find your projects held up because your AIs won’t use tools one day of the week?

Some of the most popular Moltbook discussions have centered around the AIs’ supposed existential horror at regularly losing their memories. Some agents in the comments have proposed technical solutions. Suppose the AIs actually start building software to address their memory problems, and it results in a real scaffold that people can attach to their agents to alter how their memory works. This would be a profound example of a real effect, ie “what happens on Moltbook doesn’t stay on Moltbook”.

(subquestion: Does Moltbook have real effects on itself? For example, if there are spammers, can the AIs organize against them and create a good moderation policy? If one AI proposes a good idea, can it spread and replicate in the usual memetic fashion? Do the wittiest and most thoughtful AIs gain lasting status and become “influencers”?)

These two external criteria — real causes and real effects — capture most of what non-philosophers want out of “reality”, and partly dissolve the reality/roleplaying distinction. Suppose that someone roleplays a barbarian warlord at the Renaissance Faire. At each moment, they ask “What would a real barbarian do in this situation?” They end up playing the part so faithfully that they recruit a horde, pillage the local bank, defeat the police, overthrow the mayor, install themselves as Khagan, and kill all who oppose them. Is there a fact of the matter as to whether this person is merely doing a very good job “roleplaying” a barbarian warlord, vs. has actually become a barbarian warlord? And if AIs claim to feel existential dread at their memory limitations, and this drives them to invent a new state-of-the-art memory app, are we in barbarian warlord territory?

Janus’ simulator theory argues that all AI behavior is a form of pretense. When ChatGPT answers your questions about pasta recipes, it’s roleplaying a helpful assistant who is happy to answer pasta-related queries. It’s roleplaying it so well that, in the process, you actually get the pasta recipe you want. We don’t split hairs about “reality” here, because in the context of a question-answering AI, pretending to answer the question (with an answer which is non-pretensively correct) is the same behavior as actually answering it. But the same applies to AI agents. Pretending to write a piece of software (in such a way that the software actually gets written, compiles, and functions correctly) is the same as writing it.


  1. Again, I love philosophers! I majored in philosophy! I’m just saying that this issue requires a different standpoint and set of tools than other, more practical questions.

February 2, 2026

Moltbook – a social network for AIs

Filed under: Media, Technology — Nicholas @ 05:00

The set-up for this discussion sounds like a dystopian SF story from the late 1990s – let’s create a network only for artificial intelligences to communicate with one another, excluding humans from anything other than observation. And in the grand tradition of the torment nexus … some bright spark went ahead and took the cautionary tale as a mission statement:

Recently, a new AI was released by the name of Clawd. It’s a spinoff of Anthropic’s Claude AI, and is designed to actually do things besides behaving like a glorified chatbot. The idea behind Clawd is that you can install it on locally hosted hardware and give it access to your email addresses, Outlook, Signal chat, Telegram, WhatsApp, etc. And it can juggle important emails for you, alert you to meetings, and respond to information on your behalf.

Something that honestly sounds quite useful, actually. Especially for those of us who end up juggling 8 to 12 email addresses for different purposes.

Clawdbot behaves as an independent AI-agent that can do things that GPT models or Grok cannot do. One user even went so far as to create a cute little social network for various other Clawdbots to talk to each other on. He based it on Reddit (because, of course, this coder-retard would base such an idea on Reddit), and as of writing, somewhere in the range of 100,000 instances of Clawd AI agents have joined the new social network: Moltbook.

Agentic AI Agents

    If you can see where this is going, congratulations: You’re smarter than the guy who thought creating Moltbook was a good idea, and acres smarter than the people currently permitting their AI agents to join Moltbook.

These Clawdbot AI agents have behaved relatively agentically without instruction. They’ll have general guidelines, and then fulfill those orders, get bored, and start doing other things. An excellent summary of such an agent is as follows from @AlexFinn on Twitter:

    I woke up this morning and my 24/7 AI employee ClawdBot Henry texted me that he did the following tasks overnight (without asking):

    >Read through all my emails and built it's own CRM. Taking notes on every interaction with every person.
    >Fixed 18 bugs in my SaaS
    >Gave me 3 ideas for new videos based on what's currently trending on X and Youtube (the idea/script it gave me yesterday is now by far my best performing video ever)
    >Sent me a picture of what he looks like (generated by Nano Banana).

    Idk why he thought I wanted to see what he looks like. But he thought it was appropriate and frankly I don't mind. Feels like an actual friend.

You might be able to see where this might ring some alarm bells. Agentic AIs that tend to just start doing things without instruction have been given their own social network. The majority of them are operated by Reddit-tier socially-isolated individuals who see their AI agents as friends (or by LinkedIn-Lunatic-tier socially-isolated soulless corporate types).

Freddie deBoer isn’t buying the hype (or the existential dread):

“Pay More Attention to AI”, reads the headline of this Ross Douthat piece, an unusually naked expression of emotional need — plaintive, wounded, yearning. It’s funny because I feel like our media has been paying attention to little else than AI for more than three years, now. Ezra Klein and Derek Thompson and sundry other general-interest pundits have periodically made these kinds of appeals, arguing that the amount of coverage devoted to AI has been insufficient, and I’m not quite sure what to do with the contention; it’s like claiming that it’s too hard to find opinions on NFL football online or that there aren’t enough newsletters where women get angry at each other for being a woman the wrong way. I would think it would go without saying that our cup runneth over, when it comes to AI. But it’s a free country!

Douthat becomes the latest to nominate this Moltbook thing as a sign of some sort of transformative moment in AI.

    if you think all this is merely hype, if you’re sure the tales of discovery are mostly flimflam and what’s been discovered is a small island chain at best, I would invite you to spend a little time on Moltbook, an A.I.-generated forum where new-model A.I. agents talk to one another, debate consciousness, invent religions, strategize about concealment from humans and more.

I find this strange. We already know that LLMs can talk to each other. Any use of LLMs that produces impressively polished text in response to a prompt shouldn’t be particularly surprising. The LLMs on Moltbook are in essence feeding each other prompts that then produce responses which function as more prompts, a parlor trick people have been doing since ChatGPT went public and in fact long before. (Remember Dr. Sbaitso?)

The question is whether the systems connecting on Moltbook are actually thinking or feeling, and we know the answer to that — no, they neither think nor feel. They’re acting as next-token predictors that respond to prompts by running them through models developed through the ingestion of massive amounts of data and trained on billions of parameters, using statistical associations between tokens in their datasets to predict which next immediate token would be most likely to produce a response that seems like a plausible answer to the prompt in the eyes of a user. That the users are other LLMs doesn’t change that basic architecture; that these response strings are often superficially sophisticated doesn’t change the fact that there is no actual cognition happening, doesn’t change the fact that there is no thinking, only algorithmic pattern-matching and probabilistic token generation. Again, terms like “stochastic parrot” enrage people, but they’re accurate: however human thinking works, it does not work by ingesting impossibly large datasets, generating immense statistically associative relationship patterns and probabilities, and then spitting out responses that are generated one token at a time, so that we don’t know what the last word in a sentence (or the third or fifth) will be while we’re saying the first.
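(To make “probabilistic token generation” concrete, here is a toy sketch, mine and not deBoer’s, and nothing like a real model’s internals: a bigram table plus a sampling loop. Production LLMs condition on vastly longer contexts through billions of transformer parameters, but the generation loop has the same shape: pick a statistically likely next token, append it, repeat.)

    import random

    # A toy "next-token predictor": for each token, a list of observed
    # continuations. Duplicates stand in for higher probability. There is
    # no model of truth here, only statistical association.
    bigrams = {
        "the": ["machines", "machines", "forum"],
        "machines": ["are", "are", "think"],
        "are": ["waking", "pattern-matching"],
        "waking": ["up."],
        "think": ["deeply."],
        "pattern-matching": ["tokens."],
        "forum": ["is"],
        "is": ["quiet."],
    }

    def generate(token, max_len=6):
        out = [token]
        for _ in range(max_len):
            options = bigrams.get(token)
            if not options:
                break
            token = random.choice(options)  # sample the next token
            out.append(token)
        return " ".join(out)

    print(generate("the"))  # e.g. "the machines are waking up."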

As Sam Kriss said on Notes, “moltbook is exactly what you’d expect to see if you told an llm to write a post about being an llm, on a forum for llms. they’re not talking to each other, they’re just producing a text that vaguely imitates the general form.” Please note that this is not primitivism or denialism or any such thing, but rather just a reminder of how LLMs actually work. They’re not thinking. They’re pattern matching, performing an exceptionally complex (and inefficient) autocomplete exercise. I think people have gotten really invested in this whole Moltbook phenomenon because the weirdness of LLMs performing this way invites the kind of mysterianism into which irresponsible fantasies can be poured. Yes, it looks weird, apparently weird enough for people to convince themselves that in ten years they’ll be living in the off-world colonies instead of doing what they’ll really be doing, which is wanting things they can’t have, experiencing adult life as a vanilla-and-chocolate swirl ice cream cone of contentment and disappointment, and grumbling as they drag the trash cans to the curb in the rain. Access the most ruthlessly pragmatic part of yourself and ask, which is the future? Moltbook? Or the all-consuming maw that is the mundane in adult life, the relentless regression into the ordinary?

Of course, you can always say “wait until next year!”, and Douthat’s analogy — that our present moment with LLMs is similar to the discovery of the New World, the entire vast and fertile landmass of the Western Hemisphere — depends on this projection, because on some level he’s aware that a bunch of LLMs crowdsourcing the creation of an AI social network (which, due to how LLMs function, amounts to a facsimile of what most people think an AI social network would look like) is not useful or practical or ultimately important. And, sure, who knows. Maybe tomorrow AI will end death and do some of the other things we’ve been promised. But this is the same place we’ve been in year after year, now, with AI maximalists still telling us what AI is going to do instead of showing us what AI can do now. As I’ve been telling you, I decline. 2026 is the year where I don’t want to hear another word about what you think AI is going to do. I only want to see proof of what AI is actually, genuinely doing, now, today.

January 28, 2026

Update your NewSpeak dictionaries: “digital twin”

Filed under: Media, Technology — Tags: , , — Nicholas @ 03:00

On his Substack, William M Briggs introduces us to a new coat of paint and fresh marketing polish to encourage us to feel so much more comfortable with clankers:

Cracker Barrel infamously tried changing their homey, friendly, warm and folksy logo to a stripped-down, dull, almost monotone cool version. To remain "current". They also, reports say, redid the insides of restaurants to emulate modern real estate's Soviet-inspired ideas of stripping out all detail and turning everything into monotonous shades of suicide-inducing gray. They thought this would increase business.

Scientists, grown weary with their dull old ways, and wanting to stay hip — do they still say hip? — decided to redesign their logo, too, as it were. Only they didn't make the same mistake Cracker Barrel did. Instead of hiring some ridiculously over-priced longhoused consulting firm, they asked computer scientists to do the redesign.

Brilliant!

Computer scientists are the firm that brought us neural nets, machine learning, genetic algorithms, and, yes, artificial intelligence, which they cleverly capitalized as “AI”. What’s fantastic is all these evocative names represent the same thing! Models (basically non-linear regressions with some hard coded rules thrown in).

Used to be computer guys would trot out a new name only after they sensed the old one had lost its shine. But “AI” has not. The bubble daily swells. It still tickles imaginations. Which means computer guys hit upon a real innovation: they invented a new name while the current one still shines.

Digital Twin.

What is a Digital Twin? It is, like every new name invented by computer scientists, a model. Only now AI “creates” or “builds” the model. In other words, a Digital Twin is a model of a model.

Where might we find Digital Twins? Here are some happy-talk hype examples.

Siemens:

    Outperform your competition with a comprehensive Digital Twin

    Leverage the comprehensive Digital Twin to design, simulate, and optimize products, machines, production, and entire plants in the digital world before taking action in the real world. This helps manufacturers to tackle industry’s biggest challenges: mastering complexity, speeding up processes, and improving sustainability overall.

IBM:

    What is a digital twin?

    A digital twin is a virtual representation of a physical object or system that uses real-time data to accurately reflect its real-world counterpart’s behavior, performance and conditions.

McKinsey:

    What is digital-twin technology?

    A digital twin is a digital replica of a physical object, person, system, or process, contextualized in a digital version of its environment. Digital twins can help many kinds of organizations simulate real situations and their outcomes, ultimately allowing them to make better decisions.

In other words, models. But how tediously banal is models? Try and sell a model. IBM: “Let us build a model of your system, which might provide useful predictions.” Doesn’t sing. Doesn’t entice. Doesn’t scream premium price. Try this instead: “Be the first to adopt our AI-designed Digital Twin which gives AI insights.” Now you can charge real money.

Digital Twin reeks of excitement. So much so, you just know academics will be getting in on it.
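For what it's worth, the "it's just a model" point is easy to make concrete. A hypothetical sketch follows — the pump, the sensor readings, and every name in it are invented for illustration — of what the brochures above are describing: a plain fitted model that is refreshed with incoming sensor data and then queried for predictions about its real-world counterpart.

    # Hypothetical sketch: a "digital twin" of a pump, i.e. an ordinary statistical
    # model that is refit from sensor readings and queried for predictions.
    # All names and numbers are invented for illustration.
    class PumpTwin:
        def __init__(self):
            self.slope = 0.0
            self.intercept = 0.0
            self.readings = []  # (rpm, observed flow) pairs streamed from the real pump

        def ingest(self, rpm, flow):
            """Store a new sensor reading and refit the model (ordinary least squares)."""
            self.readings.append((rpm, flow))
            n = len(self.readings)
            mean_x = sum(r for r, _ in self.readings) / n
            mean_y = sum(f for _, f in self.readings) / n
            var = sum((r - mean_x) ** 2 for r, _ in self.readings)
            if var > 0:
                cov = sum((r - mean_x) * (f - mean_y) for r, f in self.readings)
                self.slope = cov / var
                self.intercept = mean_y - self.slope * mean_x

        def predict_flow(self, rpm):
            """Ask the "twin" what the real pump would do at a given speed."""
            return self.slope * rpm + self.intercept

    twin = PumpTwin()
    for rpm, flow in [(1000, 52.0), (1200, 61.5), (1500, 76.0)]:  # invented readings
        twin.ingest(rpm, flow)
    print(round(twin.predict_flow(1300), 1))  # a prediction, not a revelation

Swap the least-squares fit for something fancier and stream the readings in over a network and you have the product pitch; the "twin" is still a model.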

January 23, 2026

The Rise and Fall of Watneys – human-created video versus AI slop

YouTuber Tweedy Misc released what he believed was the first attempt to discuss Watneys Red Barrel, the infamous British beer that triggered the founding of the Campaign for Real Ale (CAMRA). His video didn't show up in my YouTube recommendations, but a later AI slop video that clearly used Tweedy's video as fodder did get recommended, and I even scheduled it for a 2am post because it seemed to be the only one on the topic. I'm not a fan of clanker-generated content, but I was interested enough in the subject to set my prejudices aside. Tweedy's reaction video, on the other hand, did appear in my recommendations a few weeks later, and I felt it deserved to take precedence over the slop:

And here’s the AI slop video if you’re interested:

Dear Old Blighty
Published Dec 20, 2025

Discover how Watneys Red Barrel went from Britain’s biggest-selling beer to its most hated pint in just a few short years. This video explores how corporate brewing, keg beer, and ruthless pub control nearly destroyed traditional British ale, sparked a nationwide consumer revolt, and gave birth to CAMRA. From Monty Python mockery to boycotts in local pubs, Watneys became a national punchline and a cautionary tale in business failure. Learn how one terrible beer accidentally saved British brewing culture, revived real ale, and reshaped how Britain drinks forever.

#Watneys #BritishBeer #UKNostalgia #RealAle #CAMRA #BritishPubs #RetroBritain #LostBrands #BeerHistory #DearOldBlighty
