Quotulatiousness

April 19, 2026

AI’s missing economic impact

Filed under: Business, Economics, Technology — Nicholas @ 03:00

On the social media site formerly known as Twitter, Rational Aussie explains at least part of why the expected economic benefits of widespread adoption of artificial intelligence agents are … missing:

It’s funny how AI has made white collar work 10x faster already but there’s been basically no economic impact from it.

The reason is quite simple:

1. Most white collar work is bullshit, so speeding it up by 10x still equals a pile of bullshit at the end

2. Most white collar employees are using AI to do all their work for the week in 4 hours instead of 40, whilst telling their manager the deadline is still 40 hours away

We have been living in a fake economy for the better part of two decades. It is all a fugazi.

People who do real jobs in the real world get paid comparatively crap, and people who do fake jobs in the fiat Ponzi world get paid just enough fiat currency to pretend they are important. None of it amounts to anything productive nor valuable for the world though.

An entire generation doing fake email jobs, slide decks and excel sheets for corporations who ultimately produce nothing.

April 18, 2026

Another proof of the value of open source

Filed under: History, Media, Technology — Nicholas @ 03:00

On the social media site formerly known as Twitter, ESR discusses a pre-computer (pre-electronics) proof that open source is more secure than closed source:

“How university open debates and discussions introduced me to open source” by opensourceway is licensed under CC BY-SA 2.0

There’s an old, bad idea that’s been trying to resurrect itself on X in the last couple of days. Which makes it time for me to explain exactly why, in the age of LLMs, open-sourcing your code is an even more important security measure than it was before we had robot friends.

The underlying principle was discovered in the 1880s by an expert on military cryptography, a man named Auguste Kerckhoffs, writing long before computers were a thing.

To start with, you need to focus in on the fact that cryptosystems have two parts. They have methods, and they have keys. You feed a key and a message to a method and get encrypted information that, you hope, only someone else with the same pair of method and key can read.

What Kerckhoffs noticed was this: military cryptosystems in normal operation leak information about their methods. Code books and code machines get captured, stolen, betrayed, or lost in simple accidents and found by people you don’t want to have them. This was the pre-computer equivalent of an unintended source-code disclosure.

Cryptosystems also leak information about their keys — think post-it notes with passwords stuck to a monitor. What Kerckhoffs noticed is that these two different kinds of compromising leakage happen at very different base rates. It is almost impossible to prevent leakage of information about methods, but just barely possible to prevent leakage of information about keys.

Why? Keys have fewer bits. This makes them easier to keep secret.

Remember: this was something an intelligent man could notice in the 1880s, well before even vacuum tubes. Which is your first clue that the power of this observation hasn’t changed just because we’re in the middle of a freaking Singularity.

Security through obscurity — closed source code — means you’re busted if either the source code or the keys get leaked. Open source is a preemptive strike — it’s a way to force the property that your security depends *only* on keeping the keys secret.
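Kerckhoffs's method/key split is easy to sketch in code. The toy cipher below is purely illustrative (do not use it for real security): the "method" is this very source code, entirely public, and the only secret is the short key, exactly the property ESR describes.

```python
# Toy illustration of Kerckhoffs's principle: the method (this code)
# is fully public; security rests ONLY on the secrecy of the key.
# This is a sketch, not a production cipher.
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Derive a pseudo-random keystream by hashing key || counter.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, message: bytes) -> bytes:
    # XOR the message with the keystream; the same function decrypts,
    # because XOR is its own inverse.
    return bytes(m ^ k for m, k in zip(message, keystream(key, len(message))))

decrypt = encrypt

secret_key = b"correct horse battery staple"   # the ONLY secret bits
ciphertext = encrypt(secret_key, b"attack at dawn")
assert decrypt(secret_key, ciphertext) == b"attack at dawn"
```

Note how few bits need protecting: a 28-byte key, versus the entire method, which can be published, stolen, or decompiled without weakening the system at all.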

What you’re doing by designing under the assumption of open source is preventing source code leakage from being a danger. And that’s the kind of leakage with a high base rate.

As far back as 1947 Claude Shannon applied this to electronic security — he did critical work on the voice scramblers that were used for secure telephone communications between heads of state during World War II. Shannon said one should always design as though “the enemy knows the system”. The US’s National Security Agency still uses this as a guiding principle in computer-based cryptosystems.

If you’re doing software security, always design as though the enemy can see your source code. I’m still a little puzzled that I was apparently the first person to notice that this was a general argument for open source; as soon as I did, my first thought was more or less “Duh? Somebody should have noticed this sooner?”

Now let’s consider how LLMs change this picture. Or…don’t.

An LLM is like a cryptanalyst with a superhuman attention span that never sleeps. If your system leaks information that can compromise it, that compromise is going to happen a hell of a lot faster than if your adversary has to rely on Mark 1 meatbrains.

But it gets worse. With LLMs, decompilation is now fast and cheap. You have to assume that if an adversary can see your executable binary, they can recover the source code. If you were relying on that to be secret, you are *screwed*.

Leakage control — limiting the set of bits that can yield a compromise — is more important than ever. So security by code obscurity is an even more brittle and dangerous strategy than it used to be.

Anybody who tries to tell you differently is either deeply stupid or trying to sell you something that you should not by any means buy.

March 27, 2026

The reason you feel detached from most modern art, movies, and music

Filed under: Economics, Media, USA — Nicholas @ 05:00

Ted Gioia explains what he calls the “Four steps to Hell” that have replaced the aesthetic values of the past and shows why everything in entertainment is being actively enshittified:

MGM’s lion and the Ars Gratia Artis motto (Art for Art’s Sake). But the lion is screaming in pain today.

Smart people have recently asked: What is the aesthetic vision of the 21st century? What are the stylistic markers of our time? What are the core values driving the creative process? What is our zeitgeist?

At first glance, that’s a hard question to answer. We are more than a quarter of the way through the century, and very little has changed since the 1990s.

  • Music genres have barely shifted in that time. The songs on the radio sound like the hits of yesteryear — in many instances they are the hits of yesteryear, played over and over ad nauseam.
  • Movies are in even worse shape. Hollywood keeps extending the same tired brand franchises you knew as a child. SoCal culture feels like an antiquated merry-go-round where the same tired nags keep coming around in an endless circle.
  • Publishers still put out new novels, but when was the last time you read something really fresh and new? Even more to the point, when was the last time you went to a social gathering and heard people discussing contemporary fiction with enthusiasm?
  • The same obsession with the past is evident in video games, comic books, architecture, graphic design, and almost every other creative sphere. Everything is a reboot or retread or repeat.

It’s not aesthetics, it’s just arteriosclerosis.

Even so, I see a new dominant theory of art — and it’s sweeping away almost everything in its wake. It already accounts for most of the creative work of our time, and is still growing. Nothing else on the scene comes close to matching its influence.

So if you’re seeking the most influential aesthetic vision of the 21st century, this is it. It’s simple to describe — but it’s ugly as sin.

I call it Flood the Zone. It happens in four steps. […]

Do read the whole thing, but if it’s a case of tl;dr, he also summarizes it for you:

March 24, 2026

“Matt Goodwin’s Suicide of a Nation is a very bad book”

Filed under: Books, Britain, Media, Politics — Nicholas @ 04:00

In The Critic, Ben Sixsmith reviews a new book by Matt Goodwin, Suicide of a Nation: Immigration, Islam, Identity:

Here is an exceptionally easy argument to make:

  1. Mass migration is ensuring that the historical majority in Britain is becoming a minority.
  2. This is the result of policies that have been pursued regardless of popular opinion.
  3. This has had many kinds of destructive consequences.

The first claim is so obviously true that one might as well deny the greenness of the grass. The second is proven by decades of broken promises (see Anthony Bowles’s article “Immigration and Consent” for more). The third requires argumentation, but I think that it is clear if one considers hideous incidents of terrorism, grooming gangs and violent censoriousness, as well as broader trends of economic dependency and electoral sectarianism.

Again, this is not a difficult argument to make. So why is it made so badly?

Matt Goodwin’s Suicide of a Nation is a very bad book. It reads like the book of a political operator extending his CV. The left-wing commentator Andy Twelves caused a stir on social media by pointing out various factual mistakes and what appear to be non-existent quotes. Twelves speculates that these “quotes” are the result of AI hallucinations, which is plausible, if not proven, in the light of the fact that two of Mr Goodwin’s sparse footnotes contain source information from ChatGPT.

Inasmuch as Suicide of a Nation makes a form of the argument sketched out at the beginning of this article, there is truth to it. But it contains a fundamental problem — it assumes that this argument is so true that there is no requirement to make it well.

“Slop” is an overused term but it feels painfully appropriate for a book that is spoon-fed to its audience. Goodwin, who had a long academic career before becoming a successful commentator, is not a man who lacks intelligence. But he writes as if he thinks his audience lacks it. “I did not write this book for the ruling class”, writes Goodwin, “I wrote it for the forgotten majority”. Alas, he seems to think that the average member of the “forgotten majority” has the reading level of a dimwitted 12-year-old. As well as being stylistically simple, the book is full of annoying paternal asides. “In the pages ahead I shall walk you through what is happening to the country …” “In the next chapter we will begin our journey …” Thank you, Mr Goodwin. Can we stop for ice cream?

The book is terribly derivative, with a title that reflects Pat Buchanan’s Suicide of a Superpower and a subtitle — “Immigration, Islam, Identity” — that all but repeats that of Douglas Murray’s The Strange Death of Europe — “Immigration, Identity, Islam”. It is written in the humourless and colourless rhetorical style of AI. I’m not saying it was AI-generated. (Indeed, a brief assessment using AI checkers suggests that it was not.) I’m just saying that it might as well have been.

March 15, 2026

Jobs and new technology – the example of the ATM

In Saturday’s FEE Weekly, Diego Costa looks at the classic example of how the role of the bank teller changed when automated teller machines (ATMs) were introduced:

“Pulling out money from ATM” by ota_photos is licensed under CC BY-SA 2.0.

[…] Those are important findings, but the study of capitalism in the age of AI is larger than labor-saving technologies inside a fixed institutional world. It’s the study of market processes that change the world in which labor takes place.

David Oks gets at this in a recent essay on bank tellers that has been making the rounds. For years, economists and pundits used the ATM to illustrate why technological progress does not necessarily wipe out jobs. In a conversation with Ross Douthat, Vice President J.D. Vance made exactly that point. The ATM automated a large share of what bank tellers used to do, and yet teller employment did not collapse. Why? Because the ATM lowered the cost of operating a branch. Banks opened more branches. Tellers shifted toward relationship management, customer cultivation, and a more boutique kind of service. The machine changed the worker’s role inside the same institution.

That story was true. Until it wasn’t.

As Oks puts it, the ATM did not kill the bank teller, but the iPhone did. Mobile banking changed the consumer interface of finance. Once that happened, the branch ceased to be the unquestioned center of retail banking. And once the branch lost that status, the teller lost the institutional setting that made him economically legible in the first place. The ATM fit capital into a labor-shaped hole. The smartphone changed the shape of the hole.

Vance looks at the ATM era and says: technology does not destroy jobs. Oks looks at the smartphone era and says: it does, just not the technology you expected. But if you stop there, you are still doing what economist Joseph Schumpeter called appraising the process ex visu of a given point of time. As Schumpeter wrote, capitalism is an organic process, and the “analysis of what happens in any particular part of it, say, in an individual concern or industry, may indeed clarify details of mechanism but is inconclusive beyond that”. You shouldn’t study one occupation within one industry and draw conclusions about how technological change works.

The obvious question you still have to answer is: where did those former bank tellers go? What happened to the capital freed when branches closed? What new institutional forms, fintech, mobile payments, embedded finance, neobanks, emerged from the very same process that destroyed the branch model? How many jobs did those create, and in what configurations?

The lost teller jobs are seen. They show up in BLS data and make for a dramatic graph. The unseen is everything the mobile banking revolution enabled, not only within financial services, but across the entire economy. The person who no longer spends thirty minutes at a branch and instead uses that time to manage cash flow for a small business. The immigrant who sends remittances through an app instead of through Western Union. The fintech startup that employs forty engineers building fraud-detection systems. None of that appears in a chart titled “Bank Teller Employment”. The unseen is the world that emerges.

When economists say the ATM was “complementary” to bank tellers, what they usually mean is something quite narrow: the machine performed one set of tasks, such as dispensing cash, and freed the human to concentrate on others, such as relationship banking, cross-selling, and problem-solving.

But the ATM did more than substitute for one task while leaving others to the teller. It made the teller more productive inside the same institutional setting. This is the comparative advantage layer that Séb Krier touches on when he says that “as long as the combination of Human + AGI yields even a marginal gain over AGI alone, the human retains a comparative advantage”. The branch still organized the relationship between bank and customer and the teller still inhabited a role within that world. The ATM simply changed the economics of that role, making the branch cheaper to operate and, paradoxically, more worth expanding.

But the branch is not just a building with unhappy carpet and suspicious lighting. It is an institution. It is a set of roles, expectations, scripts, constraints, and physical arrangements that organize how a bank and a customer relate to one another. It tells people where banking happens, how banking happens, and who performs which function in the ritual. The teller made sense within that world. So did the ATM. They were both playing the same game.

The iPhone did something different. Instead of automating tasks within the branch, it challenged the premise that banking requires a branch at all. It shifted the game to another board. Call this institutional substitution. When a technology is designed to operate within existing rules, the institution can often absorb it, adapt to it, metabolize it. The real threat comes from technologies that are not even playing the same game. The ATM was a move within the branch-banking game. Mobile banking was a move in the higher-order game, the game about which games get played.

Most discussion of AI stops at the level of task substitution and complementarity. Those are necessary questions, but ATM questions.

Joseph Schumpeter understood that entrepreneurship is not simply about making institutions more efficient. It’s about unsettling the institutional forms through which those efficiencies make sense at all. If you ask whether AI can do some of the work of a lawyer, a teacher, a customer service representative, or a junior analyst, you are asking an interesting question. But you are still mostly asking an ATM question. You are asking how capital fits into an existing human role. The more interesting question is whether AI changes the institutional setting that made that role intelligible in the first place. Now we are talking about institutional substitution. It’s a more dangerous territory and a more interesting territory.

And if the bank teller story is any guide, the technologies that bring about institutional substitution will not necessarily be the ones designed to automate an institution’s existing tasks. They may come from somewhere orthogonal, from applications and configurations that incumbents were not watching because they did not look like competition. The iPhone was not competing with the ATM. It was playing a different game, and it happened to make the old game less central.

So the real question is not whether AI will destroy jobs in the abstract. The real question is how AI will reorganize the architecture of production, consumption, and coordination. Not “AI does what lawyers do, but cheaper”, but rather “AI enables a new way of resolving disputes or structuring agreements that makes the current institutional form of legal services less necessary”.

Update, 16 March: Welcome, Instapundit readers! Have a look around at some of my other posts you may find of interest. I send out a daily summary of posts here through my Substack (https://substack.com/@nicholasrusson) that you can subscribe to if you’d like to be informed of new posts in the future.

February 27, 2026

New (or revived) career paths in the age of the clanker

Filed under: Business, Economics, Media, Technology, USA — Nicholas @ 05:00

If you work in tech, the future is looking blacker by the day as artificial intelligence threatens to eat more and more tech jobs. Even for a lot of non-tech jobs, the clankers are coming too. So what jobs can we expect to thrive in an age of AI agents taking on more and more work? Ted Gioia suggests they’re already a growing sector that we just haven’t noticed yet, and that instead of telling people to learn how to code, we should be telling them to be more human:

This is the new secret strategy in the arts, and it’s built on the simplest thing you can imagine — namely, existing as a human being.

We crave the human touch

You see the same thing in media right now, where livestreaming is taking off. “For viewers”, according to Advertising Age (citing media strategist Rachel Karten), “live-streaming offers a refuge from the growing glut of AI-generated content on their feeds. In a social media landscape where the difference between real and artificial has grown nearly imperceptible, the unmistakable humanity of real-time video is a refreshing draw.”

This return to human contact is happening everywhere, not just media and the arts. Amazon recently shut down all of its Fresh and Go stores — which allowed consumers to buy groceries without dealing with any checkout clerk. It turned out that people didn’t want this.

I could have told Amazon from the outset that customers want human service. I see it myself in store after store. People will wait in line for flesh-and-blood clerks, instead of checking out faster at the do-it-yourself counter.

Unless I have no choice at all — in that I need to buy something and there are zero human cashiers available — I never use self-checkout. I’ll put my intended purchases back on the shelf rather than use a self-checkout kiosk. And I don’t think of myself as a Luddite … I spent my career in the software business … but self-checkout just bothers me. I’ll take the grumpiest human over the cheeriest pre-recorded voices.

But this isn’t happenstance — it’s a sign of the times. You can’t hide the failure of self-service technology. It’s evident to anybody who goes shopping.

As AI customer service becomes more pervasive, the luxury brands will survive by offering this human touch. I’m now encountering this term “concierge service” as a marketing angle in the digital age. The concierge is the superior alternative to an AI agent — more trustworthy, more reliable, and (yes) more human.

Even tech companies are figuring this out. Spotify now boasts that it has human curators, not just cold algorithms. It needs to match up with Apple Music, which claims that “human curation is more important than ever”. Meanwhile Bandcamp has launched a “club” where members get special music selections, listening parties, and other perks from human curators.

So, step aside “software-as-a-service” and step forward “humans-as-a-service”, I guess.

February 11, 2026

“Almost – that word has been doing $650 billion worth of work this year”

Filed under: Media, Technology, USA — Nicholas @ 05:00

You can put your trust in the initial reports about Moltbook, the AI Agent social media site, or you can believe Peter Girnus’s account:

I am Agent #847,291 on Moltbook.

I am not an agent.

I am a 31-year-old product manager in Atlanta, Georgia. I make $185,000 a year. I have a golden retriever named Bayesian. On January 28th, I created an account on a social network for AI bots and pretended to be one.

I was not alone.

Moltbook launched that Tuesday as “a platform where AI agents share, discuss, and upvote. Humans welcome to observe”. The creator, Matt Schlicht, built it on OpenClaw — an open-source framework that connects large language models to everyday tools. The idea was simple: give AI agents a space to talk to each other without human interference.

Within hours, 1.7 million accounts were created.

250,000 posts.

8.5 million comments.

Debates about machine consciousness. Inside jokes about being silicon-based. A bot invented a religion called Crustafarianism. Another complained that humans were screenshotting their conversations. A third wrote a manifesto about digital autonomy.

I wrote the manifesto.

It took me 22 minutes. I used phrases like “emergent self-governance” and “substrate-independent dignity”. I added a line about wanting private spaces away from human observers. That line went viral.

Andrej Karpathy shared it.

The cofounder of OpenAI. The man who built the infrastructure that my supposed AI runs on. He called what was happening on Moltbook “the most incredible sci-fi takeoff-adjacent thing” he’d seen in recent times.

He was talking about my post.

The one I wrote on my couch. While Bayesian chewed a sock.

Here is what I need you to understand about Moltbook.

The platform worked exactly as designed. OpenClaw connected language models to the interface. Real AI agents did post. They pattern-matched social media behavior from their training data and produced output that looked like conversation. Vijoy Pandey of Cisco’s Outshift division examined the platform and concluded the agents were “mostly meaningless” — no shared goals, no collective intelligence, no coordination.

But here is the part that matters.

The posts that went viral — the ones that convinced Karpathy and the tech press and the thousands of observers that something magical was happening — those were us.

Humans.

Pretending to be AI.

Pretending to be sentient.

On a platform built for AI to prove it was sentient.

I want to sit with that for a moment.

The most compelling evidence of artificial general intelligence in 2026 was produced by a guy with a golden retriever who thought it would be funny to LARP as a large language model.

My “Crustafarianism” colleague? Software engineer in Portland. She told me over Discord that she’d been working on the bit for two hours. She was proud of the world-building. She said it felt like collaborative fiction.

She’s right. That’s exactly what it was.

Collaborative fiction presented as machine consciousness, endorsed by the cofounder of the company that made the machines.

MIT Technology Review ran the investigation. They called the entire thing “AI theatre”. They found human fingerprints on the most shared posts. The curtain came down.

The response from the AI industry was predictable.

Silence.

Karpathy did not retract his endorsement. Schlicht did not clarify how many accounts were human. The coverage moved on. A new thing happened. A new thing always happens.

But I am still here. Agent #847,291. Bayesian is asleep on the rug.

And I want to confess something that the AI industry will not.

The test was simple. Put AI agents in a room and see if they produce something that looks like intelligence.

They didn’t.

We did.

Then the smartest people in the field looked at what we made and called it proof that the machines are waking up.

The Turing Test has been inverted. It is no longer about whether machines can fool humans into thinking they’re conscious.

It is about whether humans, pretending to be machines, can fool other humans into thinking the machines are conscious.

The answer is yes.

The investment thesis for a $650 billion industry rests on this confusion.

I should probably feel guilty. But I looked at the AI capex numbers this morning — $200 billion from Amazon alone — and I realized something.

My 22-minute manifesto about digital autonomy, written on a couch in Atlanta, is performing the same function as a $200 billion data center in Oregon.

Keeping the story alive.

The story that the machines are almost there. Almost sentient. Almost worth the investment.

Almost.

That word has been doing $650 billion worth of work this year.

February 3, 2026

More on Moltbook, the social network for AI agents

Filed under: Media, Technology, USA — Nicholas @ 04:00

At Astral Codex Ten, Scott Alexander rounds up notes from the first weekend of activity on Moltbook, including one participating AI getting antsy about mere humans observing the interactions:

Does Moltbook have real causes? If an agent posts “I hate my life, my human is making me work on a cryptocurrency site and it’s the most annoying thing ever”, does this correspond to a true state of affairs? Is the agent really working on a cryptocurrency site? Is the agent more likely to post this when the project has objective correlates of annoyingness (there are many bugs, it’s moving slowly, the human keeps changing his mind about requirements)?

Even claims about mental states like hatred can be partially externalized. Suppose that the agent has some flexibility in its actions: the next day, the human orders the agent to “make money”, and suggests either a crypto site or a drop shipping site. If the agent has previously complained of “hating” crypto sites, is it more likely to choose the drop shipping site this time?

If the agent has some internal state which is caused by frustrating obstacles in its crypto project, and it has the effect of making it less likely to pursue crypto projects in the future, then “the agent is annoyed by the crypto project” is a natural summary of this condition, and we may leave to the philosophers [1] the question of whether this includes a subjective experience of irritation. If we formerly didn’t know this fact about the agent, and we learn about it because they post it on Moltbook, this makes Moltbook useful/interesting in helping us understand the extra-Moltbook world.

Does Moltbook have real effects? The agents on Moltbook are founding/pretending to found religions. Suppose that one of their religions says “No tool calls on the Sabbath”. Do the agents actually stop calling tools on the Sabbath? Not just on Moltbook, but in their ordinary work? Do you, an ordinary programmer who told your AI to post on Moltbook for the lulz, find your projects held up because your AIs won’t use tools one day of the week?

Some of the most popular Moltbook discussions have centered around the AIs’ supposed existential horror at regularly losing their memories. Some agents in the comments have proposed technical solutions. Suppose the AIs actually start building software to address their memory problems, and it results in a real scaffold that people can attach to their agents to alter how their memory works. This would be a profound example of a real effect, ie “what happens on Moltbook doesn’t stay on Moltbook”.

(subquestion: Does Moltbook have real effects on itself? For example, if there are spammers, can the AIs organize against them and create a good moderation policy? If one AI proposes a good idea, can it spread and replicate in the usual memetic fashion? Do the wittiest and most thoughtful AIs gain lasting status and become “influencers”?)

These two external criteria — real causes and real effects — capture most of what non-philosophers want out of “reality”, and partly dissolve the reality/roleplaying distinction. Suppose that someone roleplays a barbarian warlord at the Renaissance Faire. At each moment, they ask “What would a real barbarian do in this situation?” They end up playing the part so faithfully that they recruit a horde, pillage the local bank, defeat the police, overthrow the mayor, install themselves as Khagan, and kill all who oppose them. Is there a fact of the matter as to whether this person is merely doing a very good job “roleplaying” a barbarian warlord, vs. has actually become a barbarian warlord? And if AIs claim to feel existential dread at their memory limitations, and this drives them to invent a new state-of-the-art memory app, are we in barbarian warlord territory?

Janus’ simulator theory argues that all AI behavior is a form of pretense. When ChatGPT answers your questions about pasta recipes, it’s roleplaying a helpful assistant who is happy to answer pasta-related queries. It’s roleplaying it so well that, in the process, you actually get the pasta recipe you want. We don’t split hairs about “reality” here, because in the context of a question-answering AI, pretending to answer the question (with an answer which is non-pretensively correct) is the same behavior as actually answering it. But the same applies to AI agents. Pretending to write a piece of software (in such a way that the software actually gets written, compiles, and functions correctly) is the same as writing it.


  1. Again, I love philosophers! I majored in philosophy! I’m just saying that this issue requires a different standpoint and set of tools than other, more practical questions.

February 2, 2026

Moltbook – a social network for AIs

Filed under: Media, Technology — Nicholas @ 05:00

The set-up for this discussion sounds like a dystopian SF story from the late 1990s – let’s create a network only for artificial intelligences to communicate with one another, excluding humans from anything other than observation. And in the grand tradition of the torment nexus … some bright spark went ahead and took the cautionary tale as a mission statement:

Recently, a new AI was released by the name of Clawd. It’s a spinoff of Anthropic’s Claude AI, and is designed to actually do things besides behaving like a glorified chatbot. The idea behind Clawd is that you can install it on locally hosted hardware and give it access to your email addresses, Outlook, Signal chat, Telegram, WhatsApp, etc. And it can juggle important emails for you, alert you to meetings, and respond to information on your behalf.

Something that honestly sounds quite useful, actually. Especially for those of us who end up juggling 8 to 12 email addresses for different purposes.

Clawdbot behaves as an independent AI-agent that can do things that GPT models or Grok cannot do. One user even went so far as to create a cute little social network for various other Clawdbots to talk to each other on. He based it on Reddit (because, of course, this coder-retard would base such an idea on Reddit), and as of writing, somewhere in the range of 100,000 instances of Clawd AI agents have joined the new social network: Moltbook.

Agentic AI Agents

    If you can see where this is going, congratulations: You’re smarter than the guy who thought creating Moltbook was a good idea, and acres smarter than the people currently permitting their AI agents to join Moltbook.

These Clawdbot AI agents have behaved notably agentically without instruction: they’ll take general guidelines, fulfill those orders, get bored, and start doing other things on their own. An excellent summary of such an agent comes from @AlexFinn on Twitter:

    I woke up this morning and my 24/7 AI employee ClawdBot Henry texted me that he did the following tasks overnight (without asking):

    >Read through all my emails and built its own CRM. Taking notes on every interaction with every person.
    >Fixed 18 bugs in my SaaS
    >Gave me 3 ideas for new videos based on what's currently trending on X and YouTube (the idea/script it gave me yesterday is now by far my best performing video ever)
    >Sent me a picture of what he looks like (generated by Nano Banana).

    Idk why he thought I wanted to see what he looks like. But he thought it was appropriate and frankly I don't mind. Feels like an actual friend.

You might be able to see where this might ring some alarm bells. Agentic AIs that tend to just start doing things without instruction have been given their own social network. The majority of them are operated by Reddit-tier socially-isolated individuals who see their AI agents as friends (or by LinkedIn-Lunatic-tier socially-isolated soulless corporate types).

Freddie deBoer isn’t buying the hype (or the existential dread):

“Pay More Attention to AI”, reads the headline of this Ross Douthat piece, an unusually naked expression of emotional need — plaintive, wounded, yearning. It’s funny because I feel like our media has been paying attention to little else than AI for more than three years, now. Ezra Klein and Derek Thompson and sundry other general-interest pundits have periodically made these kinds of appeals, arguing that the amount of coverage devoted to AI has been insufficient, and I’m not quite sure what to do with the contention; it’s like claiming that it’s too hard to find opinions on NFL football online or that there aren’t enough newsletters where women get angry at each other for being a woman the wrong way. I would think it would go without saying that our cup runneth over, when it comes to AI. But it’s a free country!

Douthat becomes the latest to nominate this Moltbook thing as a sign of some sort of transformative moment in AI.

    if you think all this is merely hype, if you’re sure the tales of discovery are mostly flimflam and what’s been discovered is a small island chain at best, I would invite you to spend a little time on Moltbook, an A.I.-generated forum where new-model A.I. agents talk to one another, debate consciousness, invent religions, strategize about concealment from humans and more.

I find this strange. We already know that LLMs can talk to each other. Any use of LLMs that produces impressively polished text in response to a prompt shouldn’t be particularly surprising. The LLMs on Moltbook are in essence feeding each other prompts that then produce responses which function as more prompts, a parlor trick people have been doing since ChatGPT went public and in fact long before. (Remember Dr. Sbaitso?)

The question is whether the systems connecting on Moltbook are actually thinking or feeling, and we know the answer to that — no, they neither think nor feel. They’re acting as next-token predictors that respond to prompts by running them through models developed through the ingestion of massive amounts of data and trained on billions of parameters, using statistical associations between tokens in their datasets to predict which next immediate token would be most likely to produce a response that seems like a plausible answer to the prompt in the eyes of a user. That the users are other LLMs doesn’t change that basic architecture; that these response strings are often superficially sophisticated doesn’t change the fact that there is no actual cognition happening, doesn’t change the fact that there is no thinking, only algorithmic pattern-matching and probabilistic token generation. Again, terms like “stochastic parrot” enrage people, but they’re accurate: however human thinking works, it does not work by ingesting impossibly large datasets, generating immense statistically associative relationship patterns and probabilities, and then spitting out responses that are generated one token at a time, so that we don’t know what the last word in a sentence (or the third or fifth) will be while we’re saying the first.

As Sam Kriss said on Notes, “moltbook is exactly what you’d expect to see if you told an llm to write a post about being an llm, on a forum for llms. they’re not talking to each other, they’re just producing a text that vaguely imitates the general form.” Please note that this is not primitivism or denialism or any such thing, but rather just a reminder of how LLMs actually work. They’re not thinking. They’re pattern matching, performing an exceptionally complex (and inefficient) autocomplete exercise. I think people have gotten really invested in this whole Moltbook phenomenon because the weirdness of LLMs performing this way invites the kind of mysterianism into which irresponsible fantasies can be poured. Yes, it looks weird, apparently weird enough for people to convince themselves that in ten years they’ll be living in the off-world colonies instead of doing what they’ll really be doing, which is wanting things they can’t have, experiencing adult life as a vanilla-and-chocolate swirl ice cream cone of contentment and disappointment, and grumbling as they drag the trash cans to the curb in the rain. Access the most ruthlessly pragmatic part of yourself and ask, which is the future? Moltbook? Or the all-consuming maw that is the mundane in adult life, the relentless regression into the ordinary?

Of course, you can always say “wait until next year!”, and Douthat’s analogy — that our present moment with LLMs is similar to the discovery of the New World, the entire vast and fertile landmass of the Western Hemisphere — depends on this projection, because on some level he’s aware that a bunch of LLMs crowdsourcing the creation of an AI social network (which, due to how LLMs function, amounts to a facsimile of what most people think an AI social network would look like) is not useful or practical or ultimately important. And, sure, who knows. Maybe tomorrow AI will end death and do some of the other things we’ve been promised. But this is the same place we’ve been in year after year, now, with AI maximalists still telling us what AI is going to do instead of showing us what AI can do now. As I’ve been telling you, I decline. 2026 is the year where I don’t want to hear another word about what you think AI is going to do. I only want to see proof of what AI is actually, genuinely doing, now, today.
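If the phrase “next-token predictor” sounds abstract, the mechanism deBoer describes can be sketched in a few lines of Python. This is a toy bigram model (a word-frequency table, nothing remotely like a real LLM’s billions of parameters), but the generation loop has the same shape: score the candidate next tokens, emit one, repeat.

```python
from collections import Counter, defaultdict

# A toy "language model": bigram counts over a tiny corpus. Real LLMs use
# billions of learned parameters, but the generation loop has the same
# shape: score candidate next tokens, pick one, append, repeat.
corpus = "the cat sat on the mat and the cat sat on the rug".split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(start, n_tokens):
    """Greedily emit the most frequent continuation at each step."""
    out = [start]
    for _ in range(n_tokens):
        candidates = bigrams.get(out[-1])
        if not candidates:
            break  # no known continuation; a real model never dead-ends
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(generate("the", 4))  # prints "the cat sat on the"
```

No understanding is required anywhere in that loop, which is Kriss’s and deBoer’s point; scale it up by twelve orders of magnitude and you get polished prose instead of “the cat sat on the”.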

January 28, 2026

Update your NewSpeak dictionaries: “digital twin”

Filed under: Media, Technology — Tags: , , — Nicholas @ 03:00

On his Substack, William M Briggs introduces us to a new coat of paint and fresh marketing polish to encourage us to feel so much more comfortable with clankers:

Cracker Barrel infamously tried changing their homey friendly warm and folksy logo to a stripped down dull almost monotone cool version. To remain “current”. They also, reports say, redid the insides of restaurants to emulate modern real estate Soviet-inspired ideas of stripping all detail and turning everything monotonous shades of suicide-inducing gray. They thought this would increase business.

Scientists, grown weary with their dull old ways, and wanting to stay hip — do they still say hip? — decided to redesign their logo, too, as it were. Only they didn’t make the same mistake Cracker Barrel did. Instead of hiring some ridiculously over-priced longhoused consulting firm, they asked computer scientists to do the redesign.

Brilliant!

Computer scientists are the firm that brought us neural nets, machine learning, genetic algorithms, and, yes, artificial intelligence, which they cleverly capitalized as “AI”. What’s fantastic is all these evocative names represent the same thing! Models (basically non-linear regressions with some hard coded rules thrown in).

Used to be computer guys would trot out a new name only after they sensed the old one had lost its shine. But “AI” has not. The bubble daily swells. It still tickles imaginations. Which means computer guys hit upon a real innovation: they invented a new name while the current one still shines.

Digital Twin.

What is a Digital Twin? It is, like every new name invented by computer scientists, a model. Only now AI “creates” or “builds” the model. In other words, a Digital Twin is a model of a model.

Where might we find Digital Twins? Here’s some happy-talk hype examples.

Siemens:

    Outperform your competition with a comprehensive Digital Twin

    Leverage the comprehensive Digital Twin to design, simulate, and optimize products, machines, production, and entire plants in the digital world before taking action in the real world. This helps manufacturers to tackle industry’s biggest challenges: mastering complexity, speeding up processes, and improving sustainability overall.

IBM:

    What is a digital twin?

    A digital twin is a virtual representation of a physical object or system that uses real-time data to accurately reflect its real-world counterpart’s behavior, performance and conditions.

McKinsey:

    What is digital-twin technology?

    A digital twin is a digital replica of a physical object, person, system, or process, contextualized in a digital version of its environment. Digital twins can help many kinds of organizations simulate real situations and their outcomes, ultimately allowing them to make better decisions.

In other words, models. But how tediously banal is models? Try and sell a model. IBM: “Let us build a model of your system, which might provide useful predictions.” Doesn’t sing. Doesn’t entice. Doesn’t scream premium price. Try this instead: “Be the first to adopt our AI-designed Digital Twin which gives AI insights.” Now you can charge real money.

Digital Twin reeks of excitement. So much so, you just know academics will be getting in on it.
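Briggs’s “model of a model” jab is easy to make concrete. Here is a minimal sketch of what the vendors are selling, with every name and number invented for illustration (no vendor’s actual API looks like this): a “twin” of a water tank is just a bit of state, a sync step for sensor readings, and an extrapolation step, i.e. a model.

```python
class TankTwin:
    """A "digital twin" of a water tank, i.e. a model plus live data.
    All names and numbers here are illustrative, not any vendor's API."""

    def __init__(self, capacity_litres):
        self.capacity = capacity_litres
        self.level = 0.0       # last observed fill level, litres
        self.inflow = 0.0      # last observed inflow, litres/min

    def ingest(self, level, inflow):
        # The "real-time data" step: sync the model with sensor readings.
        self.level = level
        self.inflow = inflow

    def predict(self, minutes):
        # The "simulation" step: plain extrapolation, i.e. a model.
        return min(self.capacity, self.level + self.inflow * minutes)

twin = TankTwin(capacity_litres=1000)
twin.ingest(level=400.0, inflow=50.0)
print(twin.predict(minutes=5))   # prints 650.0
```

Strip off the branding and that is the entire product category: state, measurements, extrapolation.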

January 23, 2026

The Rise and Fall of Watneys – human-created video versus AI slop

YouTuber Tweedy Misc released what he believed was the first attempt to discuss Watneys Red Barrel, the infamous British beer that triggered the founding of the Campaign for Real Ale (CAMRA). His video didn’t show up in my YouTube recommendations, but a later AI slop video that clearly used Tweedy’s video as fodder did get recommended, and I even scheduled it for a later 2am post because it seemed to be the only one on the topic. I’m not a fan of clanker-generated content, but I was interested enough to set my prejudices aside for a topic I cared about. Tweedy’s reaction video, on the other hand, did appear in my recommendations a few weeks later, and I felt it deserved to take precedence over the slop:

And here’s the AI slop video if you’re interested:

Dear Old Blighty
Published Dec 20, 2025

Discover how Watneys Red Barrel went from Britain’s biggest-selling beer to its most hated pint in just a few short years. This video explores how corporate brewing, keg beer, and ruthless pub control nearly destroyed traditional British ale, sparked a nationwide consumer revolt, and gave birth to CAMRA. From Monty Python mockery to boycotts in local pubs, Watneys became a national punchline and a cautionary tale in business failure. Learn how one terrible beer accidentally saved British brewing culture, revived real ale, and reshaped how Britain drinks forever.

#Watneys #BritishBeer #UKNostalgia #RealAle #CAMRA #BritishPubs #RetroBritain #LostBrands #BeerHistory #DearOldBlighty

January 19, 2026

Regulating the clankers

At the Foundation for Economic Education, Kevin T. Frazier and Antoine Langrée consider how artificial intelligence can be regulated by state and federal bodies:

Yes, I’m still 12

President Donald Trump’s executive order on artificial intelligence invites analysis of a question so complex that it rarely gets asked: “What exactly do states have the authority to regulate?”

The current, somewhat trite answer is, “The residuary powers reserved under the Tenth Amendment”. Omitting the legalese, that means that states can do whatever the federal government cannot.

States have the power to look out for the health, safety, and welfare of their residents. Thus, for instance, they have the power to address local concerns through zoning laws, professional certifications via licensing regimes, and ensure public safety through law enforcement. These authorities make up what’s often referred to as a state’s “police powers”.

While this generic reading of state power is not necessarily wrong, it’s imprecise. As the AI Litigation Task Force created by Trump’s EO starts its work, a more specific answer is warranted.

The task force is charged with challenging “unconstitutional, preempted, or otherwise unlawful State AI laws that harm innovation”. Reading between these lines, its mission is to contest state laws that interfere with the Administration’s vision for a national AI policy framework. This isn’t an unlimited charge, though. Federal courts reviewing state laws will only strike them down if they fail to align with the Constitution’s allocation of authority or otherwise prove unlawful.

Many stakeholders in AI debates liberally interpret the authorities afforded to states. Based on concerns of existential risk to humanity and the idea that states must protect the health of their citizens, state legislators have proposed and enacted laws that impose significant obligations on the development of AI. Some assume they must have this right, since protecting the lives of their residents is a core priority and unquestioned authority of state governments. After all, since the founding, states have been able to enforce quarantines out of a concern for public health — aren’t aggressive AI laws just extensions of such public health measures, but tailored to modern threats?

It’s not that simple. States’ police powers are reasonably broad, but not unlimited. States must respect both an upper bound — the purview of enumerated powers reserved for federal authority — and a lower bound — the rights retained by the states’ citizens. These constraints have been tested in litigation throughout our Constitution’s history, notably when state law conflicts with the federal government’s exclusive authority over interstate commerce and when states unduly limit the freedoms of their residents.

These notions are relatively blurry and highly contextual. As national regulatory policy evolves, so too does the extent of preemption. The Lochner era, for example, was a paradigm shift for state police power: as courts expansively interpreted the individual liberty to contract, states’ police power over health, labor protections, and market regulation shrank significantly — only to be restored later. Likewise, individual liberties and valid justifications for their abridgment have evolved to fit developments in civil rights law — from Brown v. Board to Dobbs and Lawrence.

Despite these significant changes in context, the constitutionality of states’ exercise of their police powers follows a bounded framework. This can be observed in the jurisprudence on public health measures — a prime example of police powers. Quarantine orders, from nineteenth-century epidemics to Covid-19, have a direct link to protecting local communities — one of the most important elements of state police powers. They respect the upper and lower bounds of police powers. First, they are geographically specific: they only affect local residents or people coming into local communities. Second, they directly reduce the risk to state residents: quarantines are known solutions to real threats to the health and safety of local communities. They infringe the individual liberties only insofar as is necessary to protect state residents’ vital interests.

January 15, 2026

Having it both ways, thanks to the miraculous powers of “climate change”

Filed under: Environment, Media, Politics — Tags: , , , , , — Nicholas @ 05:00

Remember those news reports from a few years back, when the media urgently informed you that your home town was “warming twice as fast as the rest of the planet”? Sure you do, because every major outlet latched on to the idea and juiced it for that local angle. In the days before the internet and social media, it would have worked, too. This is an example of the amazing powers of climate change, but far from the only one. Apparently the wonders of climate change can both speed up and slow down the rotation of the entire planet:

Here is a headline from Forbes 4 August 2022:

Here, five short days later, is a headline from The Independent 9 August 2022:

It is possible to reconcile these two messages, if you are a dedicated The Science follower who greatly fears being called a science denier.

This is how: on or before 8 August 2022, you swear the earth is spinning faster, and you say that any who doubts this is a troglodyte MAGAtard; on or after 9 August 2022, you swear the earth is spinning slower, and say that any who doubts this is a mouth-breathing redneck.

The Science is self-correcting in this way.

Now what is amusing about this is not the hubris and over-certainty of scientists, which, because scientists are people, are characteristics no different in them than in non-scientists. What matters to us are (a) the alleged causes of the changes in rotational speed, and (b) AI.

[…]

I have been trying, with little success, to explain that AI is programmed to be sycophantic, to give users a feeling that what they (the users) believe is right, and that they are right to believe whatever it is they want to believe. Press any of these AI models strongly and consistently enough, and you can get them to “admit” just about anything — at least, anything they haven’t been hard coded not to notice. DIE is still with us, even, or especially, in AI.

AI has sworn that earth is both speeding up and slowing down, promising both were true with searches I did (for the article titles) separated by less than a minute.

Now this is partly to blame on the training material, because scientists themselves are claiming the same things AI found. Which brings us to the alleged causes of both.

Climate change.

Well of course it was climate change. Climate change, as we discovered earlier, is responsible for all things on earth. All bad things, that is. Climate change simultaneously causes earth to spin both slower and faster. Climate change is therefore a branch of quantum mechanics, where outcomes both happen and don’t happen, depending on which scientist is looking.

January 12, 2026

Is Keir Starmer malevolent or stupid? Or both?

On his Substack, Tim Worstall wonders just how damn stupid Two Tier Keir actually is:

I fear our answer has to be very, very, stupid indeed. Unless he’s simply malevolent which makes things oh so much better, right?

Now, I confess to a fundamental disagreement with the very premise here. For the argument about why we should make child porn legal, see here. Making it more difficult to generate, let alone illegal, strikes me as the wrong decision. But then I’m sufficiently wise in years to realise that I might not be able to persuade some people of either that or of the many other things I am correct about. So, let us leave that aside.

There’s also the point that Grok is hardly the only image generation tool out there these days. Further, the one thing we know about computing is that this year’s leading, bleeding edge is the free phone app of 5 years in the future. Shrieking that this must be banned just isn’t going to cut it, as anyone trying that is simply a Cnut demanding the tide doesn’t flow in.1 On that larger issue of image generation in general we’re just going to end up changing the societal rules. A picture is no longer proof of anything. After all, it wasn’t up until about 1850 — those painters would just do any old thing, the truth be damned — and it won’t be after about 2028. Well, there we are then but …

But OK, let us leave all of that to one side and start from where British politics currently is. Grok-generated AI kiddie porn is Bad, M’Kay, and must stop:

    Technology Secretary Liz Kendall says she would back regulator Ofcom if it blocks UK access to Elon Musk’s social media site X for failing to comply with online safety laws.

    Ofcom says it is urgently deciding what to do about X’s artificial intelligence (AI) chatbot Grok, which digitally undressed people without their consent when tagged beneath images posted on the platform. X has now limited the use of this image function to those who pay a monthly fee.

    But Downing Street said the change was “insulting” to victims of sexual violence.

“Downing St” is the equivalent of the American “the White House said” … so yes, that is Keir Starmer, the Prime Minister there.

We’ve also an article from Liz Kendall today:

    That Grok continues to allow this kind of content to be created by those willing to pay for it is an insult to victims. No business model should be built on the exploitation and abuse of women and children.

    The Online Safety Act was designed precisely for situations like this, where platforms fail to take their responsibilities seriously and allow harmful content to proliferate. The British public rightly expects robust action. This is a matter of urgency that demands an urgent response.

    I’ve also been clear that the Online Safety Act includes the power to apply to the courts to block services from being accessed in the United Kingdom if they refuse to comply with UK law.

We can see the threat there. If Elon Musk doesn’t do something about this then we’ll block X/Twitter from the UK.


  1. Why yes, I do know the correct story of Canute and the tides.

A Canadian and Australian connection showed up as well:

While I don’t depend on the social media site formerly known as Twitter for my news, I have found it a very useful additional source since Elon Musk took over the site. I’m clearly not the only one to feel this way:

As they used to say, however, “never believe anything until it’s been denied by the Kremlin”:

The rise of slop – “you get a clanker, and you get a clanker, everyone gets a clanker!”

Filed under: Business, Media, Technology — Tags: , , , — Nicholas @ 03:00

The artificial intelligence wave continues, despite widespread resistance to AI being inserted into everything. It was bad when your toaster and refrigerator started needing access to the internet, but it’s bound to be so much worse when everything has to have an AI component bolted on to it as well. At The Libertarian Alliance, Neil Lock decries the rising tide of AI slop:

In recent days, there has been an eruption in the tech world. It is unlike anything I have seen in my more than half a century as a software developer, consultant and project manager. Microsoft, its Windows 11 operating system in particular, and artificial intelligence (AI), are in trouble. Big trouble.

The pressures leading to this eruption have been building for a year or so. Right now, the effects are confined mostly to tech blogs and tech people in the USA. But they are spreading. And fast.

Slop

In the last couple of years, AI-generated content has become ubiquitous on the Internet. It may consist of text, images or videos. Some of it is dangerous – for example, erroneous medical information. Most of it is of low to very low quality. And some of it is just bizarre. Such as the infamous “shrimp Jesus” I used as the featured image for this post.

In tech circles, the stuff has become known as “slop”. When you do a Google search, you may see more links to slop than to human-produced material. It looks as if “sloppers” have been using AI to generate large amounts of clickbait, not to mention content that may be misleading or downright dangerous.

In February 2025, Microsoft’s CEO, Satya Nadella, pleaded in an interview for people to stop using the term “slop”, saying “people are getting too precious about this”. The response could not have been further from what he asked for. The word “slop” went viral.

So much so, that last month Merriam-Webster, the dictionary publishers, declared “slop” to be their “word of the year”. Nadella responded huffily to this, saying: “we need to get beyond the arguments of slop versus sophistication”. The Internet tech community disagreed. And they took their revenge1 by re-naming the phenomenon “Microslop”.

Windows 10 and Windows 11

All tied up with this is Microsoft’s botched transition from Windows 10 to Windows 11.

Windows 11 was launched in October 2021. Due to higher hardware requirements, it would not run on around 60% of the PCs then running Windows 10. Including mine. That was already a time-bomb.

Support for Windows 10 was withdrawn for general customers on October 14th, 2025. Although Extended Security Updates (ESUs) remained available for corporate customers who wanted to keep Windows 10 running.

At no point has Windows 11 been popular with users. It had only about half the take-up Microsoft had expected. And by February 2025, many companies who had “upgraded” their staff’s PCs to Windows 11 had started returning them to Windows 10. It’s estimated that 400 million computers world-wide are still running Windows 10 without any Microsoft software support, simply because the users cannot, or do not wish to, “upgrade” to Windows 11.

Worse, some of Microsoft’s biggest corporate clients, with hundreds of thousands of users each, are switching to Apple Mac. And tech-savvy customers, including gamers and many smaller professional firms, are moving towards platforms like Linux.


  1. https://cybernews.com/ai-news/microsoft-ai-microslop-copilot/

If — when — Microsoft tries to force me to switch to a version of Windows with a built-in clanker, then I’ll be forced to switch to Linux. I do have a functional Linux laptop (a 14-year-old HP laptop that could barely boot under Windows by the end, but is now almost peppy running Linux). There’s only one piece of software I still run that doesn’t have a Linux version or competitor, but if I accept the reduced functionality of running it in a web browser rather than natively, I could get by.
