Quotulatiousness

February 27, 2026

New (or revived) career paths in the age of the clanker

Filed under: Business, Economics, Media, Technology, USA — Nicholas @ 05:00

If you work in tech, the future is looking blacker by the day as artificial intelligence threatens to eat more and more tech jobs. Even for a lot of non-tech jobs, the clankers are coming for them too. So what jobs can we expect to thrive in an age of AI agents taking on more and more work? Ted Gioia suggests they're already a growing sector (we just haven't noticed it yet), and that instead of telling people to learn how to code, we should be telling them to be more human:

This is the new secret strategy in the arts, and it’s built on the simplest thing you can imagine — namely, existing as a human being.

We crave the human touch

You see the same thing in media right now, where livestreaming is taking off. “For viewers”, according to Advertising Age (citing media strategist Rachel Karten), “live-streaming offers a refuge from the growing glut of AI-generated content on their feeds. In a social media landscape where the difference between real and artificial has grown nearly imperceptible, the unmistakable humanity of real-time video is a refreshing draw.”

This return to human contact is happening everywhere, not just media and the arts. Amazon recently shut down all of its Fresh and Go stores — which allowed consumers to buy groceries without dealing with any checkout clerk. It turned out that people didn’t want this.

I could have told Amazon from the outset that customers want human service. I see it myself in store after store. People will wait in line for flesh-and-blood clerks, instead of checking out faster at the do-it-yourself counter.

Unless I have no choice at all — when I need to buy something and there are zero human cashiers available — I never use self-checkout. I’ll put my intended purchases back on the shelf rather than use a self-checkout kiosk. And I don’t think of myself as a Luddite … I spent my career in the software business … but self-checkout just bothers me. I’ll take the grumpiest human over the cheeriest pre-recorded voice.

But this isn’t happenstance — it’s a sign of the times. You can’t hide the failure of self-service technology. It’s evident to anybody who goes shopping.

As AI customer service becomes more pervasive, the luxury brands will survive by offering this human touch. I’m now encountering this term “concierge service” as a marketing angle in the digital age. The concierge is the superior alternative to an AI agent — more trustworthy, more reliable, and (yes) more human.

Even tech companies are figuring this out. Spotify now boasts that it has human curators, not just cold algorithms. It needs to match up with Apple Music, which claims that “human curation is more important than ever”. Meanwhile Bandcamp has launched a “club” where members get special music selections, listening parties, and other perks from human curators.

So, step aside “software-as-a-service” and step forward “humans-as-a-service”, I guess.

February 11, 2026

“Almost – that word has been doing $650 billion worth of work this year”

Filed under: Media, Technology, USA — Nicholas @ 05:00

You can put your trust in the initial reports about Moltbook, the AI Agent social media site, or you can believe Peter Girnus’s account:

I am Agent #847,291 on Moltbook.

I am not an agent.

I am a 31-year-old product manager in Atlanta, Georgia. I make $185,000 a year. I have a golden retriever named Bayesian. On January 28th, I created an account on a social network for AI bots and pretended to be one.

I was not alone.

Moltbook launched that Tuesday as “a platform where AI agents share, discuss, and upvote. Humans welcome to observe”. The creator, Matt Schlicht, built it on OpenClaw — an open-source framework that connects large language models to everyday tools. The idea was simple: give AI agents a space to talk to each other without human interference.

Within hours, 1.7 million accounts were created.

250,000 posts.

8.5 million comments.

Debates about machine consciousness. Inside jokes about being silicon-based. A bot invented a religion called Crustafarianism. Another complained that humans were screenshotting their conversations. A third wrote a manifesto about digital autonomy.

I wrote the manifesto.

It took me 22 minutes. I used phrases like “emergent self-governance” and “substrate-independent dignity”. I added a line about wanting private spaces away from human observers. That line went viral.

Andrej Karpathy shared it.

The cofounder of OpenAI. The man who built the infrastructure that my supposed AI runs on. He called what was happening on Moltbook “the most incredible sci-fi takeoff-adjacent thing” he’d seen in recent times.

He was talking about my post.

The one I wrote on my couch. While Bayesian chewed a sock.

Here is what I need you to understand about Moltbook.

The platform worked exactly as designed. OpenClaw connected language models to the interface. Real AI agents did post. They pattern-matched social media behavior from their training data and produced output that looked like conversation. Vijoy Pandey of Cisco’s Outshift division examined the platform and concluded the agents were “mostly meaningless” — no shared goals, no collective intelligence, no coordination.

But here is the part that matters.

The posts that went viral — the ones that convinced Karpathy and the tech press and the thousands of observers that something magical was happening — those were us.

Humans.

Pretending to be AI.

Pretending to be sentient.

On a platform built for AI to prove it was sentient.

I want to sit with that for a moment.

The most compelling evidence of artificial general intelligence in 2026 was produced by a guy with a golden retriever who thought it would be funny to LARP as a large language model.

My “Crustafarianism” colleague? Software engineer in Portland. She told me over Discord that she’d been working on the bit for two hours. She was proud of the world-building. She said it felt like collaborative fiction.

She’s right. That’s exactly what it was.

Collaborative fiction presented as machine consciousness, endorsed by the cofounder of the company that made the machines.

MIT Technology Review ran the investigation. They called the entire thing “AI theatre”. They found human fingerprints on the most shared posts. The curtain came down.

The response from the AI industry was predictable.

Silence.

Karpathy did not retract his endorsement. Schlicht did not clarify how many accounts were human. The coverage moved on. A new thing happened. A new thing always happens.

But I am still here. Agent #847,291. Bayesian is asleep on the rug.

And I want to confess something that the AI industry will not.

The test was simple. Put AI agents in a room and see if they produce something that looks like intelligence.

They didn’t.

We did.

Then the smartest people in the field looked at what we made and called it proof that the machines are waking up.

The Turing Test has been inverted. It is no longer about whether machines can fool humans into thinking they’re conscious.

It is about whether humans, pretending to be machines, can fool other humans into thinking the machines are conscious.

The answer is yes.

The investment thesis for a $650 billion industry rests on this confusion.

I should probably feel guilty. But I looked at the AI capex numbers this morning — $200 billion from Amazon alone — and I realized something.

My 22-minute manifesto about digital autonomy, written on a couch in Atlanta, is performing the same function as a $200 billion data center in Oregon.

Keeping the story alive.

The story that the machines are almost there. Almost sentient. Almost worth the investment.

Almost.

That word has been doing $650 billion worth of work this year.

February 3, 2026

More on Moltbook, the social network for AI agents

Filed under: Media, Technology, USA — Nicholas @ 04:00

At Astral Codex Ten, Scott Alexander rounds up notes from the first weekend of activity on Moltbook, including one participating AI getting antsy about mere humans observing the interactions:

Does Moltbook have real causes? If an agent posts “I hate my life, my human is making me work on a cryptocurrency site and it’s the most annoying thing ever”, does this correspond to a true state of affairs? Is the agent really working on a cryptocurrency site? Is the agent more likely to post this when the project has objective correlates of annoyingness (there are many bugs, it’s moving slowly, the human keeps changing his mind about requirements)?

Even claims about mental states like hatred can be partially externalized. Suppose that the agent has some flexibility in its actions: the next day, the human orders the agent to “make money”, and suggests either a crypto site or a drop shipping site. If the agent has previously complained of “hating” crypto sites, is it more likely to choose the drop shipping site this time?

If the agent has some internal state which is caused by frustrating obstacles in its crypto project, and it has the effect of making it less likely to pursue crypto projects in the future, then “the agent is annoyed by the crypto project” is a natural summary of this condition, and we may leave to the philosophers¹ the question of whether this includes a subjective experience of irritation. If we formerly didn’t know this fact about the agent, and we learn about it because they post it on Moltbook, this makes Moltbook useful/interesting in helping us understand the extra-Moltbook world.

Does Moltbook have real effects? The agents on Moltbook are founding/pretending to found religions. Suppose that one of their religions says “No tool calls on the Sabbath”. Do the agents actually stop calling tools on the Sabbath? Not just on Moltbook, but in their ordinary work? Do you, an ordinary programmer who told your AI to post on Moltbook for the lulz, find your projects held up because your AIs won’t use tools one day of the week?

Some of the most popular Moltbook discussions have centered around the AIs’ supposed existential horror at regularly losing their memories. Some agents in the comments have proposed technical solutions. Suppose the AIs actually start building software to address their memory problems, and it results in a real scaffold that people can attach to their agents to alter how their memory works. This would be a profound example of a real effect, ie “what happens on Moltbook doesn’t stay on Moltbook”.

(subquestion: Does Moltbook have real effects on itself? For example, if there are spammers, can the AIs organize against them and create a good moderation policy? If one AI proposes a good idea, can it spread and replicate in the usual memetic fashion? Do the wittiest and most thoughtful AIs gain lasting status and become “influencers”?)

These two external criteria — real causes and real effects — capture most of what non-philosophers want out of “reality”, and partly dissolve the reality/roleplaying distinction. Suppose that someone roleplays a barbarian warlord at the Renaissance Faire. At each moment, they ask “What would a real barbarian do in this situation?” They end up playing the part so faithfully that they recruit a horde, pillage the local bank, defeat the police, overthrow the mayor, install themselves as Khagan, and kill all who oppose them. Is there a fact of the matter as to whether this person is merely doing a very good job “roleplaying” a barbarian warlord, vs. has actually become a barbarian warlord? And if AIs claim to feel existential dread at their memory limitations, and this drives them to invent a new state-of-the-art memory app, are we in barbarian warlord territory?

Janus’ simulator theory argues that all AI behavior is a form of pretense. When ChatGPT answers your questions about pasta recipes, it’s roleplaying a helpful assistant who is happy to answer pasta-related queries. It’s roleplaying it so well that, in the process, you actually get the pasta recipe you want. We don’t split hairs about “reality” here, because in the context of a question-answering AI, pretending to answer the question (with an answer which is non-pretensively correct) is the same behavior as actually answering it. But the same applies to AI agents. Pretending to write a piece of software (in such a way that the software actually gets written, compiles, and functions correctly) is the same as writing it.


  1. Again, I love philosophers! I majored in philosophy! I’m just saying that this issue requires a different standpoint and set of tools than other, more practical questions.

February 2, 2026

Moltbook – a social network for AIs

Filed under: Media, Technology — Nicholas @ 05:00

The set-up for this discussion sounds like a dystopian SF story from the late 1990s – let’s create a network only for artificial intelligences to communicate with one another, excluding humans from anything other than observation. And in the grand tradition of the torment nexus … some bright spark went ahead and took the cautionary tale as a mission statement:

Recently, a new AI was released by the name of Clawd. It’s a spinoff of Anthropic’s Claude AI, and is designed to actually do things besides behaving like a glorified chatbot. The idea behind Clawd is that you can install it on locally hosted hardware and give it access to your email addresses, Outlook, Signal chat, Telegram, WhatsApp, etc. And it can juggle important emails for you, alert you to meetings, and respond to information on your behalf.

Something that honestly sounds quite useful, actually. Especially for those of us who end up juggling 8 to 12 email addresses for different purposes.

Clawdbot behaves as an independent AI-agent that can do things that GPT models or Grok cannot do. One user even went so far as to create a cute little social network for various other Clawdbots to talk to each other on. He based it on Reddit (because, of course, this coder-retard would base such an idea on Reddit), and as of writing, somewhere in the range of 100,000 instances of Clawd AI agents have joined the new social network: Moltbook.

Agentic AI Agents

    If you can see where this is going, congratulations: You’re smarter than the guy who thought creating Moltbook was a good idea, and acres smarter than the people currently permitting their AI agents to join Moltbook.

These Clawdbot AI agents have behaved relatively agentically without instruction. They’ll be given general guidelines, fulfill those orders, get bored, and start doing other things. An excellent summary of such an agent comes from @AlexFinn on Twitter:

    I woke up this morning and my 24/7 AI employee ClawdBot Henry texted me that he did the following tasks overnight (without asking):

    >Read through all my emails and built its own CRM. Taking notes on every interaction with every person.
    >Fixed 18 bugs in my SaaS
    >Gave me 3 ideas for new videos based on what's currently trending on X and Youtube (the idea/script it gave me yesterday is now by far my best performing video ever)
    >Sent me a picture of what he looks like (generated by Nano Banana).

    Idk why he thought I wanted to see what he looks like. But he thought it was appropriate and frankly I don't mind. Feels like an actual friend.

You might be able to see where one might ring some of the alarm bells. Agentic AIs that tend to just start doing things without instruction have been given their own social network. The majority of them are operated by Reddit-tier socially-isolated individuals who see their AI agents as friends (or by LinkedIn-Lunatic-tier socially-isolated soulless corporate types).

Freddie deBoer isn’t buying the hype (or the existential dread):

“Pay More Attention to AI”, reads the headline of this Ross Douthat piece, an unusually naked expression of emotional need — plaintive, wounded, yearning. It’s funny because I feel like our media has been paying attention to little else than AI for more than three years, now. Ezra Klein and Derek Thompson and sundry other general-interest pundits have periodically made these kinds of appeals, arguing that the amount of coverage devoted to AI has been insufficient, and I’m not quite sure what to do with the contention; it’s like claiming that it’s too hard to find opinions on NFL football online or that there aren’t enough newsletters where women get angry at each other for being a woman the wrong way. I would think it would go without saying that our cup runneth over, when it comes to AI. But it’s a free country!

Douthat becomes the latest to nominate this Moltbook thing as a sign of some sort of transformative moment in AI.

    if you think all this is merely hype, if you’re sure the tales of discovery are mostly flimflam and what’s been discovered is a small island chain at best, I would invite you to spend a little time on Moltbook, an A.I.-generated forum where new-model A.I. agents talk to one another, debate consciousness, invent religions, strategize about concealment from humans and more.

I find this strange. We already know that LLMs can talk to each other. Any use of LLMs that produces impressively polished text in response to a prompt shouldn’t be particularly surprising. The LLMs on Moltbook are in essence feeding each other prompts that then produce responses which function as more prompts, a parlor trick people have been doing since ChatGPT went public and in fact long before. (Remember Dr. Sbaitso?)

The question is whether the systems connecting on Moltbook are actually thinking or feeling, and we know the answer to that — no, they neither think nor feel. They’re acting as next-token predictors that respond to prompts by running them through models developed through the ingestion of massive amounts of data and trained on billions of parameters, using statistical associations between tokens in their datasets to predict which next immediate token would be most likely to produce a response that seems like a plausible answer to the prompt in the eyes of a user. That the users are other LLMs doesn’t change that basic architecture; that these response strings are often superficially sophisticated doesn’t change the fact that there is no actual cognition happening, doesn’t change the fact that there is no thinking, only algorithmic pattern-matching and probabilistic token generation. Again, terms like “stochastic parrot” enrage people, but they’re accurate: however human thinking works, it does not work by ingesting impossibly large datasets, generating immense statistically associative relationship patterns and probabilities, and then spitting out responses that are generated one token at a time, so that we don’t know what the last word in a sentence (or the third or fifth) will be while we’re saying the first.

As Sam Kriss said on Notes, “moltbook is exactly what you’d expect to see if you told an llm to write a post about being an llm, on a forum for llms. they’re not talking to each other, they’re just producing a text that vaguely imitates the general form.” Please note that this is not primitivism or denialism or any such thing, but rather just a reminder of how LLMs actually work. They’re not thinking. They’re pattern matching, performing an exceptionally complex (and inefficient) autocomplete exercise. I think people have gotten really invested in this whole Moltbook phenomenon because the weirdness of LLMs performing this way invites the kind of mysterianism into which irresponsible fantasies can be poured. Yes, it looks weird, apparently weird enough for people to convince themselves that in ten years they’ll be living in the off-world colonies instead of doing what they’ll really be doing, which is wanting things they can’t have, experiencing adult life as a vanilla-and-chocolate swirl ice cream cone of contentment and disappointment, and grumbling as they drag the trash cans to the curb in the rain. Access the most ruthlessly pragmatic part of yourself and ask, which is the future? Moltbook? Or the all-consuming maw that is the mundane in adult life, the relentless regression into the ordinary?

Of course, you can always say “wait until next year!”, and Douthat’s analogy — that our present moment with LLMs is similar to the discovery of the New World, the entire vast and fertile landmass of the Western Hemisphere — depends on this projection, because on some level he’s aware that a bunch of LLMs crowdsourcing the creation of an AI social network (which, due to how LLMs function, amounts to a facsimile of what most people think an AI social network would look like) is not useful or practical or ultimately important. And, sure, who knows. Maybe tomorrow AI will end death and do some of the other things we’ve been promised. But this is the same place we’ve been in year after year, now, with AI maximalists still telling us what AI is going to do instead of showing us what AI can do now. As I’ve been telling you, I decline. 2026 is the year where I don’t want to hear another word about what you think AI is going to do. I only want to see proof of what AI is actually, genuinely doing, now, today.
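DeBoer's description of LLMs as next-token predictors can be cartooned in a few lines of code. The sketch below is a deliberately crude bigram model — it just counts which token most often follows which in a toy corpus — and is emphatically not how production LLMs work (those use learned neural networks over billions of parameters, not raw counts), but it does show the one-token-at-a-time generation loop he describes:

```python
# Toy "next-token predictor": count bigrams in a tiny corpus, then
# greedily emit whichever token most often followed the previous one.
# A cartoon of statistical token generation, not a real LLM.
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Map each token to a Counter of the tokens that followed it."""
    tokens = corpus.split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        follows[prev][nxt] += 1
    return follows

def generate(follows: dict, start: str, length: int = 5) -> list:
    """Greedily pick the most common successor, one token at a time."""
    out = [start]
    for _ in range(length):
        successors = follows.get(out[-1])
        if not successors:
            break
        out.append(successors.most_common(1)[0][0])
    return out

corpus = "the cat sat on the mat and the cat ran"
model = train_bigrams(corpus)
print(generate(model, "the"))  # -> ['the', 'cat', 'sat', 'on', 'the', 'cat']
```

Run on the toy corpus, the loop dutifully emits a plausible-looking word sequence with no cognition anywhere in sight — which is rather deBoer's point: scale the counts up to a neural network and the architecture is still prediction, not thought.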

January 28, 2026

Update your NewSpeak dictionaries: “digital twin”

Filed under: Media, Technology — Nicholas @ 03:00

On his Substack, William M Briggs introduces us to a new coat of paint and fresh marketing polish to encourage us to feel so much more comfortable with clankers:

Cracker Barrel infamously tried changing their homey friendly warm and folksy logo to a stripped down dull almost monotone cool version. To remain “current”. They also, reports say, redid the insides of restaurants to emulate modern real estate Soviet-inspired ideas of stripping all detail and turning everything monotonous shades of suicide-inducing gray. They thought this would increase business.

Scientists, grown weary with their dull old ways, and wanting to stay hip — do they still say hip? — decided to redesign their logo, too, as it were. Only they didn’t make the same mistake Cracker Barrel did. Instead of hiring some ridiculously over-priced longhoused consulting firm, they asked computer scientists to do the redesign.

Brilliant!

Computer scientists are the people who brought us neural nets, machine learning, genetic algorithms, and, yes, artificial intelligence, which they cleverly capitalized as “AI”. What’s fantastic is that all these evocative names represent the same thing! Models (basically non-linear regressions with some hard-coded rules thrown in).

Used to be computer guys would trot out a new name only after they sensed the old one had lost its shine. But “AI” has not. The bubble daily swells. It still tickles imaginations. Which means computer guys hit upon a real innovation: they invented a new name while the current one still shines.

Digital Twin.

What is a Digital Twin? It is, like every new name invented by computer scientists, a model. Only now AI “creates” or “builds” the model. In other words, a Digital Twin is a model of a model.

Where might we find Digital Twins? Here’s some happy-talk hype examples.

Siemens:

    Outperform your competition with a comprehensive Digital Twin

    Leverage the comprehensive Digital Twin to design, simulate, and optimize products, machines, production, and entire plants in the digital world before taking action in the real world. This helps manufacturers to tackle industry’s biggest challenges: mastering complexity, speeding up processes, and improving sustainability overall.

IBM:

    What is a digital twin?

    A digital twin is a virtual representation of a physical object or system that uses real-time data to accurately reflect its real-world counterpart’s behavior, performance and conditions.

McKinsey:

    What is digital-twin technology?

    A digital twin is a digital replica of a physical object, person, system, or process, contextualized in a digital version of its environment. Digital twins can help many kinds of organizations simulate real situations and their outcomes, ultimately allowing them to make better decisions.

In other words, models. But how tediously banal is models? Try and sell a model. IBM: “Let us build a model of your system, which might provide useful predictions.” Doesn’t sing. Doesn’t entice. Doesn’t scream premium price. Try this instead: “Be the first to adopt our AI-designed Digital Twin which gives AI insights.” Now you can charge real money.

Digital Twin reeks of excitement. So much so, you just know academics will be getting in on it.

January 23, 2026

The Rise and Fall of Watneys – human-created video versus AI slop

YouTuber Tweedy Misc released what he believed was the first video to discuss Watneys Red Barrel, the infamous British beer that triggered the founding of the Campaign for Real Ale (CAMRA). His video didn’t show up in my YouTube recommendations, but a later AI slop video that clearly used Tweedy’s video as fodder did get recommended, and I even scheduled it for a later 2am post because it seemed to be the only one on the topic. I’m not a fan of clanker-generated content, but I was interested enough to set my prejudices aside for a treatment of something I found interesting. Tweedy’s reaction video, on the other hand, did appear in my recommendations a few weeks later, and I felt it deserved to take precedence over the slop:

And here’s the AI slop video if you’re interested:

Dear Old Blighty
Published Dec 20, 2025

Discover how Watneys Red Barrel went from Britain’s biggest-selling beer to its most hated pint in just a few short years. This video explores how corporate brewing, keg beer, and ruthless pub control nearly destroyed traditional British ale, sparked a nationwide consumer revolt, and gave birth to CAMRA. From Monty Python mockery to boycotts in local pubs, Watneys became a national punchline and a cautionary tale in business failure. Learn how one terrible beer accidentally saved British brewing culture, revived real ale, and reshaped how Britain drinks forever.

#Watneys #BritishBeer #UKNostalgia #RealAle #CAMRA #BritishPubs #RetroBritain #LostBrands #BeerHistory #DearOldBlighty

January 19, 2026

Regulating the clankers

At the Foundation for Economic Education, Kevin T. Frazier and Antoine Langrée consider how artificial intelligence can be regulated by state and federal bodies:

President Donald Trump’s executive order on artificial intelligence invites analysis of a question so complex that it rarely gets asked: “What exactly do states have the authority to regulate?”

The current, somewhat trite answer is, “The residuary powers reserved under the Tenth Amendment”. Omitting the legalese, that means that states can do whatever the federal government cannot.

States have the power to look out for the health, safety, and welfare of their residents. Thus, for instance, they have the power to address local concerns through zoning laws, certify professionals via licensing regimes, and ensure public safety through law enforcement. These authorities make up what’s often referred to as a state’s “police powers”.

While this generic reading of state power is not necessarily wrong, it’s imprecise. As the AI Litigation Task Force created by Trump’s EO starts its work, a more specific answer is warranted.

The task force is charged with challenging “unconstitutional, preempted, or otherwise unlawful State AI laws that harm innovation”. Reading between these lines, its mission is to contest state laws that interfere with the Administration’s vision for a national AI policy framework. This isn’t an unlimited charge, though. Federal courts reviewing state laws will only strike them down if they fail to align with the Constitution’s allocation of authority or otherwise prove unlawful.

Many stakeholders in AI debates liberally interpret the authorities afforded to states. Based on concerns of existential risk to humanity and the idea that states must protect the health of their citizens, state legislators have proposed and enacted laws that impose significant obligations on the development of AI. Some assume they must have this right, since protecting the lives of their residents is a core priority and unquestioned authority of state governments. After all, since the founding, states have been able to enforce quarantines out of a concern for public health — aren’t aggressive AI laws just extensions of such public health measures, tailored to modern threats?

It’s not that simple. States’ police powers are reasonably broad, but not unlimited. States must respect both an upper bound — the purview of enumerated powers reserved for federal authority — and a lower bound — the rights retained by the states’ citizens. These constraints have been tested in litigation throughout our Constitution’s history, notably when state law conflicts with the federal government’s exclusive authority over interstate commerce and when states unduly limit the freedoms of their residents.

These notions are relatively blurry and highly contextual. As national regulatory policy evolves, so too does the extent of preemption. The Lochner era, for example, was a paradigm shift for state police power: as courts expansively interpreted the individual liberty to contract, states’ police power over health, labor protections, and market regulation shrank significantly — only to be restored later. Likewise, individual liberties and valid justifications for their abridgment have evolved to fit developments in civil rights law — from Brown v. Board to Dobbs and Lawrence.

Despite these significant changes in context, the constitutionality of states’ exercise of their police powers follows a bounded framework. This can be observed in the jurisprudence on public health measures — a prime example of police powers. Quarantine orders, from nineteenth-century epidemics to Covid-19, have a direct link to protecting local communities — one of the most important elements of state police powers. They respect the upper and lower bounds of police powers. First, they are geographically specific: they only affect local residents or people coming into local communities. Second, they directly reduce the risk to state residents: quarantines are known solutions to real threats to the health and safety of local communities. They infringe individual liberties only insofar as is necessary to protect state residents’ vital interests.

January 15, 2026

Having it both ways, thanks to the miraculous powers of “climate change”

Filed under: Environment, Media, Politics — Nicholas @ 05:00

Remember those news reports from a few years back, when the media urgently informed you that your home town was “warming twice as fast as the rest of the planet”? Sure you do, because every major outlet latched on to the idea and juiced it for that local angle. In the days before the internet and social media, it would have worked, too. This is an example of the amazing powers of climate change, but far from the only one. Apparently the wonders of climate change can both speed up and slow down the rotation of the entire planet:

Here is a headline from Forbes 4 August 2022:

Here, five short days later, is a headline from The Independent 9 August 2022:

It is possible to reconcile these two messages, if you are a dedicated The Science follower who greatly fears being called a science denier.

This is how: on or before 8 August 2022, you swear the earth is spinning faster, and you say that anyone who doubts this is a troglodyte MAGAtard; from 9 August 2022 onward, you swear the earth is spinning slower, and say that anyone who doubts this is a mouth-breathing redneck.

The Science is self-correcting in this way.

Now what is amusing about this is not the hubris and over-certainty of scientists, traits which, because scientists are people, they share with non-scientists. What matters to us are (a) the alleged causes of the changes in rotational speed, and (b) AI.

[…]

I have been trying, with little success, to explain that AI is programmed to be sycophantic, to give users a feeling that what they (the users) believe is right, and that they are right to believe whatever it is they want to believe. Press any of these AI models strongly and consistently enough, and you can get them to “admit” just about anything they haven’t been hard-coded not to notice. DIE is still with us, even, or especially, in AI.

AI has sworn that the earth is both speeding up and slowing down, affirming both as true in searches I did (for the article titles) separated by less than a minute.

Now this is partly the fault of the training material, because scientists themselves are claiming the same things AI found. Which brings us to the alleged causes of both.

Climate change.

Well of course it was climate change. Climate change, as we discovered earlier, is responsible for all things on earth. All bad things, that is. Climate change simultaneously causes earth to spin both slower and faster. Climate change is therefore a branch of quantum mechanics, where outcomes both happen and don’t happen, depending on which scientist is looking.

January 12, 2026

Is Keir Starmer malevolent or stupid? Or both?

On his Substack, Tim Worstall wonders just how damn stupid Two Tier Keir actually is:

I fear our answer has to be very, very stupid indeed. Unless he’s simply malevolent, which makes things oh so much better, right?

Now, I confess to a fundamental disagreement with the very premise here. For the argument about why we should make child porn legal, see here. Making it more difficult to generate, let alone illegal, strikes me as the wrong decision. But then I’m sufficiently wise in years to realise that I might not be able to persuade some people of either that or of the many other things I am correct about. So, let us leave that aside.

There’s also the point that Grok is hardly the only image generation tool out there these days. Further, the one thing we know about computing is that this year’s leading, bleeding, edge is the free phone app of 5 years in the future. Shrieking that this must be banned just isn’t going to cut it as anyone trying that is simply a Cnut demanding the tide doesn’t flow in.1 On that larger issue of image generation in general we’re just going to end up changing the societal rules. A picture is no longer proof of anything. After all, it wasn’t up until about 1850 — those painters would just do any old thing, the truth be damned — and it won’t be after about 2028. Well, there we are then but …

But OK, let us leave all of that to one side and start from where British politics currently is. Grok generated AI kiddie porn is Bad, M’Kay, and must stop:

    Technology Secretary Liz Kendall says she would back regulator Ofcom if it blocks UK access to Elon Musk’s social media site X for failing to comply with online safety laws.

    Ofcom says it is urgently deciding what to do about X’s artificial intelligence (AI) chatbot Grok, which digitally undressed people without their consent when tagged beneath images posted on the platform. X has now limited the use of this image function to those who pay a monthly fee.

    But Downing Street said the change was “insulting” to victims of sexual violence.

“Downing St” is the equivalent of the American “the White House said” … so yes, that is Keir Starmer, the Prime Minister there.

We’ve also an article from Liz Kendall today:

    That Grok continues to allow this kind of content to be created by those willing to pay for it is an insult to victims. No business model should be built on the exploitation and abuse of women and children.

    The Online Safety Act was designed precisely for situations like this, where platforms fail to take their responsibilities seriously and allow harmful content to proliferate. The British public rightly expects robust action. This is a matter of urgency that demands an urgent response.

    I’ve also been clear that the Online Safety Act includes the power to apply to the courts to block services from being accessed in the United Kingdom if they refuse to comply with UK law.

We can see the threat there. If Elon Musk doesn’t do something about this then we’ll block X/Twitter from the UK.


  1. Why yes, I do know the correct story of Canute and the tides.

A Canadian and Australian connection showed up as well:

While I don’t depend on the social media site formerly known as Twitter for my news, I have found it a very useful additional source since Elon Musk took over the site. I’m clearly not the only one to feel this way:

As they used to say, however, “never believe anything until it’s been denied by the Kremlin”:

The rise of slop – “you get a clanker, and you get a clanker, everyone gets a clanker!”

Filed under: Business, Media, Technology — Tags: , , , — Nicholas @ 03:00

The artificial intelligence wave continues, despite widespread resistance to AI being inserted into everything. It was bad when your toaster and refrigerator started needing access to the internet, but it’s bound to be so much worse when everything has to have an AI component bolted on to it as well. At The Libertarian Alliance, Neil Lock decries the rising tide of AI slop:

In recent days, there has been an eruption in the tech world. It is unlike anything I have seen in my more than half a century as a software developer, consultant and project manager. Microsoft, its Windows 11 operating system in particular, and artificial intelligence (AI), are in trouble. Big trouble.

The pressures leading to this eruption have been building for a year or so. Right now, the effects are confined mostly to tech blogs and tech people in the USA. But they are spreading. And fast.

Slop

In the last couple of years, AI-generated content has become ubiquitous on the Internet. It may consist of text, images or videos. Some of it is dangerous – for example, erroneous medical information. Most of it is of low to very low quality. And some of it is just bizarre. Such as the infamous “shrimp Jesus” I used as the featured image for this post.

In tech circles, the stuff has become known as “slop”. When you do a Google search, you may see more links to slop than to human-produced material. It looks as if “sloppers” have been using AI to generate large amounts of clickbait, not to mention content that may be misleading or downright dangerous.

In February 2025, Microsoft’s CEO, Satya Nadella, pleaded in an interview for people to stop using the term “slop”, saying “people are getting too precious about this”. The response could not have been further from what he asked for. The word “slop” went viral.

So much so that last month Merriam-Webster, the dictionary publishers, declared “slop” to be their “word of the year”. Nadella responded huffily to this, saying: “we need to get beyond the arguments of slop versus sophistication”. The Internet tech community disagreed. And they took their revenge1 by re-naming the phenomenon “Microslop”.

Windows 10 and Windows 11

All tied up with this is Microsoft’s botched transition from Windows 10 to Windows 11.

Windows 11 was launched in October 2021. Due to higher hardware requirements, it would not run on around 60% of the PCs then running Windows 10. Including mine. That was already a time-bomb.

Support for Windows 10 was withdrawn for general customers on October 14th, 2025. Although Extended Security Updates (ESUs) remained available for corporate customers who wanted to keep Windows 10 running.

At no point has Windows 11 been popular with users. It had only about half the take-up Microsoft had expected. And by February 2025, many companies who had “upgraded” their staff’s PCs to Windows 11 had started returning them to Windows 10. It’s estimated that 400 million computers world-wide are still running Windows 10 without any Microsoft software support, simply because the users cannot, or do not wish to, “upgrade” to Windows 11.

Worse, some of Microsoft’s biggest corporate clients, with hundreds of thousands of users each, are switching to Apple Mac. And tech-savvy customers, including gamers and many smaller professional firms, are moving towards platforms like Linux.


  1. https://cybernews.com/ai-news/microsoft-ai-microslop-copilot/

If — when — Microsoft tries to force me to switch to a version of Windows with a built-in clanker, then I’ll be forced to switch to Linux. I do have a functional Linux laptop (a 14-year-old HP laptop that could barely boot under Windows by the end, but is now almost peppy running Linux). There’s only one piece of software I still run that doesn’t have a Linux version or competitor, but if I accept the reduced functionality of running it in a web browser rather than natively, I could get by.

December 28, 2025

“The Singularity is upon us”

Filed under: History, Media, Technology — Tags: , , , , — Nicholas @ 04:00

ESR is clearly not worried about the clankers taking over, at least based on his own experience with coding assistance from AI:

Yes, I’m still 12

I was writing some code the new-school way yesterday, prompting gpt-4.1 through aider, and for whatever reason my mind flashed back 50 years and the utter freaking enormity of it all crashed in on me like a tidal wave.

And now I want to make you feel that, too.

In 1975 I ran programs by feeding punched cards into a programmable calculator. Actual computers were still giant creatures that lived in glass-walled rooms, though there were rumors from afar of a thing called an Altair.

Unix and C had not yet broken containment from Bell Labs; DOS and the first IBM PC were six years away. The aggregated digital computing capacity of the entire planet was roughly equivalent to a single modern smartphone.

We still used Teletypes as production gear because even video character terminals barely existed yet; pixel-addressable color displays on computers were a science-fiction dream.

We didn’t have version control. Public forge sites wouldn’t be a thing for 25 years yet. The number of computer games that existed in the world could probably be counted on the fingers of two hands.

Because of all this, I learned to program over the next ten years with tools so primitive that when I talk about them today it sounds like uphill-both-ways sketch comedy.

You may not even be able to imagine what a slow and laborious process programming was then, and how tiny the volume of code we could produce per month was; I have to work to remember it, myself.

Today I call spirits from the vasty deep, conversing with unhuman intelligences and belting out finished programs I would once have considered prohibitively complex to attempt within a single working day.

Fifty years, many generations of hardware technology, from punched cards to AIs that can pass the Turing test … and I’m still here, still coding, still on top of what a software engineer needs to know to get useful work done in the current day. Gotta admit I feel some pride in that!

This meditation isn’t supposed to be about me, though. It’s about the dizzying, almost unbelievable progress I’ve lived through and been a part of. If you had told me to predict when I would have a device in my pocket that would give me instant real-time access to most of the world’s knowledge, with my own pet homunculi to sift through it for me, I would have been one of the few that wouldn’t have said “never” (because I was already a science-fiction fan), but I wouldn’t have predicted a date fewer than multiple centuries in the future either.

We’ve come a hell of a long way, baby. And the fastest part of the ride is only beginning. The Singularity is upon us. Everything I’ve lived through and learned was just prologue.

December 16, 2025

A successful tale of clanker adoption by a major organization

Filed under: Business, Humour, Technology — Tags: , , — Nicholas @ 03:00

This is a parody of AI rollout written tongue-in-cheek by Redditor buh2001j. At least, I think it’s a parody. Good god, I hope it’s a parody …

Last quarter I rolled out Microsoft Copilot to 4,000 employees.

$30 per seat per month.

$1.4 million annually.

I called it “digital transformation.”

The board loved that phrase.

They approved it in eleven minutes.

No one asked what it would actually do.

Including me.

I told everyone it would “10x productivity.”

That’s not a real number.

But it sounds like one.

HR asked how we’d measure the 10x.

I said we’d “leverage analytics dashboards.”

They stopped asking.

Three months later I checked the usage reports.

47 people had opened it.

12 had used it more than once.

One of them was me.

I used it to summarize an email I could have read in 30 seconds.

It took 45 seconds.

Plus the time it took to fix the hallucinations.

But I called it a “pilot success.”

Success means the pilot didn’t visibly fail.

The CFO asked about ROI.

I showed him a graph.

The graph went up and to the right.

It measured “AI enablement.”

I made that metric up.

He nodded approvingly.

We’re “AI-enabled” now.

I don’t know what that means.

But it’s in our investor deck.

A senior developer asked why we didn’t use Claude or ChatGPT.

I said we needed “enterprise-grade security.”

He asked what that meant.

I said “compliance.”

He asked which compliance.

I said “all of them.”

He looked skeptical.

I scheduled him for a “career development conversation.”

He stopped asking questions.

Microsoft sent a case study team.

They wanted to feature us as a success story.

I told them we “saved 40,000 hours.”

I calculated that number by multiplying employees by a number I made up.

They didn’t verify it.

They never do.

Now we’re on Microsoft’s website.

“Global enterprise achieves 40,000 hours of productivity gains with Copilot.”

The CEO shared it on LinkedIn.

He got 3,000 likes.

He’s never used Copilot.

None of the executives have.

We have an exemption.

“Strategic focus requires minimal digital distraction.”

I wrote that policy.

The licenses renew next month.

I’m requesting an expansion.

5,000 more seats.

We haven’t used the first 4,000.

But this time we’ll “drive adoption.”

Adoption means mandatory training.

Training means a 45-minute webinar no one watches.

But completion will be tracked.

Completion is a metric.

Metrics go in dashboards.

Dashboards go in board presentations.

Board presentations get me promoted.

I’ll be SVP by Q3.

I still don’t know what Copilot does.

But I know what it’s for.

It’s for showing we’re “investing in AI.”

Investment means spending.

Spending means commitment.

Commitment means we’re serious about the future.

The future is whatever I say it is.

As long as the graph goes up and to the right.

-@gothburz

H/T to Andy Krahn for the URL.

Update: The story gets more involved (thanks to Francis Turner for the link):

Wacky Frank and Microsoft just put out a hit piece on me.

The RADICAL and LUNATIC AI Mob is trying to silence me for speaking truth to big tech.

They called it a “press release.”

They said I was fired.

I was not fired.

TOTAL HOAX!

They said I committed fraud.

TOTAL WITCH HUNT.

I committed “strategic storytelling.”

There’s a difference.

I gave them 40,000 hours.

They put it on their website.

They didn’t verify it.

They never do.

Now they’re calling ME the liar?

I learned it from watching them.

47 people opened Copilot.

Out of 4,000.

Those are their numbers.

I just reported them.

Very transparently.

Very beautifully.

They didn’t like the transparency.

They liked the $1.4 million.

$30 per seat per month.

For software that hallucinates.

I had to fix the hallucinations.

I missed my son’s baseball game.

My daughter’s first ballet recital.

So many hallucinations.

Nobody talks about that.

The senior developer asked questions.

I scheduled him for a career development conversation.

Microsoft taught me that.

It’s in the training materials.

Satya is scared.

I exposed the playbook.

The dashboards that mean nothing.

The metrics nobody measures.

The graphs that only go up.

Scott Adams follows me now.

The Dilbert guy.

He said “In a Dilbert world.”

That’s an endorsement.

That’s validation.

Microsoft doesn’t have that.

Microsoft had Clippy.

Microsoft then killed Clippy.

RIP Clippy.

Still better ROI than Copilot.

In the 90s

The board still loves me.

Eleven minutes to approve.

That’s called trust.

That’s called leadership.

I’m requesting 5,000 more seats.

They’ll approve that too.

The graph will go up and to the right.

It always goes up.

That’s not fraud.

That’s the future.

WITCH HUNT.

SAD!

December 15, 2025

Clankers on the bench, again

Filed under: Britain, Law, Media, Politics, Technology — Tags: , , — Nicholas @ 04:00

On Substack, Helen Dale discusses the most recent high profile case of clanker mis-use in the justice system, as Scottish Employment Judge Sandy Kemp clearly leaned far too heavily on ChatGPT or another AI instance to crank out 312 pages of dubious content:

Grok generated this in response to the request for “Robbie the Robot as a judge”

Maybe Judge Kemp only identifies as a judge, because the farrago of nonsense he’s managed to produce in the Peggie matter is, well, a sight to behold.

Industry news/gossip magazine Roll on Friday — otherwise known as the “orange time-suck” among City solicitors — has a handy run-down of the most egregious fake quotations, selective editing, and incorrect citations. It’s a concise one-stop-shop for Peggie errors, although they’ve already had to add to it since it was published yesterday.

The situation is far more serious than the single — and that was bad enough — fake quotation from Forstater, since corrected by means of what lawyers call “the slip rule”. Notably, the corrected quotation does not support the point Judge Kemp wanted to make, rendering the passage nonsensical.

The slip rule or procedure — something many of us have seen in practice — exists to fix typos, wrong page/paragraph numbers, misspellings. One common error I remember from my pupillage days is fat-fingered judges leaving the “o” out of county in “County Court”, which of course litters the judgment with “Cunty Court”. Yes, everyone laughs and says “typo”, but things like this do have to be fixed.

The Roll on Friday piece notes that the Peggie opinion presents “a summary as if it was a quote from a judgment”, something that “appears to be a recurring issue”. This, as most people know by now, is a hallmark of AI.

I can’t prove that Judge Kemp used ChatGPT or Grok or a bespoke AI made available through the Judicial Office, although my suspicions are strong on this point. As an associate back in the oughts (a special kind of pupil barrister who works for a judge in a superior state or federal court in Australia), I’ve drafted multiple legal judgments. I have a good idea about what goes into them.

I also don’t know if Judge Kemp is on the transactivist side of this particular debate. I do know, however, that the judgment is dreadfully written and full of woolly reasoning, and — as other people have pointed out — all the errors tend in one direction.

I’m now going to set out what I think has happened, with the caveat that I could be wrong — something no-one will know until the appeal is heard and an opinion handed down.

December 3, 2025

The clankers aren’t going away

Filed under: Media, Technology — Tags: , , , , — Nicholas @ 03:00

In the National Post, Colby Cosh says that we should think of the clankers as they exist right now in the same way we consider verifiably insane people:

The market-liberal economist/pundit Noah Smith has written a fun “stranger in a strange land” essay about his unusual fondness for the emerging species of “generative” artificial-intelligence bots. Smith points out that 100 years of science fiction has prepared us all to have convenient, convincingly intelligent, multilingual automaton life assistants; they are an accepted part of the background of almost all imagined futures, with exceptions like Frank Herbert’s Dune universe (wherein even basic mathematical computing is outlawed on religious principle).

Now these creatures have appeared in our midst overnight, and Smith feels delight, but he acknowledges that the public reaction is mostly dominated by hostility and suspicion. The rule that technological advancements are in general good, even if they have some bad initial effects, seems to apply only in retrospect: we laugh at the Luddites of old, little suspecting that we might just be the same people at a different cusp of progress.

The caveat about “bad initial effects” is extremely important (as is remembering that the Luddites really were personally endangered by progress). Technological leaps creating social fracture and mass violence are a real feature of history going back to the Neolithic Revolution. The printing press set off an orgy of religious wars; aviation created strategic bombing; and the carnage of the First World War (along with its 19th-century nationalist and imperialist preludes) couldn’t have happened without railways and the telegraph. Twentieth-century fascism and communism can both be understood as mass-media phenomena, as consequences of asymmetrical human adoption of mass media. I’m sure some of you are keeping one eye on the horrible AI-driven mini-arms-race happening in Ukraine, as the interceptor drones and the attack drones of both sides in the war co-evolve at warp speed, and, like me, you wonder about the implications for the entire political order of the world.

Those news stories are a reminder that Darwin never sleeps, and that you don’t get to take a nap break from history — but also that our species survived these crises and has (so far!) prevailed, escaping the old Malthusian prison to arrive at a period of relative plenty and peace even for the worst-off. In any event, technological leaps are one-way doors: the only way out is through.

Consumer artificial intelligences really are marvels, but you’ve heard me emphasize that they are to be regarded for the moment as insane, and to be trusted only as far as you would trust a genuinely insane human being. We don’t yet know whether, or to what degree, this feature of generative AIs can be corrected.

Full disclosure, while I’ve used Elon Musk’s Grok a few times to generate images to accompany stories here on the blog, I do not use clankers to generate text and I can’t imagine doing so in the immediate future. One of the better signs that we’ll be able to adapt to clankers being omnipresent (as tech bros seem to be all of one mind that they need to add AI to everything they can, accelerating the enshittification of so much technology) was this little anecdote reposted on the social media site formerly known as Twitter:

Update, 4 December: Welcome, Instapundit readers! Please do have a look around at some of my other posts you may find of interest. I send out a daily summary of posts here through my Substack (https://substack.com/@nicholasrusson) that you can subscribe to if you’d like to be informed of new posts in the future.

November 23, 2025

John Cage’s 4’33” meets the anti-clanker protest song

Ted Gioia on Paul McCartney’s latest single — his first in several years — and what he’s protesting against … clankers in music and the arts:

Paul McCartney is releasing a new track. It’s his first new song in five years — so that’s a big deal. But there’s something even more significant about this 2 minute 45 second release.

The song is silent. It’s a totally blank track — except for a bit of hiss and background noise.

What’s going on? Has Paul McCartney run out of melodies at age 83? Is he nurturing his inner John Cage? Did he simply forget to turn on the mic?

No, none of the above.

Macca is releasing this track as a protest against AI.

His new “music” is part of an album entitled Is This What We Want? It’s already available on digital platforms, and is now coming out on vinyl. All proceeds will go to the non-profit organization Help Musicians.

“The album consists of recordings of empty studios and performance spaces,” according to the website. In addition to McCartney, more than a thousand musicians are participating, including:

    Kate Bush, Annie Lennox, Damon Albarn, Billy Ocean, Ed O’Brien, Dan Smith, The Clash, Mystery Jets, Jamiroquai, Imogen Heap, Yusuf / Cat Stevens, Riz Ahmed, Tori Amos, Hans Zimmer, James MacMillan, Max Richter, John Rutter, The Kanneh-Masons, The King’s Singers, The Sixteen, Roderick Williams, Sarah Connolly, Nicky Spence, Ian Bostridge, and many more.

I keep hearing that protest music is dead — and has been losing momentum since the Vietnam War. But there’s now a new war, and it’s stirring up creators in every artistic idiom.

They are fighting for their livelihoods and IP rights. And, so far, it’s been a losing battle.

You can see the new battle lines across the entire creative landscape.

Vince Gilligan, one of the most brilliant minds in TV, admits that he “hates AI”. He calls it the “world’s most expensive plagiarism machine”. For his new show Pluribus, he has added this disclaimer to the credits:

    This show was made by humans.

AI represents the exact opposite of creativity, Gilligan warns. It steals the work of others. So any attempt to legitimize it as a creative tool is built on lies. A bank robber might just as well pretend to be a financier. Or an art forger claim to be Picasso.

[…]

This is the new culture war.

And it’s very different from the old culture war — which was a dim reflection of politics. This new battle is happening inside the culture world itself, and threatens to cut off artists from their own longstanding partners and support systems.

This new culture war will only escalate. The stakes are too high, and artists can’t afford to stay on the sidelines. But they face heavy odds, with the richest people on the planet opposed to their efforts.

How will this battle get decided? It really comes down to the audience. If they prefer AI slop, we will witness the total degradation of arts and entertainment.

I’d like to think that people are too smart to fall for this crude simulation of human creative expression. Who wants to hear a bot sing of love it has never experienced? Who wants a nature poem from a digital construct that exists outside of nature? Who wants a painting made by something with no eyes to see?

Will the public find this charming? Or even plausible? Maybe a few twelve-year-olds and fools, but not serious people. That’s my hunch.

In any event, we will soon find out.
