Quotulatiousness

October 29, 2025

Clankers on the bench

Filed under: Australia, Law, Technology, USA — Nicholas @ 03:00

The cynic in me wonders if having AI judges would make the justice system any worse, given the ever-increasing pro-criminal bias on display in courtrooms across North America and Europe:

Grok generated this in response to my request for “Robbie the Robot as a judge”

It’s the question rattling through chambers and law schools. Are we in danger of a world where the solemn business of justice, liberty, livelihood, and who really owns the back fence is entrusted not to a human in robes but to a chirpy algorithm with a software bug and a 4,000-word disclaimer? Are we handing over judgment itself to machines, or simply giving them the photocopying and hoping they don’t start offering opinions?

Because, depending on whom you ask, AI in law is either (a) the long-delayed democratization of justice for ordinary people or (b) the first act of a constitutional farce in which courts drown beneath PDFs full of nonsense and fake footnotes.

The Machinery Arrives

Beneath the wood paneling and the reassuring thump of legal pomposity, something mildly heretical is afoot. Judges, clerks, and barristers — those high priests of precedent — are quietly feeding their briefs to generative AI, which now whirs away in the background, summarizing, drafting, and rummaging through case law while its human overlords wrestle with the biscuit tin and their consciences.

According to the Judicial Commission of New South Wales (NSW), the robots are already in the building. Their latest handbook cheerfully notes that AI is used for legal analytics, mass document review, “natural language” searching, and predictive modeling — all of which sound terribly sophisticated until you realize they’re essentially Excel spreadsheets with delusions of grandeur. A UNESCO survey adds the clincher: nearly half the world’s judges, prosecutors, and court staff have used generative AI for work, and only 9 percent have had what’s politely called safe-usage training. This is training where someone explains that you shouldn’t upload confidential evidence to a chatbot that lives in the cloud or take legal advice from a program that thinks Brown v. Board of Education was a musical.
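To see why "Excel spreadsheets with delusions of grandeur" isn't entirely unfair, here is a toy sketch of what a bare-bones "natural language" case-law search amounts to: ranking documents by word overlap. All case names and summaries are invented for the illustration.

```python
# Illustrative only: a toy "natural language" case-law search.
# Relevance here is nothing more than counting shared words.
from collections import Counter

def score(query: str, document: str) -> int:
    """Count how many times the query's words appear in the document."""
    doc_words = Counter(document.lower().split())  # missing words count as 0
    return sum(doc_words[w] for w in query.lower().split())

# Hypothetical case summaries, invented for the example.
cases = {
    "Smith v. Jones": "boundary dispute over the back fence between neighbours",
    "Doe v. Roe": "contract dispute over delivery of goods",
}

query = "who owns the back fence"
ranked = sorted(cases, key=lambda name: score(query, cases[name]), reverse=True)
print(ranked[0])  # Smith v. Jones ranks first on simple word overlap
```

Real legal-search products layer far more on top, but the core mechanism — scoring and sorting — is exactly this sort of counting.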

The Law Society of NSW, in a rare fit of clairvoyance back in 2016, created something called the Future Committee — the sort of name that already sounds like a sci-fi tribunal convened to ban fun. Their brief was to consider what might happen when clients demanded more for less, junior lawyers were burnt to a crisp, and artificial intelligence started politely asking, “Shall I draft that for you?” The conclusion was simple: adapt or be eaten.

Meanwhile, in London, the Law Society of England and Wales skipped the warm-up act and went straight to the apocalypse. Its 2021 report, Images of the Future Worlds Facing the Legal Profession 2020–2030, envisioned a legal world in which routine advice would be swallowed whole by AI portals, full-time lawyers would be reduced to an endangered species, and the survivors would work alongside AI and be mandated to take “performance-enhancing medication in order to optimise their own productivity and effectiveness.” The whole thing reads like 1984 rewritten by a management consultant — right down to the faint violin of self-pity playing somewhere in the distance.

Oh, but those were in Australia and the UK; surely it’s not that bad in North America? Uh, well …

Across the Atlantic, the award for Legal Farce of the Century goes to Mata v. Avianca, Inc. (S.D.N.Y. 2023). In this modern masterpiece of professional self-immolation, a team of lawyers filed court papers quoting three magnificent precedents: Varghese v. China Southern Airlines, Martinez v. Delta, and Zicherman v. Korean Air Lines. Unfortunately, none of them existed — not in Westlaw, not in Lexis, not even in the fever dreams of law students. When the judge asked, quite reasonably, to see the cases, counsel could only offer the look of people discovering gravity for the first time. Sanctions followed under Rule 11 for what the court delicately called “subjective bad faith”, which is American for “you made this up”. The ruling is now shown at continuing-education sessions under the optimistic title Let’s Not Do That Again.

The sequel writes itself:

  • Massachusetts: A lawyer submitted memoranda stuffed with phantom cases, blamed “the office AI”, and was fined. The judge, channeling divine exasperation, warned that blind acceptance of AI-generated content is not a defense — it’s a lifestyle choice.
  • Alabama: Attorneys for the state prison system filed citations to imaginary authorities and were sentenced to the most humiliating punishment known to the bar: writing apology letters to their law school deans and delivering public lectures on ethics.
  • California: One overzealous litigator managed to produce a brief in which twenty-one of twenty-three authorities were pure fiction. The court fined him, the press dined out on it, and AI-compliance seminars across America gained a new slide.

Thus, the first commandment of the digital age is: the robot may write it, but the Submit button still belongs to a human — and the human still gets to explain it to the judge.
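Half of that commandment is automatable: before anything is filed, flag every citation that cannot be found in a trusted index. A sketch only — the set below stands in for a real lookup against Westlaw or Lexis, with Palsgraf (a real, famous case) as the verified entry and the fabricated Varghese citation from the story above as the fake:

```python
# Sketch of a pre-filing citation check: flag anything not in a trusted index.
# "Palsgraf" is a real case used as a stand-in for a verified entry; in
# practice the index would be a query against Westlaw or Lexis, not a set.
trusted_index = {
    "Palsgraf v. Long Island Railroad Co.",
}

def unverified_citations(brief_citations, index):
    """Return the citations that cannot be found in the trusted index."""
    return [c for c in brief_citations if c not in index]

brief = [
    "Varghese v. China Southern Airlines",  # the hallucinated precedent
    "Palsgraf v. Long Island Railroad Co.",
]
print(unverified_citations(brief, trusted_index))
# → ['Varghese v. China Southern Airlines']
```

Nothing here is clever — which is rather the point. The Mata lawyers' failure was skipping a check a dozen lines of code could prompt.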

October 26, 2025

Biggs and the “End of History”

Feral Historian
Published 30 May 2025

The “Biggs Edit” isn’t just a contentious question of Star Wars arcana, but an example of some of the problems historians face trying to reconstruct the past. Problems that are only going to get worse in the age of AI.

00:00 Intro
01:12 Not So Easy
05:02 A Slim Hope
05:50 Not Equal Claims
06:46 Memory and AI

🔹 Patreon | patreon.com/FeralHistorian
🔹 Ko-Fi | ko-fi.com/feralhistorian

September 25, 2025

An unanticipated danger of AI – “classified” videos for decision-makers

Until fairly recently, even the least tech-savvy among us could distinguish AI-generated videos from the real thing … but most of the leaders and decision-makers in western governments aren’t very tech-savvy, and in high-pressure environments they may be uniquely susceptible to AI manipulation:

What If I Told You … One of the biggest applications of AI for misinformation hasn’t been online but in the halls of power.

Aging boomer politicians, generals, and major figures are manipulated by showing them AI videos they can’t tell are fake, can’t pause to look at, and certainly can’t digitally examine or geolocate …

“And as you saw Mr President.”

Pay attention. All of them reference seeing “videos” that you aren’t allowed to see, of events which they claim are public record but which appear nowhere, and which no reporting supports …

Sean Hannity was interviewing a world leader and even said, “You should show the public the video you showed me; it’d really change everyone’s opinion. It changed mine” LIVE ON AIR. And the world leader offered some non-committal maybe, then released nothing.

These aging politicians, media figures, corporate personalities, etc. all casually reference seeing insane videos that would CHANGE EVERYTHING, videos that would have been immediately released to sway public opinion if they existed, or leaked if it would have been in poor taste to be seen releasing them directly (like gore films)

But of course they aren’t released because they’re faked and the internet would immediately piece together that they’re faked with AI, video game, and archival footage from old conflicts … But the aging 60- and 80-year-olds who run the world can’t tell.

There was a case where they challenged Greta Thunberg “Would you watch this video it’d change your mind” and she refused telling them to just release it … Then they didn’t and attacked her for not being willing to view evidence contrary to her views … in a controlled environment where she couldn’t scrutinize it or check its authenticity against anything else …

It sounds insane! But if you pay attention all of these politicians, media figures, and even influencers … People who often have ZERO security clearance or any official attachment of real trust or allegiance to the governments showing them this “classified” or “controlled” footage … Regularly reference seeing footage which does not exist in the public domain, for events which are viciously contested in which any of the footage they claim to have seen would be WORLD CHANGING news … Yet all these figures are just left out in the wind repeating “Trust me bro”s for some of the most important occurrences of the past decade.

August 24, 2025

Much of our prosperity is based on trust, and we’re rapidly losing it

Ted Gioia foresees a precipitous fall in trust coming at us very soon, and I’m afraid he may be too optimistic:

During the great purges of the 1930s, Stalin ordered the execution of a million people, including some of his closest associates. But it wasn’t enough to kill these victims — they also had to disappear from photographs.

In a famous case, Nikolai Yezhov got removed from his position next to Stalin in a photo taken by the Moscow Canal. This erasure alarmed many party elites because Yezhov, head of the secret police, had been one of the most feared men in the Soviet Union.

And now he got totally deleted.

Well, not totally. In those days of print media, original photos survived, and a paper trail made it difficult to erase history.

So this photo was later used to mock Stalin, and the pretensions of dictators. They can try to change reality, but that’s not possible.

Or is it? Maybe dictators now get the last laugh. Because in the last few months, reality has been defeated — totally, completely, unquestionably.

It is now possible to alter reality and every kind of historical record — and perhaps irrevocably. The technology for creating fake audio, video, and text has improved enormously in just the last few months. We will soon reach — or may have already reached — a tipping point where it’s impossible to tell the difference between truth and deception.

  • Can I tell the difference between a fake AI video and a real video? A few months ago, I would have said yes. But now I’m not so sure.
  • Can I tell the difference between fake AI music and human music? I still think I can discern a difference in complex genres, but this is a lot harder than it was just a few months ago.
  • Can I tell the difference between a fake AI book and a real book by a human author? I’m fairly confident I can do this for a book on a subject I know well, but if I’m operating outside my core expertise, I might fail.

At the current rate of technological advance, all reliable ways of validating truth will soon be gone. My best guess is that we have another 12 months to enjoy some degree of confidence in our shared sense of reality.

But what happens when it’s gone?

Back in 2023, I asserted that trust is the most scarce thing in society. But that was before all these tech deceptions came online. Trust will soon get even more scarce — or perhaps disappear completely from the public sphere.

This is not a small matter.

Most discussions of this issue focus on the technology. I believe that’s a mistake. The real turmoil will take place in social cohesion and individual psychology. They will both fracture in a world where our shared benchmarks of truth and actuality disappear.

We will be — already are — in desperate need of Robert Heinlein’s Fair Witnesses:

A Fair Witness is an individual trained to observe events and report exactly what is seen and heard, making no extrapolations or assumptions. While wearing the Fair Witness uniform of a white robe, they are presumed to be observing and opining in their professional capacity. Works that refer to the Fair Witness emphasize the profession’s impartiality, integrity, objectivity, and reliability.

An example from the book [Stranger in a Strange Land] illustrates the role of Fair Witness when Anne is asked what color a house is. She answers, “It’s white on this side.” The character Jubal then explains, “You see? It doesn’t occur to Anne to infer that the other side is white, too. All the King’s horses couldn’t force her to commit herself … unless she went there and looked – and even then she wouldn’t assume that it stayed white after she left.”

July 28, 2025

The AI threat to the laptop classes

Filed under: Business, Economics, Media, Politics, Technology, USA — Nicholas @ 05:00

Warren at Coyote Blog responds to a recent Gato Malo post on the way artificial intelligence (however described) will continue to disrupt the workplace, especially as it begins to threaten the “laptop class” workers:

I agree with Gato that AI has a huge potential to disrupt current work patterns, in the same way that the industrial revolution did. The 19th-century disruptions were severe, and many people suffered as their experience and skill sets no longer matched the new economy. But eventually everyone, from the poorest to the richest, was better off for letting the industrial revolution run its course.

But in the 19th century, the disrupted were essentially powerless. What happens this time around, though, when the disrupted are the ruling elite themselves? These potentially disrupted professions include lawyers and doctors, who have already shown themselves very willing to organize to block innovation, squash competition, and protect their high pay. Just look at the history of Congress’s attempts to reduce Medicare reimbursements to doctors. And that was minor compared to the potential AI disruption. Let me give you another example of the powerful resisting a technological change that should have disrupted their businesses.

When TV was first being rolled out, the industry coalesced around a network of local broadcast stations, many of which became affiliates of a network like NBC or CBS. Why this model? Mainly it was driven by technology — the farthest a TV signal could reasonably be broadcast was about 50-75 miles. Thus everyone by necessity got their TV through the three or four TV stations in their metropolitan area, each its own small business.

Now fast forward to today. There are multiple ways to broadcast a TV signal nationwide — several satellite options and many streaming internet approaches. So now when we watch DirecTV or YouTube TV, we just watch the national NBC or ABC feed, right? Nope. Federal law requires that whatever service you use MUST serve up NBC, for example, via the local affiliate. That is why your streaming TV service harasses you when you travel: it is worried about violating the law by showing you the Phoenix CBS affiliate when you are staying overnight in Atlanta (gasp).

This is hugely costly. In order to provide NBC among its stations, YouTube TV must gather the feeds from 235 different stations. In the internet streaming era this is merely costly, but in the satellite era it was insane. DirecTV, with its limited bandwidth, had to simultaneously broadcast 235 stations, most showing identical content, just to legally provide you with NBC. So why this crazy, expensive, insane effort? I am sure you have guessed — pound for pound, local TV stations are among the most powerful lobbyists in the country. First, they have money and a massive incentive to defend their local geographic monopoly — car dealers and alcohol distributors are much the same, which is why every potential innovation is resisted in those markets. But TV stations have one extra card to play: nearly every Congressman in the House depends on the three or four TV stations in one major metropolitan area for a huge part of their publicity and coverage. No politician is going to screw with that. At the end of the day, local stations did not get disrupted; they actually became more valuable with this government-enforced distribution of their product.

This is a small example of the fight that is coming in AI. Congressmen will couch their arguments in fear-charged terminology as if their real fear is some Terminator-like AI apocalypse. But the real concern will be from the influential elite who are being disrupted. What would have happened to the Industrial Revolution if the hand-loom weavers were the children of the nobility? Would the government have allowed the revolution to proceed? We are about to find out.

On a cheerier note (if you’re an AI), here are Ted Gioia’s most recent concerns about AI getting more evil as it gets more capable:

I hate to be the bearer of bad news, but AI doesn’t make ethical decisions like a human being. And none of the reasons why people avoid evil apply to AI.

Okay, I’m no software guru. But I did spend years studying moral philosophy at Oxford. That gave me useful tools in understanding how people choose good over evil.

And this is relevant expertise in the current moment.

So let’s look at the eight main reasons why people resist evil impulses. These cover a wide range — from fear of going to jail to religious faith to Darwinian natural selection.

You will see that none of them apply to AI.

Do you see what this means? You and I have plenty of reasons to choose good over evil. But an AI bot is like the honey badger in the famous meme — it just don’t care.

So sci-fi writers have good reason to fear AI. And so do we. The moral compass that drives human behavior has no influence over a bot. As it gets smarter, it will increasingly resemble a Bond villain. That’s what we should expect.

Anyone who tries to forecast the future of AI must take this into account. I certainly do.

And even though I’d like to think that I’m a fearless predictor, I must admit that what I see playing out over the next few years is very, very, very troubling.

Here’s my hypothesis: Let’s call it Ted’s Unruly Rules of Robotics:

  1. Smart machines will have an inherent tendency to evil — because human moral or legal or religious or evolutionary tendencies to goodness don’t apply to them.
  2. The only way to stop this is through human intervention.
  3. But as the machines get smarter, this intervention will increasingly fail.

July 22, 2025

The internet keeps getting worse. Let’s talk about why.

Jared Henderson
Published 16 Jul 2025

Why do major online platforms keep getting worse? Cory Doctorow’s work helps us understand the pattern of growth, decline, and eventual demise.

→ Timestamps
00:00 Beginning
00:51 How Platforms Die
08:29 The Death of a Platform (From the Inside)
12:14 Ads, Everywhere
14:47 Yes, I Make Money from Ads
16:32 Bots
22:04 The Internet We Need

July 21, 2025

AI slop seems to have finally triggered significant pushback

Filed under: Business, Media, Technology — Nicholas @ 03:00

Ted Gioia says that he’s seeing strong indicators that the AI slop superabundance has helped create a widespread rejection of it and all its works:

2025 has been the year of garbage culture.

Creators watch in horror as dismal AI slop threatens their livelihoods — and the integrity of their fields. It’s everywhere, spreading faster than a pharaoh’s plague.

In recent months, we’ve been bombarded with millions of lousy AI songs, idiotic AI videos, and clumsy AI images. Error-filled AI texts are everywhere — from your workplace memos to the books sold on Amazon.com.

Even my lowly vocation, music journalism, gets turned into a joke when it’s accompanied by slop images of fake events.

No, these things did not really happen.

But something has changed in the last few days.

The garbage hasn’t disappeared. It’s still everywhere, stinking up the joint.

But people are disgusted, and finally pushing back. And they are doing so with such fervor that even the biggest AI companies are now getting nervous and pulling back.

Just consider this surprising headline:

This was stunning news. YouTube is part of the biggest AI slop promoter of them all — namely the Google/Alphabet empire. How can they possibly abandon AI garbage? Their bosses are the biggest slopmasters of them all.

After this shocking news reverberated through the creative economy, YouTube started to backtrack. They said that they would not punish every AI video — some can still be monetized.

But even the revised guidelines are still a major blow to AI slop purveyors. YouTube made clear that “creators are required to disclose when their realistic content is altered or synthetic”. That’s a huge win — we finally have a requirement for disclosure, and it came straight from the dark planet Alphabet.

YouTube also stressed that it opposes “content that is mass-produced or repetitive, which is content viewers often consider spam”. This is just a step away from blocking slop.

Update, 22 July: Ted posted a follow-up with a bit more evidence that the pushback is working:

In my latest article I criticized Spotify for allowing uploads of unauthorized AI tracks to the profiles of dead musicians.

But the company may finally be listening to criticisms of its AI policies. In this case, Spotify has now taken steps to stop the abuses, and a spokesperson reached out to me today with an update, stating a clear and proper policy on AI fraud.

I share it below (and have also updated my article):

    We’ve flagged the issue to SoundOn, the distributor of the content in question, and it has been removed. This violates Spotify’s deceptive content policies, which prohibit impersonation intended to mislead, such as replicating another creator’s name, image, or description, or posing as a person, brand, or organization in a deceptive manner. This is not allowed. We take action against licensors and distributors who fail to police for this kind of fraud and those who commit repeated or egregious violations can and have been permanently removed from Spotify.

They acted quickly, and I give them credit for that.

Update the second, 23 July: Ah, Spotify giveth and Spotify taketh away:

“Spotify is publishing new, AI-generated songs on the official pages of artists who died years ago without the permission of their estates or record labels,” reports 404 Media.

This scandal came to light because of an AI song attributed to Blaze Foley, who died in 1989. The bogus track is accompanied by an AI-generated image of a man who bears no resemblance to the singer.

What’s going on here? Is this just ignorance or carelessness at Spotify? Or does it represent something more sinister — another example of the company’s willingness to deceive users in the pursuit of profits?

These scams must stop. If Spotify doesn’t fix this mess immediately, courts should intervene.

But the dead musician scandal is just a start — because other bizarre things are happening at Spotify.

The whole situation is positively surreal.

July 7, 2025

Consumers don’t want AI in everything, but you’ll be forced to take your AI, peasants!

Filed under: Business, Media, Technology — Nicholas @ 05:00

Ted Gioia — like about 92% of consumers at last count — doesn’t want to have artificial intelligence “enhancing” the software he uses every day, but software companies don’t want him — or you — to have that choice:

A few months ago, I needed to send an email. But when I opened Microsoft Outlook, something had changed.

Microsoft asked me to use Copilot to write my email. Copilot is my AI companion. (That’s the cute word they use.)

Hey I don’t want a companion — especially not a fake AI buddy. I never asked for this.

And what about the people receiving my emails? They don’t want this either. They want to hear from me, not a bot.

How do I turn my companion off?

After some trial-and-error, I found a way to disable Copilot. Phew!

But a few days later, Microsoft surprised me again. It wouldn’t let me save an Excel file until I had agreed to new terms for my software account.

Guess what? AI is now bundled into all of my Microsoft software.

Even worse, Microsoft recently raised the price of its subscriptions by $3 per month to cover the additional AI benefits. I get to use my AI companion 60 times per month as part of the deal.

But I don’t want to use it. I want to kill it.

As you can see, I’ve never used this service. I still have all 60 credits unused. But I’m paying for it — because it’s now embedded into Microsoft Word, Excel, etc.

This is how AI gets introduced to the marketplace — by force-feeding the public. And they’re doing this for a very good reason.

Most people won’t pay for AI voluntarily — just 8% according to a recent survey. So they need to bundle it with some other essential product.

You never get to decide.

Before proceeding let me ask a simple question: Has there ever been a major innovation that helped society, but only 8% of the public would pay for it?

That’s never happened before in human history. Everybody wanted electricity in their homes. Everybody wanted a radio. Everybody wanted a phone. Everybody wanted a refrigerator. Everybody wanted a TV set. Everybody wanted the Internet.

They wanted it. They paid for it. They enjoyed it.

AI isn’t like that. People distrust it or even hate it — and more so with each passing month. So the purveyors must bundle it into current offerings, and force usage that way.

July 4, 2025

Everyone’s Mad About AI; Here’s What We Think

Filed under: History, Media, Technology — Nicholas @ 04:00

World War Two
Published 3 Jul 2025

Is AI rewriting history? Indy, Anna, Sebastian, Sparty, and Iryna tackle some tricky questions about the future of history, our channels, and the world in general. We discuss our recent use of AI in the Rise of Hitler series, animating portraits, and the use of large language models in research. What risks and opportunities are there for TimeGhost in the future?

June 29, 2025

A parent reviews “Alpha School”

Filed under: Education, Technology, USA — Nicholas @ 03:00

At Astral Codex Ten, an anonymous reviewer offers his views on a new “AI-powered” school that claims radically better results for children than traditional schooling methods:

Unbound Academy website screencap

In January 2025, the charter school application of “Unbound Academy”, a subsidiary of “2 Hour Learning, Inc”, lit up the education press: two hours of “AI-powered” academics, 2.6x learning velocity, and zero teachers. Sympathetic reporters repeated the slogans; union leaders reached for pitchforks; Reddit muttered “another rich-kid scam”. More sophisticated critics dismissed the pitch as “selective data from expensive private schools”.

But nowhere on the internet is there a detailed, non-partisan description of what the “2 hour learning” program actually is, let alone an objective third-party analysis to back up its claims.

[…]

Unfortunately, the public evidence base on whether this is “real” is thin in both directions. Alpha’s own material is glossy and elliptical; mainstream coverage either repeats Alpha’s talking points or attacks the premise that kids should even be allowed to learn faster than their peers. Until Raj Chetty installs himself in the hallway with a clipboard counting MAP percentiles, it is hard to get real information on what exactly Alpha is doing, whether it is actually working beyond selection effects, and whether there is any way it could scale where so many other education initiatives have failed.

I first heard about Alpha in May 2024, and in the absence of randomized-controlled clarity, I did what any moderately obsessive parent with three elementary-aged kids and an itch for data would do: I moved the family across the country to Austin for a year and ran the experiment myself (unfortunately, despite trying my best we never managed to have identical twins, so I stopped short of running a proper control group. My wife was less disappointed than I was).

Since last autumn I’ve collected the sort of on-the-ground detail that doesn’t surface in press releases and isn’t available anywhere online: long chats with founders, curriculum leads, “guides” (not teachers), Brazilian Zoom coaches, sceptical parents, ecstatic parents, and the kids who live inside the Alpha dashboard – including my own. I hope this seven-part review shares what the program actually is: more open-minded than the critics, but nothing that would ever get past an Alpha public relations gatekeeper:

  1. Starting Point: My Assumptions: how my views on elite private schools, tutoring and acceleration shaped the experiment (and this essay). WHAT is the existing education environment.
  2. A Short History of Alpha: from billionaire-funded microschool to charter aspirations. HOW Alpha came to be.
  3. How Alpha Works Part 1: Under the Hood: What does “2-hour learning” actually look like – what is the product and the science behind the product? HOW is Alpha getting kids to learn faster. (Spoiler: “two-hour AI learning” is closer to three hours, with a 5:1 student-to-teacher ratio and zero “generative AI”.)
  4. How Alpha Works Part 2: Incentives & Motivation: The secret sauce that doesn’t get mentioned in the PR copy, but I have discovered is at least as important as the fancy technology. The “other HOW” that no one is talking about.
  5. How Alpha is Measured: Effectiveness: The science says it should work, but how do you measure if it is working? How is the vaunted “2.6x” number calculated? WHAT data is Alpha using to make its claims and what does that data actually say?
  6. Why this time might be different: Most promising educational initiatives fail to have impact when expanded beyond their initial studies. Bryan Caplan might argue this is because most education is just signaling anyway (“The Case Against Education”). He also argues that most parental interventions have no impact (“Selfish Reasons to Have More Kids”) – he claims that how kids turn out is a combination of genetics and non-shared environment (randomness; nothing to do with parenting choices). How can we reconcile Caplan’s buttoned-up data with the idea that the “parenting choice” to educate your kids differently (like with Alpha) might result in different outcomes than would be expected from genetics alone? WHY could Alpha work?
  7. What Comes Next? The Scaling Problem: The Alpha founders have a vision of completely reinventing the way the world delivers education. But even if Alpha works, it is up against a history of education programs that were never able to scale. It is also going to face resistance for being “weird”. WHAT comes next?

After twelve months I’m persuaded that Alpha is doing something remarkable — but that almost everyone, including Alpha’s own copywriting team, is describing it wrong:

  • It isn’t genuine two-hour learning: most kids start school at 8:30am, start working on the “two-hour platform” sometime between 9:00 and 9:30am, and are occupied with academics until noon or 12:30pm. They also blend in “surges” from time to time to squeeze in more hours on the platform.
  • It isn’t AI in the way we have been thinking about it since the “Attention Is All You Need” paper. There is no “generative AI” powered by OpenAI, Gemini or Claude in the platform the kids use – it is closer to a “turbocharged spreadsheet checklist with a spaced-repetition algorithm”.
  • It definitely isn’t teacher-free: Teachers have been rebranded “guides”, and while their workload is different than a traditional school, they are very important – and both the quantity and quality are much higher than traditional schools.
  • The bundle matters: it’s not just the learning platform on its own. A big part of the product’s success is how the school has set up student incentives and the culture it has built to make everything work together.
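The reviewer’s “turbocharged spreadsheet checklist with a spaced-repetition algorithm” is a recognizable pattern. Here is a minimal Leitner-style sketch of how such a scheduler works — my own illustration with made-up intervals, not Alpha’s actual logic:

```python
# A minimal Leitner-box spaced-repetition scheduler (illustrative only).
# Items climb to longer review gaps when answered correctly and drop
# back to the shortest gap on a miss. Interval values are assumptions.

INTERVALS_DAYS = [1, 2, 4, 8, 16]  # review gap for each Leitner box

def next_review(box: int, correct: bool) -> tuple:
    """Move an item up a box on success, back to box 0 on failure.
    Returns (new_box, days_until_next_review)."""
    box = min(box + 1, len(INTERVALS_DAYS) - 1) if correct else 0
    return box, INTERVALS_DAYS[box]

# An item answered correctly three times in a row climbs to longer gaps:
box = 0
for _ in range(3):
    box, gap = next_review(box, correct=True)
print(box, gap)  # box 3, next review in 8 days
```

The “checklist” part is just tracking which items are due today; the algorithm’s whole job is spacing reviews so mastered material stops eating class time.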

… Yet the core claim survives: since they started in October, my children have been marching through and mastering material roughly three times faster than their age-matched peers (and their own speed prior to the program). I am NOT convinced that an Alpha-like program would work for every child, but I expect that, for roughly 30-70% of children, it could radically change how fast they learn, and dramatically change their lives and potential.

June 23, 2025

80% of top-grossing movies are prequels, sequels, spin-offs, remakes or reboots

Filed under: Business, Media, USA — Tags: , , , , — Nicholas @ 05:00

Ted Gioia on the death of creativity in the movie business, which also seems to be tracking almost exactly with the trend in music business profits:

I’m not shocked when I look in the mirror. Yeah, the Honest Broker isn’t getting any younger. But that’s the human condition.

Maybe I should start using a moisturizer. What do y’all think?

Nah. I’ll just let this aging thing play out.

On the other hand, I’m dumbfounded at everything in public life getting older — even older than me! Consider the current political landscape.

With each passing year, the US Congress looks more like the College of Cardinals (average age = 78) or the Rolling Stones (average age = also 78).

We’re gonna need a lot of moisturizer.

But Congress is young and spry compared to Hollywood.

Back in 2000, 80% of movie revenues came from original ideas. But this has now totally flip-flopped.

Today 80% of the movie business is built on old ideas — remakes, and spin-offs, and various other brand extensions. And we went from 80% new to 80% old in just a few years.

[…]

Look at music — and you see the same thing.

The share of old songs on streaming will soon reach 80%. It’s not quite there yet — the latest figures are 73%. But it was at 63% back in 2019. So it’s just a matter of time.

In 2000, streaming didn’t exist, so we looked to the Billboard chart to gauge a song’s success. And new music made up more than 80% of charted songs. So here — just like the movies — we’re flip-flopping from 80% new to 80% old over the course of a few years.

I don’t have good figures on publishing. But I’m pretty sure that AI-generated books and articles will soon represent 80% of the marketplace. Maybe we’ve already reached that threshold.

AI is deliberately designed to cut-and-paste, rehashing past work as its modus operandi. And it will do this to every field — replacing originality with repetition and regurgitation.

This is the new 80% rule.

Just imagine if traditional businesses operated this way.

  • “Welcome to our restaurant, 80% of the food is leftovers.”
  • “Welcome to our boutique, 80% of clothing is secondhand.”
  • “Welcome to our dating service, 80% of the choices are your ex-girlfriends (or ex-boyfriends).”

None of that sounds very appetizing.

June 14, 2025

QotD: University students or NPCs?

Filed under: Education, Quotations, USA — Tags: , , , , — Nicholas @ 01:00

When I first started teaching, for instance, I had to constantly remind myself that my charges were just teenagers. At most they were 21, 22 tops, which is basically the same thing. So much of the crap they pulled, then, was just typical teenager stuff. All they really needed to straighten themselves out was two good head knocks and a swift kick in the ass, which life would soon provide. I did exactly the same sort of dumb stuff back in my own undergrad days – maybe not as bad, but it was a difference of degree, not kind. They’d be ok in a few years.

A few semesters on, and that no longer applied. Sure, sure, they were still teenagers, and still pulled typical teenager capers … but a new set of behaviors crept in. I can’t describe them exactly, in detail, but the overall impression was: here’s someone doing a pretty good impersonation of a teenager. Most every kid goes through the faux-sophisticate stage, usually somewhere around age 12, and this kinda looked like that — young kids pretending to be a lot older — but it also looked a lot like the opposite end of the spectrum. Not quite “hello, fellow teens!” — not yet — but there was something like that going on, too. It was weird, but I figured it was mostly in my head — I’ve always been a grouchy old man, but now I was actually chronologically old enough to let my freak flag fly, so I assumed that’s what I was doing. They’re not changing, I am.

Fast forward a few more semesters, and nope, it’s definitely them. The kids at the tail end of my career still looked like bargain basement Rich Littles, doing impersonations of teenagers, but their act was terrible. Remember a few years back, when Facebook or Twitter or whoever tried to make an AI chat bot, and it immediately turned super racist? Not that these kids were racists — they were the furthest thing from that — but they all seemed to have a small stock of crowdsourced responses. And that’s ALL they had, so no matter what the situation, they’d shoehorn it into one of their canned affects.

By the very end, interacting with them was like playing one of those old text-adventure games from the very dawn of the personal computer, like Zork. They’d respond to commands, but only the right commands, in the exact word order. No deviations allowed, and of course their responses were equally programmed.

Severian, “Terminators”, Founding Questions, 2021-12-04.

June 11, 2025

The coming “Dissolution of the Universities”

At Postcards from Barsoom, John Carter provides a useful summary of the situation in England at the time of the Reformation which brought King Henry VIII to seize the wealth and property of the monasteries and other Christian establishments and why he was probably right to do so. Then he shows just how the modern western universities now find themselves in a remarkably similar position today:

The well-preserved ruins of Fountains Abbey, a Cistercian monastery near Ripon in North Yorkshire, founded in 1132 and dissolved by order of King Henry VIII in 1539. It is now owned by the National Trust as part of Studley Royal Park, a UNESCO World Heritage Site.
Photo by Admiralgary via Wikimedia Commons.

Our own university system is on the cusp of a similar collapse. This may seem outrageous, given the size, wealth, and massive cultural importance of universities, but at the dawn of the 16th century, the suggestion that monasteries would be dismantled across Europe within a generation would have struck everyone – even their opponents – as absurd.

The Class of 2026

The rot in academia is already proverbial. Scholarly careerism, declining curricular standards, the replication crisis, a demented ideological monoculture, administrative bloat … a steady accumulation of chronic cultural entropy has built up inside the organizational tissue of the academy, rendering universities less effective, less trustworthy, less affordable, and less useful than ever before in history. We see a parallel here with the moral laxity of 16th century monastic life, where religious vows were more theoretical than daily realities for many monks. Does anyone truly think that Harvard professors take Veritas at all seriously?

At the same time, universities have become engorged on tuition fees, research grants, and endowments, providing an easy and luxurious life for armies of well-paid and under-worked administrators, as well as for those professors who are able to play the social games necessary to climb the greased pole of academic promotion. Everyone knows that academia is in a bubble, and as with any bubble, correction is inevitable, and the longer correction is postponed by the thicket of interlocking entrenched interests that have dug themselves into the system, the uglier that correction was always going to be.

Just as the printing press rendered the monastic scriptoria entirely redundant, the Internet has placed universities under increasing threat of obsolescence. Libraries and academic publishing have already been rendered useless by preprint servers. It is no longer, strictly speaking, necessary to attend a university to learn things: the Internet has every tool an autodidact could desire, and insofar as it doesn’t – for instance, university presses and private journals charging outrageous fees for their books and papers – this is due to the academy jealously guarding its treasures with intellectual property law rather than any limitation of the technology. One can easily make the argument that academia has become an obstacle, rather than an organ, of information dissemination.

Still, universities have so far managed to hold on to their relevance due to their lock on credentialization: no one really cares how many How-To videos you watched at YouTube U, because – in theory – a university degree means that there was some level of human verification that you actually mastered the material you studied.

Large Language Models, however, are delivering the killing blow. Just as the printing press collapsed the cost of reproducing text, AI has collapsed the cost of producing texts. This is actually worse news for universities than Gutenberg was for the monasteries: movable type made scriptoria unnecessary, but LLMs haven’t only made universities obsolete, they’ve made it impossible for universities to fulfil their function.

Universities rely on undergraduate tuition fees for a major part of their income. Large research schools derive a significant fraction from research grants, and the more prestigious institutions often receive substantial private donations, but for the majority of schools it is the fee-paying undergraduate that pays the bills. This is already a problem, because enrolment is already declining, partly for demographic reasons (the birth rate is low), and partly because academia has been increasingly coded as women’s work, leading to young men staying away.

In theory, undergraduate students are paying for an “education”. They are gaining essential professional skills that will make them employable in well-remunerated white collar professions, or they are broadening their minds with a liberal arts education that provides them with the soft skills – critical thinking, the ability to compose and parse complex texts, a depth of historical and philosophical understanding of intricate social and political issues – that prepare them for careers in elite socioeconomic strata.

Everyone, however, has long since understood that this narrative of “education” is a barely-plausible polite fiction, like those little scraps of fabric exotic dancers wear on their nipples so everyone can pretend they aren’t showing their boobs. Students know it’s a lie, professors know it’s a lie, administrators know it’s a lie, and employers certainly know it’s a lie. What students are actually paying for is not an education, but a credential: they could not possibly care less about the “education” they’re receiving, so long as they receive a piece of paper at the end of their four years which they can take to an employer as evidence that they are not cognitively handicapped, and are therefore in possession of the minimal level of self-discipline and intelligence required to handle routine tasks at the entry-level end of the org chart. Thus the venerable proverb among students that “C’s and D’s get degrees”. It doesn’t matter if you did well: employers don’t generally care about your GPA. All that matters is that you do the minimal possible level of work to squeak through. As a general rule, you’re better off spending as little time as possible grinding away in the library and enjoying yourself to the maximum extent that you can, in order to develop social networks you can draw upon later.

Until recently, graduate school ensured that there was still some vestigial motivation for genuine intellectual engagement. Corporate America might not care about your transcript, but if you wanted an advanced degree, graduate schools most certainly did. Those students with greater academic ambitions than a Bachelor’s degree could therefore generally be relied on to actually apply themselves, thereby making the professoriate’s efforts delivering lectures, preparing homework assignments, and grading exams somewhat less of a pantomime. DEI, however, was already eating its way through even this. As graduate school admission became more about protected identities and less about intellectual mastery, and as graduate programs were themselves rendered easier in order to improve retention of underqualified diversity admits, it started to become less important to study hard even if one wanted to enter grad school.

To the point. In 2022, ChatGPT became available. Almost overnight undergraduate students began using it to write their essays for them. Its abuse has now become essentially ubiquitous, and not only for essays: ChatGPT can write code or solve mathematical problems just as easily as it can generate reams of plausible-sounding text. It might not yet do these things well, but it doesn’t have to: remember, C’s and D’s get degrees.

June 8, 2025

“If the New York Times notices the Buddha, the enlightened one has already left town”

Ted Gioia points out that momentous changes in society are not often noticed until they’ve taken place, and provides ten warning signs of such a change happening right now:

Would you believe me if I told you that the biggest news story of our century is happening right now — but is never mentioned in the press?

That sounds crazy, doesn’t it?

But that is often the case when a bold new worldview appears.

  • How long did it take before the Renaissance got mentioned in the town square?
  • When did newspapers start covering the Enlightenment?
  • Or the collapse in mercantilism?
  • Or the rise of globalism?
  • Or the birth of Christianity or Islam or some other earthshaking creed?

The biggest changes often happen long before they even get a name. By the time the scribes notice, the world is already reborn.

You can take this to the bank: If the New York Times notices the Buddha, the enlightened one has already left town.

For example, the word Renaissance got introduced two hundred years after the start of the Renaissance. The game was already over.

The same is true of most major cultural movements — they are truly the elephants in the room. And the elites at the epicenter of power are absolutely the last to notice.

Tiberius may run the entire Roman Empire, but he will never hear the Good News.

There’s a general rule here — the bigger the shift, the easier it is to miss.

We are living through a situation like that right now. We are experiencing a total shift — like the magnetic poles reversing. But it doesn’t even have a name — not yet.

So let’s give it one.

Let’s call it: The Collapse of the Knowledge System.

We could also define it as the emergence of a new knowledge system.

In this regard, it resembles other massive shifts in Western history — specifically the rebirth of humanistic thinking in the early Renaissance, or the rise of Romanticism in the nineteenth century.

In these volatile situations, the whole entrenched hierarchy of truth and authority gets totally reversed. The old experts and their systems are discredited, and completely new values take their place. The newcomers bring more than just a new attitude — they turn everything on its head.

That’s happening right now.

The knowledge structure that has dominated everything for our entire lifetime — and for our parents and grandparents — is collapsing. And it’s taking place everywhere, all at once.

If this were just an isolated situation — a problem in universities, or media, or politics — the current hierarchy could possibly survive. But that isn’t the case.

The crisis has spread into every sector of society which relies on clear knowledge and respected authority.

The ten warning signs

June 1, 2025

Ted Gioia on stopping AI cheating in academia

Filed under: Britain, Education, Media, Technology, USA — Tags: , , , — Nicholas @ 03:00

I’ve never been to Oxford, either as a student or as a tourist, but I believe Ted Gioia‘s description of his experiences there and how they can be used to disrupt the steady take-over of modern education by artificial intelligence cheats:

How would the Oxford system kill AI?

Once again, where do I begin?

There were so many oddities in Oxford education. Medical students complained to me that they were forced to draw every organ in the human body. I came here to be a doctor, not a bloody artist.

When they griped to their teachers, they were given the usual response: This is how we’ve always done things.

I knew a woman who wanted to study modern drama, but she was forced to decipher handwriting from 13th century manuscripts as preparatory training.

This is how we’ve always done things.

Americans who studied modern history were dismayed to learn that the modern world at Oxford begins in the year 284 A.D. But I guess that makes sense when you consider that Oxford was founded two centuries before the rise of the Aztec Empire.

My experience was less extreme. But every aspect of it was impervious to automation and digitization — let alone AI (which didn’t exist back then).

If implemented today, the Oxford system would totally eliminate AI cheating — in these five ways:

(1) EVERYTHING WAS HANDWRITTEN — WE DIDN’T EVEN HAVE TYPEWRITERS.

All my high school term papers were typewritten — that was a requirement. And when I attended Stanford, I brought a Smith-Corona electric typewriter with me from home. I used it constantly. Even in those pre-computer days, we relied on machines at every stage of an American education.

When I returned from Oxford to attend Stanford Business School, computers were beginning to intrude on education. I was even forced (unwillingly) to learn computer programming as a requirement for entering the MBA program.

But during my time at Oxford, I never owned a typewriter. I never touched a typewriter. I never even saw a typewriter. Every paper, every exam answer, every text whatsoever was handwritten — and for exams, they were handwritten under the supervision of proctors.

When I got my exam results from the college, the grades were handwritten in ancient Greek characters. (I’m not making this up.)

Even if ChatGPT had existed back then, you couldn’t have relied on it in these settings.

(2) MY PROFESSORS TAUGHT ME AT TUTORIALS IN THEIR OFFICES. THEY WOULD GRILL ME VERBALLY — AND I WAS EXPECTED TO HAVE IMMEDIATE RESPONSES TO ALL THEIR QUESTIONS.

The Oxford education is based on the tutorial system. It’s a conversation in the don’s office. This was often one-on-one. Sometimes two students would share a tutorial with a single tutor. But I never had a tutorial with more than three people in the room.

I was expected to show up with a handwritten essay. But I wouldn’t hand it in for grading — I read it aloud in front of the scholar. He would constantly interrupt me with questions, and I was expected to have smart answers.

When I finished reading my paper, he would have more follow-up questions. The whole process resembled a police interrogation from a BBC crime show.

There’s no way to cheat in this setting. You either back up what you’re saying on the spot — or you look like a fool. Hey, that’s just like real life.

(3) ACADEMIC RESULTS WERE BASED ENTIRELY ON HANDWRITTEN AND ORAL EXAMS. YOU EITHER PASSED OR FAILED — AND MANY FAILED.

The Oxford system was brutal. Your future depended on your performance at grueling multi-day examinations. Everything was handwritten or oral, all done in a totally contained and supervised environment.

Cheating was impossible. And behind-the-scenes influence peddling was prevented — my exams were judged anonymously by professors who weren’t my tutors. They didn’t know anything about me, except what was written in the exam booklets.

I did well and thus got exempted from the dreaded viva voce — the intense oral exam that (for many students) serves as a follow-up to the written exams.

That was a relief, because the viva voce is even less susceptible to bluffing or games-playing than tutorials. You are now defending yourself in front of a panel of esteemed scholars, and they love tightening the screws on poorly prepared students.

(4) THE SYSTEM WAS TOUGH AND UNFORGIVING — BUT THIS WAS INTENTIONAL. OTHERWISE THE CREDENTIAL GOT DEVALUED.

I was shocked at how many smart Oxford students left without earning a degree. This was a huge change from my experience in the US — where faculty and administration do a lot of hand-holding and forgiving in order to boost graduation rates.

There were no participation trophies at Oxford. You sank or swam — and it was easy to sink.

That’s why many well-known people — I won’t name names, but some are world famous — can tell you that they studied at Oxford, but they can’t claim that they got a degree at Oxford. Even elite Rhodes Scholars fail the exams, or fear them so much that they leave without taking them.

I feel sorry for my friends who didn’t fare well in this system. But in a world of rampant AI cheating, this kind of bullet-proof credentialing will return by necessity — or the credentials will get devalued.

(5) EVEN THE INFORMAL WAYS OF BUILDING YOUR REPUTATION WERE DONE FACE-TO-FACE — WITH NO TECHNOLOGY INVOLVED

Exams weren’t the only way to build a reputation at Oxford. I also saw people rise in stature because of their conversational or debating or politicking or interpersonal skills.

I’ve never been anywhere in my life where so much depended on your ability at informal speaking. You could actually gain renown by your witty and intelligent dinner conversation. Even better, if you had solid public speaking skills you could flourish at the Oxford Union — and maybe end up as Prime Minister some day.

All of this was done face-to-face. Even if a time traveler had given you a smartphone with a chatbot, you would never have been able to use it. You had to think on your feet, and deliver the goods with lots of people watching.

Maybe that’s not for everybody. But the people who survived and flourished in this environment were impressive individuals who, even at a young age, were already battle tested.
