Quotulatiousness

July 21, 2025

AI slop seems to have finally triggered significant pushback

Filed under: Business, Media, Technology — Nicholas @ 03:00

Ted Gioia says that he’s seeing strong indicators that the AI slop superabundance has helped create a widespread rejection of it and all its works:

2025 has been the year of garbage culture.

Creators watch in horror as dismal AI slop threatens their livelihoods — and the integrity of their fields. It’s everywhere, spreading faster than a pharaoh’s plague.

In recent months, we’ve been bombarded with millions of lousy AI songs, idiotic AI videos, and clumsy AI images. Error-filled AI texts are everywhere — from your workplace memos to the books sold on Amazon.com.

Even my lowly vocation, music journalism, gets turned into a joke when it’s accompanied by slop images of fake events.

No, these things did not really happen.

But something has changed in the last few days.

The garbage hasn’t disappeared. It’s still everywhere, stinking up the joint.

But people are disgusted, and finally pushing back. And they are doing so with such fervor that even the biggest AI companies are now getting nervous and pulling back.

Just consider this surprising headline:

This was stunning news. YouTube is part of the biggest AI slop promoter of them all — namely the Google/Alphabet empire. How can they possibly abandon AI garbage? Their bosses are the biggest slopmasters of them all.

After this shocking news reverberated through the creative economy, YouTube started to backtrack. They said that they would not punish every AI video — some can still be monetized.

But even the revised guidelines are still a major blow to AI slop purveyors. YouTube made clear that “creators are required to disclose when their realistic content is altered or synthetic”. That’s a huge win — we finally have a requirement for disclosure, and it came straight from the dark planet Alphabet.

YouTube also stressed that it opposes “content that is mass-produced or repetitive, which is content viewers often consider spam”. This is just a step away from blocking slop.

Update, 22 July: Ted posted a follow-up with a bit more evidence that the pushback is working:

In my latest article I criticized Spotify for allowing uploads of unauthorized AI tracks to the profiles of dead musicians.

But the company may finally be listening to criticisms of its AI policies. In this case, Spotify has now taken steps to stop the abuses, and a spokesperson reached out to me today with an update stating a clear and proper policy on AI fraud.

I share it below (and have also updated my article):

    We’ve flagged the issue to SoundOn, the distributor of the content in question, and it has been removed. This violates Spotify’s deceptive content policies, which prohibit impersonation intended to mislead, such as replicating another creator’s name, image, or description, or posing as a person, brand, or organization in a deceptive manner. This is not allowed. We take action against licensors and distributors who fail to police for this kind of fraud and those who commit repeated or egregious violations can and have been permanently removed from Spotify.

They acted quickly, and I give them credit for that.

Update the second, 23 July: Ah, Spotify giveth and Spotify taketh away:

“Spotify is publishing new, AI-generated songs on the official pages of artists who died years ago without the permission of their estates or record labels,” reports 404 Media.

This scandal came to light because of an AI song attributed to Blaze Foley, who died in 1989. The bogus track is accompanied by an AI-generated image of a man who bears no resemblance to the singer.

What’s going on here? Is this just ignorance or carelessness at Spotify? Or does it represent something more sinister — another example of the company’s willingness to deceive users in the pursuit of profits?

These scams must stop. If Spotify doesn’t fix this mess immediately, courts should intervene.

But the dead musician scandal is just a start — because other bizarre things are happening at Spotify.

The whole situation is positively surreal.

July 7, 2025

Consumers don’t want AI in everything, but you’ll be forced to take your AI, peasants!

Filed under: Business, Media, Technology — Nicholas @ 05:00

Ted Gioia — like about 92% of consumers at last count — doesn’t want to have artificial intelligence “enhancing” the software he uses every day, but software companies don’t want him — or you — to have that choice:

A few months ago, I needed to send an email. But when I opened Microsoft Outlook, something had changed.

Microsoft asked me to use Copilot to write my email. Copilot is my AI companion. (That’s the cute word they use.)

Hey, I don’t want a companion — especially not a fake AI buddy. I never asked for this.

And what about the people receiving my emails? They don’t want this either. They want to hear from me, not a bot.

How do I turn my companion off?

After some trial-and-error, I found a way to disable Copilot. Phew!

But a few days later, Microsoft surprised me again. It wouldn’t let me save an Excel file until I had agreed to new terms for my software account.

Guess what? AI is now bundled into all of my Microsoft software.

Even worse, Microsoft recently raised the price of its subscriptions by $3 per month to cover the additional AI benefits. I get to use my AI companion 60 times per month as part of the deal.

But I don’t want to use it. I want to kill it.

As you can see, I’ve never used this service. I still have all 60 credits unused. But I’m paying for it — because it’s now embedded into Microsoft Word, Excel, etc.

This is how AI gets introduced to the marketplace — by force-feeding the public. And they’re doing this for a very good reason.

Most people won’t pay for AI voluntarily — just 8% according to a recent survey. So they need to bundle it with some other essential product.

You never get to decide.

Before proceeding let me ask a simple question: Has there ever been a major innovation that helped society, but that only 8% of the public would pay for?

That’s never happened before in human history. Everybody wanted electricity in their homes. Everybody wanted a radio. Everybody wanted a phone. Everybody wanted a refrigerator. Everybody wanted a TV set. Everybody wanted the Internet.

They wanted it. They paid for it. They enjoyed it.

AI isn’t like that. People distrust it or even hate it — and more so with each passing month. So the purveyors must bundle it into current offerings, and force usage that way.

July 4, 2025

Everyone’s Mad About AI; Here’s What We Think

Filed under: History, Media, Technology — Nicholas @ 04:00

World War Two
Published 3 Jul 2025

Is AI rewriting history? Indy, Anna, Sebastian, Sparty, and Iryna tackle some tricky questions about the future of history, our channels, and the world in general. We discuss our recent use of AI in the Rise of Hitler series, animating portraits, and the use of large language models in research. What risks and opportunities are there for TimeGhost in the future?

June 29, 2025

A parent reviews “Alpha School”

Filed under: Education, Technology, USA — Nicholas @ 03:00

At Astral Codex Ten, an anonymous reviewer offers his views on a new “AI-powered” school that claims radically better results for children than traditional schooling methods:

Unbound Academy website screencap

In January 2025, the charter school application of “Unbound Academy”, a subsidiary of “2 Hour Learning, Inc”, lit up the education press: two hours of “AI-powered” academics, 2.6x learning velocity, and zero teachers. Sympathetic reporters repeated the slogans; union leaders reached for pitchforks; Reddit muttered “another rich-kid scam”. More sophisticated critics dismissed the pitch as “selective data from expensive private schools”.

But there is nowhere on the internet that provides a detailed, non-partisan description of what the “2 hour learning” program actually is, let alone an objective third-party analysis to back up its claims.

[…]

Unfortunately, the public evidence base on whether this is “real” is thin in both directions. Alpha’s own material is glossy and elliptical; mainstream coverage either repeats Alpha’s talking points, or attacks the premise that kids should even be allowed to learn faster than their peers. Until Raj Chetty installs himself in the hallway with a clipboard counting MAP percentiles, it is hard to get real information on what exactly Alpha is doing, whether it is actually working beyond selection effects, and whether there is any way it could scale where all the other education initiatives seem to fail.

I first heard about Alpha in May 2024, and in the absence of randomized-controlled clarity, I did what any moderately obsessive parent with three elementary-aged kids and an itch for data would do: I moved the family across the country to Austin for a year and ran the experiment myself (unfortunately, despite trying my best we never managed to have identical twins, so I stopped short of running a proper control group. My wife was less disappointed than I was).

Since last autumn I’ve collected the sort of on-the-ground detail that doesn’t surface in press releases and isn’t available anywhere online: long chats with founders, curriculum leads, “guides” (not teachers), Brazilian Zoom coaches, sceptical parents, ecstatic parents, and the kids who live inside the Alpha dashboard – including my own. I hope this seven-part review shares what the program actually is: more open-minded than the critics, but something that would never get past an Alpha public relations gatekeeper:

  1. Starting Point: My Assumptions: how my views on elite private schools, tutoring and acceleration shaped the experiment (and this essay). WHAT is the existing education environment.
  2. A Short History of Alpha: from billionaire-funded microschool to charter aspirations. HOW Alpha came to be.
  3. How Alpha Works Part 1: Under the Hood: What does “2-hour learning” actually look like – what is the product and the science behind the product? HOW is Alpha getting kids to learn faster (Spoiler: “two-hour AI learning” is closer to three hours, with a 5:1 student:teacher ratio and zero “generative AI”).
  4. How Alpha Works Part 2: Incentives & Motivation: The secret sauce that doesn’t get mentioned in the PR copy, but I have discovered is at least as important as the fancy technology. The “other HOW” that no one is talking about.
  5. How Alpha is Measured: Effectiveness: The science says it should work, but how do you measure if it is working? How is the vaunted “2.6x” number calculated? WHAT data is Alpha using to make its claims and what does that data actually say?
  6. Why this time might be different: Most promising educational initiatives fail to have impact when expanded beyond their initial studies. Bryan Caplan might argue this is because most education is just signaling anyway (“The Case Against Education”). He also argues that most parental interventions have no impact (“Selfish Reasons to Have More Kids”); he claims that how kids turn out is a combination of genetics and non-shared environment (randomness; nothing to do with parenting choices). How can we reconcile Caplan’s buttoned-up data with the idea that the “parenting choice” to educate your kids differently (like with Alpha) might result in different outcomes than would be expected from genetics alone? WHY could Alpha work?
  7. What Comes Next? The Scaling Problem: The Alpha founders have a vision of completely re-inventing the way the world serves education. But even if Alpha works, it is up against a history of education programs that were never able to scale. It is also going to face resistance for being “weird”. WHAT comes next?

After twelve months I’m persuaded that Alpha is doing something remarkable — but that almost everyone, including Alpha’s own copywriting team, is describing it wrong:

  • It isn’t genuine two-hour learning: most kids start school at 8:30am, start working on the “two-hour platform” sometime between 9:00 and 9:30am, and are occupied with academics until noon or 12:30pm. They also blend in “surges” from time to time to squeeze in more hours on the platform.
  • It isn’t AI in the way we have been thinking about it since the “Attention is all you need” paper. There is no “generative AI” powered by OpenAI, Gemini or Claude in the platform the kids use – it is closer to “turbocharged spreadsheet checklist with a spaced-repetition algorithm”
  • It definitely isn’t teacher-free: Teachers have been rebranded “guides”, and while their workload is different than a traditional school, they are very important – and both the quantity and quality are much higher than traditional schools.
  • The bundle matters: it’s not just the learning platform on its own. A big part of the product’s success is how the school has set up student incentives and the culture they have built to make everything work together

… Yet the core claim survives: Since they started in October my children have been marching through and mastering material roughly three times faster than their age-matched peers (and their own speed prior to the program). I am NOT convinced that an Alpha-like program would work for every child, but I expect, for roughly 30-70% of children it could radically change how fast they learn, and dramatically change their lives and potential.
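The reviewer’s description of the platform as a “turbocharged spreadsheet checklist with a spaced-repetition algorithm” refers to a well-known family of schedulers. As a rough illustration only — this is not Alpha’s actual system, just a minimal SM-2-style sketch of the kind of interval scheduling such platforms typically use — the core idea fits in a few lines: correct answers stretch the gap before the next review geometrically, while misses reset it.

```python
from dataclasses import dataclass

@dataclass
class Card:
    interval: int = 1   # days until the next review
    ease: float = 2.5   # growth factor applied on each success

def review(card: Card, correct: bool) -> Card:
    """Simplified SM-2-style update: successes multiply the review
    interval by the ease factor; a miss resets the interval to one day
    and makes the card 'harder' (lower ease) going forward."""
    if correct:
        card.interval = max(2, round(card.interval * card.ease))
        card.ease = min(card.ease + 0.1, 3.0)
    else:
        card.interval = 1
        card.ease = max(card.ease - 0.2, 1.3)
    return card
```

The point of the design is that review effort concentrates on exactly the material a student is closest to forgetting, which is why even a “spreadsheet checklist” built on it can look dramatically more efficient than uniform drill.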

June 23, 2025

80% of top-grossing movies are prequels, sequels, spin-offs, remakes or reboots

Filed under: Business, Media, USA — Nicholas @ 05:00

Ted Gioia on the death of creativity in the movie business, which also seems to be tracking almost exactly with the trend in music business profits:

I’m not shocked when I look in the mirror. Yeah, the Honest Broker isn’t getting any younger. But that’s the human condition.

Maybe I should start using a moisturizer. What do y’all think?

Nah. I’ll just let this aging thing play out.

On the other hand, I’m dumbfounded at everything in public life getting older — even older than me! Consider the current political landscape.

With each passing year, the US Congress looks more like the College of Cardinals (average age = 78) or the Rolling Stones (average age = also 78).

We’re gonna need a lot of moisturizer.

But Congress is young and spry compared to Hollywood.

Back in 2000, 80% of movie revenues came from original ideas. But this has now totally flip-flopped.

Today 80% of the movie business is built on old ideas — remakes, and spin-offs, and various other brand extensions. And we went from 80% new to 80% old in just a few years.

[…]

Look at music — and you see the same thing.

The share of old songs on streaming will soon reach 80%. It’s not quite there yet — the latest figures are 73%. But it was at 63% back in 2019. So it’s just a matter of time.

In 2000, streaming didn’t exist, so we looked to the Billboard chart to gauge a song’s success. And new music made up more than 80% of charted songs. So here — just like the movies — we’re flip-flopping from 80% new to 80% old over the course of a few years.

I don’t have good figures on publishing. But I’m pretty sure that AI-generated books and articles will soon represent 80% of the marketplace. Maybe we’ve already reached that threshold.

AI is deliberately designed to cut-and-paste, rehashing past work as its modus operandi. And it will do this to every field — replacing originality with repetition and regurgitation.

This is the new 80% rule.

Just imagine if traditional businesses operated this way.

  • “Welcome to our restaurant, 80% of the food is leftovers.”
  • “Welcome to our boutique, 80% of clothing is secondhand.”
  • “Welcome to our dating service, 80% of the choices are your ex-girlfriends (or ex-boyfriends).”

None of that sounds very appetizing.

June 14, 2025

QotD: University students or NPCs?

Filed under: Education, Quotations, USA — Nicholas @ 01:00

When I first started teaching, for instance, I had to constantly remind myself that my charges were just teenagers. At most they were 21, 22 tops, which is basically the same thing. So much of the crap they pulled, then, was just typical teenager stuff. All they really needed to straighten themselves out was two good head knocks and a swift kick in the ass, which life would soon provide. I did exactly the same sort of dumb stuff back in my own undergrad days – maybe not as bad, but it was a difference of degree, not kind. They’d be ok in a few years.

A few semesters on, and that no longer applied. Sure, sure, they were still teenagers, and still pulled typical teenager capers … but a new set of behaviors crept in. I can’t describe them exactly, in detail, but the overall impression was: here’s someone doing a pretty good impersonation of a teenager. Most every kid goes through the faux-sophisticate stage, usually somewhere around age 12, and this kinda looked like that — young kids pretending to be a lot older — but it also looked a lot like the opposite end of the spectrum. Not quite “hello, fellow teens!” — not yet — but there was something like that going on, too. It was weird, but I figured it was mostly in my head — I’ve always been a grouchy old man, but now I was actually chronologically old enough to let my freak flag fly, so I assumed that’s what I was doing. They’re not changing, I am.

Fast forward a few more semesters, and nope, it’s definitely them. The kids at the tail end of my career still looked like bargain basement Rich Littles, doing impersonations of teenagers, but their act was terrible. Remember a few years back, when Facebook or Twitter or whoever tried to make an AI chat bot, and it immediately turned super racist? Not that these kids were racists — they were the furthest thing from that — but they all seemed to have a small stock of crowdsourced responses. And that’s ALL they had, so no matter what the situation, they’d shoehorn it into one of their canned affects, because that’s all they had.

By the very end, interacting with them was like playing one of those old text-adventure games from the very dawn of the personal computer, like Zork. They’d respond to commands, but only the right commands, in the exact word order. No deviations allowed, and of course their responses were equally programmed.

Severian, “Terminators”, Founding Questions, 2021-12-04.

June 11, 2025

The coming “Dissolution of the Universities”

At Postcards from Barsoom, John Carter provides a useful summary of the situation in England at the time of the Reformation which brought King Henry VIII to seize the wealth and property of the monasteries and other Christian establishments and why he was probably right to do so. Then he shows just how the modern western universities now find themselves in a remarkably similar position today:

The well-preserved ruins of Fountains Abbey, a Cistercian monastery near Ripon in North Yorkshire. Founded in 1132 and dissolved by order of King Henry VIII in 1539. It is now owned by the National Trust as part of the Studley Royal Park, a UNESCO World Heritage Site.
Photo by Admiralgary via Wikimedia Commons.

Our own university system is on the cusp of a similar collapse. This may seem outrageous, given the size, wealth, and massive cultural importance of universities, but at the dawn of the 16th century, the suggestion that monasteries would be dismantled across Europe within a generation would have struck everyone – even their opponents – as absurd.

The Class of 2026

The rot in academia is already proverbial. Scholarly careerism, declining curricular standards, the replication crisis, a demented ideological monoculture, administrative bloat … a steady accumulation of chronic cultural entropy has built up inside the organizational tissue of the academy, rendering universities less effective, less trustworthy, less affordable, and less useful than ever before in history. We see a parallel here with the moral laxity of 16th century monastic life, where religious vows were more theoretical than daily realities for many monks. Does anyone truly think that Harvard professors take Veritas at all seriously?

At the same time, universities have become engorged on tuition fees, research grants, and endowments, providing an easy and luxurious life for armies of well-paid and under-worked administrators, as well as for those professors who are able to play the social games necessary to climb the greased pole of academic promotion. Everyone knows that academia is in a bubble, and as with any bubble, correction is inevitable; the longer correction is postponed by the thicket of interlocking entrenched interests that have dug themselves into the system, the uglier that correction will be.

Just as the printing press rendered the monastic scriptoria entirely redundant, the Internet has placed universities under increasing threat of obsolescence. Libraries and academic publishing have already been rendered useless by preprint servers. It is no longer, strictly speaking, necessary to attend a university to learn things: the Internet has every tool an autodidact could desire, and insofar as it doesn’t – for instance, university presses and private journals charging outrageous fees for their books and papers – this is due to the academy jealously guarding its treasures with intellectual property law rather than any limitation of the technology. One can easily make the argument that academia has become an obstacle to, rather than an organ of, information dissemination.

Still, universities have so far managed to hold on to their relevance due to their lock on credentialization: no one really cares how many How-To videos you watched at YouTube U, because – in theory – a university degree means that there was some level of human verification that you actually mastered the material you studied.

Large Language Models, however, are delivering the killing blow. Just as the printing press collapsed the cost of reproducing text, AI has collapsed the cost of producing texts. This is actually worse news for universities than Gutenberg was for the monasteries: movable type made scriptoria unnecessary, but LLMs haven’t only made universities obsolete, they’ve made it impossible for universities to fulfil their function.

Universities rely on undergraduate tuition fees for a major part of their income. Large research schools derive a significant fraction from research grants, and the more prestigious institutions often receive substantial private donations, but for the majority of schools it is the fee-paying undergraduate that pays the bills. This is already a problem, because enrolment is already declining, partly for demographic reasons (the birth rate is low), and partly because academia has been increasingly coded as women’s work, leading to young men staying away.

In theory, undergraduate students are paying for an “education”. They are gaining essential professional skills that will make them employable in well-remunerated white collar professions, or they are broadening their minds with a liberal arts education that provides them with the soft skills – critical thinking, the ability to compose and parse complex texts, a depth of historical and philosophical understanding of intricate social and political issues – that prepare them for careers in elite socioeconomic strata.

Everyone, however, has long since understood that this narrative of “education” is a barely-plausible polite fiction, like those little scraps of fabric exotic dancers wear on their nipples so everyone can pretend they aren’t showing their boobs. Students know it’s a lie, professors know it’s a lie, administrators know it’s a lie, and employers certainly know it’s a lie. What students are actually paying for is not an education, but a credential: they could not possibly care less about the “education” they’re receiving, so long as they receive a piece of paper at the end of their four years which they can take to an employer as evidence that they are not cognitively handicapped, and are therefore in possession of the minimal level of self-discipline and intelligence required to handle routine tasks at the entry-level end of the org chart. Thus the venerable proverb among students that “C’s and D’s get degrees”. It doesn’t matter if you did well: employers don’t generally care about your GPA. All that matters is that you do the minimal possible level of work to squeak through. As a general rule, your time as a student is best spent doing as little grinding in the library as possible while enjoying yourself to the maximum extent that you can in order to develop social networks you can draw upon later.

Until recently, graduate school ensured that there was still some vestigial motivation for genuine intellectual engagement. Corporate America might not care about your transcript, but if you wanted an advanced degree, graduate schools most certainly did. Those students with greater academic ambitions than a Bachelor’s degree could therefore generally be relied on to actually apply themselves, thereby making the professoriate’s efforts delivering lectures, preparing homework assignments, and grading exams somewhat less of a pantomime. DEI, however, was already eating its way through even this. As graduate school admission became more about protected identities and less about intellectual mastery, and as graduate programs were themselves rendered easier in order to improve retention of underqualified diversity admits, it started to become less important to study hard even if one wanted to enter grad school.

To the point. In 2022, ChatGPT became available. Almost overnight undergraduate students began using it to write their essays for them. Its abuse has now become essentially ubiquitous, and not only for essays: ChatGPT can write code or solve mathematical problems just as easily as it can generate reams of plausible-sounding text. It might not yet do these things well, but it doesn’t have to: remember, C’s and D’s get degrees.

June 8, 2025

“If the New York Times notices the Buddha, the enlightened one has already left town”

Ted Gioia points out that momentous changes in society are not often noticed until they’ve taken place, and provides ten warning signs of such a change happening right now:

Would you believe me if I told you that the biggest news story of our century is happening right now — but is never mentioned in the press?

That sounds crazy, doesn’t it?

But that is often the case when a bold new worldview appears.

  • How long did it take before the Renaissance got mentioned in the town square?
  • When did newspapers start covering the Enlightenment?
  • Or the collapse in mercantilism?
  • Or the rise of globalism?
  • Or the birth of Christianity or Islam or some other earthshaking creed?

The biggest changes often happen long before they even get a name. By the time the scribes notice, the world is already reborn.

You can take this to the bank: If the New York Times notices the Buddha, the enlightened one has already left town.

For example, the word Renaissance got introduced two hundred years after the start of the Renaissance. The game was already over.

The same is true of most major cultural movements — they are truly the elephants in the room. And the elites at the epicenter of power are absolutely the last to notice.

Tiberius may run the entire Roman Empire, but he will never hear the Good News.

There’s a general rule here — the bigger the shift, the easier it is to miss.

We are living through a situation like that right now. We are experiencing a total shift — like the magnetic poles reversing. But it doesn’t even have a name — not yet.

So let’s give it one.

Let’s call it: The Collapse of the Knowledge System.

We could also define it as the emergence of a new knowledge system.

In this regard, it resembles other massive shifts in Western history — specifically the rebirth of humanistic thinking in the early Renaissance, or the rise of Romanticism in the nineteenth century.

In these volatile situations, the whole entrenched hierarchy of truth and authority gets totally reversed. The old experts and their systems are discredited, and completely new values take their place. The newcomers bring more than just a new attitude — they turn everything on its head.

That’s happening right now.

The knowledge structure that has dominated everything for our entire lifetime — and for our parents and grandparents — is collapsing. And it’s taking place everywhere, all at once.

If this were just an isolated situation — a problem in universities, or media, or politics — the current hierarchy could possibly survive. But that isn’t the case.

The crisis has spread into every sector of society which relies on clear knowledge and respected authority.

The ten warning signs

June 1, 2025

Ted Gioia on stopping AI cheating in academia

Filed under: Britain, Education, Media, Technology, USA — Nicholas @ 03:00

I’ve never been to Oxford, either as a student or as a tourist, but I believe Ted Gioia’s description of his experiences there and how they can be used to disrupt the steady take-over of modern education by artificial intelligence cheats:

How would the Oxford system kill AI?

Once again, where do I begin?

There were so many oddities in Oxford education. Medical students complained to me that they were forced to draw every organ in the human body. “I came here to be a doctor, not a bloody artist.”

When they griped to their teachers, they were given the usual response: This is how we’ve always done things.

I knew a woman who wanted to study modern drama, but she was forced to decipher handwriting from 13th century manuscripts as preparatory training.

This is how we’ve always done things.

Americans who studied modern history were dismayed to learn that the modern world at Oxford begins in the year 284 A.D. But I guess that makes sense when you consider that Oxford was founded two centuries before the rise of the Aztec Empire.

My experience was less extreme. But every aspect of it was impervious to automation and digitization — let alone AI (which didn’t exist back then).

If implemented today, the Oxford system would totally eliminate AI cheating — in these five ways:

(1) EVERYTHING WAS HANDWRITTEN — WE DIDN’T EVEN HAVE TYPEWRITERS.

All my high school term papers were typewritten — that was a requirement. And when I attended Stanford, I brought a Smith-Corona electric typewriter with me from home. I used it constantly. Even in those pre-computer days, we relied on machines at every stage of an American education.

When I returned from Oxford to attend Stanford Business School, computers were beginning to intrude on education. I was even forced (unwillingly) to learn computer programming as a requirement for entering the MBA program.

But during my time at Oxford, I never owned a typewriter. I never touched a typewriter. I never even saw a typewriter. Every paper, every exam answer, every text whatsoever was handwritten — and for exams, they were handwritten under the supervision of proctors.

When I got my exam results from the college, the grades were handwritten in ancient Greek characters. (I’m not making this up.)

Even if ChatGPT had existed back then, you couldn’t have relied on it in these settings.

(2) MY PROFESSORS TAUGHT ME AT TUTORIALS IN THEIR OFFICES. THEY WOULD GRILL ME VERBALLY — AND I WAS EXPECTED TO HAVE IMMEDIATE RESPONSES TO ALL THEIR QUESTIONS.

The Oxford education is based on the tutorial system. It’s a conversation in the don’s office. This was often one-on-one. Sometimes two students would share a tutorial with a single tutor. But I never had a tutorial with more than three people in the room.

I was expected to show up with a handwritten essay. But I wouldn’t hand it in for grading — I read it aloud in front of the scholar. He would constantly interrupt me with questions, and I was expected to have smart answers.

When I finished reading my paper, he would have more follow-up questions. The whole process resembled a police interrogation from a BBC crime show.

There’s no way to cheat in this setting. You either back up what you’re saying on the spot — or you look like a fool. Hey, that’s just like real life.

(3) ACADEMIC RESULTS WERE BASED ENTIRELY ON HANDWRITTEN AND ORAL EXAMS. YOU EITHER PASSED OR FAILED — AND MANY FAILED.

The Oxford system was brutal. Your future depended on your performance at grueling multi-day examinations. Everything was handwritten or oral, all done in a totally contained and supervised environment.

Cheating was impossible. And behind-the-scenes influence peddling was prevented — my exams were judged anonymously by professors who weren’t my tutors. They didn’t know anything about me, except what was written in the exam booklets.

I did well and thus got exempted from the dreaded viva voce — the intense oral exam that (for many students) serves as a follow-up to the written exams.

That was a relief, because the viva voce is even less susceptible to bluffing or games-playing than tutorials. You are now defending yourself in front of a panel of esteemed scholars, and they love tightening the screws on poorly prepared students.

(4) THE SYSTEM WAS TOUGH AND UNFORGIVING — BUT THIS WAS INTENTIONAL. OTHERWISE THE CREDENTIAL GOT DEVALUED.

I was shocked at how many smart Oxford students left without earning a degree. This was a huge change from my experience in the US — where faculty and administration do a lot of hand-holding and forgiving in order to boost graduation rates.

There were no participation trophies at Oxford. You sank or swam — and it was easy to sink.

That’s why many well-known people — I won’t name names, but some are world famous — can tell you that they studied at Oxford, but they can’t claim that they got a degree at Oxford. Even elite Rhodes Scholars fail the exams, or fear them so much that they leave without taking them.

I feel sorry for my friends who didn’t fare well in this system. But in a world of rampant AI cheating, this kind of bullet-proof credentialing will return by necessity — or the credentials will get devalued.

(5) EVEN THE INFORMAL WAYS OF BUILDING YOUR REPUTATION WERE DONE FACE-TO-FACE — WITH NO TECHNOLOGY INVOLVED

Exams weren’t the only way to build a reputation at Oxford. I also saw people rise in stature because of their conversational or debating or politicking or interpersonal skills.

I’ve never been anywhere in my life where so much depended on your ability at informal speaking. You could actually gain renown by your witty and intelligent dinner conversation. Even better, if you had solid public speaking skills you could flourish at the Oxford Union — and maybe end up as Prime Minister some day.

All of this was done face-to-face. Even if a time traveler had given you a smartphone with a chatbot, you would never have been able to use it. You had to think on your feet, and deliver the goods with lots of people watching.

Maybe that’s not for everybody. But the people who survived and flourished in this environment were impressive individuals who, even at a young age, were already battle tested.

May 26, 2025

QotD: A Slop manifesto

Filed under: Media, Quotations, Technology — Tags: , , , — Nicholas @ 01:00

Long live Slop!

Slop is a creative style that emerged around 2023 with the rise of generative AI. Slop art is flat, awkward, stale, listless, and often ridiculous. Slop works are celebrated for their stupidity and clumsiness — which are often amplified by strange juxtapositions of culture memes.

These Slop works are widely mocked by the audience — and even by the people who create and curate them. Yet they are the results of hundreds of billions of dollars in tech investment.

Slop is all about wastefulness!

Let’s put this in context: In the current moment, there’s no money for serious artists — in filmmaking, fiction, painting, music, whatever. But there’s an endless supply of dollars to create Slop technology.

In fact, no artistic movement in human history has soaked up more cash than Slop.

This seems like a paradox. Why is so much money devoted to churning out crap?

Ah, that’s part of the appeal of Slop. The audience’s gleeful mockery is actually enhanced by the fact that a huge fortune has been wasted in creating pointless and bizarre works.

In other words, this mismatch between means and ends is a key part of our aesthetic movement. Hence a certain degree of cynicism is embedded in both the production and consumption of Slop.

So it’s stupid. It’s wasteful. It’s tasteless. It’s cynical.

And that’s all part of the plan.

Long live Slop!

Ted Gioia, “The New Aesthetics of Slop”, The Honest Broker, 2025-02-25.

May 21, 2025

AI hallucinations capture the Chicago Sun-Times summer reading list

Filed under: Books, Media, Technology — Tags: , , , , — Nicholas @ 05:00

Good luck finding some of the highly recommended books on the Chicago Sun-Times list of summer reading … they don’t actually exist (yet):

Critics aren’t perfect.

Sometimes they get facts wrong. Sometimes their judgment is faulty. Sometimes they dangle their modifiers or split their infinitives with everybody watching.

I’ve been there. And it’s awkward.

But I’ve never seen anything as embarrassing as the “Summer Reading List for 2025” in the Chicago Sun-Times.

It gave glowing reviews to books that don’t exist. And I bet you can guess why.

Yes, the newspaper relied on AI to write the article.

The article starts with a recommendation for Tidewater Dreams by Isabel Allende. This is Allende’s “first climate fiction novel” where “magical realism meets environmental activism”.

It’s a shame that Allende never wrote this book. Nor did anyone else — the book simply doesn’t exist.

(I’ll predict, however, that an AI-generated book with this title will show up on Amazon within a few days. When you live in a world of AI hallucinations, this is how the business model plays out.)

The next book on the Sun-Times list is The Last Algorithm by Andy Weir. This novel is also non-existent. But the storyline — about rogue AI that gains consciousness — makes me think that the bots are now mocking us.

It doesn’t get better. The first 10 books on the summer reading list are entirely hallucinated.

As the story of the fake reviews spread on social media, the Sun-Times went into damage-control mode. It issued a public statement denying responsibility.

But that just makes matters worse.

Why are they publishing garbage without vetting it? And the denial is also implausible.

Somebody at the newspaper must have given the okay to this. The printing presses don’t run themselves (although maybe that will be the next stage of the AI business model).


How is this happening?

We are now several years into the AI revolution. I’m constantly hearing about new, improved bots that are smarter than super-geniuses, and can replace lowly humans.

But the bizarre lapses are getting worse — and more dangerous.

AI is routinely making stupid, nonsensical mistakes that even the most incompetent employee would never make. I’ve met some incompetent journalists over the years, but none would make a boneheaded move of this magnitude.

And this is after a trillion dollars has been sunk into AI by the most powerful corporations in the world. This is after they have soaked up much of the energy grid. This is after all the training and vetting and upgrading.

We’re not talking about beta testing or first generation AI. Silicon Valley is actually bragging about this tech — but it’s stupider than the worst journalist in the country.

April 28, 2025

A potential positive to the explosion of AI-generated fake porn

Filed under: Law, Media, Technology — Tags: , , , — Nicholas @ 05:00

The one thing we have always been able to predict with 100% confidence is that every new medium will be used for pornography and crime, often in the same product. However, No Pasaran makes a case for there being a socially useful side to the ever-increasing realism of fake porn from popular LLM generators:

Seriously?! Am I the only person that sees the benefits of this “troubling trend” of online bullying?!

Think about it.

Blackmail is now a thing of the past.

That’s it.

It’s over.

Whether you are a teen or an adult, whether the photos are real or not, you can simply pass all of them off — indeed, you can do so nonchalantly — as fakes or deepfakes. To your classmates, to your spouse, to your constituents. Who will know whether you are fibbing or telling the truth? (Maybe you hardly know yourself …)

(In a totally different context, of course, that is exactly what Joe Biden’s White House did …)

As it happens, a considerable share of the audience for these sex photos/videos — maybe far more than half — will already be assuming that they're fakes … (Thanks for the Instalink, Sarah.)

Depression at 16? Suicide at 17? Why fear sextortion at this point? Compliment instead the (anonymous) photo/video creators for doing a good job — for doing an outstanding job.

On my phone I keep receiving photos of Donald Trump tenderly cuddling with Joe Biden or Vladimir Putin or Stormy Daniels. Lots of apps now let you "repair" snapshots that are decades or (over) a century old, colorize them, and turn them into mini-movies (the latest one I saw delighted me, as it involved Civil War daguerreotypes from the 1860s).

I also keep receiving AI ads where, by combining a couple of photos of myself and of any girl (someone I know and am perhaps infatuated with or some rock or movie star or someone — Marilyn Monroe? Rudolph Valentino? Che Guevara? Queen Victoria? — who has been dead for decades) I can make myself hug or kiss that person — hungrily — on the mouth.

Years ago (long before AI), I was writing a TV script imagining a politician who was on national television and who was all of a sudden ambushed with private photos of him in a compromising position (with a woman other than his wife, with a man, with many women, with many men, at an orgy, in a BDSM cave, with a money shot, whatever …). Talk of falling victim; talk of bullying; talk of harassment (justified or otherwise)!

How should he react?

Ignore the content. And, with an admiring voice, let out a whistle and praise the work: “Wow, that’s well done!”

“What do you mean?!” interrupts the TV presenter, visibly frustrated. “No no no! Don’t tell me you are claiming they’re fake?! We have proof that you were seen at—”

Again, this was before AI, needless to say, which only made the politician's next words even more startling: "It is admirable how far studios have come with special effects!"

February 21, 2025

Tech enshittification continues

Ted Gioia notes that even the world’s biggest search engine provider is doing almost everything it can to make your search experiences worse and worse:

Almost everything in the digital world is turning into its opposite.

  • Social media platforms now prevent people from having a social life.
  • ChatGPT makes you less likely to chat with anybody.
  • Relationship apps make it harder for couples to form lasting relationships.
  • Health and wellness websites make it almost impossible to find reliable health advice — instead peddling products of dubious efficacy.
  • Product review sites now prevent people from reading impartial reviews by actual users of the product, instead operating as pay-for-play vehicles.
  • Etc. etc. etc.

You can often tell by the name. PayPal will never pay you a penny, and it’s certainly not your pal. Microsoft Teams only works if you stay away from your team. If you keep using Safari, you will never go on an actual Safari.

But the worst reversal is happening with search engines. They now prevent you from searching.

I’ve known Google up close and personal from the start. I initially found the company quirky and endearing — but those days ended long ago. The company is now clueless and creepy.

Almost every day I read some ugly news story about Google. Here are a few headlines from a typical week:

This company goes out of its way to do mischief. Messing with people is in its DNA.

Meanwhile, its base business is degrading at an alarming rate. The company doesn’t seem to care.

In a strange turnaround, search engines don’t want you to search for anything. That’s because searching leads you on a journey — and Google doesn’t want you to leave their platform.

The search engine was invented as your gateway to the web. The inventors of this technology tried to index every page on the Internet — so that you could find anything and everything.

That was an exciting era. Search engines were like train stations or airports. They took you all over the world.

At Google today, the goal is the exact opposite. You never leave the station.

Techies once described the Internet as a digital highway. But we need a different metaphor nowadays. Web platforms want to trap you on their app, and keep you there forever.

So, instead of a digital highway, we have a digital roach motel. They let the roaches check in — but not check out.

February 14, 2025

Trump may start paying attention to Canadian cultural protectionist policies next

Michael Geist points out just how many Canadian federal policies and programs will likely come under scrutiny by the Trump administration for their blatant protectionism against US cultural products:

My Globe and Mail op-ed argues the need for change is particularly true for Canadian digital and cultural policy. Parliamentary prorogation ended efforts at privacy, cybersecurity and AI reforms and U.S. pressure has thrown the future of a series of mandated payments – digital service taxes, streaming payments and news media contributions – into doubt. But the Trump tariff escalation, which now extends to steel and aluminum as well as the prospect of reviving the original tariff plan in a matter of weeks, signals something far bigger that may ultimately render current Canadian digital and cultural policy unrecognizable.

Our cultural frameworks are largely based on decades-old policies premised on marketplace protections and mandated support payments. This included foreign ownership restrictions in the cultural sector and requirements that broadcasters contribute a portion of their revenues to support Canadian content production.

As we moved from an analog to digital world, the government simply extended those policies to the digital realm. But with Mr. Trump appearing to call out what he views to be Canadian protectionist policies in sensitive sectors such as banking ownership, the cultural and digital sectors may be next.

If so, there is no shortage of long-standing policies that tilt the playing field in favour of Canadians and could spark some uncomfortable conversations.

Why do U.S. companies face ownership restrictions in the telecom and broadcast sectors? Why are Canadian broadcasters permitted to block U.S. television signals in order to capture increased advertising revenue? Why do Canadian content rules exclude U.S. companies from owning productions featuring predominantly Canadian talent?

The Canadian response that this is how it has always been is unlikely to persuade Mr. Trump.

Canadian policies premised on “making web giants pay” may also be non-starters under Mr. Trump. For the past five years, the Canadian government seemingly welcomed the opportunity to sabre rattle with U.S. internet companies. This led to mandated payments for streaming services to support Canadian film, television and music production; link taxes that targeted Meta and Google to help Canadian news outlets; and the multibillion-dollar retroactive digital services tax that is primarily aimed at U.S. tech giants.

Not only have those policies raised consumer affordability and marketplace competition concerns, they have also emerged as increasingly contentious trade issues. If the trade battles with the U.S. continue, the pressure to scale back the policies will mount.

Beyond rethinking established cultural and digital policies both new and old, the bigger changes may come from re-evaluating the competitive impact of policies that rely heavily on regulation just as the U.S. prioritizes economic growth through deregulation. Proposed Canadian privacy, online harms and AI rules have all relied heavily on increased regulation, looking to Europe as the model.

For example, consider the Canadian approach to AI regulation in the now-defunct Artificial Intelligence and Data Act. It specifically referenced the European Union’s regulatory system, which establishes extensive regulatory requirements for high-risk AI systems and bans some AI systems altogether.

However, the European approach is not the only game in town. Mr. Trump moved swiftly to cancel the former Biden administration’s executive order on AI regulation, signalling that the U.S. will prioritize deregulation in pursuit of global AI leadership. Further, the arrival of DeepSeek, the Chinese answer to ChatGPT, took the world by storm and served notice that U.S. AI dominance is by no means guaranteed.

The competing approaches – U.S.-style lightweight regulation that favours economic growth against a more robust European regulatory model that emphasizes AI guardrails and public protections – will force difficult policy choices that Canada has thus far avoided.

February 1, 2025

China produced DeepSeek, Britain is mired in deep suck

In the Sean Gabb Newsletter, Sebastian Wang discusses the contrast between China’s recent release of the DeepSeek AI platform that appears to be eating the collective lunches of the existing LLM products by US firms and the devotion of the British Labour government to plunge ever deeper into their Net Zero dystopian vision:

Net Zero image from Jo Nova

For those few readers who may be unaware, DeepSeek is an advanced open-source artificial intelligence platform developed in China. Released in late 2024, it has set a new standard in AI, outperforming American counterparts in adaptability and capability. It excels in natural language processing, machine learning, and data analysis, and — critically — it is open source. Unlike the proprietary models that dominate the American tech landscape, DeepSeek allows anyone to adapt, improve, and use it as he sees fit. It’s not just a technological triumph for China; it’s a serious challenge to American domination of information technology and an opportunity for those who want to break free from the stranglehold of Silicon Valley.

As a Chinese person, I take pride in this achievement. It’s a testament to what my people can achieve with focus and ambition. But my concern is less about taking pride in what China has done and more about lamenting how little Britain has contributed to this revolution. Britain, the birthplace of the Industrial Revolution, seems to have no place in this new world of AI-driven progress. The question is why?

The answer is simple: The people who rule Britain have chosen decline. Crushing taxes on income and capital gains, and inheritance taxes, punish those who want to create wealth. Endless regulations stifle ambition, making it easier to conform than to innovate. Worst of all, there are the net zero policies, which have made energy costs the highest in the world, making electricity unaffordable and unreliable. Industries that depend on energy have been priced out of existence, and the dreamers and doers who might have built the next DeepSeek are being ground down by a system designed to reward mediocrity.

Net zero is not a noble goal born of misconception; it’s a disaster by design. It’s a wealth transfer scheme that takes from ordinary working people and hands billions to a small clique of green profiteers. The winners are the wind farm builders, the financiers running opaque carbon trading schemes, and the activists cashing in on government handouts. The losers are everyone else — families struggling to pay energy bills, businesses forced to close, and an entire country left unable to compete.

Compare this to China. DeepSeek wasn’t luck — it was the product of a system that rewards innovation. Electricity in China is cheap and reliable. Regulations focus on enabling progress, not blocking it. Ambition is celebrated, not treated as a threat. The ruling class there, for all its many and terrible faults, understands the value of creating wealth and technological self-reliance. Britain’s ruling class, by contrast, has abandoned the idea of building anything. They’d rather sit in the City of London, counting money made elsewhere, than see industry and innovation flourish in the country at large. They have chosen decline — not for them, but for us.
