Ted Gioia says that he’s seeing strong indicators that the AI slop superabundance has helped create a widespread rejection of it and all its works:
2025 has been the year of garbage culture.
Creators watch in horror as dismal AI slop threatens their livelihoods — and the integrity of their fields. It’s everywhere, spreading faster than a pharaoh’s plague.
In recent months, we’ve been bombarded with millions of lousy AI songs, idiotic AI videos, and clumsy AI images. Error-filled AI texts are everywhere — from your workplace memos to the books sold on Amazon.com.
Even my lowly vocation, music journalism, gets turned into a joke when it’s accompanied by slop images of fake events.

No, these things did not really happen.
But something has changed in the last few days.
The garbage hasn’t disappeared. It’s still everywhere, stinking up the joint.
But people are disgusted, and finally pushing back. And they are doing so with such fervor that even the biggest AI companies are now getting nervous and pulling back.
Just consider this surprising headline:
This was stunning news. YouTube is part of the biggest AI slop promoter of them all — namely the Google/Alphabet empire. How can they possibly abandon AI garbage? Their bosses are the industry's biggest slopmasters.
After this shocking news reverberated through the creative economy, YouTube started to backtrack. They said that they would not punish every AI video — some can still be monetized.
But even the revised guidelines are still a major blow to AI slop purveyors. YouTube made clear that “creators are required to disclose when their realistic content is altered or synthetic”. That’s a huge win—we finally have a requirement for disclosure, and it came straight from the dark planet Alphabet.
YouTube also stressed that it opposes “content that is mass-produced or repetitive, which is content viewers often consider spam”. This is just a step away from blocking slop.
Update, 22 July: Ted posted a follow-up with a bit more evidence that the pushback is working:
In my latest article I criticized Spotify for allowing uploads of unauthorized AI tracks to the profiles of dead musicians.
But the company may finally be listening to criticism of its AI policies. In this case, Spotify has now taken steps to stop the abuses, and a spokesperson reached out to me today with an update expressing a clear and proper policy on AI fraud.
I share it below (and have also updated my article):
We’ve flagged the issue to SoundOn, the distributor of the content in question, and it has been removed. This violates Spotify’s deceptive content policies, which prohibit impersonation intended to mislead, such as replicating another creator’s name, image, or description, or posing as a person, brand, or organization in a deceptive manner. This is not allowed. We take action against licensors and distributors who fail to police for this kind of fraud and those who commit repeated or egregious violations can and have been permanently removed from Spotify.
They acted quickly, and I give them credit for that.
Update the second, 23 July: Ah, Spotify giveth and Spotify taketh away:
“Spotify is publishing new, AI-generated songs on the official pages of artists who died years ago without the permission of their estates or record labels,” reports 404 Media.
This scandal came to light because of an AI song attributed to Blaze Foley, who died in 1989. The bogus track is accompanied by an AI-generated image of a man who bears no resemblance to the singer.
What’s going on here? Is this just ignorance or carelessness at Spotify? Or does it represent something more sinister — another example of the company’s willingness to deceive users in the pursuit of profits?
These scams must stop. If Spotify doesn’t fix this mess immediately, courts should intervene.
But the dead musician scandal is just a start — because other bizarre things are happening at Spotify.
The whole situation is positively surreal.