On Slop
The word arrived with the force of revelation. Slop. Scrolling through feeds thick with AI-generated images, articles, videos—content that felt somehow wrong, uncanny, excessive—we finally had a name for the unease. The term spread because it captured something visceral: the texture of language that reads smoothly but says nothing, images that resolve into coherence without ever achieving meaning, an endless tide of stuff that nobody asked for and nobody quite wanted.
The complaint is now ubiquitous. Graphite's analysis of web content found that over half of newly published articles are AI-generated. Bynder's research shows that a majority of consumers report reduced engagement with content they suspect is machine-made. The Macquarie Dictionary named "AI slop" its 2025 word of the year. The diagnosis seems clear: AI is flooding our information environment with garbage, and the flood is drowning authentic human expression.
But what if the diagnosis is wrong? Not factually wrong (the volume is real, the uncanniness genuine) but conceptually wrong. What if "slop" is a category error that mistakes volume for vice, aesthetics for ethics, and origin for orientation? The question isn't whether AI produces garbage. It does, in abundance. The question is whether that's the right thing to be worried about.