

Twelve Predictions for 2026

Most AI predictions ask what the technology will do. These predictions ask what we'll become.

The capability forecasts will mostly be right. Models will get faster, cheaper, more capable. Agents will handle more tasks. Adoption curves will steepen. None of that is particularly interesting to predict because none of it tells us what matters: how these tools will reshape the people and organizations that use them.

These are my predictions for the transformations we'll see in 2026. I draw on thinkers who've spent careers studying what happens when tools change their users: Stiegler's pharmacology of technology, Brown's analysis of neoliberal subjectivity, Zuboff's instrumentarian power, Ingold's critique of hylomorphism, Snowden's complexity framework.

Deep Structure

Why do presentations exist as artifacts at all?

The sociologist Bruno Latour offers a useful concept. He calls certain objects immutable mobiles: things that can travel across distances and contexts while remaining stable. A map is an immutable mobile. So is a legal contract, a scientific paper, a quarterly report. These artifacts enable coordination without co-presence. You don't need to be in the room; the document carries the message. It arrives the same as it left.

Presentations function as immutable mobiles. The deck you send to investors, the training materials distributed to five hundred employees, the pitch signed off by legal—these need to arrive exactly as they departed. The artifact isn't just for viewing. It's for traveling.

Thin to Thick

The hidden design challenge in every AI tool.

Everyone building AI tools is solving the same problem, and most don't realize it.

The visible problem is capability: can your tool write code, analyze data, automate workflows? The hidden problem is structure: how do users move from "I tried ChatGPT once" to "this tool understands how I work"? What primitives do you give them? How do those primitives grow?

Scroll through LinkedIn and you'll find practitioners mapping their journey:

  1. Casual use — random questions, sporadic interactions
  2. Power user — saved prompts, custom instructions
  3. Packager — building reusable workflows
  4. Chaos — too many tools that don't talk to each other
  5. Workspace need — craving a single place where everything lives

The stages feel true. They also encode assumptions worth excavating. What theory of progression does this taxonomy assume? What happens to users who don't follow the path?

The Political Economy of Moats

We say "moat" and imagine water, stone, permanence. Architecture designed to repel. But medieval sieges tell a different story. Historians estimate that three-quarters of sieges succeeded, more often through negotiation or starvation than through assault. Fortresses fell to betrayal, bribery, and coercion at least as often as to military force. In 1119, Louis VI of France bribed the castellan of Les Andelys to smuggle soldiers inside hidden under straw; the "impregnable" fortress fell without a blow struck. Under the accepted rules of war, a town captured by force could be sacked, but one that surrendered could not. Surrender was economically rational for everyone except the ideologically committed.

The drawbridge was always the point. Moats weren't designed to prevent entry; they were designed to make entry expensive enough that negotiation became preferable to assault. The moat's function was never architectural but political-economic. Every castle had a price. The question was whether attackers could pay it.

This week, three drawbridges lowered.

Disney licensed 200 characters to OpenAI's Sora for $1 billion plus equity. Mickey Mouse, Darth Vader, Iron Man—a century of IP fortress-building converted to platform positioning in a single transaction. Meta announced its pivot from open-source Llama to closed proprietary "Avocado," abandoning the strategy that made it the center of open AI development. And Anthropic's Claude Skills feature invited companies to "package your procedures, best practices, and institutional knowledge" into the platform—the capability moats that vertical SaaS companies believe make them defensible.

Three moats. Three prices paid. The water didn't drain; it converted to currency.

What we're witnessing isn't moat failure. It's moat-liquefaction: the transformation of defensive barriers into transactional surfaces. A liquid moat doesn't breach; it trades. The castle doesn't fall; it renegotiates. And the question that medieval castellans understood perfectly—whose incentives govern the drawbridge?—turns out to be the only strategic question that matters.

On Slop

The word arrived with the force of revelation. Slop. Scrolling through feeds thick with AI-generated images, articles, videos—content that felt somehow wrong, uncanny, excessive—we finally had a name for the unease. The term spread because it captured something visceral: the texture of language that reads smoothly but says nothing, images that resolve into coherence without ever achieving meaning, an endless tide of stuff that nobody asked for and nobody quite wanted.

The complaint is now ubiquitous. Graphite's analysis of web content found that over half of new articles are now AI-generated. Bynder's research shows a majority of consumers report reduced engagement when they suspect content is machine-made. The Macquarie Dictionary named "AI slop" its 2025 word of the year. The diagnosis seems clear: AI is flooding our information environment with garbage, and the flood is drowning authentic human expression.

But what if the diagnosis is wrong? Not factually wrong (the volume is real, the uncanniness genuine) but conceptually wrong. What if "slop" is a category error that mistakes volume for vice, aesthetics for ethics, and origin for orientation? The question isn't whether AI produces garbage. It does, in abundance. The question is whether that's the right thing to be worried about.

Remix / Remake / Remade

In China Miéville's The City & The City, two cities occupy the same physical space. Besźel and Ul Qoma share streets, share buildings, share the very air, yet their citizens are trained from birth to unsee the other city. You learn to recognize the architecture, the clothing, the gait of the other place, and you learn to let your gaze slide past it without acknowledgment. The unseeing isn't ignorance—it's a disciplined practice, socially enforced and, after long enough, automatic: a learned incapacity so thorough that the other city becomes invisible not through absence but through cultivated blindness. To see is a crime called Breach. The citizens of Besźel walk past the citizens of Ul Qoma every day, their shoulders nearly brushing, and neither acknowledges the other's existence.

We are practicing the same unseeing now.

The Cockpit That Remembers

In the early 1900s, over 4,000 wagon manufacturers dominated American transportation. They had infrastructure, expertise, and supply chains refined across generations. How many became car companies? One. Studebaker. One company out of four thousand made the transition.

Walking and Flying

There's a particular kind of knowing that comes from walking—from months spent in neighborhoods where you learn which doors open easily and which remain forever closed, from conversations that drift and return like tides, from the smell of cooking that tells you more about a place than any survey could capture. Ethnographers have walked like this for decades: slowly, attentively, building trust one cup of coffee at a time, noticing the things people don't say as carefully as the things they do. It's intimate work, this ground-level knowing: embodied, reciprocal, achingly slow.

Cathedrals versus Commons

The best pranks take ten thousand years to set up.

That's what the Dwellers do in Banks' The Algebraist: beings who live for eons in gas giants, playing elaborate jokes on each other because when you have infinite resources, reputation through novelty is all that matters. One Dweller breeds a sentient species just to have an audience for their poetry. Another spends centuries setting up a pun.

They would find our current situation hilarious.

Consider the hospital administrator's daily reality: You're running critical infrastructure on OpenAI's API while your radiologists use sketchy Discord models on the side. Your lawyers demand compliance certificates. Your engineers contribute to repositories that will make your product obsolete. Your board wants quarterly growth. Your mission statement talks about serving humanity. Three different economic games, incompatible rules, everyone pretending this is normal.

Beyond the Flywheel

For twenty years, we've lived with a particular story about how digital growth works—the flywheel spinning faster with each turn, users attracting more users, data improving services which draw more data. It's a narrative that explained why Amazon felt inevitable, why Facebook seemed unstoppable, why Uber could burn billions and still look like the future. The flywheel wasn't entirely wrong, of course. It captured something real about network effects and compounding advantages; it gave us services that were genuinely transformative, at least for a while. Yet what looked like perpetual motion from one angle, we can now see, was extraction from another.