Archive

Wardley Factories

The first industrial revolution made goods. The resistance of raw material to finished product—spinning cotton into thread, forging iron into rails—was the friction that defined an era. Then something shifted. The resistance moved up a level. We stopped just making goods and started making the machines that make goods. Resistance became: how do you build a factory? How do you systematize production itself?

This pattern recurs. Each time we solve the friction at one level, we create the conditions for the next level to become the bottleneck. And then we industrialize that. The resistance keeps moving up, and we keep following it, building machines to solve the problems created by the previous generation of machines.

Simon Wardley's evolution model—Genesis, Custom-Built, Product, Commodity—was supposed to describe how technologies naturally mature. Something new emerges (genesis), gets built bespoke for early adopters (custom), standardizes into products that compete on features (product), and eventually becomes undifferentiated infrastructure everyone assumes exists (commodity). The phases had a certain stateliness to them. You could watch a technology work through them over years, sometimes decades. The journey from ARPANET to commodity cloud computing took roughly forty years.

That stateliness is gone. What changed isn't just the pace but the structure. The Wardley phases have been industrialized. We've built factories for moving technologies through the evolution cycle, and those factories are getting more efficient with each iteration. The cycle that once took decades now completes in years; the cycle after that will complete in months. And each completion makes the next one faster, because the output of each cycle becomes the input for accelerating the next.

The Shenzhen Recursion

In 1980, Deng Xiaoping designated a fishing village of 30,000 workers as one of China's first Special Economic Zones. Shenzhen was an experiment: a 330 square kilometer sandbox where the central government could test policies too risky for the broader economy. Foreign ownership, contract labor, stock exchanges, land auctions—all were trialed there first. If they worked, they'd graduate to the mainland. If they failed, the damage would be contained.

The results exceeded everyone's projections. Against a national average of 10% annual GDP growth, Shenzhen grew at 58% from 1980 to 1984. By 1988, the central government had implemented many of Shenzhen's reforms across nearly 300 regions covering 20% of China's population. Today Shenzhen's GDP exceeds Hong Kong's. It's home to Tencent, Huawei, and DJI. The fishing village became the factory of the world.

This story usually gets told as a tale of economic liberalization, or of China's pragmatic approach to reform. But there's a different lesson buried in it, one that has nothing to do with economics and everything to do with how complex systems absorb change. The SEZ model is an architecture for experimentation at scale—bounded risk, clear graduation criteria, systematic diffusion. And that architecture is exactly what software development needs now that AI has made code cheap to produce.
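To make the software analogy concrete, here is a minimal sketch in Python of what a bounded experiment with explicit graduation criteria might look like. Everything in it is hypothetical: the ExperimentZone class, the metric names, and the thresholds illustrate the pattern rather than any existing tool.

```python
from dataclasses import dataclass, field

@dataclass
class ExperimentZone:
    """A bounded sandbox in the SEZ spirit: limited blast radius,
    explicit graduation criteria, and a path to wider rollout."""
    name: str
    traffic_share: float              # bounded risk: fraction of users exposed
    min_weeks: int                    # minimum observation period before graduating
    success_criteria: dict = field(default_factory=dict)

    def graduates(self, observed: dict, weeks_run: int) -> bool:
        """Graduate only if every criterion is met after the observation period."""
        if weeks_run < self.min_weeks:
            return False
        return all(observed.get(metric, float("-inf")) >= threshold
                   for metric, threshold in self.success_criteria.items())

# Hypothetical usage: an agentic-coding pilot confined to one team.
pilot = ExperimentZone(
    name="agentic-refactoring-pilot",
    traffic_share=0.05,               # if it fails, the damage is contained
    min_weeks=8,
    success_criteria={"review_pass_rate": 0.90, "defect_escape_drop": 0.20},
)
print(pilot.graduates({"review_pass_rate": 0.93, "defect_escape_drop": 0.25}, weeks_run=10))  # True
```

The point of the sketch is the shape, not the numbers: risk is bounded up front, the graduation test is written down before the experiment starts, and passing it is what licenses diffusion to the rest of the system.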

The Taste Squeeze

The moment of selection: digital designs meeting physical fabric

Diarra Bousso runs an AI-first fashion house. She uses generative tools to prototype designs, tests them with Instagram polls before production, and operates an on-demand supply chain that lets her sell garments that don't exist yet—customers pay for AI renders of wool capes, and artisans manufacture them after the order comes in. She's flipped the fashion industry's cash flow equation: instead of spending fourteen months on prototypes, trade shows, and inventory before seeing revenue, she gets paid first.

When asked whether AI will replace designers, her answer is unequivocal: no. The tools amplify human creativity; they don't substitute for it. "You could use all the AI tools in the world," she says, "you will never get these images I just showed you because there's a lot of work behind it that comes from taste, that comes from being a designer, that comes from being an artist, that comes from culture, that comes from my upbringing."

This is the optimistic case for human irreplaceability in creative work, and it deserves to be taken seriously before we complicate it. The argument has three parts, and each contains real insight.

Impossible Algebra

Fifty pull requests per week requires more hours than exist. Ten minutes each—and that's generous, assuming no review, no debugging, no context-switching—already means more than eight hours of uninterrupted production per week; at a realistic hour per PR, it means fifty focused hours, more than a working week contains. The number doesn't stretch toward difficult. It breaks arithmetic entirely.
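A quick sanity check in a few lines of Python; the ten-minute figure is the "generous" floor from above, while the one-hour figure is an added assumption about what a PR realistically costs.

```python
# Back-of-the-envelope arithmetic for a fifty-PRs-per-week target.
# 10 min/PR is the generous floor; 60 min/PR is an assumed realistic cost.
PRS_PER_WEEK = 50
WORKDAYS = 5

for minutes_per_pr in (10, 60):
    hours_per_week = PRS_PER_WEEK * minutes_per_pr / 60
    hours_per_day = hours_per_week / WORKDAYS
    print(f"{minutes_per_pr:>2} min/PR: {hours_per_week:4.1f} focused hours/week "
          f"(~{hours_per_day:.1f} h per workday), before review, debugging, or meetings")
```

Even at the floor, the target consumes a full working day of pure production each week; at anything realistic, it swallows the week whole.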

This is the first axiom: the target must be impossible under current assumptions. Not ambitious. Impossible. If you can imagine reaching it by working harder, it isn't impossible enough.

The second axiom follows from the first: impossible targets don't yield to effort. They yield to transformation. The developer who achieves fifty PRs weekly isn't a faster version of the developer who achieves five. They're a different kind of thing—an orchestrator running parallel agents, not a craftsperson at a single terminal.

What looks like productivity is actually ontology. The algebra that makes impossible numbers possible isn't about efficiency. It's about becoming.

The Political Economy of Fog

Prologue: The Point of the Game

I keep returning to something the philosopher James Carse wrote nearly forty years ago, and that Sangeet Paul Choudhary put more sharply in a recent post: the point of an infinite game is to keep playing.

This sounds like a platitude until you watch people forget it. You stay in the infinite game by winning finite games: the funding round, the product launch, the quarterly target, the acquisition. These finite games have clear winners and losers. They feel urgent. They come with metrics and deadlines and congratulations when you close them. But they are not the point. They are what you do to remain in the arena where the actual game unfolds.

The pathology (and it is a pathology, not a mistake) is when we optimize so hard for the next finite win that we sacrifice the capacity to keep winning. When we confuse the battle for the war. When every decision serves next quarter at the expense of next decade. I've watched this happen to people I respect, and I've caught myself doing it more than I'd like to admit.

What makes the AI platform market worth examining is that it has industrialized this confusion. Meta pays $2 billion for Manus in ten days. Cursor raises at $29 billion. The valuations make no sense as prices for things that exist; they make complete sense as prices for options on things that might. Everyone is playing finite games (the demo, the deal, the markup) and almost no one can tell whether these finite wins are building something durable or consuming the conditions for durability.

The fog prevents the knowing. What follows is an attempt to trace its contours.

By fog I mean something specific: a market condition where participants cannot evaluate their own productivity. Buyers cannot distinguish platforms that work from platforms that perform. Capital flows toward stories rather than outcomes; outcomes resist measurement. The fog isn't a temporary inconvenience that better tools will disperse. The fog is structural. It's produced by the same dynamics that produce the market itself.

This essay moves in concentric circles through that fog. It starts with the individual developer who feels productive while actually getting slower, a perception gap documented in studies that should have caused more alarm than they did. It moves outward to markets that cannot learn from their participants, capital structures that reward performance over verification, and political battles over who gets to define what "working" even means. At the center is a question I find genuinely unsettling: what kind of people do we become when we work in conditions of structural unknowability? What happens to judgment, to attention, to the capacity for honest self-assessment, when the tools we use are optimized to make us feel effective regardless of whether we are?

I don't have comfortable answers. But I've become convinced that the question (which game are you actually playing?) is the one that matters. The fog makes it difficult to answer, and that difficulty is itself the subject.

Twelve Predictions for 2026

Most AI predictions ask what the technology will do. These predictions ask what we'll become.

The capability forecasts will mostly be right. Models will get faster, cheaper, more capable. Agents will handle more tasks. Adoption curves will steepen. None of that is particularly interesting to predict because none of it tells us what matters: how these tools will reshape the people and organizations that use them.

These are my predictions for the transformations we'll see in 2026. I draw on thinkers who've spent careers studying what happens when tools change their users: Stiegler's pharmacology of technology, Brown's analysis of neoliberal subjectivity, Zuboff's instrumentarian power, Ingold's critique of hylomorphism, Snowden's complexity framework.

Deep Structure

Why do presentations exist as artifacts at all?

The sociologist Bruno Latour offers a useful concept. He calls certain objects immutable mobiles: things that can travel across distances and contexts while remaining stable. A map is an immutable mobile. So is a legal contract, a scientific paper, a quarterly report. These artifacts enable coordination without co-presence. You don't need to be in the room; the document carries the message. It arrives the same as it left.

Presentations function as immutable mobiles. The deck you send to investors, the training materials distributed to five hundred employees, the pitch signed off by legal—these need to arrive identical to how they departed. The artifact isn't just for viewing. It's for traveling.

Thin to Thick

The hidden design challenge in every AI tool.

Everyone building AI tools is solving the same problem, and most don't realize it.

The visible problem is capability: can your tool write code, analyze data, automate workflows? The hidden problem is structure: how do users move from "I tried ChatGPT once" to "this tool understands how I work"? What primitives do you give them? How do those primitives grow?

Scroll through LinkedIn and you'll find practitioners mapping their journey:

  1. Casual use — random questions, sporadic interactions
  2. Power user — saved prompts, custom instructions
  3. Packager — building reusable workflows
  4. Chaos — too many tools that don't talk to each other
  5. Workspace need — craving a single place where everything lives

The stages feel true. They also encode assumptions worth excavating. What theory of progression does this taxonomy assume? What happens to users who don't follow the path?
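One way to excavate those assumptions is to ask what primitives each stage presumes the tool exposes. A minimal sketch in Python, with hypothetical stage and primitive names, just to make the structural question concrete:

```python
from enum import Enum

class Stage(Enum):
    """The five stages above, read as a claim about primitives."""
    CASUAL = "one-off prompts"
    POWER_USER = "saved prompts, custom instructions"
    PACKAGER = "reusable workflows"
    CHAOS = "many disconnected tools"
    WORKSPACE = "a single place where everything lives"

# Hypothetical mapping from stage to the primitives the tool must offer.
PRIMITIVES = {
    Stage.CASUAL: ["prompt"],
    Stage.POWER_USER: ["prompt", "saved_prompt", "custom_instruction"],
    Stage.PACKAGER: ["workflow", "template", "shareable_package"],
    Stage.CHAOS: ["integration", "export", "glue_script"],
    Stage.WORKSPACE: ["workspace", "shared_context", "permissions"],
}

for stage in Stage:
    print(f"{stage.name:<10} -> {', '.join(PRIMITIVES[stage])}")
```

Written out this way, the taxonomy's hidden assumption becomes visible: each stage demands strictly richer primitives than the last, which is exactly the kind of linear progression real users tend not to follow.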

The Political Economy of Moats

We say "moat" and imagine water, stone, permanence. Architecture designed to repel. But medieval sieges tell a different story. Historians estimate that three-quarters of sieges succeeded, more often through negotiation or starvation than through assault. Fortresses fell to betrayal, bribery, and coercion at least as often as to military force. In 1119, Louis VI of France bribed the castellan of Les Andelys to smuggle soldiers inside hidden under straw; the "impregnable" fortress fell without a blow struck. Under accepted rules of war, a town captured by force could be sacked, but one that surrendered could not. Surrender was economically rational for everyone except the ideologically committed.

The drawbridge was always the point. Moats weren't designed to prevent entry; they were designed to make entry expensive enough that negotiation became preferable to assault. The moat's function was never architectural but political-economic. Every castle had a price. The question was whether attackers could pay it.

This week, three drawbridges lowered.

Disney licensed 200 characters to OpenAI's Sora for $1 billion plus equity. Mickey Mouse, Darth Vader, Iron Man—a century of IP fortress-building converted to platform positioning in a single transaction. Meta announced its pivot from open-source Llama to closed proprietary "Avocado," abandoning the strategy that made it the center of open AI development. And Anthropic's Claude Skills feature invited companies to "package your procedures, best practices, and institutional knowledge" into the platform—the capability moats that vertical SaaS companies believe make them defensible.

Three moats. Three prices paid. The water didn't drain; it converted to currency.

What we're witnessing isn't moat failure. It's moat-liquefaction: the transformation of defensive barriers into transactional surfaces. A liquid moat doesn't breach; it trades. The castle doesn't fall; it renegotiates. And the question that medieval castellans understood perfectly—whose incentives govern the drawbridge?—turns out to be the only strategic question that matters.

On Slop

The word arrived with the force of revelation. Slop. Scrolling through feeds thick with AI-generated images, articles, videos—content that felt somehow wrong, uncanny, excessive—we finally had a name for the unease. The term spread because it captured something visceral: the texture of language that reads smoothly but says nothing, images that resolve into coherence without ever achieving meaning, an endless tide of stuff that nobody asked for and nobody quite wanted.

The complaint is now ubiquitous. Graphite's analysis of web content found that over half of new articles are now AI-generated. Bynder's research shows a majority of consumers report reduced engagement when they suspect content is machine-made. The Macquarie Dictionary named "AI slop" its 2025 word of the year. The diagnosis seems clear: AI is flooding our information environment with garbage, and the flood is drowning authentic human expression.

But what if the diagnosis is wrong? Not factually wrong (the volume is real, the uncanniness genuine) but conceptually wrong. What if "slop" is a category error that mistakes volume for vice, aesthetics for ethics, and origin for orientation? The question isn't whether AI produces garbage. It does, in abundance. The question is whether that's the right thing to be worried about.