
2026

Wardley Factories

The first industrial revolution made goods. The resistance of raw material to finished product—spinning cotton into thread, forging iron into rails—was the friction that defined an era. Then something shifted. The resistance moved up a level. We stopped just making goods and started making the machines that make goods. Resistance became: how do you build a factory? How do you systematize production itself?

This pattern recurs. Each time we solve the friction at one level, we create the conditions for the next level to become the bottleneck. And then we industrialize that. The resistance keeps moving up, and we keep following it, building machines to solve the problems created by the previous generation of machines.

Simon Wardley's evolution model—Genesis, Custom-Built, Product, Commodity—was supposed to describe how technologies naturally mature. Something new emerges (genesis), gets built bespoke for early adopters (custom), standardizes into products that compete on features (product), and eventually becomes undifferentiated infrastructure everyone assumes exists (commodity). The phases had a certain stateliness to them. You could watch a technology work through them over years, sometimes decades. The journey from ARPANET to commodity cloud computing took roughly forty years.

That stateliness is gone. What changed isn't just the pace but the structure. The Wardley phases have been industrialized. We've built factories for moving technologies through the evolution cycle, and those factories are getting more efficient with each iteration. The cycle that once took decades now completes in years; the cycle after that will complete in months. And each completion makes the next one faster, because the output of each cycle becomes the input for accelerating the next.

The Shenzhen Recursion

In 1980, Deng Xiaoping designated a fishing village of 30,000 workers as one of China's first Special Economic Zones. Shenzhen was an experiment: a 330 square kilometer sandbox where the central government could test policies too risky for the broader economy. Foreign ownership, contract labor, stock exchanges, land auctions—all were trialed there first. If they worked, they'd graduate to the mainland. If they failed, the damage would be contained.

The results exceeded everyone's projections. Against a national average of 10% annual GDP growth, Shenzhen grew at 58% from 1980 to 1984. By 1988, the central government had implemented many of Shenzhen's reforms across nearly 300 regions covering 20% of China's population. Today Shenzhen's GDP exceeds Hong Kong's. It's home to Tencent, Huawei, and DJI. The fishing village became the factory of the world.

This story usually gets told as a tale of economic liberalization, or of China's pragmatic approach to reform. But there's a different lesson buried in it, one that has nothing to do with economics and everything to do with how complex systems absorb change. The SEZ model is an architecture for experimentation at scale—bounded risk, clear graduation criteria, systematic diffusion. And that architecture is exactly what software development needs now that AI has made code cheap to produce.

The Taste Squeeze

[Image: The moment of selection, digital designs meeting physical fabric]

Diarra Bousso runs an AI-first fashion house. She uses generative tools to prototype designs, tests them with Instagram polls before production, and operates an on-demand supply chain that lets her sell garments that don't exist yet—customers pay for AI renders of wool capes, and artisans manufacture them after the order comes in. She's flipped the fashion industry's cash flow equation: instead of spending fourteen months on prototypes, trade shows, and inventory before seeing revenue, she gets paid first.

When asked whether AI will replace designers, her answer is unequivocal: no. The tools amplify human creativity; they don't substitute for it. "You could use all the AI tools in the world," she says, "you will never get these images I just showed you because there's a lot of work behind it that comes from taste, that comes from being a designer, that comes from being an artist, that comes from culture, that comes from my upbringing."

This is the optimistic case for human irreplaceability in creative work, and it deserves to be taken seriously before we complicate it. The argument has three parts, and each contains real insight.

Impossible Algebra

Fifty pull requests per week requires more focused hours than a working week contains. Ten minutes each—and that's generous, assuming no review, no debugging, no context-switching—already adds up to a full working day of uninterrupted production; price in review, debugging, and context-switching at anything like realistic rates and the required hours outrun the hours that exist. The number doesn't stretch toward difficult. It breaks arithmetic entirely.
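
A back-of-envelope sketch makes the squeeze concrete. Only the ten-minute floor and the fifty-per-week target come from the paragraph above; the five-day week and the 60- and 120-minute figures are illustrative assumptions standing in for realistic per-PR costs.

```python
# Back-of-envelope check of the "fifty PRs a week" target.
# Assumptions (not from the essay): a five-day working week, and
# 60/120 minutes as stand-ins for realistic per-PR costs once review,
# debugging, and context-switching are priced in.

PRS_PER_WEEK = 50
WORKDAYS_PER_WEEK = 5

for minutes_per_pr in (10, 60, 120):  # 10 min is the essay's generous floor
    hours_per_week = PRS_PER_WEEK * minutes_per_pr / 60
    hours_per_day = hours_per_week / WORKDAYS_PER_WEEK
    print(f"{minutes_per_pr:>3} min/PR -> {hours_per_week:6.1f} h/week, "
          f"{hours_per_day:4.1f} h/day of uninterrupted production")
```

At the absurd floor the week barely survives; at anything resembling real engineering time, the required hours overrun the working week several times over.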

This is the first axiom: the target must be impossible under current assumptions. Not ambitious. Impossible. If you can imagine reaching it by working harder, it isn't impossible enough.

The second axiom follows from the first: impossible targets don't yield to effort. They yield to transformation. The developer who achieves fifty PRs weekly isn't a faster version of the developer who achieves five. They're a different kind of thing—an orchestrator running parallel agents, not a craftsperson at a single terminal.

What looks like productivity is actually ontology. The algebra that makes impossible numbers possible isn't about efficiency. It's about becoming.

The Political Economy of Fog

Prologue: The Point of the Game

I keep returning to something the philosopher James Carse wrote nearly forty years ago, and that Sangeet Paul Choudhary put more sharply in a recent post: the point of an infinite game is to keep playing.

This sounds like a platitude until you watch people forget it. You stay in the infinite game by winning finite games: the funding round, the product launch, the quarterly target, the acquisition. These finite games have clear winners and losers. They feel urgent. They come with metrics and deadlines and congratulations when you close them. But they are not the point. They are what you do to remain in the arena where the actual game unfolds.

The pathology (and it is a pathology, not a mistake) is when we optimize so hard for the next finite win that we sacrifice the capacity to keep winning. When we confuse the battle for the war. When every decision serves next quarter at the expense of next decade. I've watched this happen to people I respect, and I've caught myself doing it more than I'd like to admit.

What makes the AI platform market worth examining is that it has industrialized this confusion. Meta pays $2 billion for Manus in ten days. Cursor raises at $29 billion. The valuations make no sense as prices for things that exist; they make complete sense as prices for options on things that might. Everyone is playing finite games (the demo, the deal, the markup) and almost no one can tell whether these finite wins are building something durable or consuming the conditions for durability.

The fog prevents the knowing. What follows is an attempt to trace its contours.

By fog I mean something specific: a market condition where participants cannot evaluate their own productivity. Buyers cannot distinguish platforms that work from platforms that perform. Capital flows toward stories rather than outcomes; outcomes resist measurement. The fog isn't a temporary inconvenience that better tools will disperse. The fog is structural. It's produced by the same dynamics that produce the market itself.

This essay moves in concentric circles through that fog. It starts with the individual developer who feels productive while actually getting slower, a perception gap documented in studies that should have caused more alarm than they did. It moves outward to markets that cannot learn from their participants, capital structures that reward performance over verification, and political battles over who gets to define what "working" even means. At the center is a question I find genuinely unsettling: what kind of people do we become when we work in conditions of structural unknowability? What happens to judgment, to attention, to the capacity for honest self-assessment, when the tools we use are optimized to make us feel effective regardless of whether we are?

I don't have comfortable answers. But I've become convinced that the question (which game are you actually playing?) is the one that matters. The fog makes it difficult to answer, and that difficulty is itself the subject.