Archive

The Phoenix Test

"Voting With Fire" made the case for the phoenix: a codebase designed for regeneration rather than permanence, where specifications and acceptance tests constitute the durable layer and application code is disposable. But it left a question open that the archaeological evidence answers with uncomfortable precision. Which specifications? Which tests? What, exactly, needs to survive the fire — and what looks durable but will burn with the structure it serves?

Luke Kemp's evidence across two hundred cases of civilizational collapse reveals a variable that determines everything, a variable that has nothing to do with the knowledge itself and everything to do with the social arrangement that carries it. The same knowledge, held differently, has radically different survival properties. The critical question is never what the knowledge is. It is who holds it, how it travels, and whether it serves the community or the palace.

Voting with Fire

In Goliath's Curse, the historian Luke Kemp renames civilization. He calls it Goliath: a collection of dominance hierarchies in which some individuals dominate others to control energy and labor. Named for the Bronze Age warrior — imposing in stature, reliant on violence, surprisingly fragile. Across two hundred case studies spanning five millennia, Kemp documents the same structural dynamic. As hierarchical societies age, inequality concentrates, decision-making deteriorates, and the system grows brittle. Complexity scientists call it critical slowing down. A healthy system absorbs shocks and recovers quickly. An extractive one recovers more slowly from each successive disturbance, like an aging body that takes longer to heal from each injury, until eventually a shock that the system would have once absorbed tips it into collapse.
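The dynamic can be made concrete with a toy model. Below is a minimal sketch (my construction, not Kemp's or the complexity-science literature's): a one-variable system relaxing toward equilibrium, where the restoring strength `k` stands in for institutional resilience. As `k` erodes toward zero, recovery from the same fixed shock takes longer and longer, which is the signature of critical slowing down.

```python
# Critical slowing down, sketched as a one-variable system x' = -k * x.
# The restoring strength k stands in for resilience; the same shock
# takes longer to absorb as k approaches zero -- the tipping point.

def recovery_time(k, shock=1.0, threshold=0.05, dt=0.01):
    """Time until a shocked system returns to within `threshold` of equilibrium."""
    x, steps = shock, 0
    while abs(x) > threshold:
        x += -k * x * dt          # linear relaxation toward equilibrium
        steps += 1
    return steps * dt

for k in (1.0, 0.5, 0.25, 0.1):   # progressively weaker restoring force
    print(f"k={k}: recovery time ~ {recovery_time(k):.2f}")
```

Halving the restoring strength roughly doubles the recovery time; a monitor watching only "did it recover?" sees nothing wrong until the shock that never heals.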

The curse is internal. Goliaths don't die from external assault. They hollow themselves out. The wealth pump transfers resources upward; the exchange between rulers and ruled grows more unequal; elites compete for shrinking returns; and the population that once sustained the structure loses both the incentive and the capacity to defend it. Then drought comes, or invasion, or rebellion, and the system that looked permanent proves to have been perched on a knife's edge for decades.

Software engineers will recognize this dynamic because they live inside it.

DISCOVER

Software development has two scoreboards and a process between them. The first measures deployment: how fast code ships, how often it breaks, how quickly you recover. The second measures the organization: how fast signals reach the right person, how autonomous the response is. Comprehension — understanding the system well enough to decide and act correctly — sits between them.

The progression follows a logic that Simon Wardley mapped in his work on technology evolution: every capability that becomes commodity accelerates everything that depends on it, exposing the next constraint beneath.

The Decision Loop

DORA measures how fast code flows from commit to production. MOVE measures how fast the organization senses, decides, and acts. Both track real performance. Both share an assumption: that someone, somewhere, understood the system well enough to make the right call.
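What DORA-style measurement looks like in miniature, as a hedged sketch (field names and records are illustrative, not from any official DORA tooling): lead time from commit to deploy, and change failure rate across deployments.

```python
# Two DORA-style measurements over illustrative deployment records:
# median lead time (commit -> production) and change failure rate.
from datetime import datetime
from statistics import median

deploys = [  # made-up records for illustration
    {"committed": datetime(2024, 1, 1, 9), "deployed": datetime(2024, 1, 1, 17), "failed": False},
    {"committed": datetime(2024, 1, 2, 9), "deployed": datetime(2024, 1, 4, 9),  "failed": True},
    {"committed": datetime(2024, 1, 5, 9), "deployed": datetime(2024, 1, 5, 10), "failed": False},
]

lead_times = [d["deployed"] - d["committed"] for d in deploys]
print("median lead time:", median(lead_times))
print("change failure rate:", sum(d["failed"] for d in deploys) / len(deploys))
```

Note what neither number captures: whether anyone understood the system well enough to make the deploys correct in the first place.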

That assumption has a cost, and it's larger than most organizations realize.

The Replacement Rate

Two guys in the jungle. A tiger charges. One kneels to tighten his shoelaces. The other yells: "You can't outrun a tiger!" First guy: "I don't have to outrun the tiger. I only have to outrun you."

Thorsten Ball used this joke recently to make a point about AI and the average software engineer. The joke is more precise than he may have intended. It contains, in five sentences, both a correct economic model and a game-theoretic trap. The model: your value isn't absolute; it's relative to the next-best alternative. The trap: when everyone tightens their shoes, the tiger catches someone anyway, and the race never ends.

Sports analytics formalized this intuition decades ago. The framework is called VORP: Value Over Replacement Player.
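The arithmetic of VORP is almost trivially simple, which is the point. A minimal sketch with illustrative numbers (not from any real dataset): value is output above a freely available replacement-level alternative, not absolute output.

```python
# VORP in miniature: value is measured relative to a freely available
# "replacement-level" alternative, not in absolute terms.
# All numbers are illustrative.

def vorp(player_output, replacement_output):
    """Value Over Replacement: output above the next-best free alternative."""
    return player_output - replacement_output

star = vorp(player_output=9.0, replacement_output=4.0)
solid = vorp(player_output=5.0, replacement_output=4.0)
print(star, solid)  # 5.0 1.0
```

The tiger-joke logic in one subtraction: what matters is the gap over the replacement, not the total. And when the replacement level itself rises, everyone's VORP falls without anyone getting worse.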

The Phantom Limb Economy

There are more employed musicians in the United States today than at any point since 1850. Over 221,000 of them, according to the US Census Bureau. The number gets cited with comforting regularity every time new technology threatens creative work. Phonograph? Musicians survived. Radio? Still here. Streaming? More than ever.

The data is real enough; it just doesn't tell you what you think it does. Arun Panangatt took the 221,000 figure apart, and what sits inside it undermines the argument the number is usually recruited to make.

MOVE: Metrics for the AI-Native Organization

We spent a decade measuring how fast teams ship code. Now the question is how fast the whole organization senses, decides, and acts.

MOVE measures what DORA cannot — how effectively an organization operates when intelligent systems participate in execution. Any organization can buy AI. MOVE asks whether AI changed how the organization operates.

The AI Capability Map: An Expanded Inventory

You don't get to opt out of commodity AI. That's what "commodity" means: not "cheap" or "boring" but "compulsory." Ivan Illich saw this pattern with electricity, automobiles, schools. The moment something becomes a utility, non-participation becomes deviance. Prasad Prabhakaran's recent Wardley map of enterprise AI capabilities plots where different technologies sit on the evolution axis. The map is useful. But its most important insight is implicit: everything in the Commodity column is no longer a choice.

What follows is an expanded inventory: the original categories, what's missing from each, and the harder question of what the categories themselves fail to capture. The act of mapping shapes what gets mapped. The categories we use determine the investments we make. And some capabilities don't fit the Genesis-to-Commodity axis at all.

Reading After Readers

Jonathan Boymal, writing about education in the AI era, argued that deep reading, historically treated as foundational to intellectual development, requires reassessment. The humanist tradition from Simone Weil through Maryanne Wolf emerged "under conditions of relative informational scarcity." Those conditions no longer hold. Students now encounter algorithmic language that "asks less to be interpreted than to be accepted." The response, Boymal suggests, is lateral reading: moving across contexts rather than diving into single texts, asking where claims come from and how meaning differs elsewhere.

The counterpoint came from Johanna Winant in Boston Review, defending close reading's ongoing power. Close reading, she argues, "grounds and extends an argument, reasoning from what we all know to be the case to what the close reader claims is the case." Her students at West Virginia University learned to build arguments from the ground up, noticing details small enough to fit under a finger. One became a nurse who writes notes for doctors using argumentative techniques learned from literature. Another used the method to write a police report about an assault "so she would be understood and believed." Close reading, in this telling, isn't literary technique—it's transferable attention to detail that works in courtrooms and hospitals.

The Family Quarrel

Look at what close and lateral reading share. Both assume an autonomous reader navigating information. Both treat texts as discrete objects to be approached with the right technique. Close reading says go deep; lateral reading says don't be naive. But both preserve the modernist figure of the individual reader making choices about what to trust and how to engage.

This is a family quarrel. The participants disagree on tactics while sharing deeper assumptions: the reader as subject, the text as object, reading as something the subject does to the object. The debate generates heat because both sides sense something is shifting, but neither quite names it. They're arguing about which room to occupy while the building's foundation moves.

The question isn't close versus lateral. It's what happens to reading when the reader—the individual, autonomous, choosing reader—starts to dissolve.

The Anatomy of a Ratchet

Dan Lorenc's multiclaude takes a counterintuitive position on multi-agent orchestration: the best way to coordinate AI agents working on the same codebase is to barely coordinate them at all. Instead of building sophisticated protocols to prevent conflicts and duplicate work, multiclaude embraces chaos and lets CI serve as the filter. The result is a system that ships more code precisely because it doesn't try to manage what each agent is doing.

This isn't accidental. The project calls its philosophy "The Brownian Ratchet," borrowing from physics: random motion in one direction, a mechanism that prevents backward movement, and net forward progress despite apparent disorder. The metaphor isn't decoration; it's the architectural blueprint.
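The ratchet mechanism can be sketched in a few lines. This is a toy model of the philosophy as described, not multiclaude's actual code: uncoordinated agents propose random changes, and a one-way gate standing in for CI rejects anything that moves the score backward.

```python
# A toy Brownian ratchet: random, unmanaged proposals plus a one-way
# filter (standing in for CI) yield net forward progress.
# Purely illustrative -- not multiclaude's implementation.
import random

def ratchet(proposals=1000, seed=42):
    random.seed(seed)
    score = 0
    for _ in range(proposals):
        delta = random.choice([-1, 1])   # chaotic, uncoordinated proposal
        if score + delta > score:        # the CI gate: no backward movement
            score += delta               # "merged"
        # rejected proposals cost nothing but the attempt
    return score

print(ratchet())  # net progress despite half the proposals being noise
```

Roughly half the proposals fail the gate and are simply discarded; the system ships anyway, because the filter, not the coordination, is doing the work.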