DORA measures how fast code flows from commit to production. MOVE measures how fast the organization senses, decides, and acts. Both track real performance. Both share an assumption: that someone, somewhere, understood the system well enough to make the right call.
That assumption has a cost, and it's larger than most organizations realize.
Two guys in the jungle. A tiger charges. One kneels to tighten his shoelaces. The other yells: "You can't outrun a tiger!" First guy: "I don't have to outrun the tiger. I only have to outrun you."
Thorsten Ball used this joke recently to make a point about AI and the average software engineer. The joke is more precise than he may have intended. It contains, in five sentences, both a correct economic model and a game-theoretic trap. The model: your value isn't absolute; it's relative to the next-best alternative. The trap: when everyone tightens their shoes, the tiger catches someone anyway, and the race never ends.
Sports analytics formalized this intuition decades ago. The framework is called VORP: Value Over Replacement Player.
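The arithmetic behind VORP is almost trivially simple, which is part of its power: value is always measured against the freely available alternative, never in absolute terms. A minimal sketch, with hypothetical numbers:

```python
# Illustrative sketch of the VORP idea: a player's value is their output
# minus what a freely available "replacement-level" player would produce.
# The numbers below are hypothetical, chosen only to show the shape of it.

def vorp(player_output: float, replacement_output: float) -> float:
    """Value Over Replacement Player: marginal value above the
    next-best freely available alternative."""
    return player_output - replacement_output

# A star producing 60 units against a 20-unit replacement is worth 40,
# not 60. If the replacement level rises to 50, the same star's premium
# collapses to 10 - without the star getting any worse.
star_premium = vorp(60.0, 20.0)        # 40.0
after_tide_rises = vorp(60.0, 50.0)    # 10.0

print(star_premium, after_tide_rises)
```

Note what moves the number: the replacement level, which no individual player controls. That is the tiger in the joke.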
There are more employed musicians in the United States today than at any point since 1850. Over 221,000 of them, according to the US Census Bureau. The number gets cited with comforting regularity every time new technology threatens creative work. Phonograph? Musicians survived. Radio? Still here. Streaming? More than ever.
The data is real enough; it just doesn't tell you what you think it does. Arun Panangatt took the 221,000 figure apart, and what sits inside it undermines the argument the number is usually recruited to make.
We spent a decade measuring how fast teams ship code. Now the question is how fast the whole organization senses, decides, and acts.
MOVE measures what DORA cannot — how effectively an organization operates when intelligent systems participate in execution. Any organization can buy AI. MOVE asks whether AI changed how the organization operates.
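The contrast is easiest to see in what each framework can actually compute. DORA's core flow metric, lead time for changes, reduces to arithmetic over two timestamps per change; a minimal sketch with hypothetical timestamps (real pipelines would pull these from version control and deploy logs):

```python
# Minimal sketch of one DORA metric - lead time for changes - computed as
# the median elapsed time from commit to production deploy.
# The timestamps here are hypothetical illustrations.
from datetime import datetime
from statistics import median

def lead_time_hours(commit_ts: datetime, deploy_ts: datetime) -> float:
    """Hours elapsed from commit to production deploy."""
    return (deploy_ts - commit_ts).total_seconds() / 3600

changes = [
    (datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 1, 15, 0)),   # 6h
    (datetime(2024, 5, 2, 10, 0), datetime(2024, 5, 3, 10, 0)),  # 24h
    (datetime(2024, 5, 4, 8, 0), datetime(2024, 5, 4, 20, 0)),   # 12h
]

median_lead_time = median(lead_time_hours(c, d) for c, d in changes)
print(median_lead_time)  # 12.0
```

There is no equivalent pair of timestamps for "the organization sensed, decided, and acted." That asymmetry is the point: DORA is computable from logs; MOVE asks a question the logs don't answer.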
You don't get to opt out of commodity AI. That's what "commodity" means: not "cheap" or "boring" but "compulsory." Ivan Illich saw this pattern with electricity, automobiles, schools. The moment something becomes a utility, non-participation becomes deviance. Prasad Prabhakaran's recent Wardley map of enterprise AI capabilities plots where different technologies sit on the evolution axis. The map is useful. But its most important insight is implicit: everything in the Commodity column is no longer a choice.
What follows is an expanded inventory: the original categories, what's missing from each, and the harder question of what the categories themselves fail to capture. The act of mapping shapes what gets mapped. The categories we use determine the investments we make. And some capabilities don't fit the Genesis-to-Commodity axis at all.
Jonathan Boymal, writing about education in the AI era, argued that deep reading, historically treated as foundational to intellectual development, requires reassessment. The humanist tradition from Simone Weil through Maryanne Wolf emerged "under conditions of relative informational scarcity." Those conditions no longer hold. Students now encounter algorithmic language that "asks less to be interpreted than to be accepted." The response, Boymal suggests, is lateral reading: moving across contexts rather than diving into single texts, asking where claims come from and how meaning differs elsewhere.
The counterpoint came from Johanna Winant in Boston Review, defending close reading's ongoing power. Close reading, she argues, "grounds and extends an argument, reasoning from what we all know to be the case to what the close reader claims is the case." Her students at West Virginia University learned to build arguments from the ground up, noticing details small enough to fit under a finger. One became a nurse who writes notes for doctors using argumentative techniques learned from literature. Another used the method to write a police report about an assault "so she would be understood and believed." Close reading, in this telling, isn't literary technique—it's transferable attention to detail that works in courtrooms and hospitals.
Look at what close and lateral reading share. Both assume an autonomous reader navigating information. Both treat texts as discrete objects to be approached with the right technique. Close reading says go deep; lateral reading says don't be naive. But both preserve the modernist figure of the individual reader making choices about what to trust and how to engage.
This is a family quarrel. The participants disagree on tactics while sharing deeper assumptions: the reader as subject, the text as object, reading as something the subject does to the object. The debate generates heat because both sides sense something is shifting, but neither quite names it. They're arguing about which room to occupy while the building's foundation moves.
The question isn't close versus lateral. It's what happens to reading when the reader—the individual, autonomous, choosing reader—starts to dissolve.
Dan Lorenc's multiclaude takes a counterintuitive position on multi-agent orchestration: the best way to coordinate AI agents working on the same codebase is to barely coordinate them at all. Instead of building sophisticated protocols to prevent conflicts and duplicate work, multiclaude embraces chaos and lets CI serve as the filter. The result is a system that ships more code precisely because it doesn't try to manage what each agent is doing.
This isn't accidental. The project calls its philosophy "The Brownian Ratchet," borrowing from physics: random motion in one direction, a mechanism that prevents backward movement, and net forward progress despite apparent disorder. The metaphor isn't decoration; it's the architectural blueprint.
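The ratchet mechanic is concrete enough to sketch as a toy simulation. This is an illustration of the principle, not multiclaude's actual code: agents make random, uncoordinated proposals (the Brownian motion), and a CI-style gate rejects anything that regresses (the pawl that prevents backward movement).

```python
# Toy sketch of a Brownian ratchet applied to code: random proposals,
# a gate that admits only non-regressions, net forward progress.
# Purely illustrative - not multiclaude's implementation.
import random

def ratchet(steps: int, seed: int = 42) -> int:
    """Run `steps` random proposals through a non-regression gate.

    Without the gate, a +/-1 random walk drifts nowhere on average.
    With it, progress can only accumulate.
    """
    rng = random.Random(seed)
    score = 0
    for _ in range(steps):
        proposal = score + rng.choice([-1, 1])  # uncoordinated, random work
        if proposal >= score:                   # the "CI" gate: reject regressions
            score = proposal
    return score

print(ratchet(1000))
```

Roughly half of all proposals are wasted, which looks inefficient until you notice that the alternative, coordinating the proposals up front, costs more than the waste does. The filter is cheap; the planning isn't.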
Mark Carney said it plainly: nostalgia is not a strategy. He was talking about geopolitics, about Canada's relationship with an America that no longer plays by the old rules. But the line lands harder than he intended. It cuts through the entire discourse about AI and work, the endless back-and-forth between doomers and boosters, the think pieces and policy papers and LinkedIn manifestos. Nostalgia is not a strategy. You cannot wish your way back to a world that isn't coming back. Accept the fracture. Move forward.
He's right. And almost everyone responding to him is proving his point while thinking they're refuting it.
Peter Steinberger recently described his workflow with GPT-5.2's codex: he no longer reads code line by line but "watches the stream," trusting the model's output more than any previous generation. The striking phrase wasn't about capability. It was about practice. He called certain established workflows "charades"—rituals necessary for older models that become vestigial as the technology improves. Plan mode, he suggested, is "a hack."
This provokes a question that's been nagging at me as the primitives for AI collaboration proliferate: plan mode, terminals, memory, canvas, artifacts, statuslines, chat. We keep adding surfaces. We keep building more scaffolding. The implicit assumption is that more visibility and more control equals better collaboration. But what if the trajectory runs the other way? What if the primitives that define skilled collaboration are precisely the ones that disappear?
There is a drawer in my mind where the passports accumulate.
I do not mean this only as metaphor. Reading widely produces a particular sensation, one that rarely gets named. You finish a week in which you have moved from W.G. Sebald's melancholy wanderings to a paper on protein folding to Fernando Pessoa's heteronyms to something dense on market microstructure. And you notice that you were not quite the same person in each encounter. The reader of Sebald occupied a tempo, a quality of attention, that the reader of the protein paper could not sustain. Pessoa demanded a willingness to dissolve that the market microstructure paper would have found absurd.
These are not "perspectives" you have acquired. They are closer to visas stamped in a document you did not know you carried. Each grants temporary residence in a country with its own customs, its own texture of thought, its own way of standing in relation to time. And here is what nobody tells you: many of these countries no longer exist.