Beyond the Flywheel
For twenty years, we've lived with a particular story about how digital growth works—the flywheel spinning faster with each turn, users attracting more users, data improving services that draw more data; it's a narrative that explained why Amazon felt inevitable, why Facebook seemed unstoppable, why Uber could burn billions and still look like the future. The flywheel wasn't entirely wrong, of course. It captured something real about network effects and compounding advantages; it gave us services that were genuinely transformative, at least for a while. Yet what looked like perpetual motion from one angle was, we now see, extraction from another.
Through bitter experience—and here the bitterness is earned—we've discovered how every flywheel eventually slows. Growth stalls, cheap money evaporates, and platforms begin their predictable metamorphosis: raising fees, cramming in advertisements, prioritizing pay-to-play over merit, throttling APIs that once welcomed developers, degrading the very services that drew us in. There's a name for this pattern (Cory Doctorow gave us the memorable if inelegant term "enshittification"), but the phenomenon predates the nomenclature: it's the steady, almost mechanical shift of value away from users and toward quarterly targets, a process as reliable as entropy itself.
Consider the recent history—Reddit hiking API prices so steeply that Apollo, beloved by millions, simply shut down; Twitter (now X) killing free API access entirely, breaking the third-party ecosystem that made it interesting; Amazon making advertisements the default on Prime Video unless you pay extra for what you already had; Instagram testing an interface that opens directly to Reels, burying the photo feed that was its original purpose; Google's first page now so dense with ads, AI summaries, and sponsored links that people append "reddit" to queries just to find actual human conversation. Each move represents the same trade: user experience for shareholder value, community for commodity.
But the platforms' extraction goes deeper than inconvenience. Facebook dismantled local news ecosystems without hiring a single reporter; Uber disrupted century-old taxi systems, replacing worker protections with surge-priced precarity; Google trained its AI on everyone's content, then served summaries that bypass creators entirely. They privatized our town squares—spaces where we gather, argue, learn, and connect—then left society with the bill: anxious teenagers scrolling through impossible standards, gig workers without benefits or bargaining power, democracies fracturing along algorithmic fault lines. The flywheel's genius, we now understand, wasn't innovation but extraction: move fast, break public goods, capture the value, externalize the costs.
The Infrastructure Alternative
This pattern matters urgently because we stand at what historians will recognize as a choice point. The degradation of platforms coincides—not coincidentally—with the emergence of artificial intelligence as a general-purpose technology. The question before us isn't whether AI will reshape society; that's already happening. The real puzzle is what kind of reshaping we'll permit, what values we'll encode, whose interests will predominate. We can treat AI as another extraction machine optimized for next quarter's earnings call, or we can build it as computational infrastructure for the next century: something that, like roads or water systems or law itself, holds under strain, expands human capability, and serves everyone (not equally perhaps—infrastructure rarely does—but at least intentionally, publicly, accountably).
The distinction between platform and infrastructure isn't merely semantic. Platforms extract; infrastructure enables. Platforms optimize for shareholders; infrastructure optimizes for society. Platforms can pivot, shut down, or radically alter their terms whenever the business model demands it; infrastructure commits to continuity, to being there tomorrow and the day after, boring perhaps but reliable. This is nation-building work—not algorithms alone, but the workforce to understand them, the trust to implement them, the technology stack to run them independently, and the economic structures to sustain them. These are foundations on which everything else can flourish.
Who Controls the Architecture
When we think about computational infrastructure, the immediate question becomes one of control—though control is perhaps too simple a word for what we're discussing. Every automated decision shifts agency from humans you can argue with to systems you can't; every algorithmic mediation interposes code between intention and outcome. Search results that once offered ten sources for you to evaluate now provide one definitive answer (often wrong, occasionally dangerously so). Loan decisions once made by bankers you could appeal to—however imperfectly—now emerge from models that offer no recourse, no explanation beyond a score.
Soon—and by soon I mean within the decade—it won't just be decisions but actions: agents trading, hiring, negotiating, building, all with authority we grant but cannot revoke transaction by transaction. They'll move faster than oversight can follow, coordinate in ways that exceed human comprehension, create facts on the ground before anyone can object. (The flash crash of 2010, when algorithms knocked a trillion dollars off the market in minutes, was merely a preview.) We're not building tools in any traditional sense of that word; we're building decision infrastructure, the computational substrate through which collective intelligence emerges and coordinated action happens.
From Physical to Computational Sovereignty
The history of sovereignty offers instructive parallels. In the industrial age, sovereignty meant controlling physical infrastructure: ports that could be blockaded, railways that could be severed, energy grids that could be defended or destroyed. In the post-industrial period, it meant controlling information flows—broadcast towers, submarine cables, satellite links. Now we're entering something qualitatively different: an era where intelligence itself becomes infrastructure, where the capacity to decide, to act, to coordinate at machine speed becomes the foundation on which everything else depends.
Look at the societies that have actually lasted—not the flash-in-the-pan empires but the civilizations that endure. They're built not on compounding loops but on the strength of their social fabric, the robustness of their institutions, the redundancy of their critical systems. Singapore articulates this as Total Defence: military, civil, economic, social, psychological, and now digital—six interlocking layers of resilience. The Nordic countries speak of Strong Societies, where trust and cohesion matter more than any fortress, where social infrastructure proves more durable than concrete. Switzerland maintains its sovereignty not through military might but through calculated neutrality backed by universal preparedness. The lesson recurs: sovereignty isn't one moat or one product or one brilliant algorithm; it's the integrated strength of people, institutions, and infrastructure working in conscious coordination.
The Four Pillars
This understanding suggests we need to approach AI at the level of civilizational infrastructure—not as a quarterly growth engine to be optimized and extracted from, but as foundational capacity that must be cultivated, maintained, and renewed across generations. Like any critical infrastructure, it requires redundancy, maintenance, workforce development, and governance structures that outlast political cycles. The framework that emerges has four essential pillars:
Workforce: The Human Gradient
AI will eliminate jobs—many of them, across every sector, faster than most people expect. But that's not the whole story, or even the most interesting part. Every transformative technology kills certain skills while birthing others; telegraph operators vanished, but network engineers emerged; typists disappeared as word processors appeared; filing clerks gave way to database administrators. Today's pattern follows the historical template but accelerates it: programmers spend less time writing code, more time reviewing what AI generates; doctors spend less time on diagnosis, more time on difficult cases and human connection; lawyers draft less, negotiate more. The skill migrates upward—from production to supervision, from execution to judgment.
The danger (and it's a genuine danger, not mere Luddite anxiety) is that we drift from "humans in the loop" to "humans on the loop"—from active engagement to passive approval, from understanding to rubber-stamping. When expertise atrophies from disuse, when the next generation never learns the fundamentals because the machine always handles them, we risk what some theorists call "competence collapse." The U.S. Navy, recognizing this, brought back celestial navigation training after years of GPS dependence—not because everyone needs a sextant, but because someone must know how to find their way when satellites fail.
This suggests we need what might be called reserve competence: enough humans maintaining deep understanding to catch failures, to know when the model has lost the plot, to rebuild when necessary. It means reskilling programs that go beyond compliance theater—actual education that prepares people for the jobs that will exist, not the ones that used to. It means safety nets that actually catch people, not the bureaucratic pantomime we often settle for. And critically, it means preserving and cultivating what remains irreplaceably human: taste, judgment, the ability to wrestle with ambiguity, the wisdom to know when the machine is confidently wrong, the courage to override the algorithm when humanity demands it.
Trust: The Social Contract
Responsible AI without enforcement is just performance art—elaborate principles that sound wonderful in conference keynotes but evaporate when quarterly pressures mount. If citizens must live under systems they cannot challenge, appeal, or even understand, legitimacy collapses; if decisions that shape lives remain opaque, unaccountable, unreviewable, then we're building algorithmic authoritarianism regardless of our intentions.
Trust requires more than transparency (though transparency helps); it requires genuine accountability with real consequences. It means audits with teeth—not checkbox exercises but genuine investigations that can shut down systems that violate public trust. It requires rights of redress that actually function, not complaint forms that disappear into digital voids. It means confronting failures in public rather than burying them under non-disclosure agreements and PR statements. When San Francisco's crime prediction algorithm was found to perpetuate racial bias, when the Netherlands' child benefit system wrongly accused thousands of families of fraud, when Amazon's hiring algorithm systematically disadvantaged women—each failure was initially hidden, denied, minimized. Trust dies in darkness.
But here's the harder truth: regulation must be shielded from capture. When the people writing the rules are planning their next job at the companies they're regulating, when the technical expertise sits entirely on industry's side, when lobbying budgets exceed regulatory budgets by orders of magnitude, the whole system becomes theater. We need regulators with genuine independence—not just on paper but in practice; real technical capacity—people who understand these systems deeply enough to challenge them; and the authority to act when things go wrong—not just to write strongly worded letters but to halt the systems that cause harm.
Technology: The Stack of Sovereignty
A nation that depends entirely on external platforms might believe it's purchasing efficiency—and in the short term, it might be right. But in truth, it's mortgaging sovereignty, trading independence for convenience, accepting dependency as the price of not having to build. The uncomfortable fact (uncomfortable because it challenges the comfortable assumptions of globalization) is this: to rent your stack is to rent your future.
Consider what dependency actually means in practice. If you cannot walk away from a vendor in thirty days—really walk away, with your data, your processes, your operations intact—then you're not a customer but a colony. Every dependency that cannot be broken isn't infrastructure but submission dressed in the language of partnership. When critical decisions about your citizens are made by systems you don't control, can't audit, can't modify, then you've outsourced not just technology but sovereignty itself.
This doesn't mean autarky—building everything from scratch, rejecting all foreign technology, pursuing some impossible ideal of self-sufficiency. The global division of labor exists for good reasons; no one can make a pencil alone, as Leonard Read famously observed. But it does mean maintaining what might be called sovereign optionality: ensuring you can switch when necessary, that you understand how critical systems work, that you have alternatives when relationships sour or interests diverge. The test is simple but severe: can you leave? If not, you don't own your infrastructure—you're renting it, and rent, as any tenant knows, always goes up.
Economy: The Velocity of Decision
It's comforting to say AI will grow the economy—productivity gains, efficiency improvements, new industries emerging. All likely true. The harder truth is that AI won't just grow the economy; it will run it, at speeds that make high-frequency trading look leisurely. Agents will approve loans, set prices, negotiate contracts, allocate credit, coordinate supply chains—all moving faster than human oversight can match. In this world, decision-power itself becomes currency; control over these systems constitutes economic power in its rawest form.
If those decisions are made by systems controlled elsewhere—if the algorithms that determine creditworthiness, set interest rates, allocate investment, and coordinate markets are owned and operated beyond your borders—then you haven't just outsourced technology. You've outsourced command of your economy itself, accepted a form of algorithmic colonialism that's all the more powerful for being invisible.
Think about what that means concretely. When an algorithm somewhere else decides who gets capital, at what price, on what terms, that's not a technology question anymore—it's a sovereignty question. When foreign systems determine which businesses can access credit, which sectors receive investment, which regions get development, you're no longer running your own economy; you're asking permission to participate in someone else's. The cognitive division of labor that has characterized global capitalism for decades gets cranked to an unprecedented extreme.
Economic sovereignty in the age of AI isn't about controlling every transaction—that's neither possible nor desirable. It's about preserving human judgment over the transactions that define your economy's character: the exceptions that set precedents, the edge cases that reveal values, the crucial decisions about what kinds of risks to underwrite and what kinds of futures to fund. Let the machines handle the routine; keep humans in charge of what matters.
Building for Generations
What becomes possible when we think of AI as infrastructure rather than product? Consider education, where teachers currently drown in compliance paperwork while students slip through cracks—infrastructure could handle the bureaucracy while surfacing exactly which students need help, when, and why. Or healthcare, where physicians spend more time with electronic records than patients—infrastructure could manage the documentation while doctors focus on the human being in front of them. Or justice, where algorithms already influence bail and sentencing—infrastructure could surface patterns across decades of cases while preserving judicial discretion over individual lives.
The examples multiply: local councils that can fix potholes and approve permits faster than committees can convene, but where citizens still control budgets and priorities; employment services that match people to actual careers rather than gaming placement metrics for government contracts; small businesses that can access the same analytical capabilities as multinationals without surrendering their data to platform monopolies. Not extraction dressed as innovation, but genuine infrastructure that amplifies human judgment while operating at the speed and scale modernity demands.
The easier path—and clarity requires admitting it is easier—is to rent everything, optimize for the quarter, let platforms extract value until they inevitably degrade. That path is not just well-worn but well-lit, with consultants and vendors eager to guide you down it. The returns are quick, the risks are hidden, and by the time the real costs come due, you'll probably have moved on to the next role.
The harder path means building when buying would be simpler, thinking in decades when quarters are what get measured, treating AI as infrastructure to be maintained rather than software to be subscribed to. It means accepting that infrastructure is thankless work—when it succeeds, it disappears into the background; when roads work, nobody credits them for the economy they enable; when power grids function, nobody celebrates the lives they improve. They just work, year after year, holding up everything else.
The Stakes of Stewardship
But here we must be honest about a final complexity—infrastructure is only as good as the humans who maintain it. The drift from "humans in the loop" to "humans on the loop" warned of earlier carries a moral hazard as well as a competence risk: what Sartre called "bad faith"—pretending we have no choice when we're actually choosing not to choose. The machine's efficiency becomes our excuse for abdication.
The hardest part isn't the technology, which will largely build itself given sufficient resources and talent. The hardest part is maintaining our agency: staying sharp enough to supervise systems that rarely fail, wise enough to intervene when intervention seems unnecessary, and human enough to know which skills we can't afford to lose. As Appiah reminds us in another context, "if there's one skill that matters above all others, it's the skill of knowing which of them matter"—and that metacognitive capacity, that judgment about judgment itself, may be the one thing we must never outsource.
The platforms taught us that extraction is temporary—sooner or later, you run out of users to squeeze, trust to violate, commons to enclose. But infrastructure compounds differently: every year it stands, it enables more; every crisis it survives, it proves its worth; every generation that inherits it in working order inherits possibility itself. The work of building it is hard and the returns are slow, but what gets built becomes the ground on which everything else stands.
That's not a quarterly story or even an electoral cycle story—it's how civilizations last. The choice before us isn't whether to build AI (that train has left the station) but what kind of AI infrastructure we'll build: extractive or enabling, proprietary or public, foreign or sovereign, fragile or resilient. The flywheel promised endless acceleration but delivered eventual degradation. Infrastructure promises something less dramatic but more durable: systems so reliable they become invisible, so foundational they expand what's possible for everyone who uses them, so thoughtfully maintained that our grandchildren will take them for granted—which is, perhaps, the highest compliment one generation can pay another.