# Twelve Predictions for 2026
Most AI predictions ask what the technology will do. These predictions ask what we'll become.
The capability forecasts will mostly be right. Models will get faster, cheaper, more capable. Agents will handle more tasks. Adoption curves will steepen. None of that is particularly interesting to predict because none of it tells us what matters: how these tools will reshape the people and organizations that use them.
These are my predictions for the transformations we'll see in 2026. I draw on thinkers who've spent careers studying what happens when tools change their users: Stiegler's pharmacology of technology, Brown's analysis of neoliberal subjectivity, Zuboff's instrumentarian power, Ingold's critique of hylomorphism, Snowden's complexity framework.
## At a Glance
| # | Prediction |
|---|---|
| 1 | Workers trained in AI-assisted environments will discover they can't function without assistance—and won't know they've crossed the threshold. |
| 2 | AI assistance won't reduce workload. It will raise the baseline while creating new work to absorb the gains. |
| 3 | AI will produce infinite adequate content. Meaning won't disappear; it will migrate upstream, from production to selection. |
| 4 | The architecture of choice blindness is already operational. Agent mediation will complete what recommendation systems began. |
| 5 | "Human-made" will become a managed ambiguity—premium brand, not verifiable fact. |
| 6 | Inside every agentic organization, humans will quietly maintain the actual work. We'll know because layoffs will reveal what capacity maps couldn't see. |
| 7 | 2026 will see a cascade failure that crosses system boundaries. Benchmark gaming will have inflated confidence in capability. |
| 8 | Paying to avoid AI mediation will become luxury. Everyone gets the tool; not everyone gets the human who knows when it's wrong. |
| 9 | Trust infrastructure will become control infrastructure once verification data becomes monetizable. |
| 10 | Skill atrophy will follow the tools, not the work—and the jags are training artifacts, not natural boundaries. |
| 11 | Vibe coding will produce software no one understands. Failure won't matter until it does—and then no one can fix it. |
| 12 | Most content will be produced for AI consumption, not human reading. Human literacy becomes optional; machine parseability mandatory. |
## 1
The prediction: Workers trained in AI-assisted environments will discover they can't function without assistance. The backup forgets how to be backup. Organizations will learn, in moments of system failure or changed circumstance, that "human in the loop" assumed competencies that continuous AI assistance was actively depleting.
Bernard Stiegler developed what he called a pharmacology of technology (borrowed from Derrida, who borrowed from Plato, who noted that the Greek pharmakon meant both poison and cure). The ambivalence isn't a bug to be fixed. It's constitutive. Every tool that extends human capacity creates new dependencies, new incapacities, new vulnerabilities that didn't exist before the extension.
Stiegler called this proletarianization—not in the Marxist sense of class position but in the older sense of losing the knowledge embedded in your practice. The proletarian, etymologically, is one who has nothing left but their offspring; their skill has been captured by machines and managers, leaving only labor-power to sell. Frederick Taylor's time-motion studies began this capture for physical labor. AI completes it for cognitive work.
The evidence is already clinical. A Lancet study found that endoscopists' adenoma detection rate dropped from 28% to 22% after regular AI assistance—a 20% relative decline in finding precancerous growths once the AI was removed. Months of exposure, not years. The pharmacological threshold crossed faster than anyone expected. The researchers called it "the Google Maps effect": the natural human tendency to over-rely on decision support systems until the underlying skill atrophies.
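For readers checking the arithmetic, the relative figure is measured against the baseline detection rate, not read off the six-point gap:

$$\frac{28\% - 22\%}{28\%} \approx 21\%,\ \text{or roughly one fifth}$$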
This is the deepest form of the threshold problem. You cannot perceive the crossing because crossing it is what eliminates your capacity to perceive. The question isn't whether augmentation helps (it does) but at what dose the cure tips into poison. When does "human oversight" become ceremonial, a supervisor rubber-stamping decisions they no longer have the competence to evaluate?
Dave Snowden's Cynefin framework names this property: in complex systems, cause and effect are only coherent in retrospect. You cannot analyze your way to the threshold in advance because the threshold emerges from the interaction of factors that only become legible after the crossing. Organizations will try to monitor for skill degradation, to establish metrics and checkpoints. These efforts assume the problem is complicated—discoverable through expert analysis. But pharmacological thresholds are complex. They reveal themselves only when you've already passed them.
Call it what it is: pharmacology, the constitutive double of augmentation, the incapacity produced by the capacity.
## 2
The prediction: AI assistance won't reduce workload. It will raise the baseline. Every efficiency gain gets captured by expanded scope, reduced headcount, or elevated expectations. Workers will keep more balls in the air while being told they're augmented, enhanced, empowered. The exhaustion is structural, and invisible to those experiencing it.
Wendy Brown, in Undoing the Demos, traced how neoliberal rationality remakes the human subject. The citizen becomes human capital. Every domain (education, health, relationships, leisure) gets reframed as investment in oneself. You don't learn for understanding; you acquire credentials. You don't rest; you recover strategically. The metric is always appreciation of the asset that is you, measured against others similarly investing in themselves.
The achievement-subject that Byung-Chul Han describes—the entrepreneurial self who exploits itself more efficiently than any external discipline could—finds its apotheosis in AI augmentation. The tool makes you more productive; productivity raises the baseline of expected production; meeting the baseline requires more optimization. You run faster to stay in place.
Wharton research calls this the "AI Efficiency Trap": a four-stage pattern where efficiency gains become permanent baselines. The Red Queen dynamic. And it compounds: what AI produces, humans must verify. The "workslop" phenomenon (low-quality AI output that recipients must then fix) means efficiency for the producer becomes verification burden for the recipient. One analysis estimates $9 million in annual drag for a 10,000-employee organization, the hidden cost of absorbing output that passes automated checks but fails human evaluation.
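On the numbers as stated, that works out to a simple per-head figure (plain division, not a separate estimate):

$$\frac{\$9{,}000{,}000\ \text{per year}}{10{,}000\ \text{employees}} = \$900\ \text{per employee per year}$$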
The acceleration isn't experienced as acceleration. Workers report productivity gains even as total cognitive load increases. The treadmill is invisible to those on it. You can't rest because resting means falling behind; you can't question the structure because questioning looks like excuse-making, a failure to take responsibility for your own human capital. The exhaustion may not be experienced as exhaustion but as a new normal that feels like personal inadequacy rather than systemic extraction.
## 3
The prediction: AI will produce infinite adequate content. Every channel will fill with output that's competent, interchangeable, and meaningless. Markets will respond with craft premiums and "human-made" labels. But meaning won't simply become scarce—it will migrate upstream, from production to selection.
Tim Ingold, the anthropologist, has spent decades studying how makers relate to materials. His central argument is that Western thought is captive to hylomorphism—the idea that making is imposing form on matter. You have a design; the material is just stuff to be shaped. Against this, Ingold proposes that real making is conversation. The maker follows the material's tendencies, adjusts to its resistances, discovers possibilities that neither maker nor material could have specified in advance.
AI is pure hylomorphism. You prompt; it produces. The collapse of the creative stack (idea to production to distribution, compressed to nothing) eliminates resistance. And resistance is how we learn. The woodworker who learns which way the grain runs, the writer who discovers what the sentence wants to say: these encounters with material recalcitrance are how knowledge accrues through practice, how the maker becomes skilled rather than merely productive.
Remove the resistance and you remove the education. You can generate more, faster, but you don't learn from the generating because the generating teaches nothing—it complies.
The hylomorphism critique raises a question it doesn't resolve: does infinite adequate content crowd out the conditions for meaning-making, or does meaning-making simply migrate to new domains? If meaning emerged from resistance in production, perhaps it now emerges from curation, selection, and context-setting—activities that remain human even when production doesn't. The curator who chooses what to surface, the editor who shapes the raw output, the audience that determines what resonates: meaning may not disappear so much as move upstream.
This doesn't resolve the problem; it relocates it. The skills that atrophy are production skills. The skills that matter become selection skills. And the question remains whether selection without production retains access to the tacit knowledge that made good selection possible.
## 4
The prediction: As AI agents handle discovery, comparison, and purchase, consumers will lose not just influence over decisions but awareness that decisions occurred. What got shortlisted, what got filtered, what criteria governed the selection—all invisible, all upstream, all architecture.
But this isn't a step change. It's the completion of something already underway.
Shoshana Zuboff's Surveillance Capitalism introduced instrumentarian power: a form of domination that works through behavior shaping rather than coercion or ideology. Instrumentarianism doesn't care about your soul; it cares about your actions. It doesn't need you to believe anything. It just needs you to do what the prediction engines predict, nudged toward outcomes that serve interests other than your own.
The architecture of choice blindness is already operational. Algorithmic curation on platforms already produces it—what TikTok shows, what Amazon recommends, what Spotify queues. You experience the result of a selection process without experiencing the selection. The menu is presented as the territory.
Agent mediation intensifies this but doesn't inaugurate it. "Your customer is an agent" means the last moment of visibility—when the curated options were at least presented to a human—disappears entirely. Delegation becomes invisibility. The instrumentarian power isn't visible because it operates through defaults, through what seems natural, through paths of least resistance that were engineered to resist least in particular directions.
This is qualitatively different from advertising. Advertising presents itself as persuasion and allows resistance. Agent mediation removes even the moment of presentation. There's nothing to resist because there's nothing to see. The choice was made before you knew a choice existed.
Regulatory attention will eventually arrive. But regulation assumes something legible to regulate. The whole point of agent mediation is illegibility: decisions happening in latent spaces no one can audit. The regulatory apparatus was built for a world where power announced itself. Instrumentarian power doesn't announce.
## 5
The prediction: "Human-made" will become a performance genre, not a fact about origin. Content will be designed to signal human production whether or not humans produced it. The market rewards the signal, so the signal is what gets produced. Authenticity becomes something you perform, brand, and sell—detached from any underlying reality about how things were made.
This is Brown's entrepreneurial self applied to creative work. Your humanity becomes an asset class with exchange value in a market where AI handles commodity production. "Being human becomes the differentiator" sounds affirming until you notice the framing: human as competitive positioning, identity as brand strategy, authenticity as authentication theater performed for market advantage.
The EU AI Act's disclosure mandates will accelerate the dynamic. Once "AI-generated" is flagged, "human-generated" becomes a performable brand—with or without humans involved. The premium isn't for quality; AI can produce quality. The premium is for the story that someone cared, for the ritual significance of human attention, for authentication that may or may not correspond to anything about how the thing was made.
The verification problem compounds this: as "human-made" becomes a valuable signal, the incentive to fake it increases, which drives demand for verification infrastructure, which creates new vectors for gaming. C2PA Content Credentials can verify AI involvement but not human involvement; the asymmetry matters. You can prove something touched AI; you can't prove it didn't.
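A minimal sketch of that asymmetry in code. The manifest structure below is invented for illustration (it is not the real C2PA schema or API); the point is the logic of what a provenance check can and cannot conclude.

```python
from typing import Optional

def assessment(manifest: Optional[dict]) -> str:
    """What a provenance check can and cannot conclude."""
    if manifest is None:
        # No credentials at all: could be human-made, could be AI output that
        # was screenshotted, re-encoded, or simply never signed.
        return "unknown origin (absence of evidence, not evidence of absence)"
    ai_steps = [step for step in manifest.get("edit_chain", [])
                if step.get("tool_class") == "generative_ai"]
    if ai_steps:
        # The only positive, checkable claim: an AI tool signed itself into the record.
        return f"AI involvement attested ({len(ai_steps)} signed step(s))"
    # A clean chain shows only what was recorded, not what wasn't.
    return "no AI steps recorded, which does not prove human authorship"

print(assessment(None))
print(assessment({"edit_chain": [{"tool_class": "generative_ai", "tool": "image_model"}]}))
print(assessment({"edit_chain": [{"tool_class": "camera_capture"}]}))
```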
The equilibrium is probably "human-made" as luxury brand rather than verifiable fact—like "organic" or "artisanal." A managed ambiguity that serves market differentiation without resolving the underlying question. The authentication isn't about truth. It's about the performance of a category that consumers will pay for.
## 6
The prediction: Inside every agentic organization, humans will quietly maintain the actual work. They'll patch what agents can't see, handle exceptions the system doesn't model, embody tacit knowledge the capacity map refuses to recognize. These shadow systems will be undocumented, unmeasured, invisible to leadership. And load-bearing.
Consider the procurement specialist at a hospital who knows which vendor API calls time out under load and manually re-submits orders every Tuesday when the system backs up. Or the senior developer who maintains a mental map of which AI-generated code paths actually work in production versus which ones pass automated tests but fail under edge conditions. The documentation says the workflow is automated. The workflow is automated. These people are why it works.
James C. Scott documented how high-modernist schemes always generate informal resistance from the people who need to make things work. The formal system sees legible inputs and outputs; the shadow system handles everything illegible. Organizations that redesign around AI will create clean processes that ignore the messy reality of work. The people who bridge the gap—translating between what the system expects and what actually happens—won't appear on any org chart.
Shadow systems cluster at predictable locations: at the interfaces between AI systems where handoffs fail silently, at exception-handling boundaries where the model's training distribution doesn't match production reality, wherever tacit knowledge is required that can't be specified in advance. The procurement specialist who knows which API calls to re-submit isn't following a procedure that could have been documented. She's responding to patterns that only become visible through immersion in the work itself. You can't capture that in a capacity map because the capacity map assumes the complicated domain. The actual work keeps slipping into complexity.
This labor will be invisible until it walks out the door. Organizations that lay off "redundant" roles after AI adoption will experience a measurable increase in system failures within 6-12 months—that's the testable version of this prediction. The knowledge was tacit, embedded in practice, resistant to documentation precisely because it lived in the gap between what the official process specified and what the work actually required. The shadow system becomes visible only through its absence.
## 7
The prediction: 2026 will see at least one high-profile cascade failure. An over-optimized system will encounter conditions its designers assumed away. No human will be positioned to intervene—not because humans were absent but because the humans present lacked the competence, authority, or situational awareness to act. "Human in the loop" will stop being reassurance and start being questioned.
High-profile failures have already occurred: Tesla FSD crashes, Cruise dragging a pedestrian 20 feet, Air Canada's chatbot giving bad advice that a tribunal held the airline responsible for. What distinguishes a 2026 event is the cascade element: failures that cross system boundaries, affect multiple organizations, and reveal interconnection that wasn't previously visible.
A specific mechanism feeds this overconfidence: benchmark gaming. Benchmarks are verifiable environments, which makes them susceptible to the same optimization pressure that makes models spike in capability. As Karpathy observed, "training on the test set is a new art form." Organizations deploy based on benchmarked performance that's been systematically inflated by training incentives, then encounter conditions outside the test distribution. The gap between benchmarked capability and actual capability is invisible until it isn't.
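A toy sketch of the gap this creates, with invented numbers and no real model: a system that has memorized leaked benchmark items looks strong on the benchmark and collapses on anything outside it.

```python
# Toy model of benchmark contamination. Everything here is invented for
# illustration; the point is the shape of the gap, not the magnitudes.
import random

random.seed(0)

# A benchmark of 100 question -> answer pairs.
benchmark = {f"q{i}": f"a{i}" for i in range(100)}

# Suppose 80 of those items leaked into the training corpus.
leaked = dict(list(benchmark.items())[:80])

def model(question: str) -> str:
    """Answer from memory if the item leaked; otherwise guess from known answers."""
    if question in leaked:
        return leaked[question]
    return random.choice(list(leaked.values()))

def accuracy(items: dict) -> float:
    return sum(model(q) == a for q, a in items.items()) / len(items)

fresh = {f"new_q{i}": f"new_a{i}" for i in range(100)}  # questions outside the benchmark

print(f"benchmark score:        {accuracy(benchmark):.0%}")  # 80%: inflated by leakage
print(f"out-of-benchmark score: {accuracy(fresh):.0%}")      # 0%: memorized answers never apply
```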
The fragility compounds quietly. Each efficiency optimization removes a buffer. Each removal seems rational in isolation. The system as a whole becomes a house of cards—not because any single card is weak but because removing "redundant" cards was the entire optimization strategy. Nassim Taleb's framework applies: every optimization that removes human redundancy is a bet that conditions remain stable. When AI agents transact with AI agents, human circuit breakers are removed entirely. Small errors cascade with no one positioned to notice until too late.
Cynefin clarifies the failure mode. Removing human buffers assumes the system operates in the complicated domain: stable enough that expert design can anticipate failure modes, that sense-analyze-respond will catch problems before they propagate. But interconnected systems under stress don't stay complicated. They drift toward complexity, where causes produce effects that weren't predictable from the initial conditions, where the right response is probe-sense-respond—and probing requires someone positioned to probe. Chaotic conditions demand act-sense-respond, immediate intervention to stabilize before analysis. Neither response is available when the humans have been optimized out.
The highest-risk domains: financial systems where algorithmic trading already produces flash crashes, logistics where supply chain optimization has removed inventory buffers, healthcare where diagnostic systems may interact with treatment systems with minimal human checkpoints. A domain-specific prediction is riskier but more useful: watch for cascade events in logistics, where the interconnection is highest and the buffers thinnest.
## 8
The prediction: Paying to avoid AI mediation will become luxury, not Luddism. Human customer service, unaugmented education, the experience of dealing with someone who has time to attend—these will command premium prices and carry social status.
But the class structure is more complex than "AI for the many, human attention for the few."
This is the refusal economy. Not rejection of technology but selective exemption from it. The wealthy already purchase human attention: concierge medicine, private tutors, personal assistants. As AI mediation becomes the default experience, unmediated human contact becomes scarce and therefore valuable. The refusal is marketed, not stigmatized; positioned as premium, not resistance.
The pattern has precedent. Organic food, slow fashion, digital detox retreats: each emerged as the commodity version of some experience became so degraded that paying to avoid it became desirable. AI mediation is the next commodity experience to generate its own refusal premium.
The sharper formulation is contested. Karpathy argues that "regular people benefit a lot more from LLMs compared to professionals"—vibe coding lets anyone build software, AI writing assistants help the less-skilled more than the already-skilled. There's truth in this: AI does democratize production. But democratized production and quality outcomes aren't the same thing. The truly wealthy will get both: AI productivity plus human oversight, AI efficiency plus human judgment. The class divide isn't AI versus human but AI-plus-human for some, AI-alone for the rest. Everyone gets access to the tool; not everyone gets access to the human who knows when the tool is wrong.
## 9
The prediction: The infrastructure built to establish trust (provenance, audit trails, content credentials) will become infrastructure for influence. What can be verified can be shaped. The systems we build to know what's true will be the systems through which truth gets managed.
The prediction that "trust becomes expensive" captures something real. When AI can generate anything, establishing authentic origin requires infrastructure: cryptographic signatures, audit trails, verification protocols. C2PA Content Credentials, adopted by OpenAI and others, attach metadata to establish origin and detect alterations.
But Zuboff's framework asks: who controls the trust infrastructure? Legibility is the precondition for control. The systems that verify provenance are systems that track provenance. The platforms that authenticate content are platforms that know what content exists, where it came from, who made it. Trust infrastructure is surveillance infrastructure by another name. What can be audited can be shaped.
The pivot point is monetization. Trust infrastructure becomes control infrastructure when verification data becomes monetizable—when a platform uses verification records for purposes beyond verification. The question is mechanism: Does it happen when regulators mandate verification that platforms then monetize? When interoperability requirements create centralized verification bodies? When verification becomes a condition of distribution and the distributors set the terms?
The direction is clear even if the precise mechanism isn't. Infrastructure built for one purpose gets repurposed for the purposes of whoever controls it. This has happened with every information infrastructure in living memory. It will happen with trust infrastructure too.
## 10
The prediction: Skill atrophy won't be uniform. It will follow the contours of what's easiest to externalize. Workers will retain capacities that resist externalization while losing capacities that externalize smoothly. The shape of the hollowing will be determined by the shape of the tools, not by what would be safe or wise to lose.
The junior analyst who uses Claude to draft equity research may retain the ability to structure an argument while losing the ability to read a balance sheet closely. The architect using parametric design may retain spatial intuition while losing the feel for construction constraints that came from years of watching buildings fail. The doctor using diagnostic AI may retain bedside manner while losing the pattern recognition that came from seeing thousands of cases without assistance.
The hollowing follows the tools, not the work. What gets externalized is what AI handles well, not what humans can afford to forget. The shape of incapacity is the negative space of the AI's competence. Harvard Business School's "jagged frontier" research documents this precisely: AI improves performance 40% within its capability boundary but causes a 19 percentage point performance drop outside it. The frontier is jagged, and the hollowing follows the jags.
But the jags aren't natural capability boundaries. They're training artifacts, shaped by what's verifiable. Models spike in capability near math and code puzzles because those domains offer automatic rewards for optimization. The jagged frontier tracks what's trainable, not what's important. Skills that resist verification—judgment, taste, knowing when to distrust the output—may be precisely the skills that atrophy fastest, because they're the skills that don't show up in benchmarks.
The hollowing accelerates generationally. If seniors retain tacit knowledge but juniors never acquire it, the gap deepens over time. The transmission problem. The skills live in the seniors' hands but never transfer because the juniors were augmented from the start. Stanford research found nearly 20% employment decline for developers aged 22-25 since late 2022, the period coinciding with generative AI's emergence. Meanwhile, developers over 26 saw stable or growing employment. Entry-level positions are where augmentation hits first, which is precisely where skill formation happens.
The shape of the hollowing isn't static. It deepens in the direction of AI capability, creating ever-wider gaps in what humans can do without assistance. And the gaps become permanent when the people who could have taught the skills age out before the teaching happens.
## 11
The prediction: The rise of vibe coding will produce software that no one understands. Applications conjured through natural language, never inspected, deployed into production. The maintenance crisis won't be too much code—it will be code without comprehension. Systems will fail in ways that can't be debugged because debugging assumes someone understood the system in the first place.
Karpathy coined "vibe coding" to describe programming via English, forgetting the code exists. He's right that it empowers: anyone can build software, professionals can build more, projects that would never exist now get built. "Code is suddenly free, ephemeral, malleable, discardable after single use."
But discardable code has a way of becoming load-bearing. The quick script becomes the production system. The vibe-coded prototype never gets replaced because it works—until it doesn't. And when it fails, no one knows how it works because no one ever looked. The code was generated, not written; deployed, not understood.
This is the hylomorphic trap applied to software. You prompt; it produces. The resistance that taught programmers how their systems behaved—the compilation errors, the debugging sessions, the hours tracing logic—disappears. You get the output without the education. The maker becomes a requester.
The failure mode isn't that vibe-coded software is bad. Much of it works fine. The failure mode is that no one can fix it when it breaks, extend it when requirements change, or audit it when security matters. The software exists; the understanding doesn't. Technical debt accumulates in a new form: not messy code that someone could clean up, but code that no one can read because no one ever did.
Two classes of software will emerge: serious infrastructure maintained by shrinking cadres of specialists who still understand what they're building, and an expanding ocean of vibe-coded applications that work until they don't. The middle—comprehensible, maintainable, medium-scale software built by developers who understood their tools—will hollow out.
The optimistic response: just vibe-code the next thing. Failure stops mattering when replacement is cheap. But this assumes failure is recoverable. What about the data accumulated over years? The integrations with systems you don't control? The failure at 2 a.m. during a critical process? Regeneration takes time; damage happens in real time. The disposability thesis assumes failures are clean and contained. Real failures are messy and cascading, especially through systems no one understood in the first place.
## 12
The prediction: Most written content will be produced for AI consumption, not human reading. Documentation, reports, articles—written with the assumption that AI will process them before (or instead of) human eyes. Human literacy becomes optional; machine parseability becomes mandatory.
We're already there. Documentation written assuming AI will summarize it. Reports produced "roughly" because the expectation is that an AI will clean them up. Articles structured for RAG retrieval rather than human comprehension. The audience has shifted. Content becomes training data, not communication.
This closes the loop on infinite adequate content. If AI writes and AI reads, humans become peripheral to the communication itself. The slop loop: AI generates, AI summarizes, human sees summary, human prompts AI, AI generates. At each step, the human touches less of the original. The content exists; human comprehension of it becomes optional.
The feedback loop that made writing improve—human readers responding to human writers—breaks when the readers are models. What makes text "good" shifts from what communicates to what's parseable, retrievable, summarizable. Clarity for humans and clarity for machines aren't the same thing; when they diverge, machine clarity wins because machines are the actual audience.
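A minimal sketch of what "structured for retrieval" means in practice, using naive keyword overlap as a stand-in for a real embedding model. The document text and scoring below are invented for illustration.

```python
# Text written for machines gets split into small self-contained chunks and
# ranked against a query; whatever doesn't survive being read in isolation
# effectively doesn't exist for the retriever.
import re

def chunk(text: str, max_words: int = 20) -> list[str]:
    """Split text into fixed-size word windows: retrieval units, not paragraphs."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def score(query: str, passage: str) -> int:
    """Count shared terms; a production system would use embeddings instead."""
    q = set(re.findall(r"\w+", query.lower()))
    p = set(re.findall(r"\w+", passage.lower()))
    return len(q & p)

def retrieve(query: str, text: str, k: int = 1) -> list[str]:
    """Return the k chunks with the highest term overlap with the query."""
    return sorted(chunk(text), key=lambda c: score(query, c), reverse=True)[:k]

# An invented document: the key terms repeat in every forecast sentence, so each
# chunk scores well in isolation. Clarity for the index, not for the reader.
doc = (
    "Q3 revenue forecast: assumptions. The Q3 revenue forecast assumes flat churn. "
    "The Q3 revenue forecast assumes two enterprise renewals close in September. "
    "Background a human reader would want, stated once and never repeated: "
    "the renewals depend on a migration project that slipped twice this year."
)

for passage in retrieve("Q3 revenue forecast assumptions", doc):
    print(passage)
```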
This is instrumentarian power applied to cognition itself. Not just what you buy or what you choose, but what you read, what you think you know, what you believe you understand. The mediation extends from the marketplace to the mind.
## What Connects Them
The common thread is pharmacology—Stiegler's insight that every extension creates a new incapacity, every cure its own poison. AI extends cognitive capability and creates cognitive dependency. AI extends choice and creates choice blindness. AI extends productivity and creates the conditions for exhaustion. The extensions are real. The incapacities are equally real. The incapacities are harder to see because the extensions are what we're looking at.
The second thread is agency. When we reach for agentless phrasing (skills become bottlenecks, trust becomes expensive), we obscure that someone is making choices, capturing value, distributing risk. That phrasing turns political arrangements into natural processes. Predictions become descriptions of what will happen rather than what is being done.
The third thread is domain confusion. Cynefin's contribution is recognizing that different kinds of problems require different kinds of responses, and that treating complex systems as merely complicated is a category error with consequences. The pharmacological thresholds in prediction 1, the shadow systems in prediction 6, the cascade failures in prediction 7—all share a common structure: problems that look analyzable from a distance but turn out to be emergent up close, requiring responses (probe-sense-respond, or act-sense-respond in crisis) that weren't built into the system. The optimization that removed human redundancy assumed complicated conditions would persist. The failure occurs when they don't.
The fourth thread is ephemerality. Code that's disposable, content that's replaceable, skills that don't transmit, understanding that never forms. The predictions describe a world where nothing persists except the systems that generate ephemeral outputs. The permanent becomes scarce; the temporary becomes default. We lose the concept of building on what came before—because what came before was never meant to last.
These predictions are about transformations already underway, transformations we might navigate differently if we saw them clearly.
## Sources
- Bernard Stiegler, Technics and Time series, especially vol. 3: Cinematic Time and the Question of Malaise (Stanford University Press, 2011)
- Wendy Brown, Undoing the Demos: Neoliberalism's Stealth Revolution (Zone Books, 2015)
- Shoshana Zuboff, The Age of Surveillance Capitalism (PublicAffairs, 2019)
- Tim Ingold, Making: Anthropology, Archaeology, Art and Architecture (Routledge, 2013)
- James C. Scott, Seeing Like a State (Yale University Press, 1998)
- Byung-Chul Han, The Burnout Society (Stanford University Press, 2015)
- Nassim Nicholas Taleb, Antifragile (Random House, 2012)
- Dave Snowden & Mary E. Boone, "A Leader's Framework for Decision Making," Harvard Business Review (November 2007)
- Dell'Acqua et al., "Navigating the Jagged Technological Frontier," Harvard Business School Working Paper (2023)
- Romańczyk et al., "Endoscopist deskilling risk after exposure to artificial intelligence in colonoscopy," The Lancet Gastroenterology & Hepatology (2025)
- Brynjolfsson, Chandar, Chen, "Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence," Stanford Digital Economy Lab (2025)
- Andrej Karpathy, "2025 LLM Year in Review," personal blog (December 2025)
- Andrej Karpathy, "Animals vs. Ghosts," personal blog (2025)