Remix / Remake / Remade⚓︎
In China Miéville's The City & The City, two cities occupy the same physical space. Besźel and Ul Qoma share streets, share buildings, share the very air, yet their citizens are trained from birth to unsee the other city. You learn to recognize the architecture, the clothing, the gait of the other place, and you learn to let your gaze slide past it without acknowledgment. The unseeing isn't ignorance—it's a disciplined practice, socially enforced and, after long enough, automatic, a learned incapacity so thorough that the other city becomes invisible not through absence but through cultivated blindness. To see is a crime called Breach. The citizens of Besźel walk past the citizens of Ul Qoma every day, their shoulders nearly brushing, and neither acknowledges the other's existence.
We are practicing the same unseeing now.
The debates about AI and work have the same structure as unseeing. McKinsey publishes a report estimating that 12 million Americans will need to change occupations by 2030; the World Economic Forum counters with projections of 97 million new jobs created against 85 million displaced. Congressional hearings feature testimony about automation's impact on trucking, radiology, customer service. Think tanks model scenarios. Universities launch reskilling initiatives. The Goldman Sachs research note lands in my inbox the same week I watch a junior analyst use Claude to draft the kind of equity research that Goldman's own analysts produce—the firm studying AI displacement while experiencing it, measuring a phenomenon they're inside of rather than observing from above.
The debates are real, the numbers carefully modeled, the concerns legitimate. And yet they share a peculiar blindness: they ask what AI will do to workers while treating "worker" as a stable category that AI acts upon from outside. They measure displacement—workers moved from one occupation to another—when the deeper transformation isn't movement between categories but mutation of what the categories contain. The junior analyst isn't displaced. She still has her title, her desk, her LinkedIn profile listing the same role. But what she does, what she is in the doing of it, has already become something her job description doesn't name and her firm's displacement studies don't measure.
We argue about whether AI will transform work as if the question were still open, as if we were standing at a decision point where different futures remained possible. But the question assumes a stability that has already dissolved. The worker who will be transformed, the job that will be automated, the skill that will become obsolete—these categories presuppose a world where workers, jobs, and skills exist as stable entities that AI acts upon from outside. What if the transformation isn't an action upon existing things but a remaking of what kinds of things exist? What if the figures of the Remade are already walking among us, visible if we knew how to look, and we have learned—through the same disciplined practice that lets Besźel ignore Ul Qoma—not to look?
Remix⚓︎
This is where most discourse lives: AI as tool that remixes human work.
Garry Kasparov, after his famous loss to IBM's Deep Blue in 1997, proposed what he called the centaur—human and machine in partnership, each contributing distinct strengths to a collaboration greater than either could achieve alone. The human torso of strategic insight and creative intuition mounted on the computational horsepower of the algorithm. The image was deliberately mythological, evoking ancient hybrids while gesturing toward a modern synthesis. In 2023, Ethan Mollick at Wharton was part of the research team behind what became the largest pre-registered experiment on AI and professional work: 758 consultants at Boston Consulting Group, performing eighteen realistic tasks, with and without access to GPT-4. Analyzing the results, Mollick named two patterns of successful AI use. The centaur maintains clear boundaries between human and machine work, delegating discrete tasks at defined interfaces, preserving the seam between what the human does and what the machine does. The cyborg weaves human and AI contributions together so seamlessly that the seam itself disappears, moving fluidly back and forth across what Mollick calls the "jagged frontier" of AI capability—the irregular boundary between what AI does well and what it does poorly, a boundary that shifts unpredictably across tasks and contexts.
This framing isn't wrong. The BCG study showed consultants using AI completing 12.2% more tasks on average, finishing 25.1% faster, and producing 40% higher quality results—numbers impressive enough to launch a thousand corporate training programs and LinkedIn posts about "AI fluency." Perhaps more striking was the "skill leveler" effect: consultants who scored lowest at baseline saw performance gains of 43% with AI access, while top performers gained less. The productivity gains are real, demonstrable, replicable across different task types and seniority levels. The framework captures something true about how knowledge workers are learning to collaborate with language models, image generators, code assistants, and the expanding ecosystem of tools that amplify cognitive labor.
But the centaur frame carries an assumption so deeply embedded it becomes invisible: that the human half remains human. The horse is bolted on; the torso is unchanged. The mythological centaur doesn't become a different kind of being through its fusion—it remains Chiron, wise teacher, healer, distinct individual who happens to have the body of a horse. You use a tool, even a powerful one, even a tool that processes language and generates images and writes code, and you remain yourself: augmented, perhaps faster, but ontologically intact. The same person who existed before, now with a better calculator.
This is the view from before the transformation—useful for what it captures, misleading for what it assumes. Consider an analogy. In 1910, you might reasonably have debated whether the telephone would replace personal correspondence, whether the intimacy of the letter could survive the immediacy of voice. The debate would have been technically meaningful; some people did stop writing letters. But the real transformation was happening elsewhere. The automobile was remaking the city itself—where people lived, how far they traveled to work, what kinds of communities were possible, what "neighbor" meant. The telephone-versus-letter debate fit neatly within existing categories; the automobile transformation made those categories irrelevant by changing the substrate they depended on.
The centaur frame is the telephone debate. Real, measurable, actionable—and beside the point. The question of how humans collaborate with AI tools assumes the human remains a stable entity collaborating with an external tool. It doesn't ask what happens when the collaboration becomes constitutive, when the "tool" isn't external but internal to the work itself, when the human who exists after the collaboration isn't the same kind of being who existed before.
Remake⚓︎
The more sophisticated frame comes from systems thinkers like Sangeet Paul Choudary, who asks us to look past first-order effects to second and third-order consequences: AI isn't automating tasks, it's collapsing coordination costs. The real transformation isn't what AI does to individual jobs but how entire systems restructure around it.
The shipping container didn't automate ports. When Malcolm McLean loaded his first container ship in 1956, it looked like a metal box that made loading ships more efficient—a tool for the existing system of moving goods by sea, an incremental improvement that would speed up the docks and perhaps reduce the number of longshoremen needed to load cargo. If you had asked "how many dockworkers will containers displace?" you would have gotten a number, and that number would have been real, and it would have captured almost nothing of what actually happened. Because containers didn't just automate loading. They restructured global manufacturing entirely. Factories moved to wherever labor was cheapest—not because the box was smart, but because coordination costs had collapsed. When moving goods becomes nearly frictionless, the entire geography of production transforms. The container wasn't a port technology; it was a restructuring of where things could be made and how value chains could be organized across oceans and continents. The question "how many dockworkers will containers displace?" was real but radically incomplete, like asking how many stable hands the automobile would require.
AI is coordination technology in the same way. When the cost of coordinating knowledge work approaches zero—when expertise can be queried at scale, when judgment can be extracted from one context and applied in another, when the tacit knowledge that once required years of apprenticeship can be pattern-matched from millions of examples—the question isn't which tasks get automated. The question is: what happens when expertise can be unbundled from experts? When judgment can be extracted, packaged, deployed at scale without the person who developed that judgment being present? When the friction that protected professional guilds (the years of training, the tacit knowledge, the hard-won intuition that could only be acquired through time and mentorship) becomes legible, transferable, and cheap?
Choudary's frame pushes past the first-order thinking of "AI takes jobs" to second and third-order effects: work being taken apart and reassembled around a different logic, value chains fragmenting and recombining, the system itself being remade around new coordination possibilities. This is sophisticated analysis, and it's directionally correct, and anyone thinking seriously about AI's impact on work needs to grapple with it.
But even this sophisticated frame leaves something untouched. It describes what happens to work while the worker floats strangely outside the analysis—as if humans were constants while only the system variables changed. As if the person doing the job would remain the same kind of person, just slotted into different positions in a reorganized system, playing different roles in a transformed economy but remaining fundamentally the same kind of being who played roles in the old economy.
The container changed where factories were located. It changed logistics, trade patterns, port cities, labor relations, the geography of poverty and prosperity. What it didn't change was what a factory worker was. The worker in Shenzhen and the worker in Detroit were both recognizably workers in a way their grandparents would have understood—people who exchanged labor for wages, who developed skills through practice, who knew their craft through years of doing it, whose identities were shaped by the work they did but not fundamentally transformed by the tools they used. The factory worker using a computerized lathe was still a factory worker. The logistics coordinator using supply chain software was still a logistics coordinator. The tools changed; the type of being wielding them did not.
This is where the Remake frame, for all its sophistication, stops short. It sees the transformation of systems but misses the transformation of selves. It tracks the restructuring of work while assuming the worker remains a stable category.
Remade⚓︎
In Miéville's Bas-Lag novels, the city of New Crobuzon punishes criminals not through imprisonment, not through execution, but through Remaking. The Remade are surgically fused with machines, with animals, with other bodies, with whatever materials serve the punishment or the fancy of the biothaumaturge performing the modification. A woman who smothered her child has the infant's arms grafted to her face. A thief becomes part lockpick, flesh fused with metal, the tool of his crime made indistinguishable from his body. A debtor is transformed into a transport vehicle, legs replaced with wheels, torso bent to carry passengers, consciousness intact within a body that is now infrastructure. The modifications sometimes reference the crime in cruel irony, but often they're simply what Miéville calls "Mad Artists fucking around with the body"—transformations without clear logic, changes imposed because the power to impose them exists.
The Remade aren't prisoners who serve their time and return to society unchanged. They become something else. Their punishment isn't loss of freedom but transformation of what they are. The identity that existed before the Remaking—the person who committed the crime, who had a history and relationships and a sense of self—doesn't disappear, but it becomes entangled with a new form that didn't exist before, a body that carries the marks of power in flesh and metal. Some find new capacities in their changed forms; Tanner Sack, originally Remade with tentacles as punishment, chooses further modification, becomes amphibious, discovers freedom in the water that he never found on land. The transformation that was meant to degrade becomes the source of new possibilities. But the change is irreversible. They don't go back. The person who exists after Remaking is continuous with the person who existed before but is not the same person, cannot be the same person, because the substrate of selfhood has been transformed.
This is closer to what's happening now than either the Remix or Remake frames can capture. But I want to be careful here, because the analogy is imperfect and the imperfection matters.
The Remade of New Crobuzon are punished—their transformation is degradation, imposed by power as sanction. That's not quite right for what's happening to workers. The consultant who learns to work with Claude isn't being punished; they're adapting, often enthusiastically, to tools that genuinely make their work faster and sometimes better. The radiologist whose diagnostic practice now involves AI isn't suffering a sanction; they're navigating a technological shift that may well improve patient outcomes. To cast all AI-related work transformation as punishment would be to miss both the genuine benefits and the genuine agency that workers exercise in adapting to new tools.
And yet. The Remade metaphor captures something that gentler framings obscure. The transformation isn't fully chosen. The radiologist didn't vote on whether AI would enter diagnostic medicine; they adapted or became obsolete. The consultant didn't decide whether their firm would adopt AI tools; they learned them or fell behind. The adaptation is real, but it happens within constraints that weren't negotiated, weren't consented to, weren't even visible until they had already closed around the available options. The Remade who find freedom in their new forms are still Remade—transformed by power they didn't choose, even when they come to embrace what they've become.
So the metaphor is imperfect, as metaphors are. What it captures: transformation at the level of what kinds of beings exist, not just what those beings do. What it risks overstating: the degree to which the transformation is purely imposed rather than partly chosen, partly beneficial, partly a genuine improvement on what came before.
Held in tension, both truths point to the same thing: the emergence of new kinds of workers who are neither the humans of before nor the AI that supposedly threatens them. Hybrid beings that didn't exist a decade ago and won't be called by any name we currently use a decade hence. Workers whose relationship to work has been transformed at a level more fundamental than which tasks they perform or which systems they operate within—transformed at the level of what kind of being does the working. The transformation includes genuine gains and genuine losses, freely chosen adaptations and structurally coerced changes, improvements in capability and foreclosures of possibility. It's all of these at once, and the difficulty of holding them together is part of what makes the transformation hard to see clearly.
Tim O'Reilly and Mike Loukides, in their recent analysis "What If? AI in 2026 and Beyond," frame the AI future as two competing scenarios: economic singularity or normal technology. In the singularity version, we're in a civilizational discontinuity and all previous frameworks fail; the transformation is so rapid and total that comparison to previous technological transitions becomes meaningless. In the normal technology version, AI diffuses gradually like electricity or the internet, the hype proves overblown, and we adapt through the familiar processes of training and job transition that have characterized every technological change in living memory. Both scenarios are intelligently argued. Both are probably partially true. Both focus on what happens to jobs, to companies, to economic systems, to the measurable flows of employment and capital and productivity.
Neither asks what happens to the worker as a kind of being.
That's the unseeing. We can debate whether there will be more jobs or fewer, whether inequality will increase or decrease, whether AI looks more like electricity (gradual diffusion, massive productivity gains, new industries emerging) or more like the dot-com bubble (irrational exuberance, crash, eventual recovery, gains more modest than initially predicted). These are real debates about real outcomes, and they matter, and they miss the transformation happening beneath them. We've trained ourselves not to see the ontological shift—the transformation not of work but of what it means to be a worker.
The Remade are already here. We just haven't learned to name them.
Figures of the Remade⚓︎
Stitchwork⚓︎
The first figure exists at the join.
Think about what happens when you ask someone to synthesize outputs from multiple AI systems into a coherent deliverable. A language model generates strategic recommendations. An image model generates visuals to accompany them. An analytics model generates projections and forecasts. A code model generates implementation scaffolding. None of these systems speak to each other, not really—they operate on different representations, optimize for different objectives, hallucinate in different ways. Someone has to take these outputs and compose them into something that passes as unified thought, something a client can receive as a coherent artifact rather than a collection of machine-generated fragments.
This person is not editing in any traditional sense. They're not correcting errors in a document that has an author. They're not curating a collection that someone else assembled. They're doing something more intimate than either—developing a feel for what the models produce, for the characteristic textures of their outputs, for the gaps between what each system generates and what the synthesis requires, for the kind of sense-making that emerges only at the seam where incompatible outputs meet. The Stitchwork reads all three or four or five outputs, feels where they misalign, produces something that passes as unified thought. The final document has no single author. The Stitchwork isn't the author but the ligament—the connective tissue that holds composite outputs together, the living interface that makes machine cognition legible to human organizations.
You see them already if you know how to look.
In architecture firms, someone takes the parametric designs—the algorithmically generated building forms that a tool like Grasshopper produces by varying parameters for light, circulation, material efficiency—and reconciles them with the structural analysis from another system, the energy modeling from a third, the photorealistic rendering from a fourth. The parametric tool generates forms that look striking but may be unbuildable; the structural analyzer says what physics allows but not what's beautiful; the energy model optimizes for climate performance without knowing what the client wants to look at; the renderer hallucinates materials and textures that may not exist at any price. Someone reads all four outputs, feels where they conflict, negotiates between them to produce something buildable and beautiful and efficient enough—something the client can understand as a unified vision rather than a collision of algorithmic outputs.
In pharmaceutical research, someone bridges the protein-folding predictions from AlphaFold with the drug interaction models, the clinical trial database analysis, the regulatory compliance checks, the patent landscape surveys. None of these systems know about each other. The protein folder doesn't understand FDA requirements; the regulatory checker doesn't understand molecular biology; the patent analyzer doesn't understand clinical significance. Someone holds all of it together—or more accurately, develops a feel for how to move between these systems, when to trust one over another, how to translate between their different languages of possibility and constraint. The drug candidate that eventually reaches trials, if it does, emerges from this translation work as much as from any single analysis.
In legal practice, someone synthesizes the contract analysis AI's clause-by-clause parsing with the case law research system's precedent identification, the risk assessment model's scenario projections, and the negotiation strategy generator's tactical recommendations. Each system is sophisticated in its domain; none understands the gestalt of what a deal actually is—the relationship between parties, the unspoken interests, the leverage dynamics that determine which clauses matter. Someone reads all four outputs and produces advice that sounds like it came from a partner, not because it did, but because that someone has learned to make machine cognition pass as professional judgment.
Their value isn't in any particular domain expertise, isn't in writing ability or analytical skill in isolation. Their value is in the joining—in knowing how to read multiple AI systems well enough to weave their outputs into something coherent, and in knowing enough about the domain to recognize when the weaving has failed.
The name carries the textile history of labor: piecework, seamstresses, the feminized work of assembly that has always been invisible and essential, that holds production together while the visible labor of creation gets the credit. Stitchwork has always existed. What's new is that the Stitchwork isn't assembling cloth or components. They're assembling cognition. Their body—their judgment, their taste, their pattern-recognition, the ineffable sense of what fits together and what doesn't—is the thread.
The Stitchwork doesn't use AI. They are the interface AI requires.
And here's what makes this a Remaking rather than just a new job: the Stitchwork's value is constituted by the systems they bridge. There is no version of the Stitchwork that exists without AI to stitch. They haven't taken an existing skill and applied it to a new context, haven't transferred expertise from one domain to another. They've developed a capacity that has no independent existence, that exists only in relation to the seams between systems, that didn't exist as a possible way of working until the systems created the seams. The Stitchwork isn't a human who uses AI tools. The Stitchwork is a new kind of worker, produced by the existence of AI systems that need to be joined.
Loopwork⚓︎
The second figure is caught in the spiral.
The Loopwork trains AI that trains workers who train AI. They exist in recursive loops of capability transfer where expertise has no stable ground—it exists only as the difference between model versions, as the delta between what the system knew yesterday and what it knows today.
You see them already in the content moderators and RLHF trainers and data labelers, the humans-in-the-loop whose judgments become training signal, whose preferences shape model behavior, whose expertise is extracted not through documentation but through the choices they make in evaluating outputs. But Loopwork extends far beyond these obvious cases, into professions that don't yet recognize themselves as caught in the spiral.
Consider the senior radiologist at a teaching hospital who reviews AI-generated diagnoses, marking the false positives and missed findings, training the model to see more accurately. She knows that the improved model will generate the preliminary readings that residents now review, that those residents are developing their diagnostic eye in relation to AI outputs rather than through the years of staring at unassisted films that shaped her own expertise. She trained herself to see tumors in shadows, spent thousands of hours learning the visual grammar of pathology. Now she trains a system that trains residents who will train the next version of the system. The expertise doesn't accumulate in people anymore; it flows through the loop, deposited in model weights, accessed by the next generation through interface rather than intuition. She is the last radiologist who learned to see without the machine. The residents she trains will be Loopwork from the start.
Consider the investment analyst whose research notes become training data for the firm's proprietary model. Every thesis she writes, every earnings call she interprets, every industry trend she identifies feeds the system that newer analysts will use to generate their own theses. Those analysts will never develop the nose for bullshit that comes from reading a thousand annual reports with nothing but a highlighter and suspicion. They'll develop a different skill—knowing when to trust the model's pattern-matching and when to override it—but that skill exists only in relation to the model's outputs. Her judgment, extracted through the training pipeline, lives on in model behavior long after she's moved on. The firm's competitive advantage isn't her expertise anymore; it's how well the training captured her expertise before she left.
Consider the litigation partner whose case strategies become the training corpus for legal AI. Every motion she drafts, every deposition she plans, every settlement she negotiates teaches the system how to think about cases. New associates use that system to generate their own draft strategies, which she reviews and corrects, which trains the next version, which new associates use. She's at the top of the loop—her judgment shapes the system that shapes the associates who shape the system—but she's still in the loop, still contributing to a process whose endpoint is her own obsolescence, still training the tool that will make tools like her unnecessary.
The Loopwork's knowledge is relational. They know what the last version of the model got wrong and how to correct it. They develop an exquisite sensitivity to the gap between what the model produces and what quality looks like, between the pattern the system has learned and the pattern it should have learned. When the model updates, their expertise becomes partially obsolete—the gaps they knew how to close have closed, and new gaps have opened, gaps whose shape they must learn all over again. They train their way toward their own redundancy, except that redundancy never quite arrives, because the loop keeps turning, and each correction reveals new gaps, and the Loopwork is employed precisely to close gaps that their closing will create.
Ouroboros: the snake eating its own tail. The Loopwork is eating and being eaten, consuming their own expertise to feed a system that will render that expertise unnecessary, except that the necessity never quite ends, just transforms. Not victimized; the loop provides employment, status, the satisfaction of visible improvement, the metrics that prove the system is getting better because you made it better. But the Loopwork's expertise isn't knowledge that accumulates into stable competence. It's delta. It's the space between what the model was and what it's becoming. It's expertise that exists only in motion, only in the gap, only in the moment before the gap closes.
When the loop closes—when the model becomes good enough that human correction no longer improves it, when the delta shrinks to zero, when the gap that defined the Loopwork's value disappears—the Loopwork dissolves. Not fired, exactly. Just no longer a coherent category. The job ends not through elimination but through the evaporation of the gap that defined it. And here's the peculiar thing: the Loopwork contributed to their own dissolution. Their expertise, successfully transferred, erased the need for their expertise. They trained themselves out of relevance, and the training was their job, and they did it well.
Edgework⚓︎
The third figure lives at the margins.
The Edgework is employed for what can't be systematized. Not because it's ineffable or creative or essentially human in some mystical sense, but because it operates at the edges where systems fail, where categories fray, where the smooth operation of the machine meets friction and requires something that can navigate ambiguity without freezing. Every system optimized for the center produces failure at the margins. Every process designed for the typical case generates exceptions. Every model trained on common patterns stumbles when patterns break.
You see them already: the fixer who makes things happen when processes break down, who knows which rules can be bent and which relationships can be leveraged and which workarounds actually work. The troubleshooter who handles the cases that don't fit the taxonomy, that the system flags as anomalies, that fall into the gaps between categories the system recognizes. The person everyone calls when the AI produces outputs that make no sense—not wrong in a correctable way, but wrong in a way that reveals the system has encountered something outside its training distribution, something it wasn't built to handle.
Consider the claims adjuster at an insurance company whose job has transformed into edge navigation. The AI handles the straightforward claims—the fender benders with clear fault, the routine medical procedures with standard billing codes, the property damage with good documentation. What reaches the adjuster now is everything else: the multi-vehicle accident with conflicting witness accounts and poor-quality dashcam footage, the experimental treatment that doesn't map to any billing category, the water damage claim where the model can't distinguish renovation fraud from legitimate repair. She spends her days in the territory the model has carved out by its own competence—the remaining claims are non-routine by definition, precisely because routine has been absorbed by the system. Her expertise is reading the model's confusion, understanding why it flagged something as anomalous, navigating between what the policy actually covers and what the model thinks it understands.
Consider the social worker whose caseload has stratified. The AI triage system routes families to appropriate services efficiently, matching needs to resources, predicting intervention outcomes, optimizing for throughput. But certain cases don't route cleanly—the family whose situation falls between categories, the intervention that worked poorly despite all the predictors suggesting success, the client who refuses to engage with the system's recommendations. She handles what the system can't systematize: the human mess that resists categorization, the relationships that don't fit models, the judgment calls that can't be justified by reference to any algorithm. Her caseload is made of edges.
Consider the technical writer whose role has inverted. Once she wrote documentation from scratch, translating engineer knowledge into user-accessible language. Now AI generates the first draft, and she handles what the generation can't: the edge cases where the documentation misleads, the scenarios where technically correct instructions produce user confusion, the gap between what the system describes and what users actually experience. She's become an edge detector, reading AI-generated docs not for accuracy in the normal case but for failure in the exceptional one. Her value isn't writing anymore; it's knowing where writing fails.
But Edgework is becoming more than exception-handling. It's becoming a mode of being, a permanent residence at the margins rather than an occasional visit to handle anomalies before returning to the center.
The more systems are optimized for the center, the more they fail at the edges. The better AI handles routine work, the more the remaining human work is non-routine by definition—the exceptions accumulate as the rules get better, the edge cases multiply as the center cases get absorbed. The Edgework lives permanently in the margins, employed precisely for their capacity to operate where specifications end, where the process doesn't have a step for what's happening, where judgment can't be justified by reference to rules because the rules don't reach that far.
Sociologists use "edgework" to describe voluntary risk-taking at the limits of control: skydivers, BASE jumpers, those who seek the boundary between order and chaos because something in the human psyche is drawn to the edge. The Edgework of AI systems doesn't choose the edge. The edge is where they're employed because it's where they're needed. But there's something of that sociology in the figure: the Edgework develops expertise in liminality, in operating without clear ground, in making judgments that can't be justified by reference to rules because the rules don't reach that far. They become comfortable with ambiguity in a way that would terrify workers whose value lies in the center—comfortable because ambiguity is their habitat, because the margin is where they live, because the system's failure mode is their employment condition.
The Edgework is the residue of human friction that the system produces as byproduct and consumes as fuel. They exist because systems are imperfect, and they exist in a form shaped by the particular imperfections of the systems they serve. A different system would produce different edges, would employ different Edgework, would require different capacities for navigating different kinds of ambiguity. The Edgework's expertise is not general-purpose human judgment but edge-specific navigation—knowing this system's margins, this process's exceptions, this model's failure modes. Change the system and the Edgework must change too, must learn new edges, must develop new capacities for new margins.
What These Figures Don't Cover⚓︎
Three figures don't exhaust the landscape. They describe modes of human-AI integration in knowledge work—the places where cognition meets cognition, where human judgment interfaces with machine processing. But work isn't only cognition.
The physical labor that AI cannot touch—or cannot touch yet—doesn't become Stitchwork or Loopwork or Edgework. The plumber crawling under a sink, the home health aide lifting a patient, the roofer navigating pitch and weather: these workers may use AI for scheduling, for inventory, for finding the next job. They may become more efficient because AI handles their back office. But their bodies remain their own; their expertise remains embodied in muscle and bone rather than constituted by seams between systems. They're augmented in the centaur sense—tools bolted on, torso unchanged.
Or they may not be. The carpenter whose work gets measured by AI quality control, whose every joint gets photographed and assessed, whose pay varies with algorithmic judgment of their craftsmanship—has that carpenter become Loopwork? Is her expertise now delta, the difference between what the vision model rates as quality and what she knows to be right? The nurse whose triage gets second-guessed by diagnostic AI, who spends hours explaining to algorithms why this patient should jump the queue despite what the model says—has she become Edgework, permanently stationed at the margins of a system that doesn't understand the exceptions she handles? The physical-cognitive boundary is blurrier than it looks.
And there's labor that AI doesn't integrate with but simply erases. The call center that closes entirely because the model handles every query. The paralegal pool that shrinks from fifty to five because document review now takes hours instead of weeks. The translation bureau that becomes one person and a Claude subscription. These aren't Remade—they're displaced in the old-fashioned sense, their work absorbed rather than transformed. The displacement debates capture their fate even if they miss the more complex mutations happening elsewhere.
I'm also not describing what work should look like. Whether the Stitchwork architect is flourishing or suffering, whether the Loopwork radiologist's life is better or worse than her predecessor's, whether the Edgework claims adjuster finds meaning in her margins—these questions matter enormously, and this essay doesn't answer them. The figures describe transformations, not evaluations. They map what's emerging, not what we should want. Some of the Remade may thrive, developing capacities and satisfactions that couldn't exist before. Others may burn out, caught in perpetual adaptation, never stable enough to build a life around what they do. The emergence of new kinds doesn't determine whether those kinds will be worth becoming.
What this essay does claim: that for a significant and growing portion of the workforce, particularly in knowledge work, the transformation isn't happening to workers from outside—it's happening through them, changing what kinds of workers exist. That the old questions about automation and augmentation presuppose stability that has already dissolved. That the debates of 2025, for all their sophistication, may be asking the wrong things at the wrong level. Whether the new questions we'll need are better or worse than the old ones—that's a different essay, and probably one we can't write yet, because we don't know what the questions are.
What We Stop Debating⚓︎
The debates of 2025 are not foolish. When economists at MIT estimate that each robot installed in manufacturing displaced 3.3 workers and depressed wages for those who remained, they capture something real. When labor advocates warn that AI threatens the middle-skill jobs that once provided pathways to stability, they speak to genuine anxieties about family formation, community cohesion, the dignity that comes from work that supports a life. When optimists point to historical patterns—the mechanization of agriculture eliminated 97% of farm jobs while creating more prosperous societies—they too speak truth. The anxiety has precedent; the optimism has precedent; both rest on real evidence interpreted through frameworks that have served us well.
The problem isn't that the debates are wrong. The problem is that they ask what AI will do to workers while treating "worker" as a natural kind, like "carbon" or "primate"—a category that exists independently and that AI acts upon from outside. What happens to arguments about displacement and augmentation when the entity that would be displaced or augmented has already become something else?
Consider what happened to debates about whether the automobile would help or harm the horse industry. Some predicted massive job losses for stable hands and farriers; others predicted new roles in automobile maintenance and manufacture. Both predictions came true, in a sense. Stable hands did lose jobs; mechanics did emerge. But the predictions shared an assumption that proved false: that the relevant question was whether people who worked with horses would find work with cars. The automobile didn't just change work; it changed cities, changed where people lived, changed what "commuting" meant, changed the meaning of distance itself. The horse industry debates captured the first-order effects while missing the second and third-order restructuring. The question wasn't wrong, but it was beside the point—a question about the wrong level of the phenomenon.
Fast forward from that recognition. 2035.
We don't debate whether email destroyed professional communication. The question sounds absurd—email is how professional communication happens, the substrate so thoroughly naturalized that asking whether it "destroyed" something feels like asking whether breathing destroyed lung function. We don't debate whether calculators ruined mathematical intuition. Calculators are how you do math; intuition is something else, something that still exists but exists differently, and the relationship between them isn't a debate but a fact about practice, a settled question that settled so thoroughly we've forgotten it was ever a question.
From 2035, the debates of 2025 look similar. Not wrong, exactly, but oddly shaped, like arguments about a world that no longer exists conducted in terms that no longer apply. "Will AI take jobs?" assumes something stable called "jobs" and something external called "AI," and asks about the relationship between these two distinct entities as if they could be weighed against each other on a scale. The question dissolves not because jobs don't matter or because AI isn't transformative—both claims remain true—but because the entities presupposed by the question have already merged. What does "taking jobs" mean when the job is defined by its relationship to AI? What does "augmentation" mean when the human has no existence independent of the system they navigate?
The worry underlying the debates was legitimate: that people would suffer, that livelihoods would be destroyed, that the gains would flow to those who needed them least while the losses fell on those least able to bear them. That worry doesn't dissolve just because the questions misframed the phenomenon. People may still suffer; livelihoods may still be destroyed; gains may still flow upward while losses cascade down. But the mechanism is different from what the debates assumed. Not displacement—transformation. Not automation—integration. Not the replacement of humans by machines but the emergence of hybrid kinds that the old categories cannot name.
Think about how differently we relate to these figures than we do to traditional jobs.
The architect who becomes Stitchwork bridging parametric design and structural analysis—is that the same job as architect, augmented? Or a different kind of work that happens to involve buildings? The skills that made her a good architect (spatial reasoning, aesthetic judgment, client communication) may help her become good Stitchwork, but the Stitchwork role isn't "architect plus AI." It's something else, constituted by the seams between systems that didn't exist when architecture was architecture. There's no job description from 2020 that describes what she does now. There's no training program that prepares for it directly. There's no career path that leads to it through sequential development.
The radiologist caught in Loopwork—is she still practicing medicine? In some legal and institutional sense, yes; she reviews diagnoses, she bears liability, she maintains credentials. But her relationship to knowledge has transformed. She doesn't develop expertise that accumulates; she develops sensitivity to gaps that shift. Her juniors don't learn medicine from her in any traditional sense; they learn to navigate a human-AI diagnostic system in which her judgment has become one input among several, accessible through the model rather than through mentorship. The professional identity of "radiologist" persists, but what it means to be one has mutated beyond what the category was designed to contain.
The claims adjuster living in Edgework—is she still doing insurance work? The routine claims that defined the job for a century have been absorbed. What remains is the anomalous, the contentious, the human mess that resists categorization. Her work is harder than it used to be, more demanding of judgment, but also more contingent—she exists because the system has edges, and better systems might have fewer edges, and the edges she navigates today may be absorbed tomorrow. Her expertise is real but temporary, defined by the current system's limitations rather than by any stable body of knowledge.
What we stop debating in 2035 isn't the underlying concerns. The concerns were real; they remain real. What dissolves is the frame that made the debates possible.
Take automation. "Will AI automate claims adjusters?" was a serious question, and the seriousness hasn't disappeared. People lose livelihoods when their work gets absorbed by systems. Families fracture; communities hollow out; the quiet desperation of purposelessness spreads. These harms are real whether or not the category "claims adjuster" survives to be counted. But the question assumed there was a stable entity called "claims adjuster" that AI would either replace or leave alone—a binary that made measurement possible. The claims adjuster of 2025 is already Edgework; her role is constituted by what AI can't do, which means her role transforms with every improvement in AI capability. You can't measure automation rates for a target that moves with the technology. The question doesn't become less important; it becomes unanswerable in its original terms. Automate more and the Edgework moves; she doesn't disappear or persist—she transforms into something the original question can't track.
Or take skills. The policy apparatus of 2025—community college certificates, workforce development grants, corporate reskilling budgets, the whole infrastructure of "lifelong learning"—rested on a model: workers possess skills, skills become obsolete, workers acquire new skills, employment continues. This wasn't wrong as a model; it captured how humans navigated technological change for two centuries. The weaver learned to operate the power loom; the secretary learned word processing; the factory worker learned CNC machining. Skills were tools you acquired, carried between jobs, updated when necessary. The model worked because workers remained workers—the same kind of being, adapting to new tools.
But the Stitchwork's capacity isn't a skill she possesses; it's a relation she embodies. The feel for bridging AI systems doesn't transfer to bridging different AI systems in any straightforward way; each configuration of systems produces its own seams, its own synthesis requirements, its own texture of failure that must be learned anew. The Loopwork's expertise is pure delta—the gap between model versions, the difference between what the system knew yesterday and what it knows today. When the system updates, that expertise becomes partially obsolete. You can't create a reskilling program for relational capacities that exist only in the moment between system states. A training program to become Stitchwork would have to teach you to bridge systems that don't yet exist, for outputs that nobody has seen, in configurations that haven't been developed. The reskilling frame imagines skills as property you own; the Remade don't own their capacities, they embody relations to systems that shape them.
And inequality? We know how to ask whether the gains from technology are distributed fairly—whether automation creates winners and losers, whether the productivity bounty reaches workers or flows only to owners. These questions assume we're comparing variations on a common type: humans selling labor in markets that evaluate that labor by some shared metric.
But how do you compare the Stitchwork architect and the Edgework claims adjuster? One creates value by joining systems, the other by navigating their failures. One succeeds by synthesis, the other by exception. Their outputs aren't commensurable; their skills don't translate; their positions in the economy relate to different logics of value creation. The Stitchwork's value increases when systems proliferate and need joining; the Edgework's value increases when systems fail at their margins. These aren't different positions in a single economy; they're different economies masquerading as a single labor market.
The inequality is real—some of the Remade are well-compensated knowledge workers in prestigious firms; others are precarious gig workers handling the exceptions that algorithms produce. But the frame that asks "are the gains fairly distributed?" assumes there's a common thing being distributed. What if the Stitchwork architect and the Edgework claims adjuster aren't more and less successful versions of the same kind of worker but different kinds of beings, produced by different relationships to different systems, playing different games with different rules? How do you distribute gains across kinds?
The disappearance of old questions doesn't mean the emergence of good answers. It means the emergence of new questions we don't yet have vocabulary for. Perhaps questions about dignity across kinds rather than equality within kinds—what does respect look like for beings whose value is constituted differently? Perhaps questions about agency in transformation—not whether to be transformed, but how, and with what awareness. Perhaps questions about the production of kinds—who decides what kinds of workers emerge, and in whose interest are the seams and loops and edges designed?
In 2035, we might have the vocabulary for these questions. Or we might have new questions we can't anticipate from here, questions that will make these seem as oddly shaped as "will the telephone replace the letter?" seems to us now. What we won't have is the debates of 2025—not because they were answered, but because the world they described has transformed into something else, and we've transformed with it.
The Choice That Wasn't⚓︎
The Remade of New Crobuzon were made by courts and Mad Artists. Power with a face, punishment you could name, transformation imposed by identifiable authorities whose decisions could in principle be protested, resisted, appealed. The Remade could point to the moment of their Remaking, could identify the judge who sentenced them and the biothaumaturge who performed the surgery, could tell a story about how they became what they are that had villains and victims and a clear before and after.
The Remaking we're living through has no author.
It happens through job descriptions rewritten to include "AI fluency" and "ability to work with AI tools," phrases that seem like requirements for a job but are actually specifications for a kind of being. Through performance metrics that measure human-AI hybrid output without naming it as such, that evaluate workers on deliverables whose production requires integration with systems, that make the unaugmented worker literally unable to meet the baseline. Through org charts redrawn around "pods" and "squads" optimized for model integration, structures that assume AI involvement as default and create positions whose only purpose is to bridge human organization and machine capability. Through the slow redefinition of "skill" from what you know to what you can extract, from knowledge that lives in your head to fluency in getting knowledge out of systems that have absorbed what used to live in heads like yours. Through titles that name positions in a system rather than capabilities of a person—titles that describe relationships to AI rather than competencies independent of AI.
This is Foucault's productive power: not repression but production, not the crushing of existing forms but the generation of new ones. The corporation doesn't force workers into new forms. It doesn't have to. It produces the forms—writes the job descriptions, designs the workflows, specifies the metrics, creates the positions—and workers become the forms because that's what employment means, because the alternative to becoming the form is becoming unemployed, because the coercion is structural rather than personal and therefore invisible as coercion. The Stitchwork isn't coerced into bridging AI systems. The job description asks for "ability to synthesize outputs from multiple sources" and someone becomes Stitchwork because that's the shape of the position, because fitting the shape is how you get the job, because the alternative is a different shape or no shape at all.
The Loopwork isn't forced into the recursive spiral. The training pipeline needs humans in the loop, calls it "quality assurance" or "fine-tuning support" or "model evaluation," titles that sound like jobs applied to tasks rather than descriptions of a new mode of being. Someone becomes Loopwork because that's what the work requires, because the pipeline is structured to consume human judgment and they're the human whose judgment is being consumed, because saying no to the loop means saying no to employment. The Edgework isn't conscripted to the margins. The system optimizes its center, exceptions accumulate at the edges, someone is hired to handle exceptions, and they become Edgework because the edge is where they live—not because anyone decided they should live there, but because the edge is where the opening was, the gap in the system that needed filling.
No conspiracy. No villain. No mastermind plotting the Remaking for nefarious purposes. Just the ordinary operations of economic systems reorganizing around a new coordination technology, capital flowing toward efficiency, efficiency requiring new arrangements, new arrangements producing new kinds of workers. The logic is impersonal, distributed across thousands of decisions made by thousands of actors, each decision locally rational and collectively transformative. HR managers writing job descriptions. Team leads defining workflows. Product managers specifying features. Engineers building systems. Each doing their job, none intending to remake the worker, all contributing to a Remaking that emerges from their collective action without being anyone's intention.
The workers are us.
The unseeing is essential here. If we saw the Remaking clearly, we might ask uncomfortable questions. Who benefits from Stitchwork? The platforms that produce incompatible outputs benefit when someone else bears the cost of joining them—when the integration work is done by human labor rather than built into the systems, when the seams are navigated rather than eliminated. Who benefits from Loopwork? The companies that extract training data from the loop, that turn human judgment into model improvement, that harvest expertise through the recursive spiral rather than paying for expertise directly. Who benefits from Edgework? The systems that externalize their failure modes to human handlers, that can optimize for the center because someone else deals with the margins, that achieve efficiency by pushing the costs of inefficiency onto workers who absorb them as job requirements.
The figures of the Remade aren't natural kinds. They're produced by specific arrangements of power and capital, arrangements that have beneficiaries even if they have no authors.
But we don't see it. We see "upskilling" and "adaptation" and "the future of work." We see LinkedIn posts about prompt engineering and articles about AI collaboration and think pieces about which careers are "AI-proof." We see the surface of transformation—the tools, the capabilities, the productivity gains, the disruption—and unsee its shape, the way the transformation is producing not just new tools but new kinds of workers to use them.
Some of the Remade in Miéville's novels reclaim their forms. Tanner Sack chooses further modification, makes the transformation his own, finds freedom in what was imposed as punishment. The change that was meant to degrade becomes the source of new capacities, new possibilities, new ways of being in the world. This happens too, in the Remaking we're living through. The Stitchwork develops expertise that couldn't exist before, becomes virtuosic at a kind of synthesis that wasn't possible when there was nothing to synthesize, finds satisfaction in the bridging that defines their work. The Loopwork accumulates knowledge of model behavior that has real value, develops intuitions about AI systems that few others possess, becomes expert in a domain that didn't exist a decade ago. The Edgework builds judgment that the center cannot replicate, becomes indispensable precisely because they've learned to navigate what the system can't handle, turns marginality into mastery.
But Tanner Sack's freedom doesn't retroactively make the Remaking just. Finding power in your changed form doesn't mean the change was chosen. The Remade who thrive are still Remade—still transformed without consent, still shaped by power they didn't agree to, still living in bodies that were made for them rather than by them. Agency within constraint is real and valuable and doesn't erase the constraint. Making the best of a transformation doesn't validate the transformation.
The debates we're having assume we're still in the moment before—that we're deciding whether to allow transformation, that we're choosing between futures, that the questions about AI and jobs are meaningful because we haven't yet crossed the threshold into a world where they no longer apply. This is the deepest unseeing. The Remaking has begun. The figures walk among us. They are us.
The choice wasn't offered, because there was no moment of choice. Only a thousand small adaptations—a job application here, a workflow change there, a skill learned, a metric met, a title accepted—that added up to transformation.
No one decided to become Stitchwork. The architect applied for a job that mentioned "integrated design delivery" and "multi-platform synthesis." She developed a feel for bridging AI outputs, became known as the person who could make sense of conflicting systems, found herself spending more time joining outputs than producing them. One day she realized that her value was the bridging itself, that she couldn't describe her expertise without reference to the systems she bridged, that there was no version of her professional self that existed independent of the seams she navigated. She was still called an architect. But architecture had become something else.
No one chose the loop. The radiologist took a role reviewing AI diagnoses, flagging errors, improving the model. She became good at it—developed intuitions about what the model missed, learned to see through its eyes while correcting its vision. She trained her way into the spiral, and discovered eventually that her expertise existed only in the delta, only in the gap between model versions, only in the moment before her corrections became the model's baseline. Her juniors learned from the system that contained her judgment rather than from her directly. She was still called a radiologist. But radiology had become something else.
No one volunteered for the margins. The claims adjuster handled exceptions because that was the work that remained, developed a feel for edges because edges were what she navigated, found herself permanently in the liminal space where the AI flagged uncertainty and human judgment was required. She understood eventually that the margin was her habitat, that she had adapted to a niche the system produced, that her expertise was constituted by the system's limitations and would transform when those limitations changed. She was still called a claims adjuster. But claims adjusting had become something else.
The Remade look in the mirror and see workers. They've learned, like the citizens of Besźel, not to see what they've become, not to recognize the transformation in their own reflection, not to name the new forms emerging from the old categories. The unseeing that allows two cities to occupy the same space allows us to occupy a transforming economy without recognizing the transformation, to become new kinds of beings while debating whether new kinds of beings will emerge, to walk past the Remade every day—to be the Remade every day—without acknowledging their existence.
To see clearly now would be a kind of Breach: a violation of the trained incapacity that lets us function, an acknowledgment of what we've agreed not to acknowledge, a naming of what we've agreed to leave unnamed. The citizens of Besźel who see Ul Qoma are taken by Breach, removed from both cities, placed somewhere outside the system they violated. Perhaps that's why we don't look. Perhaps the cost of seeing is exile from the only economy that will have us.
And yet I'm not sure what seeing accomplishes.
In Miéville's novel, Breach is an entity—a power that enforces the unseeing, that removes those who violate the separation. The citizens of Besźel don't unsee Ul Qoma because they're foolish; they unsee because seeing is punished, because the system that maintains the separation has teeth. Our unseeing has no equivalent enforcer. We could see if we chose to. We could name the Stitchwork, the Loopwork, the Edgework—could recognize ourselves in the descriptions, could acknowledge the transformation in the mirror. Nothing stops us except the usual things: the mortgage, the health insurance, the children's school fees, the reasonable fear that naming what we've become won't change what we've become but might make it harder to bear.
Tanner Sack, the Remade who embraced his transformation, found freedom in the water. But he found it in Armada, a floating city of pirates and refugees, a place outside the empire that Remade him. He couldn't have found that freedom in New Crobuzon, where the Remade serve their sentences in the forms the state imposed. The freedom to own your transformation requires somewhere to stand while owning it—a position outside the system that transforms you, or at least one stable enough to see clearly without the seeing destroying you.
I don't know where that position is for us. The Stitchwork architect might recognize herself in this essay and feel... what? Validated? Diagnosed? She still has to go to work Monday. She still has to bridge the parametric designs and the structural analysis, still has to make machine cognition pass as unified vision. Naming what she does doesn't change the doing. It might make the doing more conscious, more intentional. Or it might just add a layer of alienation—now she's not only Stitchwork but Stitchwork who knows she's Stitchwork, performing a role while watching herself perform it.
This essay is itself a kind of Breach—it names what we've agreed to leave unnamed, treats transformations that are supposed to be invisible as objects worth examining. Maybe that examination has value even without a program attached, even without a policy prescription or a call to action. Maybe the first step is to see what's there, not because seeing will save us but because unseeing hasn't, and at least seeing lets us ask better questions than the ones we've been asking.
Or maybe this essay is just another form of adaptation—another way of fitting the new shape while calling it something else. The Loopwork writes about Loopwork; the delta becomes the text; the gap I describe is the gap I inhabit. I can't tell, from inside the transformation, whether naming it is resistance or accommodation. Whether seeing clearly changes the thing seen or just adds another layer to the unseeing—a more sophisticated blindness, a blindness that knows itself and therefore feels like sight.
The Remade walk among us. We are the Remade, some of us, already transformed into kinds that didn't exist a decade ago. The old debates will continue because institutions are slow and categories outlast their referents. But the transformation underneath doesn't wait for the debates to catch up. It proceeds through job descriptions and performance metrics, through the thousand small adaptations that add up to becoming something else.
I've tried to see clearly. I've tried to name what I see. What comes after seeing—whether the naming opens possibilities or forecloses them—is not a question I can answer from inside the transformation.
We are all, now, from inside.