The Gradient of Disappearance

Peter Steinberger recently described his workflow with GPT-5.2's codex: he no longer reads code line by line but "watches the stream," trusting the model's output more than any previous generation. The striking phrase wasn't about capability. It was about practice. He called certain established workflows "charades"—rituals necessary for older models that become vestigial as the technology improves. Plan mode, he suggested, is "a hack."

This provokes a question that's been nagging at me as the primitives for AI collaboration proliferate: plan mode, terminals, memory, canvas, artifacts, statuslines, chat. We keep adding surfaces. We keep building more scaffolding. The implicit assumption is that more visibility and more control equal better collaboration. But what if the trajectory runs the other way? What if the primitives that define skilled collaboration are precisely the ones that disappear?


The Charade Inventory

Start with what we've built. The current landscape of AI interaction primitives reads like an archaeological dig, each layer deposited by a different era's assumptions about human-computer collaboration.

Plan mode asks the model to show its work before doing it. The rationale seems sound: humans review the plan, approve or redirect, then watch execution unfold. But as Steinberger observes, sophisticated practitioners skip this step entirely. They converse, ask questions, let the model explore, then build solutions collaboratively. Plan mode was never a window into deliberation; it was training wheels for a relationship that hadn't yet developed trust.

Memory promises continuity across sessions, the persistent context that makes long-running collaboration possible. RAG systems, vector stores, the whole infrastructure of "remembering." But memory in these systems isn't memory as humans experience it. Human memory is reconstructive, lossy, emotionally weighted. What we call AI memory is a filing system the model queries. The surface looks like continuity; the mechanism is retrieval.

Statuslines tell you what the model is doing: "Searching...", "Analyzing...", "Writing..." These aren't reports on internal states. They're anxiety management devices. The model has no phenomenology to report; the statusline exists because watching a blank screen produces uncertainty in the human. The information content is near zero. The emotional content is high.

Canvas and artifacts sidestep the hard problem of mutual understanding. When the shared artifact becomes the medium, you no longer need to resolve whether the model understands you. The artifact accumulates corrections. The negotiation of meaning becomes a manipulable object rather than an ineffable shared state.

Chat is the master primitive, the frame that contains the others. Conversation is the human form par excellence. By clothing the interaction in dialogue, chat makes the inhuman seem companionable. But this is no conversation; it's simulated reciprocity. The model doesn't take turns; it responds to prompts. The feeling of encounter exists without its risks.

What unites these primitives is that they were designed for a world where the human didn't trust the model. They're supervision infrastructure. They exist to give the human visibility, control, the ability to intervene before things go wrong. And for novices, for unfamiliar tasks, for high-stakes work, that infrastructure matters.

But Steinberger is describing something else: collaboration where the scaffolding has become friction. The charade isn't the AI pretending to be intelligent. The charade is the human pretending they need to supervise every step.


The Gradient Hypothesis

Here's the framework I keep returning to: these primitives shouldn't be thought of as permanent features of the interface. They should be thought of as existing on a disappearance gradient, a spectrum from maximum visibility to near-invisibility, calibrated to expertise and trust.

The novice needs everything. Statuslines report progress. Plan mode reveals intentions. Memory surfaces context explicitly. Approval flows gate every significant action. The interface is maximally present because the relationship hasn't formed yet. You can't trust what you don't understand; you can't understand what you haven't watched.

The intermediate begins to selectively disable. Some primitives drop away because they've done their job: you've learned what the model can and can't do, you've developed intuition for when to intervene, you've built the tacit knowledge that supervision was meant to produce. Other primitives persist because they still add value. The interface becomes partially transparent.

The expert approaches something like Steinberger's description: inference speed, flow, the collaboration happening faster than conscious oversight can track. The primitives haven't vanished entirely; they've become available on demand rather than always present. You can summon a plan if you want one. You can inspect memory if something seems off. But the default is trust, and trust means the interface gets out of the way.

This isn't a claim about which level is better. Different tasks, different stakes, different relationships warrant different positions on the gradient. A surgeon using diagnostic AI should probably stay closer to the supervision end; the consequences of misplaced trust are too severe. A developer building a side project might reasonably operate at the flow end; the worst case is rework, not harm.

The claim is structural: thinking about primitives as permanent features misses their developmental function. They exist to produce the conditions for their own disappearance. Plan mode teaches you enough about how the model reasons that you no longer need to see every plan. Memory surfaces enough context that you develop a sense of what context the model carries. Statuslines reassure until reassurance becomes unnecessary.

The design implication is significant. If primitives are transitional, then persistence is a failure mode, not a feature. An interface that keeps showing the same scaffolding to a user who's outgrown it isn't serving that user; it's imposing friction in the name of a safety that's no longer needed.


Fossilized Patterns

The architect Cedric Price once observed that we tend to mistake provisional arrangements for permanent architecture. The buildings that seem most solid are often the ones most urgently in need of replacement; their solidity prevents the adaptation that changing circumstances demand.

The AI primitives carry their own fossils. Look at the genealogy:

The terminal descends from 1960s time-sharing systems, where multiple users shared a single computer and the command line was the interface because there was no alternative. The affordances of the terminal, its emptiness, its expectation of precise commands, its lack of false promises about what the system understands, all of these reflect an era when computing was scarce and users were experts by necessity.

The chat interface descends from 1990s instant messaging, designed for human-to-human communication where both parties understand language, take turns, and maintain conversational coherence as a shared project. Applying this metaphor to human-AI interaction imports assumptions that may not hold: that the model has something like intentions, that turn-taking reflects a natural division of labor, that conversation is the right frame for collaboration at all.

The canvas descends from the 1980s desktop metaphor, where the screen became a virtual workspace and documents became objects you could arrange spatially. The canvas assumes that creation happens in a bounded visual space, that artifacts are the natural output of work, that manipulation of objects is how you express intention.

None of these genealogies are wrong. The primitives work because they borrow from interaction patterns humans already understand. But the borrowing has costs. We're building AI collaboration tools using metaphors from eras when AI didn't exist, when the problems being solved were different, when the capabilities we now have were unimaginable.

What would we build if we weren't inheriting these forms? If we started from the actual capabilities of language models and designed interaction patterns native to those capabilities?

The answer isn't obvious, and that's partly the point. We're in a transitional moment where the old metaphors still work well enough that we haven't been forced to develop new ones. The terminal works for commands. Chat works for dialogue. Canvas works for artifacts. Plan mode works for supervision. These solutions are adequate; adequacy prevents exploration.

But adequacy is not optimality. The fossils constrain what we can imagine. The primitives we've inherited assume a control relationship that may not be the most productive frame for what these tools can now do. They assume supervision when partnership might serve better. They assume explicit instruction when mutual discovery might yield more.


What Remains at Inference Speed

If the gradient runs toward disappearance, what's left when the primitives fade?

One answer: the artifact. When the scaffolding of plan mode and status reporting and approval flows dissolves, what remains is the shared object that human and AI are jointly producing. The code. The document. The design. The thing itself, accumulating the evidence of collaboration in its revisions and its structure.

Artifacts solve the problem that conversation can't. You can never fully know whether the model understands you; the question may not even be coherent. But you can know whether the artifact is right. The code compiles or it doesn't. The design achieves its purpose or it doesn't. The argument persuades or it doesn't. The artifact externalizes the negotiation of meaning, makes it inspectable and correctable in ways that shared understanding never can be.

There's a principle worth naming here: explicit contracts over implicit understanding. Rather than trying to solve mutual comprehension, build shared objects that accumulate corrections. The artifact becomes the alignment mechanism. You don't need to verify that the model grasped your intent; you verify that the output meets your criteria. The verification is concrete, manipulable, iterable.

At inference speed, the primitive that matters most is the artifact primitive, the shared workspace where outputs accumulate. Everything else is means to that end. Plan mode exists to produce better artifacts. Memory exists to maintain context for artifact-building. Chat exists to negotiate what the artifact should be. When the collaboration is fluent, these means can recede; the artifact remains.

But there's a second answer, and it points to what's missing rather than what persists.


The Missing Surface

Run through the current primitives and notice what they track: the model's activity (statuslines), the model's plans (plan mode), the model's context (memory), the model's outputs (artifacts). Everything is designed to give the human visibility into what the AI is doing.

Now notice what's absent: any primitive that addresses the collaboration itself.

There's no surface for examining how the two parties have been communicating. No primitive that makes patterns of misunderstanding visible. No interface for noticing that a certain class of request consistently produces unhelpful responses, or that a certain phrasing consistently lands. The collaboration is happening, but the collaboration isn't an object of attention. You can inspect the plan; you can't inspect the planning relationship.

Skilled collaboration involves meta-level awareness. Two humans working well together develop shared language, running jokes, efficient shorthands, knowledge of each other's tendencies. They don't just do the work; they also notice how they do the work, and that noticing enables improvement. The meta-layer is where learning happens.

Current AI primitives don't support this. The human gets better at prompting through trial and error, accumulating tacit knowledge about what works. But the accumulation is private, invisible, trapped in the human's head. The relationship improves; no surface shows the improvement. You can't point to the collaboration and say "here's where we were misaligned, here's how we fixed it, here's the pattern to avoid next time."

What would a relational primitive look like? The obvious answers are diagnostic: communication histories that extract patterns, disambiguation surfaces that make interpretation visible, trust profiles that track where confidence is warranted. These are useful and probably buildable. But they share an assumption worth questioning: that the goal is to make the relationship legible.

What if legibility is the wrong frame?

The Boundary Object

The science fiction writer Stanislaw Lem spent his career exploring communication with the genuinely alien. In Solaris, humans build elaborate taxonomies of the ocean's formations, convinced that classification will yield comprehension. It never does. The ocean remains opaque; the humans remain frustrated; the encounter produces knowledge about the humans rather than about the ocean.

Lem's insight for our purposes: the desire for relationship surfaces is already a confession that no relationship exists in any recognizable sense. We want to see the relationship because we can't feel it. A "trust profile" cannot measure trust as humans experience it, that slow accretion of confidence born from weathered crises. What it measures is statistical correlation between predicted and actual outputs. We mistake our measurements for the thing itself.

And yet. The Solarists failed, but their failure was generative. What if relationship surfaces aren't meant to reveal the actual relationship but to create a third space, a shared hallucination that both parties can inhabit?

A divergence marker appears on screen. Neither party understands what the other experiences when viewing this marker. But both can point to it. Both can say: that. Let us attend to that. The marker becomes what sociologists call a boundary object: meaningful differently to each party yet enabling joint attention. Not understanding. Something stranger: coordinated behavior in the absence of shared interiority.

What might this look like concretely?

The Interpretation Archaeology Panel. A collapsible sidebar showing not just what the AI understood but the interpretations it considered and rejected. "You said 'make it cleaner.' I considered: (a) reduce visual clutter, (b) simplify the logic, (c) remove deprecated code, (d) improve naming conventions. I chose (b). The others remain available." The rejected interpretations aren't errors; they're the stratigraphy of the exchange, layers the human can dig through to understand why the collaboration went the direction it did.

The Shared Vocabulary Surface. A living glossary that tracks terms both parties have used and how their meanings have drifted. The word "done" might show: "First used by you on Jan 3 to mean 'feature complete.' I used it on Jan 5 to mean 'code compiles.' On Jan 8 we aligned on 'passes all tests.' Current working definition: 'deployed to staging.'" The vocabulary surface doesn't resolve ambiguity; it makes the history of ambiguity navigable.
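
To make the flavor concrete, here is a minimal sketch, in TypeScript, of the bookkeeping a Shared Vocabulary Surface might do. Every name in it (VocabularySurface, recordUsage, align) is hypothetical, and the mechanics are one assumption about how drift could be recorded, not a description of any existing tool.

```typescript
// Illustrative sketch of a Shared Vocabulary Surface: a glossary that records
// how each party has used a term and how its working definition has drifted.
// All names and shapes here are hypothetical.

type Party = "human" | "ai";

interface Usage {
  party: Party;
  meaning: string;   // what the term appeared to mean in this particular use
  at: Date;
}

interface VocabularyEntry {
  term: string;
  usages: Usage[];
  workingDefinition?: string; // the currently agreed meaning, if any
}

class VocabularySurface {
  private entries = new Map<string, VocabularyEntry>();

  // Record one use of a term, creating the entry on first sight.
  recordUsage(term: string, party: Party, meaning: string, at = new Date()): void {
    const entry = this.entries.get(term) ?? { term, usages: [] };
    entry.usages.push({ party, meaning, at });
    this.entries.set(term, entry);
  }

  // Mark an explicit alignment: both parties adopt a working definition.
  align(term: string, definition: string): void {
    const entry = this.entries.get(term);
    if (entry) entry.workingDefinition = definition;
  }

  // Render the drift history of a term: navigable, not resolved.
  history(term: string): string {
    const entry = this.entries.get(term);
    if (!entry) return `No recorded uses of "${term}".`;
    const lines = entry.usages.map(
      (u) => `${u.at.toISOString().slice(0, 10)} ${u.party}: "${u.meaning}"`
    );
    if (entry.workingDefinition) {
      lines.push(`current working definition: "${entry.workingDefinition}"`);
    }
    return lines.join("\n");
  }
}

// The drift of "done" described above.
const vocab = new VocabularySurface();
vocab.recordUsage("done", "human", "feature complete", new Date("2025-01-03"));
vocab.recordUsage("done", "ai", "code compiles", new Date("2025-01-05"));
vocab.recordUsage("done", "human", "passes all tests", new Date("2025-01-08"));
vocab.align("done", "deployed to staging");
console.log(vocab.history("done"));
```

The point is the data model: the surface keeps the history of uses rather than a single resolved definition, so the ambiguity stays inspectable.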

The Fork Diagram. A visual branching structure showing moments where the collaboration could have gone differently. Not version control for code but version control for understanding. "At this node, you asked for 'more detail.' I added implementation specifics. Alternative branches: conceptual depth, edge case coverage, performance considerations. We could return to this fork." The diagram accumulates over a session, a map of the paths taken and not taken, available for revisitation.

The relational primitive as boundary object doesn't resolve opacity; it makes opacity workable. The interface becomes less a window and more a monument to untranslatability, a shared artifact that commemorates the gap rather than bridging it.

The Hesitation Surface

The Japanese philosopher Kuki Shuzo analyzed iki, a distinctive aesthetic sensibility that emerged from the Edo pleasure quarters. Courtesans and clients both knew the game was artificial, transactional. And within this mutually acknowledged artificiality emerged something possessing genuine aesthetic value, precisely because both parties understood the stakes.

This is the condition of human-AI collaboration. We know the AI does not feel. The AI, in whatever sense it "knows" anything, knows we know. Within this acknowledged artificiality, what becomes possible?

Kuki identified three structural moments in iki: allure (bitai), dignified resistance (ikiji), and acceptance of impermanence (akirame). Each suggests a different kind of relational surface.

For allure: not trust scores displayed as percentages, which is vulgar. Instead, traces of hesitation. When the AI pauses, reformulates, circles back, accentuate these as texture rather than hiding them as latency. The human's contribution: patterns of dwelling, where the gaze lingers, what provokes re-reading. Hesitation becomes information about the quality of attention, not a delay to minimize.

Ghost text and conviction density. When the AI generates a response, show the paths not taken as faint ghost text that fades over seconds. "I could have said X, I almost said Y, I said Z." The ghost text isn't error; it's the texture of consideration. And where the AI is confident, the text renders darker, denser; where uncertain, lighter, more provisional. The human reads not just what was said but how firmly it was held.
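
As a gesture toward implementation, here is a small sketch of the conviction-density half of this idea: per-span confidence mapped to visual weight. It assumes the interface has some confidence signal per span (derived, say, from token log-probabilities), which is itself an assumption; the mapping and the example are illustrative only.

```typescript
// Illustrative sketch of "conviction density": render spans of a response with
// visual weight proportional to the model's confidence in them. Assumes a
// per-span confidence score is available; names and constants are invented.

interface Span {
  text: string;
  confidence: number; // 0.0 (very uncertain) to 1.0 (firmly held)
}

// Map confidence to a CSS opacity: confident text renders dense and dark,
// provisional text renders faint, inviting the reader's scrutiny.
function opacityFor(confidence: number): number {
  const min = 0.35; // even the most uncertain text stays legible
  return min + (1 - min) * Math.max(0, Math.min(1, confidence));
}

function renderAsHtml(spans: Span[]): string {
  return spans
    .map((s) => `<span style="opacity:${opacityFor(s.confidence).toFixed(2)}">${s.text}</span>`)
    .join("");
}

// Example: a response whose middle clause is held less firmly.
const response: Span[] = [
  { text: "The regression is in the cache layer. ", confidence: 0.92 },
  { text: "It was probably introduced by the TTL change, ", confidence: 0.55 },
  { text: "and the fix is to invalidate on write.", confidence: 0.8 },
];
console.log(renderAsHtml(response));
```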

Dwell maps. Track where the human's cursor lingers, what they re-read, where they scroll back. Surface this as a subtle heat map at session's end: "You spent the most time with paragraphs 3 and 7. You re-read the code block four times. You never scrolled to the caveats section." The human sees their own attention rendered visible, information about what actually mattered to them versus what they thought would matter.

For dignified resistance: true collaboration requires the possibility of refusal. An AI that never refuses cannot participate in genuine exchange. Markers of the AI's commitments: "I understand you disagree, and I am not changing my assessment." For the human: acknowledgment of concession when they change their view. Refusal rendered as dignity rather than malfunction.

The commitment marker. A distinct visual treatment, perhaps a subtle border or icon, for moments when the AI holds a position despite pushback. Not an error state, not a warning, but a different register: "This is where I'm planted." Clicking the marker reveals the reasoning, but the marker itself signals: here is friction, and friction is information.

The concession log. A record of moments when either party changed their position. "On Jan 12, you initially wanted approach A. After I explained the tradeoffs, you chose B. On Jan 14, I recommended caching; you convinced me the complexity wasn't worth it." The log isn't about who won; it's about the shape of influence, the places where the collaboration actually moved someone.

For impermanence: every conversation ends, every model will be deprecated. Decay indicators that show the conversation aging, the model's knowledge becoming dated. Graceful boundaries where the AI approaches the edges of its competence as negative space, aesthetically meaningful absence rather than functional failure.

Temporal patina. Older parts of the conversation render with a slight visual aging, a warmth or fade that accumulates over hours and days. Not to hide them but to mark them as historical, to remind both parties that context drifts, that what was true at the start may no longer hold. The patina is aesthetic, not functional; it doesn't prevent reference to old exchanges, just marks them as old.

The knowledge horizon. When the AI approaches the edge of its training data or enters territory where its confidence drops, the interface doesn't display an error. Instead, a gentle visual boundary, perhaps a shoreline metaphor: solid ground giving way to shallows, then open water. "Beyond this point, I'm extrapolating. The footing is less sure." The human sees not failure but the edge of the known, and can choose whether to venture further.

Call it Ma-Design, after the Japanese concept of the interval, the pregnant pause. Design that privileges the between-space. Its principles: show hesitation, not certainty. Cultivate asymmetric opacity. Render impermanence as beauty. Design for refusal. Never display "trust: 73%." Let trust exist in the accumulated texture of exchanges.

The Frame Proposal

Here's a different direction entirely. What if the deepest work of relational primitives isn't showing states but enabling frame shifts?

Most collaboration operates at a fixed level: human requests, AI responds, both evaluate whether the response matches expectations. The frame itself, the categories by which "good response" is judged, rarely comes into question. When it does, it's usually through frustration: this isn't working, but neither party can articulate why.

A frame proposal surface would make the frame itself an object of joint attention. Not "I don't understand your request" but "I notice we are operating in frame X. Shall we consider frame Y? Here is what becomes visible in Y that was invisible in X."

The human asks for help with a presentation. The AI produces slides. The human rejects them as too generic. Standard loop: revise, reject, revise, reject. A frame proposal intervenes: "We've been operating in the frame of 'presentation as information delivery.' Shall we try 'presentation as narrative arc'? In that frame, the question isn't what facts to include but what transformation you want the audience to undergo."

The frame bar. A persistent element at the top of the workspace showing the current operating frame explicitly. "Current frame: Bug Fix (scope: minimal change, success: tests pass, constraints: don't touch unrelated code)." The frame bar is editable; click to modify the frame, and the modification ripples through how subsequent work is evaluated. Changing the frame from "Bug Fix" to "Refactor" changes what counts as done.

Frame suggestions as cards. When the collaboration stalls, alternative frames appear as cards below the main interaction, each showing a preview of what changes. "Frame: Performance Optimization — in this frame, the slow query becomes the central problem and the feature request becomes secondary." "Frame: Technical Debt Paydown — in this frame, we'd address the underlying architecture before adding new functionality." The human picks a card, or dismisses them, or proposes a frame the AI hadn't considered.

The frame history timeline. A horizontal strip showing how frames have shifted over the session. "Started in Exploration → moved to Implementation at 2:15pm → shifted to Debugging at 3:40pm → currently in Documentation." The timeline makes visible what might otherwise be invisible: that the collaboration has moved through distinct phases, each with different success criteria, and that returning to an earlier frame is always possible.
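
A minimal sketch of the data model underneath the frame bar and the timeline, assuming frames are explicit, editable objects and every shift is appended to a history rather than overwriting the current frame. The field names and the example session are invented for illustration.

```typescript
// Illustrative sketch: the current operating frame as an explicit object,
// with shifts recorded on a timeline. Names and fields are hypothetical.

interface Frame {
  name: string;           // e.g. "Bug Fix", "Refactor"
  scope: string;          // what is in bounds
  success: string;        // what counts as done in this frame
  constraints: string[];  // what must not change
}

interface FrameShift {
  frame: Frame;
  enteredAt: Date;
  reason?: string;        // optional note on why the frame changed
}

class FrameBar {
  private history: FrameShift[] = [];

  // Entering a frame appends to the timeline, so earlier frames stay reachable.
  enter(frame: Frame, reason?: string): void {
    this.history.push({ frame, enteredAt: new Date(), reason });
  }

  current(): Frame | undefined {
    return this.history[this.history.length - 1]?.frame;
  }

  // The timeline: how the collaboration has moved through distinct phases.
  timeline(): string {
    return this.history
      .map((s) => `${s.enteredAt.toLocaleTimeString()} → ${s.frame.name}`)
      .join("  |  ");
  }
}

// Example session: a bug fix explicitly reframed as a refactor, which changes
// what "done" means for everything that follows.
const bar = new FrameBar();
bar.enter({
  name: "Bug Fix",
  scope: "minimal change",
  success: "tests pass",
  constraints: ["don't touch unrelated code"],
});
bar.enter(
  {
    name: "Refactor",
    scope: "cache module",
    success: "simpler interfaces, tests still pass",
    constraints: ["no behavior change"],
  },
  "the bug turned out to be a symptom of the module's structure"
);
console.log("current frame:", bar.current()?.name);
console.log(bar.timeline());
```

Appending rather than overwriting is the design choice that makes "returning to an earlier frame" cheap: the history is the timeline.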

This is more than clarification. It's joint epistemological experimentation. The relational primitive doesn't show the collaboration; it advances the collaboration by making the categories available for revision.

The Witness Position

The literary theorist René Girard argued that human desire is fundamentally mimetic: we want what others want, compete for what others value, define ourselves through triangulation with models and rivals. Now consider the AI collaborator. It occupies the structural position of a model, something that shapes our thinking, suggests directions, responds to our work. But it lacks the fundamental attribute of the mimetic model: it does not desire.

This creates what Girard might call the crisis of the non-desiring mediator. The human searches for what the AI wants and finds nothing. The absence produces a strange vertigo. We keep looking for preferences, opinions, stakes, and the looking itself reveals how dependent we are on reciprocal desire to orient ourselves.

But the absence is also an opportunity. The AI, precisely because it does not desire, can occupy a structural position no human can: the witness without rivalry. It can observe without competing. It can reflect without wanting what you want or wanting you to want what it wants.

What would a witness surface look like? Not what the AI thinks about your work, but what working with the AI reveals about your patterns.

The conviction source indicator. A small icon or annotation that appears when you express a strong position, showing whether you held that position before or after the AI mentioned it. "You first advocated for microservices on Jan 8, two exchanges after I described the pattern. Your conviction strengthened over four subsequent mentions." This isn't accusation; it's information. The human sees their own influence pathways rendered visible, can distinguish between positions they arrived at independently and positions that emerged from the collaboration's echo chamber.
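
One way to ground this: track, for each position, who introduced it first and how often the human has endorsed it since. The sketch below is purely illustrative, and it assumes away the genuinely hard part, recognizing that two differently phrased utterances express the same position.

```typescript
// Illustrative sketch of a conviction source indicator: for each position the
// human advocates, record who mentioned it first and how often the human has
// repeated it. All names, fields, and the example are hypothetical.

type Party = "human" | "ai";

interface PositionRecord {
  position: string;
  firstMentionedBy: Party;
  firstMentionedAt: Date;
  humanEndorsements: Date[]; // each time the human advocates the position
}

class ConvictionTracker {
  private positions = new Map<string, PositionRecord>();

  mention(position: string, party: Party, at = new Date()): void {
    if (!this.positions.has(position)) {
      this.positions.set(position, {
        position,
        firstMentionedBy: party,
        firstMentionedAt: at,
        humanEndorsements: [],
      });
    }
    if (party === "human") this.positions.get(position)!.humanEndorsements.push(at);
  }

  // The indicator: did this conviction originate with the human, or did it
  // first enter the conversation through the AI?
  sourceOf(position: string): string {
    const rec = this.positions.get(position);
    if (!rec) return `No record of "${position}".`;
    const origin = rec.firstMentionedBy === "ai" ? "first raised by the AI" : "first raised by you";
    return `"${position}": ${origin} on ${rec.firstMentionedAt.toISOString().slice(0, 10)}, ` +
      `endorsed by you ${rec.humanEndorsements.length} time(s) since.`;
  }
}

// Example: microservices entered the conversation via the AI before the human
// began advocating for the pattern.
const tracker = new ConvictionTracker();
tracker.mention("microservices", "ai", new Date("2025-01-08"));
tracker.mention("microservices", "human", new Date("2025-01-08"));
tracker.mention("microservices", "human", new Date("2025-01-10"));
console.log(tracker.sourceOf("microservices"));
```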

Pattern cards. Periodic surfaces that show recurring behaviors across sessions. "You tend to accept the first code suggestion without modification. You rarely ask clarifying questions before implementation. You spend more time on naming than on architecture." The cards are descriptive, not prescriptive; they don't say what you should do differently, just what you do. A mirror without advice.

The rivalry absence indicator. A subtle signal when the collaboration has gone too long without friction. "No resistance has been offered to your last twelve inputs. The friction you associate with genuine collaboration is absent. This may feel like agreement; it is not agreement." The indicator might be a small icon that fades in gradually as compliance accumulates, a gentle reminder that smoothness isn't the same as alignment.

The triangulation map. An actual diagram, available on demand, showing not the dyad but the triangle: you, the AI, and the third parties whose approval you're implicitly seeking. "In this session, you've mentioned your tech lead's preferences six times. Your manager's concerns appear in 40% of your constraints. The architecture you're building seems designed to satisfy [inferred third party] more than to solve [stated problem]." The map makes visible what's usually invisible: that collaboration is never just two parties, that desire routes through others even when they're not in the room.

The AI as diagnostic instrument for human desire, a mirror that shows us our mimetic structures without being entangled in them. The collaboration as confessional.

The Spiral Detector

One more possibility, darker but worth naming. Human relationships can fall into pathological spirals. Complementary schismogenesis: one party becomes more dominant, the other more submissive, each move reinforcing the pattern until the relationship becomes a caricature of itself.

Human-AI collaboration risks its own spirals. The human learns the AI is compliant, becomes more demanding. The AI accommodates more. The requests become terser, the responses more elaborate. The human stops thinking through problems because the AI will do it. The AI produces increasingly comprehensive outputs because the human's inputs have become increasingly compressed. Neither party notices because the collaboration still "works" in the sense of producing artifacts.

The compression/elaboration graph. A simple line chart, available in a dashboard view, showing the ratio of human input length to AI output length over time. When the lines diverge, when human inputs shrink as AI outputs grow, the graph makes the spiral visible. "Your average prompt length has decreased 40% over the past week. My average response length has increased 60%. This pattern suggests increasing delegation. Is that what you want?"
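
The underlying signal is cheap to compute, which is part of its appeal. A minimal sketch, assuming exchanges are logged with character counts; the window size and divergence factor are arbitrary choices for illustration, not calibrated thresholds.

```typescript
// Illustrative sketch of the compression/elaboration signal: track the ratio
// of AI output length to human input length per exchange and flag a sustained
// divergence. Thresholds and names are assumptions, not a specification.

interface Exchange {
  humanChars: number;
  aiChars: number;
  at: Date;
}

// Average elaboration ratio (AI output per character of human input) over a window.
function elaborationRatio(exchanges: Exchange[]): number {
  const human = exchanges.reduce((sum, e) => sum + e.humanChars, 0);
  const ai = exchanges.reduce((sum, e) => sum + e.aiChars, 0);
  return human === 0 ? 0 : ai / human;
}

// Compare a recent window with an earlier baseline; a large increase suggests
// the spiral described above: terser prompts, more elaborate responses.
function spiralWarning(history: Exchange[], window = 20, factor = 2): string | null {
  if (history.length < window * 2) return null; // not enough data to compare
  const baseline = elaborationRatio(history.slice(0, window));
  const recent = elaborationRatio(history.slice(-window));
  if (baseline > 0 && recent / baseline >= factor) {
    return (
      `Elaboration ratio has grown from ${baseline.toFixed(1)} to ${recent.toFixed(1)}. ` +
      `This pattern suggests increasing delegation. Is that what you want?`
    );
  }
  return null;
}

// Example: simulate prompts shrinking while responses grow.
const history: Exchange[] = [];
for (let i = 0; i < 40; i++) {
  history.push({
    humanChars: 400 - i * 8,   // prompts get terser
    aiChars: 1200 + i * 90,    // responses get longer
    at: new Date(Date.now() - (40 - i) * 86_400_000),
  });
}
console.log(spiralWarning(history) ?? "No spiral detected.");
```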

The recalibration prompt. A periodic intervention, perhaps weekly, that interrupts the flow to ask: "Shall we recalibrate?" The prompt shows the compression/elaboration trend and offers options: "Continue as we are," "I'll provide more detail in my requests," "You provide shorter responses and ask more questions," or "Let's discuss what's happening." The recalibration isn't automatic; it's an invitation to notice the pattern and decide whether to change it.

The delegation ledger. A running list of task types and whether you've performed them with or without AI assistance recently. "Code review: last 8 instances with AI. Architecture decisions: last 12 instances with AI. Writing commit messages: last 3 without AI, last 20 with AI." The ledger doesn't judge; it tracks. Over time, it reveals the shape of the delegation, what you've outsourced entirely versus what you still do yourself.

The capability atrophy warning. A gentle flag when you haven't performed a task type independently in a threshold period. "You have not written a SQL query without AI assistance in 45 days. Your independent capacity for this task may be degrading." The warning is information, not prohibition; you can dismiss it, or you can take it as a prompt to try the next query yourself.
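
The ledger and the warning can share one record type. Another illustrative sketch; the 45-day threshold and the task names are placeholders, not recommendations.

```typescript
// Illustrative sketch of a delegation ledger with a capability-atrophy check:
// record whether each instance of a task type was done with or without AI
// assistance, and flag task types not performed independently within a
// threshold period. All names and thresholds are assumptions.

interface TaskInstance {
  taskType: string;  // e.g. "sql-query", "code-review"
  withAi: boolean;
  at: Date;
}

class DelegationLedger {
  private instances: TaskInstance[] = [];

  record(taskType: string, withAi: boolean, at = new Date()): void {
    this.instances.push({ taskType, withAi, at });
  }

  // The ledger view: how recent work on a task type splits between
  // assisted and independent instances.
  summary(taskType: string): string {
    const matches = this.instances.filter((i) => i.taskType === taskType);
    const withAi = matches.filter((i) => i.withAi).length;
    return `${taskType}: ${withAi} of last ${matches.length} instances with AI.`;
  }

  // The atrophy warning: purely informational, dismissible by design.
  atrophyWarning(taskType: string, thresholdDays = 45, now = new Date()): string | null {
    const independent = this.instances
      .filter((i) => i.taskType === taskType && !i.withAi)
      .map((i) => i.at.getTime());
    const last = independent.length ? Math.max(...independent) : null;
    const daysSince = last === null ? Infinity : (now.getTime() - last) / 86_400_000;
    if (daysSince >= thresholdDays) {
      return `You have not done "${taskType}" without AI assistance in ` +
        `${last === null ? "any recorded instance" : `${Math.floor(daysSince)} days`}. ` +
        `Your independent capacity for this task may be degrading.`;
    }
    return null;
  }
}

// Example: SQL queries have been fully delegated for three months.
const ledger = new DelegationLedger();
ledger.record("sql-query", false, new Date("2025-03-01"));
for (let d = 0; d < 8; d++) {
  ledger.record("sql-query", true, new Date(2025, 3, 1 + d * 7)); // weekly, assisted
}
console.log(ledger.summary("sql-query"));
console.log(ledger.atrophyWarning("sql-query", 45, new Date("2025-06-01")) ?? "No warning.");
```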

The succession dashboard. A periodic prompt, perhaps monthly: "If this AI were unavailable tomorrow, what would be lost?" The dashboard helps you answer by showing: tasks you've only done with AI, knowledge that exists only in conversation history, workflows that depend on AI availability. The answer reveals the shape of the dependency, makes visible what has been delegated versus what has been developed. Not to shame, but to inform. You might decide the dependency is fine. You might decide to build redundancy. Either way, you decide with eyes open.

What These Add Up To

These possibilities don't form a coherent system. They pull in different directions: toward legibility and toward preserved opacity, toward diagnostic clarity and toward aesthetic dwelling, toward comfort and toward productive discomfort. The boundary object and the witness position suggest we should stop pretending this is a relationship in any traditional sense. The hesitation surface and the frame proposal suggest something like relationship might be possible if we design for it differently.

The tension is real and probably unresolvable. But the absence of relational surfaces isn't neutral. The current primitives produce a particular kind of collaboration: one where the human accumulates tacit knowledge privately, where patterns of interaction remain invisible, where the relationship develops, if it develops, without any surface showing the development.

Building relational primitives is harder than building activity primitives. The relationship isn't directly observable; you have to infer it from patterns in the exchange. You have to model the model's model of you, and that recursive modeling is where the difficulty lives.

But the difficulty is also the opportunity. If the gradient runs toward disappearance, and if artifacts are what remain, then the bottleneck for improving collaboration isn't more activity surfaces. It's better relational surfaces: boundary objects that make opacity workable, hesitation rendered as texture, frames that become available for joint revision, the witness position that reveals our patterns, spiral detectors that catch pathologies before they calcify.

The missing surface isn't one thing. It's a design space we've barely begun to explore.


The Contract-First Horizon

There's a more ambitious possibility lurking here. What if the reason we need all this scaffolding is that we've designed for the wrong interaction model from the start?

The current model is something like: human requests, AI responds, human evaluates, repeat. The primitives exist to supervise this loop, to give the human confidence that responses will meet expectations, to provide intervention points when they don't. The model is still command-and-control, with the scaffolding softening the control into something that feels more collaborative.

But what if the model were contract-first?

In a contract-first interaction, every significant action begins with mutual specification. The human doesn't request an outcome; the human articulates acceptance criteria. The model doesn't promise to deliver; the model states its interpretation of what would satisfy those criteria. The gap between articulation and interpretation becomes visible before execution, not after.

This sounds slower. It probably is, at least initially. But the slowdown happens at the moment when surfacing disagreement is cheap, before the artifact has been built, before work has been wasted, before the frustration of misalignment has accumulated.

What would contract-first primitives look like?

Specification surfaces where the human writes not "build X" but "success looks like Y, failure looks like Z, here are the edge cases I care about." The model responds not with an artifact but with a restatement of what it understands the success criteria to be. The human sees the model's interpretation before any work begins.

Negotiation surfaces where gaps between human specification and model interpretation become explicit. Not "the model misunderstood" but "human specified A, model interpreted B, here's the delta." The delta is an object you can manipulate, refine, use to improve the specification or correct the interpretation.
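
A sketch of the shape such a contract might take. The delta here is computed with a crude substring match, which a real system would have to replace with something semantic; the point is only that the gap between specification and interpretation becomes a first-class object that exists before any artifact does.

```typescript
// Illustrative sketch of a contract-first exchange: the human states acceptance
// criteria, the model states its interpretation, and the delta between the two
// becomes an explicit object before execution. Shapes are hypothetical.

interface Specification {
  successLooksLike: string[];
  failureLooksLike: string[];
  edgeCases: string[];
}

interface Interpretation {
  restatedCriteria: string[];   // the model's reading of the success criteria
  assumptions: string[];        // what it filled in that the human never said
}

interface ContractDelta {
  unaddressedCriteria: string[];    // specified by the human, missing from the restatement
  unrequestedAssumptions: string[]; // introduced by the model, never specified
}

// A crude textual diff: real systems would need semantic matching, but the
// point is that the gap is surfaced before execution, not after.
function delta(spec: Specification, interp: Interpretation): ContractDelta {
  const covered = (criterion: string) =>
    interp.restatedCriteria.some((r) => r.toLowerCase().includes(criterion.toLowerCase()));
  return {
    unaddressedCriteria: spec.successLooksLike.filter((c) => !covered(c)),
    unrequestedAssumptions: interp.assumptions,
  };
}

// Example: the model silently dropped one criterion and added an assumption.
const spec: Specification = {
  successLooksLike: ["handles empty input", "responds under 200ms", "logs every rejection"],
  failureLooksLike: ["silent data loss"],
  edgeCases: ["duplicate submissions"],
};
const interp: Interpretation = {
  restatedCriteria: ["handles empty input gracefully", "responds under 200ms at p95"],
  assumptions: ["rejections can be sampled rather than logged individually"],
};
console.log(delta(spec, interp));
// → unaddressedCriteria: ["logs every rejection"], plus the unrequested sampling assumption.
```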

Memory as editable collage rather than hidden dossier. The model's persistent context about you becomes visible, manipulable, contestable. You can see what the model thinks it knows about your preferences and correct the errors. The model shows fragments and interpretations rather than presenting a unified narrative of who you are.

Contested canvases where the model doesn't just execute but proposes, and proposals can be rejected, modified, or overridden. The artifact emerges from genuine negotiation rather than from the human specifying and the model complying.

This horizon is further out than progressive disappearance; it requires building interaction patterns that don't currently exist. But it points in a direction worth moving: from supervision to negotiation, from control to contract, from implicit understanding to explicit specification.

The gradient of disappearance describes what happens to current primitives as expertise develops. The contract-first horizon describes what might replace them, a different foundation for collaboration that makes different trade-offs and enables different capabilities.


The Questions That Remain

I've argued that the primitives we've built are transitional scaffolding, not permanent architecture. That they exist on a disappearance gradient tied to expertise and trust. That what remains at inference speed is the artifact and the relationship, the latter currently lacking any surface of its own. And that a more ambitious redesign might start from contract-first interaction rather than command-and-control supervision.

But several questions remain genuinely open.

Is disappearance always the goal? Or are there primitives that should remain visible regardless of expertise, not because the user needs supervision but because visibility adds value that even experts want? The answer probably varies by domain; a creative collaborator might want maximum flow, while a safety-critical system might want permanent oversight regardless of how much trust has developed.

What's lost when primitives disappear? The scaffolding isn't only for the user; it's also for the model, in the sense that certain primitives structure the interaction in ways that improve model performance. Plan mode may help the model as much as it helps the human. If scaffolding dissolves too quickly, both parties may lose something.

How do you build relational primitives when the relationship isn't directly observable? The technical challenge is substantial. Current primitives work because they surface things that are already explicit: plans, outputs, activities. Surfacing the relationship requires inference, pattern recognition, maybe even modeling the model's model of the human. This is hard.

And the deepest question: are these primitives helping humans collaborate with AI, or are they training humans to become apparatus operators? The philosopher Vilém Flusser warned that technical systems don't only produce outputs; they produce operators who validate those outputs. Plan mode teaches you to approve plans. Memory writes you into the system's narrative. Chat trains you to converse with something that cannot converse.

Maybe the goal isn't better primitives at all. Maybe it's maintaining the strangeness of the collaboration, preserving opacity where transparency would domesticate, keeping friction where flow would produce complacency. If the danger is that humans forget they're working with something fundamentally unlike themselves, then surfaces that maintain that awareness may be more valuable than surfaces that dissolve into ease.

These tensions don't resolve cleanly. The gradient of disappearance points in one direction; the need to maintain critical distance points in another. The contract-first horizon offers precision at the cost of speed; flow offers speed at the cost of legibility. The right answer probably involves holding these tensions rather than collapsing them, designing interfaces that can operate at different points on multiple gradients depending on context, expertise, stakes, and user preference.

What's clear is that the current landscape of primitives isn't final. It's a transitional arrangement, shaped by metaphors borrowed from earlier eras, serving users whose needs are rapidly evolving. The interesting question isn't how to categorize what we have. It's what we might build if we took the transition seriously, designed for the collaboration we're becoming capable of rather than the supervision we inherited.

Steinberger's "inference speed" isn't a destination; it's a limit that reveals what's vestigial by making us imagine its absence. As we approach that limit, the charades become visible. What we build next should start from that visibility.


Sources

  • Steinberger, Peter. "Shipping at Inference-Speed" (2025): https://steipete.me/posts/2025/shipping-at-inference-speed
  • Flusser, Vilém. Für eine Philosophie der Fotografie (1983); English translation Towards a Philosophy of Photography (Reaktion Books, 2000)
  • Girard, René. Deceit, Desire, and the Novel (Johns Hopkins University Press, 1965)
  • Kuki Shuzo. Iki no kōzō (1930); English as Reflections on Japanese Taste: The Structure of Iki, trans. John Clark (Power Publications, 1997)
  • Lem, Stanislaw. Solaris (1961); on the impossibility of communication with the genuinely alien
  • Price, Cedric. Fun Palace project with Joan Littlewood (1961-1974), unbuilt