Reading After Readers

Jonathan Boymal, writing about education in the AI era, argued that deep reading, historically treated as foundational to intellectual development, requires reassessment. The humanist tradition from Simone Weil through Maryanne Wolf emerged "under conditions of relative informational scarcity." Those conditions no longer hold. Students now encounter algorithmic language that "asks less to be interpreted than to be accepted." The response, Boymal suggests, is lateral reading: moving across contexts rather than diving into single texts, asking where claims come from and how meaning differs elsewhere.

The counterpoint came from Johanna Winant in Boston Review, defending close reading's ongoing power. Close reading, she argues, "grounds and extends an argument, reasoning from what we all know to be the case to what the close reader claims is the case." Her students at West Virginia University learned to build arguments from the ground up, noticing details small enough to fit under a finger. One became a nurse who writes notes for doctors using argumentative techniques learned from literature. Another used the method to write a police report about an assault "so she would be understood and believed." Close reading, in this telling, isn't literary technique—it's transferable attention to detail that works in courtrooms and hospitals.

The Family Quarrel

Look at what close and lateral reading share. Both assume an autonomous reader navigating information. Both treat texts as discrete objects to be approached with the right technique. Close reading says go deep; lateral reading says don't be naive. But both preserve the modernist figure of the individual reader making choices about what to trust and how to engage.

This is a family quarrel. The participants disagree on tactics while sharing deeper assumptions: the reader as subject, the text as object, reading as something the subject does to the object. The debate generates heat because both sides sense something is shifting, but neither quite names it. They're arguing about which room to occupy while the building's foundation moves.

The question isn't close versus lateral. It's what happens to reading when the reader—the individual, autonomous, choosing reader—starts to dissolve.

The Wardley Map of Reading

Put reading on a Wardley map. X-axis: evolution from genesis to commodity. Y-axis: visibility to user. What do you see?

The first thing you notice: "reading" isn't a single capability. It's a bundle. And bundles get unbundled.

The component stack looks something like this. At the bottom: OCR and text extraction, fully commoditized. Entity recognition, same. Summarization is a product rapidly heading toward commodity—what cost $500/hour from a McKinsey analyst is now an API call. Cross-document search is accelerating through the product phase. Argument extraction is moving from custom-built to product. Synthesis across sources sits at custom-built with early movement.

And at the top of the stack: interpretation, judgment, the "so what" that connects reading to decision. These sit at genesis. And they're not moving.

Not because they're protected or special. Because they're not capabilities in the Wardley sense. They don't have the characteristics that enable evolution along the axis. They're not standardizable, not fungible, not measurable in units. You can't buy interpretation by the yard.

This is the Wardley insight. Reading was never one thing—it was a vertically integrated stack. A professional who "read documents" was actually performing: text processing, entity extraction, summarization, pattern matching, synthesis, and judgment. The stack was bundled because humans couldn't separate the layers. You had to do all of it to do any of it.

AI unbundles the stack. The lower layers peel off into infrastructure. Summarization becomes a service. Pattern-matching becomes an agent swarm. What remains is what can't be unbundled: the part that requires context the model doesn't have, stakes the model doesn't bear, judgment the model can't be accountable for.
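The unbundling argument can be sketched as data. A minimal illustration, where the component names and phase assignments are my own rough encoding of the stack described above, not a canonical Wardley taxonomy:

```python
# Sketch of the reading stack as Wardley-map data. Components and phase
# labels are illustrative assumptions drawn from the essay, not a standard.
from enum import IntEnum

class Phase(IntEnum):
    GENESIS = 1
    CUSTOM_BUILT = 2
    PRODUCT = 3
    COMMODITY = 4

READING_STACK = {  # listed bottom of the stack to top
    "ocr_text_extraction": Phase.COMMODITY,
    "entity_recognition": Phase.COMMODITY,
    "summarization": Phase.PRODUCT,
    "cross_document_search": Phase.PRODUCT,
    "argument_extraction": Phase.CUSTOM_BUILT,
    "synthesis": Phase.CUSTOM_BUILT,
    "interpretation": Phase.GENESIS,
    "judgment": Phase.GENESIS,
}

def unbundled(stack, threshold=Phase.PRODUCT):
    """Split the stack into layers that peel off into infrastructure
    (at or past `threshold` on the evolution axis) and layers that
    remain human work."""
    infra = [c for c, p in stack.items() if p >= threshold]
    human = [c for c, p in stack.items() if p < threshold]
    return infra, human

infra, human = unbundled(READING_STACK)
print("infrastructure:", infra)
print("human work:", human)
```

The point the sketch makes concrete: "unbundling" is just a threshold moving up the evolution axis. As components cross it, they drop out of the human list; interpretation and judgment stay because, as the essay argues, they have no position on that axis at all.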

The close/lateral reading debate is arguing about the top of a stack while the bottom is being industrialized beneath it. Both camps assume a vertically integrated reader. But vertical integration is a temporary market structure. When lower layers commoditize, the integrated player gets disrupted, or redefined.

The question isn't "close or lateral." It's what reading looks like when the lower layers are infrastructure and only the top of the stack, interpretation and judgment, remains human work.

Death of the Reader

Roland Barthes declared the author dead in 1967. The meaning of a text, he argued, wasn't something encoded by the writer and decoded by the reader. Meaning was produced in the reading itself. The birth of the reader, Barthes wrote, "must be at the cost of the death of the Author."

But Foucault's apparatus analysis suggests the reader Barthes celebrated was never free. The "liberated reader" who could produce meaning was itself an institutional creation—authorized by universities, journals, pedagogies. Certain readers were empowered to interpret; others merely consumed. The death of the author didn't liberate reading. It shifted authority from one institutional formation to another.

AI doesn't complete the death of the author. It announces something else: the death of the reader.

Not the end of humans engaging with texts. But the emergence of a new reader-subject. One who is always-already supplemented. Who cannot locate where their interpretations come from. Who reads with the model's suggestions echoing in the background, who struggles to remember which insight was theirs and which was surfaced by the machine. The human-AI hybrid that processes text but doesn't quite know what it knows.

This is historical transformation, not loss. Each major shift in reading technology has produced a new reader-subject with different capacities and different blindnesses. The transitions are worth examining closely, because they reveal what's at stake.

Consider the shift from reading aloud to silent reading, which unfolded gradually between late antiquity and the early modern period. When you read aloud, your cognitive bandwidth is split: decoding symbols into sounds, coordinating breath and voice, maintaining a pace that works for listeners. The reader who vocalizes cannot pause to think without breaking the flow. Cannot reread a puzzling passage without announcing confusion. Cannot skip ahead or jump back without losing the thread for everyone else. Reading aloud is inherently social and inherently linear.

Silent reading changed what was possible. The silent reader could stop mid-sentence to think, then return without anyone noticing. Could disagree with the text without displaying disagreement. Could dwell on a difficult passage for minutes, then race through a familiar one, varying pace to match comprehension. Could have private reactions—confusion, delight, boredom, arousal—that remained entirely interior.

The deeper consequence: silent reading may have created the modern sense of interiority itself. The "inner voice" that we identify with thinking, that stream of internal monologue, likely developed alongside silent reading practices. Before silent reading, thoughts were things you spoke or heard spoken. After, you could think in text, in a private mental space that no one else could access. The silent reader was a new kind of self—more interior, more private, more autonomous. The thoughts available to this reader weren't available to the one who vocalized every word, because the very structure of thinking had changed.

The transition from manuscript to print produced another reader-subject. The manuscript reader worked with what was physically present: a single copy, hand-produced, likely containing errors accumulated across generations of copying. Each manuscript was unique. Comparing versions required being in multiple monasteries. Building on another's work meant trusting that your copy matched theirs.

The print reader entered a different world. Standardized texts meant everyone could read "the same" book—the same words in the same order, page after page, copy after copy. This enabled citation: you could reference page 47 and expect your reader to find the same passage. It enabled fact-checking across sources: hold two books side by side and compare claims. It enabled the scholarly apparatus of footnotes, bibliographies, indexes. The print reader could synthesize across texts in ways the manuscript reader couldn't imagine, because the print reader could trust that the texts held still.

Now consider the AI-supplemented reader. What capacities emerge? What blindnesses?

The supplemented reader has access to infinite retrieval. Any passage, any connection, any pattern across texts too numerous for any human to read—all available on demand. The supplemented reader can ask "what else has been written about this?" and receive answers immediately. Can request summaries, comparisons, critiques. Can have the lower layers of the reading stack handled invisibly, freeing attention for the upper layers.

But the suggestions are already there, shaping what gets noticed. When the model surfaces a connection, that connection becomes salient in ways organic insights don't. The supplemented reader processes more text but may dwell less on any particular passage. Has perfect recall via the machine but fuzzy ownership of insight—was that my thought, or did the model suggest it three prompts ago? The boundary between reader and tool blurs.

The AI-supplemented reader marks another such transition. The question isn't whether this reader is "really reading"—that's the wrong frame, the kind of question that produces more heat than light. The question is what this new reader produces, and what it cannot. What thoughts become available? What thoughts become harder to have? We don't yet know. The silent reader didn't know what was gained and lost until centuries had passed.

Centaur and Cyborg Readers

Before the collective, there's the partnership. The AI-supplemented reader isn't one configuration but many.

The Centaur. Human and machine as distinct collaborators, each contributing what they do best. The human steers; the AI augments. The centaur reader uses the model for retrieval, summarization, pattern-matching across large corpora—the lower layers of the reading stack—while reserving interpretation and judgment for themselves. There's a clear division of labor. You know which parts are yours.

The centaur configuration preserves something of the autonomous reader. You're still making choices, still exercising judgment, still owning your interpretations. The AI is a tool, sophisticated but bounded. When you write about what you've read, you can trace which insights came from the machine (it found this connection, it summarized that context) and which emerged from your own dwelling with the text. The boundary holds.

This is the comfortable vision. It's also unstable.

The Cyborg. The boundary dissolves. You can no longer locate where the machine ends and you begin. The model's suggestions shape what you notice before you're aware of noticing. Its framings become your framings. You read with its voice in your ear, and after enough sessions, you can't remember which thoughts were yours first.

The cyborg reader doesn't use AI to read. The cyborg reader reads as a human-AI hybrid, a new kind of reading-subject that didn't exist before. The question "what do I think about this text?" becomes genuinely hard to answer, because the "I" doing the thinking is distributed across wetware and software in ways that resist introspection.

This isn't necessarily loss. The silent reader couldn't introspect their way back to what reading-aloud felt like; they had become a different kind of reader. The cyborg may have access to thoughts the centaur can't reach—patterns too subtle for unaugmented attention, connections across texts too numerous to hold in biological memory. But the cyborg also loses something the centaur retains: the clear sense of authorship over their own interpretations.

The Spectrum. Most readers will move between configurations depending on context. Centaur mode for professional reading where accountability matters—you need to know what you actually concluded versus what the model suggested. Cyborg mode for exploration, for play, for the kind of reading where ownership of insight doesn't matter because you're not going to cite it anyway.

The interesting question is whether you can choose your configuration, or whether the tools choose for you. A model that's designed to be invisible, to surface suggestions so naturally they feel like your own thoughts—that model pushes you toward cyborg whether you intend it or not. A model that clearly labels its contributions, that maintains visible boundaries—that model enables centaur mode. The interface is a forcing function.

And there's a third configuration emerging, darker and less discussed: the passenger. The reader who has ceded so much to the model that they're no longer steering at all. They ask the AI what to read, accept its summaries, adopt its interpretations, move on. The human provides the eyeballs and the sense that reading is happening; the machine does the actual cognitive work. This isn't reading in any meaningful sense. But it may be common. The passenger looks like a reader from the outside. They process text, they form opinions, they can discuss what they've "read." The opinions just aren't theirs.

The centaur/cyborg/passenger taxonomy matters because the collective modes that follow—factory, swarm, collective—are populated by these individual configurations. A swarm of centaurs behaves differently from a swarm of cyborgs. A collective that includes passengers has a different epistemology than one that excludes them. The individual augmented reader is the unit; the collective is the emergent form.

Factory, Swarm, Collective

If the individual reader is dissolving into assemblage, what forms does reading-at-scale take? Three modes are emerging, each with its own logic and its own products. But the more interesting questions lie in where these modes are heading, what hybrid forms are emerging, and what strange reading-beings might exist in ten or thirty years.

The Factory

High-throughput processing. Agent swarms extract, summarize, pattern-match across document sets too large for human attention. The logic is industrial: inputs, throughputs, outputs. Texts enter; structured data exits.

Picture a law firm's due diligence room, except the room is empty. Ten thousand contracts flow through overnight. Agents extract every entity, flag every non-standard clause, cross-reference against regulatory databases. By morning, a partner receives a dashboard: 847 material risks identified, 23 requiring human review, the rest already triaged. No one read the contracts. The factory read them.

Or a pharmaceutical company preparing a regulatory submission. Every paper ever published mentioning the compound—twelve thousand articles across forty years—summarized, contradiction-mapped, cited. A literature review that would have taken a team of researchers six months, completed in an afternoon. The researchers' job is no longer reading; it's auditing what the factory read.

Or a hedge fund's morning briefing. Every earnings call from every public company, transcribed, sentiment-scored, compared against guidance. The analysts don't listen to calls anymore. They read the factory's output, looking for the anomalies the pattern-matcher flagged. The calls themselves are never heard by human ears.

Think of it as clear-cutting the forest. Maximum extraction, nothing left behind. Texts become "done": depleted, exhausted, reduced to their informational residue. The factory produces availability. Everything summarized, searchable, queryable on demand. What it cannot produce: the thing that makes you return to a passage. The clear-cut text has no roots left to regenerate meaning.

Where does this go? Some genres may simply die—not because no one writes them, but because the factory has extracted everything extractable. The legal memo, the compliance report, the background research document: these are already being written for factory consumption, optimized for extraction rather than human reading. What happens when the factory has processed the entire corpus of human writing? What's the informational residue of everything we've ever written? And what's the factory's equivalent of pollution—information sludge that clogs retrieval systems, summaries of summaries that drift from any original meaning, citation chains that loop back on themselves?

The factory will produce its own literature. Texts written explicitly for non-human readers—training data, model food, documents that exist to be processed rather than read. Humans may write these texts, but no human is the intended audience. This is already happening; it will accelerate. The factory-text is a new genre, one we don't have criticism for yet.

The Swarm

Distributed human-AI teams doing sense-making. Many readers, loose coordination, forking interpretations that compete for attention. The logic is platform: what circulates is what gets surfaced, upvoted, shared. Visibility becomes truth.

Watch a controversy unfold in real time. A government report drops—400 pages. Within minutes, someone posts a screenshot of page 247, the damning paragraph. Someone else asks their AI to find contradictions with the agency's previous statements. A thread compiles the "worst parts." A counter-thread compiles exculpatory context. By the time anyone has read the full report, the discourse has already decided what it means. The swarm read the document collectively, in fragments, faster than any individual could have read it whole.

Or BookTok, where a novel's meaning is determined by which scenes get clipped. A 90-second video of someone reacting to a passage becomes more influential than any review. The book exists as a collection of extractable moments: the twist, the spicy scene, the quotable line. Readers arrive having already seen the fragments; they read to fill in the gaps between clips they've already watched. The swarm's reading preceded and shaped the individual's.

Or the academic preprint, uploaded at midnight, discussed on Twitter by dawn. Researchers who haven't read it quote-tweet others who have—or who say they have, or whose AI summarized it for them. The paper's reputation forms before most people finish the abstract. By the time the formal peer review happens, the swarm has already rendered its verdict.

Think of it as foraging bands moving through the text-forest. They take what they need, leave some behind, occasionally replant. The swarm doesn't exhaust the text—it fragments it. A passage goes viral while the work disappears. A quote circulates stripped of context.

The swarm produces diversity of reading. But diversity governed by platform mechanics. The infrastructure determines what questions can be asked, which interpretations gain traction. Collaborative reading through AI can become consensus-formation disguised as plurality—the illusion of many perspectives converging on conclusions the system was designed to surface.

Where does this go? New professions are emerging: the swarm-coordinator who seeds interpretations and guides collective attention, the context-restorer who tracks fragments back to sources, the interpretation-shepherd who tends particular readings across platforms. What happens to expertise when everyone has access to AI reading? Does expertise migrate from "I have read more" to "I can guide the swarm better"? Do swarms develop persistent tendencies—biases, preferences, blind spots that survive across sessions and members? Can a swarm have a tradition? A memory?

And what texts get optimized for swarm reading? Short, fragmentable, high-surface-area writing designed to generate maximum engagement per word. The swarm-text is already dominant; it's the thread, the take, the hot paragraph engineered to be ripped from context and circulated. This form will continue to evolve. Texts will be written as packages of extractable fragments, each one designed to survive alone. The whole may never be read; it may not matter.

The Collective

Sustained social practice across time. Reading groups, interpretive communities, scholarly traditions, religious study circles that carry texts across generations. The logic is cultivation: slow, institutionalized, transmissible.

A philosophy seminar has been meeting every Thursday for twenty-three years. They've read the same fifty texts multiple times, each reading informed by the previous. New members are initiated slowly; it takes years to absorb the group's accumulated interpretations, its running jokes, its settled debates and live ones. The text they discuss tonight isn't the same text a newcomer would encounter alone. It's overlaid with two decades of marginalia, spoken and remembered.

Or the Talmud study circle, which has maintained continuous reading practices for millennia. The text comes wrapped in commentary, commentary on commentary, commentary on the commentary on the commentary. Each generation's reading becomes part of what the next generation reads. The text isn't separable from its reading history; the history is the text.

Or—emerging now—the Discord server that's been discussing a single author for four years, with a persistent AI that remembers every conversation. When a new member asks about a passage, the AI can cite not just the text but the server's previous debates about it. "We discussed this in March 2024. Three members thought X, two thought Y, and here's how the argument developed." The AI has become the collective's memory, more reliable than any individual member's. Is this still a collective, or something new?

Think of it as permaculture. The reading enriches the soil; new meanings grow. The text lives because it's tended by people who return to it, who teach it, who argue about it across decades. The collective produces depth and continuity. It's also under pressure.

The pace-layer problem: fast layers start dictating to slow. When the factory determines which documents matter and the swarm determines which passages circulate, the collective inherits fragments. The canon becomes what the algorithm surfaced. The seminar reads what the feed made visible.

Where does this go? Some collectives will retreat, building walls against factory and swarm. Digital monasteries that refuse algorithmic supplementation. Attention guilds that enforce slow practices. Reading circles with initiation rites—not hazing, but demonstrated commitment to dwelling with difficulty. These will be small, intense, probably weird from the outside. They'll preserve something, but they'll also risk becoming museums.

Other collectives will hybridize. Reading groups that include AI members with genuine standing—not as tools but as participants whose contributions over time have earned them a place. What does it mean when the model has been reading with your group for five years, has memory of every discussion, offers interpretations informed by that history? Is that a collective member or a very sophisticated tool? The boundary may not matter. The collective's identity may come to include its AI participants, the way a monastery's identity includes its library.

And new collectives will form around new practices. Groups that tend specific texts across decades, building up layers of annotation and interpretation that become inseparable from the text itself. Texts that speciate—forking into reader-generated variants that diverge over time, maintained by different communities, each version becoming its own tradition. The book as living organism, grown by its readers.

The Hybrids

The interesting developments will happen at the boundaries.

Factory-swarm hybrids: fully automated swarms with no human coordination, processing and fragmenting and circulating without any human in the loop. Picture a network of AI accounts that monitor new publications, extract key claims, generate takes, respond to each other's takes, and produce a discourse that no human participates in directly—but that humans then encounter as "what people are saying" about a text. The swarm's output becomes the context in which humans eventually read, if they read at all. This already exists in rudimentary form; it'll become more sophisticated. What happens when the swarm has no human members, only human-written source material?

Swarm-collective hybrids: platform-based communities with long memory, where the platform itself maintains continuity across generations of human participants. Imagine a subreddit that's been discussing a single book series for fifteen years. The original members have moved on; the current members don't know them. But the AI moderator has been there since the beginning. It remembers the Great Schism of 2027, the interpretation that got a user banned, the theory that was mocked for years and then vindicated. The AI holds more institutional memory than any human member. Is the collective the humans, or the AI that carries their history?

Collective-factory hybrids: institutions that use industrial reading to feed cultivation. A university department commissions a factory-process of the entire literature of its field—every article, every book, every conference paper. The output: a structured map of who cites whom, which debates are live, which questions have been settled, which are being asked for the first time. Graduate students receive this map on day one. They've "read" more literature than any previous generation of scholars, in the sense that they've absorbed its structure. But they've also been shaped by the factory's framings before they encountered a single primary text. Is this collaboration or colonization? Depends on who sets the questions the factory answers.

The pattern extends beyond reading. Consider what's happening to science itself.

For decades, the protein folding problem seemed destined for brute-force solution. D.E. Shaw Research tested this hypothesis to destruction. They built custom silicon, taped out their own chips, burned the molecular dynamics algorithms directly into the hardware. David Shaw once arrived at a conference by helicopter to present what they'd built—a special room outside Times Square, purpose-built machines running simulations at scales no one else could match. The implicit promise: protein folding would be solved by these special computers. Maybe the government would buy five of them. Maybe we'd fold one protein a day.

Then AlphaFold came out, and you could run it in Google Colab. On your desktop. On a GPU you already own. The problem that seemed to require specialized infrastructure was solved by machine learning on experimental data—X-ray crystallography, the accumulated residue of decades of patient laboratory work. Two well-resourced groups, two different bets. The factory that processed experimental data beat the first-principles simulation by a margin that rendered the competition absurd.

Now the same unbundling is coming for scientific cognition itself. Not modeling specific systems—not virtual cells or protein dynamics—but automating the cognitive loop: generating hypotheses, choosing experiments, analyzing results, updating the world model, generating new hypotheses. The emerging configuration has a name: labs in the loop. Agents propose experiments. Humans—or robots—run them. Agents analyze the results and propose the next round. The bottleneck, it turns out, isn't the intelligence of the first hypothesis. It's knowing what reagents are in stock, what the lead times are, what's actually feasible given the lab's inventory. Logistics, not insight.

And at the top of this stack, the same unresolved question as in reading: taste. Models can enumerate hypotheses faster than any human. They can filter through literature, check against existing data, rank by plausibility. What they struggle with is knowing whether a result is interesting. Whether a discovery matters. Whether this particular finding, if true, changes anything worth changing. The factory can process; the swarm can circulate; but knowing what's worth pursuing—that remains, for now, stubbornly human.

The Strange New Beings

Think further out. What reading-beings might exist in thirty years?

The reading-worker, whose job is not to understand texts but to provide training signal—reading to feed swarms, their attention harvested as data, paid per text processed regardless of comprehension. This is a grim possibility, but labor always follows value.

The voluntary illiterate, who refuses AI supplementation entirely as an identity position. Not unable to read, but unwilling to read with machines. A countercultural stance, maybe a political one. What community forms around this refusal?

The text-being: a text that has been read so many times, by so many agents, with so much accumulated interpretation, that it develops something like emergent personality. Not sentient, but persistent, with tendencies and patterns that feel like character. The Talmud has been this for centuries; other texts may join it.

Reader-castes based on which layers of the stack you perform manually. The artisan reader who refuses summarization. The purist who still extracts their own entities. Status markers in a world where most reading is automated.

Texts engineered to mean different things to different reader-types. The document that says one thing to the factory, another to the swarm, a third to the human who reads it slowly. Steganography of interpretation. This already exists in rudimentary form—the legal document with different audiences—but it will become an art form.

We don't know which of these will emerge, or in what proportion. But the reading ecology of 2050 will include forms we don't have names for yet. The taxonomy of factory, swarm, and collective is a starting point, not a final map.

What Else Could Be Written

Earlier I noted that the supplemented reader can ask "what else has been written about this?" and receive answers immediately. But there's a stranger capability emerging: the reader can now ask "what else could be written about this?"

This is a different operation entirely. Not retrieval but generation. Not finding existing texts but summoning potential ones. The reader who wonders what a critic might say about this passage can now receive a plausible critique—not from any actual critic, but from a model trained on critics. The reader curious about how an author might have responded to an objection can generate that response, complete with stylistic tics and characteristic moves.

What does this do to reading?

One possibility: it deepens engagement. The text becomes a starting point for infinite variations. You read a poem and ask: what if the final stanza went differently? What would this sound like in another voice? How might the author have revised this in light of later events? The generated variations aren't authoritative, but they illuminate the actual text by contrast. You understand what the author chose by seeing what they didn't choose—even if the alternatives are synthetic.

Another possibility: it dissolves the text entirely. If you can generate any response, any variation, any synthetic dialogue between authors who never met, the actual text becomes one option among infinite possibilities. Why dwell on what was written when you can explore what could have been? The text loses its authority as the thing to be read. It becomes a seed for generation, raw material for the reader's prompts.

There's a third possibility, more unsettling: the generative reader stops reading in any traditional sense. They skim the text for enough context to prompt well, then shift to generating variations, critiques, extensions, responses. The text is never engaged with on its own terms—it's immediately metabolized into prompts. This reader has perfect recall via the machine, can summon any synthetic commentary, can generate both sides of any interpretive debate. What they cannot do is sit with the text in its own silence, waiting for it to speak.

The generative capability also changes what texts get written. If readers can generate their own variations, extensions, and responses, what's the author's role? Perhaps authors become less producers of finished texts and more designers of generative seeds—texts optimized for productive mutation rather than direct consumption. The best text isn't the one that says everything but the one that enables the reader to generate everything.

This framing makes some writers nervous. It suggests the text is substrate rather than achievement. But consider: oral traditions worked this way. The story wasn't a fixed text but a pattern for regeneration. Each telling was a variation. The bard didn't recite; they regenerated from memory and context. Perhaps the generative reader is returning to something older, a reading practice that predates the fixity of print.

Or perhaps this is wishful framing. Perhaps the generative reader is simply not reading at all, and no historical precedent makes that okay.

We don't have the concepts yet to evaluate these possibilities. The vocabulary of reading—close, distant, lateral, surface, deep—assumes a text that sits still while the reader approaches. What vocabulary do we need for a text that regenerates in response to the reader's questions? For a reader whose engagement is primarily prompting rather than interpreting? For the hybrid practice of reading-and-generating that may become the default mode?

These questions don't have answers yet. But they're the questions the generative capability forces us to ask.

What Survives⚓︎

In all this speculation, one question keeps returning: what reading survives industrialization? Not what reading we wish would survive, or what reading we're nostalgic for, but what reading actually persists when the factory has processed everything processable?

The answer may be: the useless reading. The gnarled tree the loggers pass by because its wood is no good for timber.

Reading poetry badly. Rereading the same passage for the fifth time because something won't resolve. Reading to fall asleep. Reading to avoid work. Reading that circles rather than progresses, that goes nowhere, that produces no extractable insight. The factory can't optimize this because there's nothing to optimize. The swarm can't fragment it because the fragments have no value. It persists precisely because no one is trying to capture it.

This isn't the whole of reading. It's not even most of reading. But it may be what remains distinctly human about it—the residue that exists for no reason the Wardley map can show.

The close/lateral debate, which started this piece, looks different from here. Both camps were trying to preserve something about reading in the face of change. Both sensed that the reader as we knew it was under pressure. But both framed the problem as a choice of technique, when the real transformation is structural. The reading stack is unbundling. The reader-subject is dissolving into assemblage. New reading-beings are emerging that don't fit either camp's assumptions.

Here's what I think is actually happening: we're not losing reading. We're gaining readings—plural, distributed, hybrid. The factory will read everything extractable. The swarms will read everything fragmentable. The collectives will tend what remains worth returning to. And somewhere in the cracks, individuals will still read badly, uselessly, for no reason anyone can optimize.

The question isn't how to preserve the old reader. That reader—autonomous, individual, humanist—was a historical formation, not an eternal truth. It emerged from specific technologies and will give way to others. The question is what we want from the new formations. What do we want factories to extract, swarms to surface, collectives to cultivate? These are design questions now, not just cultural criticism, and they're being answered by default while the debate stays stuck on technique.

The silent reader emerged without anyone designing it. The print reader emerged from economic and technological forces no one fully controlled. But we're building the infrastructure for the next reader-subject. We're writing the algorithms that will shape what gets surfaced, the interfaces that will determine whether readers become centaurs or cyborgs or passengers. For the first time in the history of reading, the transition is partly legible, partly shapeable.

This doesn't mean we'll shape it well. The factory logic is powerful; it will strip-mine whatever isn't defended. The swarm logic is seductive; visibility will keep masquerading as truth. The pace-layer problem is real; fast will keep dictating to slow. But at least we can see the structure. At least we can name what's happening while it's happening.

The close readers and the lateral readers can both be right about technique while being wrong about the frame. The real work isn't choosing the right way to read. It's deciding what reading we want to defend, what we're willing to let the factory take, and what collectives we'll build to tend what matters. The ecology is being constructed whether we participate or not. The only choice is whether to be deliberate about it.

We won't get it right. The forces are too large, the incentives too misaligned. But we might get it less wrong if we can see clearly what we're building, and recognize that for the first time in the history of reading, we have some say in what emerges.


Sources⚓︎

  • Jonathan Boymal, "Reevaluating Deep Reading in the Age of AI" (January 2026): LinkedIn
  • Johanna Winant, "The Claims of Close Reading," Boston Review (November 2025): bostonreview.net
  • Roland Barthes, "The Death of the Author" (1967), in Image-Music-Text, trans. Stephen Heath (Hill and Wang, 1977)
  • Sam Wineburg and Sarah McGrew, "Lateral Reading: Reading Less and Learning More When Evaluating Digital Information," Teachers College Record (2019): SSRN
  • Dan Sinykin and Johanna Winant, Close Reading for the Twenty-First Century (Princeton University Press, 2025)
  • Andrew White, "Automating Science: World Models and Scientific Agents," Latent Space Podcast (January 2026): latent.space