
The Cockpit That Remembers

In the early 1900s, more than 4,000 wagon manufacturers dominated American transportation. They had infrastructure, expertise, and supply chains refined across generations. How many became car companies? One. Studebaker. One company out of four thousand made the transition.

The usual explanation is that they couldn't see the future. But this is too easy, and probably wrong. Many wagon manufacturers watched automobiles improve from expensive curiosities to practical machines. They read the same newspapers, attended the same trade shows, saw the same wealthy customers buying the new contraptions. The executives who led these companies weren't fools. Some of them understood exactly what was happening.

So why didn't they adapt?

One answer points to the question they were asking. "How do we make better wagons?" is a different question than "What is transportation becoming?" The first question has answers that look like incremental improvement: lighter frames, smoother suspensions, more elegant designs. The second question has answers that look like abandoning everything you know. If you're asking the wrong question, even clear vision won't help. You'll see the future and optimize your way into irrelevance.

But there's another answer, and it's darker. Even the manufacturers who asked the right question faced an almost insurmountable problem: their own organizations. The skills that made wagons, the factories that produced them, the relationships that sold them, the identities that sustained them: all of this represented decades of accumulated investment. An executive who said "we need to become a car company" wasn't just proposing a strategy. He was proposing that the company destroy the source of its current success to pursue something unproven.

Organizations produce what Osterwalder calls corporate antibodies: an immune response against anything that threatens the existing model. The new idea might be right. The evidence might be compelling. The alternative might be obviously failing. The antibodies kill it anyway.

Here's what makes the wagon story instructive. The conceptual problem and the organizational problem weren't independent; they reinforced each other. Asking "how do we make better wagons?" felt safe precisely because it didn't trigger the antibodies. Asking "what is transportation becoming?" felt dangerous precisely because it did. Organizations don't just happen to ask the wrong questions. They systematically generate wrong questions as a defense mechanism, because right questions threaten the organism.

Studebaker didn't survive because its leaders were smarter. It survived because a combination of factors—family ownership, financial pressure, and unusual willingness to cannibalize existing success—created conditions where the right question could be asked and acted upon. Three thousand nine hundred and ninety-nine companies lacked that combination. Most of them could see the future clearly. They just couldn't get permission from themselves to pursue it.

The knowledge management industry has its own version of this story.

The Thirty-Year Confession

Since the 1990s, corporations have poured billions into capturing institutional expertise. The premise seemed reasonable: valuable knowledge exists in employees' heads; document it before they retire; make it searchable and transferable.

The projects failed. Not partially, not in specific implementations; they failed as a category. The databases filled with documents nobody read. The expert systems captured neither the nuance nor the pattern recognition of actual experts. The wikis went stale within months of launch.

The usual explanations blamed tooling, incentives, or culture. If only the software were better. If only people were rewarded for contributing. If only leadership prioritized knowledge sharing.

These explanations have the same problem as "the wagon manufacturers couldn't see the future." They're too easy, and they locate the failure in execution rather than conception. What if the failure wasn't implementation? What if it was a category error about what knowledge is and where it lives?

Where Knowledge Actually Lives

In 1995, the cognitive scientist Edwin Hutchins published a study of navigation aboard a U.S. Navy vessel that should have changed how organizations think about expertise. It mostly didn't.

No single person on the bridge knew how to navigate the ship. The bearing takers understood their instruments. The plotters understood the charts. The navigator understood the destination. But the navigation itself—the actual cognitive work of guiding a ship through water—existed in none of them. It existed in their coordination: the spoken numbers, the marks on paper, the spatial arrangement of bodies around instruments, the procedures refined through decades of accumulated error. The bridge knew how to navigate; no individual aboard did. Hutchins made the same point about airline cockpits: the cockpit as a whole remembers its speeds, though no single pilot holds that knowledge.

Traditional knowledge management assumed expertise resided in individual heads, stored in something like propositional form: facts, rules, procedures that could be articulated and written down. Extract it through interviews. Document it in manuals. Transfer it through training.

But this isn't how expertise works.

The organizational theorists Chris Argyris and Donald Schön identified the gap decades ago. What people say they do (their "espoused theory") differs systematically from what they actually do (their "theory-in-use"). This isn't hypocrisy. People genuinely cannot see their own working practices. The gap is invisible to those living in it.

Ask a senior claims processor how she handles ambiguous cases. She'll give you a logical procedure, a decision tree, a set of criteria. Watch her work for a week, and you'll see something different: pattern recognition honed across thousands of cases, attention drawn to features she couldn't name, judgments made in seconds that her verbal explanation would take minutes to justify. The verbal explanation isn't a description of the expertise; it's a post-hoc rationalization constructed to satisfy the question.

The Corruption of Asking

Gary Klein found the same thing studying firefighters, nurses, and military commanders. Experts don't analyze options and select the optimal choice. They pattern-match to prior experience, recognize a workable response, mentally simulate it, and act. The decision process happens before conscious deliberation. When asked to explain, they generate a logical reconstruction that sounds like analysis but describes nothing that actually occurred.

Daniel Kahneman put it directly: "When we ask experts to explain their decisions, they give us retrospective rationalizations, not the actual process. The explanation sounds logical and procedural; the reality was pattern-matching and gut feel."

There's a deeper problem still. In the research on clinical prediction that Kahneman drew on, something strange emerged. Take a group of experienced clinicians. Have them make judgments repeatedly: diagnoses, predictions, assessments. Then build a simple statistical model that predicts not the actual outcomes but what each clinician would say. A model of the expert, not the phenomenon. That model outperforms the expert it was trained on.

The model extracts what's consistent in the clinician's judgments and discards what's noise. It sees the pattern in the expert's decisions that the expert cannot see in herself. Human judgment contains signal, but it's buried in variability; the same case presented twice gets different answers depending on whether the clinician is hungry, tired, distracted, or primed by the previous case.

"One of the major limitations on human performance is not bias," Kahneman observed. "It is just noise. And there is an awful lot of it."

This reframes the knowledge extraction problem entirely. The expertise is real. The pattern recognition is genuine. But it's inaccessible through articulation—not because experts are hiding it, but because the expertise lives below the threshold of conscious access. And even what experts can articulate is corrupted by noise they cannot perceive.

This is why asking experts to document their knowledge produces artifacts that don't transfer expertise. You're not capturing the signal. You're capturing a story about the signal, one the expert constructed specifically because you asked.

What Observation Captures

Return to Hutchins on the bridge. The knowledge of navigation wasn't in any head or any document. It was distributed across people, tools, and practices, constituted by their ongoing coordination.

If you wanted to capture how that ship navigated, interviewing individuals would fail. Each would give you a partial, distorted fragment. The quartermaster doesn't know what the navigator knows. The navigator can't articulate the embodied skill of the bearing taker. And none of them can see the coordination patterns that emerge from their interaction. But observation captures what articulation cannot.

Watch the work. Record the traces. See where attention flows, what gets consulted, how exceptions get handled, which patterns repeat across contexts. The theory-in-use becomes visible not through asking but through watching.

This is what changes about the tacit knowledge problem.

The knowledge management movement failed because it relied on articulation: ask experts, document answers, hope the documentation transfers something real. But articulation corrupts the data. You get espoused theory, not theory-in-use. You get post-hoc rationalization, not pattern recognition. You get noise-laden individual accounts, not the distributed cognition that actually produces outcomes.

Systems that observe work rather than interrogate workers bypass this corruption entirely. They don't ask the surgeon what she notices; they track where her attention goes. They don't ask the analyst how he evaluates opportunities; they watch which factors predict his recommendations. They don't ask the team how they coordinate; they see the coordination in the traces of their work.

What was invisible becomes legible—not through better interviews, but by abandoning interviews altogether.

A Learning Engine at Three Layers

The architecture that emerges from this insight operates across three distinct layers of organizational cognition. Each solves a different part of the tacit knowledge problem; together they create something no documentation project ever could.

Natural Language

When people explain their reasoning in conversation—to colleagues, to systems, to themselves—they produce a different kind of artifact than formal documentation. It's messier, more contextual, closer to actual thinking.

Consider what happens when a loan officer talks through a difficult case with a colleague. She doesn't recite the credit policy. She says things like "this one feels like the Henderson situation from last year, but the cash flow pattern is different." She's revealing her actual reasoning: the analogies she draws, the features she weights, the exceptions she's learned to recognize. A question asked in a moment of genuine uncertainty reveals what the asker doesn't know. An explanation given to a junior colleague reveals what the senior person thinks actually matters, stripped of the procedural language they'd use in formal documentation. Chat and conversation capture reasoning in motion, not reasoning dressed up for the permanent record; the messy, contextual, half-formed thoughts are closer to theory-in-use than any policy manual.
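One crude way to operationalize this is simply to notice which prior cases get invoked when people reason out loud. A toy sketch, with invented messages and a deliberately naive pattern match:

```python
import re
from collections import Counter

# Toy mining of conversational traces for the analogies experts reach for.
# The messages and the regex are illustrative; real chat logs would need
# far more robust handling than a single pattern.
messages = [
    "This one feels like the Henderson situation from last year, but the cash flow pattern is different.",
    "Reminds me of the Alvarez file: same collateral gap, different industry.",
    "Looks like the Henderson situation again, honestly.",
]

pattern = re.compile(r"(?:[Ff]eels like|[Ll]ooks like|[Rr]eminds me of) (?:the )?([A-Z]\w+)")

analogies = Counter()
for msg in messages:
    analogies.update(pattern.findall(msg))

# Which prior cases anchor this team's reasoning?
for case, count in analogies.most_common():
    print(f"{case}: referenced {count} time(s)")
```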

Work Patterns

Connectors into the systems where work actually happens—documents created, data queried, processes executed, exceptions handled—reveal theory-in-use directly. Not what people say they do, but what the logs show they did.

Consider what happens when you connect data streams that were never designed to talk to each other. Osterwalder tells a story about skincare companies sitting on buying behavior data and consumer preference data, trying to figure out how AI will change their industry. They're asking the wrong question. The real unlock is connecting that data to healthcare records: seeing which skin conditions correlate with which purchasing patterns, which ingredients actually work for which underlying conditions.

Estée Lauder has an $8 billion skincare business and no access to the health data that will determine whether its products actually work for specific conditions. The company that connects those data streams—buying patterns, skin conditions, ingredient efficacy, individual outcomes—will render traditional skincare positioning obsolete. Not by making better moisturizer, but by seeing what was always there and never visible. Traditional skincare companies don't have healthcare data and aren't asking how to get it. Someone will.

The same pattern applies inside organizations. The analyst who claims to weight financial metrics equally but whose actual queries pull customer sentiment data three times more often. The team that has an official approval process but whose email patterns show the real decisions happen in a different meeting entirely. The organization that espouses one strategy but whose resource allocation tells a different story. Work patterns don't lie. They can't. They're not constructed to satisfy a question; they're traces left by activity that happened regardless of whether anyone was watching.
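The simplest version of this is barely analysis at all. Given access to query logs, comparing espoused weighting against observed pulls is a counting exercise; the log format, names, and numbers below are invented for illustration:

```python
from collections import Counter

# Toy comparison of espoused weighting vs. observed behavior in query logs.
# The log format, analyst name, and source names are invented.
query_log = [
    {"analyst": "jdoe", "source": "customer_sentiment"},
    {"analyst": "jdoe", "source": "customer_sentiment"},
    {"analyst": "jdoe", "source": "customer_sentiment"},
    {"analyst": "jdoe", "source": "financial_metrics"},
    # ...in practice, thousands of rows pulled from warehouse audit logs
]

espoused = {"financial_metrics": 0.5, "customer_sentiment": 0.5}  # what the analyst reports

counts = Counter(row["source"] for row in query_log if row["analyst"] == "jdoe")
total = sum(counts.values())

for source, claimed in espoused.items():
    observed = counts.get(source, 0) / total
    print(f"{source}: espoused {claimed:.0%}, observed {observed:.0%}")
```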

Collective Intelligence

The most valuable knowledge often exists at a level no individual can see: patterns that emerge across hundreds of cases, approaches that work in one context that might transfer to another, anomalies that signal either emerging problems or quiet innovations.

Booking.com runs what may be the most experiment-dense culture in corporate history. Every A/B test, every hypothesis, every result gets logged centrally. Not need-to-know; right-to-know. A junior intern in Amsterdam can look up what a VP in Singapore tested last quarter. The explicit purpose is institutional learning: what have we tried, what worked, what didn't, and why do we think so.

But the deeper value isn't the individual experiments. It's the patterns across experiments. Which kinds of hypotheses consistently outperform expectations? Which user segments behave differently than the models predict? Where do the confident bets fail and the long shots succeed? No single product manager sees enough cases to notice these patterns. The collective does.
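The mechanics are not exotic: a central registry of experiments, with predicted and observed effects, aggregated by hypothesis category to surface exactly these patterns. The schema, categories, and numbers below are invented for illustration, not Booking.com's actual system:

```python
from collections import defaultdict
from dataclasses import dataclass

# Toy central experiment registry, aggregated by hypothesis category.
@dataclass
class Experiment:
    owner: str
    category: str          # e.g. "pricing", "search_ranking", "copy"
    predicted_lift: float  # what the team expected
    observed_lift: float   # what the A/B test measured

registry = [
    Experiment("team_ams", "pricing", 0.020, 0.001),
    Experiment("team_sgp", "pricing", 0.030, -0.002),
    Experiment("team_ams", "copy", 0.005, 0.012),
    Experiment("team_sgp", "copy", 0.004, 0.009),
    # ...every test, every team, logged centrally
]

surprise = defaultdict(list)
for e in registry:
    surprise[e.category].append(e.observed_lift - e.predicted_lift)

# Positive means a category outperforms its own predictions; negative
# means confident bets in that category keep disappointing.
for category, deltas in surprise.items():
    print(f"{category}: mean surprise {sum(deltas) / len(deltas):+.3f}")
```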

This is distributed cognition made operational. The organization develops judgment that no individual possesses—not because any person got smarter, but because the system learned to see across its own history.

What Accumulates

Each layer captures something the others miss. Conversation captures intention, reasoning, and the analogies experts actually use. Work patterns capture behavior, practice, and the theory-in-use that diverges from espoused theory. Collective intelligence captures what emerges from scale: patterns visible only when hundreds of cases accumulate.

And unlike documentation projects that go stale the moment they're completed, a learning engine accumulates continuously. Every conversation adds signal. Every work pattern refines the model. Every outcome feeds back into collective understanding. The system gets smarter the more it's used, because usage is learning.

This is the inversion of knowledge management. The old model extracted knowledge from work, compressed it into documents, and hoped the documents would transfer something real. The new model observes work as it happens, builds understanding from behavioral traces, and keeps learning as the organization keeps working. Nothing gets extracted because nothing needs to be. The knowledge stays in the system; the system just becomes able to see it.

The Knowing-Doing Gap

Understanding why tacit knowledge resists extraction was never the hard part. Hutchins published in 1995. Argyris and Schön were writing in the 1970s. The cognitive science has been clear for decades. Anyone who wanted to understand why documentation-based knowledge management couldn't work had access to the explanation.

And yet documentation-based knowledge management continued for thirty years.

This is where the organizational problem reasserts itself. Even if you understand the epistemology perfectly—even if you can articulate exactly why observation succeeds where articulation fails—you still face the antibodies. You commission a knowledge management project. The answer comes back: stop asking experts to document; build observational infrastructure; change how you think about where knowledge lives. That answer gets killed. Not because it's wrong, but because it doesn't look like a knowledge management project.

The wagon manufacturers faced the same bind. Some of them genuinely understood that automobiles were the future. But understanding didn't grant permission. The executive who proposed "let's become a car company" was proposing something that would cannibalize current revenue, require different skills, alienate existing suppliers, and threaten the jobs of everyone who had built their careers on wagon-making. The rightness of the proposal made it more threatening, not less.

This is the cruel logic of organizational immunity. The more correct an insight, the more disruptive its implications. The more disruptive its implications, the stronger the immune response. Organizations don't kill bad ideas; bad ideas die on their own. Organizations kill good ideas that threaten the existing order.

So we're left with an uncomfortable position. The observational approach to tacit knowledge is almost certainly right. The cognitive science supports it. The failure of alternatives confirms it. But being right has never been sufficient. The wagon manufacturers who correctly understood the automobile faced the same antibodies as those who didn't. Understanding the problem doesn't dissolve it.

What Makes This Moment Different

If correct understanding isn't sufficient, what is?

The wagon manufacturers who adapted—Studebaker, and arguably the few others who escaped into adjacent businesses—shared something beyond correct analysis. They faced external pressure that exceeded internal resistance. Studebaker was in financial trouble. Billy Durant had already exited the wagon business and had nothing to protect. The Fisher brothers saw body-making as a new opportunity rather than a threat to an existing one.

External forcing functions matter not because they provide insight (you might already have the insight) but because they change the calculus. When the threat from outside exceeds the threat from inside, proposals that would normally die can survive. The antibodies don't disappear; they get overruled.

This is what's different about the current moment for organizational knowledge.

The AI disruption isn't gentle. Consulting firms are already shedding headcount. The MIT study showing 95% of generative AI projects failing sounds like evidence of a bubble until you notice it's the same failure rate as knowledge management—and knowledge management's failures didn't prevent the underlying shift from happening. They just meant incumbents missed it.

Companies that figure out how to observe their own distributed cognition will develop institutional judgment that compounds over time. Companies that don't will watch their expertise walk out the door, fail to transfer through documentation, and gradually erode into commodity providers of whatever tasks remain unautomated.

The competitive pressure doesn't make observation easy. It doesn't dissolve the antibodies or magically grant organizational permission. What it does is raise the cost of inaction high enough that internal resistance becomes the smaller threat. When your competitors are building learning engines and you're still running documentation projects, the argument for change gets easier to make.

The Redistribution of Cognition

Hutchins's cockpit didn't just contain distributed cognition—it constituted it. The arrangement of instruments, the procedures, the communication protocols weren't channels through which knowledge flowed. They were the knowledge. Change the arrangement and you change what the system knows.

Applied AI introduces a new element into organizational cognition. Not a tool that assists individuals, but a layer of the system itself: one that remembers what no individual remembers, sees patterns no individual sees, and makes legible what was always there but never visible.

The pessimists about tacit knowledge were right that you cannot extract it through articulation. The knowledge resists formalization because formalization was the wrong operation. You cannot compress distributed cognition into a document any more than you can compress a jazz quartet into a score.

But observation is not extraction. It doesn't remove knowledge from the system that produces it. It makes that system reflective—able to see its own patterns, learn from its own history, build on what it has already figured out.

The thirty-year knowledge management failure contained two problems, not one. The first was epistemological: asking the wrong question about where knowledge lives and how to capture it. The second was organizational: producing immune responses against answers that threatened existing models. Both problems were real, and they reinforced each other. Wrong questions felt safe because they didn't trigger antibodies. Right questions felt dangerous because they did.

What's changed isn't that we've suddenly developed correct understanding—the understanding has been available for decades. What's changed is the external pressure. The wagon manufacturers who saw automobiles clearly but couldn't get permission from themselves to adapt faced a slow-moving threat. They had years to watch their irrelevance approach. The knowledge management parallel is faster. The organizations that build learning engines will compound their advantages. The ones that don't will discover that documented expertise transfers nothing, and that the experts themselves are increasingly optional.

This doesn't guarantee success. Having the right question and organizational permission and competitive pressure still requires execution. Studebaker survived the wagon-to-car transition and then failed anyway, decades later, for different reasons. But Studebaker had a chance. The other 3,999 didn't.

The observational approach to organizational knowledge isn't a guarantee. It's a chance. For most organizations, that's more than they currently have.


Many of the ideas in this piece draw from The AI Revolution & Business Model Transformation, in which Osterwalder, Yu, and Choudary discuss business model transformation in the age of AI. The examples about corporate antibodies, the skincare data problem, and the Booking.com experimentation culture all originate there. What I've tried to do is thread their insights about business model innovation together with the cognitive science literature on tacit knowledge—Hutchins on distributed cognition, Argyris and Schön on espoused theory versus theory-in-use, Klein on naturalistic decision-making, Kahneman on noise. The synthesis is mine; the raw materials belong to them.