Your AI agents are failing because you never onboarded them
Every company has an onboarding process for new engineers. Documentation, architecture walkthroughs, coding standards, domain context. A senior hire who already knows how to code still needs weeks to understand your system before producing anything useful.
Nobody does this for agents. They get a prompt and a prayer.
That is why most agent deployments drift, hallucinate, and produce code that needs to be thrown away. The model was never the bottleneck. The context was.
The expensive lesson
A GitClear analysis of 211 million lines of code found that AI tool adoption increased output by 10% — while collapsing quality metrics across the board.
Refactoring dropped 60%. Copy-paste code rose 48%. Code churn jumped 44%. More code, less understanding.
Anthropic’s research confirmed the pattern from the other side: agent drift — where an AI slowly loses coherence over long tasks — is almost entirely a context management problem, not a reasoning problem. Their fix was not better models. It was structured context: compaction, note-taking, and architectures that manage what enters the window.
The industry’s instinct is to wait for smarter models. The evidence says the fix is already here, and it has nothing to do with intelligence.
Prompt engineering versus context engineering
Prompt engineering teaches you to ask better questions. Context engineering teaches you to build better environments.
The distinction matters. A perfectly crafted prompt inside a broken context still produces garbage. A mediocre prompt inside a rich, structured context produces something useful — reliably, session after session.
Context engineering is the discipline of designing what information reaches the model, when, and in what structure. Three layers: what enters the window (selection), how it is organised (architecture), and when it gets refreshed (lifecycle). Most people only think about the first layer. They dump everything into the prompt and wonder why the model hallucinates.
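As a rough sketch of those three layers (a toy illustration; the function names are this post's shorthand, not any library's API):

```python
# Toy illustration of the three layers. Names and structure are illustrative only.

def select(task: str, knowledge: dict[str, str]) -> dict[str, str]:
    """Selection: decide which pieces of knowledge enter the window at all."""
    words = task.lower().split()
    return {name: text for name, text in knowledge.items()
            if any(word in name for word in words)}

def organise(selected: dict[str, str]) -> str:
    """Architecture: give the selected pieces a structure the model can navigate."""
    return "\n\n".join(f"## {name}\n{text}" for name, text in sorted(selected.items()))

def refresh(knowledge: dict[str, str], name: str, updated: str) -> dict[str, str]:
    """Lifecycle: replace stale entries so the next selection starts from reality."""
    return {**knowledge, name: updated}
```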
Martin Fowler’s team, Anthropic, and researchers publishing at ICLR 2026 have all converged on the same term for this discipline. It is not new. But it is newly recognised as the thing that separates agents that compound from agents that collapse.
The four ways onboarding fails
Every agent failure I have seen maps to one of four context failures. They are onboarding failures by another name.
Context pollution. Hallucinated or outdated information enters the window and compounds. The model trusts what is in its context — if bad data sits there, everything downstream inherits the error.
Onboarding a new hire with documentation that was last updated two years ago.
Context distraction. Too much irrelevant information drowns the relevant signal. A 200K token window does not help if 180K of it is noise. The model treats everything in the window with roughly equal weight.
Handing a new engineer your entire Confluence space and saying “read all of it before you start”.
Context confusion. Too many tools, too many conflicting instructions. Fifty tool definitions in the system prompt mean the model spends attention on capabilities it does not need for the current task.
The onboarding deck that covers every team’s workflow instead of the one that matters.
Context clash. Contradictory information in the same window. Your project instructions say “use pnpm” but the README says “use npm”. The agent picks one randomly — or worse, alternates between them across sessions.
Two senior engineers giving a new hire opposite advice on the first day.
Every one of these has the same fix: do not dump, curate.
What agent onboarding actually looks like
A recent arXiv paper tracked how teams manage agent context in production. One project’s codified context grew to 26,000 lines, more context than code in some modules. Over half of the project’s effective agent specification was context, not instructions.
At Interlusion, every project has a context architecture that treats agent onboarding as infrastructure:
Teaching documents, not config files. The project instructions read like onboarding docs for a senior engineer who already knows how to code but does not know the codebase. Architecture, patterns, constraints, the reasoning behind decisions — not just rules, but why the rules exist.
Progressive disclosure. Not everything needs to be in the window at once. A routing document under 200 lines points the agent to detailed topic files: patterns, decisions, debugging notes. The agent loads what the current task requires instead of everything that has ever been written (see the sketch after this list).
Separation of concerns. Architectural context in one place, coding conventions in another, domain knowledge in a third. The agent loads the layer it needs. This mirrors how human teams organise knowledge — you do not hand a backend engineer the design system documentation when they are fixing a database migration.
Living maintenance. The agent updates its own context as a side effect of working. When it encounters a new pattern, solves a recurring problem, or discovers a constraint — that knowledge gets captured. The context architecture improves with every session instead of drifting further from reality.
The pattern is clear: more context architecture, fewer instructions. Teach the environment. The agent learns by reading, not by being told.
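Here is a minimal sketch of the loading and write-back steps, assuming a hypothetical context/ directory with a short index.md routing file plus per-topic files. The paths, names, and keyword matching are illustrative, not a prescription.

```python
from pathlib import Path

CONTEXT_DIR = Path("context")  # hypothetical layout: context/index.md plus topic files

def load_context(task_keywords: set[str]) -> str:
    """Progressive disclosure: always load the short routing doc,
    then only the topic files whose names match the task at hand."""
    parts = [(CONTEXT_DIR / "index.md").read_text()]
    for topic_file in sorted(CONTEXT_DIR.glob("*.md")):
        if topic_file.name == "index.md":
            continue
        if any(keyword in topic_file.stem for keyword in task_keywords):
            parts.append(topic_file.read_text())
    return "\n\n---\n\n".join(parts)

def capture_learning(topic: str, note: str) -> None:
    """Living maintenance: knowledge discovered while working is appended
    to the relevant topic file, so the next session starts from reality."""
    with (CONTEXT_DIR / f"{topic}.md").open("a") as f:
        f.write(f"\n- {note}\n")

# Example: a migration task loads the routing doc plus database notes only,
# and a lesson learned during the session is written back for the next one.
# window = load_context({"database", "migrations"})
# capture_learning("migrations", "run schema changes before backfills")
```

The mechanism matters less than the shape: a small, always-loaded map, topic files loaded on demand, and a cheap way for the agent to write back what it learned.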
The real discipline
The companies that fail at AI adoption will not have the worst models. Models are commoditising fast. They will have the worst context — unstructured knowledge, contradictory instructions, and agents that start every session from zero.
The companies that succeed will treat context like infrastructure. Maintained, versioned, tested. Designed with the same rigour they apply to the systems their agents build.
Prompt engineering was writing better emails. Context engineering is building the office the agent works in.
The question is not whether your AI is smart enough. It is whether you gave it a world worth reasoning about.
Want to build a context architecture that makes your agents compound instead of drift? Let’s talk.