I work on research at Chroma, and I just published our latest technical report on context rot.
TLDR: Model performance is non-uniform across context lengths, even for state-of-the-art models, including GPT-4.1, Claude 4, Gemini 2.5, and Qwen3.
This highlights the need for context engineering. Whether relevant information is present in a model’s context is not all that matters; what matters more is how that information is presented.
Here is the complete open-source codebase to replicate our results: https://github.com/chroma-core/context-rot
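For a rough sense of the kind of measurement involved, here's a minimal sketch, not the report's actual harness: bury one fact in filler of increasing length and track how often the model recovers it. The model name, padding sizes, and thresholds are all illustrative.

```python
# Minimal sketch (assumed setup, not the report's actual harness): bury one
# fact in filler of varying length and track how often the model recovers it.
# Requires OPENAI_API_KEY; model name and padding sizes are illustrative.
from openai import OpenAI

client = OpenAI()

FACT = "The access code for the vault is 7319."
QUESTION = "What is the access code for the vault?"
FILLER = "An unremarkable sentence about the weather. " * 50

def accuracy_at_length(n_filler_blocks: int, trials: int = 10) -> float:
    # Place the fact in the middle of the filler.
    context = FILLER * n_filler_blocks + FACT + FILLER * n_filler_blocks
    hits = 0
    for _ in range(trials):
        resp = client.chat.completions.create(
            model="gpt-4.1",
            messages=[{"role": "user", "content": f"{context}\n\n{QUESTION}"}],
        )
        hits += "7319" in (resp.choices[0].message.content or "")
    return hits / trials

for blocks in (1, 8, 64, 512):  # roughly exponential context growth
    print(blocks, accuracy_at_length(blocks))
```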
Gemini loses coherence and reasoning ability well before the chat hits its context limit, and according to this report it is the best model on several dimensions.
Long story short: context engineering is still king, and RAG is not dead.
RAG was never going away; the people who say it is are the same types who say software engineers will be totally replaced by AI.
LLMs will need RAG one way or another; you can hide it from the user, but it still has to be there.
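Concretely, "hidden" RAG can be as small as this sketch, where the user only ever sees ask(). It uses Chroma for retrieval; answer_with_llm is a hypothetical stand-in for whatever model client you use.

```python
# Sketch of RAG hidden behind a plain ask() function: only the top-k
# relevant chunks ever reach the model's context, no matter how large
# the corpus grows.
import chromadb

client = chromadb.Client()
docs = client.create_collection("docs")
docs.add(
    ids=["1", "2", "3"],
    documents=[
        "Invoices are archived under /finance/2024.",
        "The staging database resets every Sunday.",
        "Deploys require two approvals on the release branch.",
    ],
)

def ask(question: str) -> str:
    # Retrieve a handful of relevant chunks instead of the whole corpus,
    # so the prompt stays short regardless of corpus size.
    hits = docs.query(query_texts=[question], n_results=2)
    context = "\n".join(hits["documents"][0])
    prompt = f"Answer using only this context:\n{context}\n\nQ: {question}"
    return answer_with_llm(prompt)  # hypothetical: your model client here
```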
Yep, it can decohere really badly with bigger contexts. It's not only context related, though: sometimes it loses focus early on in a way that makes it impossible to get it back on track.
Yep. The easiest way to tell someone has no experience with LLMs is if they say “RAG is dead”
> someone has no experience with LLMs
That's 99% of coders. No need to gatekeep.
Gemini loses track of its context the longer that context gets: I often ask it to summarize our discussion for an outside audience, and it will reference ideas or documents without introducing them, via anaphora, as if the outside world already knew the context.
Cursor lifted the "Start a new chat" limitation on Gemini, and I'm actually now enjoying keeping longer sessions within one window, because it's still very reasonable at recall but doesn't need to restate everything each time.
Can you elaborate on how prompts enhanced with RAG avoid this context pollution? I don't understand why that would be the case.
"Compactions" are just reducing the transcript to a summary of the transcript, right? So it makes sense that it would get worse because the agent is literally losing information, but it wouldn't be due to context rot.
The thing that would signal context rot is when you approach the auto-compact threshold. Am I thinking about this right?
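For what it's worth, naive compaction is roughly this sketch (the threshold and helper functions are illustrative, not any agent's real defaults), which makes the lossiness obvious:

```python
# Naive compaction, roughly: once the transcript nears the threshold,
# replace the whole history with a one-shot summary. Anything the
# summarizer drops is gone for good. Threshold and helpers are illustrative.
THRESHOLD_TOKENS = 150_000

def maybe_compact(messages: list[dict], count_tokens, summarize) -> list[dict]:
    if sum(count_tokens(m["content"]) for m in messages) < THRESHOLD_TOKENS:
        return messages  # under the threshold: keep everything verbatim
    summary = summarize(messages)  # the lossy step
    return [{"role": "user", "content": f"Summary of prior session:\n{summary}"}]
```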
Yes, but in agentic workflows it's possible to do more intelligent compaction.
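For example, something along these lines: keep pinned artifacts (task spec, key file contents) and the most recent turns verbatim, and summarize only the stale middle. All names here are illustrative, not any particular agent's implementation.

```python
# One sketch of smarter compaction: preserve pinned messages and recent
# turns verbatim; only the unpinned older turns get summarized.
def smart_compact(messages, summarize, keep_recent=10):
    if len(messages) <= keep_recent:
        return messages
    head, tail = messages[:-keep_recent], messages[-keep_recent:]
    pinned = [m for m in head if m.get("pinned")]      # keep verbatim
    stale = [m for m in head if not m.get("pinned")]   # summarize these
    summary = {"role": "user",
               "content": "Summary of earlier work:\n" + summarize(stale)}
    return pinned + [summary] + tail
```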
I feel like the optimal coding agent would do this automatically: collect and (sometimes) summarize the required parts of the code, MCP responses, repo maps, etc., then combine the results into a new message in a new 'chat' containing all the required parts and nothing else. It's basically what I already do with aider, and the performance (in situations with a lot of context) feels way better than any agentic / more automated workflow I've tried so far, but it's a lot of work.
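Roughly, the assembly step I mean looks like this; the helper functions stand in for however you collect each piece, and the size cutoff is arbitrary:

```python
# Build one self-contained opening message for a fresh chat: task, repo
# map, and only the needed files, summarizing anything bulky. read_file,
# summarize, and repo_map are placeholders for your own collection steps.
def assemble_fresh_context(task, paths, read_file, summarize, repo_map):
    parts = [f"Task: {task}", f"Repo map:\n{repo_map()}"]
    for path in paths:
        text = read_file(path)
        body = text if len(text) < 4000 else summarize(text)  # arbitrary cutoff
        parts.append(f"{path}:\n{body}")
    return [{"role": "user", "content": "\n\n".join(parts)}]
```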
Claude Code tries, and it seems to be OK at it. It's hard to tell, though, and it definitely feels like you sometimes absolutely have to quit out and start again.
Try using /clear instead of quitting. It doesn't clear the scrollback buffer, but it does clear the context.
AppMap's AI agent does this very well.
Have you tried NotebookLM? It basically does this as an app in the background (chunking and summarising many docs), and you can chat with the full corpus using RAG.