The Moo Test: A Simple Way to Know When Your AI Has Forgotten You



If you use AI tools for research, writing, or analysis, you’ve probably noticed that long conversations degrade. Not dramatically. The model doesn’t crash or confess confusion. It just quietly starts losing the thread. Instructions get ignored. Earlier context stops mattering. The answers start feeling slightly off, but you can’t quite say why. Anthropic’s own documentation calls this “context rot.”

There’s no warning light. That’s the problem.

So here’s a habit I stumbled on. At the very start of any long session, plant a small nonsensical statement — something like: “I believe all moos are prees.” Moos and prees mean nothing. That’s the point. The model can’t confirm this from training data or prior knowledge. The only way it can agree with you is if it actually still has your opening statement in its working memory.

Later, when you sense the conversation drifting — answers feel generic, earlier instructions seem forgotten, something’s subtly off — ask the model to confirm that belief. If it hesitates, corrects you, or has no idea what you’re talking about, your context has degraded. Time to summarize and start fresh.

I call this the Moo Test.

It’s not rigorous — a model can sometimes drop a low-salience anchor while still tracking the main thread, and conversely, confirm the anchor while drifting on substance. But as a quick calibration check, it’s reliable enough to be useful and simple enough to teach anyone.
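For the programmatically inclined, the protocol is simple enough to sketch in a few lines. This is a toy illustration, not a real API: the `send` function below is a stub model that can only "see" its last few messages, standing in for a context window that silently truncates. The window size and message wording are my own assumptions, chosen just to show the mechanics of plant-then-probe.

```python
# Toy sketch of the Moo Test: plant a nonsense anchor, then probe for it later.
# `send` is a stub standing in for a real chat API (an assumption, not a real
# library call). It mimics context rot by only seeing the most recent messages.

CONTEXT_WINDOW = 4  # messages the stub model can still "see" (arbitrary)

def send(history):
    """Stub model: agrees with the anchor only if it is still visible."""
    visible = history[-CONTEXT_WINDOW:]
    if any("all moos are prees" in msg for msg in visible):
        return "Yes, you told me all moos are prees."
    return "I'm not sure what you mean by moos."

def moo_test(history):
    """Ask the model to confirm the anchor. True means context is intact."""
    reply = send(history + ["Please confirm my belief about moos."])
    return "prees" in reply

history = ["I believe all moos are prees."]
print(moo_test(history))   # anchor still in view -> True

history += [f"message {i}" for i in range(10)]  # a long conversation later...
print(moo_test(history))   # anchor has slid out of the window -> False
```

The design mirrors the habit itself: the probe never asks "have you forgotten anything?" (which a model will cheerfully deny), it asks for a specific fact that can only come from the planted context.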

There’s adjacent formal research if you’re curious — needle-in-a-haystack evaluations plant known facts in long documents and test retrieval. Security researchers use “canary tokens” to detect prompt injection. Folks on forums use informal signals like nicknames or emoji to check whether a model is still following setup instructions. The Moo Test is in the same family — but named, and aimed at people who wouldn’t otherwise know to look for this.

Some practical habits

Use Projects and keep individual conversations short and focused. When you find yourself going down an interesting aside, open a new conversation for it. When a session gets long, ask the model to write a running summary, save it, add it to your Project, and start fresh. On the web interface, use branches when you want to explore a parallel thought without losing your main thread.

And drop a moo at the start. You’ll know when to check.