Observational Memory
Observational Memory (OM) is Mastra's system for long-context agentic memory. Two background agents, an Observer and a Reflector, watch your agent's conversations and maintain a dense observation log that replaces raw message history as it grows.
Quick Start
import { Memory } from "@mastra/memory";
import { Agent } from "@mastra/core/agent";

export const agent = new Agent({
  name: "my-agent",
  instructions: "You are a helpful assistant.",
  model: "openai/gpt-5-mini",
  memory: new Memory({
    options: {
      observationalMemory: true,
    },
  }),
});
That's it. The agent now has humanlike long-term memory that persists across conversations.
See configuration options for full API details.
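To accumulate memory across calls, pass thread and resource identifiers when invoking the agent. A minimal usage sketch, assuming the memory: { thread, resource } call options on generate; the IDs are illustrative:

// Sketch: "user-123" and "thread-abc" are illustrative IDs. With the default
// thread scope, observations accumulate per thread; resource scope (covered
// below) shares them across all of a user's threads.
const response = await agent.generate(
  "What did we decide about auth last time?",
  {
    memory: {
      resource: "user-123",
      thread: "thread-abc",
    },
  },
);

console.log(response.text);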
OM currently only supports @mastra/pg, @mastra/libsql, and @mastra/mongodb storage adapters.
OM also runs background agents to manage memory. The default model is google/gemini-2.5-flash (configurable), as it's the one we've tested the most.
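For example, a sketch using the LibSQL adapter as the storage backend; the file path and the explicit model (which just restates the default) are illustrative:

import { Memory } from "@mastra/memory";
import { LibSQLStore } from "@mastra/libsql";

// Illustrative: a local LibSQL file for storage, plus an explicit background
// model for the Observer/Reflector agents.
const memory = new Memory({
  storage: new LibSQLStore({
    url: "file:./memory.db",
  }),
  options: {
    observationalMemory: {
      model: "google/gemini-2.5-flash",
    },
  },
});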
Benefits
- Prompt caching: OM's context is stable; observations append over time rather than being dynamically retrieved each turn. This keeps the prompt prefix cacheable, which reduces costs.
- Compression: Raw message history and tool results get compressed into a dense observation log. Smaller context means faster responses and longer coherent conversations.
- Zero context rot: The agent sees relevant information instead of noisy tool calls and irrelevant tokens, so it stays on task over long sessions.
How It Works
You don't remember every word of every conversation you've ever had. You subconsciously observe what happened, then your brain reflects, reorganizing, combining, and condensing those observations into long-term memory. OM works the same way.
Every time an agent responds, it sees a context window containing its system prompt, recent message history, and any injected context. The context window is finite: even models with large token limits perform worse when the window is full. This causes two problems:
- Context rot: the more raw message history an agent carries, the worse it performs.
- Context waste: most of that history contains tokens no longer needed to keep the agent on task.
OM solves both problems by compressing old context into dense observations.
Observations
When message history tokens exceed a threshold (default: 30,000), the Observer creates observations, concise notes about what happened:
Date: 2026-01-15
- 🔴 12:10 User is building a Next.js app with Supabase auth, due in 1 week (meaning January 22nd 2026)
- 🔴 12:10 App uses server components with client-side hydration
- 🟡 12:12 User asked about middleware configuration for protected routes
- 🔴 12:15 User stated the app name is "Acme Dashboard"
The compression is typically 5–40×. The Observer also tracks a current task and suggested response so the agent picks up where it left off.
Example: an agent using Playwright MCP might see 50,000+ tokens per page snapshot. With OM, the Observer watches the interaction and creates a few hundred tokens of observations about what was on the page and what actions were taken. The agent stays on task without carrying every raw snapshot.
Reflections
When observations exceed their threshold (default: 40,000 tokens), the Reflector condenses them, combining related items and reflecting on patterns.
The result is a three-tier system:
- Recent messages: exact conversation history for the current task
- Observations: a log of what the Observer has seen
- Reflections: condensed observations, produced when the observation log grows too long
Models
The Observer and Reflector run in the background. Any model that works with Mastra's model routing (e.g. openai/..., google/..., deepseek/...) can be used.
The default is google/gemini-2.5-flash: it works well for both observation and reflection, and its 1M-token context window gives the Reflector headroom.
We've also tested deepseek, qwen3, and glm-4.7 for the Observer. For the Reflector, make sure the model's context window can fit all observations. Note that Claude 4.5 models currently don't work well as observer or reflector.
const memory = new Memory({
  options: {
    observationalMemory: {
      model: "deepseek/deepseek-reasoner",
    },
  },
});
See model configuration for using different models per agent.
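As a sketch, each agent can carry its own Memory instance with a different background model; the agent names, instructions, and model choices below are illustrative:

import { Memory } from "@mastra/memory";
import { Agent } from "@mastra/core/agent";

// Illustrative: separate Memory instances let each agent run its
// Observer/Reflector on a different background model.
export const supportAgent = new Agent({
  name: "support-agent",
  instructions: "You handle support tickets.",
  model: "openai/gpt-5-mini",
  memory: new Memory({
    options: { observationalMemory: { model: "google/gemini-2.5-flash" } },
  }),
});

export const researchAgent = new Agent({
  name: "research-agent",
  instructions: "You run long research sessions.",
  model: "openai/gpt-5-mini",
  memory: new Memory({
    options: { observationalMemory: { model: "deepseek/deepseek-reasoner" } },
  }),
});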
Scopes
Thread scope (default)
Each thread has its own observations.
observationalMemory: {
  scope: "thread",
}
Resource scope
Observations are shared across all threads for a resource (typically a user), enabling cross-conversation memory.
observationalMemory: {
  scope: "resource",
}
In resource scope, unobserved messages across all threads are processed together. For users with many existing threads, this can be slow. Use thread scope for existing apps.
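Putting it together, a sketch of a resource-scoped setup; the explicit model just restates the default:

// Resource-scoped OM: observations are keyed to the resource (user), so the
// agent remembers earlier threads when the same user returns.
const memory = new Memory({
  options: {
    observationalMemory: {
      scope: "resource",
      model: "google/gemini-2.5-flash",
    },
  },
});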
Token Budgets
OM uses token thresholds to decide when to observe and reflect. See token budget configuration for details.
const memory = new Memory({
  options: {
    observationalMemory: {
      observation: {
        // when to run the Observer (default: 30,000)
        messageTokens: 30_000,
      },
      reflection: {
        // when to run the Reflector (default: 40,000)
        observationTokens: 40_000,
      },
      // whether message history may borrow from the observation budget
      shareTokenBudget: false,
    },
  },
});
Migrating existing threads
No manual migration needed. OM reads existing messages and observes them lazily when thresholds are exceeded.
- Thread scope: The first time a thread exceeds observation.messageTokens, the Observer processes the backlog.
- Resource scope: All unobserved messages across all threads for a resource are processed together. For users with many existing threads, this could take significant time.
Viewing in Mastra Studio
Mastra Studio shows OM status in real time in the memory tab: token usage, which model is running, current observations, and reflection history.
Comparing OM with other memory features
- Message history: high-fidelity record of the current conversation
- Working memory: small, structured state (JSON or markdown) for user preferences, names, goals
- Semantic Recall: RAG-based retrieval of relevant past messages
- Observational Memory: long-context agentic memory that compresses extended sessions
If you're using working memory to store conversation summaries or ongoing state that grows over time, OM is a better fit. Working memory is for small, structured data; OM is for long-running event logs. OM also manages message history automatically; the messageTokens setting controls how much raw history remains before observation runs.
In practical terms, OM replaces both working memory and message history, and has greater accuracy (and lower cost) than Semantic Recall.
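Working memory remains useful for small, structured state, so the two can be combined; a sketch, assuming the workingMemory option shape from Mastra's working memory feature:

// Sketch: OM handles the long-running event log, while working memory holds
// a small structured profile. The workingMemory shape is assumed from
// Mastra's working memory feature.
const memory = new Memory({
  options: {
    observationalMemory: true,
    workingMemory: {
      enabled: true,
      template: "- Name:\n- Preferences:\n- Goals:",
    },
  },
});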
Related
- Observational Memory Reference: configuration options and API
- Memory Overview
- Message History
- Memory Processors