Observational Memory
Added in: @mastra/memory@1.1.0
Observational Memory (OM) is Mastra's system for long-context agentic memory. Two background agents, an Observer and a Reflector, watch your agent's conversations and maintain a dense observation log that replaces raw message history as it grows.
Quickstart
Enable observationalMemory in the memory options when creating your agent:
```typescript
import { Memory } from '@mastra/memory'
import { Agent } from '@mastra/core/agent'

export const agent = new Agent({
  name: 'my-agent',
  instructions: 'You are a helpful assistant.',
  model: 'openai/gpt-5-mini',
  memory: new Memory({
    options: {
      observationalMemory: true,
    },
  }),
})
```
That's it. The agent now has humanlike long-term memory that persists across conversations. Setting observationalMemory: true uses google/gemini-2.5-flash by default. To use a different model or customize thresholds, pass a config object instead:
```typescript
const memory = new Memory({
  options: {
    observationalMemory: {
      model: 'deepseek/deepseek-reasoner',
    },
  },
})
```
See configuration options for full API details.
OM currently supports only the @mastra/pg, @mastra/libsql, and @mastra/mongodb storage adapters.
OM uses background agents to manage memory, so it needs a model. When using observationalMemory: true, the default model is google/gemini-2.5-flash. When passing a config object, a model must be set explicitly.
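As a sketch, pairing OM with one of the supported adapters might look like the following. The LibSQLStore constructor options here (such as the file URL) are assumptions for illustration; check your adapter's documentation for the exact options.

```typescript
import { Memory } from '@mastra/memory'
import { LibSQLStore } from '@mastra/libsql'

// A minimal sketch: OM backed by a local LibSQL database file.
// The url value is an assumption for illustration.
const memory = new Memory({
  storage: new LibSQLStore({ url: 'file:./memory.db' }),
  options: {
    observationalMemory: true,
  },
})
```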
Benefits
- Prompt caching: OM's context is stable and observations append over time rather than being dynamically retrieved each turn. This keeps the prompt prefix cacheable, which reduces costs.
- Compression: Raw message history and tool results get compressed into a dense observation log. Smaller context means faster responses and longer coherent conversations.
- Zero context rot: The agent sees relevant information instead of noisy tool calls and irrelevant tokens, so it stays on task over long sessions.
How it works
You don't remember every word of every conversation you've ever had. You observe what happened subconsciously, then your brain reflects: reorganizing, combining, and condensing into long-term memory. OM works the same way.
Every time an agent responds, it sees a context window containing its system prompt, recent message history, and any injected context. The context window is finite; even models with large token limits perform worse when the window is full. This causes two problems:
- Context rot: the more raw message history an agent carries, the worse it performs.
- Context waste: most of that history contains tokens no longer needed to keep the agent on task.
OM solves both problems by compressing old context into dense observations.
Observations
When message history tokens exceed a threshold (default: 30,000), the Observer creates observations, concise notes about what happened.
OM uses fast local token estimation for this thresholding work. Text is estimated with tokenx, while image parts use provider-aware heuristics so multimodal conversations still trigger observation at the right time. The same applies to image-like file parts when a transport normalizes an uploaded image as a file instead of an image part. For example, OpenAI image detail settings can materially change when OM decides to observe.
The Observer can also see attachments in the history it reviews. OM keeps readable placeholders like [Image #1: reference-board.png] or [File #1: floorplan.pdf] in the transcript for readability, and forwards the actual attachment parts alongside the text. Image-like file parts are upgraded to image inputs for the Observer when possible, while non-image attachments are forwarded as file parts with normalized token counting. This applies to both normal thread observation and batched resource-scope observation.
For example:

Date: 2026-01-15
- 🔴 12:10 User is building a Next.js app with Supabase auth, due in 1 week (meaning January 22nd 2026)
- 🔴 12:10 App uses server components with client-side hydration
- 🟡 12:12 User asked about middleware configuration for protected routes
- 🔴 12:15 User stated the app name is "Acme Dashboard"
The compression is typically 5–40×. The Observer also tracks a current task and suggested response so the agent picks up where it left off.
If you enable observation.threadTitle, the Observer can also suggest a short thread title when the conversation topic meaningfully changes. Thread title generation is opt-in and updates the thread metadata, so apps like Mastra Code can show the latest title in thread lists and status UI.
Example: An agent using Playwright MCP might see 50,000+ tokens per page snapshot. With OM, the Observer watches the interaction and creates a few hundred tokens of observations about what was on the page and what actions were taken. The agent stays on task without carrying every raw snapshot.
Reflections
When observations exceed their threshold (default: 40,000 tokens), the Reflector condenses them, combines related items, and reflects on patterns.
The result is a three-tier system:
- Recent messages: Exact conversation history for the current task
- Observations: A log of what the Observer has seen
- Reflections: Condensed observations when memory becomes too long
Retrieval mode (experimental)
Retrieval mode is experimental. The API may change in future releases.
Normal OM compresses messages into observations, which is great for staying on task, but the original wording is gone. Retrieval mode fixes this by keeping each observation group linked to the raw messages that produced it. When the agent needs exact wording, tool output, or chronology that the summary compressed away, it can call a recall tool to page through the source messages.
Browsing only
Set retrieval: true to enable the recall tool for browsing raw messages. No vector store needed. By default, the recall tool can browse across all threads for the current resource.
```typescript
const memory = new Memory({
  options: {
    observationalMemory: {
      model: 'google/gemini-2.5-flash',
      retrieval: true,
    },
  },
})
```
With semantic search
Set retrieval: { vector: true } to also enable semantic search. This reuses the vector store and embedder already configured on your Memory instance:
```typescript
const memory = new Memory({
  storage,
  vector: myVectorStore,
  embedder: myEmbedder,
  options: {
    observationalMemory: {
      model: 'google/gemini-2.5-flash',
      retrieval: { vector: true },
    },
  },
})
```
When vector search is configured, new observation groups are automatically indexed at buffer time and during synchronous observation (fire-and-forget, non-blocking). Semantic search returns observation-group matches with their raw source message ID ranges, so the recall tool can show the summarized memory alongside where it came from.
Restricting to the current thread
By default, the recall tool scope is 'resource': the agent can list threads, browse other threads, and search across all conversations. Set scope: 'thread' to restrict the agent to only the current thread:
```typescript
const memory = new Memory({
  options: {
    observationalMemory: {
      model: 'google/gemini-2.5-flash',
      retrieval: { vector: true, scope: 'thread' },
    },
  },
})
```
What retrieval enables
With retrieval mode enabled, OM:
- Stores a range (e.g. startId:endId) on each observation group pointing to the messages it was derived from
- Keeps range metadata visible in the agent's context so the agent knows which observations map to which messages
- Registers a recall tool the agent can call to:
  - Page through the raw messages behind any observation group range
  - Search by semantic similarity (mode: "search" with a query string); requires vector: true
  - List all threads (mode: "threads"), browse other threads (threadId), and search across all threads (default scope: 'resource')
  - When scope: 'thread': restrict browsing and search to the current thread only
See the recall tool reference for the full API (detail levels, part indexing, pagination, cross-thread browsing, and token limiting).
Studio
To see how it works in practice, open Studio and navigate to an agent with OM enabled. The Memory tab displays:
- Token progress bars: Current token counts for messages and observations, showing how close each is to its threshold. Hover over the info icon to see the model and threshold for the Observer and Reflector.
- Active observations: The current observation log, rendered inline. When previous observation or reflection records exist, expand "Previous observations" to browse them.
- Background processing: During a conversation, buffered observation chunks and reflection status appear as the agent processes in the background.
The progress bars update live while the agent is observing or reflecting, showing elapsed time and a status badge.
Models
The Observer and Reflector run in the background. Any model that works with Mastra's model routing (provider/model) can be used. When using observationalMemory: true, the default model is google/gemini-2.5-flash. When passing a config object, a model must be explicitly set.
Generally speaking, we recommend a model with a large context window (128K+ tokens) that's fast enough to run in the background without slowing your agent down.
If you're unsure which model to use, start with the default google/gemini-2.5-flash. We've also successfully tested openai/gpt-5-mini, anthropic/claude-haiku-4-5, deepseek/deepseek-reasoner, qwen3, and glm-4.7.
```typescript
const memory = new Memory({
  options: {
    observationalMemory: {
      model: 'deepseek/deepseek-reasoner',
    },
  },
})
```
See model configuration for using different models per agent.
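As a sketch (assuming observation.model and reflection.model accept the same provider/model strings as the top-level model option, matching the config shape used elsewhere in this page), you can also give the Observer and Reflector different models:

```typescript
import { Memory } from '@mastra/memory'

// Sketch: a fast model for frequent observation, a stronger model for
// less frequent reflection. The specific models are illustrative.
const memory = new Memory({
  options: {
    observationalMemory: {
      observation: { model: 'google/gemini-2.5-flash' },
      reflection: { model: 'openai/gpt-5-mini' },
    },
  },
})
```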
Token-tiered model selection
Added in: @mastra/memory@1.10.0
You can use ModelByInputTokens to specify different Observer or Reflector models based on input token count. OM selects the matching model tier at runtime from the configured upTo thresholds.
```typescript
import { Memory, ModelByInputTokens } from '@mastra/memory'

const memory = new Memory({
  options: {
    observationalMemory: {
      observation: {
        model: new ModelByInputTokens({
          upTo: {
            // Faster, cheaper models for smaller inputs; stronger models for larger contexts
            5_000: 'openrouter/mistralai/ministral-8b-2512',
            20_000: 'openrouter/mistralai/mistral-small-2603',
            40_000: 'openai/gpt-5.4-mini',
            1_000_000: 'google/gemini-3.1-flash-lite-preview',
          },
        }),
      },
      reflection: {
        model: new ModelByInputTokens({
          upTo: {
            20_000: 'openai/gpt-5.4-mini',
            100_000: 'google/gemini-2.5-flash',
          },
        }),
      },
    },
  },
})
```
The upTo keys are inclusive upper bounds. OM computes the actual input token count for the Observer or Reflector call, resolves the matching tier directly, and uses that concrete model for the run.
If the input exceeds the largest configured threshold, an error is thrown. Ensure your thresholds cover the full range of possible input sizes, or use a model with a sufficiently large context window at the highest tier.
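The tier resolution described above can be sketched as follows. This is illustrative logic, not Mastra's internals:

```typescript
// Resolve the smallest upTo threshold that is >= inputTokens (inclusive upper bounds).
// Throws when the input exceeds the largest configured tier, mirroring the
// behavior described above. Illustrative sketch, not the library's implementation.
function resolveTier(upTo: Record<number, string>, inputTokens: number): string {
  const thresholds = Object.keys(upTo)
    .map(Number)
    .sort((a, b) => a - b)
  const match = thresholds.find((t) => inputTokens <= t)
  if (match === undefined) {
    throw new Error(`Input of ${inputTokens} tokens exceeds the largest configured tier`)
  }
  return upTo[match]
}
```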
Scopes
Thread scope (default)
Each thread has its own observations. This scope is well tested and works well as a general-purpose memory system, especially for long-horizon agentic use-cases.
```typescript
const memory = new Memory({
  options: {
    observationalMemory: {
      model: 'google/gemini-2.5-flash',
      scope: 'thread',
    },
  },
})
```
Thread scope requires a valid threadId to be provided when calling the agent. If threadId is missing, Observational Memory throws an error. This prevents multiple threads from silently sharing a single observation record, which can cause database deadlocks.
Resource scope (experimental)
Observations are shared across all threads for a resource (typically a user). Enables cross-conversation memory.
```typescript
const memory = new Memory({
  options: {
    observationalMemory: {
      model: 'google/gemini-2.5-flash',
      scope: 'resource',
    },
  },
})
```
Resource scope works, but it's marked experimental until we've proven task adherence and continuity across multiple ongoing simultaneous threads. As of today, you may need to tweak your system prompt to prevent one thread from continuing work that another thread had already started (but hadn't finished). This happens because in resource scope, each thread is a perspective on all threads for the resource. Depending on your use-case this may not be a problem, so your mileage may vary.
In resource scope, unobserved messages across all threads are processed together. For users with many existing threads, this can be slow. Use thread scope for existing apps.
Token budgets
OM uses token thresholds to decide when to observe and reflect. See token budget configuration for details.
```typescript
const memory = new Memory({
  options: {
    observationalMemory: {
      model: 'google/gemini-2.5-flash',
      observation: {
        // when to run the Observer (default: 30,000)
        messageTokens: 30_000,
      },
      reflection: {
        // when to run the Reflector (default: 40,000)
        observationTokens: 40_000,
      },
      // let message history borrow from observation budget
      // requires bufferTokens: false (temporary limitation)
      shareTokenBudget: false,
    },
  },
})
```
Token counting cache
OM caches token estimates in message metadata to reduce repeat counting work during threshold checks and buffering decisions.
- Per-part estimates are stored on part.providerMetadata.mastra and reused on subsequent passes when the cache version/tokenizer source matches.
- For string-only message content (without parts), OM uses a message-level metadata fallback cache.
- Message and conversation overhead are still recalculated on every pass. The cache only stores payload estimates, so counting semantics stay the same.
- data-* and reasoning parts are still skipped and aren't cached.
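The caching behavior described above can be sketched like this. The metadata field names follow the docs; the cache-version tag and control flow are assumptions:

```typescript
// Illustrative per-part token-estimate cache. The providerMetadata.mastra
// location follows the docs above; the version tag and logic are assumptions.
type Part = {
  text?: string
  providerMetadata?: { mastra?: { tokenEstimate?: number; cacheVersion?: string } }
}

const CACHE_VERSION = 'tokenx-v1' // hypothetical tokenizer-source tag

function estimateTokens(part: Part, count: (s: string) => number): number {
  const cached = part.providerMetadata?.mastra
  if (cached?.tokenEstimate !== undefined && cached.cacheVersion === CACHE_VERSION) {
    return cached.tokenEstimate // cache hit: skip recounting this part's payload
  }
  const estimate = count(part.text ?? '')
  part.providerMetadata = { mastra: { tokenEstimate: estimate, cacheVersion: CACHE_VERSION } }
  return estimate
}
```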
Async buffering
Without async buffering, the Observer runs synchronously when the message threshold is reached: the agent pauses mid-conversation while the Observer LLM call completes. With async buffering (enabled by default), observations are pre-computed in the background as the conversation grows. When the threshold is hit, buffered observations activate instantly with no pause.
How it works
As the agent converses, message tokens accumulate. At regular intervals (bufferTokens), a background Observer call runs without blocking the agent. Each call produces a "chunk" of observations that's stored in a buffer.
When message tokens reach the messageTokens threshold, buffered chunks activate: their observations move into the active observation log, and the corresponding raw messages are removed from the context window. The agent never pauses.
Buffered observations also include continuation hints (a suggested next response and the current task) so the main agent maintains conversational continuity after activation shrinks the context window.
If the agent produces messages faster than the Observer can process them, a blockAfter safety threshold forces a synchronous observation as a last resort. Buffered activation still preserves a minimum remaining context (the smaller of ~1k tokens or the configured retention floor).
Reflection works similarly: the Reflector runs in the background when observations reach a fraction of the reflection threshold.
Settings
| Setting | Default | What it controls |
|---|---|---|
| observation.bufferTokens | 0.2 | How often to buffer. 0.2 means every 20% of messageTokens; with the default 30k threshold, that's roughly every 6k tokens. Can also be an absolute token count (e.g. 5000). |
| observation.bufferActivation | 0.8 | How aggressively to clear the message window on activation. 0.8 means remove enough messages to keep only 20% of messageTokens remaining. Lower values keep more message history. |
| observation.blockAfter | 1.2 | Safety threshold as a multiplier of messageTokens. At 1.2, synchronous observation is forced at 36k tokens (1.2 × 30k). Only matters if buffering can't keep up. |
| reflection.bufferActivation | 0.5 | When to start background reflection. 0.5 means reflection begins when observations reach 50% of the observationTokens threshold. |
| reflection.blockAfter | 1.2 | Safety threshold for reflection, same logic as observation. |
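With the defaults, the observation-side thresholds work out as follows (a worked example of the table's numbers; Math.round just keeps the floating-point products tidy):

```typescript
// Worked example of the default async-buffering thresholds from the table above.
const messageTokens = 30_000

// observation.bufferTokens = 0.2: a background Observer call roughly every 6k tokens
const bufferEvery = Math.round(0.2 * messageTokens)

// observation.bufferActivation = 0.8: activation clears the window down to ~6k tokens of history
const keptAfterActivation = Math.round((1 - 0.8) * messageTokens)

// observation.blockAfter = 1.2: synchronous observation is forced at 36k tokens
const blockAt = Math.round(1.2 * messageTokens)
```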
Disabling
To disable async buffering and use synchronous observation/reflection instead:
```typescript
const memory = new Memory({
  options: {
    observationalMemory: {
      model: 'google/gemini-2.5-flash',
      observation: {
        bufferTokens: false,
      },
    },
  },
})
```
Setting bufferTokens: false disables both observation and reflection async buffering. See async buffering configuration for the full API.
Async buffering isn't supported with scope: 'resource'. It's automatically disabled in resource scope.
Observer Context Optimization
By default, the Observer receives the full observation history as context when processing new messages. The Observer also receives prior current-task and suggested-response metadata (when available), so it can stay oriented even when observation context is truncated. For long-running conversations where observations grow large, you can opt into context optimization to reduce Observer input costs.
Set observation.previousObserverTokens to limit how many tokens of previous observations are sent to the Observer. Observations are tail-truncated, keeping the most recent entries. When a buffered reflection is pending, the already-reflected lines are automatically replaced with the reflection summary before truncation is applied.
```typescript
const memory = new Memory({
  options: {
    observationalMemory: {
      model: 'google/gemini-2.5-flash',
      observation: {
        previousObserverTokens: 10_000, // keep only ~10k tokens of recent observations
      },
    },
  },
})
```
- previousObserverTokens: 2000 – the default; keeps ~2k tokens of recent observations.
- previousObserverTokens: 0 – omit previous observations completely.
- previousObserverTokens: false – disable truncation and keep full previous observations.
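Tail-truncation as described here can be sketched like so. This is an illustrative sketch (keep the most recent whole lines within a token budget), not Mastra's actual code:

```typescript
// Keep the most recent observation lines that fit within a token budget.
// Illustrative sketch of tail-truncation, not Mastra's implementation.
function tailTruncate(
  lines: string[],
  budgetTokens: number,
  countTokens: (s: string) => number,
): string[] {
  const kept: string[] = []
  let used = 0
  // Walk backwards so the newest lines are kept first.
  for (let i = lines.length - 1; i >= 0; i--) {
    const cost = countTokens(lines[i])
    if (used + cost > budgetTokens) break
    kept.unshift(lines[i])
    used += cost
  }
  return kept
}
```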
Migrating existing threads
No manual migration needed. OM reads existing messages and observes them lazily when thresholds are exceeded.
- Thread scope: The first time a thread exceeds observation.messageTokens, the Observer processes the backlog.
- Resource scope: All unobserved messages across all threads for a resource are processed together. For users with many existing threads, this could take significant time.
Comparing OM with other memory features
- Message history: High-fidelity record of the current conversation
- Working memory: Small, structured state (JSON or markdown) for user preferences, names, goals
- Semantic Recall: RAG-based retrieval of relevant past messages
If you're using working memory to store conversation summaries or ongoing state that grows over time, OM is a better fit. Working memory is for small, structured data; OM is for long-running event logs. OM also manages message history automatically; the messageTokens setting controls how much raw history remains before observation runs.
In practical terms, OM replaces both working memory and message history, and has greater accuracy (and lower cost) than Semantic Recall.