# Observational Memory

Added in: `@mastra/memory@1.1.0`
Observational Memory (OM) is Mastra's memory system for long-context agentic memory. Two background agents maintain an observation log that replaces raw message history as it grows: an Observer that watches conversations and records observations, and a Reflector that restructures the log by combining related items, reflecting on overarching patterns, and condensing where possible.
## Usage
```typescript
import { Memory } from "@mastra/memory";
import { Agent } from "@mastra/core/agent";

export const agent = new Agent({
  name: "my-agent",
  instructions: "You are a helpful assistant.",
  model: "openai/gpt-5-mini",
  memory: new Memory({
    options: {
      observationalMemory: true,
    },
  }),
});
```
## Configuration
The `observationalMemory` option accepts `true`, `false`, or a configuration object. Setting `observationalMemory: true` enables it with all defaults; setting it to `false` or omitting it disables it.
- `enabled?:`
- `model?:`
- `scope?:`
- `observation?:`
- `reflection?:`
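Pulling the option names above together, a fully-specified configuration object might look like the sketch below. The field names come from the option list; the values are illustrative assumptions, not documented defaults.

```typescript
// Sketch: one object combining every top-level observationalMemory option
// listed above. All values here are example assumptions.
const observationalMemoryConfig = {
  enabled: true,                          // turn OM on or off
  model: "openai/gpt-4o-mini",            // model used by both background agents
  scope: "resource",                      // share memory across threads for a resource
  observation: { messageTokens: 20_000 }, // observation-agent settings
  reflection: { observationTokens: 60_000 }, // reflection-agent settings
};

export default observationalMemoryConfig;
```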
### Observation config
- `model?:`
- `messageTokens?:`
- `maxTokensPerBatch?:`
- `modelSettings?:`
### Reflection config
- `model?:`
- `observationTokens?:`
- `modelSettings?:`
### Model settings
- `temperature?:`
- `maxOutputTokens?:`
## Examples
### Resource scope with custom thresholds
```typescript
import { Memory } from "@mastra/memory";
import { Agent } from "@mastra/core/agent";

export const agent = new Agent({
  name: "my-agent",
  instructions: "You are a helpful assistant.",
  model: "openai/gpt-5-mini",
  memory: new Memory({
    options: {
      observationalMemory: {
        scope: "resource",
        observation: {
          messageTokens: 20_000,
        },
        reflection: {
          observationTokens: 60_000,
        },
      },
    },
  }),
});
```
### Shared token budget
```typescript
import { Memory } from "@mastra/memory";
import { Agent } from "@mastra/core/agent";

export const agent = new Agent({
  name: "my-agent",
  instructions: "You are a helpful assistant.",
  model: "openai/gpt-5-mini",
  memory: new Memory({
    options: {
      observationalMemory: {
        shareTokenBudget: true,
        observation: {
          messageTokens: 20_000,
        },
        reflection: {
          observationTokens: 80_000,
        },
      },
    },
  }),
});
```
When `shareTokenBudget` is enabled, the total budget is `observation.messageTokens` + `reflection.observationTokens` (100k in this example). If observations use only 30k tokens, messages can expand to use up to 70k. If messages are short, observations have more room before reflection is triggered.
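The budget arithmetic above can be sketched directly (the variable names here are illustrative, not part of the Mastra API):

```typescript
// Sketch of the shared-budget arithmetic described above, using the
// thresholds from the example config.
const messageTokens = 20_000;      // observation.messageTokens
const observationTokens = 80_000;  // reflection.observationTokens

// With shareTokenBudget, the two thresholds pool into one total.
const totalBudget = messageTokens + observationTokens; // 100_000

// If observations currently occupy 30k tokens, the remainder of the
// pool is available to raw messages before observation triggers.
const observationsUsed = 30_000;
const messagesAllowed = totalBudget - observationsUsed; // 70_000

console.log(totalBudget, messagesAllowed);
```

Without `shareTokenBudget`, each threshold is enforced independently; with it, slack on one side becomes headroom for the other.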
### Custom model
```typescript
import { Memory } from "@mastra/memory";
import { Agent } from "@mastra/core/agent";

export const agent = new Agent({
  name: "my-agent",
  instructions: "You are a helpful assistant.",
  model: "openai/gpt-5-mini",
  memory: new Memory({
    options: {
      observationalMemory: {
        model: "openai/gpt-4o-mini",
      },
    },
  }),
});
```
### Different models per agent
```typescript
import { Memory } from "@mastra/memory";
import { Agent } from "@mastra/core/agent";

export const agent = new Agent({
  name: "my-agent",
  instructions: "You are a helpful assistant.",
  model: "openai/gpt-5-mini",
  memory: new Memory({
    options: {
      observationalMemory: {
        observation: {
          model: "google/gemini-2.5-flash",
        },
        reflection: {
          model: "openai/gpt-4o-mini",
        },
      },
    },
  }),
});
```
## Standalone usage
Most users should use the `Memory` class shown above. Using `ObservationalMemory` directly is mainly useful for benchmarking, experimentation, or when you need to control processor ordering relative to other processors (such as guardrails).
```typescript
import { ObservationalMemory } from "@mastra/memory/processors";
import { Agent } from "@mastra/core/agent";
import { LibSQLStore } from "@mastra/libsql";

const storage = new LibSQLStore({
  id: "my-storage",
  url: "file:./memory.db",
});

const om = new ObservationalMemory({
  storage: storage.stores.memory,
  model: "google/gemini-2.5-flash",
  scope: "resource",
  observation: {
    messageTokens: 20_000,
  },
  reflection: {
    observationTokens: 60_000,
  },
});

export const agent = new Agent({
  name: "my-agent",
  instructions: "You are a helpful assistant.",
  model: "openai/gpt-5-mini",
  inputProcessors: [om],
  outputProcessors: [om],
});
```
## Standalone config
The standalone `ObservationalMemory` class accepts all the same options as the `observationalMemory` config object above, plus the following: