# Memory Processors
Memory processors transform and filter messages as they pass through an agent with memory enabled. They manage context window limits, remove unnecessary content, and optimize the information sent to the language model.
When memory is enabled on an agent, Mastra adds memory processors to the agent's processor pipeline. These processors retrieve conversation history, working memory, and semantically relevant messages, then persist new messages after the model responds.
Memory processors are a specialized kind of processor that operates on memory-related messages and state.
## Built-in Memory Processors
Mastra automatically adds these processors when memory is enabled:
### MessageHistory
Retrieves conversation history and persists new messages.
When you configure:
```typescript
memory: new Memory({
  lastMessages: 10,
});
```
Mastra internally:
- Creates a `MessageHistory` processor with `limit: 10`
- Adds it to the agent's input processors (runs before the LLM)
- Adds it to the agent's output processors (runs after the LLM)
What it does:
- Input: Fetches the last 10 messages from storage and prepends them to the conversation
- Output: Persists new messages to storage after the model responds
Example:
```typescript
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { LibSQLStore } from "@mastra/libsql";

const agent = new Agent({
  id: "test-agent",
  name: "Test Agent",
  instructions: "You are a helpful assistant",
  model: "openai/gpt-4o",
  memory: new Memory({
    storage: new LibSQLStore({
      id: "memory-store",
      url: "file:memory.db",
    }),
    lastMessages: 10, // MessageHistory processor automatically added
  }),
});
```
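Because retrieval and persistence are scoped to a conversation thread, calls typically identify the thread and the resource (for example, a user). A usage sketch; the `memory: { thread, resource }` option shape is an assumption here, so check the Memory Overview for the exact options in your version:

```typescript
// Hypothetical thread/resource identifiers; MessageHistory uses them to
// decide which stored messages to load and where to persist new ones.
const response = await agent.generate("What did we talk about earlier?", {
  memory: {
    thread: "thread-123",
    resource: "user-456",
  },
});
```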
### SemanticRecall
Retrieves semantically relevant messages based on the current input and creates embeddings for new messages.
When you configure:
```typescript
memory: new Memory({
  semanticRecall: { enabled: true },
  vector: myVectorStore,
  embedder: myEmbedder,
});
```
Mastra internally:
- Creates a `SemanticRecall` processor
- Adds it to the agent's input processors (runs before the LLM)
- Adds it to the agent's output processors (runs after the LLM)
- Requires both a vector store and embedder to be configured
What it does:
- Input: Performs vector similarity search to find relevant past messages and prepends them to the conversation
- Output: Creates embeddings for new messages and stores them in the vector store for future retrieval
Example:
```typescript
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { LibSQLStore } from "@mastra/libsql";
import { PineconeVector } from "@mastra/pinecone";
import { OpenAIEmbedder } from "@mastra/openai";

const agent = new Agent({
  name: "semantic-agent",
  instructions: "You are a helpful assistant with semantic memory",
  model: "openai/gpt-4o",
  memory: new Memory({
    storage: new LibSQLStore({
      id: "memory-store",
      url: "file:memory.db",
    }),
    vector: new PineconeVector({
      id: "memory-vector",
      apiKey: process.env.PINECONE_API_KEY!,
      environment: "us-east-1",
    }),
    embedder: new OpenAIEmbedder({
      model: "text-embedding-3-small",
      apiKey: process.env.OPENAI_API_KEY!,
    }),
    semanticRecall: { enabled: true }, // SemanticRecall processor automatically added
  }),
});
```
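Recall breadth can be tuned beyond the simple `enabled` flag. A configuration sketch, assuming `topK` and `messageRange` options (both names are assumptions; verify them against the Memory reference):

```typescript
import { Memory } from "@mastra/memory";

const memory = new Memory({
  storage, // storage adapter configured as above
  vector, // vector store configured as above
  embedder, // embedder configured as above
  semanticRecall: {
    enabled: true,
    topK: 3, // retrieve the 3 most similar past messages (assumed option)
    messageRange: 2, // include neighboring messages around each match (assumed option)
  },
});
```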
### WorkingMemory
Manages working memory state across conversations.
When you configure:
```typescript
memory: new Memory({
  workingMemory: { enabled: true },
});
```
Mastra internally:
- Creates a `WorkingMemory` processor
- Adds it to the agent's input processors (runs before the LLM)
- Requires a storage adapter to be configured
What it does:
- Input: Retrieves working memory state for the current thread and prepends it to the conversation
- Output: No output processing
Example:
```typescript
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { LibSQLStore } from "@mastra/libsql";

const agent = new Agent({
  name: "working-memory-agent",
  instructions: "You are an assistant with working memory",
  model: "openai/gpt-4o",
  memory: new Memory({
    storage: new LibSQLStore({
      id: "memory-store",
      url: "file:memory.db",
    }),
    workingMemory: { enabled: true }, // WorkingMemory processor automatically added
  }),
});
```
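Working memory is usually seeded with a template the agent fills in and revises across turns. A sketch, assuming a `template` option on `workingMemory` (the option name is an assumption; see the Memory Overview):

```typescript
import { Memory } from "@mastra/memory";
import { LibSQLStore } from "@mastra/libsql";

const memory = new Memory({
  storage: new LibSQLStore({ id: "memory-store", url: "file:memory.db" }),
  workingMemory: {
    enabled: true,
    // A markdown scaffold the agent updates across turns (assumed option)
    template: `# User Profile
- Name:
- Goals:
- Preferences:
`,
  },
});
```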
## Manual Control and Deduplication
If you manually add a memory processor to inputProcessors or outputProcessors, Mastra will not automatically add it. This gives you full control over processor ordering:
```typescript
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { MessageHistory } from "@mastra/memory/processors";
import { TokenLimiter } from "@mastra/core/processors";
import { LibSQLStore } from "@mastra/libsql";

// Custom MessageHistory with different configuration
const customMessageHistory = new MessageHistory({
  storage: new LibSQLStore({ id: "memory-store", url: "file:memory.db" }),
  lastMessages: 20,
});

const agent = new Agent({
  name: "custom-memory-agent",
  instructions: "You are a helpful assistant",
  model: "openai/gpt-4o",
  memory: new Memory({
    storage: new LibSQLStore({ id: "memory-store", url: "file:memory.db" }),
    lastMessages: 10, // This would normally add MessageHistory(10)
  }),
  inputProcessors: [
    customMessageHistory, // Your custom one is used instead
    new TokenLimiter({ limit: 4000 }), // Runs after your custom MessageHistory
  ],
});
```
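One caveat: the automatic wiring registers `MessageHistory` on both the input and output sides, so when you take manual control you may also need to register the same instance in `outputProcessors` to keep persisting new messages. A hedged sketch of that symmetric wiring:

```typescript
const agentWithPersistence = new Agent({
  name: "custom-memory-agent",
  instructions: "You are a helpful assistant",
  model: "openai/gpt-4o",
  memory: new Memory({
    storage: new LibSQLStore({ id: "memory-store", url: "file:memory.db" }),
  }),
  // Same instance on both sides: retrieval before the LLM, persistence
  // after it (mirrors what Mastra wires automatically; whether this is
  // required when wiring manually is an assumption to verify).
  inputProcessors: [customMessageHistory],
  outputProcessors: [customMessageHistory],
});
```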
## Processor Execution Order
Understanding the execution order is important when combining guardrails with memory:
### Input Processors
```
[Memory Processors] → [Your inputProcessors]
```
- Memory processors run FIRST: `WorkingMemory`, `MessageHistory`, `SemanticRecall`
- Your input processors run AFTER: guardrails, filters, validators
This means memory loads conversation history before your processors can validate or filter the input.
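An input processor therefore sees the fully expanded conversation. A minimal sketch, reusing the object-literal `processInput` shape from the guardrail examples below (the logging is illustrative):

```typescript
// Runs after the memory processors, so `messages` already includes
// history prepended by MessageHistory and SemanticRecall.
const historyAwareLogger = {
  id: "history-aware-logger",
  processInput: async ({ messages }) => {
    console.log(`Model will receive ${messages.length} messages, history included`);
    return messages;
  },
};
```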
### Output Processors
```
[Your outputProcessors] → [Memory Processors]
```
- Your output processors run FIRST: guardrails, filters, validators
- Memory processors run AFTER: `SemanticRecall` (embeddings), `MessageHistory` (persistence)
This ordering is designed to be safe by default: if your output guardrail calls `abort()`, the memory processors never run and no messages are saved.
## Guardrails and Memory
The default execution order provides safe guardrail behavior:
### Output guardrails (recommended)
Output guardrails run before memory processors save messages. If a guardrail aborts:
- The tripwire is triggered
- Memory processors are skipped
- No messages are persisted to storage
```typescript
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";

// Output guardrail that blocks inappropriate content
const contentBlocker = {
  id: "content-blocker",
  processOutputResult: async ({ messages, abort }) => {
    const hasInappropriateContent = messages.some((msg) =>
      containsBadContent(msg),
    );
    if (hasInappropriateContent) {
      abort("Content blocked by guardrail");
    }
    return messages;
  },
};

const agent = new Agent({
  name: "safe-agent",
  instructions: "You are a helpful assistant",
  model: "openai/gpt-4o",
  memory: new Memory({ lastMessages: 10 }),
  // Your guardrail runs BEFORE memory saves
  outputProcessors: [contentBlocker],
});

// If the guardrail aborts, nothing is saved to memory
const result = await agent.generate("Hello");
if (result.tripwire) {
  console.log("Blocked:", result.tripwireReason);
  // Memory is empty - no messages were persisted
}
```
### Input guardrails
Input guardrails run after memory processors load history. If a guardrail aborts:
- The tripwire is triggered
- The LLM is never called
- Output processors (including memory persistence) are skipped
- No messages are persisted to storage
```typescript
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";

// Input guardrail that validates user input
const inputValidator = {
  id: "input-validator",
  processInput: async ({ messages, abort }) => {
    const lastUserMessage = messages.findLast((m) => m.role === "user");
    if (isInvalidInput(lastUserMessage)) {
      abort("Invalid input detected");
    }
    return messages;
  },
};

const agent = new Agent({
  name: "validated-agent",
  instructions: "You are a helpful assistant",
  model: "openai/gpt-4o",
  memory: new Memory({ lastMessages: 10 }),
  // Your guardrail runs AFTER memory loads history
  inputProcessors: [inputValidator],
});
```
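Checking the result mirrors the output guardrail example above:

```typescript
// If the input guardrail aborts, the tripwire is set, the LLM is never
// called, and nothing is persisted.
const result = await agent.generate("some suspicious input");
if (result.tripwire) {
  console.log("Blocked:", result.tripwireReason);
}
```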
## Summary
| Guardrail Type | When it runs | If it aborts |
|---|---|---|
| Input | After memory loads history | LLM not called, nothing saved |
| Output | Before memory saves | Nothing saved to storage |
Both scenarios are safe: guardrails prevent inappropriate content from being persisted to memory.
## Related documentation
- Processors - General processor concepts and custom processor creation
- Guardrails - Security and validation processors
- Memory Overview - Memory types and configuration
When creating custom processors, avoid mutating the input messages array or its objects directly.
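Instead, return new arrays and new message objects. A minimal sketch using the `processInput` shape from the guardrail examples (`redact` is a hypothetical helper):

```typescript
const redactingProcessor = {
  id: "redactor",
  processInput: async ({ messages }) => {
    // Build copies instead of mutating the originals.
    return messages.map((msg) =>
      msg.role === "user"
        ? { ...msg, content: redact(msg.content) } // redact() is hypothetical
        : msg,
    );
  },
};
```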