Memory Class
The Memory class provides a robust system for managing conversation history and thread-based message storage in Mastra. It enables persistent storage of conversations, semantic search capabilities, and efficient message retrieval. You must configure a storage provider for conversation history, and if you enable semantic recall you will also need to provide a vector store and embedder.
Usage example
src/mastra/agents/test-agent.ts
import { Memory } from '@mastra/memory'
import { Agent } from '@mastra/core/agent'

export const agent = new Agent({
  name: 'test-agent',
  instructions: 'You are an agent with memory.',
  model: 'openai/gpt-5.1',
  memory: new Memory({
    options: {
      workingMemory: {
        enabled: true,
      },
    },
  }),
})
To enable `workingMemory` on an agent, you'll need a storage provider configured on your main Mastra instance. See the Mastra class for more information.
Constructor parameters
storage?: MastraCompositeStore
Storage implementation for persisting memory data. Defaults to `new DefaultStorage({ config: { url: "file:memory.db" } })` if not provided.

vector?: MastraVector | false
Vector store for semantic search capabilities. Set to `false` to disable vector operations.

embedder?: EmbeddingModel<string> | EmbeddingModelV2<string>
Embedder instance for generating vector embeddings. Required when semantic recall is enabled.

options?: MemoryConfig
Memory configuration options.
Options parameters
lastMessages?: number | false = 10
Number of most recent messages to include in context. Set to `false` to disable loading conversation history into context. Use `Number.MAX_SAFE_INTEGER` to retrieve all messages with no limit. To prevent saving new messages, use the `readOnly` option instead.
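For instance, a minimal sketch of a memory that persists new messages but never loads prior history into context (storage falls back to the documented default LibSQL file store):

```typescript
import { Memory } from '@mastra/memory'

// Save new messages, but don't inject conversation history into context.
export const statelessMemory = new Memory({
  options: {
    lastMessages: false, // disable loading history; saving still happens
  },
})
```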
readOnly?: boolean = false
When `true`, prevents memory from saving new messages and provides working memory as read-only context (without the `updateWorkingMemory` tool). Useful for read-only operations such as previews, internal routing agents, or sub-agents that should reference but not modify memory.
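As a sketch, a read-only memory suitable for a preview or internal routing agent, using only the option described above:

```typescript
import { Memory } from '@mastra/memory'

// Reference existing conversation and working memory without modifying it:
// nothing is saved, and the updateWorkingMemory tool is not exposed.
export const previewMemory = new Memory({
  options: {
    readOnly: true,
  },
})
```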
semanticRecall?: boolean | { topK: number; messageRange: number | { before: number; after: number }; scope?: 'thread' | 'resource' } = false
Enables semantic search over message history. Can be a boolean or a configuration object. When enabled, requires both a vector store and an embedder to be configured. The default `topK` is 4 and the default `messageRange` is `{ before: 1, after: 1 }`.
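Writing the documented defaults out explicitly, the object form below should behave like `semanticRecall: true` (a vector store and embedder must still be configured on the instance; `scope` is omitted here rather than guessed):

```typescript
import { Memory } from '@mastra/memory'

// Explicit form of the documented defaults: 4 matches per query, with
// one surrounding message of context on each side of every match.
export const recallMemory = new Memory({
  options: {
    semanticRecall: {
      topK: 4,
      messageRange: { before: 1, after: 1 },
    },
  },
})
```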
workingMemory?: WorkingMemory
= { enabled: false, template: '# User Information\n- **First Name**:\n- **Last Name**:\n...' }
Configuration for the working memory feature. Can be `{ enabled: boolean; template?: string; schema?: ZodObject<any> | JSONSchema7; scope?: 'thread' | 'resource' }`, or `{ enabled: false }` to disable.
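A sketch of working memory with a custom Markdown template, using the shape described above (the template text and headings here are illustrative placeholders, not the built-in default):

```typescript
import { Memory } from '@mastra/memory'

// Working memory with an illustrative custom template.
export const profileMemory = new Memory({
  options: {
    workingMemory: {
      enabled: true,
      // Placeholder fields for this example, not required names.
      template: '# User Profile\n- **Name**:\n- **Timezone**:\n',
      scope: 'resource', // persist working memory per resource rather than per thread
    },
  },
})
```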
observationalMemory?: boolean | ObservationalMemoryOptions = false
Enables Observational Memory for long-context agentic memory. Set to `true` for defaults, or pass a config object to customize token budgets, models, and scope. See the [Observational Memory reference](/reference/memory/observational-memory) for configuration details.
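Per the description above, enabling Observational Memory with its defaults is a one-liner; anything beyond `true` should follow the linked reference rather than this sketch:

```typescript
import { Memory } from '@mastra/memory'

// Enable Observational Memory with its default token budgets,
// models, and scope.
export const observingMemory = new Memory({
  options: {
    observationalMemory: true,
  },
})
```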
generateTitle?: boolean | { model: DynamicArgument<MastraLanguageModel>; instructions?: DynamicArgument<string> } = false
Controls automatic thread title generation from the user's first message. Can be a boolean or an object with a custom model and instructions.
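A hedged sketch of title generation with a custom model and instructions. The model id follows the string form used by the Agent snippets on this page, and the instruction text is illustrative; whether a plain string satisfies `DynamicArgument<MastraLanguageModel>` may depend on your Mastra version:

```typescript
import { Memory } from '@mastra/memory'

export const titledMemory = new Memory({
  options: {
    generateTitle: {
      // Example model id, matching the style used elsewhere on this page.
      model: 'openai/gpt-5.1',
      // Illustrative instructions, not a documented default.
      instructions: 'Generate a concise title (five words or fewer) for this conversation.',
    },
  },
})
```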
Returns
memory: Memory
A new `Memory` instance with the specified configuration.
Extended usage example
src/mastra/agents/test-agent.ts
import { Memory } from '@mastra/memory'
import { Agent } from '@mastra/core/agent'
import { LibSQLStore, LibSQLVector } from '@mastra/libsql'

export const agent = new Agent({
  name: 'test-agent',
  instructions: 'You are an agent with memory.',
  model: 'openai/gpt-5.1',
  memory: new Memory({
    storage: new LibSQLStore({
      id: 'test-agent-storage',
      url: 'file:./working-memory.db',
    }),
    vector: new LibSQLVector({
      id: 'test-agent-vector',
      url: 'file:./vector-memory.db',
    }),
    options: {
      lastMessages: 10,
      semanticRecall: {
        topK: 3,
        messageRange: 2,
        scope: 'resource',
      },
      workingMemory: {
        enabled: true,
      },
      generateTitle: true,
    },
  }),
})
PostgreSQL with index configuration
src/mastra/agents/pg-agent.ts
import { Memory } from '@mastra/memory'
import { Agent } from '@mastra/core/agent'
import { ModelRouterEmbeddingModel } from '@mastra/core/llm'
import { PgStore, PgVector } from '@mastra/pg'

export const agent = new Agent({
  name: 'pg-agent',
  instructions: 'You are an agent with optimized PostgreSQL memory.',
  model: 'openai/gpt-5.1',
  memory: new Memory({
    storage: new PgStore({
      id: 'pg-agent-storage',
      connectionString: process.env.DATABASE_URL,
    }),
    vector: new PgVector({
      id: 'pg-agent-vector',
      connectionString: process.env.DATABASE_URL,
    }),
    embedder: new ModelRouterEmbeddingModel('openai/text-embedding-3-small'),
    options: {
      lastMessages: 20,
      semanticRecall: {
        topK: 5,
        messageRange: 3,
        scope: 'resource',
        indexConfig: {
          type: 'hnsw', // Use HNSW for better performance
          metric: 'dotproduct', // Optimal for OpenAI embeddings
          m: 16, // Number of bi-directional links
          efConstruction: 64, // Construction-time candidate list size
        },
      },
      workingMemory: {
        enabled: true,
      },
    },
  }),
})