# Memory Class Reference
The `Memory` class provides a robust system for managing conversation history and thread-based message storage in Mastra. It enables persistent storage of conversations, semantic search capabilities, and efficient message retrieval. By default, it uses LibSQL for storage and vector search, and FastEmbed for embeddings.
## Basic Usage
```typescript
import { Memory } from "@mastra/memory";
import { Agent } from "@mastra/core/agent";

const agent = new Agent({
  memory: new Memory(),
  ...otherOptions,
});
```
## Custom Configuration
```typescript
import { Memory } from "@mastra/memory";
import { DefaultStorage, DefaultVectorDB } from "@mastra/core/storage";
import { Agent } from "@mastra/core/agent";

const memory = new Memory({
  // Custom storage configuration
  storage: new DefaultStorage({
    url: "file:memory.db",
  }),

  // Custom vector database for semantic search
  vector: new DefaultVectorDB({
    url: "file:vector.db",
  }),

  // Memory configuration options
  options: {
    // Number of recent messages to include
    lastMessages: 20,

    // Semantic search configuration
    semanticRecall: {
      topK: 3, // Number of similar messages to retrieve
      messageRange: {
        // Messages to include around each result
        before: 2,
        after: 1,
      },
    },

    // Working memory configuration
    workingMemory: {
      enabled: true,
      template: "<user><first_name></first_name><last_name></last_name></user>",
    },
  },
});

const agent = new Agent({
  memory,
  ...otherOptions,
});
```
## Parameters
- `storage` — Storage backend for conversation history and thread data. Defaults to LibSQL.
- `vector?` — Vector database used for semantic search. Defaults to LibSQL.
- `embedder?` — Embedding model used for semantic recall. Defaults to FastEmbed with `bge-small-en-v1.5` (see below).
- `options?` — Memory configuration options, described below.
### options

- `lastMessages?` — Number of most recent messages to include in context.
- `semanticRecall?` — Semantic search configuration:
  - `topK?` — Number of semantically similar messages to retrieve.
  - `messageRange?` — How many surrounding messages to include with each result; either a single number or a `{ before, after }` object.
- `workingMemory?` — Working memory configuration with `enabled` and an optional `template` (see Working Memory below).
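For instance, a minimal sketch that relies on the default storage, vector store, and embedder and only tunes these options (it assumes `semanticRecall` also accepts a boolean toggle):

```typescript
import { Memory } from "@mastra/memory";

// A minimal sketch: keep only the last 10 messages in context and skip
// semantic recall entirely. Storage, vector store, and embedder fall back
// to the LibSQL / FastEmbed defaults.
const memory = new Memory({
  options: {
    lastMessages: 10,
    semanticRecall: false, // assumption: a boolean toggle is accepted here
  },
});
```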
## Working Memory
The working memory feature allows agents to maintain persistent information across conversations. When enabled, the Memory class will automatically manage XML-based working memory updates through the conversation stream.
If no template is provided, the Memory class uses a default template that includes fields for user details, preferences, goals, and other contextual information. See the Agent Memory Guide for detailed usage examples and best practices.
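For example, here is a short sketch of a custom template; the XML field names are illustrative, not a required schema:

```typescript
import { Memory } from "@mastra/memory";

// A sketch of working memory with a custom XML template. The agent fills
// in these fields as it learns about the user during the conversation.
const memory = new Memory({
  options: {
    workingMemory: {
      enabled: true,
      template: `
<user>
  <first_name></first_name>
  <pronouns></pronouns>
  <goals></goals>
</user>`,
    },
  },
});
```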
## embedder
By default, Memory uses FastEmbed with the `bge-small-en-v1.5` model, which provides a good balance of performance and model size (~130MB). You only need to specify an embedder if you want to use a different model or provider.
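For example, a sketch of swapping providers, assuming the `embedder` option accepts an AI SDK embedding model:

```typescript
import { Memory } from "@mastra/memory";
import { openai } from "@ai-sdk/openai";

// A sketch replacing the default FastEmbed model with an OpenAI embedding
// model via the AI SDK provider (assumes @ai-sdk/openai is installed).
const memory = new Memory({
  embedder: openai.embedding("text-embedding-3-small"),
});
```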
## Additional Notes
### Vector Search Configuration
When using vector search capabilities with custom configuration, ensure you configure both the vector store and appropriate search options. Here’s an example:
```typescript
import { Memory } from "@mastra/memory";
import { DefaultStorage, DefaultVectorDB } from "@mastra/core/storage";

const memory = new Memory({
  storage: new DefaultStorage({
    url: ":memory:",
  }),
  vector: new DefaultVectorDB({
    url: ":memory:",
  }),
  options: {
    semanticRecall: {
      topK: 5,
      messageRange: 2,
    },
  },
});
```
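Note that `messageRange` accepts either a single number, as here, or an explicit `{ before, after }` object, as in the Custom Configuration example above.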