
Memory Class Reference

The Memory class provides a robust system for managing conversation history and thread-based message storage in Mastra. It enables persistent storage of conversations, semantic search capabilities, and efficient message retrieval.

Usage Example

import { Memory } from "@mastra/memory";
import { MastraStorageLibSql } from "@mastra/core/storage";
 
const memory = new Memory({
  storage: new MastraStorageLibSql({
    url: ":memory:",
  }),
});
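
A Memory instance is typically passed to an agent so that conversation history is saved and recalled automatically. The following is a minimal sketch, assuming the Agent class from @mastra/core/agent and the AI SDK's openai model provider; the agent name and instructions are illustrative:

import { openai } from "@ai-sdk/openai";
import { Agent } from "@mastra/core/agent";

// Hypothetical wiring: the agent reads and writes its conversation
// threads through the memory instance created above.
const agent = new Agent({
  name: "support-agent",
  instructions: "You are a helpful support assistant.",
  model: openai("gpt-4o-mini"),
  memory,
});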

Parameters

storage: MastraStorage
Storage implementation for persisting memory data.

vector?: MastraVector
Vector store for semantic search capabilities.

embedder?: MastraEmbedder
Embedder instance for generating vector embeddings (e.g., OpenAIEmbedder, CohereEmbedder).

options?: MemoryConfig
General memory configuration options (see the options section below).

options

lastMessages?: number | false = 40
Number of most recent messages to retrieve. Set to false to disable (see the example after this list).

semanticRecall?: boolean | SemanticRecallConfig = false (true if a vector store is provided)
Enable semantic search over message history. Automatically enabled when a vector store is provided.

topK?: number = 2
Number of similar messages to retrieve when using semantic search.

messageRange?: number | { before: number; after: number } = 2
Range of messages to include around each semantic search result.

workingMemory?: { enabled: boolean; template: string } = { enabled: false, template: '<user><first_name></first_name><last_name></last_name>...</user>' }
Configuration for the working memory feature, which allows persistent storage of user information across conversations.
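
As a sketch of how these options fit together (the values are illustrative; the storage setup matches the usage example above):

const memory = new Memory({
  storage: new MastraStorageLibSql({ url: ":memory:" }),
  options: {
    lastMessages: 20,      // keep the 20 most recent messages in context
    semanticRecall: false, // stays off here since no vector store is configured
  },
});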

Working Memory

The working memory feature allows agents to maintain persistent information across conversations. When enabled, the Memory class will automatically manage XML-based working memory updates through the conversation stream.

If no template is provided, the Memory class uses a default template that includes fields for user details, preferences, goals, and other contextual information. See the Agent Memory Guide for detailed usage examples and best practices.
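A sketch of enabling working memory with a custom template follows; the template fields shown are illustrative, not the library default:

const memory = new Memory({
  storage: new MastraStorageLibSql({ url: ":memory:" }),
  options: {
    workingMemory: {
      enabled: true,
      // Hypothetical template: the agent fills in these fields as it
      // learns about the user across conversations.
      template: "<user><first_name></first_name><preferences></preferences><goals></goals></user>",
    },
  },
});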

embedder

The embedder instance to use for generating vector embeddings. This should be an instance of a class that implements the MastraEmbedder interface. See the Embedder Reference for available embedders and their configuration options.

Additional Notes

Vector Search Configuration

When using vector search capabilities, ensure you configure both the vector store and appropriate search options. Here's an example (using in-memory stores for brevity):

import { Memory } from "@mastra/memory";
import { MastraStorageLibSql } from "@mastra/core/storage";
import { LibSQLVector } from "@mastra/core/vector/libsql";
// OpenAIEmbedder must also be imported; see the Embedder Reference
// for the package that provides it.
 
const memory = new Memory({
  // Thread/message persistence (in-memory for this example)
  storage: new MastraStorageLibSql({
    url: ":memory:",
  }),
  // Vector store backing semantic recall
  vector: new LibSQLVector({
    url: ":memory:",
  }),
  // Embedder used to generate embeddings for stored messages
  embedder: new OpenAIEmbedder({
    model: "text-embedding-3-small",
    maxRetries: 3,
  }),
  options: {
    semanticRecall: {
      topK: 5,         // retrieve the 5 most similar messages
      messageRange: 2, // include 2 surrounding messages for context
    },
  },
});
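
Once configured, stored messages can be retrieved per thread. The following is a hypothetical retrieval flow; the createThread and query methods are assumed from the broader Memory reference, so check that page for exact signatures:

// Hypothetical thread lifecycle: create a thread for a user, then
// fetch its most recent messages.
const thread = await memory.createThread({
  resourceId: "user-123", // illustrative user identifier
  title: "Support conversation",
});

const { messages } = await memory.query({
  threadId: thread.id,
  selectBy: { last: 10 }, // the 10 most recent messages
});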