Memory Class
The Memory class manages conversation history and thread-based message storage in Mastra. It provides persistent storage of conversations, semantic search over past messages, and efficient message retrieval. You must configure a storage provider for conversation history; if you enable semantic recall, you will also need to provide a vector store and an embedder.
Usage example
src/mastra/agents/test-agent.ts
import { Memory } from "@mastra/memory";
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";
export const agent = new Agent({
  name: "test-agent",
  instructions: "You are an agent with memory.",
  model: openai("gpt-4o"),
  memory: new Memory({
    options: {
      workingMemory: {
        enabled: true,
      },
    },
  }),
});
To enable `workingMemory` on an agent, you'll need a storage provider configured on your main Mastra instance. See the Mastra class for more information.
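For reference, a minimal sketch of that setup, assuming the LibSQL storage adapter (any supported provider works; the file path is illustrative):

src/mastra/index.ts
import { Mastra } from "@mastra/core/mastra";
import { LibSQLStore } from "@mastra/libsql";
import { agent } from "./agents/test-agent";

export const mastra = new Mastra({
  agents: { agent },
  // Shared storage provider; memory features use this when the
  // Memory instance doesn't configure its own storage
  storage: new LibSQLStore({
    url: "file:./mastra.db",
  }),
});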
Constructor parameters
storage?: MastraStorage
Storage implementation for persisting memory data. Defaults to `new DefaultStorage({ config: { url: "file:memory.db" } })` if not provided.
vector?: MastraVector | false
Vector store for semantic search capabilities. Set to `false` to disable vector operations.
embedder?: EmbeddingModel<string> | EmbeddingModelV2<string>
Embedder instance for generating vector embeddings. Required when semantic recall is enabled.
options?: MemoryConfig
Memory configuration options.
processors?: MemoryProcessor[]
Array of memory processors that can filter or transform messages before they're sent to the LLM.
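As a sketch, the following attaches two of the processors that ship with `@mastra/memory` (the processor names and token budget are assumptions based on the built-in processors; verify against your installed version):

import { Memory } from "@mastra/memory";
import { TokenLimiter, ToolCallFilter } from "@mastra/memory/processors";

export const memory = new Memory({
  processors: [
    // Remove tool calls and tool results from recalled history
    new ToolCallFilter(),
    // Cap recalled history at roughly 127k tokens before it reaches the LLM
    new TokenLimiter(127000),
  ],
});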
Options parameters
lastMessages?: number | false = 10
Number of most recent messages to retrieve. Set to `false` to disable.
semanticRecall?: boolean | { topK: number; messageRange: number | { before: number; after: number }; scope?: 'thread' | 'resource' } = false
Enables semantic search over message history. Can be a boolean or an object with configuration options. When enabled, both a vector store and an embedder must be configured.
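As a sketch, the object form tunes how many matches are retrieved and how much surrounding context each match carries (the values are illustrative; the vector store and embedder this requires are omitted for brevity):

import { Memory } from "@mastra/memory";

export const memory = new Memory({
  options: {
    semanticRecall: {
      topK: 3, // retrieve the 3 most similar past messages
      // include 2 messages before and 1 after each match for context
      messageRange: { before: 2, after: 1 },
      scope: "resource", // search across all of the user's threads
    },
  },
});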
workingMemory?: WorkingMemory = { enabled: false, template: '# User Information\n- **First Name**:\n- **Last Name**:\n...' }
Configuration for the working memory feature. Can be `{ enabled: boolean; template?: string; schema?: ZodObject<any> | JSONSchema7; scope?: 'thread' | 'resource' }`, or `{ enabled: false }` to disable.
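For instance, a sketch of the object form with a custom Markdown template (the template content and scope are illustrative):

import { Memory } from "@mastra/memory";

export const memory = new Memory({
  options: {
    workingMemory: {
      enabled: true,
      scope: "resource", // persist working memory across the user's threads
      // Template the agent fills in as it learns about the user
      template: `# User Profile
- **Name**:
- **Location**:
- **Goals**:
`,
    },
  },
});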
threads?: { generateTitle?: boolean | { model: DynamicArgument<MastraLanguageModel>; instructions?: DynamicArgument<string> } } = { generateTitle: false }
Settings related to memory thread creation. `generateTitle` controls automatic thread title generation from the user's first message. Can be a boolean or an object with a custom model and instructions.
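As a sketch, the object form lets you route title generation to a smaller model (the model choice and instructions are illustrative):

import { Memory } from "@mastra/memory";
import { openai } from "@ai-sdk/openai";

export const memory = new Memory({
  options: {
    threads: {
      generateTitle: {
        model: openai("gpt-4o-mini"), // cheaper model just for titles
        instructions: "Generate a concise, descriptive title from the user's first message.",
      },
    },
  },
});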
Returns
memory: Memory
A new Memory instance with the specified configuration.
Extended usage example
src/mastra/agents/test-agent.ts
import { Memory } from "@mastra/memory";
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";
import { LibSQLStore, LibSQLVector } from "@mastra/libsql";
export const agent = new Agent({
  name: "test-agent",
  instructions: "You are an agent with memory.",
  model: openai("gpt-4o"),
  memory: new Memory({
    storage: new LibSQLStore({
      url: "file:./working-memory.db",
    }),
    vector: new LibSQLVector({
      connectionUrl: "file:./vector-memory.db",
    }),
    // semanticRecall requires an embedder alongside the vector store
    embedder: openai.embedding("text-embedding-3-small"),
    options: {
      lastMessages: 10,
      semanticRecall: {
        topK: 3,
        messageRange: 2,
        scope: "resource",
      },
      workingMemory: {
        enabled: true,
      },
      threads: {
        generateTitle: true,
      },
    },
  }),
});
PostgreSQL with index configuration
src/mastra/agents/pg-agent.ts
import { Memory } from "@mastra/memory";
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";
import { PostgresStore, PgVector } from "@mastra/pg";
export const agent = new Agent({
  name: "pg-agent",
  instructions: "You are an agent with optimized PostgreSQL memory.",
  model: openai("gpt-4o"),
  memory: new Memory({
    storage: new PostgresStore({
      connectionString: process.env.DATABASE_URL!,
    }),
    vector: new PgVector({
      connectionString: process.env.DATABASE_URL!,
    }),
    embedder: openai.embedding("text-embedding-3-small"),
    options: {
      lastMessages: 20,
      semanticRecall: {
        topK: 5,
        messageRange: 3,
        scope: "resource",
        indexConfig: {
          type: "hnsw", // Use HNSW for better performance
          metric: "dotproduct", // Optimal for OpenAI embeddings
          m: 16, // Number of bi-directional links
          efConstruction: 64, // Construction-time candidate list size
        },
      },
      workingMemory: {
        enabled: true,
      },
    },
  }),
});