# Memory Class

The `Memory` class provides a robust system for managing conversation history and thread-based message storage in Mastra. It enables persistent storage of conversations, semantic search, and efficient message retrieval. You must configure a storage provider for conversation history, and if you enable semantic recall you will also need to provide a vector store and an embedder.

## Usage example

```typescript
import { Memory } from "@mastra/memory";
import { Agent } from "@mastra/core/agent";

export const agent = new Agent({
  name: "test-agent",
  instructions: "You are an agent with memory.",
  model: "openai/gpt-5.1",
  memory: new Memory({
    options: {
      workingMemory: {
        enabled: true,
      },
    },
  }),
});
```

> To enable `workingMemory` on an agent, you’ll need a storage provider configured on your main Mastra instance. See [Mastra class](https://mastra.ai/reference/core/mastra-class) for more information.

## Constructor parameters

**storage?:** (`MastraCompositeStore`): Storage implementation for persisting memory data. Defaults to `new DefaultStorage({ config: { url: "file:memory.db" } })` if not provided.

**vector?:** (`MastraVector | false`): Vector store for semantic search capabilities. Set to `false` to disable vector operations.

**embedder?:** (`EmbeddingModel | EmbeddingModelV2`): Embedder instance for vector embeddings. Required when semantic recall is enabled.

**options?:** (`MemoryConfig`): Memory configuration options.

### Options parameters

**lastMessages?:** (`number | false`): Number of most recent messages to retrieve. Set to `false` to disable. (Default: `10`)

**readOnly?:** (`boolean`): When `true`, prevents memory from saving new messages and provides working memory as read-only context (without the `updateWorkingMemory` tool). Useful for read-only operations such as previews, internal routing agents, or sub-agents that should reference but not modify memory. (Default: `false`)

**semanticRecall?:** (`boolean | { topK: number; messageRange: number | { before: number; after: number }; scope?: 'thread' | 'resource' }`): Enables semantic search over message history. Can be a boolean or an object with configuration options. When enabled, requires both a vector store and an embedder to be configured. The default `topK` is `4` and the default `messageRange` is `{ before: 1, after: 1 }`. (Default: `false`)

**workingMemory?:** (`WorkingMemory`): Configuration for the working memory feature. Can be `{ enabled: boolean; template?: string; schema?: ZodObject | JSONSchema7; scope?: 'thread' | 'resource' }`, or `{ enabled: false }` to disable it. (Default: `{ enabled: false, template: '# User Information\n- **First Name**:\n- **Last Name**:\n...' }`)

**observationalMemory?:** (`boolean | ObservationalMemoryOptions`): Enables Observational Memory for long-context agentic memory. Set to `true` for defaults, or pass a config object to customize token budgets, models, and scope. See the [Observational Memory reference](/reference/memory/observational-memory) for configuration details. (Default: `false`)

**generateTitle?:** (`boolean | { model: DynamicArgument; instructions?: DynamicArgument }`): Controls automatic thread title generation from the user's first message. Can be a boolean or an object with a custom model and instructions. (Default: `false`)

## Returns

**memory:** (`Memory`): A new `Memory` instance with the specified configuration.
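For example, the `schema` option documented above can replace the default Markdown template so that working memory is stored as structured data. A minimal sketch, assuming the documented `WorkingMemory` option shape; the field names in the schema are illustrative, not part of the API:

```typescript
import { z } from "zod";
import { Memory } from "@mastra/memory";

export const memory = new Memory({
  options: {
    workingMemory: {
      enabled: true,
      // "resource" shares working memory across all threads for the
      // same user; "thread" keeps a separate copy per conversation.
      scope: "resource",
      // Illustrative fields only -- any ZodObject (or JSONSchema7) works.
      schema: z.object({
        firstName: z.string().optional(),
        lastName: z.string().optional(),
      }),
    },
  },
});
```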
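Similarly, `generateTitle` accepts an object form for customizing how thread titles are produced. A sketch assuming the model-router string syntax used elsewhere on this page; the model id and instruction text are illustrative:

```typescript
import { Memory } from "@mastra/memory";

export const memory = new Memory({
  options: {
    generateTitle: {
      // Illustrative model choice; any DynamicArgument model works here.
      model: "openai/gpt-5.1",
      instructions: "Generate a short, descriptive title for this thread.",
    },
  },
});
```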
## Extended usage example

```typescript
import { Memory } from "@mastra/memory";
import { Agent } from "@mastra/core/agent";
import { ModelRouterEmbeddingModel } from "@mastra/core/llm";
import { LibSQLStore, LibSQLVector } from "@mastra/libsql";

export const agent = new Agent({
  name: "test-agent",
  instructions: "You are an agent with memory.",
  model: "openai/gpt-5.1",
  memory: new Memory({
    storage: new LibSQLStore({
      id: "test-agent-storage",
      url: "file:./working-memory.db",
    }),
    vector: new LibSQLVector({
      id: "test-agent-vector",
      url: "file:./vector-memory.db",
    }),
    // An embedder is required when semanticRecall is enabled.
    embedder: new ModelRouterEmbeddingModel("openai/text-embedding-3-small"),
    options: {
      lastMessages: 10,
      semanticRecall: {
        topK: 3,
        messageRange: 2,
        scope: "resource",
      },
      workingMemory: {
        enabled: true,
      },
      generateTitle: true,
    },
  }),
});
```

## PostgreSQL with index configuration

```typescript
import { Memory } from "@mastra/memory";
import { Agent } from "@mastra/core/agent";
import { ModelRouterEmbeddingModel } from "@mastra/core/llm";
import { PgStore, PgVector } from "@mastra/pg";

export const agent = new Agent({
  name: "pg-agent",
  instructions: "You are an agent with optimized PostgreSQL memory.",
  model: "openai/gpt-5.1",
  memory: new Memory({
    storage: new PgStore({
      id: "pg-agent-storage",
      connectionString: process.env.DATABASE_URL,
    }),
    vector: new PgVector({
      id: "pg-agent-vector",
      connectionString: process.env.DATABASE_URL,
    }),
    embedder: new ModelRouterEmbeddingModel("openai/text-embedding-3-small"),
    options: {
      lastMessages: 20,
      semanticRecall: {
        topK: 5,
        messageRange: 3,
        scope: "resource",
        indexConfig: {
          type: "hnsw", // Use HNSW for better performance
          metric: "dotproduct", // Optimal for OpenAI embeddings
          m: 16, // Number of bi-directional links
          efConstruction: 64, // Construction-time candidate list size
        },
      },
      workingMemory: {
        enabled: true,
      },
    },
  }),
});
```

## Related

- [Getting Started with Memory](https://mastra.ai/docs/memory/overview)
- [Semantic Recall](https://mastra.ai/docs/memory/semantic-recall)
- [Working Memory](https://mastra.ai/docs/memory/working-memory)
- [Observational Memory](https://mastra.ai/docs/memory/observational-memory)
- [Memory Processors](https://mastra.ai/docs/memory/memory-processors)
- [createThread](https://mastra.ai/reference/memory/createThread)
- [recall](https://mastra.ai/reference/memory/recall)
- [getThreadById](https://mastra.ai/reference/memory/getThreadById)
- [listThreads](https://mastra.ai/reference/memory/listThreads)
- [deleteMessages](https://mastra.ai/reference/memory/deleteMessages)
- [cloneThread](https://mastra.ai/reference/memory/cloneThread)
- [Clone Utility Methods](https://mastra.ai/reference/memory/clone-utilities)