# Agent memory

Agents use memory to maintain context across interactions. LLMs are stateless and don't retain information between calls, so agents need memory to track message history and recall relevant information.

Mastra agents can be configured to store message history, with optional [working memory](https://mastra.ai/docs/memory/working-memory/llms.txt) to maintain recent context or [semantic recall](https://mastra.ai/docs/memory/semantic-recall/llms.txt) to retrieve past messages based on meaning.

## When to use memory

Use memory when your agent needs to maintain multi-turn conversations that reference prior exchanges, recall user preferences or facts from earlier in a session, or build context over time within a conversation thread. Skip memory for single-turn requests where each interaction is independent.

## Setting up memory

To enable memory in Mastra, install the `@mastra/memory` package along with a storage provider.

```bash
npm install @mastra/memory@latest @mastra/libsql@latest
```

```bash
pnpm add @mastra/memory@latest @mastra/libsql@latest
```

```bash
yarn add @mastra/memory@latest @mastra/libsql@latest
```

```bash
bun add @mastra/memory@latest @mastra/libsql@latest
```

## Storage providers

Memory requires a storage provider to persist message history, including user messages and agent responses. For more details on available providers and how storage works in Mastra, see the [Storage](https://mastra.ai/docs/memory/storage/llms.txt) documentation.

## Configuring memory

1. Enable memory by creating a `Memory` instance and passing it to the agent’s `memory` option.

```typescript
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";

export const memoryAgent = new Agent({
  id: 'memory-agent',
  name: 'Memory Agent',
  memory: new Memory({
    options: {
      lastMessages: 20, // include the 20 most recent messages in context
    },
  }),
});
```

Visit [Memory Class](https://mastra.ai/reference/memory/memory-class/llms.txt) for a full list of configuration options.

2. Add a storage provider to your main Mastra instance to enable memory across all configured agents.

```typescript
import { Mastra } from "@mastra/core";
import { LibSQLStore } from "@mastra/libsql";

export const mastra = new Mastra({
  storage: new LibSQLStore({
    id: 'mastra-storage',
    url: ":memory:", // in-memory database for local development
  }),
});
```

Visit [libSQL Storage](https://mastra.ai/reference/storage/libsql/llms.txt) for a full list of configuration options.

Alternatively, add storage directly to an agent’s memory to keep data separate or use different providers per agent.

```typescript
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { LibSQLStore } from "@mastra/libsql";

export const memoryAgent = new Agent({
  id: 'memory-agent',
  name: 'Memory Agent',
  memory: new Memory({
    storage: new LibSQLStore({
      id: 'mastra-storage',
      url: ":memory:",
    }),
  }),
});
```

Agent-level storage is not supported when using [Mastra Cloud Store](https://mastra.ai/docs/mastra-cloud/deployment/llms.txt). If you use Mastra Cloud Store, configure storage on the Mastra instance instead. This limitation does not apply if you bring your own database.
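With the agent and shared storage defined, the agent is typically registered on the same Mastra instance so it picks up that storage for its memory. The sketch below shows one way to wire the two together; the import path and the combined configuration are assumptions for illustration, so adjust them to match your project.

```typescript
import { Mastra } from "@mastra/core";
import { LibSQLStore } from "@mastra/libsql";

// Hypothetical import path; point this at wherever your agent is defined.
import { memoryAgent } from "./agents/memory-agent";

export const mastra = new Mastra({
  // Agents registered here use the storage configured below for their memory.
  agents: { memoryAgent },
  storage: new LibSQLStore({
    id: 'mastra-storage',
    url: ":memory:",
  }),
});
```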
## Message history

Include a `memory` object with both `resource` and `thread` to track message history during agent calls.

- `resource`: A stable identifier for the user or entity.
- `thread`: An ID that isolates a specific conversation or session.

These fields tell the agent where to store and retrieve context, enabling persistent, thread-aware memory across a conversation.

```typescript
const response = await memoryAgent.generate(
  "Remember my favorite color is blue.",
  {
    memory: {
      resource: "user-123",
      thread: "conversation-123",
    },
  },
);
```

To recall information stored in memory, call the agent with the same `resource` and `thread` values used in the original conversation.

```typescript
const response = await memoryAgent.generate("What's my favorite color?", {
  memory: {
    resource: "user-123",
    thread: "conversation-123",
  },
});
```

Each thread has an owner (`resourceId`) that cannot be changed after creation. Avoid reusing the same thread ID for threads with different owners, as this will cause errors when querying.

To learn more about memory, see the [Memory](https://mastra.ai/docs/memory/overview/llms.txt) documentation.

## Using `RequestContext`

Use [RequestContext](https://mastra.ai/docs/server/request-context/llms.txt) to access request-specific values. This lets you conditionally select different memory or storage configurations based on the context of the request.

```typescript
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";

export type UserTier = {
  "user-tier": "enterprise" | "pro";
};

const premiumMemory = new Memory();
const standardMemory = new Memory();

export const memoryAgent = new Agent({
  id: 'memory-agent',
  name: 'Memory Agent',
  memory: ({ requestContext }) => {
    const userTier = requestContext.get("user-tier") as UserTier["user-tier"];
    return userTier === "enterprise" ? premiumMemory : standardMemory;
  },
});
```

Visit [Request Context](https://mastra.ai/docs/server/request-context/llms.txt) for more information.

## Related

- [Working Memory](https://mastra.ai/docs/memory/working-memory/llms.txt)
- [Semantic Recall](https://mastra.ai/docs/memory/semantic-recall/llms.txt)
- [Storage](https://mastra.ai/docs/memory/storage/llms.txt)
- [Request Context](https://mastra.ai/docs/server/request-context/llms.txt)