Storage
For agents to remember previous interactions, Mastra needs a database. Use a storage adapter for one of the supported databases and pass it to your Mastra instance.
```ts
import { Mastra } from "@mastra/core";
import { LibSQLStore } from "@mastra/libsql";

export const mastra = new Mastra({
  storage: new LibSQLStore({
    id: "mastra-storage",
    url: "file:./mastra.db",
  }),
});
```
When running mastra dev alongside your application (e.g., Next.js), use an absolute path to ensure both processes access the same database:
url: "file:/absolute/path/to/your/project/mastra.db"
Relative paths like file:./mastra.db resolve based on each process's working directory, which may differ.
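For example, you could read the absolute location from an environment variable so both processes agree on it; DATABASE_FILE_URL below is an illustrative variable name, not a Mastra convention:

```ts
import { Mastra } from "@mastra/core";
import { LibSQLStore } from "@mastra/libsql";

export const mastra = new Mastra({
  storage: new LibSQLStore({
    id: "mastra-storage",
    // An absolute file URL resolves to the same database from any working directory.
    // DATABASE_FILE_URL is an illustrative env var, e.g. "file:/absolute/path/to/your/project/mastra.db".
    url: process.env.DATABASE_FILE_URL!,
  }),
});
```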
This configures instance-level storage, which all agents share by default. You can also configure agent-level storage for isolated data boundaries.
Mastra automatically creates the necessary tables on first interaction. See the core schema for details on what gets created, including tables for messages, threads, resources, workflows, traces, and evaluation datasets.
Supported providers
Each provider page includes installation instructions, configuration parameters, and usage examples:
- libSQL
- PostgreSQL
- MongoDB
- Upstash
- Cloudflare D1
- Cloudflare Durable Objects
- Convex
- DynamoDB
- LanceDB
- Microsoft SQL Server
libSQL is the easiest way to get started because it doesn’t require running a separate database server.
Configuration scope
Storage can be configured at the instance level (shared by all agents) or at the agent level (isolated to a specific agent).
Instance-level storage
Add storage to your Mastra instance so all agents, workflows, observability traces and scores share the same memory provider:
```ts
import { Mastra } from "@mastra/core";
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { PostgresStore } from "@mastra/pg";

export const mastra = new Mastra({
  storage: new PostgresStore({
    id: "mastra-storage",
    connectionString: process.env.DATABASE_URL,
  }),
});

// Both agents inherit storage from the Mastra instance above
const agent1 = new Agent({ id: "agent-1", memory: new Memory() });
const agent2 = new Agent({ id: "agent-2", memory: new Memory() });
```
This is useful when all primitives share the same storage backend and have similar performance, scaling, and operational requirements.
Composite storage
Composite storage is an alternative way to configure instance-level storage. Use MastraCompositeStore to assign different storage providers to the memory domain (and any other domains you need).
```ts
import { Mastra } from "@mastra/core";
import { MastraCompositeStore } from "@mastra/core/storage";
import { MemoryLibSQL } from "@mastra/libsql";
import { WorkflowsPG } from "@mastra/pg";
import { ObservabilityStorageClickhouse } from "@mastra/clickhouse";

export const mastra = new Mastra({
  storage: new MastraCompositeStore({
    id: "composite",
    domains: {
      memory: new MemoryLibSQL({ url: "file:./memory.db" }),
      workflows: new WorkflowsPG({ connectionString: process.env.DATABASE_URL }),
      observability: new ObservabilityStorageClickhouse({
        url: process.env.CLICKHOUSE_URL,
        username: process.env.CLICKHOUSE_USERNAME,
        password: process.env.CLICKHOUSE_PASSWORD,
      }),
    },
  }),
});
```
This is useful when different types of data have different performance or operational requirements, such as low-latency storage for memory, durable storage for workflows, and high-throughput storage for observability.
Agent-level storage
Agent-level storage overrides storage configured at the instance level. Add storage to a specific agent when you need isolated data boundaries or must meet compliance requirements:
```ts
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { PostgresStore } from "@mastra/pg";

export const agent = new Agent({
  id: "agent",
  memory: new Memory({
    storage: new PostgresStore({
      id: "agent-storage",
      connectionString: process.env.AGENT_DATABASE_URL,
    }),
  }),
});
```
Mastra Cloud Store doesn't support agent-level storage.
Threads and resources
Mastra organizes conversations using two identifiers:
- Thread - a conversation session containing a sequence of messages.
- Resource - the entity that owns the thread, such as a user, organization, project, or any other domain entity in your application.
Both identifiers are required for agents to store information:
Generate:

```ts
const response = await agent.generate("hello", {
  memory: {
    thread: "conversation-abc-123",
    resource: "user_123",
  },
});
```

Stream:

```ts
const stream = await agent.stream("hello", {
  memory: {
    thread: "conversation-abc-123",
    resource: "user_123",
  },
});
```
Studio automatically generates a thread and resource ID for you. When calling stream() or generate() yourself, remember to provide these identifiers explicitly.
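For example, when handling your own chat requests you might mint a thread ID once per conversation and reuse a stable resource ID per user; the ID shapes below are illustrative:

```ts
import { randomUUID } from "node:crypto";

// Create once per conversation, then reuse for every follow-up message in it.
const threadId = `conversation-${randomUUID()}`;

// Typically an identifier you already have, such as the authenticated user's ID.
const resourceId = "user_123";

const response = await agent.generate("hello again", {
  memory: { thread: threadId, resource: resourceId },
});
```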
Thread title generation
Mastra can automatically generate descriptive thread titles based on the user's first message when generateTitle is enabled.
Use this option when implementing a ChatGPT-style chat interface, so you can render a title, derived from the thread's initial user message, alongside each thread in the conversation list (for example, in a sidebar).
```ts
export const agent = new Agent({
  id: "agent",
  memory: new Memory({
    options: {
      generateTitle: true,
    },
  }),
});
```
Title generation runs asynchronously after the agent responds and does not affect response time.
To optimize cost or behavior, provide a smaller model and custom instructions:
```ts
export const agent = new Agent({
  id: "agent",
  memory: new Memory({
    options: {
      generateTitle: {
        model: "openai/gpt-4o-mini",
        instructions: "Generate a 1 word title",
      },
    },
  }),
});
```
Semantic recall
Semantic recall has different storage requirements: it needs a vector database in addition to the standard storage adapter. See Semantic recall for setup and supported vector providers.
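As a rough sketch, memory with semantic recall pairs a storage adapter with a vector store; the LibSQLVector provider, its option names, and the semanticRecall values below are assumptions, and you may also need to configure an embedder, so defer to the Semantic recall page for the authoritative setup:

```ts
import { Memory } from "@mastra/memory";
import { LibSQLStore, LibSQLVector } from "@mastra/libsql";

export const memory = new Memory({
  storage: new LibSQLStore({ url: "file:./mastra.db" }),
  // Vector store used for embedding-based recall (assumed provider and option names).
  vector: new LibSQLVector({ connectionUrl: "file:./mastra.db" }),
  options: {
    // Illustrative values: recall the 3 most similar past messages per request.
    semanticRecall: { topK: 3, messageRange: 2 },
  },
});
```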
Handling large attachments
Some storage providers enforce record size limits that base64-encoded file attachments (such as images) can exceed:
| Provider | Record size limit |
|---|---|
| DynamoDB | 400 KB |
| Convex | 1 MiB |
| Cloudflare D1 | 1 MiB |
PostgreSQL, MongoDB, and libSQL have higher limits and are generally unaffected.
To avoid this, use an input processor to upload attachments to external storage (S3, R2, GCS, Convex file storage, etc.) and replace them with URL references before persistence.
```ts
import type { Processor } from "@mastra/core/processors";
import type { MastraDBMessage } from "@mastra/core/memory";

export class AttachmentUploader implements Processor {
  id = "attachment-uploader";

  async processInput({ messages }: { messages: MastraDBMessage[] }) {
    return Promise.all(messages.map((msg) => this.processMessage(msg)));
  }

  async processMessage(msg: MastraDBMessage) {
    const attachments = msg.content.experimental_attachments;
    if (!attachments?.length) return msg;

    const uploaded = await Promise.all(
      attachments.map(async (att) => {
        // Skip if already a URL
        if (!att.url?.startsWith("data:")) return att;
        // Upload base64 data and replace with URL
        const url = await this.upload(att.url, att.contentType);
        return { ...att, url };
      }),
    );

    return { ...msg, content: { ...msg.content, experimental_attachments: uploaded } };
  }

  async upload(dataUri: string, contentType?: string): Promise<string> {
    const base64 = dataUri.split(",")[1];
    const buffer = Buffer.from(base64, "base64");
    // Replace with your storage provider (S3, R2, GCS, Convex, etc.)
    // return await s3.upload(buffer, contentType);
    throw new Error("Implement upload() with your storage provider");
  }
}
```
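As one possible implementation of upload(), here is a hedged sketch using the AWS SDK v3 for S3; the bucket, key scheme, and returned URL format are assumptions to adapt to your provider:

```ts
import { randomUUID } from "node:crypto";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({});
const BUCKET = process.env.ATTACHMENT_BUCKET!; // illustrative env var

async function uploadAttachment(buffer: Buffer, contentType?: string): Promise<string> {
  const key = `attachments/${randomUUID()}`;
  await s3.send(
    new PutObjectCommand({
      Bucket: BUCKET,
      Key: key,
      Body: buffer,
      ContentType: contentType,
    }),
  );
  // Return whatever URL your bucket or CDN serves the object from.
  return `https://${BUCKET}.s3.amazonaws.com/${key}`;
}
```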
Use the processor with your agent:
```ts
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { AttachmentUploader } from "./processors/attachment-uploader";

const agent = new Agent({
  id: "my-agent",
  memory: new Memory({ storage: yourStorage }),
  inputProcessors: [new AttachmentUploader()],
});
```