Storage
For Mastra to remember previous interactions, you must configure a storage adapter. Mastra works with your preferred database provider: choose one of the supported providers and pass it to your Mastra instance.
import { Mastra } from "@mastra/core";
import { LibSQLStore } from "@mastra/libsql";
export const mastra = new Mastra({
  storage: new LibSQLStore({
    id: "mastra-storage",
    url: "file:./mastra.db",
  }),
});
On first interaction, Mastra automatically creates the necessary tables following the core schema. This includes tables for messages, threads, resources, workflows, traces, and evaluation datasets.
Supported providers
Each provider page includes installation instructions, configuration parameters, and usage examples:
- libSQL Storage
- PostgreSQL Storage
- MongoDB Storage
- Upstash Storage
- Cloudflare D1
- Cloudflare Durable Objects
- Convex
- DynamoDB
- LanceDB
- Microsoft SQL Server
libSQL is the easiest way to get started because it doesn’t require running a separate database server.
Configuration scope
You can configure storage at two scopes: on the Mastra instance (optionally split across providers with composite storage) or on an individual agent.
Instance-level storage
Add storage to your Mastra instance so all agents, workflows, observability traces, and scores share the same storage provider:
import { Mastra } from "@mastra/core";
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { PostgresStore } from "@mastra/pg";
export const mastra = new Mastra({
  storage: new PostgresStore({
    id: "mastra-storage",
    connectionString: process.env.DATABASE_URL,
  }),
});
// All agents automatically use this storage
const agent1 = new Agent({ id: "agent-1", memory: new Memory() });
const agent2 = new Agent({ id: "agent-2", memory: new Memory() });
This is useful when all primitives share the same storage backend and have similar performance, scaling, and operational requirements.
Composite storage
Add storage to your Mastra instance using MastraCompositeStore and configure individual storage domains to use different storage providers:
import { Mastra } from "@mastra/core";
import { MastraCompositeStore } from "@mastra/core/storage";
import { MemoryLibSQL } from "@mastra/libsql";
import { WorkflowsPG } from "@mastra/pg";
import { ObservabilityStorageClickhouse } from "@mastra/clickhouse";
export const mastra = new Mastra({
  storage: new MastraCompositeStore({
    id: "composite",
    domains: {
      memory: new MemoryLibSQL({ url: "file:./memory.db" }),
      workflows: new WorkflowsPG({ connectionString: process.env.DATABASE_URL }),
      observability: new ObservabilityStorageClickhouse({
        url: process.env.CLICKHOUSE_URL,
        username: process.env.CLICKHOUSE_USERNAME,
        password: process.env.CLICKHOUSE_PASSWORD,
      }),
    },
  }),
});
This is useful when different types of data have different performance or operational requirements, such as low-latency storage for memory, durable storage for workflows, and high-throughput storage for observability.
See Storage Domains for more information.
Agent-level storage
Agent-level storage overrides storage configured at the instance level. Add storage to a specific agent when you need to enforce data boundaries or meet compliance requirements:
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { PostgresStore } from "@mastra/pg";
export const agent = new Agent({
  id: "agent",
  memory: new Memory({
    storage: new PostgresStore({
      id: "agent-storage",
      connectionString: process.env.AGENT_DATABASE_URL,
    }),
  }),
});
This is useful when different agents need to store data in separate databases for security, compliance, or organizational reasons.
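For example, two agents can each write to their own database. The following is a minimal sketch mirroring the example above; the EU_DATABASE_URL and US_DATABASE_URL environment variables are hypothetical placeholders for your own connection strings:
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { PostgresStore } from "@mastra/pg";
// Each agent keeps its conversation data in a separate database.
export const euAgent = new Agent({
  id: "eu-agent",
  memory: new Memory({
    storage: new PostgresStore({
      id: "eu-agent-storage",
      connectionString: process.env.EU_DATABASE_URL,
    }),
  }),
});
export const usAgent = new Agent({
  id: "us-agent",
  memory: new Memory({
    storage: new PostgresStore({
      id: "us-agent-storage",
      connectionString: process.env.US_DATABASE_URL,
    }),
  }),
});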
Agent-level storage is not supported when using Mastra Cloud Store. If you use Mastra Cloud Store, configure storage on the Mastra instance instead. This limitation does not apply if you bring your own database.
Threads and resources
Mastra organizes memory into threads using two identifiers:
- Thread: A conversation session containing a sequence of messages (e.g., convo_123)
- Resource: An identifier for the entity the thread belongs to, typically a user (e.g., user_123)
Both identifiers are required for agents to store and recall information:
const stream = await agent.stream("message for agent", {
  memory: {
    thread: "convo_123",
    resource: "user_123",
  },
});
Studio automatically generates a thread and resource ID for you. Remember to pass these explicitly when calling stream or generate yourself.
Thread and resource relationship
Each thread has an owner (its resourceId) that is set when the thread is created and cannot be changed. When you query a thread, you must use the correct owner's resource ID. Attempting to query a thread with a different resource ID will result in an error:
Thread with id <thread_id> is for resource with id <resource_a>
but resource <resource_b> was queried
Note that while each thread has one owner, messages within that thread can have different resourceId values. This is used for message attribution and filtering (e.g., distinguishing between different agents in a multi-agent system, or filtering messages for analytics).
Security: Memory is a storage layer, not an authorization layer. Your application must implement access control before calling memory APIs. The resourceId parameter controls both validation and filtering: provide it to validate ownership and filter messages, or omit it for server-side access without validation.
To avoid accidentally reusing thread IDs across different owners, use UUIDs: crypto.randomUUID()
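For example, a request handler can validate thread ownership in your own application layer before calling the agent, and mint new thread IDs with crypto.randomUUID(). The following is a minimal sketch: getThreadOwner and the ./agent import path are hypothetical stand-ins for your own persistence and project layout; only the agent.stream call and the memory identifiers come from the examples above:
import { randomUUID } from "node:crypto";
import { agent } from "./agent";
// Hypothetical lookup in your own application database; replace with a real query.
async function getThreadOwner(threadId: string): Promise<string | null> {
  return null;
}
export async function sendMessage(userId: string, message: string, threadId?: string) {
  // Mint a fresh UUID for new conversations so thread IDs are never reused across owners.
  const thread = threadId ?? randomUUID();
  if (threadId) {
    // Application-level access control before calling memory APIs.
    const owner = await getThreadOwner(threadId);
    if (owner !== userId) {
      throw new Error("Not authorized to access this thread");
    }
  }
  return agent.stream(message, {
    memory: { thread, resource: userId },
  });
}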
Thread title generation
Mastra can automatically generate descriptive thread titles based on the user's first message.
Use this option when implementing a ChatGPT-style chat interface to render a title for each thread in the conversation list (for example, in a sidebar), derived from the thread’s first user message.
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
export const testAgent = new Agent({
  id: "test-agent",
  memory: new Memory({
    options: {
      generateTitle: true,
    },
  }),
});
Title generation runs asynchronously after the agent responds and does not affect response time.
To optimize cost or behavior, provide a smaller model and custom instructions:
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
export const testAgent = new Agent({
  id: "test-agent",
  memory: new Memory({
    options: {
      generateTitle: {
        model: "openai/gpt-4o-mini",
        instructions: "Generate a concise title based on the user's first message",
      },
    },
  }),
});
Semantic recall
Semantic recall uses vector embeddings to retrieve relevant past messages based on meaning rather than recency. This requires a vector database instance, which can be configured at the instance or agent level.
The vector database doesn't have to be the same as your storage provider. For example, you might use PostgreSQL for storage and Pinecone for vectors:
import { Mastra } from "@mastra/core";
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { PostgresStore } from "@mastra/pg";
import { PineconeVector } from "@mastra/pinecone";
// Instance-level storage configuration
export const mastra = new Mastra({
  storage: new PostgresStore({
    id: "mastra-storage",
    connectionString: process.env.DATABASE_URL,
  }),
});
// Agent-level vector configuration
export const agent = new Agent({
  id: "agent",
  memory: new Memory({
    vector: new PineconeVector({
      id: "agent-vector",
      apiKey: process.env.PINECONE_API_KEY,
    }),
    options: {
      semanticRecall: {
        topK: 5,
        messageRange: 2,
      },
    },
  }),
});
Mastra supports all popular vector providers, including Pinecone, Chroma, Qdrant, and many more.
For more information on configuring semantic recall, see the Semantic Recall documentation.