
Cloudflare D1 Storage

The Cloudflare D1 storage implementation provides a serverless SQL database solution using Cloudflare D1, supporting relational operations and transactional consistency.

Observability Not Supported

Cloudflare D1 storage does not support the observability domain. Traces from the DefaultExporter cannot be persisted to D1, and Mastra Studio's observability features won't work with D1 as your only storage provider. To enable observability, use composite storage to route observability data to a supported provider like ClickHouse or PostgreSQL.

Row Size Limit

Cloudflare D1 enforces a 1 MiB maximum row size. This limit can be exceeded when storing messages with base64-encoded attachments such as images. See Handling large attachments for workarounds including uploading attachments to external storage.
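As an illustrative sketch (not part of the Mastra API), a pre-write guard can estimate a message's serialized size and flag payloads that would exceed the 1 MiB row cap, so the attachment can be offloaded to external storage such as R2 and replaced with a URL. The function names and message shape here are assumptions for this example:

```typescript
// Hypothetical pre-write guard: estimate the serialized row size and
// flag messages whose base64 attachments would exceed D1's 1 MiB limit.
const D1_MAX_ROW_BYTES = 1 * 1024 * 1024; // 1 MiB

interface MessageLike {
  content: string;
  attachmentBase64?: string;
}

// Rough size of the message as it would be serialized into a row.
function estimateRowBytes(message: MessageLike): number {
  return new TextEncoder().encode(JSON.stringify(message)).length;
}

function wouldExceedD1RowLimit(message: MessageLike): boolean {
  return estimateRowBytes(message) > D1_MAX_ROW_BYTES;
}
```

If the check trips, upload the attachment to external storage and store only its URL in the message row.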

Installation

npm install @mastra/cloudflare-d1@latest

Usage

Using with Cloudflare Workers

When using D1Store in a Cloudflare Worker, you need to access the D1 binding from the worker's env parameter at runtime. The D1Database in your type definition is only for TypeScript type checking—the actual binding is provided by the Workers runtime.

import { D1Store } from "@mastra/cloudflare-d1";
import { Mastra } from "@mastra/core";
import { CloudflareDeployer } from "@mastra/deployer-cloudflare";

type Env = {
  D1Database: D1Database; // TypeScript type definition
};

// Factory function to create Mastra with D1 binding
function createMastra(env: Env) {
  const storage = new D1Store({
    binding: env.D1Database, // ✅ Access the actual binding from env
    tablePrefix: "dev_", // Optional: isolate tables per environment
  });

  return new Mastra({
    storage,
    deployer: new CloudflareDeployer({
      name: "my-worker",
      d1_databases: [
        {
          binding: "D1Database", // Must match the property name in Env type
          database_name: "your-database-name",
          database_id: "your-database-id",
        },
      ],
    }),
  });
}

// Cloudflare Worker export
export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext) {
    const mastra = createMastra(env);

    // Your handler logic here
    return new Response("Hello from Mastra with D1!");
  },
};

Important: Understanding D1 Bindings

In the Env type definition, D1Database: D1Database serves two purposes:

  • The property name (D1Database) must match the binding name in your wrangler.toml
  • The type (: D1Database) is from @cloudflare/workers-types for TypeScript type checking

At runtime, Cloudflare Workers provides the actual D1 database instance via env.D1Database. You cannot use D1Database directly outside of the worker's context.

Using with REST API

For non-Workers environments (Node.js, serverless functions, etc.), use the REST API approach:

import { D1Store } from "@mastra/cloudflare-d1";

const storage = new D1Store({
  accountId: process.env.CLOUDFLARE_ACCOUNT_ID!, // Cloudflare Account ID
  databaseId: process.env.CLOUDFLARE_D1_DATABASE_ID!, // D1 Database ID
  apiToken: process.env.CLOUDFLARE_API_TOKEN!, // Cloudflare API Token
  tablePrefix: "dev_", // Optional: isolate tables per environment
});
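The non-null assertions (`!`) above silence the type checker but do not catch a missing variable at runtime. As an illustrative sketch (the `requireEnv` helper is not part of `@mastra/cloudflare-d1`), you can fail fast with a descriptive error before constructing the store:

```typescript
// Hypothetical helper: read a required environment variable or throw a
// descriptive error instead of passing `undefined` to D1Store.
function requireEnv(
  name: string,
  env: Record<string, string | undefined> = process.env,
): string {
  const value = env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// const storage = new D1Store({
//   accountId: requireEnv("CLOUDFLARE_ACCOUNT_ID"),
//   databaseId: requireEnv("CLOUDFLARE_D1_DATABASE_ID"),
//   apiToken: requireEnv("CLOUDFLARE_API_TOKEN"),
// });
```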

Wrangler Configuration

Add the D1 database binding to your wrangler.toml:

[[d1_databases]]
binding = "D1Database" # Must match the property name in your Env type
database_name = "your-database-name"
database_id = "your-database-id"

Or in wrangler.jsonc:

{
  "d1_databases": [
    {
      "binding": "D1Database",
      "database_name": "your-database-name",
      "database_id": "your-database-id"
    }
  ]
}

Parameters

binding?: D1Database
Cloudflare D1 Workers binding (for Workers runtime)

accountId?: string
Cloudflare Account ID (for REST API)

databaseId?: string
Cloudflare D1 Database ID (for REST API)

apiToken?: string
Cloudflare API Token (for REST API)

tablePrefix?: string
Optional prefix for all table names (useful for environment isolation)
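A common pattern for tablePrefix is deriving it from the deployment environment so each environment writes to its own set of tables. The helper below is a sketch, and the environment names and prefixes are assumptions for this example:

```typescript
// Hypothetical mapping from deployment environment to a table prefix,
// so dev, staging, and production data live in separate tables.
type Environment = "development" | "staging" | "production";

function tablePrefixFor(environment: Environment): string {
  const prefixes: Record<Environment, string> = {
    development: "dev_",
    staging: "staging_",
    production: "prod_",
  };
  return prefixes[environment];
}

// const storage = new D1Store({
//   binding: env.D1Database,
//   tablePrefix: tablePrefixFor("development"),
// });
```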

Additional Notes

Schema Management

The storage implementation handles schema creation and updates automatically. It creates the following tables:

  • threads: Stores conversation threads
  • messages: Stores individual messages
  • metadata: Stores additional metadata for threads and messages

Initialization

When you pass storage to the Mastra class, init() is called automatically before any storage operation:

import { Mastra } from "@mastra/core";
import { D1Store } from "@mastra/cloudflare-d1";

type Env = {
  D1Database: D1Database;
};

// In a Cloudflare Worker
export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext) {
    const storage = new D1Store({
      binding: env.D1Database, // ✅ Use env.D1Database
    });

    const mastra = new Mastra({
      storage, // init() is called automatically
    });

    // Your handler logic here
    return new Response("Success");
  },
};

If you're using storage directly without Mastra, you must call init() explicitly to create the tables:

import { D1Store } from "@mastra/cloudflare-d1";

type Env = {
  D1Database: D1Database;
};

// In a Cloudflare Worker
export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext) {
    const storage = new D1Store({
      id: "d1-storage",
      binding: env.D1Database, // ✅ Use env.D1Database
    });

    // Required when using storage directly
    await storage.init();

    // Access domain-specific stores via getStore()
    const memoryStore = await storage.getStore("memory");
    const thread = await memoryStore?.getThreadById({ threadId: "..." });

    return new Response("Success");
  },
};

Warning

If init() is not called, tables won't be created and storage operations will fail silently or throw errors.

Transactions & Consistency

Cloudflare D1 provides transactional guarantees for batched statements: multiple operations submitted together execute as a single, all-or-nothing unit of work, so either every statement in the batch is applied or none are.
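For example, a thread insert and its first message can be grouped so that neither row is written if either statement fails. The sketch below builds plain SQL statements plus parameters; the column names are hypothetical (the store manages the real schema), and in a Worker the statements would be prepared, bound, and passed to the D1 binding's batch method:

```typescript
// Hypothetical atomic write: both statements are submitted as one batch,
// so D1 applies either both rows or neither.
interface SqlStatement {
  sql: string;
  params: string[];
}

function buildThreadWithFirstMessage(
  threadId: string,
  messageId: string,
  text: string,
): SqlStatement[] {
  return [
    { sql: "INSERT INTO threads (id) VALUES (?)", params: [threadId] },
    {
      sql: "INSERT INTO messages (id, thread_id, content) VALUES (?, ?, ?)",
      params: [messageId, threadId, text],
    },
  ];
}

// In a Worker:
// await env.D1Database.batch(
//   buildThreadWithFirstMessage("t1", "m1", "hello").map(({ sql, params }) =>
//     env.D1Database.prepare(sql).bind(...params),
//   ),
// );
```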

Table Creation & Migrations

Tables are created automatically when storage is initialized (and can be isolated per environment using the tablePrefix option), but advanced schema changes—such as adding columns, changing data types, or modifying indexes—require manual migration and careful planning to avoid data loss.
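As a minimal sketch of what such a manual migration might look like (the column added here is hypothetical, and nothing in the store applies it for you), a statement can be built against the prefixed table name and run deliberately, for example via `wrangler d1 execute`, after backing up your data:

```typescript
// Hypothetical manual migration: add a column to a prefixed table.
// The store does not apply schema changes like this automatically;
// run them yourself during a planned maintenance window.
function addColumnMigration(
  tablePrefix: string,
  table: string,
  column: string,
  type: string,
): string {
  return `ALTER TABLE ${tablePrefix}${table} ADD COLUMN ${column} ${type};`;
}
```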