
Cloudflare D1 storage

The Cloudflare D1 storage implementation provides serverless SQL storage backed by Cloudflare D1, supporting relational operations and transactional consistency.

Observability Not Supported

Cloudflare D1 storage doesn't support the observability domain. Traces from the DefaultExporter can't be persisted to D1, and Studio's observability features won't work with D1 as your only storage provider. To enable observability, use composite storage to route observability data to a supported provider like ClickHouse or PostgreSQL.

Row Size Limit

Cloudflare D1 enforces a 1 MiB maximum row size. This limit can be exceeded when storing messages with base64-encoded attachments such as images. See Handling large attachments for workarounds including uploading attachments to external storage.
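
Because base64 inflates binary data by roughly 4/3, a row-size check on an attachment must use the encoded length, not the raw byte count. A minimal sketch of such a check (the 4 KiB headroom for the rest of the message row is an illustrative assumption, not a Mastra constant):

```typescript
// D1's documented maximum row size.
const D1_ROW_LIMIT = 1 * 1024 * 1024 // 1 MiB

// base64 encodes every 3 input bytes as 4 output characters (with padding).
function base64Length(rawBytes: number): number {
  return Math.ceil(rawBytes / 3) * 4
}

// Illustrative guard: leave some headroom for the non-attachment fields of
// the row. The 4 KiB overhead figure is an assumption for illustration.
function fitsInD1Row(rawBytes: number, overheadBytes = 4 * 1024): boolean {
  return base64Length(rawBytes) + overheadBytes <= D1_ROW_LIMIT
}
```

An attachment of 768 KiB of raw bytes already encodes to exactly 1 MiB, so anything near that size should go to external storage (for example R2), with only a reference stored in D1.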

Installation

```bash
npm install @mastra/cloudflare-d1@latest
```

Usage

Using with Mastra CloudflareDeployer

The standard way to use D1Store with Mastra on Cloudflare is with CloudflareDeployer. Import env from cloudflare:workers and initialize D1Store inline inside new Mastra({...}).

src/mastra/index.ts

```typescript
import { env } from 'cloudflare:workers'
import { D1Store } from '@mastra/cloudflare-d1'
import { Mastra } from '@mastra/core'
import { CloudflareDeployer } from '@mastra/deployer-cloudflare'

export const mastra = new Mastra({
  storage: new D1Store({ binding: env.DB }),
  deployer: new CloudflareDeployer({
    name: 'my-worker',
    d1_databases: [
      {
        binding: 'DB',
        database_name: 'your-database-name',
        database_id: 'your-database-id',
      },
    ],
  }),
})
```
Note

When using import { env } from 'cloudflare:workers', D1Store must be initialized inline inside new Mastra({...}) — not extracted to a module-level variable. Alternatively, initialize D1Store inside the fetch handler after env is available. See CloudflareDeployer reference for details.

Using in a Cloudflare Worker without HTTP routes

If you want to call Mastra directly in a Worker — for example, to run an agent or trigger a workflow — without serving HTTP routes, you don't need CloudflareDeployer. Access the D1 binding from the worker's env parameter and call Mastra programmatically.

```typescript
import { D1Store } from '@mastra/cloudflare-d1'
import { Mastra } from '@mastra/core'

type Env = {
  DB: D1Database
}

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext) {
    const mastra = new Mastra({
      storage: new D1Store({ binding: env.DB }),
    })

    const agent = mastra.getAgent('my-agent')
    const result = await agent.generate('Hello')

    return Response.json({ text: result.text })
  },
}
```

Using with REST API

For non-Workers environments (Node.js, serverless functions, etc.), use the REST API approach:

```typescript
import { D1Store } from '@mastra/cloudflare-d1'

const storage = new D1Store({
  accountId: process.env.CLOUDFLARE_ACCOUNT_ID!, // Cloudflare Account ID
  databaseId: process.env.CLOUDFLARE_D1_DATABASE_ID!, // D1 Database ID
  apiToken: process.env.CLOUDFLARE_API_TOKEN!, // Cloudflare API Token
  tablePrefix: 'dev_', // Optional: isolate tables per environment
})
```

Wrangler Configuration

Add the D1 database binding to your wrangler.toml:

```toml
[[d1_databases]]
binding = "DB"
database_name = "your-database-name"
database_id = "your-database-id"
```

Or in wrangler.jsonc:

```jsonc
{
  "d1_databases": [
    {
      "binding": "DB",
      "database_name": "your-database-name",
      "database_id": "your-database-id",
    },
  ],
}
```

Parameters

  • binding? (D1Database): Cloudflare D1 Workers binding (for the Workers runtime)
  • accountId? (string): Cloudflare Account ID (for the REST API)
  • databaseId? (string): Cloudflare D1 Database ID (for the REST API)
  • apiToken? (string): Cloudflare API Token (for the REST API)
  • tablePrefix? (string): Optional prefix for all table names (useful for environment isolation)
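
In practice the two configuration modes are mutually exclusive: pass binding inside a Worker, or the three REST credentials elsewhere. A small helper can make that choice explicit. The union type and helper below are illustrative, not part of the package's API; D1Store's actual option types live in @mastra/cloudflare-d1:

```typescript
// Illustrative config shapes mirroring the parameters above.
type WorkersConfig = { binding: unknown; tablePrefix?: string }
type RestConfig = {
  accountId: string
  databaseId: string
  apiToken: string
  tablePrefix?: string
}

// Prefer the Workers binding when present; otherwise require all three
// REST credentials before returning a config.
function resolveD1Config(env: {
  DB?: unknown
  CLOUDFLARE_ACCOUNT_ID?: string
  CLOUDFLARE_D1_DATABASE_ID?: string
  CLOUDFLARE_API_TOKEN?: string
}): WorkersConfig | RestConfig {
  if (env.DB) return { binding: env.DB }
  const { CLOUDFLARE_ACCOUNT_ID, CLOUDFLARE_D1_DATABASE_ID, CLOUDFLARE_API_TOKEN } = env
  if (CLOUDFLARE_ACCOUNT_ID && CLOUDFLARE_D1_DATABASE_ID && CLOUDFLARE_API_TOKEN) {
    return {
      accountId: CLOUDFLARE_ACCOUNT_ID,
      databaseId: CLOUDFLARE_D1_DATABASE_ID,
      apiToken: CLOUDFLARE_API_TOKEN,
    }
  }
  throw new Error('Provide either a D1 binding or accountId/databaseId/apiToken')
}
```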

Additional notes

Schema Management

The storage implementation handles schema creation and updates automatically. It creates the following tables:

  • threads: Stores conversation threads
  • messages: Stores individual messages
  • metadata: Stores additional metadata for threads and messages

Initialization

When you pass storage to the Mastra class, init() is called automatically before any storage operation:

```typescript
import { Mastra } from '@mastra/core'
import { D1Store } from '@mastra/cloudflare-d1'

type Env = {
  DB: D1Database
}

// In a Cloudflare Worker
export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext) {
    const storage = new D1Store({
      binding: env.DB,
    })

    const mastra = new Mastra({
      storage, // init() is called automatically
    })

    // Your handler logic here
    return new Response('Success')
  },
}
```

If you're using storage directly without Mastra, you must call init() explicitly to create the tables:

```typescript
import { D1Store } from '@mastra/cloudflare-d1'

type Env = {
  DB: D1Database
}

// In a Cloudflare Worker
export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext) {
    const storage = new D1Store({
      id: 'd1-storage',
      binding: env.DB,
    })

    // Required when using storage directly
    await storage.init()

    // Access domain-specific stores via getStore()
    const memoryStore = await storage.getStore('memory')
    const thread = await memoryStore?.getThreadById({ threadId: '...' })

    return new Response('Success')
  },
}
```
Warning

If init() isn't called, tables won't be created and storage operations will fail silently or throw errors.

Transactions & Consistency

Cloudflare D1 does not support interactive transactions; each individual statement is atomic. It does, however, execute batched statements atomically: a batch of statements submitted together is applied as a single, all-or-nothing unit of work.
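
The all-or-nothing behavior comes from D1's batch() API. The sketch below shows the shape of such a call against a minimal structural type; the table and column names are illustrative, not Mastra's actual schema:

```typescript
// Minimal structural slice of the D1 API used here.
interface D1Like {
  prepare(sql: string): { bind(...values: unknown[]): unknown }
  batch(statements: unknown[]): Promise<unknown[]>
}

// Both inserts are submitted as one batch, so D1 applies them as a single
// all-or-nothing unit: a failure in either statement rolls back both.
// Table and column names are illustrative only.
async function insertThreadWithFirstMessage(
  db: D1Like,
  threadId: string,
  body: string,
): Promise<unknown[]> {
  return db.batch([
    db.prepare('INSERT INTO threads (id) VALUES (?)').bind(threadId),
    db.prepare('INSERT INTO messages (thread_id, body) VALUES (?, ?)').bind(threadId, body),
  ])
}
```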

Table Creation & Migrations

Tables are created automatically when storage is initialized, and they can be isolated per environment using the tablePrefix option. Advanced schema changes, however, such as adding columns, changing data types, or modifying indexes, require manual migration and careful planning to avoid data loss.
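
For per-environment isolation, the tablePrefix option can be derived from the deployment environment. A minimal sketch, in which the environment names and prefix scheme are assumptions for illustration:

```typescript
// Map a deployment environment to a D1 table prefix. An empty prefix in
// production keeps table names clean; other environments get isolated
// copies of every table. The names here are illustrative.
function tablePrefixFor(environment: string): string {
  switch (environment) {
    case 'production':
      return ''
    case 'staging':
      return 'staging_'
    default:
      return 'dev_'
  }
}

// e.g. new D1Store({ binding: env.DB, tablePrefix: tablePrefixFor('staging') })
```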