# Cloudflare D1 Storage

The Cloudflare D1 storage implementation provides a serverless SQL database solution using Cloudflare D1, supporting relational operations and transactional consistency.

> **Observability Not Supported:** Cloudflare D1 storage **does not support the observability domain**. Traces from the `DefaultExporter` cannot be persisted to D1, and Mastra Studio's observability features won't work with D1 as your only storage provider. To enable observability, use [composite storage](https://mastra.ai/reference/storage/composite) to route observability data to a supported provider like ClickHouse or PostgreSQL.

> **Row Size Limit:** Cloudflare D1 enforces a **1 MiB maximum row size**. This limit can be exceeded when storing messages with base64-encoded attachments such as images. See [Handling large attachments](https://mastra.ai/docs/memory/storage) for workarounds, including uploading attachments to external storage.

## Installation

**npm**:

```bash
npm install @mastra/cloudflare-d1@latest
```

**pnpm**:

```bash
pnpm add @mastra/cloudflare-d1@latest
```

**Yarn**:

```bash
yarn add @mastra/cloudflare-d1@latest
```

**Bun**:

```bash
bun add @mastra/cloudflare-d1@latest
```

## Usage

### Using with Cloudflare Workers

When using `D1Store` in a Cloudflare Worker, you need to access the D1 binding from the worker's `env` parameter at runtime. The `D1Database` in your type definition is only for TypeScript type checking; the actual binding is provided by the Workers runtime.
```typescript
import { D1Store } from "@mastra/cloudflare-d1";
import { Mastra } from "@mastra/core";
import { CloudflareDeployer } from "@mastra/deployer-cloudflare";

type Env = {
  D1Database: D1Database; // TypeScript type definition
};

// Factory function to create Mastra with the D1 binding
function createMastra(env: Env) {
  const storage = new D1Store({
    binding: env.D1Database, // ✅ Access the actual binding from env
    tablePrefix: "dev_", // Optional: isolate tables per environment
  });

  return new Mastra({
    storage,
    deployer: new CloudflareDeployer({
      name: "my-worker",
      d1_databases: [
        {
          binding: "D1Database", // Must match the property name in the Env type
          database_name: "your-database-name",
          database_id: "your-database-id",
        },
      ],
    }),
  });
}

// Cloudflare Worker export
export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext) {
    const mastra = createMastra(env);
    // Your handler logic here
    return new Response("Hello from Mastra with D1!");
  },
};
```

> **Important: Understanding D1 Bindings:** In the `Env` type definition, `D1Database: D1Database` serves two purposes:
>
> - The **property name** (`D1Database`) must match the `binding` name in your `wrangler.toml`
> - The **type** (`: D1Database`) is from `@cloudflare/workers-types` for TypeScript type checking
>
> At runtime, Cloudflare Workers provides the actual D1 database instance via `env.D1Database`. You cannot use `D1Database` directly outside of the worker's context.
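Because of the 1 MiB row limit noted at the top of this page, it can help to screen oversized messages before persisting them. The sketch below is illustrative, not part of the `D1Store` API: `fitsInD1Row` is a hypothetical helper, and measuring the serialized JSON payload is only an approximation of how D1 accounts for row size.

```typescript
// Hypothetical pre-write guard for D1's 1 MiB row size limit.
// The JSON-size measurement is an approximation: D1 accounts for row
// size internally, so treat this as a screening heuristic only.
const D1_MAX_ROW_BYTES = 1 * 1024 * 1024; // 1 MiB

function rowSizeBytes(value: unknown): number {
  return new TextEncoder().encode(JSON.stringify(value)).length;
}

function fitsInD1Row(message: unknown): boolean {
  return rowSizeBytes(message) <= D1_MAX_ROW_BYTES;
}

// A message with a large base64 attachment can exceed the limit:
const small = { role: "user", content: "hello" };
const huge = { role: "user", content: "x".repeat(2 * 1024 * 1024) };

console.log(fitsInD1Row(small)); // true
console.log(fitsInD1Row(huge)); // false
```

Messages that fail the check are candidates for the external-storage workaround linked above, rather than being written to D1 directly.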
### Using with REST API

For non-Workers environments (Node.js, serverless functions, etc.), use the REST API approach:

```typescript
import { D1Store } from "@mastra/cloudflare-d1";

const storage = new D1Store({
  accountId: process.env.CLOUDFLARE_ACCOUNT_ID!, // Cloudflare Account ID
  databaseId: process.env.CLOUDFLARE_D1_DATABASE_ID!, // D1 Database ID
  apiToken: process.env.CLOUDFLARE_API_TOKEN!, // Cloudflare API Token
  tablePrefix: "dev_", // Optional: isolate tables per environment
});
```

### Wrangler Configuration

Add the D1 database binding to your `wrangler.toml`:

```toml
[[d1_databases]]
binding = "D1Database" # Must match the property name in your Env type
database_name = "your-database-name"
database_id = "your-database-id"
```

Or in `wrangler.jsonc`:

```jsonc
{
  "d1_databases": [
    {
      "binding": "D1Database",
      "database_name": "your-database-name",
      "database_id": "your-database-id"
    }
  ]
}
```

## Parameters

**binding?:** (`D1Database`): Cloudflare D1 Workers binding (for the Workers runtime)

**accountId?:** (`string`): Cloudflare Account ID (for the REST API)

**databaseId?:** (`string`): Cloudflare D1 Database ID (for the REST API)

**apiToken?:** (`string`): Cloudflare API Token (for the REST API)

**tablePrefix?:** (`string`): Optional prefix for all table names (useful for environment isolation)

## Additional Notes

### Schema Management

The storage implementation handles schema creation and updates automatically.
It creates the following tables:

- `threads`: Stores conversation threads
- `messages`: Stores individual messages
- `metadata`: Stores additional metadata for threads and messages

### Initialization

When you pass storage to the Mastra class, `init()` is called automatically before any storage operation:

```typescript
import { Mastra } from "@mastra/core";
import { D1Store } from "@mastra/cloudflare-d1";

type Env = {
  D1Database: D1Database;
};

// In a Cloudflare Worker
export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext) {
    const storage = new D1Store({
      binding: env.D1Database, // ✅ Use env.D1Database
    });

    const mastra = new Mastra({
      storage, // init() is called automatically
    });

    // Your handler logic here
    return new Response("Success");
  },
};
```

If you're using storage directly without Mastra, you must call `init()` explicitly to create the tables:

```typescript
import { D1Store } from "@mastra/cloudflare-d1";

type Env = {
  D1Database: D1Database;
};

// In a Cloudflare Worker
export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext) {
    const storage = new D1Store({
      id: "d1-storage",
      binding: env.D1Database, // ✅ Use env.D1Database
    });

    // Required when using storage directly
    await storage.init();

    // Access domain-specific stores via getStore()
    const memoryStore = await storage.getStore("memory");
    const thread = await memoryStore?.getThreadById({ threadId: "..." });

    return new Response("Success");
  },
};
```

> **Warning:** If `init()` is not called, tables won't be created and storage operations will fail silently or throw errors.

### Transactions & Consistency

Cloudflare D1 provides transactional guarantees: statements submitted together in a batch execute as a single, all-or-nothing unit of work.
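To make the all-or-nothing behavior concrete, here is a minimal in-memory sketch of batch semantics. It deliberately does not use the D1 API: `applyBatch` and the snapshot-and-commit logic are illustrative stand-ins for what the database does internally when a batch of statements is executed.

```typescript
// Illustrative in-memory model of all-or-nothing batch execution.
// This mock is not the D1 API; in a Worker you would submit prepared
// statements to the real binding instead.
type Op = (rows: string[]) => void;

function applyBatch(rows: string[], ops: Op[]): string[] {
  const snapshot = [...rows]; // work on a copy of the current state
  try {
    for (const op of ops) op(snapshot);
  } catch {
    return rows; // any failure discards the whole batch
  }
  return snapshot; // every op succeeded: commit the new state
}

const before = ["a"];

// All operations succeed, so the batch commits:
const committed = applyBatch(before, [(r) => r.push("b"), (r) => r.push("c")]);
console.log(JSON.stringify(committed)); // ["a","b","c"]

// One operation fails, so none of the batch is applied:
const rolledBack = applyBatch(before, [
  (r) => r.push("b"),
  () => {
    throw new Error("constraint violation");
  },
]);
console.log(JSON.stringify(rolledBack)); // ["a"]
```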
### Table Creation & Migrations

Tables are created automatically when storage is initialized (and can be isolated per environment using the `tablePrefix` option). Advanced schema changes, such as adding columns, changing data types, or modifying indexes, require manual migration and careful planning to avoid data loss.
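One way to apply the per-environment isolation described above is to derive `tablePrefix` from the deployment environment. This is a sketch of one possible convention: the environment names and prefix strings below are assumptions for illustration, not a Mastra or D1 requirement.

```typescript
// Hypothetical helper mapping a deployment environment to a table prefix.
// The prefixes ("dev_", "stg_", "prod_") are illustrative choices only.
type AppEnv = "development" | "staging" | "production";

function tablePrefixFor(env: AppEnv): string {
  switch (env) {
    case "development":
      return "dev_";
    case "staging":
      return "stg_";
    case "production":
      return "prod_"; // or "" if production should use unprefixed tables
  }
}

console.log(tablePrefixFor("development")); // prints dev_

// Usage with D1Store (inside a Worker):
// const storage = new D1Store({
//   binding: env.D1Database,
//   tablePrefix: tablePrefixFor("development"),
// });
```

Keeping the prefixes in one function means each environment writes to its own set of tables in the same D1 database, which keeps manual migrations scoped to one environment at a time.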