
Default Exporter

The DefaultExporter persists traces to your configured storage backend, making them accessible through Studio. It's automatically enabled when using the default observability configuration and requires no external services.

Production Observability

Observability data can quickly overwhelm general-purpose databases in production. For high-traffic applications, we recommend using ClickHouse for the observability storage domain via composite storage. See Production Recommendations for details.

Configuration

Prerequisites

  1. Storage Backend: Configure a storage provider (libSQL, PostgreSQL, etc.)
  2. Studio: Install for viewing traces locally

Basic Setup

src/mastra/index.ts
import { Mastra } from "@mastra/core";
import { Observability, DefaultExporter } from "@mastra/observability";
import { LibSQLStore } from "@mastra/libsql";

export const mastra = new Mastra({
  storage: new LibSQLStore({
    id: "mastra-storage",
    url: "file:./mastra.db", // Required for trace persistence
  }),
  observability: new Observability({
    configs: {
      local: {
        serviceName: "my-service",
        exporters: [new DefaultExporter()],
      },
    },
  }),
});

A more complete configuration pairs DefaultExporter with CloudExporter and a SensitiveDataFilter span output processor:

import { Mastra } from "@mastra/core";
import {
  Observability,
  DefaultExporter,
  CloudExporter,
  SensitiveDataFilter,
} from "@mastra/observability";
import { LibSQLStore } from "@mastra/libsql";

export const mastra = new Mastra({
  storage: new LibSQLStore({
    id: "mastra-storage",
    url: "file:./mastra.db",
  }),
  observability: new Observability({
    configs: {
      default: {
        serviceName: "mastra",
        exporters: [
          new DefaultExporter(), // Persists traces to storage for Mastra Studio
          new CloudExporter(), // Sends traces to Mastra Cloud (requires MASTRA_CLOUD_ACCESS_TOKEN)
        ],
        spanOutputProcessors: [
          new SensitiveDataFilter(),
        ],
      },
    },
  }),
});

Viewing Traces

Studio

Access your traces through Studio:

  1. Start Studio
  2. Navigate to Observability
  3. Filter and search your local traces
  4. Inspect detailed span information

Tracing Strategies

DefaultExporter automatically selects the optimal tracing strategy based on your storage provider. You can also override this selection if needed.

Available Strategies

| Strategy | Description | Use Case |
| --- | --- | --- |
| realtime | Process each event immediately | Development, debugging, low traffic |
| batch-with-updates | Buffer events and batch-write with full lifecycle support | Low-volume production |
| insert-only | Only process completed spans, ignore updates | High-volume production |

Strategy Configuration

new DefaultExporter({
  strategy: "auto", // Default - let the storage provider decide
  // or set explicitly:
  // strategy: "realtime" | "batch-with-updates" | "insert-only"

  // Batching configuration (applies to both batch-with-updates and insert-only)
  maxBatchSize: 1000, // Max spans per batch
  maxBatchWaitMs: 5000, // Max wait before flushing
  maxBufferSize: 10000, // Max spans to buffer
});

Storage Provider Support

Different storage providers support different tracing strategies. Some providers support observability for production workloads, while others are intended primarily for local development.

With strategy set to "auto" (the default), DefaultExporter selects the preferred strategy for your storage provider. If you explicitly set a strategy the provider doesn't support, you will get an error. An example of overriding the preferred strategy follows the table below.

Providers with Observability Support

| Storage Provider | Preferred Strategy | Supported Strategies | Recommended Use |
| --- | --- | --- | --- |
| ClickHouse (@mastra/clickhouse) | insert-only | insert-only | Production (high volume) |
| PostgreSQL | batch-with-updates | batch-with-updates, insert-only | Production (low volume) |
| MSSQL | batch-with-updates | batch-with-updates, insert-only | Production (low volume) |
| MongoDB | batch-with-updates | batch-with-updates, insert-only | Production (low volume) |
| libSQL | batch-with-updates | batch-with-updates, insert-only | Default storage, good for development |
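
For example, PostgreSQL prefers batch-with-updates but also supports insert-only, so you can opt into insert-only when write volume matters more than live span updates. The sketch below is a minimal illustration; the PostgresStore import from @mastra/pg and its constructor options are assumptions, so verify them against the PostgreSQL storage docs.

import { Mastra } from "@mastra/core";
import { Observability, DefaultExporter } from "@mastra/observability";
import { PostgresStore } from "@mastra/pg"; // assumed export name; check the PostgreSQL storage docs

export const mastra = new Mastra({
  // Constructor options below are assumptions; see the PostgreSQL storage docs.
  storage: new PostgresStore({
    id: "mastra-storage",
    connectionString: process.env.DATABASE_URL!,
  }),
  observability: new Observability({
    configs: {
      default: {
        serviceName: "my-service",
        exporters: [
          // "auto" would select batch-with-updates for PostgreSQL;
          // opt into insert-only when write volume is the priority.
          new DefaultExporter({ strategy: "insert-only" }),
        ],
      },
    },
  }),
});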

Providers without Observability Support

Some storage providers (for example, Convex and DynamoDB) do not support the observability domain. If you're using one of these providers and need observability, use composite storage to route observability data to a supported provider.

Strategy Benefits

  • realtime: Immediate visibility; best for debugging
  • batch-with-updates: 10-100x higher throughput than realtime, with full span lifecycle support
  • insert-only: A further ~70% reduction in database operations compared to batch-with-updates; well suited to high-volume analytics workloads

Production Recommendations

Observability data grows quickly in production environments. A single agent interaction can generate hundreds of spans, and high-traffic applications can produce thousands of traces per day. Most general-purpose databases aren't optimized for this write-heavy, append-only workload.

ClickHouse is a columnar database designed for high-volume analytics workloads. It's the recommended choice for production observability because:

  • Optimized for writes: Handles millions of inserts per second
  • Efficient compression: Reduces storage costs for trace data
  • Fast queries: Columnar storage enables quick trace lookups and aggregations
  • Time-series native: Built-in support for time-based data retention and partitioning

Using Composite Storage

If you're using a provider without observability support (like Convex or DynamoDB) or want to optimize performance, use composite storage to route observability data to ClickHouse while keeping other data in your primary database.
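
As a rough sketch, these are the two stores you would combine: libSQL (or your primary database) for everything else, and ClickHouse for the observability domain. The ClickhouseStore export name and its constructor options are assumptions, and the composite wiring itself is deliberately omitted; follow the composite storage guide for the exact API.

import { LibSQLStore } from "@mastra/libsql";
import { ClickhouseStore } from "@mastra/clickhouse"; // assumed export name; check the ClickHouse storage docs

// Primary store for agents, memory, workflows, and other domains.
const primaryStore = new LibSQLStore({
  id: "mastra-storage",
  url: "file:./mastra.db",
});

// Dedicated store for the observability domain (traces and spans).
// Constructor options are assumptions; see the ClickHouse provider docs.
const observabilityStore = new ClickhouseStore({
  url: process.env.CLICKHOUSE_URL!,
  username: process.env.CLICKHOUSE_USERNAME!,
  password: process.env.CLICKHOUSE_PASSWORD!,
});

// Combine the two with composite storage so the observability domain is routed
// to ClickHouse while everything else stays in the primary store. The wiring
// API is not shown here; see the composite storage documentation.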

Batching Behavior

Flush Triggers

For both batch strategies (batch-with-updates and insert-only), traces are flushed to storage when any of these conditions is met (a tuning sketch follows the list):

  1. Size trigger: Buffer reaches maxBatchSize spans
  2. Time trigger: maxBatchWaitMs elapsed since first event
  3. Emergency flush: Buffer approaches maxBufferSize limit
  4. Shutdown: Force flush all pending events
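
As a rough illustration of how these triggers interact (the traffic figure is an assumption, not a benchmark): at about 200 spans per second, a maxBatchSize of 1000 fills in roughly 5 seconds, maxBatchWaitMs bounds latency during quiet periods, and maxBufferSize leaves headroom for storage outages.

new DefaultExporter({
  // Size trigger: at ~200 spans/s this batch fills (and flushes) about every 5s.
  maxBatchSize: 1000,
  // Time trigger: during quiet periods, flush no later than 5s after the first buffered event.
  maxBatchWaitMs: 5000,
  // Emergency flush / overflow ceiling: roughly 50s of headroom at ~200 spans/s.
  maxBufferSize: 10000,
});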

Error Handling

The DefaultExporter includes robust error handling for production use (a simplified retry sketch follows the list):

  • Retry Logic: Exponential backoff (500ms, 1s, 2s, 4s)
  • Transient Failures: Automatic retry with backoff
  • Persistent Failures: Drop batch after 4 failed attempts
  • Buffer Overflow: Prevent memory issues during storage outages
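
The following is a minimal sketch of the retry pattern described above, not the actual DefaultExporter implementation; writeBatch is a hypothetical stand-in for the underlying storage write.

async function flushWithRetry(writeBatch: () => Promise<void>): Promise<boolean> {
  const backoffMs = [500, 1000, 2000, 4000]; // exponential backoff between attempts

  for (let attempt = 0; attempt < backoffMs.length; attempt++) {
    try {
      await writeBatch();
      return true; // batch persisted
    } catch {
      // Transient failure: back off before retrying.
      await new Promise((resolve) => setTimeout(resolve, backoffMs[attempt]));
    }
  }

  // Persistent failure: drop the batch to avoid unbounded memory growth during outages.
  return false;
}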

Configuration Examples

// Zero config - recommended for most users
new DefaultExporter();

// Development override
new DefaultExporter({
  strategy: "realtime", // Immediate visibility for debugging
});

// High-throughput production
new DefaultExporter({
  maxBatchSize: 2000, // Larger batches
  maxBatchWaitMs: 10000, // Wait longer to fill batches
  maxBufferSize: 50000, // Handle longer outages
});

// Low-latency production
new DefaultExporter({
  maxBatchSize: 100, // Smaller batches
  maxBatchWaitMs: 1000, // Flush quickly
});