# Default Exporter
The DefaultExporter persists traces to your configured storage backend, making them accessible through Studio. It's automatically enabled when using the default observability configuration and requires no external services.
Observability data can quickly overwhelm general-purpose databases in production. For high-traffic applications, we recommend using ClickHouse for the observability storage domain via composite storage. See Production Recommendations for details.
## Configuration

### Prerequisites
- Storage Backend: Configure a storage provider (libSQL, PostgreSQL, etc.)
- Studio: Install for viewing traces locally
### Basic Setup

```typescript
import { Mastra } from "@mastra/core";
import { Observability, DefaultExporter } from "@mastra/observability";
import { LibSQLStore } from "@mastra/libsql";

export const mastra = new Mastra({
  storage: new LibSQLStore({
    id: "mastra-storage",
    url: "file:./mastra.db", // Required for trace persistence
  }),
  observability: new Observability({
    configs: {
      local: {
        serviceName: "my-service",
        exporters: [new DefaultExporter()],
      },
    },
  }),
});
```
### Recommended Configuration

Include DefaultExporter in your observability configuration:

```typescript
import { Mastra } from "@mastra/core";
import {
  Observability,
  DefaultExporter,
  CloudExporter,
  SensitiveDataFilter,
} from "@mastra/observability";
import { LibSQLStore } from "@mastra/libsql";

export const mastra = new Mastra({
  storage: new LibSQLStore({
    id: "mastra-storage",
    url: "file:./mastra.db",
  }),
  observability: new Observability({
    configs: {
      default: {
        serviceName: "mastra",
        exporters: [
          new DefaultExporter(), // Persists traces to storage for Mastra Studio
          new CloudExporter(), // Sends traces to Mastra Cloud (requires MASTRA_CLOUD_ACCESS_TOKEN)
        ],
        spanOutputProcessors: [new SensitiveDataFilter()],
      },
    },
  }),
});
```
## Viewing Traces

### Studio
Access your traces through Studio:
- Start Studio
- Navigate to Observability
- Filter and search your local traces
- Inspect detailed span information
## Tracing Strategies
DefaultExporter automatically selects the optimal tracing strategy based on your storage provider. You can also override this selection if needed.
### Available Strategies

| Strategy | Description | Use Case |
|---|---|---|
| `realtime` | Process each event immediately | Development, debugging, low traffic |
| `batch-with-updates` | Buffer events and batch-write with full span lifecycle support | Production (low volume) |
| `insert-only` | Only process completed spans, ignoring updates | Production (high volume) |
### Strategy Configuration

```typescript
new DefaultExporter({
  strategy: "auto", // Default: let the storage provider decide
  // or explicitly set:
  // strategy: "realtime" | "batch-with-updates" | "insert-only"

  // Batching configuration (applies to both batch-with-updates and insert-only)
  maxBatchSize: 1000, // Max spans per batch
  maxBatchWaitMs: 5000, // Max wait before flushing
  maxBufferSize: 10000, // Max spans to buffer
});
```
## Storage Provider Support
Different storage providers support different tracing strategies. Some providers support observability for production workloads, while others are intended primarily for local development.
With `strategy: "auto"`, the DefaultExporter selects the optimal strategy for your storage provider. If you explicitly set a strategy the provider doesn't support, an error is raised.
### Providers with Observability Support

| Storage Provider | Preferred Strategy | Supported Strategies | Recommended Use |
|---|---|---|---|
| ClickHouse (`@mastra/clickhouse`) | `insert-only` | `insert-only` | Production (high volume) |
| PostgreSQL | `batch-with-updates` | `batch-with-updates`, `insert-only` | Production (low volume) |
| MSSQL | `batch-with-updates` | `batch-with-updates`, `insert-only` | Production (low volume) |
| MongoDB | `batch-with-updates` | `batch-with-updates`, `insert-only` | Production (low volume) |
| libSQL | `batch-with-updates` | `batch-with-updates`, `insert-only` | Default storage, good for development |
### Providers without Observability Support

Some storage providers (such as Convex and DynamoDB) do not support the observability domain. If you're using one of them and need observability, use composite storage to route observability data to a supported provider.
### Strategy Benefits

- `realtime`: Immediate visibility; best for debugging
- `batch-with-updates`: 10-100x throughput improvement with full span lifecycle support
- `insert-only`: An additional ~70% reduction in database operations; well suited to analytics workloads
## Production Recommendations
Observability data grows quickly in production environments. A single agent interaction can generate hundreds of spans, and high-traffic applications can produce thousands of traces per day. Most general-purpose databases aren't optimized for this write-heavy, append-only workload.
### Recommended: ClickHouse for High-Volume Production
ClickHouse is a columnar database designed for high-volume analytics workloads. It's the recommended choice for production observability because:
- Optimized for writes: Handles millions of inserts per second
- Efficient compression: Reduces storage costs for trace data
- Fast queries: Columnar storage enables quick trace lookups and aggregations
- Time-series native: Built-in support for time-based data retention and partitioning
### Using Composite Storage
If you're using a provider without observability support (like Convex or DynamoDB) or want to optimize performance, use composite storage to route observability data to ClickHouse while keeping other data in your primary database.
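As a sketch, assuming the shape described in the Composite Storage docs — the `stores`/`domains` option names and the `ClickhouseStore` export and constructor arguments below are illustrative assumptions, not the confirmed API:

```typescript
import { Mastra } from "@mastra/core";
import { Observability, DefaultExporter } from "@mastra/observability";
import { LibSQLStore } from "@mastra/libsql";
import { ClickhouseStore } from "@mastra/clickhouse"; // export name assumed

export const mastra = new Mastra({
  // Hypothetical composite-storage config: route the write-heavy
  // observability domain to ClickHouse, keep everything else in libSQL.
  storage: {
    stores: {
      primary: new LibSQLStore({ id: "mastra-storage", url: "file:./mastra.db" }),
      traces: new ClickhouseStore({ url: process.env.CLICKHOUSE_URL }),
    },
    domains: {
      default: "primary",
      observability: "traces",
    },
  },
  observability: new Observability({
    configs: {
      default: {
        serviceName: "mastra",
        exporters: [new DefaultExporter()],
      },
    },
  }),
});
```

With `strategy: "auto"`, the exporter would select `insert-only` here, since ClickHouse backs the observability domain.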
## Batching Behavior

### Flush Triggers

For both batch strategies (`batch-with-updates` and `insert-only`), traces are flushed to storage when any of these conditions is met:

- Size trigger: the buffer reaches `maxBatchSize` spans
- Time trigger: `maxBatchWaitMs` has elapsed since the first buffered event
- Emergency flush: the buffer approaches the `maxBufferSize` limit
- Shutdown: all pending events are force-flushed
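The trigger logic can be modeled with a small sketch (illustrative only; the class and method names here are hypothetical, not the actual DefaultExporter internals):

```typescript
// Illustrative model of the flush triggers: size, time, and emergency.
type FlushReason = "size" | "time" | "emergency" | null;

class SpanBuffer {
  private spans: unknown[] = [];
  private firstEventAt: number | null = null;

  constructor(
    private maxBatchSize = 1000,
    private maxBatchWaitMs = 5000,
    private maxBufferSize = 10000,
  ) {}

  // Record a span and report whether a flush should fire now.
  add(span: unknown, now: number): FlushReason {
    this.spans.push(span);
    this.firstEventAt ??= now;
    return this.flushReason(now);
  }

  flushReason(now: number): FlushReason {
    if (this.spans.length >= this.maxBufferSize) return "emergency";
    if (this.spans.length >= this.maxBatchSize) return "size";
    if (this.firstEventAt !== null && now - this.firstEventAt >= this.maxBatchWaitMs) {
      return "time";
    }
    return null;
  }
}
```

The emergency check runs first so a buffer nearing `maxBufferSize` flushes even if the batch-size and time conditions haven't been reached.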
## Error Handling
The DefaultExporter includes robust error handling for production use:
- Retry Logic: Exponential backoff (500ms, 1s, 2s, 4s)
- Transient Failures: Automatic retry with backoff
- Persistent Failures: Drop batch after 4 failed attempts
- Buffer Overflow: Prevent memory issues during storage outages
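The retry schedule doubles from a 500 ms base until the batch is dropped after the fourth failed attempt. A minimal sketch of that policy (hypothetical helper names, not the exporter's actual internals):

```typescript
// Retry schedule from the list above: 500ms, 1s, 2s, 4s, then drop.
const BASE_DELAY_MS = 500;
const MAX_ATTEMPTS = 4;

// attempt is 0-based: 0 -> 500, 1 -> 1000, 2 -> 2000, 3 -> 4000
function backoffDelayMs(attempt: number): number {
  return BASE_DELAY_MS * 2 ** attempt;
}

// Returns true if the write eventually succeeded; false means the
// caller should drop the batch (persistent failure).
async function writeWithRetry(write: () => Promise<void>): Promise<boolean> {
  for (let attempt = 0; attempt < MAX_ATTEMPTS; attempt++) {
    try {
      await write();
      return true;
    } catch {
      if (attempt < MAX_ATTEMPTS - 1) {
        await new Promise((resolve) => setTimeout(resolve, backoffDelayMs(attempt)));
      }
    }
  }
  return false;
}
```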
## Configuration Examples

```typescript
// Zero config - recommended for most users
new DefaultExporter();

// Development override
new DefaultExporter({
  strategy: "realtime", // Immediate visibility for debugging
});

// High-throughput production
new DefaultExporter({
  maxBatchSize: 2000, // Larger batches
  maxBatchWaitMs: 10000, // Wait longer to fill batches
  maxBufferSize: 50000, // Handle longer outages
});

// Low-latency production
new DefaultExporter({
  maxBatchSize: 100, // Smaller batches
  maxBatchWaitMs: 1000, // Flush quickly
});
```
## Related
- Tracing Overview
- CloudExporter
- Composite Storage - Combine multiple storage providers
- Storage Configuration