Tracing
Tracing provides specialized monitoring and debugging for the AI-related operations in your application. When enabled, Mastra automatically creates traces for agent runs, LLM generations, tool calls, and workflow steps with AI-specific context and metadata.
Unlike traditional application tracing, Tracing focuses specifically on understanding your AI pipeline — capturing token usage, model parameters, tool execution details, and conversation flows. This makes it easier to debug issues, optimize performance, and understand how your AI systems behave in production.
How It Works
Traces are created in three steps:
- Configure exporters → send trace data to observability platforms
- Set sampling strategies → control which traces are collected
- Run agents and workflows → Mastra auto-instruments them with Tracing
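Once a configuration like the one below is in place, no extra instrumentation is needed. A minimal sketch (assuming an agent named "myAgent" is registered on the mastra instance):

const agent = mastra.getAgent("myAgent");

// This call is traced automatically: an agent-run root span is created,
// with child spans for LLM calls and tool executions.
const result = await agent.generate("Hello");
console.log("Trace ID:", result.traceId); // returned when tracing is enabled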
Configuration
Basic Config
import { Mastra } from "@mastra/core";
import { LibSQLStore } from "@mastra/libsql";
import {
  Observability,
  DefaultExporter,
  CloudExporter,
  SensitiveDataFilter,
} from "@mastra/observability";

export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      default: {
        serviceName: "mastra",
        exporters: [
          new DefaultExporter(), // Persists traces to storage for Mastra Studio
          new CloudExporter(), // Sends traces to Mastra Cloud (if MASTRA_CLOUD_ACCESS_TOKEN is set)
        ],
        spanOutputProcessors: [
          new SensitiveDataFilter(), // Redacts sensitive data like passwords, tokens, keys
        ],
      },
    },
  }),
  storage: new LibSQLStore({
    id: "mastra-storage",
    url: "file:./mastra.db", // Storage is required for tracing
  }),
});
This configuration includes:
- Service Name: "mastra" identifies your service in traces
- Sampling: "always" by default (100% of traces)
- Exporters:
  - DefaultExporter - Persists traces to your configured storage for Mastra Studio
  - CloudExporter - Sends traces to Mastra Cloud (requires MASTRA_CLOUD_ACCESS_TOKEN)
- Span Output Processors:
  - SensitiveDataFilter - Redacts sensitive fields
Exporters
Exporters determine where your trace data is sent and how it's stored. They integrate with your existing observability stack, support data residency requirements, and can be optimized for cost and performance. You can use multiple exporters simultaneously to send the same trace data to different destinations — for example, storing detailed traces locally for debugging while sending sampled data to a cloud provider for production monitoring.
Internal Exporters
Mastra provides two built-in exporters:
- Default - Persists traces to local storage for viewing in Studio
- Cloud - Sends traces to Mastra Cloud for production monitoring and collaboration
External Exporters
In addition to the internal exporters, Mastra supports integration with popular observability platforms. These exporters allow you to leverage your existing monitoring infrastructure and take advantage of platform-specific features like alerting, dashboards, and correlation with other application metrics.
- Arize - Exports traces to Arize Phoenix or Arize AX using OpenInference semantic conventions
- Braintrust - Exports traces to Braintrust's eval and observability platform
- Datadog - Sends traces to Datadog APM via OTLP for full-stack observability with AI tracing
- Laminar - Sends traces to Laminar via OTLP/HTTP (protobuf) with Laminar-native span attributes + scorer support
- Langfuse - Sends traces to the Langfuse open-source LLM engineering platform
- LangSmith - Pushes traces into LangSmith's observability and evaluation toolkit
- PostHog - Sends traces to PostHog for AI analytics and product insights
- Sentry - Sends traces to Sentry for AI tracing and monitoring using OpenTelemetry semantic conventions
- OpenTelemetry - Delivers traces to any OpenTelemetry-compatible observability system
  - Supports: Dash0, MLflow, New Relic, SigNoz, Traceloop, Zipkin, and others!
Bridges
Bridges provide bidirectional integration with external tracing systems. Unlike exporters that send trace data to external platforms, bridges create native spans in external systems and inherit context from them. This enables Mastra operations to participate in existing distributed traces.
- OpenTelemetry Bridge - Integrate with existing OpenTelemetry infrastructure
Bridges vs Exporters
| Feature | Bridges | Exporters |
|---|---|---|
| Creates native spans in external systems | Yes | No |
| Inherits context from external systems | Yes | No |
| Sends data to backends | Via external SDK | Directly |
| Use case | Existing distributed tracing | Standalone Mastra tracing |
You can use both together — a bridge for context propagation and exporters to send traces to additional destinations.
Sampling Strategies
Sampling allows you to control which traces are collected, helping you balance between observability needs and resource costs. In production environments with high traffic, collecting every trace can be expensive and unnecessary. Sampling strategies let you capture a representative subset of traces while ensuring you don't miss critical information about errors or important operations.
Mastra supports four sampling strategies:
Always Sample
Collects 100% of traces. Best for development, debugging, or low-traffic scenarios where you need complete visibility.
sampling: {
  type: "always",
}
Never Sample
Disables tracing entirely. Useful for specific environments where tracing adds no value or when you need to temporarily disable tracing without removing configuration.
sampling: {
  type: "never",
}
Ratio-Based Sampling
Randomly samples a percentage of traces. Ideal for production environments where you want statistical insights without the cost of full tracing. The probability value ranges from 0 (no traces) to 1 (all traces).
sampling: {
  type: 'ratio',
  probability: 0.1, // Sample 10% of traces
}
Custom Sampling
Implements your own sampling logic based on request context, metadata, or business rules. Perfect for complex scenarios like sampling based on user tier, request type, or error conditions.
sampling: {
  type: 'custom',
  sampler: (options) => {
    // Sample premium users at higher rate
    if (options?.metadata?.userTier === 'premium') {
      return Math.random() < 0.5; // 50% sampling
    }
    // Default 1% sampling for others
    return Math.random() < 0.01;
  },
}
Complete Example
import { Mastra } from "@mastra/core";
import { Observability, DefaultExporter } from "@mastra/observability";

export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      "10_percent": {
        serviceName: "my-service",
        // Sample 10% of traces
        sampling: {
          type: "ratio",
          probability: 0.1,
        },
        exporters: [new DefaultExporter()],
      },
    },
  }),
});
Multi-Config Setup
Complex applications often require different tracing configurations for different scenarios. You might want detailed traces with full sampling during development, sampled traces sent to external providers in production, and specialized configurations for specific features or customer segments. The configSelector function enables dynamic configuration selection at runtime, allowing you to route traces based on request context, environment variables, feature flags, or any custom logic.
This approach is particularly valuable when:
- Running A/B tests with different observability requirements
- Providing enhanced debugging for specific customers or support cases
- Gradually rolling out new tracing providers without affecting existing monitoring
- Optimizing costs by using different sampling rates for different request types
- Maintaining separate trace streams for compliance or data residency requirements
Note that only a single config is used for any given execution, but a single config can send data to multiple exporters simultaneously.
Dynamic Configuration Selection
Use configSelector to choose the appropriate tracing configuration based on request context:
export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      langfuse: {
        serviceName: "langfuse-service",
        exporters: [langfuseExporter],
      },
      braintrust: {
        serviceName: "braintrust-service",
        exporters: [braintrustExporter],
      },
      debug: {
        serviceName: "debug-service",
        sampling: { type: "always" },
        exporters: [new DefaultExporter()],
      },
    },
    configSelector: (context, availableTracers) => {
      // Use debug config for support requests
      if (context.requestContext?.get("supportMode")) {
        return "debug";
      }
      // Route specific customers to different providers
      // (premiumCustomers is assumed to be defined elsewhere)
      const customerId = context.requestContext?.get("customerId");
      if (customerId && premiumCustomers.includes(customerId)) {
        return "braintrust";
      }
      // Route specific requests to langfuse
      if (context.requestContext?.get("useExternalTracing")) {
        return "langfuse";
      }
      // Fall back to a default config instead of failing the request
      return "debug";
    },
  }),
});
Environment-Based Configuration
A common pattern is to select configurations based on deployment environment:
export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      development: {
        serviceName: "my-service-dev",
        sampling: { type: "always" },
        exporters: [new DefaultExporter()],
      },
      staging: {
        serviceName: "my-service-staging",
        sampling: { type: "ratio", probability: 0.5 },
        exporters: [langfuseExporter],
      },
      production: {
        serviceName: "my-service-prod",
        sampling: { type: "ratio", probability: 0.01 },
        exporters: [cloudExporter, langfuseExporter],
      },
    },
    configSelector: (context, availableTracers) => {
      const env = process.env.NODE_ENV || "development";
      return env;
    },
  }),
});
Common Configuration Patterns & Troubleshooting
Maintaining Studio and Cloud Access
When adding external exporters, include DefaultExporter and CloudExporter to maintain access to Studio and Mastra Cloud:
import { Mastra } from "@mastra/core";
import {
  Observability,
  DefaultExporter,
  CloudExporter,
  SensitiveDataFilter,
} from "@mastra/observability";
import { ArizeExporter } from "@mastra/arize";

export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      production: {
        serviceName: "my-service",
        exporters: [
          new ArizeExporter({
            endpoint: process.env.PHOENIX_ENDPOINT,
            apiKey: process.env.PHOENIX_API_KEY,
          }),
          new DefaultExporter(), // Keep Studio access
          new CloudExporter(), // Keep Cloud access
        ],
        spanOutputProcessors: [new SensitiveDataFilter()],
      },
    },
  }),
});
This configuration sends traces to all three destinations simultaneously:
- Arize Phoenix/AX for external observability
- DefaultExporter for Studio
- CloudExporter for Mastra Cloud dashboard
Remember: A single trace can be sent to multiple exporters. You don't need separate configs for each exporter unless you want different sampling rates or processors.
Adding Custom Metadata
Custom metadata allows you to attach additional context to your traces, making it easier to debug issues and understand system behavior in production. Metadata can include business logic details, performance metrics, user context, or any information that helps you understand what happened during execution.
You can add metadata to any span using the tracing context:
execute: async (inputData, context) => {
  const startTime = Date.now();
  const response = await fetch(inputData.endpoint);

  // Add custom metadata to the current span
  context?.tracingContext.currentSpan?.update({
    metadata: {
      apiStatusCode: response.status,
      endpoint: inputData.endpoint,
      responseTimeMs: Date.now() - startTime,
      userTier: inputData.userTier,
      region: process.env.AWS_REGION,
    },
  });

  return await response.json();
};
Metadata set here will be shown in all configured exporters.
Automatic Metadata from RequestContext
Instead of manually adding metadata to each span, you can configure Mastra to automatically extract values from RequestContext and attach them as metadata to all spans in a trace. This is useful for consistently tracking user identifiers, environment information, feature flags, or any request-scoped data across your entire trace.
Configuration-Level Extraction
Define which RequestContext keys to extract in your tracing configuration. These keys will be automatically included as metadata for all spans created with this configuration:
export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      default: {
        serviceName: "my-service",
        requestContextKeys: ["userId", "environment", "tenantId"],
        exporters: [new DefaultExporter()],
      },
    },
  }),
});
Now when you execute agents or workflows with a RequestContext, these values are automatically extracted:
const requestContext = new RequestContext();
requestContext.set("userId", "user-123");
requestContext.set("environment", "production");
requestContext.set("tenantId", "tenant-456");
// All spans in this trace automatically get userId, environment, and tenantId metadata
const result = await agent.generate("Hello", {
  requestContext,
});
Per-Request Additions
You can add trace-specific keys using tracingOptions.requestContextKeys. These are merged with the configuration-level keys:
const requestContext = new RequestContext();
requestContext.set("userId", "user-123");
requestContext.set("environment", "production");
requestContext.set("experimentId", "exp-789");
const result = await agent.generate("Hello", {
  requestContext,
  tracingOptions: {
    requestContextKeys: ["experimentId"], // Adds to configured keys
  },
});
// All spans now have: userId, environment, AND experimentId
Nested Value Extraction
Use dot notation to extract nested values from RequestContext:
export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      default: {
        requestContextKeys: ["user.id", "session.data.experimentId"],
        exporters: [new DefaultExporter()],
      },
    },
  }),
});
const requestContext = new RequestContext();
requestContext.set("user", { id: "user-456", name: "John Doe" });
requestContext.set("session", { data: { experimentId: "exp-999" } });
// Metadata will include: { user: { id: 'user-456' }, session: { data: { experimentId: 'exp-999' } } }
How It Works
- TraceState Computation: At the start of a trace (root span creation), Mastra computes which keys to extract by merging configuration-level and per-request keys
- Automatic Extraction: Root spans (agent runs, workflow executions) automatically extract metadata from RequestContext
- Child Span Extraction: Child spans can also extract metadata if you pass requestContext when creating them
- Metadata Precedence: Explicit metadata passed to span options always takes precedence over extracted metadata, as the sketch below shows
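A minimal sketch of the precedence rule, assuming userId is one of the configured requestContextKeys from the earlier example:

const requestContext = new RequestContext();
requestContext.set("userId", "user-123"); // would be extracted automatically

const result = await agent.generate("Hello", {
  requestContext,
  tracingOptions: {
    // Explicit metadata wins over the value extracted from RequestContext
    metadata: { userId: "user-override" },
  },
});
// Root span metadata: { userId: "user-override" }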
Adding Tags to Traces
Tags are string labels that help you categorize and filter traces. Unlike metadata (which contains structured key-value data), tags are simple strings designed for quick filtering and organization.
Use tracingOptions.tags to add tags when executing agents or workflows:
// With agents
const result = await agent.generate("Hello", {
  tracingOptions: {
    tags: ["production", "experiment-v2", "user-request"],
  },
});

// With workflows
const run = await mastra.getWorkflow("myWorkflow").createRun();
const result = await run.start({
  inputData: { data: "process this" },
  tracingOptions: {
    tags: ["batch-processing", "priority-high"],
  },
});
How Tags Work
- Root span only: Tags are applied only to the root span of a trace (the agent run or workflow run span)
- Widely supported: Tags are supported by most exporters for filtering and searching traces:
  - Braintrust - Native tags field
  - Langfuse - Native tags field on traces
  - ArizeExporter - tag.tags OpenInference attribute
  - OtelExporter - mastra.tags span attribute
  - OtelBridge - mastra.tags span attribute
- Combinable with metadata: You can use both tags and metadata in the same tracingOptions:
const result = await agent.generate([{ role: "user", content: "Analyze this" }], {
  tracingOptions: {
    tags: ["production", "analytics"],
    metadata: { userId: "user-123", experimentId: "exp-456" },
  },
});
Common Tag Patterns
- Environment: "production", "staging", "development"
- Feature flags: "feature-x-enabled", "beta-user"
- Request types: "user-request", "batch-job", "scheduled-task"
- Priority levels: "priority-high", "priority-low"
- Experiments: "experiment-v1", "control-group", "treatment-a"
Hiding Sensitive Input/Output
When processing sensitive data, you may want to prevent input and output values from being logged to your observability platforms. Use hideInput and hideOutput in tracingOptions to exclude this data from all spans in a trace:
// Hide input data (e.g., user credentials, PII)
const result = await agent.generate([{ role: "user", content: "Process this sensitive data" }], {
  tracingOptions: {
    hideInput: true, // Input will be hidden from all spans
  },
});

// Hide output data (e.g., generated secrets, confidential results)
const result = await agent.generate([{ role: "user", content: "Generate API keys" }], {
  tracingOptions: {
    hideOutput: true, // Output will be hidden from all spans
  },
});

// Hide both input and output
const result = await agent.generate([{ role: "user", content: "Handle confidential request" }], {
  tracingOptions: {
    hideInput: true,
    hideOutput: true,
  },
});
How It Works
- Trace-wide effect: When set on the root span, these options apply to all child spans in the trace (tool calls, model generations, etc.)
- Export-time filtering: The data is still available internally during execution but is excluded when spans are exported to observability platforms
- Combinable with other options: You can use hideInput/hideOutput alongside tags, metadata, and other tracingOptions:
const result = await agent.generate([{ role: "user", content: "Sensitive operation" }], {
  tracingOptions: {
    hideInput: true,
    hideOutput: true,
    tags: ["sensitive-operation", "pii-handling"],
    metadata: { operationType: "credential-processing" },
  },
});
For more granular control over sensitive data, consider using the Sensitive Data Filter processor, which can redact specific fields (like passwords, tokens, and keys) while preserving the rest of the input/output.
Child Spans and Metadata Extraction
When creating child spans within tools or workflow steps, you can pass the requestContext parameter to enable metadata extraction:
execute: async (inputData, context) => {
  // Create child span WITH requestContext - gets metadata extraction
  const dbSpan = context?.tracingContext.currentSpan?.createChildSpan({
    type: "generic",
    name: "database-query",
    requestContext: context?.requestContext, // Pass to enable metadata extraction
  });
  const results = await db.query("SELECT * FROM users");
  dbSpan?.end({ output: results });

  // Or create child span WITHOUT requestContext - no metadata extraction
  const cacheSpan = context?.tracingContext.currentSpan?.createChildSpan({
    type: "generic",
    name: "cache-check",
    // No requestContext - won't extract metadata
  });
  cacheSpan?.end(); // End the span once its operation completes

  return results;
};
This gives you fine-grained control over which child spans include RequestContext metadata. Root spans (agent/workflow executions) always extract metadata automatically, while child spans only extract when you explicitly pass requestContext.
Creating Child Spans
Child spans allow you to track fine-grained operations within your workflow steps or tools. They provide visibility into sub-operations like database queries, API calls, file operations, or complex calculations. This hierarchical structure helps you identify performance bottlenecks and understand the exact sequence of operations.
Create child spans inside a tool call or workflow step to track specific operations:
execute: async (inputData, context) => {
  // Create a child span for the main database operation
  const querySpan = context?.tracingContext.currentSpan?.createChildSpan({
    type: "generic",
    name: "database-query",
    input: { query: inputData.query },
    metadata: { database: "production" },
  });

  try {
    const results = await db.query(inputData.query);
    querySpan?.end({
      output: results.data,
      metadata: {
        rowsReturned: results.length,
        queryTimeMs: results.executionTime,
        cacheHit: results.fromCache,
      },
    });
    return results;
  } catch (error) {
    querySpan?.error({
      error,
      metadata: { retryable: isRetryableError(error) },
    });
    throw error;
  }
};
Child spans automatically inherit the trace context from their parent, maintaining the relationship hierarchy in your observability platform.
Span Formatting
Mastra provides two ways to transform span data before it reaches your observability platform: span processors and custom span formatters. Both allow you to modify, filter, or enrich trace data, but they operate at different levels and serve different purposes.
| Feature | Span Processors | Custom Span Formatters |
|---|---|---|
| Configuration level | Observability config | Per-exporter |
| Operates on | Internal Span object | ExportedSpan data |
| Applies to | All exporters | Single exporter |
| Async support | No | Yes |
| Use case | Security, filtering, enrichment | Platform-specific formatting, async enrichment |
Use span processors for synchronous transformations that should apply to all exporters (like redacting sensitive data). Use custom span formatters when different exporters need different representations of the same data (like plain text for one platform and structured data for another), or when you need to perform asynchronous operations like fetching data from external APIs.
Span Processors
Span processors transform, filter, or enrich trace data before it's exported. They act as a pipeline between span creation and export, enabling you to modify spans for security, compliance, or debugging purposes. Processors run once and affect all exporters.
Built-in Processors
- Sensitive Data Filter redacts sensitive information. It is enabled in the default observability config.
Creating Custom Processors
You can create custom span processors by implementing the SpanOutputProcessor interface. Here's a simple example that converts all input text in spans to lowercase:
import { Mastra } from "@mastra/core";
import {
  Observability,
  DefaultExporter,
  SensitiveDataFilter,
} from "@mastra/observability";
import type { SpanOutputProcessor, AnySpan } from "@mastra/observability";

export class LowercaseInputProcessor implements SpanOutputProcessor {
  name = "lowercase-processor";

  process(span: AnySpan): AnySpan {
    // Only lowercase string inputs; leave structured inputs untouched
    if (typeof span.input === "string") {
      span.input = span.input.toLowerCase();
    }
    return span;
  }

  async shutdown(): Promise<void> {
    // Cleanup if needed
  }
}

// Use the custom processor
export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      development: {
        spanOutputProcessors: [new LowercaseInputProcessor(), new SensitiveDataFilter()],
        exporters: [new DefaultExporter()],
      },
    },
  }),
});
Processors are executed in the order they're defined, allowing you to chain multiple transformations. Common use cases include:
- Redacting sensitive data (passwords, tokens, API keys)
- Adding environment-specific metadata
- Filtering out spans based on criteria
- Normalizing data formats
- Enriching spans with business context (see the sketch below)
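For example, the enrichment case can reuse the same SpanOutputProcessor interface shown above. A sketch that stamps every span with deployment context (the metadata field names here are illustrative):

import type { SpanOutputProcessor, AnySpan } from "@mastra/observability";

// Stamps every exported span with environment-specific metadata
export class EnvironmentMetadataProcessor implements SpanOutputProcessor {
  name = "environment-metadata-processor";

  process(span: AnySpan): AnySpan {
    span.metadata = {
      ...span.metadata,
      deploymentEnv: process.env.NODE_ENV ?? "development",
      region: process.env.AWS_REGION, // assumed to be set in your deployment
    };
    return span;
  }

  async shutdown(): Promise<void> {
    // No resources to release
  }
}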
Custom Span Formatters
Custom span formatters transform how spans appear in specific observability platforms. Unlike span processors, formatters are configured per-exporter, allowing different formatting for different destinations. Formatters support both synchronous and asynchronous operations.
Use Cases
- Extract plain text from AI SDK messages - Convert structured message arrays to readable text
- Transform input/output formats - Customize how data appears in specific platforms
- Platform-specific field mapping - Add or remove fields based on platform requirements
- Async data enrichment - Fetch additional context from external APIs or databases
Configuration
Add a customSpanFormatter to any exporter configuration:
import { Mastra } from "@mastra/core";
import { Observability } from "@mastra/observability";
import { BraintrustExporter } from "@mastra/braintrust";
import { LangfuseExporter } from "@mastra/langfuse";
import { SpanType } from "@mastra/core/observability";
import type { CustomSpanFormatter } from "@mastra/core/observability";

// Formatter that extracts plain text from AI messages
const plainTextFormatter: CustomSpanFormatter = (span) => {
  if (span.type === SpanType.AGENT_RUN && Array.isArray(span.input)) {
    const userMessage = span.input.find((m) => m.role === "user");
    return {
      ...span,
      input: userMessage?.content ?? span.input,
    };
  }
  return span;
};

export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      default: {
        serviceName: "my-service",
        exporters: [
          // Braintrust gets plain text formatting
          new BraintrustExporter({
            customSpanFormatter: plainTextFormatter,
          }),
          // Langfuse keeps the original structured format
          new LangfuseExporter(),
        ],
      },
    },
  }),
});
Chaining Multiple Formatters
Use chainFormatters to combine multiple formatters. Chains support both sync and async formatters:
import { chainFormatters } from "@mastra/observability";

const inputFormatter: CustomSpanFormatter = (span) => ({
  ...span,
  input: extractPlainText(span.input),
});

const outputFormatter: CustomSpanFormatter = (span) => ({
  ...span,
  output: extractPlainText(span.output),
});

const exporter = new BraintrustExporter({
  customSpanFormatter: chainFormatters([inputFormatter, outputFormatter]),
});
Async Formatters
Custom span formatters support asynchronous operations, enabling use cases like fetching data from external APIs or databases to enrich your spans:
import type { CustomSpanFormatter } from "@mastra/core/observability";

// Async formatter that enriches spans with user data
const userEnrichmentFormatter: CustomSpanFormatter = async (span) => {
  const userId = span.metadata?.userId;
  if (!userId) return span;

  // Fetch user data from your API or database
  const userData = await fetchUserData(userId);
  return {
    ...span,
    metadata: {
      ...span.metadata,
      userName: userData.name,
      userEmail: userData.email,
      department: userData.department,
    },
  };
};

// Async formatter that looks up additional context
const contextEnrichmentFormatter: CustomSpanFormatter = async (span) => {
  if (span.type !== SpanType.AGENT_RUN) return span;

  // Fetch experiment configuration
  const experimentConfig = await getExperimentConfig(span.metadata?.experimentId);
  return {
    ...span,
    metadata: {
      ...span.metadata,
      experimentVariant: experimentConfig?.variant,
      experimentGroup: experimentConfig?.group,
    },
  };
};

// Use an async formatter with an exporter
const braintrustExporter = new BraintrustExporter({
  customSpanFormatter: userEnrichmentFormatter,
});

// Or chain sync and async formatters together
const langfuseExporter = new LangfuseExporter({
  customSpanFormatter: chainFormatters([
    plainTextFormatter, // sync
    userEnrichmentFormatter, // async
    contextEnrichmentFormatter, // async
  ]),
});
Async formatters add latency to span export. Keep async operations fast (under 100ms) to avoid slowing down your application. Consider using caching for frequently accessed data.
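One way to apply that caching advice is to memoize lookups in an in-memory Map so repeated spans for the same user skip the network call (fetchUserData is the hypothetical helper from the example above):

const userCache = new Map<string, { name: string; email: string; department: string }>();

const cachedUserEnrichmentFormatter: CustomSpanFormatter = async (span) => {
  const userId = span.metadata?.userId;
  if (!userId) return span;

  // Only hit the API on a cache miss
  let userData = userCache.get(userId);
  if (!userData) {
    userData = await fetchUserData(userId);
    userCache.set(userId, userData);
  }

  return {
    ...span,
    metadata: { ...span.metadata, userName: userData.name },
  };
};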
Serialization Options
Serialization options control how span data (input, output, and attributes) is truncated before export. This is useful when working with large payloads, deeply nested objects, or when you need to optimize trace storage.
Configuration
Add serializationOptions to your observability configuration:
export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      default: {
        serviceName: "my-service",
        serializationOptions: {
          maxStringLength: 2048, // Maximum length for string values (default: 1024)
          maxDepth: 10, // Maximum depth for nested objects (default: 6)
          maxArrayLength: 100, // Maximum number of items in arrays (default: 50)
          maxObjectKeys: 75, // Maximum number of keys in objects (default: 50)
        },
        exporters: [new DefaultExporter()],
      },
    },
  }),
});
Available Options
| Option | Default | Description |
|---|---|---|
| maxStringLength | 1024 | Maximum length for string values. Longer strings are truncated. |
| maxDepth | 6 | Maximum depth for nested objects. Deeper levels are omitted. |
| maxArrayLength | 50 | Maximum number of items in arrays. Additional items are omitted. |
| maxObjectKeys | 50 | Maximum number of keys in objects. Additional keys are omitted. |
Use Cases
Increasing limits for debugging: If your agents or tools work with large documents, API responses, or data structures, increase these limits to capture more context in your traces:
serializationOptions: {
  maxStringLength: 8192, // Capture longer text content
  maxDepth: 12, // Handle deeply nested JSON responses
  maxArrayLength: 200, // Keep more items from large lists
}
Reducing trace size for production: Lower these values to reduce storage costs and improve performance when you don't need full payload visibility:
serializationOptions: {
  maxStringLength: 256, // Truncate strings aggressively
  maxDepth: 3, // Shallow object representation
  maxArrayLength: 10, // Keep only first few items
  maxObjectKeys: 20, // Limit object keys
}
All options are optional — if not specified, they fall back to the defaults shown above.
Retrieving Trace IDs
When you execute agents or workflows with tracing enabled, the response includes a traceId that you can use to look up the full trace in your observability platform. This is useful for debugging, customer support, or correlating traces with other events in your system.
Agent Trace IDs
Both generate and stream methods return the trace ID in their response:
// Using generate
const result = await agent.generate("Hello");
console.log("Trace ID:", result.traceId);
// Using stream
const streamResult = await agent.stream("Tell me a story");
console.log("Trace ID:", streamResult.traceId);
Workflow Trace IDs
Workflow executions also return trace IDs:
// Create a workflow run
const run = await mastra.getWorkflow("myWorkflow").createRun();

// Start the workflow
const result = await run.start({
  inputData: { data: "process this" },
});
console.log("Trace ID:", result.traceId);

// Or stream the workflow
const { stream, getWorkflowState } = run.stream({
  inputData: { data: "process this" },
});

// Get the final state which includes the trace ID
const finalState = await getWorkflowState();
console.log("Trace ID:", finalState.traceId);
Using Trace IDs
Once you have a trace ID, you can:
- Look up traces in Studio: Navigate to the traces view and search by ID
- Query traces in external platforms: Use the ID in Langfuse, Braintrust, MLflow, or your observability platform
- Correlate with logs: Include the trace ID in your application logs for cross-referencing (see the sketch below)
- Share for debugging: Provide trace IDs to support teams or developers for investigation
The trace ID is only available when tracing is enabled. If tracing is disabled or sampling excludes the request, traceId will be undefined.
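For example, correlating with logs can be as simple as emitting the ID in a structured log line. A minimal sketch using console logging; substitute your own logger:

const result = await agent.generate("Summarize the report");

console.log(
  JSON.stringify({
    level: "info",
    msg: "agent run complete",
    traceId: result.traceId ?? "untraced", // undefined when tracing is off or sampled out
  }),
);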
Integrating with External Tracing Systems
When running Mastra agents or workflows within applications that have existing distributed tracing (OpenTelemetry, Datadog, etc.), you can connect Mastra traces to your parent trace context. This creates a unified view of your entire request flow, making it easier to understand how Mastra operations fit into the broader system.
Passing External Trace IDs
Use the tracingOptions parameter to specify the trace context from your parent system:
// Get trace context from your existing tracing system
const parentTraceId = getCurrentTraceId(); // Your tracing system
const parentSpanId = getCurrentSpanId(); // Your tracing system

// Execute Mastra operations as part of the parent trace
const result = await agent.generate("Analyze this data", {
  tracingOptions: {
    traceId: parentTraceId,
    parentSpanId: parentSpanId,
  },
});
// The Mastra trace will now appear as a child in your distributed trace
OpenTelemetry Integration
Integration with OpenTelemetry allows Mastra traces to appear seamlessly in your existing observability platform:
import { trace } from "@opentelemetry/api";

// Get the current OpenTelemetry span
const currentSpan = trace.getActiveSpan();
const spanContext = currentSpan?.spanContext();

if (spanContext) {
  const result = await agent.generate(userMessage, {
    tracingOptions: {
      traceId: spanContext.traceId,
      parentSpanId: spanContext.spanId,
    },
  });
}
Workflow Integration
Workflows support the same pattern for trace propagation:
const workflow = mastra.getWorkflow("data-pipeline");
const run = await workflow.createRun();

const result = await run.start({
  inputData: { data: "..." },
  tracingOptions: {
    traceId: externalTraceId,
    parentSpanId: externalSpanId,
  },
});
ID Format Requirements
Mastra validates trace and span IDs to ensure compatibility:
- Trace IDs: 1-32 hexadecimal characters (OpenTelemetry uses 32)
- Span IDs: 1-16 hexadecimal characters (OpenTelemetry uses 16)
Invalid IDs are handled gracefully — Mastra logs an error and continues:
- Invalid trace ID → generates a new trace ID
- Invalid parent span ID → ignores the parent relationship
This ensures tracing never crashes your application, even with malformed input.
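If you prefer to pre-validate IDs on your side before passing them in, a check mirroring the documented rules might look like this (the helper names are illustrative):

// Hex strings: 1-32 chars for trace IDs, 1-16 chars for span IDs
const isValidTraceId = (id: string) => /^[0-9a-fA-F]{1,32}$/.test(id);
const isValidSpanId = (id: string) => /^[0-9a-fA-F]{1,16}$/.test(id);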
Example: Express Middleware
Here's a complete example showing trace propagation in an Express application:
import { trace } from "@opentelemetry/api";
import express from "express";

const app = express();

app.post("/api/analyze", async (req, res) => {
  // Get current OpenTelemetry context
  const currentSpan = trace.getActiveSpan();
  const spanContext = currentSpan?.spanContext();

  const result = await agent.generate(req.body.message, {
    tracingOptions: spanContext
      ? {
          traceId: spanContext.traceId,
          parentSpanId: spanContext.spanId,
        }
      : undefined,
  });

  res.json(result);
});
This creates a single distributed trace that includes both the HTTP request handling and the Mastra agent execution, viewable in your observability platform of choice.
Flushing Traces in Serverless Environments
In serverless environments like Vercel's fluid compute, AWS Lambda, or Cloudflare Workers, runtime instances can be reused across multiple requests. The flush() method allows you to ensure all buffered spans are exported before the runtime terminates, without shutting down the exporter (which would prevent future exports).
Serverless environments have ephemeral filesystems. Use external storage instead of local file storage (file:./mastra.db). See the Vercel deployment guide for a complete setup example.
Using flush()
Call flush() on the observability instance to flush all exporters:
// Get the observability instance from Mastra
const observability = mastra.getObservability();
// Flush all buffered spans to all exporters
await observability.flush();
When to Use flush()
Use flush() in these scenarios:
- End of serverless function execution: Ensure spans are exported before the runtime is paused or terminated
- Before long-running operations: Flush accumulated spans before a potentially slow operation
- Periodic flushing: In long-running processes, periodically flush to ensure timely data availability (see the sketch after the example below)
// Example: Vercel serverless function
export async function POST(req: Request) {
  const result = await agent.generate([{ role: "user", content: await req.text() }]);

  // Ensure spans are exported before function completes
  const observability = mastra.getObservability();
  await observability.flush();

  return Response.json(result);
}
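For the periodic-flushing case, a sketch for long-running processes (the 30-second interval is an arbitrary choice):

// Periodically export buffered spans without shutting the exporters down
const observability = mastra.getObservability();
const flushTimer = setInterval(() => {
  observability.flush().catch((err) => console.error("Trace flush failed:", err));
}, 30_000);

// Call clearInterval(flushTimer) when the process begins shutting down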
flush() vs shutdown()
| Method | Behavior | Use Case |
|---|---|---|
| flush() | Exports buffered spans, keeps exporter active | Serverless environments, periodic flushing |
| shutdown() | Exports buffered spans, releases resources | Application shutdown, graceful termination |
Use flush() when you need to ensure data is exported but want to keep the exporter ready for future requests. Use shutdown() only when the application is terminating.
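A graceful-termination sketch for a Node.js process, assuming shutdown() is exposed on the observability instance the same way flush() is:

// Flush remaining spans and release exporter resources on termination
process.on("SIGTERM", async () => {
  await mastra.getObservability().shutdown();
  process.exit(0);
});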
What Gets Traced
Mastra automatically creates spans for:
Agent Operations
- Agent runs - Complete execution with instructions and tools
- LLM calls - Model interactions with tokens and parameters
- Tool executions - Function calls with inputs and outputs
- Memory operations - Thread and semantic recall
Workflow Operations
- Workflow runs - Full execution from start to finish
- Individual steps - Step processing with inputs/outputs
- Control flow - Conditionals, loops, parallel execution
- Wait operations - Delays and event waiting
See Also
Reference Documentation
- Configuration API - ObservabilityConfig details
- Tracing Classes - Core classes and methods
- Span Interfaces - Span types and lifecycle
- Type Definitions - Complete interface reference
Exporters
- DefaultExporter - Storage persistence
- CloudExporter - Mastra Cloud integration
- ConsoleExporter - Debug output
- Arize - Arize Phoenix and Arize AX integration
- Braintrust - Braintrust integration
- Langfuse - Langfuse integration
- MLflow - MLflow OTLP endpoint setup
- OpenTelemetry - OTEL-compatible platforms
Bridges
- OpenTelemetry Bridge - OTEL context integration
Processors
- Sensitive Data Filter - Data redaction