Mastra now supports AI Tracing


Sep 30, 2025

When we first built observability into Mastra, we went with the obvious choice: OpenTelemetry. It's the industry standard, it integrates with everything, and it promised to give us complete visibility into our AI applications.

But there was a problem.

We were drowning in noise.

Lost in the forest of traces

Here's what a typical trace looked like when debugging an agent that calls a workflow with nested tools:

[Image: tracingbefore.png — a typical trace, cluttered with internal spans]

See the problem? The actual AI operations—the LLM call and the tool execution—are buried in implementation details. You're seeing every function call, every validation check, every formatting operation. It's like trying to debug your application by reading the Node.js source code.

Traditional observability tools are built for traditional applications. They assume you want to see everything because in a typical web service, any function could be the bottleneck. But AI applications are different. The structure is more predictable: agents call models, models call tools, tools might trigger workflows. What varies is the behavior within that structure.

AI Tracing: A better kind of observability

So we built AI Tracing—observability designed specifically for AI applications.

Here's the same operation with AI Tracing:

[Image: tracingafter.png — the same operation with AI Tracing]

Just the operations that matter. No implementation noise. Token counts elevated to the trace level where you can actually see them. Clear parent-child relationships between AI operations.

What makes AI Tracing different

Built for multiple observability platforms

Different observability platforms have different strengths. Langfuse excels at LLM-specific analytics, Braintrust is great for evaluation and testing, and so on.

AI Tracing doesn't force you to choose a single platform. The new exporter architecture lets you send the same trace data to multiple destinations, each formatted for that platform's strengths.

Plus, the configSelector function enables dynamic configuration selection at runtime, allowing you to route traces based on request context, environment variables, feature flags, or any custom logic.

A common pattern is to select a configuration based on the deployment environment:

export const mastra = new Mastra({
  observability: {
    configs: {
      development: {
        serviceName: 'my-service-dev',
        sampling: { type: 'always' },
        exporters: [new DefaultExporter()],
      },
      staging: {
        serviceName: 'my-service-staging',
        sampling: { type: 'ratio', probability: 0.5 },
        exporters: [langfuseExporter],
      },
      production: {
        serviceName: 'my-service-prod',
        sampling: { type: 'ratio', probability: 0.01 },
        exporters: [cloudExporter, langfuseExporter],
      },
    },
    configSelector: (context, availableTracers) => {
      const env = process.env.NODE_ENV || 'development';
      return env;
    },
  },
});
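
For reference, the langfuseExporter and cloudExporter instances referenced above would be constructed before building the Mastra config. Here's a minimal sketch — the import paths and the CloudExporter constructor are assumptions, so check the exporter docs for the exact packages and options:

// Import paths below are assumptions; see the Mastra exporter docs for the exact packages.
import { Mastra } from '@mastra/core/mastra';
import { DefaultExporter, CloudExporter } from '@mastra/core/ai-tracing';
import { LangfuseExporter } from '@mastra/langfuse';

// Sends traces to Langfuse, formatted for its LLM analytics views
const langfuseExporter = new LangfuseExporter({
  publicKey: process.env.LANGFUSE_PUBLIC_KEY,
  secretKey: process.env.LANGFUSE_SECRET_KEY,
});

// Sends traces to Mastra Cloud (assumed here to need no required options)
const cloudExporter = new CloudExporter();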

Right now we support the following exporters: Braintrust, Langfuse, and LangSmith. We also ship an exporter that sends traces to any OTel-compatible provider.

Batch optimization, real-time debugging

One of the most frustrating aspects of traditional observability is the delay. You run your agent, something goes wrong, and you have to wait for the batch export to see what happened. By the time you see the trace, you've lost context.

AI Tracing supports real-time export for development (provided the configured exporter supports real-time tracing):

new LangfuseExporter({
  publicKey: process.env.LANGFUSE_PUBLIC_KEY,
  secretKey: process.env.LANGFUSE_SECRET_KEY,
  realtime: process.env.NODE_ENV === 'development', // Instant visibility
})

In development, every span is immediately flushed to your observability platform. You can watch your traces appear in real-time as your agent executes. When something goes wrong, the trace is already there.

In production, batch mode takes over for efficiency. Spans are buffered and sent in batches, reducing network overhead and improving performance.
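
One way to combine the two modes, sketched from the options shown above: a single Langfuse exporter whose realtime flag follows NODE_ENV, reused by both tracing configs (the config names and sampling values here are purely illustrative):

// Sketch: per-span flush in development, buffered batches in production.
const isDev = process.env.NODE_ENV === 'development';

const langfuse = new LangfuseExporter({
  publicKey: process.env.LANGFUSE_PUBLIC_KEY,
  secretKey: process.env.LANGFUSE_SECRET_KEY,
  realtime: isDev, // instant visibility in dev, batching everywhere else
});

export const mastra = new Mastra({
  observability: {
    configs: {
      development: {
        serviceName: 'my-service-dev',
        sampling: { type: 'always' },
        exporters: [langfuse],
      },
      production: {
        serviceName: 'my-service-prod',
        sampling: { type: 'ratio', probability: 0.01 },
        exporters: [langfuse],
      },
    },
    configSelector: () => (isDev ? 'development' : 'production'),
  },
});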

Automatic noise filtering

Every AI framework has implementation details that don't belong in your traces. Message validation, context initialization, response formatting—these are important for the framework to work, but they're noise when you're trying to understand your application's behavior.

AI Tracing automatically filters out Mastra's internal operations. We mark them as internal spans, and by default, they don't get exported to your observability platform. You see your application's logic, not our framework's plumbing.

(But if you need it, the data is still there. You can enable internal span export to see everything.)
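
The exact toggle isn't shown in this post, so treat the following as a hypothetical sketch: a debug-only config that opts into internal spans via an option named includeInternalSpans, invented here purely for illustration — the real flag may differ, so check the AI Tracing docs:

// 'includeInternalSpans' is a hypothetical option name, used only to illustrate the idea.
export const mastra = new Mastra({
  observability: {
    configs: {
      debug: {
        serviceName: 'my-service-debug',
        sampling: { type: 'always' },
        exporters: [new DefaultExporter()],
        includeInternalSpans: true, // hypothetical: also export Mastra's internal spans
      },
    },
  },
});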

What’s next

We're working on adding suspend/resume support to workflow tracing, as well as the ability to set custom trace metadata when starting an Agent or Workflow from the API. Look out for future updates.

The paradox of good observability is that it should be invisible when things work and invaluable when they don't. AI Tracing flips traditional tracing on its head—during normal operation, you see clean, high-level flows. When something goes wrong, you can drill down.


AI Tracing is available now in @mastra/core 0.14.0 and later. Check out the documentation to get started, or see the GitHub issue for the latest updates.
