Mastra Observability

Observe and evaluate your AI agents' performance

Mastra Observability monitors LLM operations, traces agent decision paths, and helps you debug complex workflows. Seamlessly integrate with any OpenTelemetry-compatible platform.

AI agent and workflow monitoring

When errors occur, Mastra shows you exactly what happened. Every LLM call logs token usage, latency, prompts and completions. Every agent run captures decision paths, tool calls and memory operations.

Agent tracing and telemetry

Mastra traces every step of agent execution and sends telemetry to any observability provider. Decorator-based instrumentation captures the full call stack, both locally and in production. Mastra supports any OpenTelemetry-compatible platform, including MLflow, Langfuse, Braintrust, Datadog, New Relic and SigNoz.
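Wiring telemetry up looks roughly like the sketch below: a `telemetry` block on the `Mastra` instance pointing an OTLP exporter at your provider. The exact option names may differ by Mastra version, and the endpoint and header values are placeholders you would replace with your provider's details.

```typescript
import { Mastra } from "@mastra/core";

// Sketch: enable OpenTelemetry export for all agent and workflow traces.
// Any OTLP-compatible backend (Langfuse, Datadog, SigNoz, ...) can receive these.
export const mastra = new Mastra({
  // ...agents, workflows, and other configuration
  telemetry: {
    serviceName: "my-agent-app",   // how traces are labeled in your backend
    enabled: true,
    export: {
      type: "otlp",
      endpoint: "https://otlp.example.com/v1/traces", // placeholder endpoint
      headers: {
        Authorization: "Bearer <YOUR_API_KEY>",       // placeholder credential
      },
    },
  },
});
```

Because the export path is standard OTLP, switching providers is a matter of changing the endpoint and headers, not re-instrumenting your agents.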

Build custom agent evals

Mastra Scorers give you quantifiable metrics for measuring agent quality; they run automatically in the background and store their results in your database. Get insights into performance, compare different approaches, and identify areas for improvement in your AI systems.
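The metric at the heart of a scorer can be sketched in plain TypeScript. The `keywordCoverageScorer` below is a hypothetical example (not part of the Mastra API): it scores an agent's output by how many expected keywords it contains, returning a value in [0, 1] plus a reason string of the kind you might store alongside a run.

```typescript
// Hypothetical scorer logic: fraction of expected keywords present in the output.
type ScoreResult = { score: number; reason: string };

function keywordCoverageScorer(output: string, keywords: string[]): ScoreResult {
  const text = output.toLowerCase();
  const hits = keywords.filter((k) => text.includes(k.toLowerCase()));
  // Empty keyword lists are treated as trivially satisfied.
  const score = keywords.length === 0 ? 1 : hits.length / keywords.length;
  return {
    score,
    reason: `matched ${hits.length}/${keywords.length} expected keywords`,
  };
}

const result = keywordCoverageScorer(
  "Paris is the capital of France.",
  ["Paris", "France"],
);
// result.score === 1 when every keyword appears in the output
```

In Mastra you would register logic like this as a scorer so it runs against agent outputs automatically, rather than calling it by hand.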

Python trains,
TypeScript ships.

Frequently asked questions