
Sentry Exporter

Sentry is an application monitoring platform with AI-specific tracing capabilities. The Sentry exporter sends your traces to Sentry using OpenTelemetry semantic conventions, providing insights into model performance, token usage, and tool executions.

Installation

npm install @mastra/sentry@beta

Configuration

Prerequisites

  1. Sentry Account: Sign up at sentry.io
  2. DSN: Get your Data Source Name from Project Settings → Client Keys
  3. Environment Variables: Set your configuration
.env
SENTRY_DSN=https://...@...sentry.io/...

# Optional
SENTRY_ENVIRONMENT=production
SENTRY_RELEASE=1.0.0

Zero-Config Setup

With environment variables set, use the exporter with no configuration:

src/mastra/index.ts
import { Mastra } from "@mastra/core";
import { Observability } from "@mastra/observability";
import { SentryExporter } from "@mastra/sentry";

export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      sentry: {
        serviceName: "my-service",
        exporters: [new SentryExporter()],
      },
    },
  }),
});

Explicit Configuration

You can also pass credentials directly; explicit settings take precedence over environment variables:

src/mastra/index.ts
import { Mastra } from "@mastra/core";
import { Observability } from "@mastra/observability";
import { SentryExporter } from "@mastra/sentry";

export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      sentry: {
        serviceName: "my-service",
        exporters: [
          new SentryExporter({
            dsn: process.env.SENTRY_DSN!,
            environment: "production",
            tracesSampleRate: 1.0, // Send 100% of transactions to Sentry
          }),
        ],
      },
    },
  }),
});

Configuration Options

Complete Configuration

new SentryExporter({
  // Required settings
  dsn: process.env.SENTRY_DSN!, // Data Source Name - tells the SDK where to send events

  // Optional settings
  environment: "production", // Deployment environment (enables filtering issues and alerts by environment)
  tracesSampleRate: 1.0, // Fraction of transactions sent to Sentry (0.0 = 0%, 1.0 = 100%)
  release: "1.0.0", // Version of the deployed code (helps identify regressions and track deployments)

  // Advanced Sentry options
  options: {
    // Any additional Sentry.NodeOptions
    integrations: [],
    beforeSend: (event) => event,
    // ... other Sentry SDK options
  },

  // Diagnostic logging
  logLevel: "info", // debug | info | warn | error
});
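One use for the `beforeSend` hook is scrubbing sensitive prompt content before events leave your process. The sketch below uses a simplified event shape rather than the full Sentry SDK `Event` type, and the location of the redacted attribute is an assumption for illustration:

```typescript
// Sketch of a beforeSend hook that redacts prompt text. The event shape
// here is simplified for illustration; the real Sentry SDK passes its
// full Event type, and where the attribute lives may differ.
type EventLike = {
  contexts?: Record<string, Record<string, unknown>>;
};

function scrubPrompts(event: EventLike): EventLike {
  const trace = event.contexts?.trace;
  if (trace && "gen_ai.request.messages" in trace) {
    // Drop raw prompt content while keeping the rest of the event intact.
    trace["gen_ai.request.messages"] = "[REDACTED]";
  }
  return event;
}
```

It would then be wired in through the exporter's advanced options, e.g. `options: { beforeSend: scrubPrompts }`.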

Sampling Configuration

Control the percentage of transactions sent to Sentry. This is useful for high-volume applications:

new SentryExporter({
  dsn: process.env.SENTRY_DSN!,
  tracesSampleRate: 0.1, // Send 10% of transactions to Sentry (recommended for high-load backends)
});
Tip

Set tracesSampleRate to 1.0 (100%) in development and 0.1 to 0.2 (10-20%) for high-load production applications. To disable tracing entirely, omit tracesSampleRate rather than setting it to 0.
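One common pattern is deriving the sample rate from the deployment environment. The helper below is an illustrative sketch following the tip above; `sampleRateFor` and its thresholds are not part of the SDK:

```typescript
// Illustrative helper (not part of the SDK) that picks a sample rate per
// environment, following the guidance in the tip above.
function sampleRateFor(env: string | undefined): number | undefined {
  if (env === "development") return 1.0; // capture everything locally
  if (env === "production") return 0.1; // 10% for high-load production
  return undefined; // leave the option unset to disable tracing
}
```

It could then feed the exporter config, e.g. `tracesSampleRate: sampleRateFor(process.env.SENTRY_ENVIRONMENT)`.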

Span Type Mapping

Mastra span types are automatically mapped to Sentry operations:

| Mastra Span Type | Sentry Operation | Notes |
| --- | --- | --- |
| AGENT_RUN | gen_ai.invoke_agent | Contains tokens from child MODEL_GENERATION span |
| MODEL_GENERATION | gen_ai.chat | Includes usage stats, streaming data |
| MODEL_STEP | (skipped) | Skipped to simplify trace hierarchy |
| MODEL_CHUNK | (skipped) | Data aggregated in MODEL_GENERATION |
| TOOL_CALL | gen_ai.execute_tool | Tool execution with input/output |
| MCP_TOOL_CALL | gen_ai.execute_tool | MCP tool execution |
| WORKFLOW_RUN | workflow.run | |
| WORKFLOW_STEP | workflow.step | |
| WORKFLOW_CONDITIONAL | workflow.conditional | |
| WORKFLOW_CONDITIONAL_EVAL | workflow.conditional | |
| WORKFLOW_PARALLEL | workflow.parallel | |
| WORKFLOW_LOOP | workflow.loop | |
| WORKFLOW_SLEEP | workflow.sleep | |
| WORKFLOW_WAIT_EVENT | workflow.wait | |
| PROCESSOR_RUN | ai.processor | |
| GENERIC | ai.span | |
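Conceptually the mapping behaves like a lookup table. The object below mirrors a few rows of the table above for illustration; the exporter performs this mapping internally and its actual implementation may differ:

```typescript
// Illustrative lookup mirroring rows of the span-type table; skipped
// span types map to undefined. Not the exporter's actual internals.
const SENTRY_OP: Record<string, string | undefined> = {
  AGENT_RUN: "gen_ai.invoke_agent",
  MODEL_GENERATION: "gen_ai.chat",
  MODEL_STEP: undefined, // skipped to simplify the trace hierarchy
  MODEL_CHUNK: undefined, // data aggregated into MODEL_GENERATION
  TOOL_CALL: "gen_ai.execute_tool",
  MCP_TOOL_CALL: "gen_ai.execute_tool",
  WORKFLOW_RUN: "workflow.run",
  WORKFLOW_STEP: "workflow.step",
  GENERIC: "ai.span",
};
```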

OpenTelemetry Semantic Conventions

The exporter uses standard GenAI semantic conventions with Sentry-specific attributes:

For MODEL_GENERATION spans:

  • gen_ai.system: Model provider (e.g., openai, anthropic)
  • gen_ai.request.model: Model identifier (e.g., gpt-4)
  • gen_ai.response.model: Response model
  • gen_ai.response.text: Output text response
  • gen_ai.response.tool_calls: Tool calls made during generation (JSON array)
  • gen_ai.usage.input_tokens: Input token count
  • gen_ai.usage.output_tokens: Output token count
  • gen_ai.request.temperature: Temperature parameter
  • gen_ai.request.stream: Whether streaming was requested
  • gen_ai.request.messages: Input messages/prompts (JSON)
  • gen_ai.completion_start_time: Time first token arrived

For TOOL_CALL spans:

  • gen_ai.tool.name: Tool identifier
  • gen_ai.tool.type: function
  • gen_ai.tool.call.id: Tool call ID
  • gen_ai.tool.input: Tool input (JSON)
  • gen_ai.tool.output: Tool output (JSON)
  • tool.success: Whether the tool call succeeded

For AGENT_RUN spans:

  • gen_ai.agent.name: Agent identifier
  • gen_ai.pipeline.name: Agent name (for Sentry AI view)
  • gen_ai.agent.instructions: Agent instructions
  • gen_ai.response.model: Model from child generation
  • gen_ai.response.text: Output text from child generation
  • gen_ai.usage.*: Token usage from child generation

Features

  • Hierarchical traces: Maintains parent-child relationships
  • Token tracking: Automatic token usage tracking for generations
  • Tool call tracking: Captures tool executions with input/output
  • Streaming support: Aggregates streaming responses
  • Error tracking: Automatic error status and exception capture
  • Workflow support: Tracks workflow execution steps
  • Simplified hierarchy: MODEL_STEP and MODEL_CHUNK spans are skipped to reduce noise