# Sentry Exporter

[Sentry](https://sentry.io/) is an application monitoring platform with AI-specific tracing capabilities. The Sentry exporter sends your traces to Sentry using OpenTelemetry semantic conventions, providing insights into model performance, token usage, and tool executions.

## Installation

```bash
npm install @mastra/sentry@beta
```

```bash
pnpm add @mastra/sentry@beta
```

```bash
yarn add @mastra/sentry@beta
```

```bash
bun add @mastra/sentry@beta
```

## Configuration

### Prerequisites

1. **Sentry Account**: Sign up at [sentry.io](https://sentry.io/)
2. **DSN**: Get your [Data Source Name](https://docs.sentry.io/concepts/key-terms/dsn-explainer/) from Project Settings → Client Keys
3. **Environment Variables**: Set your configuration

```bash
SENTRY_DSN=https://...@...sentry.io/...

# Optional
SENTRY_ENVIRONMENT=production
SENTRY_RELEASE=1.0.0
```

### Zero-Config Setup

With the environment variables set, the exporter works with no configuration:

```typescript
import { Mastra } from "@mastra/core";
import { Observability } from "@mastra/observability";
import { SentryExporter } from "@mastra/sentry";

export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      sentry: {
        serviceName: "my-service",
        exporters: [new SentryExporter()],
      },
    },
  }),
});
```

### Explicit Configuration

You can also pass credentials directly; explicit options take precedence over environment variables:

```typescript
import { Mastra } from "@mastra/core";
import { Observability } from "@mastra/observability";
import { SentryExporter } from "@mastra/sentry";

export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      sentry: {
        serviceName: "my-service",
        exporters: [
          new SentryExporter({
            dsn: process.env.SENTRY_DSN!,
            environment: "production",
            tracesSampleRate: 1.0, // Send 100% of transactions to Sentry
          }),
        ],
      },
    },
  }),
});
```

## Configuration Options

### Complete Configuration

```typescript
new SentryExporter({
  // Required settings
  dsn: process.env.SENTRY_DSN!, // Data Source Name - tells the SDK where to send events

  // Optional settings
  environment: "production", // Deployment environment (enables filtering issues and alerts by environment)
  tracesSampleRate: 1.0, // Fraction of transactions sent to Sentry (0.0 = 0%, 1.0 = 100%)
  release: "1.0.0", // Version of your deployed code (helps identify regressions and track deployments)

  // Advanced Sentry options
  options: {
    // Any additional Sentry.NodeOptions
    integrations: [],
    beforeSend: (event) => event,
    // ... other Sentry SDK options
  },

  // Diagnostic logging
  logLevel: "info", // debug | info | warn | error
});
```

### Sampling Configuration

Control the fraction of transactions sent to Sentry. This is useful for high-volume applications:

```typescript
new SentryExporter({
  dsn: process.env.SENTRY_DSN!,
  tracesSampleRate: 0.1, // Send 10% of transactions to Sentry (recommended for high-load backends)
});
```

Use `1.0` (100%) during development and `0.1` to `0.2` (10-20%) for high-load production applications. To disable tracing entirely, leave `tracesSampleRate` unset rather than setting it to `0`.
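One common pattern is to derive the sample rate from the deployment environment rather than hardcoding it. The following is a minimal sketch; the `resolveSampleRate` helper and its thresholds are illustrative, not part of `@mastra/sentry`:

```typescript
// Hypothetical helper: pick a traces sample rate per deployment environment.
// The function name and the specific rates are illustrative assumptions.
function resolveSampleRate(env: string | undefined): number {
  switch (env) {
    case "production":
      return 0.1; // 10% keeps Sentry quota in check under load
    case "staging":
      return 0.5; // half of staging traffic is usually plenty
    default:
      return 1.0; // capture everything during development
  }
}
```

You could then pass `tracesSampleRate: resolveSampleRate(process.env.NODE_ENV)` to the `SentryExporter` constructor shown above.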
## Span Type Mapping

Mastra span types are automatically mapped to Sentry operations:

| Mastra SpanType             | Sentry Operation       | Notes                                             |
| --------------------------- | ---------------------- | ------------------------------------------------- |
| `AGENT_RUN`                 | `gen_ai.invoke_agent`  | Contains tokens from child MODEL\_GENERATION span |
| `MODEL_GENERATION`          | `gen_ai.chat`          | Includes usage stats, streaming data              |
| `MODEL_STEP`                | _(skipped)_            | Skipped to simplify trace hierarchy               |
| `MODEL_CHUNK`               | _(skipped)_            | Data aggregated in MODEL\_GENERATION              |
| `TOOL_CALL`                 | `gen_ai.execute_tool`  | Tool execution with input/output                  |
| `MCP_TOOL_CALL`             | `gen_ai.execute_tool`  | MCP tool execution                                |
| `WORKFLOW_RUN`              | `workflow.run`         |                                                   |
| `WORKFLOW_STEP`             | `workflow.step`        |                                                   |
| `WORKFLOW_CONDITIONAL`      | `workflow.conditional` |                                                   |
| `WORKFLOW_CONDITIONAL_EVAL` | `workflow.conditional` |                                                   |
| `WORKFLOW_PARALLEL`         | `workflow.parallel`    |                                                   |
| `WORKFLOW_LOOP`             | `workflow.loop`        |                                                   |
| `WORKFLOW_SLEEP`            | `workflow.sleep`       |                                                   |
| `WORKFLOW_WAIT_EVENT`       | `workflow.wait`        |                                                   |
| `PROCESSOR_RUN`             | `ai.processor`         |                                                   |
| `GENERIC`                   | `ai.span`              |                                                   |

## OpenTelemetry Semantic Conventions

The exporter uses standard GenAI semantic conventions with Sentry-specific attributes:

**For MODEL\_GENERATION spans:**

- `gen_ai.system`: Model provider (e.g., `openai`, `anthropic`)
- `gen_ai.request.model`: Model identifier (e.g., `gpt-4`)
- `gen_ai.response.model`: Response model
- `gen_ai.response.text`: Output text response
- `gen_ai.response.tool_calls`: Tool calls made during generation (JSON array)
- `gen_ai.usage.input_tokens`: Input token count
- `gen_ai.usage.output_tokens`: Output token count
- `gen_ai.request.temperature`: Temperature parameter
- `gen_ai.request.stream`: Whether streaming was requested
- `gen_ai.request.messages`: Input messages/prompts (JSON)
- `gen_ai.completion_start_time`: Time the first token arrived

**For TOOL\_CALL spans:**

- `gen_ai.tool.name`: Tool identifier
- `gen_ai.tool.type`: `function`
- `gen_ai.tool.call.id`: Tool call ID
- `gen_ai.tool.input`: Tool input (JSON)
- `gen_ai.tool.output`: Tool output (JSON)
- `tool.success`: Whether the tool call succeeded

**For AGENT\_RUN spans:**

- `gen_ai.agent.name`: Agent identifier
- `gen_ai.pipeline.name`: Agent name (for the Sentry AI view)
- `gen_ai.agent.instructions`: Agent instructions
- `gen_ai.response.model`: Model from the child generation
- `gen_ai.response.text`: Output text from the child generation
- `gen_ai.usage.*`: Token usage from the child generation

## Features

- **Hierarchical traces**: Maintains parent-child relationships
- **Token tracking**: Automatic token usage tracking for generations
- **Tool call tracking**: Captures tool executions with input/output
- **Streaming support**: Aggregates streaming responses
- **Error tracking**: Automatic error status and exception capture
- **Workflow support**: Tracks workflow execution steps
- **Simplified hierarchy**: MODEL\_STEP and MODEL\_CHUNK spans are skipped to reduce noise

## Related

- [Tracing Overview](https://mastra.ai/docs/observability/tracing/overview/llms.txt)
- [Sentry Documentation](https://docs.sentry.io/)
- [OpenTelemetry Semantic Conventions](https://opentelemetry.io/docs/concepts/semantic-conventions/)
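As an illustration of consuming the GenAI usage attributes documented under "OpenTelemetry Semantic Conventions" above, here is a minimal sketch. The `totalTokens` helper and the `SpanAttributes` type are assumptions for this example, not part of the exporter's API; only the attribute keys come from the conventions listed above:

```typescript
// Sketch: total token usage from a span's GenAI attributes.
// `SpanAttributes` is an illustrative stand-in for whatever attribute
// record your span-processing code receives.
type SpanAttributes = Record<string, unknown>;

function totalTokens(attrs: SpanAttributes): number {
  // Attribute keys per the OpenTelemetry GenAI semantic conventions.
  const input = Number(attrs["gen_ai.usage.input_tokens"] ?? 0);
  const output = Number(attrs["gen_ai.usage.output_tokens"] ?? 0);
  return input + output;
}
```

Because `AGENT_RUN` spans carry the usage from their child `MODEL_GENERATION` span, the same helper works at either level of the trace hierarchy.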