# SentryExporter

Sends tracing data to Sentry for AI tracing and monitoring using OpenTelemetry semantic conventions.

## Constructor

```typescript
new SentryExporter(config: SentryExporterConfig)
```

## SentryExporterConfig

```typescript
interface SentryExporterConfig extends BaseExporterConfig {
  dsn?: string;
  environment?: string;
  tracesSampleRate?: number;
  release?: string;
  options?: Partial<Sentry.NodeOptions>;
}
```

Extends `BaseExporterConfig`, which includes:

- `logger?: IMastraLogger` - Logger instance
- `logLevel?: LogLevel | 'debug' | 'info' | 'warn' | 'error'` - Log level (default: `INFO`)

## Methods

### exportTracingEvent

```typescript
async exportTracingEvent(event: TracingEvent): Promise<void>
```

Exports a tracing event to Sentry. Handles `SPAN_STARTED`, `SPAN_UPDATED`, and `SPAN_ENDED` events.

### flush

```typescript
async flush(): Promise<void>
```

Force flushes any pending spans to Sentry without shutting down the exporter. Waits up to 2 seconds for pending data to be sent. Useful in serverless environments where you need to ensure spans are exported before the runtime terminates.

### shutdown

```typescript
async shutdown(): Promise<void>
```

Ends all active spans, clears internal state, and closes the Sentry connection. Waits up to 2 seconds for pending data to be sent.
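The serverless pattern described above can be sketched as follows: call `flush()` in a `finally` block so spans are exported even when the handler throws. The handler shape and the inline stand-in exporter are hypothetical (in real code you would import `SentryExporter` from `@mastra/sentry`); only the `flush()` contract comes from this API.

```typescript
// Stand-in for the exporter's flush() contract, so the pattern runs standalone.
// Real code: import { SentryExporter } from "@mastra/sentry";
const exporter = {
  async flush(): Promise<void> {
    // SentryExporter.flush() waits up to 2 seconds for pending spans to be sent.
  },
};

// Hypothetical serverless handler: do the work, then flush in `finally`
// so pending spans are exported before the runtime freezes or terminates.
export async function handler(event: { prompt: string }): Promise<string> {
  try {
    // ... invoke your agent or workflow here ...
    return `handled: ${event.prompt}`;
  } finally {
    await exporter.flush();
  }
}
```

Prefer `flush()` over `shutdown()` inside a handler: the exporter stays usable for the next invocation.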
## Usage

### Zero-Config (using environment variables)

```typescript
import { SentryExporter } from "@mastra/sentry";

// Reads from SENTRY_DSN, SENTRY_ENVIRONMENT, SENTRY_RELEASE
const exporter = new SentryExporter();
```

### Explicit Configuration

```typescript
import { SentryExporter } from "@mastra/sentry";

const exporter = new SentryExporter({
  dsn: process.env.SENTRY_DSN,
  environment: "production",
  tracesSampleRate: 1.0,
  release: "1.0.0",
});
```

### With Sampling

```typescript
const exporter = new SentryExporter({
  dsn: process.env.SENTRY_DSN,
  environment: "production",
  tracesSampleRate: 0.1, // Send 10% of transactions to Sentry (recommended for high-load backends)
});
```

### With Custom Sentry Options

```typescript
const exporter = new SentryExporter({
  dsn: process.env.SENTRY_DSN,
  environment: "production",
  options: {
    integrations: [
      // Custom Sentry integrations
    ],
    beforeSend: (event) => {
      // Custom event processing
      return event;
    },
  },
});
```

## Span Mapping

Mastra span types are mapped to Sentry operations:

| Mastra SpanType             | Sentry Operation       |
| --------------------------- | ---------------------- |
| `AGENT_RUN`                 | `gen_ai.invoke_agent`  |
| `MODEL_GENERATION`          | `gen_ai.chat`          |
| `MODEL_STEP`                | _(skipped)_            |
| `MODEL_CHUNK`               | _(skipped)_            |
| `TOOL_CALL`                 | `gen_ai.execute_tool`  |
| `MCP_TOOL_CALL`             | `gen_ai.execute_tool`  |
| `WORKFLOW_RUN`              | `workflow.run`         |
| `WORKFLOW_STEP`             | `workflow.step`        |
| `WORKFLOW_CONDITIONAL`      | `workflow.conditional` |
| `WORKFLOW_CONDITIONAL_EVAL` | `workflow.conditional` |
| `WORKFLOW_PARALLEL`         | `workflow.parallel`    |
| `WORKFLOW_LOOP`             | `workflow.loop`        |
| `WORKFLOW_SLEEP`            | `workflow.sleep`       |
| `WORKFLOW_WAIT_EVENT`       | `workflow.wait`        |
| `PROCESSOR_RUN`             | `ai.processor`         |
| `GENERIC`                   | `ai.span`              |

`MODEL_STEP` and `MODEL_CHUNK` spans are skipped to simplify the trace hierarchy. Their data is aggregated into `MODEL_GENERATION` spans.
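The mapping table above can be expressed as a simple lookup. This is an illustrative sketch, not the exporter's internal code: the `SPAN_OP_MAP` and `sentryOpFor` names are assumptions, and `null` here marks the span types the exporter skips.

```typescript
// Illustrative mapping of Mastra span types to Sentry operations.
// `null` means the span type is skipped (MODEL_STEP, MODEL_CHUNK).
const SPAN_OP_MAP: Record<string, string | null> = {
  AGENT_RUN: "gen_ai.invoke_agent",
  MODEL_GENERATION: "gen_ai.chat",
  MODEL_STEP: null,
  MODEL_CHUNK: null,
  TOOL_CALL: "gen_ai.execute_tool",
  MCP_TOOL_CALL: "gen_ai.execute_tool",
  WORKFLOW_RUN: "workflow.run",
  WORKFLOW_STEP: "workflow.step",
  WORKFLOW_CONDITIONAL: "workflow.conditional",
  WORKFLOW_CONDITIONAL_EVAL: "workflow.conditional",
  WORKFLOW_PARALLEL: "workflow.parallel",
  WORKFLOW_LOOP: "workflow.loop",
  WORKFLOW_SLEEP: "workflow.sleep",
  WORKFLOW_WAIT_EVENT: "workflow.wait",
  PROCESSOR_RUN: "ai.processor",
  GENERIC: "ai.span",
};

// Resolve a span type; unknown types fall back to the generic operation.
function sentryOpFor(spanType: string): string | null {
  return spanType in SPAN_OP_MAP ? SPAN_OP_MAP[spanType] : "ai.span";
}
```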
## Environment Variables

The exporter reads configuration from these environment variables:

| Variable             | Description                                           |
| -------------------- | ----------------------------------------------------- |
| `SENTRY_DSN`         | Data Source Name - tells the SDK where to send events |
| `SENTRY_ENVIRONMENT` | Deployment environment name                           |
| `SENTRY_RELEASE`     | Version of your code deployed                         |

## OpenTelemetry Attributes

The exporter sets these standard OpenTelemetry attributes:

**Common attributes (all spans):**

- `sentry.origin`: `auto.ai.mastra` (identifies spans from Mastra)
- `ai.span.type`: Mastra span type

**MODEL_GENERATION spans:**

- `gen_ai.operation.name`: `chat`
- `gen_ai.system`: Model provider
- `gen_ai.request.model`: Model identifier
- `gen_ai.request.messages`: Input messages/prompts (JSON)
- `gen_ai.response.model`: Response model
- `gen_ai.response.text`: Output text
- `gen_ai.response.tool_calls`: Tool calls made during generation (JSON array)
- `gen_ai.usage.input_tokens`: Input token count
- `gen_ai.usage.output_tokens`: Output token count
- `gen_ai.usage.total_tokens`: Total tokens
- `gen_ai.request.stream`: Streaming flag
- `gen_ai.request.temperature`: Temperature parameter
- `gen_ai.completion_start_time`: First token time

**TOOL_CALL spans:**

- `gen_ai.operation.name`: `execute_tool`
- `gen_ai.tool.name`: Tool identifier
- `gen_ai.tool.type`: `function`
- `gen_ai.tool.call.id`: Tool call ID
- `gen_ai.tool.input`: Tool input
- `gen_ai.tool.output`: Tool output
- `tool.success`: Success flag

**AGENT_RUN spans:**

- `gen_ai.operation.name`: `invoke_agent`
- `gen_ai.agent.name`: Agent identifier
- `gen_ai.pipeline.name`: Agent name
- `gen_ai.agent.instructions`: Agent instructions
- `gen_ai.response.model`: Model from child generation
- `gen_ai.response.text`: Output from child generation
- `gen_ai.usage.*`: Token usage from child generation
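Given the environment variables above and the zero-config constructor, the resolution presumably lets explicit config win and environment variables fill the gaps. A minimal sketch of that precedence (the `resolveSentryConfig` helper and its interface are hypothetical, not part of the package):

```typescript
interface ResolvedSentryConfig {
  dsn?: string;
  environment?: string;
  release?: string;
}

// Hypothetical helper: explicit config takes precedence,
// environment variables fill any missing fields.
function resolveSentryConfig(
  config: ResolvedSentryConfig = {},
): ResolvedSentryConfig {
  return {
    dsn: config.dsn ?? process.env.SENTRY_DSN,
    environment: config.environment ?? process.env.SENTRY_ENVIRONMENT,
    release: config.release ?? process.env.SENTRY_RELEASE,
  };
}
```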