SentryExporter

Exports Mastra AI tracing data to Sentry for monitoring, using OpenTelemetry semantic conventions.

Constructor

new SentryExporter(config: SentryExporterConfig)

SentryExporterConfig

interface SentryExporterConfig extends BaseExporterConfig {
  dsn?: string;
  environment?: string;
  tracesSampleRate?: number;
  release?: string;
  options?: Partial<Sentry.NodeOptions>;
}

Extends BaseExporterConfig, which includes:

  • logger?: IMastraLogger - Logger instance
  • logLevel?: LogLevel | 'debug' | 'info' | 'warn' | 'error' - Log level (default: INFO)
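For example, exporter logging can be made more verbose while debugging (a minimal sketch; the `logLevel` values are those listed above):

```typescript
import { SentryExporter } from "@mastra/sentry";

// Raise the exporter's own log verbosity during development.
const exporter = new SentryExporter({
  dsn: process.env.SENTRY_DSN,
  logLevel: "debug",
});
```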

Methods

exportTracingEvent

async exportTracingEvent(event: TracingEvent): Promise<void>

Exports a tracing event to Sentry. Handles SPAN_STARTED, SPAN_UPDATED, and SPAN_ENDED events.

flush

async flush(): Promise<void>

Force flushes any pending spans to Sentry without shutting down the exporter. Waits up to 2 seconds for pending data to be sent. Useful in serverless environments where you need to ensure spans are exported before the runtime terminates.
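In a serverless handler, for instance, flush can be awaited before returning (a sketch; the handler shape and `handleRequest` are illustrative placeholders, not part of the exporter API):

```typescript
import { SentryExporter } from "@mastra/sentry";

const exporter = new SentryExporter({ dsn: process.env.SENTRY_DSN });

// Hypothetical application logic standing in for real request handling.
async function handleRequest(event: unknown): Promise<string> {
  return "ok";
}

// Illustrative serverless entry point: flush ensures spans reach
// Sentry before the runtime freezes or terminates.
export async function handler(event: unknown): Promise<string> {
  try {
    return await handleRequest(event);
  } finally {
    await exporter.flush(); // waits up to 2 seconds for pending spans
  }
}
```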

shutdown

async shutdown(): Promise<void>

Ends all active spans, clears internal state, and closes the Sentry connection. Waits up to 2 seconds for pending data to be sent.

Usage

Zero-Config (using environment variables)

import { SentryExporter } from "@mastra/sentry";

// Reads from SENTRY_DSN, SENTRY_ENVIRONMENT, SENTRY_RELEASE
const exporter = new SentryExporter();

Explicit Configuration

import { SentryExporter } from "@mastra/sentry";

const exporter = new SentryExporter({
  dsn: process.env.SENTRY_DSN,
  environment: "production",
  tracesSampleRate: 1.0,
  release: "1.0.0",
});

With Sampling

const exporter = new SentryExporter({
  dsn: process.env.SENTRY_DSN,
  environment: "production",
  tracesSampleRate: 0.1, // Send 10% of transactions to Sentry (recommended for high-load backends)
});

With Custom Sentry Options

const exporter = new SentryExporter({
  dsn: process.env.SENTRY_DSN,
  environment: "production",
  options: {
    integrations: [
      // Custom Sentry integrations
    ],
    beforeSend: (event) => {
      // Custom event processing
      return event;
    },
  },
});

Span Mapping

Mastra span types are mapped to Sentry operations:

Mastra Span Type            Sentry Operation
AGENT_RUN                   gen_ai.invoke_agent
MODEL_GENERATION            gen_ai.chat
MODEL_STEP                  (skipped)
MODEL_CHUNK                 (skipped)
TOOL_CALL                   gen_ai.execute_tool
MCP_TOOL_CALL               gen_ai.execute_tool
WORKFLOW_RUN                workflow.run
WORKFLOW_STEP               workflow.step
WORKFLOW_CONDITIONAL        workflow.conditional
WORKFLOW_CONDITIONAL_EVAL   workflow.conditional
WORKFLOW_PARALLEL           workflow.parallel
WORKFLOW_LOOP               workflow.loop
WORKFLOW_SLEEP              workflow.sleep
WORKFLOW_WAIT_EVENT         workflow.wait
PROCESSOR_RUN               ai.processor
GENERIC                     ai.span

MODEL_STEP and MODEL_CHUNK spans are skipped to simplify trace hierarchy. Their data is aggregated into MODEL_GENERATION spans.

Environment Variables

The exporter reads configuration from these environment variables:

Variable             Description
SENTRY_DSN           Data Source Name - tells the SDK where to send events
SENTRY_ENVIRONMENT   Deployment environment name
SENTRY_RELEASE       Version of your code deployed
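For example, these might be set in the deployment environment (the DSN shown is an illustrative placeholder, not a real project key):

```shell
export SENTRY_DSN="https://publicKey@o0.ingest.sentry.io/0"
export SENTRY_ENVIRONMENT="production"
export SENTRY_RELEASE="1.0.0"
```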

OpenTelemetry Attributes

The exporter sets these standard OpenTelemetry attributes:

Common attributes (all spans):

  • sentry.origin: auto.ai.mastra (identifies spans from Mastra)
  • ai.span.type: Mastra span type

MODEL_GENERATION spans:

  • gen_ai.operation.name: chat
  • gen_ai.system: Model provider
  • gen_ai.request.model: Model identifier
  • gen_ai.request.messages: Input messages/prompts (JSON)
  • gen_ai.response.model: Response model
  • gen_ai.response.text: Output text
  • gen_ai.response.tool_calls: Tool calls made during generation (JSON array)
  • gen_ai.usage.input_tokens: Input token count
  • gen_ai.usage.output_tokens: Output token count
  • gen_ai.usage.total_tokens: Total tokens
  • gen_ai.request.stream: Streaming flag
  • gen_ai.request.temperature: Temperature parameter
  • gen_ai.completion_start_time: First token time

TOOL_CALL spans:

  • gen_ai.operation.name: execute_tool
  • gen_ai.tool.name: Tool identifier
  • gen_ai.tool.type: function
  • gen_ai.tool.call.id: Tool call ID
  • gen_ai.tool.input: Tool input
  • gen_ai.tool.output: Tool output
  • tool.success: Success flag

AGENT_RUN spans:

  • gen_ai.operation.name: invoke_agent
  • gen_ai.agent.name: Agent identifier
  • gen_ai.pipeline.name: Agent name
  • gen_ai.agent.instructions: Agent instructions
  • gen_ai.response.model: Model from child generation
  • gen_ai.response.text: Output from child generation
  • gen_ai.usage.*: Token usage from child generation