# SentryExporter

Sends tracing data to Sentry for AI tracing and monitoring using OpenTelemetry semantic conventions.
## Constructor

```typescript
new SentryExporter(config?: SentryExporterConfig)
```
## SentryExporterConfig

```typescript
interface SentryExporterConfig extends BaseExporterConfig {
  dsn?: string;
  environment?: string;
  tracesSampleRate?: number;
  release?: string;
  options?: Partial<Sentry.NodeOptions>;
}
```
Extends `BaseExporterConfig`, which includes:

- `logger?: IMastraLogger` - Logger instance
- `logLevel?: LogLevel | 'debug' | 'info' | 'warn' | 'error'` - Log level (default: `INFO`)
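For example, the inherited `logLevel` field can be used to surface the exporter's own diagnostics while debugging trace export (a configuration sketch; the DSN comes from your environment):

```typescript
import { SentryExporter } from "@mastra/sentry";

// Sketch: raise the exporter's internal log level while debugging.
// logLevel controls the exporter's logging, not Sentry sampling.
const exporter = new SentryExporter({
  dsn: process.env.SENTRY_DSN,
  logLevel: "debug",
});
```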
## Methods

### exportTracingEvent

```typescript
async exportTracingEvent(event: TracingEvent): Promise<void>
```

Exports a tracing event to Sentry. Handles `SPAN_STARTED`, `SPAN_UPDATED`, and `SPAN_ENDED` events.
### flush

```typescript
async flush(): Promise<void>
```

Force-flushes any pending spans to Sentry without shutting down the exporter, waiting up to 2 seconds for pending data to be sent. Useful in serverless environments where spans must be exported before the runtime terminates.
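In a serverless function, one way to guarantee the flush is to wrap the handler so `flush()` always runs before the invocation returns. A minimal sketch, assuming only the documented `flush()` method — the `Flushable` type and `withFlush` helper are illustrative names, not part of `@mastra/sentry`:

```typescript
// Illustrative helper: run a handler, then flush the exporter before returning,
// so pending spans are sent before the serverless runtime freezes or exits.
type Flushable = { flush(): Promise<void> };

function withFlush<T, R>(
  exporter: Flushable,
  handler: (input: T) => Promise<R>,
): (input: T) => Promise<R> {
  return async (input: T): Promise<R> => {
    try {
      return await handler(input);
    } finally {
      await exporter.flush(); // waits up to 2 seconds for pending data
    }
  };
}
```

You would then export `withFlush(exporter, yourHandler)` as the function entry point.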
### shutdown

```typescript
async shutdown(): Promise<void>
```

Ends all active spans, clears internal state, and closes the Sentry connection. Waits up to 2 seconds for pending data to be sent.
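In a long-running service, a common pattern is to call `shutdown()` from process signal handlers so buffered spans reach Sentry before exit. A sketch, assuming only the documented `shutdown()` method — `registerShutdown` is an illustrative helper name:

```typescript
// Illustrative: shut the exporter down cleanly when the process is told to stop,
// so active spans are ended and buffered data is sent before exiting.
function registerShutdown(exporter: { shutdown(): Promise<void> }): void {
  const close = async (): Promise<void> => {
    await exporter.shutdown(); // ends spans, flushes (up to 2s), closes Sentry
    process.exit(0);
  };
  process.once("SIGINT", close);
  process.once("SIGTERM", close);
}
```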
## Usage

### Zero-Config (using environment variables)

```typescript
import { SentryExporter } from "@mastra/sentry";

// Reads from SENTRY_DSN, SENTRY_ENVIRONMENT, SENTRY_RELEASE
const exporter = new SentryExporter();
```
### Explicit Configuration

```typescript
import { SentryExporter } from "@mastra/sentry";

const exporter = new SentryExporter({
  dsn: process.env.SENTRY_DSN,
  environment: "production",
  tracesSampleRate: 1.0,
  release: "1.0.0",
});
```
### With Sampling

```typescript
import { SentryExporter } from "@mastra/sentry";

const exporter = new SentryExporter({
  dsn: process.env.SENTRY_DSN,
  environment: "production",
  // Send 10% of transactions to Sentry (recommended for high-load backends)
  tracesSampleRate: 0.1,
});
```
### With Custom Sentry Options

```typescript
import { SentryExporter } from "@mastra/sentry";

const exporter = new SentryExporter({
  dsn: process.env.SENTRY_DSN,
  environment: "production",
  options: {
    integrations: [
      // Custom Sentry integrations
    ],
    beforeSend: (event) => {
      // Custom event processing
      return event;
    },
  },
});
```
## Span Mapping

Mastra span types are mapped to Sentry operations:

| Mastra `SpanType` | Sentry Operation |
|---|---|
| `AGENT_RUN` | `gen_ai.invoke_agent` |
| `MODEL_GENERATION` | `gen_ai.chat` |
| `MODEL_STEP` | (skipped) |
| `MODEL_CHUNK` | (skipped) |
| `TOOL_CALL` | `gen_ai.execute_tool` |
| `MCP_TOOL_CALL` | `gen_ai.execute_tool` |
| `WORKFLOW_RUN` | `workflow.run` |
| `WORKFLOW_STEP` | `workflow.step` |
| `WORKFLOW_CONDITIONAL` | `workflow.conditional` |
| `WORKFLOW_CONDITIONAL_EVAL` | `workflow.conditional` |
| `WORKFLOW_PARALLEL` | `workflow.parallel` |
| `WORKFLOW_LOOP` | `workflow.loop` |
| `WORKFLOW_SLEEP` | `workflow.sleep` |
| `WORKFLOW_WAIT_EVENT` | `workflow.wait` |
| `PROCESSOR_RUN` | `ai.processor` |
| `GENERIC` | `ai.span` |

`MODEL_STEP` and `MODEL_CHUNK` spans are skipped to simplify the trace hierarchy; their data is aggregated into `MODEL_GENERATION` spans.
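If you need the same mapping in your own code, e.g. to filter or group spans by operation in a `beforeSend` hook, the table can be expressed as a lookup. A sketch: `SPAN_OP` is an illustrative name, with `null` marking skipped span types:

```typescript
// Illustrative lookup table mirroring the mapping above; null = skipped.
const SPAN_OP: Record<string, string | null> = {
  AGENT_RUN: "gen_ai.invoke_agent",
  MODEL_GENERATION: "gen_ai.chat",
  MODEL_STEP: null,
  MODEL_CHUNK: null,
  TOOL_CALL: "gen_ai.execute_tool",
  MCP_TOOL_CALL: "gen_ai.execute_tool",
  WORKFLOW_RUN: "workflow.run",
  WORKFLOW_STEP: "workflow.step",
  WORKFLOW_CONDITIONAL: "workflow.conditional",
  WORKFLOW_CONDITIONAL_EVAL: "workflow.conditional",
  WORKFLOW_PARALLEL: "workflow.parallel",
  WORKFLOW_LOOP: "workflow.loop",
  WORKFLOW_SLEEP: "workflow.sleep",
  WORKFLOW_WAIT_EVENT: "workflow.wait",
  PROCESSOR_RUN: "ai.processor",
  GENERIC: "ai.span",
};
```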
## Environment Variables

The exporter reads configuration from these environment variables:

| Variable | Description |
|---|---|
| `SENTRY_DSN` | Data Source Name - tells the SDK where to send events |
| `SENTRY_ENVIRONMENT` | Deployment environment name |
| `SENTRY_RELEASE` | Version of your deployed code |
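For the zero-config constructor, these can be set in the shell before starting the process. The values below are placeholders, not real credentials:

```shell
# Placeholder values - substitute your project's real DSN and release.
export SENTRY_DSN="https://examplePublicKey@o0.ingest.sentry.io/0"
export SENTRY_ENVIRONMENT="production"
export SENTRY_RELEASE="1.0.0"
```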
## OpenTelemetry Attributes

The exporter sets these standard OpenTelemetry attributes:

Common attributes (all spans):

- `sentry.origin`: `auto.ai.mastra` (identifies spans from Mastra)
- `ai.span.type`: Mastra span type
`MODEL_GENERATION` spans:

- `gen_ai.operation.name`: `chat`
- `gen_ai.system`: Model provider
- `gen_ai.request.model`: Model identifier
- `gen_ai.request.messages`: Input messages/prompts (JSON)
- `gen_ai.response.model`: Response model
- `gen_ai.response.text`: Output text
- `gen_ai.response.tool_calls`: Tool calls made during generation (JSON array)
- `gen_ai.usage.input_tokens`: Input token count
- `gen_ai.usage.output_tokens`: Output token count
- `gen_ai.usage.total_tokens`: Total tokens
- `gen_ai.request.stream`: Streaming flag
- `gen_ai.request.temperature`: Temperature parameter
- `gen_ai.completion_start_time`: First token time
`TOOL_CALL` spans:

- `gen_ai.operation.name`: `execute_tool`
- `gen_ai.tool.name`: Tool identifier
- `gen_ai.tool.type`: `function`
- `gen_ai.tool.call.id`: Tool call ID
- `gen_ai.tool.input`: Tool input
- `gen_ai.tool.output`: Tool output
- `tool.success`: Success flag
`AGENT_RUN` spans:

- `gen_ai.operation.name`: `invoke_agent`
- `gen_ai.agent.name`: Agent identifier
- `gen_ai.pipeline.name`: Agent name
- `gen_ai.agent.instructions`: Agent instructions
- `gen_ai.response.model`: Model from child generation
- `gen_ai.response.text`: Output from child generation
- `gen_ai.usage.*`: Token usage from child generation