Sentry Exporter
Sentry is an application monitoring platform with AI-specific tracing capabilities. The Sentry exporter sends your traces to Sentry using OpenTelemetry semantic conventions, providing insights into model performance, token usage, and tool executions.
Installation
- npm: npm install @mastra/sentry@beta
- pnpm: pnpm add @mastra/sentry@beta
- Yarn: yarn add @mastra/sentry@beta
- Bun: bun add @mastra/sentry@beta
Configuration
Prerequisites
- Sentry Account: Sign up at sentry.io
- DSN: Get your Data Source Name from Project Settings → Client Keys
- Environment Variables: Set your configuration
SENTRY_DSN=https://...@...sentry.io/...
# Optional
SENTRY_ENVIRONMENT=production
SENTRY_RELEASE=1.0.0
Zero-Config Setup
With environment variables set, use the exporter with no configuration:
import { Mastra } from "@mastra/core";
import { Observability } from "@mastra/observability";
import { SentryExporter } from "@mastra/sentry";

export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      sentry: {
        serviceName: "my-service",
        exporters: [new SentryExporter()],
      },
    },
  }),
});
Explicit Configuration
You can also pass credentials directly (takes precedence over environment variables):
import { Mastra } from "@mastra/core";
import { Observability } from "@mastra/observability";
import { SentryExporter } from "@mastra/sentry";

export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      sentry: {
        serviceName: "my-service",
        exporters: [
          new SentryExporter({
            dsn: process.env.SENTRY_DSN!,
            environment: "production",
            tracesSampleRate: 1.0, // Send 100% of transactions to Sentry
          }),
        ],
      },
    },
  }),
});
Configuration Options
Complete Configuration
new SentryExporter({
  // Required settings
  dsn: process.env.SENTRY_DSN!, // Data Source Name - tells the SDK where to send events

  // Optional settings
  environment: "production", // Deployment environment (enables filtering issues and alerts by environment)
  tracesSampleRate: 1.0, // Percentage of transactions sent to Sentry (0.0 = 0%, 1.0 = 100%)
  release: "1.0.0", // Version of your code deployed (helps identify regressions and track deployments)

  // Advanced Sentry options
  options: {
    // Any additional Sentry.NodeOptions
    integrations: [],
    beforeSend: (event) => event,
    // ... other Sentry SDK options
  },

  // Diagnostic logging
  logLevel: "info", // debug | info | warn | error
});
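The options block passes through to the Sentry Node SDK, so a beforeSend hook can scrub sensitive data (such as raw prompt text) before events leave your process. The sketch below uses a simplified stand-in for the event shape, and redactPrompts is a hypothetical helper, not part of @mastra/sentry:

```typescript
// Simplified stand-in for a Sentry event carrying trace data (illustrative only).
type SentryLikeEvent = {
  contexts?: { trace?: { data?: Record<string, unknown> } };
};

// Hypothetical beforeSend filter: replace captured prompt content before sending.
function redactPrompts(event: SentryLikeEvent): SentryLikeEvent {
  const data = event.contexts?.trace?.data;
  if (data && "gen_ai.request.messages" in data) {
    data["gen_ai.request.messages"] = "[redacted]";
  }
  return event;
}
```

A hook like this would be wired in via options.beforeSend in the configuration above.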
Sampling Configuration
Control the percentage of transactions sent to Sentry. This is useful for high-volume applications:
new SentryExporter({
  dsn: process.env.SENTRY_DSN!,
  tracesSampleRate: 0.1, // Send 10% of transactions to Sentry (recommended for high-load backends)
});
Use 1.0 (100%) in development and 0.1 to 0.2 (10-20%) for high-load production applications. To disable tracing entirely, leave tracesSampleRate unset rather than setting it to 0.
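A common pattern is to derive the sample rate from the deployment environment rather than hard-coding it. The sampleRateFor helper below is a hypothetical sketch following the guidance above, not part of @mastra/sentry:

```typescript
// Hypothetical helper: choose a tracesSampleRate per deployment environment.
function sampleRateFor(env: string | undefined): number | undefined {
  switch (env) {
    case "development":
      return 1.0; // see every trace locally
    case "production":
      return 0.1; // keep 10% of transactions under high load
    default:
      return undefined; // leaving it unset disables tracing entirely
  }
}
```

The result could then be passed as tracesSampleRate when constructing the exporter, e.g. driven by SENTRY_ENVIRONMENT.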
Span Type Mapping
Mastra span types are automatically mapped to Sentry operations:
| Mastra SpanType | Sentry Operation | Notes |
|---|---|---|
| AGENT_RUN | gen_ai.invoke_agent | Contains tokens from child MODEL_GENERATION span |
| MODEL_GENERATION | gen_ai.chat | Includes usage stats, streaming data |
| MODEL_STEP | (skipped) | Skipped to simplify trace hierarchy |
| MODEL_CHUNK | (skipped) | Data aggregated in MODEL_GENERATION |
| TOOL_CALL | gen_ai.execute_tool | Tool execution with input/output |
| MCP_TOOL_CALL | gen_ai.execute_tool | MCP tool execution |
| WORKFLOW_RUN | workflow.run | |
| WORKFLOW_STEP | workflow.step | |
| WORKFLOW_CONDITIONAL | workflow.conditional | |
| WORKFLOW_CONDITIONAL_EVAL | workflow.conditional | |
| WORKFLOW_PARALLEL | workflow.parallel | |
| WORKFLOW_LOOP | workflow.loop | |
| WORKFLOW_SLEEP | workflow.sleep | |
| WORKFLOW_WAIT_EVENT | workflow.wait | |
| PROCESSOR_RUN | ai.processor | |
| GENERIC | ai.span | |
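For post-processing exported spans, the table above can be expressed as a lookup. This record is a sketch derived from the table (null marks skipped types), not the exporter's actual source:

```typescript
// Sketch of the SpanType -> Sentry operation mapping from the table above.
// null means the span type is skipped entirely by the exporter.
const SPAN_OP: Record<string, string | null> = {
  AGENT_RUN: "gen_ai.invoke_agent",
  MODEL_GENERATION: "gen_ai.chat",
  MODEL_STEP: null,
  MODEL_CHUNK: null,
  TOOL_CALL: "gen_ai.execute_tool",
  MCP_TOOL_CALL: "gen_ai.execute_tool",
  WORKFLOW_RUN: "workflow.run",
  WORKFLOW_STEP: "workflow.step",
  WORKFLOW_CONDITIONAL: "workflow.conditional",
  WORKFLOW_CONDITIONAL_EVAL: "workflow.conditional",
  WORKFLOW_PARALLEL: "workflow.parallel",
  WORKFLOW_LOOP: "workflow.loop",
  WORKFLOW_SLEEP: "workflow.sleep",
  WORKFLOW_WAIT_EVENT: "workflow.wait",
  PROCESSOR_RUN: "ai.processor",
  GENERIC: "ai.span",
};
```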
OpenTelemetry Semantic Conventions
The exporter uses standard GenAI semantic conventions with Sentry-specific attributes:
For MODEL_GENERATION spans:
- gen_ai.system: Model provider (e.g., openai, anthropic)
- gen_ai.request.model: Model identifier (e.g., gpt-4)
- gen_ai.response.model: Response model
- gen_ai.response.text: Output text response
- gen_ai.response.tool_calls: Tool calls made during generation (JSON array)
- gen_ai.usage.input_tokens: Input token count
- gen_ai.usage.output_tokens: Output token count
- gen_ai.request.temperature: Temperature parameter
- gen_ai.request.stream: Whether streaming was requested
- gen_ai.request.messages: Input messages/prompts (JSON)
- gen_ai.completion_start_time: Time the first token arrived
For TOOL_CALL spans:
- gen_ai.tool.name: Tool identifier
- gen_ai.tool.type: function
- gen_ai.tool.call.id: Tool call ID
- gen_ai.tool.input: Tool input (JSON)
- gen_ai.tool.output: Tool output (JSON)
- tool.success: Whether the tool call succeeded
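Taken together, the attributes attached to a TOOL_CALL span have roughly this shape. toolCallAttributes is a hypothetical illustration of the convention, not exporter code:

```typescript
// Illustrative shape of a tool call as the exporter might see it.
interface ToolCallRecord {
  name: string;
  id: string;
  input: unknown;
  output: unknown;
  success: boolean;
}

// Hypothetical: build the TOOL_CALL attribute set listed above.
function toolCallAttributes(call: ToolCallRecord): Record<string, unknown> {
  return {
    "gen_ai.tool.name": call.name,
    "gen_ai.tool.type": "function",
    "gen_ai.tool.call.id": call.id,
    "gen_ai.tool.input": JSON.stringify(call.input),
    "gen_ai.tool.output": JSON.stringify(call.output),
    "tool.success": call.success,
  };
}
```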
For AGENT_RUN spans:
- gen_ai.agent.name: Agent identifier
- gen_ai.pipeline.name: Agent name (for the Sentry AI view)
- gen_ai.agent.instructions: Agent instructions
- gen_ai.response.model: Model from child generation
- gen_ai.response.text: Output text from child generation
- gen_ai.usage.*: Token usage from child generation
Features
- Hierarchical traces: Maintains parent-child relationships
- Token tracking: Automatic token usage tracking for generations
- Tool call tracking: Captures tool executions with input/output
- Streaming support: Aggregates streaming responses
- Error tracking: Automatic error status and exception capture
- Workflow support: Tracks workflow execution steps
- Simplified hierarchy: MODEL_STEP and MODEL_CHUNK spans are skipped to reduce noise