Arthur exporter
Arthur provides an observability and evaluation platform for AI applications through the open-source Arthur Engine. The Arthur exporter sends traces using OpenTelemetry and OpenInference semantic conventions.
Installation
- npm: npm install @mastra/arthur@latest
- pnpm: pnpm add @mastra/arthur@latest
- Yarn: yarn add @mastra/arthur@latest
- Bun: bun add @mastra/arthur@latest
Configuration
Prerequisites
- Arthur Engine instance: Follow the Docker Compose deployment guide to start an Arthur Engine
- API key: Generate an API key from the Arthur Engine UI at http://localhost:3030
- Task ID (optional): Create a task to route traces to a specific task
Task routing
Arthur Engine associates traces with tasks in two ways:
- By service name: Set serviceName in the observability config. Arthur Engine automatically routes traces to the task matching that name, creating it if needed.
- By task ID: Pass a pre-existing taskId to the exporter to send traces to a specific task directly.
If both are provided, taskId takes precedence.
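As a sketch of the precedence rule, the configuration below sets both routing mechanisms; assuming the ArthurExporter options documented later on this page, traces would go to the task identified by taskId, not to a task named 'my-service':

```typescript
import { Mastra } from '@mastra/core'
import { Observability } from '@mastra/observability'
import { ArthurExporter } from '@mastra/arthur'

export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      arthur: {
        // Would normally route traces to a task named 'my-service'...
        serviceName: 'my-service',
        exporters: [
          new ArthurExporter({
            // ...but an explicit taskId takes precedence over the service name.
            taskId: process.env.ARTHUR_TASK_ID,
          }),
        ],
      },
    },
  }),
})
```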
Environment variables
# Required
ARTHUR_API_KEY=your-api-key
ARTHUR_BASE_URL=http://localhost:3030
# Optional - route traces to a pre-existing task by ID
ARTHUR_TASK_ID=your-task-id
Zero-Config Setup
With environment variables set, use the exporter with no configuration:
import { Mastra } from '@mastra/core'
import { Observability } from '@mastra/observability'
import { ArthurExporter } from '@mastra/arthur'

export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      arthur: {
        serviceName: 'my-service',
        exporters: [new ArthurExporter()],
      },
    },
  }),
})
Explicit Configuration
You can also pass credentials directly (takes precedence over environment variables):
import { Mastra } from '@mastra/core'
import { Observability } from '@mastra/observability'
import { ArthurExporter } from '@mastra/arthur'

export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      arthur: {
        serviceName: 'my-service',
        exporters: [
          new ArthurExporter({
            apiKey: process.env.ARTHUR_API_KEY!,
            endpoint: process.env.ARTHUR_BASE_URL!,
            taskId: process.env.ARTHUR_TASK_ID,
          }),
        ],
      },
    },
  }),
})
Configuration options
Complete Configuration
new ArthurExporter({
  // Arthur configuration
  apiKey: 'your-api-key', // Required
  endpoint: 'http://localhost:3030', // Required
  taskId: 'your-task-id', // Optional

  // Optional OTLP settings
  headers: {
    'x-custom-header': 'value', // Additional headers for OTLP requests
  },

  // Debug and performance tuning
  logLevel: 'debug', // Logging: debug | info | warn | error
  batchSize: 512, // Number of spans to batch before exporting
  timeout: 30000, // Export timeout in ms

  // Custom resource attributes
  resourceAttributes: {
    'deployment.environment': process.env.NODE_ENV,
    'service.version': process.env.APP_VERSION,
  },
})
Custom metadata
Non-reserved span attributes are serialized into the OpenInference metadata payload and surface in Arthur. You can add them via tracingOptions.metadata:
await agent.generate(input, {
  tracingOptions: {
    metadata: {
      companyId: 'acme-co',
      tier: 'enterprise',
    },
  },
})
Reserved fields such as input, output, sessionId, thread/user IDs, and OpenInference IDs are excluded automatically.
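The filtering behavior can be pictured with a small illustrative sketch. This is not the exporter's actual implementation, and the exact reserved-key list below is an assumption based on the fields named above; it only demonstrates the idea that reserved keys are dropped before custom attributes reach the metadata payload:

```typescript
// Hypothetical reserved-key list for illustration only.
const RESERVED_KEYS = new Set(['input', 'output', 'sessionId', 'threadId', 'userId'])

// Keep only non-reserved attributes, mimicking the documented behavior
// of serializing custom metadata into the OpenInference payload.
function toMetadataPayload(attributes: Record<string, unknown>): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(attributes).filter(([key]) => !RESERVED_KEYS.has(key)),
  )
}

// Custom keys survive; reserved ones are excluded.
console.log(toMetadataPayload({ companyId: 'acme-co', tier: 'enterprise', input: 'hello' }))
// → { companyId: 'acme-co', tier: 'enterprise' }
```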
OpenInference semantic conventions
This exporter implements the OpenInference Semantic Conventions for generative AI applications, providing standardized trace structure across different observability platforms.