
Arthur exporter

Arthur provides an observability and evaluation platform for AI applications through the open-source Arthur Engine. The Arthur exporter sends traces using OpenTelemetry and the OpenInference semantic conventions.

Installation

npm install @mastra/arthur@latest

Configuration

Prerequisites

  1. Arthur Engine instance: Follow the Docker Compose deployment guide to start an Arthur Engine
  2. API key: Generate an API key from the Arthur Engine UI at http://localhost:3030
  3. Task ID (optional): Create a task to route traces to a specific task
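Before wiring up the exporter, you can sanity-check that the engine is reachable. A minimal sketch, assuming a Node 18+ runtime with global fetch and the default local deployment; checkArthurEngine is an illustrative helper, not part of @mastra/arthur:

```typescript
// Illustrative connectivity check against a local Arthur Engine instance.
// The URL and the use of plain fetch are assumptions; adjust to your deployment.
async function checkArthurEngine(baseUrl: string): Promise<boolean> {
  try {
    const res = await fetch(baseUrl);
    return res.ok;
  } catch {
    // Network errors (engine not running, wrong port) resolve to false.
    return false;
  }
}
```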

Task routing

Arthur Engine associates traces with tasks in two ways:

  • By service name: Set serviceName in the observability config. Arthur Engine automatically routes traces to the task matching that name, creating it if needed.
  • By task ID: Pass a pre-existing taskId to the exporter to send traces to a specific task directly.

If both are provided, taskId takes precedence.
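The precedence rule can be sketched as follows. This is an illustrative model of the routing decision, not the exporter's internals; resolveTaskRoute and TaskRoute are hypothetical names:

```typescript
// Hypothetical sketch of Arthur Engine's task-routing precedence.
type TaskRoute =
  | { kind: 'taskId'; value: string }
  | { kind: 'serviceName'; value: string };

function resolveTaskRoute(opts: { taskId?: string; serviceName?: string }): TaskRoute {
  // An explicit taskId always wins over service-name matching.
  if (opts.taskId) return { kind: 'taskId', value: opts.taskId };
  // Otherwise the trace is routed by service name, creating the task if needed.
  if (opts.serviceName) return { kind: 'serviceName', value: opts.serviceName };
  throw new Error('Either taskId or serviceName must be provided');
}
```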

Environment variables

.env
# Required
ARTHUR_API_KEY=your-api-key
ARTHUR_BASE_URL=http://localhost:3030

# Optional - route traces to a pre-existing task by ID
ARTHUR_TASK_ID=your-task-id

Zero-config setup

With environment variables set, use the exporter with no configuration:

src/mastra/index.ts
import { Mastra } from '@mastra/core'
import { Observability } from '@mastra/observability'
import { ArthurExporter } from '@mastra/arthur'

export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      arthur: {
        serviceName: 'my-service',
        exporters: [new ArthurExporter()],
      },
    },
  }),
})

Explicit configuration

You can also pass credentials directly (takes precedence over environment variables):

src/mastra/index.ts
import { Mastra } from '@mastra/core'
import { Observability } from '@mastra/observability'
import { ArthurExporter } from '@mastra/arthur'

export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      arthur: {
        serviceName: 'my-service',
        exporters: [
          new ArthurExporter({
            apiKey: process.env.ARTHUR_API_KEY!,
            endpoint: process.env.ARTHUR_BASE_URL!,
            taskId: process.env.ARTHUR_TASK_ID,
          }),
        ],
      },
    },
  }),
})

Configuration options

Complete configuration

new ArthurExporter({
  // Arthur configuration
  apiKey: 'your-api-key', // Required
  endpoint: 'http://localhost:3030', // Required
  taskId: 'your-task-id', // Optional

  // Optional OTLP settings
  headers: {
    'x-custom-header': 'value', // Additional headers for OTLP requests
  },

  // Debug and performance tuning
  logLevel: 'debug', // Logging level: debug | info | warn | error
  batchSize: 512, // Number of spans to batch before exporting
  timeout: 30000, // Export timeout in ms

  // Custom resource attributes
  resourceAttributes: {
    'deployment.environment': process.env.NODE_ENV,
    'service.version': process.env.APP_VERSION,
  },
})

Custom metadata

Non-reserved span attributes are serialized into the OpenInference metadata payload and surface in Arthur. You can add them via tracingOptions.metadata:

await agent.generate(input, {
  tracingOptions: {
    metadata: {
      companyId: 'acme-co',
      tier: 'enterprise',
    },
  },
})

Reserved fields such as input, output, sessionId, thread/user IDs, and OpenInference IDs are excluded automatically.
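The exclusion behaves like a key filter over the span attributes. A minimal sketch of the idea, not the exporter's actual implementation; RESERVED_KEYS and filterMetadata are hypothetical names, and the reserved set here is abbreviated:

```typescript
// Illustrative filter: drop reserved span attributes so only custom
// metadata reaches the OpenInference metadata payload.
const RESERVED_KEYS = new Set(['input', 'output', 'sessionId', 'threadId', 'userId']);

function filterMetadata(attrs: Record<string, unknown>): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(attrs).filter(([key]) => !RESERVED_KEYS.has(key)),
  );
}
```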

OpenInference semantic conventions

This exporter implements the OpenInference Semantic Conventions for generative AI applications, providing standardized trace structure across different observability platforms.