
Arize Exporter

Arize provides observability platforms for AI applications through Phoenix (open-source) and Arize AX (enterprise). The Arize exporter sends traces using OpenTelemetry and the OpenInference semantic conventions, making it compatible with any OpenTelemetry-based platform that understands OpenInference.

Installation

npm install @mastra/arize@beta

Configuration

Phoenix Setup

Phoenix is an open-source observability platform that can be self-hosted or used via Phoenix Cloud.

Prerequisites

  1. Phoenix Instance: Deploy using Docker or sign up at Phoenix Cloud
  2. Endpoint: Your Phoenix endpoint URL (ends in /v1/traces)
  3. API Key: Optional for unauthenticated instances, required for Phoenix Cloud
  4. Environment Variables: Set your configuration
.env
PHOENIX_ENDPOINT=http://localhost:6006/v1/traces  # Or your Phoenix Cloud URL
PHOENIX_API_KEY=your-api-key # Optional for local instances
PHOENIX_PROJECT_NAME=mastra-service # Optional, defaults to 'mastra-service'

Basic Setup

src/mastra/index.ts
import { Mastra } from "@mastra/core";
import { Observability } from "@mastra/observability";
import { ArizeExporter } from "@mastra/arize";

export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      arize: {
        serviceName: process.env.PHOENIX_PROJECT_NAME || "mastra-service",
        exporters: [
          new ArizeExporter({
            endpoint: process.env.PHOENIX_ENDPOINT!,
            apiKey: process.env.PHOENIX_API_KEY,
            projectName: process.env.PHOENIX_PROJECT_NAME,
          }),
        ],
      },
    },
  }),
});

Quick Start with Docker

Test locally with an in-memory Phoenix instance:

docker run --pull=always -d --name arize-phoenix -p 6006:6006 \
  -e PHOENIX_SQL_DATABASE_URL="sqlite:///:memory:" \
  arizephoenix/phoenix:latest

Set PHOENIX_ENDPOINT=http://localhost:6006/v1/traces and run your Mastra agent to see traces at localhost:6006.
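
If you don't have an agent wired up yet, a minimal sketch is shown below. It assumes the @ai-sdk/openai provider and an OPENAI_API_KEY; the agent name and model are placeholders, and any model provider supported by Mastra works the same way.

src/mastra/agents/test-agent.ts
import { openai } from "@ai-sdk/openai";
import { Agent } from "@mastra/core/agent";

// Hypothetical agent used only to produce a trace; name and model are placeholders.
export const testAgent = new Agent({
  name: "test-agent",
  instructions: "You are a helpful assistant.",
  model: openai("gpt-4o-mini"),
});

// Register it on the Mastra instance above (agents: { testAgent }), then call it once:
// await mastra.getAgent("testAgent").generate("Say hello");
// The resulting trace appears in the Phoenix UI at localhost:6006.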

Arize AX Setup

Arize AX is an enterprise observability platform with advanced features for production AI systems.

Prerequisites

  1. Arize AX Account: Sign up at app.arize.com
  2. Space ID: Your organization's space identifier
  3. API Key: Generate in Arize AX settings
  4. Environment Variables: Set your credentials
.env
ARIZE_SPACE_ID=your-space-id
ARIZE_API_KEY=your-api-key
ARIZE_PROJECT_NAME=mastra-service # Optional

Basic Setup

src/mastra/index.ts
import { Mastra } from "@mastra/core";
import { Observability } from "@mastra/observability";
import { ArizeExporter } from "@mastra/arize";

export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      arize: {
        serviceName: process.env.ARIZE_PROJECT_NAME || "mastra-service",
        exporters: [
          new ArizeExporter({
            apiKey: process.env.ARIZE_API_KEY!,
            spaceId: process.env.ARIZE_SPACE_ID!,
            projectName: process.env.ARIZE_PROJECT_NAME,
          }),
        ],
      },
    },
  }),
});

Configuration Options

The Arize exporter supports advanced configuration for fine-tuning OpenTelemetry behavior:

Complete Configuration

new ArizeExporter({
  // Phoenix Configuration
  endpoint: "https://your-collector.example.com/v1/traces", // Required for Phoenix

  // Arize AX Configuration
  spaceId: "your-space-id", // Required for Arize AX

  // Shared Configuration
  apiKey: "your-api-key", // Required for authenticated endpoints
  projectName: "mastra-service", // Optional project name

  // Optional OTLP settings
  headers: {
    "x-custom-header": "value", // Additional headers for OTLP requests
  },

  // Debug and performance tuning
  logLevel: "debug", // Logging: debug | info | warn | error
  batchSize: 512, // Number of spans to batch before export (default: 512)
  timeout: 30000, // Max time in ms to wait before export (default: 30000)

  // Custom resource attributes
  resourceAttributes: {
    "deployment.environment": process.env.NODE_ENV,
    "service.version": process.env.APP_VERSION,
  },
});

Batch Processing Options

Control how traces are batched and exported:

new ArizeExporter({
  endpoint: process.env.PHOENIX_ENDPOINT!,
  apiKey: process.env.PHOENIX_API_KEY,

  // Batch processing configuration
  batchSize: 512, // Number of spans to batch (default: 512)
  timeout: 30000, // Max time in ms to wait before export (default: 30000)
});

Resource Attributes

Add custom attributes to all exported spans:

new ArizeExporter({
  endpoint: process.env.PHOENIX_ENDPOINT!,
  resourceAttributes: {
    "deployment.environment": process.env.NODE_ENV,
    "service.namespace": "production",
    "service.instance.id": process.env.HOSTNAME,
    "custom.attribute": "value",
  },
});

Custom metadata

Non-reserved span attributes are serialized into the OpenInference metadata payload and appear in Arize and Phoenix. Add them via tracingOptions.metadata:

await agent.generate(input, {
  tracingOptions: {
    metadata: {
      companyId: "acme-co",
      tier: "enterprise",
    },
  },
});

Reserved fields such as input, output, sessionId, thread/user IDs, and OpenInference IDs are excluded automatically.

OpenInference Semantic Conventions

This exporter implements the OpenInference Semantic Conventions for generative AI applications, providing standardized trace structure across different observability platforms.
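
As a rough illustration, an LLM span exported through this integration carries attributes along the lines of the sketch below. Exact keys and values depend on the span kind, model, and call, so treat this as indicative rather than exhaustive.

// Illustrative attribute sketch only; actual spans follow the OpenInference spec.
const exampleLlmSpanAttributes = {
  "openinference.span.kind": "LLM",
  "llm.model_name": "gpt-4o-mini",
  "input.value": "Hello",
  "output.value": "Hi! How can I help?",
  "llm.token_count.prompt": 12,
  "llm.token_count.completion": 9,
  "tag.tags": ["production"],
};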

Using Tags

Tags help you categorize and filter traces in Phoenix and Arize AX. Add tags when executing agents or workflows:

const result = await agent.generate([{ role: "user", content: "Hello" }], {
  tracingOptions: {
    tags: ["production", "experiment-v2", "user-request"],
  },
});

Tags appear as the tag.tags attribute following OpenInference conventions and can be used to filter and search traces. Common use cases include:

  • Environment labels: "production", "staging"
  • Experiment tracking: "experiment-v1", "control-group"
  • Priority levels: "priority-high", "batch-job"
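
A common pattern is to derive tags from the runtime environment so every call is labeled consistently. A minimal sketch, reusing the tracingOptions API shown above (the prompt and tag values are placeholders):

// Derive tags from the environment so traces can be filtered per deployment.
const environmentTag =
  process.env.NODE_ENV === "production" ? "production" : "staging";

const result = await agent.generate("Summarize today's tickets", {
  tracingOptions: {
    tags: [environmentTag, "batch-job"],
  },
});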