# Arize Exporter
Arize provides observability platforms for AI applications: Phoenix (open source) and Arize AX (enterprise). The Arize exporter sends traces using OpenTelemetry with OpenInference semantic conventions, so it also works with any OpenTelemetry-compatible platform that supports OpenInference.
## Installation

```bash
npm install @mastra/arize@beta
```
## Configuration

### Phoenix Setup
Phoenix is an open-source observability platform that can be self-hosted or used via Phoenix Cloud.
#### Prerequisites

- **Phoenix Instance**: Deploy using Docker or sign up at Phoenix Cloud
- **Endpoint**: Your Phoenix endpoint URL (ends in `/v1/traces`)
- **API Key**: Optional for unauthenticated instances, required for Phoenix Cloud
- **Environment Variables**: Set your configuration

```bash
PHOENIX_ENDPOINT=http://localhost:6006/v1/traces # Or your Phoenix Cloud URL
PHOENIX_API_KEY=your-api-key # Optional for local instances
PHOENIX_PROJECT_NAME=mastra-service # Optional, defaults to 'mastra-service'
```
#### Basic Setup

```typescript
import { Mastra } from "@mastra/core";
import { Observability } from "@mastra/observability";
import { ArizeExporter } from "@mastra/arize";

export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      arize: {
        serviceName: process.env.PHOENIX_PROJECT_NAME || "mastra-service",
        exporters: [
          new ArizeExporter({
            endpoint: process.env.PHOENIX_ENDPOINT!,
            apiKey: process.env.PHOENIX_API_KEY,
            projectName: process.env.PHOENIX_PROJECT_NAME,
          }),
        ],
      },
    },
  }),
});
```
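Once the exporter is registered, traces are emitted automatically whenever an agent on this `Mastra` instance runs. A minimal sketch of triggering a traced run follows; the agent name, instructions, the `@ai-sdk/openai` model provider, and the `getAgent`/`generate` calls are illustrative assumptions about the surrounding Mastra setup, not part of the exporter itself:

```typescript
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai"; // assumed model provider

// Hypothetical agent; running it produces spans that the
// configured ArizeExporter ships to Phoenix.
const weatherAgent = new Agent({
  name: "weather-agent",
  instructions: "Answer questions about the weather.",
  model: openai("gpt-4o-mini"),
});

// Register the agent alongside the observability config
// (e.g. `agents: { weatherAgent }` in the Mastra constructor),
// then trigger a traced run:
// const reply = await mastra.getAgent("weatherAgent").generate("Will it rain?");
```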
#### Quick Start with Docker

Test locally with an in-memory Phoenix instance:

```bash
docker run --pull=always -d --name arize-phoenix -p 6006:6006 \
  -e PHOENIX_SQL_DATABASE_URL="sqlite:///:memory:" \
  arizephoenix/phoenix:latest
```

Set `PHOENIX_ENDPOINT=http://localhost:6006/v1/traces` and run your Mastra agent to see traces at `http://localhost:6006`.
### Arize AX Setup
Arize AX is an enterprise observability platform with advanced features for production AI systems.
#### Prerequisites

- **Arize AX Account**: Sign up at app.arize.com
- **Space ID**: Your organization's space identifier
- **API Key**: Generate one in Arize AX settings
- **Environment Variables**: Set your credentials

```bash
ARIZE_SPACE_ID=your-space-id
ARIZE_API_KEY=your-api-key
ARIZE_PROJECT_NAME=mastra-service # Optional
```
#### Basic Setup

```typescript
import { Mastra } from "@mastra/core";
import { Observability } from "@mastra/observability";
import { ArizeExporter } from "@mastra/arize";

export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      arize: {
        serviceName: process.env.ARIZE_PROJECT_NAME || "mastra-service",
        exporters: [
          new ArizeExporter({
            apiKey: process.env.ARIZE_API_KEY!,
            spaceId: process.env.ARIZE_SPACE_ID!,
            projectName: process.env.ARIZE_PROJECT_NAME,
          }),
        ],
      },
    },
  }),
});
```
## Configuration Options

The Arize exporter supports advanced configuration for fine-tuning OpenTelemetry behavior.

### Complete Configuration
```typescript
new ArizeExporter({
  // Phoenix configuration
  endpoint: "https://your-collector.example.com/v1/traces", // Required for Phoenix

  // Arize AX configuration
  spaceId: "your-space-id", // Required for Arize AX

  // Shared configuration
  apiKey: "your-api-key", // Required for authenticated endpoints
  projectName: "mastra-service", // Optional project name

  // Optional OTLP settings
  headers: {
    "x-custom-header": "value", // Additional headers for OTLP requests
  },

  // Debug and performance tuning
  logLevel: "debug", // Logging: debug | info | warn | error
  batchSize: 512, // Batch size before exporting spans
  timeout: 30000, // Timeout in ms before exporting spans

  // Custom resource attributes
  resourceAttributes: {
    "deployment.environment": process.env.NODE_ENV,
    "service.version": process.env.APP_VERSION,
  },
});
```
### Batch Processing Options

Control how traces are batched and exported:

```typescript
new ArizeExporter({
  endpoint: process.env.PHOENIX_ENDPOINT!,
  apiKey: process.env.PHOENIX_API_KEY,

  // Batch processing configuration
  batchSize: 512, // Number of spans to batch (default: 512)
  timeout: 30000, // Max time in ms to wait before export (default: 30000)
});
```
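The interplay of `batchSize` and `timeout` can be pictured with a minimal sketch: spans accumulate in a buffer and are exported as a batch when the buffer fills, or when the timeout elapses, whichever comes first. This is an illustration of the batching idea, not the exporter's actual implementation (which delegates to OpenTelemetry's batch processing):

```typescript
// Illustrative size-or-timeout batching. A full batch flushes
// immediately; a timer (or shutdown) flushes any remainder.
type Span = { name: string };

class BatchBuffer {
  private buffer: Span[] = [];
  public exports: Span[][] = []; // batches that were "exported"

  constructor(private batchSize: number) {}

  add(span: Span): void {
    this.buffer.push(span);
    if (this.buffer.length >= this.batchSize) this.flush();
  }

  // In the real processor this is also triggered by the timeout.
  flush(): void {
    if (this.buffer.length === 0) return;
    this.exports.push(this.buffer);
    this.buffer = [];
  }
}

const buf = new BatchBuffer(3);
for (let i = 0; i < 7; i++) buf.add({ name: `span-${i}` });
buf.flush(); // timeout/shutdown flushes the remaining spans
const batchSizes = buf.exports.map((b) => b.length); // batch sizes: [3, 3, 1]
```

Larger `batchSize` values reduce export overhead at the cost of spans sitting longer in memory; the timeout bounds that delay.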
### Resource Attributes

Add custom attributes to all exported spans:

```typescript
new ArizeExporter({
  endpoint: process.env.PHOENIX_ENDPOINT!,
  resourceAttributes: {
    "deployment.environment": process.env.NODE_ENV,
    "service.namespace": "production",
    "service.instance.id": process.env.HOSTNAME,
    "custom.attribute": "value",
  },
});
```
## OpenInference Semantic Conventions

This exporter implements the OpenInference Semantic Conventions for generative AI applications, providing standardized trace structure across different observability platforms.
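As an illustration, an LLM-call span exported under these conventions carries standardized attribute keys such as `openinference.span.kind` and `llm.model_name`. The keys below follow the OpenInference spec; the values are made-up examples:

```typescript
// Illustrative OpenInference attributes for one LLM span
// (keys per the spec; values are invented for the example).
const spanAttributes: Record<string, string | number> = {
  "openinference.span.kind": "LLM", // span kind: LLM, CHAIN, TOOL, ...
  "llm.model_name": "gpt-4o-mini",
  "input.value": "What is observability?",
  "output.value": "Observability is ...",
  "llm.token_count.prompt": 12,
  "llm.token_count.completion": 48,
};
```

Because these keys are standardized, any OpenInference-aware backend (Phoenix, Arize AX, or another OpenTelemetry platform) can render the same trace consistently.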