
Langfuse exporter

Langfuse is an open-source observability platform specifically designed for LLM applications. The Langfuse exporter sends your traces to Langfuse, providing detailed insights into model performance, token usage, and conversation flows.

Installation

npm install @mastra/langfuse@latest

Configuration

Prerequisites

  1. Langfuse Account: Sign up at cloud.langfuse.com or deploy self-hosted
  2. API Keys: Create public/secret key pair in Langfuse Settings → API Keys
  3. Environment Variables: Set your credentials
.env
LANGFUSE_PUBLIC_KEY=pk-lf-xxxxxxxxxxxx
LANGFUSE_SECRET_KEY=sk-lf-xxxxxxxxxxxx
LANGFUSE_BASE_URL=https://cloud.langfuse.com # Or your self-hosted URL
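
The exporter reads these variables at startup. As a safeguard before wiring it up, you can check for missing credentials yourself; the helper below is a hypothetical sketch (`missingLangfuseEnv` is not part of the Langfuse package), so traces are not silently dropped later:

```typescript
// Returns the names of any Langfuse credential variables that are unset,
// so the app can fail fast instead of silently dropping traces.
function missingLangfuseEnv(env: Record<string, string | undefined>): string[] {
  const required = ['LANGFUSE_PUBLIC_KEY', 'LANGFUSE_SECRET_KEY']
  return required.filter((name) => !env[name])
}

// Example: warn at startup about what's missing.
const missing = missingLangfuseEnv(process.env)
if (missing.length > 0) {
  console.warn(`Missing Langfuse environment variables: ${missing.join(', ')}`)
}
```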

Zero-Config Setup

With environment variables set, use the exporter with no configuration:

src/mastra/index.ts
import { Mastra } from '@mastra/core'
import { Observability } from '@mastra/observability'
import { LangfuseExporter } from '@mastra/langfuse'

export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      langfuse: {
        serviceName: 'my-service',
        exporters: [new LangfuseExporter()],
      },
    },
  }),
})

Explicit Configuration

You can also pass credentials directly (takes precedence over environment variables):

src/mastra/index.ts
import { Mastra } from '@mastra/core'
import { Observability } from '@mastra/observability'
import { LangfuseExporter } from '@mastra/langfuse'

export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      langfuse: {
        serviceName: 'my-service',
        exporters: [
          new LangfuseExporter({
            publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
            secretKey: process.env.LANGFUSE_SECRET_KEY!,
            baseUrl: process.env.LANGFUSE_BASE_URL,
            environment: process.env.NODE_ENV,
            release: process.env.GIT_COMMIT,
          }),
        ],
      },
    },
  }),
})

Configuration options

Realtime vs Batch Mode

The Langfuse exporter supports two modes for sending traces:

Realtime Mode (Development)

Traces appear immediately in the Langfuse dashboard, which is ideal for debugging:

new LangfuseExporter({
  publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
  secretKey: process.env.LANGFUSE_SECRET_KEY!,
  realtime: true, // Flush after each event
})

Batch Mode (Production)

Better performance with automatic batching:

new LangfuseExporter({
  publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
  secretKey: process.env.LANGFUSE_SECRET_KEY!,
  realtime: false, // Default - batch traces
})

Batch Tuning for High-Volume Traces

For self-hosted Langfuse deployments or streamed runs that produce many spans per second, you can tune the OTEL batch size and flush interval to reduce request pressure on the Langfuse ingestion endpoint:

new LangfuseExporter({
  publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
  secretKey: process.env.LANGFUSE_SECRET_KEY!,
  flushAt: 500, // Maximum spans per OTEL export batch
  flushInterval: 20, // Maximum seconds between flushes
})

To suppress high-volume span types entirely (for example MODEL_CHUNK spans from streamed responses), use the observability-level excludeSpanTypes option rather than configuring the exporter:

import { SpanType } from '@mastra/core/observability'

new Observability({
  configs: {
    langfuse: {
      serviceName: 'my-service',
      exporters: [new LangfuseExporter()],
      excludeSpanTypes: [SpanType.MODEL_CHUNK],
    },
  },
})

Complete Configuration

new LangfuseExporter({
  // Required credentials
  publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
  secretKey: process.env.LANGFUSE_SECRET_KEY!,

  // Optional settings
  baseUrl: process.env.LANGFUSE_BASE_URL, // Default: https://cloud.langfuse.com
  realtime: process.env.NODE_ENV === 'development', // Dynamic mode selection
  flushAt: 500, // Maximum spans per OTEL export batch
  flushInterval: 20, // Maximum seconds between flushes
  logLevel: 'info', // Diagnostic logging: debug | info | warn | error

  // Langfuse-specific settings
  environment: process.env.NODE_ENV, // Shows in Langfuse UI for filtering
  release: process.env.GIT_COMMIT, // Git commit hash for version tracking
})

Prompt linking

You can link LLM generations to prompts stored in Langfuse Prompt Management. This enables version tracking and metrics for your prompts.

Use withLangfusePrompt with buildTracingOptions for the cleanest API:

src/agents/support-agent.ts
import { Agent } from '@mastra/core/agent'
import { buildTracingOptions } from '@mastra/observability'
import { LangfuseExporter, withLangfusePrompt } from '@mastra/langfuse'

const exporter = new LangfuseExporter()

// Fetch the prompt from Langfuse Prompt Management via the client
const prompt = await exporter.client.prompt.get('customer-support', { type: 'text' })

export const supportAgent = new Agent({
  name: 'support-agent',
  instructions: prompt.compile(), // Use the prompt text from Langfuse
  model: 'openai/gpt-5.4',
  defaultGenerateOptions: {
    tracingOptions: buildTracingOptions(
      withLangfusePrompt({ name: prompt.name, version: prompt.version }),
    ),
  },
})

The withLangfusePrompt helper accepts name and version fields for prompt linking. Langfuse v5 requires both fields.

Manual Fields

You can also pass the fields manually if you're not fetching the prompt through the Langfuse SDK:

const tracingOptions = buildTracingOptions(withLangfusePrompt({ name: 'my-prompt', version: 1 }))

Prompt Object Fields

The prompt object requires both name and version:

Field     Type     Description
name      string   The prompt name in Langfuse
version   number   The prompt version number

When set on a MODEL_GENERATION span, the Langfuse exporter automatically links the generation to the corresponding prompt.
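
Beyond defaultGenerateOptions, the same options can be built per call. The fragment below is a hedged sketch: it assumes the agent's generate options accept tracingOptions (as in the agent definition above), and 'my-prompt' is a hypothetical prompt name.

```typescript
import { buildTracingOptions } from '@mastra/observability'
import { withLangfusePrompt } from '@mastra/langfuse'

// Hypothetical per-call usage: link this one generation to a specific
// prompt version instead of setting it on the agent's defaults.
const result = await supportAgent.generate('How do I reset my password?', {
  tracingOptions: buildTracingOptions(
    withLangfusePrompt({ name: 'my-prompt', version: 1 }),
  ),
})
```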