Langfuse exporter
Langfuse is an open-source observability platform specifically designed for LLM applications. The Langfuse exporter sends your traces to Langfuse, providing detailed insights into model performance, token usage, and conversation flows.
Installation

npm install @mastra/langfuse@latest
pnpm add @mastra/langfuse@latest
yarn add @mastra/langfuse@latest
bun add @mastra/langfuse@latest
Configuration

Prerequisites
- Langfuse Account: Sign up at cloud.langfuse.com or deploy self-hosted
- API Keys: Create public/secret key pair in Langfuse Settings → API Keys
- Environment Variables: Set your credentials
LANGFUSE_PUBLIC_KEY=pk-lf-xxxxxxxxxxxx
LANGFUSE_SECRET_KEY=sk-lf-xxxxxxxxxxxx
LANGFUSE_BASE_URL=https://cloud.langfuse.com # Or your self-hosted URL
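Before constructing the exporter, it can help to fail fast when credentials are absent. The `missingLangfuseVars` helper below is a hypothetical sketch, not part of `@mastra/langfuse`:

```typescript
// Hypothetical startup check (not part of @mastra/langfuse): report which
// required Langfuse credentials are absent from an environment map.
function missingLangfuseVars(env: Record<string, string | undefined>): string[] {
  const required = ['LANGFUSE_PUBLIC_KEY', 'LANGFUSE_SECRET_KEY']
  return required.filter((name) => !env[name])
}

const missing = missingLangfuseVars(process.env)
if (missing.length > 0) {
  // Warn instead of throwing so local runs without tracing still work
  console.warn(`Langfuse tracing disabled; missing env vars: ${missing.join(', ')}`)
}
```

`LANGFUSE_BASE_URL` is intentionally not checked here, since the exporter can fall back to the cloud default.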
Zero-Config Setup
With environment variables set, use the exporter with no configuration:
import { Mastra } from '@mastra/core'
import { Observability } from '@mastra/observability'
import { LangfuseExporter } from '@mastra/langfuse'

export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      langfuse: {
        serviceName: 'my-service',
        exporters: [new LangfuseExporter()],
      },
    },
  }),
})
Explicit Configuration
You can also pass credentials directly (takes precedence over environment variables):
import { Mastra } from '@mastra/core'
import { Observability } from '@mastra/observability'
import { LangfuseExporter } from '@mastra/langfuse'

export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      langfuse: {
        serviceName: 'my-service',
        exporters: [
          new LangfuseExporter({
            publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
            secretKey: process.env.LANGFUSE_SECRET_KEY!,
            baseUrl: process.env.LANGFUSE_BASE_URL,
            environment: process.env.NODE_ENV,
            release: process.env.GIT_COMMIT,
          }),
        ],
      },
    },
  }),
})
Configuration options

Realtime vs Batch Mode
The Langfuse exporter supports two modes for sending traces:
Realtime Mode (Development)
Traces appear immediately in Langfuse dashboard, ideal for debugging:
new LangfuseExporter({
  publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
  secretKey: process.env.LANGFUSE_SECRET_KEY!,
  realtime: true, // Flush after each event
})
Batch Mode (Production)
Traces are buffered and sent in batches, reducing network overhead:
new LangfuseExporter({
  publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
  secretKey: process.env.LANGFUSE_SECRET_KEY!,
  realtime: false, // Default - batch traces
})
Batch Tuning for High-Volume Traces
For self-hosted Langfuse deployments or streamed runs that produce many spans per second, you can tune the OTEL batch size and flush interval to reduce request pressure on the Langfuse ingestion endpoint:
new LangfuseExporter({
  publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
  secretKey: process.env.LANGFUSE_SECRET_KEY!,
  flushAt: 500, // Maximum spans per OTEL export batch
  flushInterval: 20, // Maximum seconds between flushes
})
To suppress high-volume span types entirely (for example MODEL_CHUNK spans from streamed responses), use the observability-level excludeSpanTypes option rather than configuring the exporter:
import { SpanType } from '@mastra/core/observability'

new Observability({
  configs: {
    langfuse: {
      serviceName: 'my-service',
      exporters: [new LangfuseExporter()],
      excludeSpanTypes: [SpanType.MODEL_CHUNK],
    },
  },
})
Complete Configuration
new LangfuseExporter({
  // Required credentials
  publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
  secretKey: process.env.LANGFUSE_SECRET_KEY!,

  // Optional settings
  baseUrl: process.env.LANGFUSE_BASE_URL, // Default: https://cloud.langfuse.com
  realtime: process.env.NODE_ENV === 'development', // Dynamic mode selection
  flushAt: 500, // Maximum spans per OTEL export batch
  flushInterval: 20, // Maximum seconds between flushes
  logLevel: 'info', // Diagnostic logging: debug | info | warn | error

  // Langfuse-specific settings
  environment: process.env.NODE_ENV, // Shows in Langfuse UI for filtering
  release: process.env.GIT_COMMIT, // Git commit hash for version tracking
})
Prompt linking
You can link LLM generations to prompts stored in Langfuse Prompt Management. This enables version tracking and metrics for your prompts.
Using the Helper (Recommended)
Use withLangfusePrompt with buildTracingOptions for the cleanest API:
import { Agent } from '@mastra/core/agent'
import { buildTracingOptions } from '@mastra/observability'
import { LangfuseExporter, withLangfusePrompt } from '@mastra/langfuse'

const exporter = new LangfuseExporter()

// Fetch the prompt from Langfuse Prompt Management via the client
const prompt = await exporter.client.prompt.get('customer-support', { type: 'text' })

export const supportAgent = new Agent({
  name: 'support-agent',
  instructions: prompt.compile(), // Use the prompt text from Langfuse
  model: 'openai/gpt-5.4',
  defaultGenerateOptions: {
    tracingOptions: buildTracingOptions(
      withLangfusePrompt({ name: prompt.name, version: prompt.version }),
    ),
  },
})
The withLangfusePrompt helper accepts name and version fields for prompt linking. Langfuse v5 requires both fields.
Manual Fields
You can also pass the fields manually if you're not fetching prompts via the Langfuse SDK:
const tracingOptions = buildTracingOptions(withLangfusePrompt({ name: 'my-prompt', version: 1 }))
Prompt Object Fields
The prompt object requires both name and version:
| Field | Type | Description |
|---|---|---|
| name | string | The prompt name in Langfuse |
| version | number | The prompt version number |
When set on a MODEL_GENERATION span, the Langfuse exporter automatically links the generation to the corresponding prompt.
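The same link can also be applied to a single request instead of through defaultGenerateOptions. This is a sketch under the assumption that your Mastra version accepts tracingOptions in the per-call generate options:

```typescript
import { buildTracingOptions } from '@mastra/observability'
import { withLangfusePrompt } from '@mastra/langfuse'

// Hypothetical per-call usage: link only this generation to a specific
// prompt version (assumes `generate` accepts `tracingOptions`).
const reply = await supportAgent.generate('How do I reset my password?', {
  tracingOptions: buildTracingOptions(
    withLangfusePrompt({ name: 'customer-support', version: 3 }),
  ),
})
```

Per-call options of this kind typically override the agent's defaults for that request only.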