Langfuse Exporter

Langfuse is an open-source observability platform specifically designed for LLM applications. The Langfuse exporter sends your traces to Langfuse, providing detailed insights into model performance, token usage, and conversation flows.

Installation

npm install @mastra/langfuse@beta

Configuration

Prerequisites

  1. Langfuse Account: Sign up at cloud.langfuse.com or deploy a self-hosted instance
  2. API Keys: Create a public/secret key pair in Langfuse Settings → API Keys
  3. Environment Variables: Set your credentials:
.env
LANGFUSE_PUBLIC_KEY=pk-lf-xxxxxxxxxxxx
LANGFUSE_SECRET_KEY=sk-lf-xxxxxxxxxxxx
LANGFUSE_BASE_URL=https://cloud.langfuse.com # Or your self-hosted URL
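
If your runtime does not load .env automatically, load it before constructing the Mastra instance. A minimal sketch using the dotenv package (an assumption; many setups, including Mastra's dev server, load .env on their own):

import "dotenv/config"; // populates process.env from .env at startup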

Basic Setup

src/mastra/index.ts
import { Mastra } from "@mastra/core";
import { Observability } from "@mastra/observability";
import { LangfuseExporter } from "@mastra/langfuse";

export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      langfuse: {
        serviceName: "my-service",
        exporters: [
          new LangfuseExporter({
            publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
            secretKey: process.env.LANGFUSE_SECRET_KEY!,
            baseUrl: process.env.LANGFUSE_BASE_URL,
            options: {
              environment: process.env.NODE_ENV,
            },
          }),
        ],
      },
    },
  }),
});
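
Once the exporter is registered, agent and workflow executions are traced automatically. A minimal sketch of exercising the setup, assuming an agent named "myAgent" has been registered on the Mastra instance (none is shown in the config above):

const agent = mastra.getAgent("myAgent"); // "myAgent" is an assumed agent name
const result = await agent.generate("Hello!"); // this call is traced and exported to Langfuse
console.log(result.text);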

Configuration Options

Realtime vs Batch Mode

The Langfuse exporter supports two modes for sending traces:

Realtime Mode (Development)

Traces appear immediately in the Langfuse dashboard, making this mode ideal for debugging:

new LangfuseExporter({
  publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
  secretKey: process.env.LANGFUSE_SECRET_KEY!,
  realtime: true, // Flush after each event
});

Batch Mode (Production)

Better performance with automatic batching:

new LangfuseExporter({
  publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
  secretKey: process.env.LANGFUSE_SECRET_KEY!,
  realtime: false, // Default - batch traces
});

Complete Configuration

new LangfuseExporter({
  // Required credentials
  publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
  secretKey: process.env.LANGFUSE_SECRET_KEY!,

  // Optional settings
  baseUrl: process.env.LANGFUSE_BASE_URL, // Default: https://cloud.langfuse.com
  realtime: process.env.NODE_ENV === "development", // Dynamic mode selection
  logLevel: "info", // Diagnostic logging: debug | info | warn | error

  // Langfuse-specific options
  options: {
    environment: process.env.NODE_ENV, // Shows in UI for filtering
    version: process.env.APP_VERSION, // Track different versions
    release: process.env.GIT_COMMIT, // Git commit hash
  },
});

Prompt Linking

You can link LLM generations to prompts stored in Langfuse Prompt Management. This enables version tracking and metrics for your prompts.

Use withLangfusePrompt with buildTracingOptions for the cleanest API:

src/agents/support-agent.ts
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";
import { buildTracingOptions } from "@mastra/observability";
import { withLangfusePrompt } from "@mastra/langfuse";
import { Langfuse } from "langfuse";

const langfuse = new Langfuse({
  publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
  secretKey: process.env.LANGFUSE_SECRET_KEY!,
});

// Fetch the prompt from Langfuse Prompt Management
const prompt = await langfuse.getPrompt("customer-support");

export const supportAgent = new Agent({
  name: "support-agent",
  instructions: prompt.prompt, // Use the prompt text from Langfuse
  model: openai("gpt-4o"),
  defaultGenerateOptions: {
    tracingOptions: buildTracingOptions(withLangfusePrompt(prompt)),
  },
});

The withLangfusePrompt helper automatically extracts name, version, and id from the Langfuse prompt object.
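
The same options can also be passed per call instead of as an agent default. A sketch, reusing the prompt object fetched in the example above:

const result = await supportAgent.generate({
  messages: [{ role: "user", content: "Where is my order?" }],
  tracingOptions: buildTracingOptions(withLangfusePrompt(prompt)),
});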

Manual Fields

You can also pass the fields manually if you're not fetching prompts with the Langfuse SDK:

const tracingOptions = buildTracingOptions(
  withLangfusePrompt({ name: "my-prompt", version: 1 }),
);

// Or with just an ID
const tracingOptionsById = buildTracingOptions(
  withLangfusePrompt({ id: "prompt-uuid-12345" }),
);

Prompt Object Fields

The prompt object supports these fields:

| Field   | Type   | Description                        |
| ------- | ------ | ---------------------------------- |
| name    | string | The prompt name in Langfuse        |
| version | number | The prompt version number          |
| id      | string | The prompt UUID for direct linking |

You can link prompts in any of the following ways:

  • id alone (the UUID uniquely identifies a prompt version)
  • name + version together
  • All three fields
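
For completeness, all three fields can be supplied together. A sketch with placeholder values:

const tracingOptionsFull = buildTracingOptions(
  withLangfusePrompt({
    id: "prompt-uuid-12345", // placeholder UUID
    name: "my-prompt",
    version: 1,
  }),
);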

When set on a MODEL_GENERATION span, the Langfuse exporter automatically links the generation to the corresponding prompt.

Using Tags

Tags help you categorize and filter traces in the Langfuse dashboard. Add tags when executing agents or workflows:

const result = await agent.generate({
  messages: [{ role: "user", content: "Hello" }],
  tracingOptions: {
    tags: ["production", "experiment-v2", "user-request"],
  },
});

Tags appear in Langfuse's trace view and can be used to filter and search traces. Common use cases include:

  • Environment labels: "production", "staging"
  • Experiment tracking: "experiment-v1", "control-group"
  • Priority levels: "priority-high", "batch-job"
  • User segments: "beta-user", "enterprise"
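
Tags can be combined across these categories at request time. A minimal sketch, where isBetaUser is an assumed flag from your own user model:

// Compose tags from the environment and (assumed) user attributes.
const tags = [process.env.NODE_ENV === "production" ? "production" : "staging"];
if (isBetaUser) tags.push("beta-user"); // isBetaUser is hypothetical

const result = await agent.generate({
  messages: [{ role: "user", content: "Hello" }],
  tracingOptions: { tags },
});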