# Langfuse Exporter

Langfuse is an open-source observability platform specifically designed for LLM applications. The Langfuse exporter sends your traces to Langfuse, providing detailed insights into model performance, token usage, and conversation flows.
## Installation

```bash
npm install @mastra/langfuse@beta
```
## Configuration

### Prerequisites

- **Langfuse Account**: Sign up at cloud.langfuse.com or deploy self-hosted
- **API Keys**: Create a public/secret key pair in Langfuse Settings → API Keys
- **Environment Variables**: Set your credentials:

```bash
LANGFUSE_PUBLIC_KEY=pk-lf-xxxxxxxxxxxx
LANGFUSE_SECRET_KEY=sk-lf-xxxxxxxxxxxx
LANGFUSE_BASE_URL=https://cloud.langfuse.com # Or your self-hosted URL
```
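Both keys are required at runtime, so it can help to fail fast when they are missing. A minimal sketch of a startup check (illustrative only, not part of the exporter):

```typescript
// Illustrative startup check: throw early if Langfuse credentials are unset,
// rather than letting the exporter fail silently later.
for (const key of ["LANGFUSE_PUBLIC_KEY", "LANGFUSE_SECRET_KEY"] as const) {
  if (!process.env[key]) {
    throw new Error(`Missing required environment variable: ${key}`);
  }
}
```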
### Basic Setup

```typescript
import { Mastra } from "@mastra/core";
import { Observability } from "@mastra/observability";
import { LangfuseExporter } from "@mastra/langfuse";

export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      langfuse: {
        serviceName: "my-service",
        exporters: [
          new LangfuseExporter({
            publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
            secretKey: process.env.LANGFUSE_SECRET_KEY!,
            baseUrl: process.env.LANGFUSE_BASE_URL,
            options: {
              environment: process.env.NODE_ENV,
            },
          }),
        ],
      },
    },
  }),
});
```
## Configuration Options

### Realtime vs Batch Mode

The Langfuse exporter supports two modes for sending traces:

#### Realtime Mode (Development)

Traces appear immediately in the Langfuse dashboard, which makes this mode ideal for debugging:
```typescript
new LangfuseExporter({
  publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
  secretKey: process.env.LANGFUSE_SECRET_KEY!,
  realtime: true, // Flush after each event
});
```
#### Batch Mode (Production)

Better performance with automatic batching:

```typescript
new LangfuseExporter({
  publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
  secretKey: process.env.LANGFUSE_SECRET_KEY!,
  realtime: false, // Default - batch traces
});
```
### Complete Configuration

```typescript
new LangfuseExporter({
  // Required credentials
  publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
  secretKey: process.env.LANGFUSE_SECRET_KEY!,

  // Optional settings
  baseUrl: process.env.LANGFUSE_BASE_URL, // Default: https://cloud.langfuse.com
  realtime: process.env.NODE_ENV === "development", // Dynamic mode selection
  logLevel: "info", // Diagnostic logging: debug | info | warn | error

  // Langfuse-specific options
  options: {
    environment: process.env.NODE_ENV, // Shows in UI for filtering
    version: process.env.APP_VERSION, // Track different versions
    release: process.env.GIT_COMMIT, // Git commit hash
  },
});
```
## Prompt Linking

You can link LLM generations to prompts stored in Langfuse Prompt Management. This enables version tracking and metrics for your prompts.

### Using the Helper (Recommended)

Use `withLangfusePrompt` with `buildTracingOptions` for the cleanest API:
```typescript
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";
import { buildTracingOptions } from "@mastra/observability";
import { withLangfusePrompt } from "@mastra/langfuse";
import { Langfuse } from "langfuse";

const langfuse = new Langfuse({
  publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
  secretKey: process.env.LANGFUSE_SECRET_KEY!,
});

// Fetch the prompt from Langfuse Prompt Management
const prompt = await langfuse.getPrompt("customer-support");

export const supportAgent = new Agent({
  name: "support-agent",
  instructions: prompt.prompt, // Use the prompt text from Langfuse
  model: openai("gpt-4o"),
  defaultGenerateOptions: {
    tracingOptions: buildTracingOptions(withLangfusePrompt(prompt)),
  },
});
```
The `withLangfusePrompt` helper automatically extracts `name`, `version`, and `id` from the Langfuse prompt object.
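The same options can also be passed per call instead of as an agent default. A minimal sketch reusing the `prompt` and `supportAgent` from above (the message content is illustrative):

```typescript
const result = await supportAgent.generate({
  messages: [{ role: "user", content: "How do I reset my password?" }],
  // Link just this generation to the Langfuse prompt
  tracingOptions: buildTracingOptions(withLangfusePrompt(prompt)),
});
```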
### Manual Fields

You can also pass the fields manually if you're not using the Langfuse SDK:

```typescript
const tracingOptions = buildTracingOptions(
  withLangfusePrompt({ name: "my-prompt", version: 1 }),
);
```

Or with just an ID:

```typescript
const tracingOptions = buildTracingOptions(
  withLangfusePrompt({ id: "prompt-uuid-12345" }),
);
```
### Prompt Object Fields

The prompt object supports these fields:

| Field | Type | Description |
|---|---|---|
| `name` | string | The prompt name in Langfuse |
| `version` | number | The prompt version number |
| `id` | string | The prompt UUID for direct linking |

You can link prompts using either:

- `id` alone (the UUID uniquely identifies a prompt version)
- `name` + `version` together
- All three fields

When set on a `MODEL_GENERATION` span, the Langfuse exporter automatically links the generation to the corresponding prompt.
## Using Tags

Tags help you categorize and filter traces in the Langfuse dashboard. Add tags when executing agents or workflows:

```typescript
const result = await agent.generate({
  messages: [{ role: "user", content: "Hello" }],
  tracingOptions: {
    tags: ["production", "experiment-v2", "user-request"],
  },
});
```

Tags appear in Langfuse's trace view and can be used to filter and search traces. Common use cases include:

- Environment labels: `"production"`, `"staging"`
- Experiment tracking: `"experiment-v1"`, `"control-group"`
- Priority levels: `"priority-high"`, `"batch-job"`
- User segments: `"beta-user"`, `"enterprise"`