# Langfuse Exporter

[Langfuse](https://langfuse.com/) is an open-source observability platform specifically designed for LLM applications. The Langfuse exporter sends your traces to Langfuse, providing detailed insights into model performance, token usage, and conversation flows.

## Installation

```bash
npm install @mastra/langfuse@latest
```

```bash
pnpm add @mastra/langfuse@latest
```

```bash
yarn add @mastra/langfuse@latest
```

```bash
bun add @mastra/langfuse@latest
```

## Configuration

### Prerequisites

1. **Langfuse Account**: Sign up at [cloud.langfuse.com](https://cloud.langfuse.com) or deploy a self-hosted instance
2. **API Keys**: Create a public/secret key pair in Langfuse Settings → API Keys
3. **Environment Variables**: Set your credentials

```bash
LANGFUSE_PUBLIC_KEY=pk-lf-xxxxxxxxxxxx
LANGFUSE_SECRET_KEY=sk-lf-xxxxxxxxxxxx
LANGFUSE_BASE_URL=https://cloud.langfuse.com # Or your self-hosted URL
```

### Zero-Config Setup

With the environment variables set, use the exporter with no configuration:

```typescript
import { Mastra } from "@mastra/core";
import { Observability } from "@mastra/observability";
import { LangfuseExporter } from "@mastra/langfuse";

export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      langfuse: {
        serviceName: "my-service",
        exporters: [new LangfuseExporter()],
      },
    },
  }),
});
```

### Explicit Configuration

You can also pass credentials directly (they take precedence over environment variables):

```typescript
import { Mastra } from "@mastra/core";
import { Observability } from "@mastra/observability";
import { LangfuseExporter } from "@mastra/langfuse";

export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      langfuse: {
        serviceName: "my-service",
        exporters: [
          new LangfuseExporter({
            publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
            secretKey: process.env.LANGFUSE_SECRET_KEY!,
            baseUrl: process.env.LANGFUSE_BASE_URL,
            options: {
              environment: process.env.NODE_ENV,
            },
          }),
        ],
      },
    },
  }),
});
```

## Configuration Options

### Realtime vs Batch Mode

The Langfuse exporter supports two modes for sending traces:

#### Realtime Mode (Development)

Traces appear immediately in the Langfuse dashboard, ideal for debugging:

```typescript
new LangfuseExporter({
  publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
  secretKey: process.env.LANGFUSE_SECRET_KEY!,
  realtime: true, // Flush after each event
});
```

#### Batch Mode (Production)

Better performance with automatic batching:

```typescript
new LangfuseExporter({
  publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
  secretKey: process.env.LANGFUSE_SECRET_KEY!,
  realtime: false, // Default - batch traces
});
```

### Complete Configuration

```typescript
new LangfuseExporter({
  // Required credentials
  publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
  secretKey: process.env.LANGFUSE_SECRET_KEY!,

  // Optional settings
  baseUrl: process.env.LANGFUSE_BASE_URL, // Default: https://cloud.langfuse.com
  realtime: process.env.NODE_ENV === "development", // Dynamic mode selection
  logLevel: "info", // Diagnostic logging: debug | info | warn | error

  // Langfuse-specific options
  options: {
    environment: process.env.NODE_ENV, // Shows in UI for filtering
    version: process.env.APP_VERSION, // Track different versions
    release: process.env.GIT_COMMIT, // Git commit hash
  },
});
```

## Prompt Linking

You can link LLM generations to prompts stored in [Langfuse Prompt Management](https://langfuse.com/docs/prompt-management). This enables version tracking and metrics for your prompts.

### Using the Helper (Recommended)

Use `withLangfusePrompt` with `buildTracingOptions` for the cleanest API:

```typescript
import { Agent } from "@mastra/core/agent";
import { buildTracingOptions } from "@mastra/observability";
import { withLangfusePrompt } from "@mastra/langfuse";
import { Langfuse } from "langfuse";

// Reads credentials from LANGFUSE_SECRET_KEY, LANGFUSE_PUBLIC_KEY,
// and LANGFUSE_BASE_URL env vars
const langfuse = new Langfuse();

// Fetch the prompt from Langfuse Prompt Management
const prompt = await langfuse.getPrompt("customer-support");

export const supportAgent = new Agent({
  name: "support-agent",
  instructions: prompt.prompt, // Use the prompt text from Langfuse
  model: "openai/gpt-4o",
  defaultGenerateOptions: {
    tracingOptions: buildTracingOptions(withLangfusePrompt(prompt)),
  },
});
```

The `withLangfusePrompt` helper automatically extracts `name`, `version`, and `id` from the Langfuse prompt object.
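You can also link a prompt per request instead of on the agent definition. A minimal sketch, assuming `generate` accepts a `tracingOptions` field just like `defaultGenerateOptions` does; the import path for `supportAgent` and the pinned version number are illustrative:

```typescript
import { buildTracingOptions } from "@mastra/observability";
import { withLangfusePrompt } from "@mastra/langfuse";
import { Langfuse } from "langfuse";

import { supportAgent } from "./agents/support-agent"; // hypothetical path

const langfuse = new Langfuse();

// Pin a specific prompt version for this call (version 2 is illustrative)
const prompt = await langfuse.getPrompt("customer-support", 2);

// Assumption: per-call generate options accept the same tracingOptions shape
const result = await supportAgent.generate("How do I reset my password?", {
  tracingOptions: buildTracingOptions(withLangfusePrompt(prompt)),
});
```

Per-call linking can be useful when different requests should be attributed to different prompt versions, for example during a staged rollout.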
### Manual Fields

You can also pass the fields manually if you're not using the Langfuse SDK:

```typescript
const tracingOptions = buildTracingOptions(
  withLangfusePrompt({ name: "my-prompt", version: 1 }),
);

// Or with just an ID
const tracingOptionsById = buildTracingOptions(
  withLangfusePrompt({ id: "prompt-uuid-12345" }),
);
```

### Prompt Object Fields

The prompt object supports these fields:

| Field     | Type   | Description                        |
| --------- | ------ | ---------------------------------- |
| `name`    | string | The prompt name in Langfuse        |
| `version` | number | The prompt version number          |
| `id`      | string | The prompt UUID for direct linking |

You can link a prompt in any of these ways:

- `id` alone (the UUID uniquely identifies a prompt version)
- `name` + `version` together
- all three fields

When set on a `MODEL_GENERATION` span, the Langfuse exporter automatically links the generation to the corresponding prompt.

## Related

- [Tracing Overview](https://mastra.ai/docs/observability/tracing/overview/llms.txt)
- [Langfuse Documentation](https://langfuse.com/docs)
- [Langfuse Prompt Management](https://langfuse.com/docs/prompt-management)