
LangfuseExporter

Sends tracing data to Langfuse for observability.

Constructor

new LangfuseExporter(config: LangfuseExporterConfig)

LangfuseExporterConfig

interface LangfuseExporterConfig extends BaseExporterConfig {
  publicKey?: string
  secretKey?: string
  baseUrl?: string
  realtime?: boolean
  flushAt?: number
  flushInterval?: number
  environment?: string
  release?: string
}

Extends BaseExporterConfig, which includes:

  • logger?: IMastraLogger - Logger instance
  • logLevel?: LogLevel | 'debug' | 'info' | 'warn' | 'error' - Log level (default: INFO)

Properties

client

get client(): LangfuseClient | undefined

The LangfuseClient instance for advanced Langfuse features. Returns undefined when the exporter is disabled (missing credentials). Use this for prompt management, evaluations, datasets, and direct API access.

Methods

flush

async flush(): Promise<void>

Force flushes any buffered spans and scores to Langfuse without shutting down. Useful in serverless environments where you need to ensure data is exported before the runtime terminates.

shutdown

async shutdown(): Promise<void>

Flushes pending data and shuts down both the span processor and the client.

Usage

Zero-Config (using environment variables)

import { LangfuseExporter } from '@mastra/langfuse'

// Reads from LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY, LANGFUSE_BASE_URL
const exporter = new LangfuseExporter()

Explicit Configuration

import { LangfuseExporter } from '@mastra/langfuse'

const exporter = new LangfuseExporter({
  publicKey: process.env.LANGFUSE_PUBLIC_KEY,
  secretKey: process.env.LANGFUSE_SECRET_KEY,
  baseUrl: 'https://cloud.langfuse.com',
  realtime: true,
})

Span mapping

Mastra spans are converted to OTel format and sent to Langfuse via LangfuseSpanProcessor. Langfuse automatically maps spans based on their gen_ai.* attributes:

  • Spans with a model attribute → Langfuse generations
  • All other spans → Langfuse spans
  • Parent-child relationships are preserved via OTel trace/span IDs

Prompt linking

Link LLM generations to Langfuse Prompt Management using the withLangfusePrompt helper:

import { Agent } from '@mastra/core/agent'
import { buildTracingOptions } from '@mastra/observability'
import { LangfuseExporter, withLangfusePrompt } from '@mastra/langfuse'

const exporter = new LangfuseExporter()

// Fetch prompt via the Langfuse client
const prompt = await exporter.client.prompt.get('customer-support', { type: 'text' })

const agent = new Agent({
  name: 'support-agent',
  instructions: prompt.compile(),
  model: 'openai/gpt-5.4',
  defaultGenerateOptions: {
    tracingOptions: buildTracingOptions(
      withLangfusePrompt({ name: prompt.name, version: prompt.version }),
    ),
  },
})

Helper Functions

withLangfusePrompt(prompt)

Adds Langfuse prompt metadata to tracing options.

// Link by name and version (required for Langfuse v5)
withLangfusePrompt({ name: 'my-prompt', version: 1 })

The prompt metadata is passed through as span attributes and Langfuse links the generation to the corresponding prompt.