LangfuseExporter
Sends Tracing data to Langfuse for observability.
Constructor
new LangfuseExporter(config: LangfuseExporterConfig)
LangfuseExporterConfig
interface LangfuseExporterConfig extends BaseExporterConfig {
publicKey?: string;
secretKey?: string;
baseUrl?: string;
realtime?: boolean;
options?: any;
}
Extends BaseExporterConfig, which includes:
- logger?: IMastraLogger - Logger instance
- logLevel?: LogLevel | 'debug' | 'info' | 'warn' | 'error' - Log level (default: INFO)
Methods
exportTracingEvent
async exportTracingEvent(event: TracingEvent): Promise<void>
Exports a tracing event to Langfuse.
export
async export(spans: ReadOnlySpan[]): Promise<void>
Batch exports spans to Langfuse.
flush
async flush(): Promise<void>
Force flushes any buffered spans to Langfuse without shutting down the exporter. Useful in serverless environments where you need to ensure spans are exported before the runtime terminates.
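For example, in a serverless handler you can flush after the work that produces spans and before returning (a minimal sketch; the handler shape is illustrative and not part of this API):
import { LangfuseExporter } from "@mastra/langfuse";

const exporter = new LangfuseExporter();

export async function handler(): Promise<void> {
  // ... run agents or workflows that emit tracing data ...

  // Make sure buffered spans reach Langfuse before the runtime suspends or terminates
  await exporter.flush();
}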
shutdown
async shutdown(): Promise<void>
Flushes pending data and shuts down the client.
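For long-running processes, call shutdown during graceful termination (a sketch; where you hook this depends on your runtime):
import { LangfuseExporter } from "@mastra/langfuse";

const exporter = new LangfuseExporter();

// Flush any remaining spans and close the underlying Langfuse client before exit
process.on("SIGTERM", async () => {
  await exporter.shutdown();
  process.exit(0);
});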
Usage
Zero-Config (using environment variables)
import { LangfuseExporter } from "@mastra/langfuse";
// Reads from LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY, LANGFUSE_BASE_URL
const exporter = new LangfuseExporter();
Explicit Configuration
import { LangfuseExporter } from "@mastra/langfuse";
const exporter = new LangfuseExporter({
publicKey: process.env.LANGFUSE_PUBLIC_KEY,
secretKey: process.env.LANGFUSE_SECRET_KEY,
baseUrl: "https://cloud.langfuse.com",
realtime: true,
});
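To have spans flow through the exporter, register it in Mastra's observability configuration. The snippet below is a sketch: the observability.configs shape, the langfuse config key, and serviceName are assumptions about Mastra's tracing setup, so consult the Mastra observability docs for the exact structure.
import { Mastra } from "@mastra/core";
import { LangfuseExporter } from "@mastra/langfuse";

export const mastra = new Mastra({
  observability: {
    // Assumed config shape: a named tracing config with a list of exporters
    configs: {
      langfuse: {
        serviceName: "my-service",
        exporters: [new LangfuseExporter({ realtime: true })],
      },
    },
  },
});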
Span Mapping
- Root spans → Langfuse traces
- MODEL_GENERATION spans → Langfuse generations
- All other spans → Langfuse spans
- Event spans → Langfuse events
Prompt Linking
Link LLM generations to Langfuse Prompt Management using the withLangfusePrompt helper:
import { Agent } from "@mastra/core/agent";
import { buildTracingOptions } from "@mastra/observability";
import { withLangfusePrompt } from "@mastra/langfuse";
import { Langfuse } from "langfuse";
const langfuse = new Langfuse({
publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
secretKey: process.env.LANGFUSE_SECRET_KEY!,
});
const prompt = await langfuse.getPrompt("customer-support");
const agent = new Agent({
name: "support-agent",
instructions: prompt.prompt,
model: "openai/gpt-4o",
defaultGenerateOptions: {
tracingOptions: buildTracingOptions(withLangfusePrompt(prompt)),
},
});
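The same options can also be passed per call instead of through defaultGenerateOptions (assuming your generate call accepts the same options object as defaultGenerateOptions; adjust to your setup):
const response = await agent.generate("How do I reset my password?", {
  tracingOptions: buildTracingOptions(withLangfusePrompt(prompt)),
});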
Helper Functions
withLangfusePrompt(prompt)
Adds Langfuse prompt metadata to tracing options.
// With Langfuse SDK prompt object
withLangfusePrompt(prompt)
// With manual fields
withLangfusePrompt({ name: "my-prompt", version: 1 })
withLangfusePrompt({ id: "prompt-uuid" })
When metadata.langfuse.prompt is set on a MODEL_GENERATION span (with either id alone, or name + version), the exporter automatically links the generation to the prompt in Langfuse.
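Concretely, the helper produces span metadata shaped like the following (a sketch of the fields the exporter checks, based on the description above; you normally do not construct this by hand):
// Either name + version:
const promptMetadataByName = {
  langfuse: {
    prompt: { name: "my-prompt", version: 1 },
  },
};

// Or id alone:
const promptMetadataById = {
  langfuse: {
    prompt: { id: "prompt-uuid" },
  },
};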