This week brings two major features: new streamVNext and generateVNext methods with multiple output format support (Mastra and AI SDK v5), plus output processors for transforming agent streams. We've also added experimental AI tracing, Gemini Live voice capabilities, and significant playground improvements.
Release: @mastra/core@0.14.0
Let's dive in:
AI SDK v5 Support
We've added AI SDK v5 support throughout Mastra. We now handle both v4 and v5 message formats simultaneously, the playground model switcher auto-detects versions, and our new streamVNext and generateVNext APIs can output v5 streams. This makes migrating seamless while maintaining backward compatibility. Details in PR #6731 and PR #6909.
New Stream and Generate Methods with Format Options
The updated streamVNext and generateVNext methods now let you choose your output format: either Mastra's native format or AI SDK v5 format. This gives you flexibility in how you consume agent responses while maintaining compatibility with existing code.
```typescript
// New vNext methods with configurable output
const stream = await agent.streamVNext(messages, {
  format: "aisdk", // defaults to "mastra"
});

const result = await agent.generateVNext(messages, {
  format: "aisdk", // defaults to "mastra"
});
```
The new methods are available across the entire stack (core agents, Client JS SDK, playground, and agent networks). Check out PR #6877 for details.
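Whichever format you pick, the response arrives as a stream of chunks you consume with async iteration. Here's a minimal, self-contained sketch of that consumption pattern; the mockTextStream generator is a stand-in for a real agent stream, and the actual chunk shapes in Mastra and AI SDK v5 differ:

```typescript
// Illustrative only: a mock async-iterable stream of text chunks,
// standing in for the object returned by agent.streamVNext().
async function* mockTextStream(): AsyncGenerator<string> {
  yield "Hello, ";
  yield "world!";
}

// Consume the stream chunk by chunk and assemble the full response.
async function collectText(stream: AsyncIterable<string>): Promise<string> {
  let text = "";
  for await (const chunk of stream) {
    text += chunk;
  }
  return text;
}
```

In practice you'd render each chunk as it arrives rather than buffering the whole response, but the for-await loop is the same either way.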
Output Processors for Agent Streams
We've unified input and output processing with a single Processor interface, enabling stateful output processing for agent streams. This means you can now transform, filter, or enrich agent responses in real-time as they stream.
Output Processors allow you to transform AI responses after they are generated by the language model but before they are returned to users. This is useful for implementing moderation, transformation, and safety controls on AI-generated content.
```typescript
const agent = new Agent({
  // ... other config
  outputProcessors: [
    new CustomProcessor({
      processOutputStream: async (chunk) => {
        // Transform output as it streams
        return transformedChunk;
      },
    }),
  ],
});
```
We've also shipped built-in output processors for common use cases: ModerationProcessor, PIIDetector, and TokenLimiterProcessor, to name a few.
See PR #6482 or the docs for full API examples.
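To make the chunk-by-chunk transform idea concrete, here's a self-contained sketch of a PII-style filter that redacts email addresses from streamed text chunks. This mimics what a processor like PIIDetector does, but the TextChunk type and function names here are invented for illustration and are not Mastra's actual Processor interface:

```typescript
// Hypothetical chunk shape for illustration; real stream chunks differ.
type TextChunk = { type: "text"; text: string };

// Redact anything that looks like an email address in a single chunk.
function redactEmails(chunk: TextChunk): TextChunk {
  return {
    ...chunk,
    text: chunk.text.replace(/[\w.+-]+@[\w-]+\.[\w.]+/g, "[redacted]"),
  };
}

// Apply the transform chunk by chunk, as an output processor would.
function processStream(chunks: TextChunk[]): TextChunk[] {
  return chunks.map(redactEmails);
}
```

Note that a purely per-chunk transform like this would miss an email address split across two chunks; handling that requires buffering state between chunks, which is exactly the kind of stateful output processing the unified Processor interface is designed to support.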
Experimental AI Tracing
We've added experimental AI tracing support for workflows and agents based on recent design updates. This gives you deeper visibility into operations for debugging and optimization.
Enable it with the new observability providers. We've added a LangFuse exporter this week (PR #6783).
Gemini Live Voice Integration
We've integrated Google's Gemini Live API with real-time TTS, STT, and voice-to-voice capabilities. This enables building voice-powered agents with natural conversation flow.
See PR #6677 for implementation details.
Other Notable Updates
- Playground: Added model switcher with improved suggestions UI and dynamic options based on selected model version
- Evals: New runExperiment method for running and scoring agents in test suites or CI (PR #6761)
- CLI: Added a 'scorers' subcommand for managing and listing scorers (PR #6704)
- Workflows: Added a custom ID parameter to the .map method and improved suspend/resume documentation
- Deployer: Fixed TypeScript build issues and Cloudflare deployment bugs
For the complete list of changes and fixes, check out the full release notes.
Happy building! 🚀