The next evolution of Mastra streaming


Sep 25, 2025

Next week (the week of September 28), we're shipping a significant update to how streaming works in Mastra—we're making our new streaming architecture the default experience starting from @mastra/core v0.19.0.

This is a breaking change—the first we've introduced in a while—and we want to explain why.

The journey to better streaming

When we first built Mastra, we relied entirely on Vercel's AI SDK for streaming. It was a pragmatic choice that let us focus on what made Mastra unique: the orchestration layer that sits above language models.

But as developers started building more complex apps with Mastra, we kept hitting the same limitations. What happens when an agent calls another agent? How do you stream results from a workflow that contains multiple agents? How do you report progress from a long-running tool without blocking the entire stream?

Our old implementation worked beautifully for single model calls, but Mastra needed something more.

So we built our own streaming layer.

What we're actually changing

We're promoting our battle-tested streamVNext and generateVNext functions to become the new stream and generate.

If you've been using the vNext functions, you already know what's coming. These functions implement our custom streaming protocol, which handles nested agent calls, long-running tools, and more complex orchestration patterns. They've been running in production across many applications for weeks.

For everyone else, this means the stream and generate functions you're familiar with will work differently after the update. They'll accept different arguments and return a different streaming format.

We initially introduced these as vNext functions to give developers time to test and provide feedback. That period has been invaluable and we’ve now built confidence that this is the right foundation for Mastra's future.

Why we're making this change now

We don't take breaking changes lightly. Every breaking change creates work for everyone involved. So why now?

The AI SDK v5 catalyst

When Vercel released AI SDK v5 in July with its own breaking changes, we had a choice. We could have immediately updated our stream and generate functions to support v5, forcing everyone to migrate at once. Instead, we built v5 compatibility into our vNext functions, allowing developers to adopt it when ready.

Now, two months later, most of the ecosystem has moved to v5. By making our v5-compatible streaming the default, we're aligning with where the community already is.

Nested streaming is no longer optional

We've written before about nested streaming, but it bears repeating: modern AI applications are compositional. Agents call other agents. Workflows orchestrate multiple models. Tools can be entire applications themselves.

Our custom streaming protocol makes these patterns not just possible but elegant. When an agent in your workflow calls another agent, both streams compose naturally. When a tool needs to report progress during a long operation, it can stream updates without blocking.
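As an illustrative sketch of the tool-progress pattern (the `writer` parameter name and the event shape here are assumptions for illustration, not documented API), a long-running tool might interleave progress updates into the agent's stream like this:

```typescript
import { createTool } from "@mastra/core/tools";
import { z } from "zod";

export const importRecordsTool = createTool({
  id: "import-records",
  description: "Imports records and streams progress as it goes",
  inputSchema: z.object({ count: z.number() }),
  // `writer` stands in for the stream writer the new protocol hands to
  // tools (name assumed); each write composes into the parent stream
  // without blocking the rest of the agent's output.
  execute: async ({ context, writer }) => {
    for (let i = 0; i < context.count; i++) {
      await writer?.write({ type: "progress", done: i + 1, total: context.count });
    }
    return { imported: context.count };
  },
});
```

The key property is that these custom events flow through the same stream as the model's tokens, so the frontend sees one ordered sequence rather than a blocked connection.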

The future: Mastra for every frontend

This change isn't just about fixing limitations—it's about what becomes possible next.

The new format option we're introducing (like format: 'aisdk') establishes a pattern where Mastra can output streams in whatever format your frontend needs. Today it's AI SDK v5. Tomorrow it could be CopilotKit, assistant-ui, or a format we haven't even imagined yet.
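As a sketch of how that option is used (the call shape mirrors the vNext API; the exact options object is an assumption, not a spec), selecting the AI SDK v5 output format might look like:

```typescript
// `agent` is an existing Mastra Agent instance (setup omitted for brevity).
const stream = await agent.stream("Summarize this document", {
  format: "aisdk", // emit AI SDK v5-compatible stream parts
});

for await (const part of stream.fullStream) {
  // Each part arrives in the chosen frontend format.
  console.log(part);
}
```

Because the format is a per-call option rather than a global setting, one Mastra backend can serve different frontends in whatever protocol each expects.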

What happens next

Over the next few days, we'll be publishing a migration guide. If you're currently using streamVNext and generateVNext, the migration is trivial—just rename the functions. If you're using the original versions, we're providing Legacy variants that preserve the old behavior.
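The rename in practice looks like this (a hedged sketch; the exact Legacy method names are inferred from the "Legacy variants" mentioned above and may differ in the final release):

```typescript
// Before v0.19.0: the new protocol was opt-in via the vNext functions.
const resultOld = await agent.generateVNext("Hello");
const streamOld = await agent.streamVNext("Hello");

// From @mastra/core v0.19.0: the vNext behavior is the default.
const result = await agent.generate("Hello");
const stream = await agent.stream("Hello");

// To keep the pre-v0.19.0 behavior, switch to the Legacy variants,
// e.g. agent.generateLegacy(...) / agent.streamLegacy(...) (names assumed).
```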

The future of AI applications is compositional, and Mastra's streaming is evolving to match.


Full migration guides will be published early next week. In the meantime, if you have questions or concerns, reach out on Discord or GitHub.
