Mastra Changelog 2025-03-27

Mastra's Latest Updates: Nested Workflows, Voice to Voice, and more

Shane Thomas · Mar 27, 2025 · 4 min read

This week brings significant improvements across Mastra's ecosystem, with major updates to workflow capabilities, Mastra Voice, and more.

Nested Workflows & Concurrency

The core package received substantial enhancements to its workflow system.

You can now pass a workflow directly as a step, which makes executing parallel branches easier:

```typescript
parentWorkflow
  .step(nestedWorkflowA)
  .step(nestedWorkflowB)
  .after([nestedWorkflowA, nestedWorkflowB])
  .step(finalStep);
```

Conditional branching is also easier to follow with the new structure, which accepts workflows: you can execute a whole nested workflow, then continue execution as normal in the next .then() step:

```typescript
// Create nested workflows for different paths
const workflowA = new Workflow({ name: "workflow-a" })
  .step(stepA1)
  .then(stepA2)
  .commit();

const workflowB = new Workflow({ name: "workflow-b" })
  .step(stepB1)
  .then(stepB2)
  .commit();

// Use the new if-else syntax with nested workflows
parentWorkflow
  .step(initialStep)
  .if(
    async ({ context }) => {
      // Your condition here
      return someCondition;
    },
    workflowA, // if branch
    workflowB, // else branch
  )
  .then(finalStep)
  .commit();
```

And to pass arguments to nested workflows, you can use variable mappings.
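Conceptually, a variable mapping just picks a value out of an earlier step's output (or the trigger data) by path. Here is a minimal stand-alone sketch of that idea; the resolveVariable helper and its inputs are hypothetical, not Mastra's actual implementation:

```typescript
// A minimal stand-in for how a variable mapping resolves (illustrative
// only, not Mastra's implementation): pick a value out of an earlier
// step's output by dot-separated path.
type Mapping = { step: string; path: string };

function resolveVariable(
  mapping: Mapping,
  stepOutputs: Record<string, unknown>,
): any {
  // Walk the dot-separated path, e.g. "result.activities".
  return mapping.path
    .split(".")
    .reduce<any>((value, key) => value?.[key], stepOutputs[mapping.step]);
}

// The nested workflow receives `city` from the parent's trigger data.
const city = resolveVariable(
  { step: "trigger", path: "city" },
  { trigger: { city: "Paris" } },
);
// city === "Paris"
```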

We also added new workflow result schemas to make it easier to propagate results from a nested workflow execution back to the parent workflow:

```typescript
const workflowA = new Workflow({
  name: 'workflow-a',
  result: {
    schema: z.object({
      activities: z.string(),
    }),
    mapping: {
      activities: {
        step: planActivities,
        path: 'activities',
      },
    },
  },
})
  .step(fetchWeather)
  .then(planActivities)
  .commit();

const weatherWorkflow = new Workflow({
  name: 'weather-workflow',
  triggerSchema: z.object({
    city: z.string().describe('The city to get the weather for'),
  }),
  result: {
    schema: z.object({
      activities: z.string(),
    }),
    mapping: {
      activities: {
        step: workflowA,
        path: 'result.activities',
      },
    },
  },
})
  .step(workflowA, {
    variables: {
      city: {
        step: 'trigger',
        path: 'city',
      },
    },
  })
  .step(doSomethingElseStep)
  .commit();
```

Check out our docs to explore all things nested workflows.

Voice

We shipped some big changes and new features to Mastra Voice. They include:

  • Support for speech-to-speech providers, with the OpenAI Realtime API as our first.
  • Support for WebSocket connections to establish a persistent connection to voice providers (like OpenAI) instead of separate HTTP requests. This enables bidirectional audio streaming without the request-wait-response pattern of traditional HTTP.
  • A new event-driven architecture for voice interactions. Instead of const text = await agent.voice.listen(audio), you now use agent.voice.on('writing', ({ text }) => { ... }), creating a more responsive experience without managing any WebSocket complexity.
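To make the shift concrete, here is a minimal stand-alone sketch of the event-driven pattern using Node's built-in EventEmitter (illustrative only, not the Mastra internals):

```typescript
import { EventEmitter } from "node:events";

// Illustrative sketch of the event-driven pattern: transcribed text
// arrives as incremental 'writing' events instead of one awaited result.
const voice = new EventEmitter();

const chunks: string[] = [];
voice.on("writing", ({ text }: { text: string }) => {
  // Each fragment is handled as soon as the provider pushes it.
  chunks.push(text);
});

// Simulate a provider streaming transcription fragments over a socket.
voice.emit("writing", { text: "Hello " });
voice.emit("writing", { text: "world" });
// chunks.join("") === "Hello world"
```

Because listeners fire as data arrives, the application stays responsive instead of blocking on one long request.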

Here's an example of setting up a real-time voice agent that can participate in continuous conversations:

```typescript
import { openai } from "@ai-sdk/openai";
import { Agent } from "@mastra/core/agent";
import { OpenAIRealtimeVoice } from "@mastra/voice-openai-realtime";

const agent = new Agent({
  name: 'Agent',
  instructions: `You are a helpful assistant with real-time voice capabilities.`,
  model: openai('gpt-4o'),
  voice: new OpenAIRealtimeVoice(),
});

// Connect to the voice service
await agent.voice.connect();

// Listen for agent audio responses. 'speaker' can be any audio output
// implementation that accepts streams (for instance, node-speaker).
agent.voice.on('speaker', (stream) => {
  stream.pipe(speaker);
});

// Initiate the conversation by emitting a 'speaking' event
await agent.voice.speak('How can I help you today?');

// Send continuous audio from the microphone
const micStream = mic.getAudioStream();
await agent.voice.send(micStream);
```

Additional details, including code examples of the new event-driven architecture, can be found in our blog post.

New LLM Provider Options

Both Cerebras and Groq were added as LLM provider options in create-mastra, expanding the available choices for AI backends.

Performance optimizations

Serverless deployments saw improvements, including:

  • A marked reduction in bundle sizes: from 90MB down to 8MB when using memory and our Postgres store
  • Improved deployments to Vercel, Netlify and Cloudflare edge functions
  • Improved cold start times

Find details on our most recent optimizations here.

Memory improvements

Memory operations also received notable improvements, like:

  • The addition of a new Mem0 integration @mastra/mem0
  • Improved semantic recall for long message inputs
  • Complete compatibility for using memory with the AI SDK's useChat() hook
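As a rough illustration of what semantic recall does (this sketch is hypothetical and not Mastra's implementation), stored messages can be scored against a query embedding with cosine similarity and the closest one recalled:

```typescript
// Conceptual sketch of semantic recall: score stored messages against a
// query embedding and recall the closest match. Real embeddings come
// from an embedding model; these two-dimensional ones are made up.
function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

const memory = [
  { text: "The user lives in Berlin", embedding: [0.9, 0.1] },
  { text: "The user likes hiking", embedding: [0.1, 0.9] },
];

const query = { text: "Where is the user based?", embedding: [0.8, 0.2] };

const recalled = memory.reduce((best, msg) =>
  cosine(msg.embedding, query.embedding) >
  cosine(best.embedding, query.embedding)
    ? msg
    : best,
);
// recalled.text === "The user lives in Berlin"
```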

Stay tuned for more updates next week…

Shane Thomas

Shane Thomas is the founder and CPO of Mastra. He co-hosts AI Agents Hour, a weekly show covering news and topics around AI agents. Previously, he was in product and engineering at Netlify and Gatsby. He created the first course as an MCP server and is kind of a musician.
