We're excited to share two weeks of updates, featuring workflow state management, unified streaming, and AI tracing improvements.
Release: @mastra/core@0.22.0
Let's dig in...
Workflow State: Share Data Across Steps
Persistent State Management
We've added workflow state management, enabling data sharing between workflow steps without having to pass values through every single step.
State persists across suspensions and resumptions, perfect for long-running workflows with human-in-the-loop patterns.
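Here's a rough sketch of the pattern. The state/setState step parameters shown below are assumptions about the API shape, so check the announcement and docs for the exact names:

import { createStep } from "@mastra/core/workflows";
import { z } from "zod";

// Assumed API: steps receive setState/state alongside inputData
const collectCustomer = createStep({
  id: "collect-customer",
  inputSchema: z.object({ orderId: z.string() }),
  outputSchema: z.object({ ok: z.boolean() }),
  execute: async ({ inputData, setState }) => {
    // Write a value into shared workflow state once...
    await setState({ customerId: `cust_${inputData.orderId}` });
    return { ok: true };
  },
});

const notifyCustomer = createStep({
  id: "notify-customer",
  inputSchema: z.object({ ok: z.boolean() }),
  outputSchema: z.object({ message: z.string() }),
  execute: async ({ state }) => {
    // ...and read it in a later step without threading it through
    // every intermediate step's inputs and outputs.
    return { message: `Notified ${state.customerId}` };
  },
});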
See our full announcement for detailed examples.
Suspend/Resume in Playground
The playground now fully supports suspended workflows with visual indicators and resume controls, making human-in-the-loop workflow development much easier. We also shipped an API for automatically closing streams when a workflow suspends, allowing them to be resumed later.
See the blog post for details.
Model Configuration Standardization
Consistent Model Config API
All model configuration points now accept the same flexible MastraModelConfig type:
// Scorers now support all config types
const scorer = new LLMScorer({
  model: "openai/gpt-4o", // model router
  // OR config object
  // OR dynamic function
});

// Input/Output Processors too
const processor = new ModerationProcessor({
  model: "anthropic/claude-3-haiku",
});
This provides a consistent API across Scorers, Input/Output Processors, and Relevance Scorers.
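As one illustration, the dynamic-function form can pick a model per request. The runtimeContext-based selector below is an assumption for illustration, not the only shape the function can take:

// Continuing the LLMScorer example above, with a dynamic model function
const dynamicScorer = new LLMScorer({
  model: ({ runtimeContext }) =>
    runtimeContext.get("tier") === "pro"
      ? "openai/gpt-4o" // model router string
      : "openai/gpt-4o-mini",
});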
See PR #8626 for implementation.
Structured Output Improvements
Zod-First Validation
We improved structured output with Zod validation, ensuring type-safe, validated JSON outputs across all providers, with better handling of complex nested schemas and clearer error messages when validation fails.
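For instance, a deeply nested schema like the illustrative one below is the kind of shape that now validates more reliably:

import { z } from "zod";

// Illustrative nested schema: arrays of objects inside an object
const reportSchema = z.object({
  title: z.string(),
  sections: z.array(
    z.object({
      heading: z.string(),
      findings: z.array(
        z.object({
          claim: z.string(),
          confidence: z.number().min(0).max(1),
        }),
      ),
    }),
  ),
});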
Response Format by Default
We updated structuredOutput to use the provider's response format by default, with an opt-in to JSON prompt injection (which inserts the schema into the system prompt).
const response = await testAgentThatDoesntSupportStructuredOutput.generate(
  message,
  {
    structuredOutput: {
      schema: z.object({
        summary: z.string(),
        keywords: z.array(z.string()),
      }),
      // Use this if your model does not natively support structured outputs
      jsonPromptInjection: true,
    },
  },
);
Schema Context for Structuring Output with Separate Models
When specifying a model in structuredOutput options to use a separate structuring agent, the main agent now receives context about the schema fields that will be extracted. This prevents incomplete responses where the structuring model lacks sufficient information to populate all schema fields, ensuring the main agent generates comprehensive content for all requirements.
This is particularly useful when using smaller or cheaper models for structuring (e.g., gpt-4o-mini), as the main agent will now generate all necessary information upfront rather than forcing the structuring model to fill in missing data.
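A minimal sketch of the pattern, following the generate options shown earlier; the model name is illustrative:

const response = await agent.generate(message, {
  structuredOutput: {
    schema: z.object({
      summary: z.string(),
      actionItems: z.array(z.string()),
    }),
    // A cheaper model handles structuring; the main agent now sees the
    // schema fields up front so its draft covers all of them.
    model: "openai/gpt-4o-mini",
  },
});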
Zod Schema Compatibility
We fixed Zod schema handling to ensure compatibility with both Zod v3 and v4.
OpenAI Strict Mode
We now automatically enable strict schema validation when using OpenAI models with JSON response format:
// Automatic strict validation with OpenAI
const agent = new Agent({
  model: "openai/gpt-4o",
  structuredOutput: {
    schema: { ... },
  },
});

await agent.generate({
  providerOptions: {
    openai: {
      // No need to set this anymore when using structuredOutput;
      // to disable it, set it to false
      strictJsonSchema: true,
    },
  },
});
See PR #8734, PR #8557, PR #8686, and PR #8886 for implementation details.
AI Tracing Updates
RuntimeContext Metadata Extraction
Automatically extract RuntimeContext values as metadata for tracing spans:
const mastra = new Mastra({
  observability: {
    configs: {
      default: {
        runtimeContextKeys: ["userId", "tenantId"],
      },
    },
  },
});

await agent.generate({
  messages,
  runtimeContext,
  tracingOptions: {
    runtimeContextKeys: ["experimentId"], // Add trace-specific keys
  },
});
See PR #9072 for details.
External Trace Integration
Support for integrating with external tracing systems via existing trace IDs:
await agent.generate({
  messages,
  tracingOptions: {
    traceId: existingTraceId, // 32 hex chars
    parentSpanId: parentSpanId, // 16 hex chars
  },
});
See PR #9053 for implementation.
Agent Updates
Agents as Tools
Call agents directly as tools within other agents or workflows:
const researchAgent = new Agent({
  name: "researcher",
  tools: { webSearchTool },
});

const writerAgent = new Agent({
  name: "writer",
  agents: { researchAgent }, // Use agent as a tool
});
See PR #8863 for details.
Network Thread Titles
Agent networks now automatically generate meaningful titles for conversation threads.
See PR #8853 for implementation.
Workflow Improvements
Custom Resume Labels
Define semantic labels for workflow resume points:
const approvalStep = createStep({
  id: "approval",
  execute: async ({ suspend }) => {
    return await suspend({
      resumeLabel: "manager-approval",
    });
  },
});

// Resume with meaningful label
await workflow.resume({
  step: "manager-approval",
  resumeData: approvalData,
});
See PR #8941 for details.
Enhanced Workflow Output
Workflows now return structured output with a fullStream property for complete streaming access.
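As a rough sketch, assuming the streamed run result exposes fullStream as an async iterable of chunk objects (the method and chunk field names here are assumptions):

// Hypothetical consumption of a workflow run's fullStream
const run = await myWorkflow.createRunAsync();
const result = run.stream({ inputData: { topic: "release notes" } });

for await (const chunk of result.fullStream) {
  // Each chunk describes a workflow event (step start, output, finish, ...)
  console.log(chunk.type, chunk);
}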
See PR #9048 for implementation.
Model Router Updates
- Embeddings Support: Use magic strings for embeddings ("openai/text-embedding-ada-002"); see the sketch after this list
- Provider Options: Typed options for OpenAI, Anthropic, Google, and xAI
- Local Provider Fix: Support for Ollama and custom URLs
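For example, a magic string can stand in for a provider embedding instance. Whether a given option accepts it, such as Memory's embedder below, is an assumption here, so treat this as a sketch:

import { Memory } from "@mastra/memory";

// Assumption: embedder accepts a model-router magic string
const memory = new Memory({
  embedder: "openai/text-embedding-ada-002",
});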
Other updates from the last couple of weeks
- Memory Improvements: Fixed reasoning chunk storage, preserved UIMessage metadata, and improved input processor message handling. PR #9041, PR #9029
- Gemini Compatibility: Fixed message ordering validation errors. PR #8069
- LLM Step Tracing: Comprehensive tracing for LLM steps and chunks. PR #9058
- Tool Output Display: Improved playground visualization. PR #9021
- Nested Workflow Events: Fixed event handling in nested workflows. PR #9132
- Provider Tool Types: Fixed TypeScript errors with provider tools. PR #8940
For the complete list of changes, check out the full release notes.
That's all for this update!
Happy building!