Mastra Changelog 2026-01-20

Server adapters, composite storage, processor system overhaul, AI SDK v6 support, thread cloning, and more.

Jan 20, 2026

Mastra 1.0.0 is here. This release includes architectural changes and major features that improve how you deploy, configure, and use Mastra.

Release: @mastra/core@1.0.0

We prepared automated codemods for most breaking changes. Run all v1 codemods at once:

npx @mastra/codemod@latest v1

See the migration guide for detailed instructions.

Let's dive in:

Server Adapters

Server Adapters automatically expose your agents, workflows, tools, and MCP servers as HTTP endpoints on the server you’re already running.

Mastra 1.0 introduces several adapter packages that make running Mastra inside an existing Express, Hono, Fastify or Koa app much easier to set up and maintain.

Pass your Express app and mastra instance to the MastraServer constructor, call init(), and all Mastra endpoints are registered automatically alongside your existing routes. Here’s an example using the Express adapter:

import express from "express";
import { MastraServer } from "@mastra/express";
import { mastra } from "./mastra";

const app = express();
const server = new MastraServer({ app, mastra });
await server.init();

app.listen(3001);

Running Mastra as its own server works great for many use cases, but it creates an extra process to deploy, monitor, log, and secure. Server Adapters remove that overhead.

Server Adapters are best suited for teams who already have a standalone server. (PR #10263, PR #11568)

For teams integrating Mastra into a Next.js or other fullstack application, mastra build remains the recommended approach.

Composite Storage

1.0 introduces per-domain storage configuration, replacing the single-storage model.

Previously, Mastra used one storage backend for all domains (memory, workflows, scores, observability). This forced tradeoffs: Postgres for workflows meant Postgres for everything, even when a lighter option fit better.

Composite storage lets you choose the right backend per domain:

import { MastraCompositeStore } from "@mastra/core/storage";
import { WorkflowsPG, ScoresPG } from "@mastra/pg";
import { MemoryLibSQL } from "@mastra/libsql";

const storage = new MastraCompositeStore({
  id: "composite",
  domains: {
    memory: new MemoryLibSQL({ url: "file:./local.db" }),
    workflows: new WorkflowsPG({ connectionString: process.env.DATABASE_URL }),
    scores: new ScoresPG({ connectionString: process.env.DATABASE_URL }),
  },
});

Use Postgres for workflows, LibSQL for memory, ClickHouse for columnar observability data, or align with your existing databases. This makes cost, latency, and operational tradeoffs explicit per domain instead of global.
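Conceptually, a composite store is just a router keyed by domain name. Here is a self-contained sketch of that routing idea (illustrative only, not Mastra's actual implementation):

```typescript
// Illustrative sketch of per-domain storage routing -- not Mastra's
// actual implementation. Each domain name maps to its own backend.
interface Store {
  id: string;
}

class CompositeStore {
  constructor(private domains: Record<string, Store>) {}

  // Return the backend configured for a domain, or throw if none is set.
  getStore(domain: string): Store {
    const store = this.domains[domain];
    if (!store) throw new Error(`No store configured for domain "${domain}"`);
    return store;
  }
}

const storage = new CompositeStore({
  memory: { id: "libsql" },
  workflows: { id: "pg" },
});

storage.getStore("memory"); // routes to the LibSQL backend
```

The point of the pattern is that each lookup is resolved per domain at call time, so callers never hard-code a single global backend.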

Single-storage configurations remain supported, but composite storage is recommended for production deployments where different domains have different requirements. (PR #11401)

Processor System Overhaul

Intervene at every step of the agent loop. Input and output processors can now intercept, transform, retry, or short-circuit execution with clearer semantics, improved tripwires, and processInputStep integration.

Processors can now be composed using workflow primitives, enabling complex moderation pipelines, parallel validation, and sophisticated guardrails. The new processOutputStep method runs after every step in the agentic loop, allowing you to validate responses before tool execution.

Tripwires now support retry mechanisms with LLM feedback. Processors can request retries with metadata that gets sent back to the LLM for self-correction. (PR #10947, PR #10774)
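The retry flow can be pictured as a loop: run the model, let each output processor inspect the result, and if one trips with retryable feedback, send that feedback back to the model and try again. A self-contained sketch of that control flow (illustrative; the types and function names here are not the Mastra processor API):

```typescript
// Illustrative control flow for tripwire retries -- not the Mastra
// processor API. A processor either passes the output or trips,
// optionally requesting a retry with feedback for the LLM.
type ProcessorResult =
  | { ok: true }
  | { ok: false; retry: boolean; feedback: string };

type Processor = (output: string) => ProcessorResult;

function runWithTripwires(
  model: (feedback?: string) => string,
  processors: Processor[],
  maxRetries = 2,
): string {
  let feedback: string | undefined;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const output = model(feedback);
    const tripped = processors
      .map((p) => p(output))
      .find((r): r is Extract<ProcessorResult, { ok: false }> => !r.ok);
    if (!tripped) return output;                 // all processors passed
    if (!tripped.retry) throw new Error("tripwire: " + tripped.feedback);
    feedback = tripped.feedback;                 // self-correction hint for the LLM
  }
  throw new Error("tripwire: retries exhausted");
}
```

The key semantics are that a non-retryable tripwire short-circuits execution, while a retryable one loops with the feedback attached, which is the self-correction behavior described above.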

AI SDK v6 Support

1.0 adds full support for AI SDK v6, including LanguageModelV3 models and ToolLoopAgent.

Previously, Mastra supported AI SDK v5 (LanguageModelV1/V2). As v6 introduced LanguageModelV3 with a different API, Mastra needed updates to support the latest SDK.

You can now use LanguageModelV3 models and ToolLoopAgent directly:

import { openai } from '@ai-sdk/openai';
import { ToolLoopAgent } from 'ai';
import { Mastra } from '@mastra/core';

const toolLoopAgent = new ToolLoopAgent({
  model: openai('gpt-4o'), // LanguageModelV3
  tools: { weather: weatherTool },
});

const mastra = new Mastra({
  agents: { toolLoopAgent },
});

V3's nested usage format is normalized to Mastra's flat format, with reasoning tokens and cached input tokens preserved. All existing V1 and V2 models continue to work unchanged; the upgrade is fully backward compatible. (PR #11191, PR #11254)
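As a rough illustration of what that normalization means, here is a sketch of flattening a nested usage object (the field names are assumptions for illustration, not the exact AI SDK v6 or Mastra shapes):

```typescript
// Rough sketch of flattening a nested usage object -- field names are
// assumptions for illustration, not the exact AI SDK v6 or Mastra shapes.
interface NestedUsage {
  inputTokens: { total: number; cached?: number };
  outputTokens: { total: number; reasoning?: number };
}

interface FlatUsage {
  inputTokens: number;
  outputTokens: number;
  totalTokens: number;
  cachedInputTokens: number;
  reasoningTokens: number;
}

function normalizeUsage(u: NestedUsage): FlatUsage {
  return {
    inputTokens: u.inputTokens.total,
    outputTokens: u.outputTokens.total,
    totalTokens: u.inputTokens.total + u.outputTokens.total,
    cachedInputTokens: u.inputTokens.cached ?? 0,   // preserved, not dropped
    reasoningTokens: u.outputTokens.reasoning ?? 0, // preserved, not dropped
  };
}
```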

Thread Cloning

Duplicate conversation threads to branch interactions, test changes, or power emerging multi-path UI patterns in consumer and internal tools.

// Clone a thread
const { thread, clonedMessages } = await memory.cloneThread({
  sourceThreadId: 'thread-123',
  title: 'My Clone',
  options: {
    messageLimit: 10, // optional: only copy last N messages
  },
});

// Check if a thread is a clone
if (memory.isClone(thread)) {
  const source = await memory.getSourceThread(thread.id);
}

// List all clones of a thread
const clones = await memory.listClones('thread-123');

Includes storage implementations for InMemory, PostgreSQL, LibSQL, and Upstash, plus a new API endpoint and clone button in the playground UI. Embeddings are created for cloned messages, enabling semantic recall across branches. (PR #11517)

Agent Networks

Agent networks now support structured output, human-in-the-loop workflows, and completion validation:

  • Structured Output (PR #11701): Pass a Zod schema to agent.network() to get typed results from multi-agent execution
  • Human-in-the-Loop (PR #11678): Suspend/resume capabilities and tool approval workflows for agent networks via agent.resumeNetwork(), agent.approveNetworkToolCall(), and agent.declineNetworkToolCall()
  • Completion Validation (PR #11562): Use custom scorers to validate network completion

Stored Agents

Agents can now be stored in the database and loaded at runtime (PR #10953):

await mastra.getStorage().createAgent({
  agent: {
    id: 'my-agent',
    name: 'My Agent',
    instructions: 'You are helpful',
    model: { provider: 'openai', name: 'gpt-4' },
    tools: { myTool: {} },
  },
});

const agent = await mastra.getStoredAgentById('my-agent');
const response = await agent.generate('Hello!');

Human-in-the-Loop for generate()

Tool approval with requireToolApproval now works with generate(), not just stream() (PR #12056):

const output = await agent.generate('Find user John', {
  requireToolApproval: true,
});

if (output.finishReason === 'suspended') {
  const result = await agent.approveToolCallGenerate({
    runId: output.runId,
    toolCallId: output.suspendPayload.toolCallId,
  });
}

Workflow Improvements

Expanded workflow control with several new capabilities:

  • startAsync() (PR #11093): Fire-and-forget workflow execution that returns immediately with a runId after sending the event to Inngest
  • Per-step execution (PR #11276): Run just a single step instead of the entire workflow
  • onError / onFinish callbacks (PR #11200): Lifecycle callbacks for server-side handling of workflow completion and errors
  • Inngest cron scheduling (PR #11518): Automatically trigger workflows on a schedule with optional inputData and initialState
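The onError / onFinish callbacks follow the familiar lifecycle-wrapper pattern. A minimal self-contained sketch of the idea (illustrative; these are not Mastra's actual signatures):

```typescript
// Minimal lifecycle-callback wrapper -- illustrative, not Mastra's
// actual signatures or implementation.
interface LifecycleCallbacks<T> {
  onFinish?: (result: T) => void;  // fires on successful completion
  onError?: (error: Error) => void; // fires when the run throws
}

async function runWithLifecycle<T>(
  run: () => Promise<T>,
  callbacks: LifecycleCallbacks<T>,
): Promise<T> {
  try {
    const result = await run();
    callbacks.onFinish?.(result);
    return result;
  } catch (err) {
    callbacks.onError?.(err as Error);
    throw err;
  }
}
```

Server-side handlers hook in at exactly one of the two exits, which is what makes this shape useful for logging, metrics, or cleanup.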

Unified Observability Schema

Spans now use a unified identification model with entityId, entityType, and entityName instead of separate agentId, toolId, workflowId fields.

The new listTraces() API enables powerful filtering, pagination, and sorting across all entity types:

const { spans, pagination } = await storage.listTraces({
  filters: {
    entityType: EntityType.AGENT,
    entityId: 'my-agent',
    userId: 'user-123',
    environment: 'production',
    status: TraceStatus.SUCCESS,
    startedAt: { start: new Date('2024-01-01'), end: new Date('2024-01-31') },
  },
  pagination: { page: 0, perPage: 50 },
  orderBy: { field: 'startedAt', direction: 'DESC' },
});

SQL-based stores automatically add new columns to existing spans tables on initialization. Existing data is preserved with new columns set to NULL. (PR #11132)

Memory API Changes

The Memory API has been simplified and defaults updated:

API Simplification (PR #9701):

  • rememberMessages() removed (was duplicating query() functionality)
  • query() renamed to recall() for clarity
memory.recall({ threadId, resourceId, perPage, vectorSearchString, threadConfig });

Default Scope Changed (PR #8983): Both workingMemory.scope and semanticRecall.scope now default to 'resource' instead of 'thread'. This means working memory and semantic recall persist across all conversations for the same user. To maintain thread-scoped behavior, explicitly set scope: 'thread'.

Default Settings Updated: lastMessages default decreased from 40 to 10, semanticRecall disabled by default, and thread title generation disabled by default.
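If you want to keep the pre-1.0 behavior, set the previous values explicitly. A sketch using the option names cited above (the exact nesting may differ; check the memory configuration docs before copying this into a real project):

```typescript
// Options restoring the pre-1.0 memory defaults described above.
// Option names follow this post; exact nesting is an assumption.
const legacyMemoryOptions = {
  lastMessages: 40,                             // new default is 10
  workingMemory: { scope: "thread" as const },  // new default is 'resource'
  semanticRecall: { scope: "thread" as const }, // now disabled by default
};
```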

Context-Aware ID Generation

The idGenerator function now receives context about what type of ID is being generated (PR #10964):

const mastra = new Mastra({
  idGenerator: context => {
    if (context?.idType === 'message' && context?.threadId) {
      return `msg-${context.threadId}-${Date.now()}`;
    }
    return crypto.randomUUID();
  },
});

Breaking Changes

Subpath imports required (PR #9544): Top-level imports from @mastra/core are no longer allowed except for Mastra and type Config. Use subpath imports:

import { Mastra } from "@mastra/core";
import { Agent } from "@mastra/core/agent";
import { createTool } from "@mastra/core/tools";
import { createStep } from "@mastra/core/workflows";

Tool execution signature changed (PR #9587):

// Before
execute: async ({ context }) => { return doSomething(context.data); }

// After
execute: async (inputData, context) => { return doSomething(inputData); }

Context properties are now organized into namespaces: context.agent for agent-specific properties, context.workflow for workflow-specific properties, and context.mcp for MCP-specific properties.
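As a sketch of what that namespacing looks like in practice (the property names inside each namespace here are assumptions, not Mastra's exact fields):

```typescript
// Illustrative shape of the namespaced execution context -- the property
// names inside each namespace are assumptions, not Mastra's exact fields.
interface ExecutionContext {
  agent?: { id: string };        // agent-specific properties
  workflow?: { runId: string };  // workflow-specific properties
  mcp?: { serverId: string };    // MCP-specific properties
}

// A tool can branch on which namespace is populated to learn its caller.
function describeCaller(ctx: ExecutionContext): string {
  if (ctx.agent) return `agent:${ctx.agent.id}`;
  if (ctx.workflow) return `workflow:${ctx.workflow.runId}`;
  if (ctx.mcp) return `mcp:${ctx.mcp.serverId}`;
  return "unknown";
}
```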

Tool output validation (PR #9664): Tools with an outputSchema now validate their return values at runtime. Invalid output returns a ValidationError instead of passing through silently.

API renames (PR #9495, PR #9511):

  • getAgents() → listAgents(), getTools() → listTools(), getWorkflows() → listWorkflows()
  • RuntimeContext → RequestContext

Pagination API changes (PR #9592): All pagination APIs now use page/perPage instead of offset/limit.
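If you were computing offsets yourself, the translation is mechanical. A small sketch, assuming limit maps directly to perPage and offsets were page-aligned:

```typescript
// Convert legacy offset/limit pagination to page/perPage.
// Assumes offset was always a multiple of limit (i.e. page-aligned).
function toPagePagination(offset: number, limit: number) {
  return { page: Math.floor(offset / limit), perPage: limit };
}
```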

Observability configuration (PR #9709): Install @mastra/observability and use explicit configuration with new Observability() instead of telemetry: or observability: { default: { enabled: true } }.

Node.js 22.13.0 minimum (PR #9706): Minimum required Node.js version bumped to 22.13.0.

Database migrations required: If using PostgreSQL or LibSQL, see the storage migration guide for required schema updates.

Other Notable Updates

  • Primitive ID Requirements: Every Mastra primitive (agent, MCPServer, workflow, tool, processor, scorer, vector) now requires an id and has get, list, and add methods (PR #9675)
  • Zod v4 Preparation: Removed Zod-specific type constraints across workflows and tools, preparing for Zod v4 migration (PR #11814)
  • Storage API Updates: Removed getMessages() and getMessagesPaginated(), replaced with unified listMessages() API supporting perPage: false (PR #9670, PR #9695)
  • Domain-Specific Storage Access: Storage operations now use getStore() pattern: storage.getStore('memory'), storage.getStore('workflows'), etc. (PR #11361)
  • Scorer Updates: Scorers moved under eval domain, now use MastraDBMessage instead of UIMessage (PR #9589, PR #9702)
  • Voice Methods Moved: Voice-related methods moved from Agent class to agent.voice namespace: agent.speak() → agent.voice.speak() (PR #9257)
  • Structured Output: output option removed; use structuredOutput: { schema } instead
  • Sensitive Data Protection: New hideInput and hideOutput options in tracing to protect sensitive information (PR #11969)
  • Trace Tagging: BrainTrust and Langfuse exporters now support trace tagging (PR #10765)
  • TTFT Metrics: Time-to-first-token support for Langfuse integration (PR #10781)

For changes in other packages, check the CHANGELOG.md file in each package directory on GitHub.

That's all for 1.0.0!

Happy building! 🚀

Shane Thomas

Shane Thomas is the founder and CPO of Mastra. He co-hosts AI Agents Hour, a weekly show covering news and topics around AI agents. Previously, he was in product and engineering at Netlify and Gatsby. He created the first course as an MCP server and is kind of a musician.
