Mastra Changelog 2026-01-30

Unified Workspace API, plus observability and server adapter improvements.

Shane Thomas · Jan 30, 2026 · 9 min read

We’ve been busy smoothing out some of the “sharp edges” in day-to-day agent development, especially around safe tool access, running in serverless environments, and getting cleaner traces when things go wrong.

Release: @mastra/core@1.1.0

Let's dive in:

Unified Workspace API (Filesystem + Sandbox Execution + Search + Skills)

Workspace is a new, unified capability that gives your agents a single interface for the stuff they commonly need to do in real projects: read and write files, run commands in a sandbox, search content (keyword, semantic, or hybrid), and discover SKILL.md instructions.

Instead of stitching together separate implementations (filesystem provider here, ad-hoc search there, custom skill loading logic somewhere else), you can now attach a Workspace and get a coherent set of “agent accessible” tools with safety controls like read-only mode, read-before-write guards, and approval flows.

Core usage

import { Agent } from '@mastra/core/agent';
import { Workspace, LocalFilesystem, LocalSandbox } from '@mastra/core/workspace';
import { openai } from '@ai-sdk/openai';

const workspace = new Workspace({
  filesystem: new LocalFilesystem({ basePath: './workspace' }),
  sandbox: new LocalSandbox({ workingDirectory: './workspace' }),
  bm25: true, // enable keyword search
});

const agent = new Agent({
  name: 'repo-helper',
  model: openai('gpt-4o'),
  workspace, // agent automatically receives workspace tools
});

Server endpoints, first-class support

Workspace isn’t just a core concept; it’s also exposed through the server, so you can build UI and automation around it:

  • GET /workspaces
  • GET /workspaces/:workspaceId
  • GET/POST/DELETE /workspaces/:workspaceId/fs/*
  • GET /workspaces/:workspaceId/skills
  • GET /workspaces/:workspaceId/skills/:skillName
  • GET /workspaces/:workspaceId/skills/:skillName/references
  • GET /workspaces/:workspaceId/search
// List workspaces
const { workspaces } = await fetch('/api/workspaces').then(r => r.json());
const workspaceId = workspaces[0].id;

// Read a file
const { content } = await fetch(
  `/api/workspaces/${workspaceId}/fs/read?path=/docs/guide.md`,
).then(r => r.json());

// Hybrid search
const { results } = await fetch(
  `/api/workspaces/${workspaceId}/search?query=authentication&mode=hybrid`,
).then(r => r.json());

Client SDK methods

If you’re building on top of Mastra programmatically, the JS client now exposes workspace operations directly:

import { MastraClient } from '@mastra/client-js';

const client = new MastraClient({ baseUrl: 'http://localhost:4111' });

const { workspaces } = await client.listWorkspaces();
const workspace = client.getWorkspace(workspaces[0].id);

const { content } = await workspace.readFile('/docs/guide.md');

const { skills } = await workspace.listSkills();
const skill = workspace.getSkill('my-skill');
const details = await skill.details();

const { results } = await workspace.search({ query: 'authentication', mode: 'hybrid' });

Along the way, skill loading also got more robust: we fixed a nasty edge case where Zod v3/v4 conflicts could break skill metadata validation. (PR #12344, PR #11986)

Observability & Streaming Improvements (Trace Status, Better Spans, Tool Approval Tracing)

Tracing is only useful if it’s easy to interpret. In 1.1.0 we focused on making traces more “debuggable” out of the box, especially for streaming models and approval flows.

Trace status you can actually filter on

listTraces now returns a status field so you can display and filter traces without having to infer it from endedAt and error:

  • success
  • error
  • running

This is particularly helpful in Studio and dashboards where you want quick triage views like “show me everything currently running” or “only failed runs”. (PR #12082)
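
For a quick triage view, you can filter on the new field directly. Here is a minimal sketch; listTraces is declared as a placeholder because where you call it from (storage adapter, client SDK, or HTTP) depends on your setup:

// Sketch only: triaging traces by the new status field.
// `listTraces` stands in for however you query traces in your setup.
type TraceSummary = { traceId: string; status: 'success' | 'error' | 'running' };
declare function listTraces(): Promise<{ traces: TraceSummary[] }>;

const { traces } = await listTraces();
const running = traces.filter((t) => t.status === 'running');
const failed = traces.filter((t) => t.status === 'error');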

Cleaner spans, better streaming visibility

We tightened up span output in a few important ways:

  • spans inherit entityType/entityId from the closest non-internal parent
  • internal framework spans no longer clutter exported traces
  • processor spans properly track separate input and output
  • model chunk spans are emitted for all streaming chunks (not just some)

If you’re exporting traces to tools like Langfuse, LangSmith, Braintrust, Datadog, etc., you should see cleaner timelines and more consistent attribution. (PR #12396)

Tool approval is now visible in traces

When a tool requires approval, Mastra now emits a MODEL_CHUNK span (chunk: tool-call-approval) that includes:

  • tool call id and name
  • arguments that need approval
  • resume schema describing the approval response format

That means you can debug “why is the agent stuck waiting for approval?” by looking at the trace, not by guessing. (PR #12171)
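
As a rough illustration, the data you can expect on that span looks something like this (field and tool names here are illustrative, not the exact exporter schema):

// Illustrative shape only; not the exact attribute names used by exporters.
const approvalChunk = {
  spanType: 'MODEL_CHUNK',
  chunkType: 'tool-call-approval',
  toolCallId: 'call_abc123',             // which tool call is waiting
  toolName: 'delete_customer_record',    // hypothetical tool that needs approval
  args: { customerId: 'cus_42' },        // the arguments awaiting approval
  resumeSchema: { approved: 'boolean' }, // shape of the expected approval response
};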

Correct token accounting for Langfuse and PostHog (cached tokens split out)

Token usage reporting is now aligned with what Langfuse and PostHog expect for cost calculation:

  • input token count excludes cached tokens
  • cache read and cache creation tokens are reported separately
  • defensive clamping prevents negative counts in edge cases

If you care about accurate spend reporting, this eliminates a common source of confusion. (PR #12465)
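
In practice the split looks roughly like this (numbers and variable names are illustrative, not Mastra's internal field names):

// Illustrative arithmetic only.
const promptTokens = 1200;     // total prompt tokens reported by the provider
const cacheReadTokens = 900;   // tokens served from the provider's prompt cache
const cacheCreationTokens = 0; // tokens written to the cache on this call

// Reported input tokens exclude cached tokens; clamping guards against
// providers that report inconsistent totals.
const inputTokens = Math.max(promptTokens - cacheReadTokens, 0); // 300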

Default tracing tags are preserved

Tags set in an agent’s defaultOptions.tracingOptions are now preserved when merging call-site options, and correctly passed through to exporters. (PR #12325)
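
A small sketch of the behavior, assuming tags live under defaultOptions.tracingOptions as described above (check the tracing docs for the exact option shape):

import { Agent } from '@mastra/core/agent';
import { openai } from '@ai-sdk/openai';

// Sketch: default tags are no longer dropped when call-site options are merged in.
const agent = new Agent({
  name: 'support-agent',
  model: openai('gpt-4o'),
  instructions: 'Answer support questions.',
  defaultOptions: {
    tracingOptions: { tags: ['support', 'prod'] },
  },
});

// Call-site tags merge with the defaults instead of replacing them.
await agent.generate('How do I reset my password?', {
  tracingOptions: { tags: ['beta-user'] },
});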

Server Adapters: Serverless MCP Support + Explicit Route Auth Controls

If you’re deploying Mastra behind a reverse proxy, on edge runtimes, or with custom auth requirements, 1.1.0 makes adapters and route security much more predictable.

Stateless MCP over HTTP for serverless and edge

Express, Fastify, Hono, and Koa adapters now accept mcpOptions, including serverless: true. This lets MCP HTTP transport run statelessly in environments where requests can’t share session state (Cloudflare Workers, Vercel Edge, etc.), without you having to override response handling.

import { MastraServer } from '@mastra/server';

const server = new MastraServer({
  app,
  mastra,
  mcpOptions: {
    serverless: true,
  },
});

(PR #12324)

Explicit requiresAuth per route (defaults to protected)

Adapters and @mastra/server now attach explicit requiresAuth metadata to routes, defaulting to true so every route is protected unless you opt out. You can make a route public by setting requiresAuth: false.

This also improves performance (less route matching overhead) and makes route security easier to audit by reading the route definitions. (PR #12153)
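
For example, a public health check can opt out explicitly. This is a sketch using the usual registerApiRoute custom-route setup; the requiresAuth flag is the new part:

import { registerApiRoute } from '@mastra/core/server';

// Everything defaults to requiresAuth: true; opt individual routes out explicitly.
export const healthRoute = registerApiRoute('/health', {
  method: 'GET',
  requiresAuth: false, // public route, no auth required
  handler: async (c) => c.json({ status: 'ok' }),
});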

Stronger auth enforcement for custom routes (including path params)

Custom routes registered via registerApiRoute now correctly enforce authentication even in dev mode when MASTRA_DEV=true.

We also fixed a subtle but important bug: path parameter routes (like /users/:id) now respect requiresAuth properly using pattern matching, including optional parameters and wildcards. (PR #12339, PR #12143)

Route prefixes behave as documented

If you set a custom API prefix, it now replaces the default /api prefix instead of being prepended to it. Example: prefix: /api/v2 correctly yields /api/v2/agents (not /api/v2/api/agents). Prefixes are also normalized consistently (leading slash, no trailing slash, collapsing multiple slashes). (PR #12221, PR #12295)
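
As a sketch, assuming the prefix is passed where you configure the server adapter (the exact field location may differ; see the adapter docs):

import { MastraServer } from '@mastra/server';

// With a custom prefix, routes are served at /api/v2/agents, not /api/v2/api/agents.
const server = new MastraServer({
  app,
  mastra,
  prefix: '/api/v2',
});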

requestContextSchema: Runtime-Validated, Type-Safe Request Context Everywhere

Mastra already passes requestContext through agents, tools, workflows, and steps, but until now it was easy to end up with missing fields at runtime, or “stringly typed” access that only fails once you’re in production.

In 1.1.0, you can define requestContextSchema (Zod) on tools, agents, workflows, and steps. Mastra will validate it at runtime, and you get typed access during execution.

Tool example

import { createTool } from '@mastra/core/tools';
import { z } from 'zod';

const myTool = createTool({
  id: 'my-tool',
  inputSchema: z.object({ query: z.string() }),
  requestContextSchema: z.object({
    userId: z.string(),
    apiKey: z.string(),
  }),
  execute: async (input, context) => {
    const userId = context.requestContext?.get('userId');
    return { ok: true, userId };
  },
});

Agent example, with RequestContext.all

RequestContext.all provides a convenient typed object of all validated values:

import { Agent } from '@mastra/core/agent';
import { z } from 'zod';
import { openai } from '@ai-sdk/openai';

const agent = new Agent({
  name: 'my-agent',
  model: openai('gpt-4o'),
  requestContextSchema: z.object({
    userId: z.string(),
    featureFlags: z
      .object({
        debugMode: z.boolean().optional(),
        enableSearch: z.boolean().optional(),
      })
      .optional(),
  }),
  instructions: ({ requestContext }) => {
    const { userId, featureFlags } = requestContext.all;

    const base = `You are a helpful assistant. The current user ID is: ${userId}.`;
    return featureFlags?.debugMode ? `${base} Debug mode is enabled.` : base;
  },
  tools: ({ requestContext }) => {
    const { featureFlags } = requestContext.all;

    const tools: Record<string, any> = {};
    if (featureFlags?.enableSearch) {
      tools['web_search_preview'] = openai.tools.webSearchPreview();
    }

    return tools;
  },
});

Workflow and step schemas

import { createWorkflow, createStep } from '@mastra/core/workflows';
import { z } from 'zod';

const workflow = createWorkflow({
  id: 'my-workflow',
  inputSchema: z.object({ data: z.string() }),
  requestContextSchema: z.object({
    tenantId: z.string(),
  }),
});

const step = createStep({
  id: 'my-step',
  inputSchema: z.object({ data: z.string() }),
  outputSchema: z.object({ result: z.string() }),
  requestContextSchema: z.object({
    userId: z.string(),
  }),
  execute: async ({ requestContext }) => {
    const userId = requestContext?.get('userId');
    return { result: `hello ${userId}` };
  },
});

workflow.then(step).commit();

This also flows into Mastra Studio: when an entity defines requestContextSchema, Studio shows a Request Context tab with a form so you can supply values before running. On the runtime side, we also fixed propagation through agent networks and nested workflow execution, and made RequestContext.toJSON safer by skipping non-serializable values. (PR #12418, PR #12303, PR #12471, PR #12428)

Breaking Changes

Google embedding router removes deprecated text-embedding-004 (PR #12346): Google shut down text-embedding-004 on January 14, 2026. If you were using it via the model router, switch to google/gemini-embedding-001.

// Before
const embeddingModel = 'google/text-embedding-004';

// After
const embeddingModel = 'google/gemini-embedding-001';

Other Notable Updates

  • Stored agents now use thin metadata + version snapshots: Agent records store metadata only (id, status, activeVersionId, authorId, etc.); configuration lives in version rows, enabling full history and rollback. List endpoints return resolved agents, and updates can auto-create versions with retention handling (PR #12485)
  • Dynamic agent management (CRUD + versioning): End-to-end support across storage backends, server endpoints, and the client SDK, plus compare/restore workflows and Studio UI improvements (PR #12379)
  • Model router: native providers and correct package routing: Providers using non-default AI SDK packages (like @ai-sdk/anthropic) now route correctly, and cerebras, togetherai, and deepinfra are supported as native SDK providers (PR #12488)
  • ModelRouterLanguageModel now propagates supportedUrls: Prevents unnecessary URL downloading and base64 conversion when providers support URLs natively (OpenAI, Anthropic, Google, Mistral, etc.). If you relied on Mastra downloading internal URLs, you may need to switch to base64 or public URLs (PR #12038)
  • Working memory respects readOnly: Working memory can be loaded as context without allowing updates; the updateWorkingMemory tool is omitted in read-only mode (PR #12167)
  • Tool input validation handles null optional fields: If an LLM sends null for optional fields, validation retries with the nulls stripped (optional accepts undefined, nullable still behaves as nullable); see the sketch after this list (PR #12433)
  • Agent network improvements: Abort support for network calls, better error feedback when the routing agent returns malformed JSON, output processors applied to saved messages, and the routing agent now gets user-configured processors to avoid context blowups (PR #12419, PR #12213, PR #12329, PR #12259)
  • MastraVoice typing relaxed: You can pass any MastraVoice implementation directly to Agent.voice without wrapping in CompositeVoice (PR #12477)
  • Model loop stream supports activeTools: You can constrain which tools are available during streaming LLM execution via ModelLoopStreamArgs (PR #12220)
  • Custom routes included in OpenAPI spec: registerApiRoute routes now show up in generated OpenAPI documentation (PR #11786)
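
As a quick illustration of the optional-vs-nullable distinction mentioned above (the schema is hypothetical):

import { z } from 'zod';

const inputSchema = z.object({
  query: z.string(),
  // optional accepts undefined; if the LLM sends null here, validation retries with the null stripped
  limit: z.number().optional(),
  // explicitly nullable fields still accept null as-is
  cursor: z.string().nullable().optional(),
});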

That's all for @mastra/core@1.1.0!

Happy building! 🚀

Shane Thomas

Shane Thomas is the founder and CPO of Mastra. He co-hosts AI Agents Hour, a weekly show covering news and topics around AI agents. Previously, he was in product and engineering at Netlify and Gatsby. He created the first course as an MCP server and is kind of a musician.
