We've been busy working on the parts of Mastra you feel every day: workspaces that stay fast and predictable, workflows that are easier to debug at scale, and the foundations for secure, multi-user deployments.
Release: @mastra/core@1.9.0
Let's dive in:
Major Workspace & Sandbox upgrades (token-aware output, mounts, process control, LSP resolution)
Workspaces got a big capability boost aimed at making agents more effective in real projects, especially larger repos and monorepos. The focus is on keeping tool output model-friendly, reducing token waste, and giving you better control over sandbox execution.
A few notable improvements:
- Model-safe command output: Workspace sandbox tool results now strip ANSI color codes when sent to the model (via `toModelOutput`), while still streaming colored output to the user. This improves readability for the LLM and cuts noise in context.
- Token-aware truncation (configurable): Tools that can return large results now enforce a token-based output limit (the default was reduced to 2000 tokens; `list_files` uses 1000). It uses tiktoken-style counting, so limits map more closely to what models actually consume.
- Smarter truncation per tool: File and search-style tools truncate differently than process tools, so you keep the most useful parts (for example, head+tail output for long command logs).
- Gitignore-aware results: `list_files` and `grep` now respect `.gitignore` by default, filtering out common token sinks like `node_modules/` and `dist/`. If you explicitly target an ignored path, it still works.
- Custom tool names: You can now expose workspace tools under custom names with `WorkspaceToolConfig.name`, which is especially handy when you want your prompts and tool vocabulary to match your app domain.
- Local symlink mounts in LocalSandbox: Mounted paths are now accessible via symlink, making local sandbox runs behave more like remote sandboxes.
- Sandbox cancellation and background process streaming: Commands can be cancelled via `abortSignal`, and long-running background processes can stream output through `onStdout`, `onStderr`, and `onExit`.
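To make the cancellation and streaming hooks concrete, here is a self-contained sketch with a toy stand-in for the sandbox runner. Only the option names (`abortSignal`, `onStdout`, `onStderr`, `onExit`) come from this release; the function name and signature below are illustrative assumptions, not the real sandbox API.

```typescript
// Toy stand-in for a sandbox command runner; everything except the
// option names is illustrative.
interface RunOptions {
  abortSignal?: AbortSignal;
  onStdout?: (chunk: string) => void;
  onStderr?: (chunk: string) => void;
  onExit?: (code: number) => void;
}

function runBackground(chunks: string[], opts: RunOptions): number {
  for (const chunk of chunks) {
    if (opts.abortSignal?.aborted) {
      opts.onExit?.(130); // cancelled; SIGINT-style exit code by convention
      return 130;
    }
    opts.onStdout?.(chunk); // stream each chunk to the caller as it arrives
  }
  opts.onExit?.(0);
  return 0;
}

// Stream output to completion.
const seen: string[] = [];
const exitCode = runBackground(['compiling...', 'done'], {
  onStdout: (chunk) => seen.push(chunk),
  onExit: (code) => seen.push(`exit:${code}`),
});
// seen is now ['compiling...', 'done', 'exit:0'] and exitCode is 0

// Cancel before anything streams.
const controller = new AbortController();
controller.abort();
const cancelled = runBackground(['never streamed'], { abortSignal: controller.signal });
// cancelled is 130
```

The same pattern applies to the real sandbox: pass an `AbortSignal` to cut a command short, and wire the three callbacks for incremental output instead of waiting on a single buffered result.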
Token-aware output limits
```typescript
import { Workspace } from '@mastra/core/workspace';

const workspace = new Workspace({
  tools: {
    mastra_workspace_execute_command: {
      maxOutputTokens: 5000, // override default limit
    },
  },
});
```

Remap workspace tool names exposed to the LLM
```typescript
import { Workspace, LocalFilesystem } from '@mastra/core/workspace';

const workspace = new Workspace({
  filesystem: new LocalFilesystem({ basePath: './project' }),
  tools: {
    mastra_workspace_read_file: { name: 'view' },
    mastra_workspace_grep: { name: 'search_content' },
    mastra_workspace_edit_file: { name: 'string_replace_lsp' },
  },
});
```

Configure LSP binary resolution (monorepo friendly)
Language server diagnostics are now much more reliable across monorepos, global installs, and custom toolchains. You can override binaries directly, add search paths, or (optionally) fall back to a package runner.
```typescript
import { Workspace } from '@mastra/core/workspace';

const workspace = new Workspace({
  lsp: {
    binaryOverrides: {
      typescript: '/usr/local/bin/typescript-language-server --stdio',
    },
    searchPaths: ['/path/to/my-tool'],
    packageRunner: 'npx --yes',
  },
});
```

These changes are especially useful when you are running agents in sandboxes but developing locally, and you want consistent diagnostics and file tooling behavior without blowing up context windows. (PR #13440, PR #13724, PR #13687, PR #13677, PR #13474, PR #13597)
End-to-end Auth + RBAC across Server, Studio, and Providers
Mastra now includes a pluggable auth system in @mastra/core/auth, plus the server-side wiring needed to make authentication and permissions consistent across Server, Studio, and provider implementations.
At the core are provider interfaces you can implement or adopt:
- `IUserProvider` for user lookup and management
- `ISessionProvider` for session creation/validation and cookies
- `ISSOProvider` for SSO login and callback flows
- `ICredentialsProvider` for username/password flows
You also get default session provider implementations (cookie-based with secure defaults, plus an in-memory provider for dev/test).
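For intuition, here is a minimal sketch of what an in-memory session provider can look like. The actual `ISessionProvider` interface shipped in `@mastra/core/auth` may differ; the `create`/`validate` method names and the `Session` shape below are assumptions made for illustration only.

```typescript
import { randomUUID } from 'node:crypto';

// Hypothetical session shape; the real contract may differ.
interface Session {
  id: string;
  userId: string;
  expiresAt: number;
}

class InMemorySessionProvider {
  private sessions = new Map<string, Session>();

  // Create a session with a TTL (defaults to one hour).
  create(userId: string, ttlMs = 60 * 60 * 1000): Session {
    const session: Session = { id: randomUUID(), userId, expiresAt: Date.now() + ttlMs };
    this.sessions.set(session.id, session);
    return session;
  }

  // Return the session if it exists and has not expired; otherwise null.
  validate(sessionId: string): Session | null {
    const session = this.sessions.get(sessionId);
    if (!session || session.expiresAt <= Date.now()) {
      this.sessions.delete(sessionId); // drop expired entries lazily
      return null;
    }
    return session;
  }
}

const provider = new InMemorySessionProvider();
const session = provider.create('user-123');
// provider.validate(session.id) returns the session; unknown ids return null
```

An in-memory provider like this is only suitable for dev/test; the cookie-based default with secure settings is the production-facing path.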
For Enterprise Edition, @mastra/core/auth/ee adds RBAC, ACL, and license validation. Capabilities are built up from your RBAC and ACL providers:
```typescript
import { buildCapabilities } from '@mastra/core/auth/ee';

const capabilities = buildCapabilities({
  rbac: myRBACProvider,
  acl: myACLProvider,
});
```

On the provider side, new packages like @mastra/auth-cloud, @mastra/auth-studio, and @mastra/auth-workos cover common deployment needs (OAuth/SSO, session management, and RBAC), and Studio UI now supports permission-gated auth screens and components.
Here is what WorkOS-based auth + RBAC configuration looks like:
```typescript
import { Mastra } from '@mastra/core';
import { MastraAuthWorkos, MastraRBACWorkos } from '@mastra/auth-workos';

const mastra = new Mastra({
  server: {
    auth: new MastraAuthWorkos({
      apiKey: process.env.WORKOS_API_KEY,
      clientId: process.env.WORKOS_CLIENT_ID,
    }),
    rbac: new MastraRBACWorkos({
      apiKey: process.env.WORKOS_API_KEY,
      clientId: process.env.WORKOS_CLIENT_ID,
      roleMapping: {
        admin: ['*'],
        member: ['agents:read', 'workflows:*'],
      },
    }),
  },
});
```

If you are building anything multi-tenant or shipping internal tools, having a shared auth contract across Server and Studio means fewer one-off integrations, and fewer surprises when you move from local dev to production. (PR #13163)
Workflow execution path tracking + concurrent-safe workflow snapshot updates
Workflows now expose much better introspection and safer persistence, which matters a lot when workflows branch, resume, or run in parallel.
Execution path tracking
Workflow results now include `stepExecutionPath`, a list of the step IDs that actually ran. It is also available mid-execution (via execution context), and it persists across resume/restart cycles.
```typescript
// Before
const result = await workflow.execute({ triggerData });
// result.stepExecutionPath → undefined

// After
const result = await workflow.execute({ triggerData });
console.log(result.stepExecutionPath);
// → ['step1', 'step2', 'step4']
```

You can use this for debugging conditional branches, understanding resumes, and auditing what happened without scanning logs.
Smaller execution logs
Execution logs are now more compact by deduping payloads. Step outputs are no longer duplicated as the next step's input, which keeps workflow results readable and reduces storage and context pressure.
Concurrent-safe workflow snapshot updates
Storage backends now support atomic workflow updates with:
- `updateWorkflowResults`
- `updateWorkflowState`
- `supportsConcurrentUpdates()`
The workflow engine checks `supportsConcurrentUpdates()` and throws a clear error when a backend cannot safely support concurrent updates (some backends are explicitly not supported). This helps prevent subtle state corruption when multiple tool calls hit the same workflow concurrently.
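To make the contract concrete, here is a self-contained in-memory sketch. The three method names come from this release; the snapshot shape, version counter, and everything else below are assumptions made purely for illustration.

```typescript
// Illustrative snapshot shape; the real storage contract may differ.
interface WorkflowSnapshot {
  version: number;
  state: string;
  results: Record<string, unknown>;
}

class InMemoryWorkflowStore {
  private snapshots = new Map<string, WorkflowSnapshot>();

  supportsConcurrentUpdates(): boolean {
    // A real backend should return true only if it can apply the updates
    // below atomically (e.g. a single SQL UPDATE or a CAS loop).
    return true;
  }

  updateWorkflowResults(runId: string, stepId: string, output: unknown): WorkflowSnapshot {
    const prev = this.snapshots.get(runId) ?? { version: 0, state: 'running', results: {} };
    // Merge a single step's result rather than overwriting the whole
    // snapshot, so concurrent step updates cannot clobber each other.
    const next: WorkflowSnapshot = {
      version: prev.version + 1,
      state: prev.state,
      results: { ...prev.results, [stepId]: output },
    };
    this.snapshots.set(runId, next);
    return next;
  }

  updateWorkflowState(runId: string, state: string): WorkflowSnapshot {
    const prev = this.snapshots.get(runId) ?? { version: 0, state: 'running', results: {} };
    const next = { ...prev, version: prev.version + 1, state };
    this.snapshots.set(runId, next);
    return next;
  }
}

const store = new InMemoryWorkflowStore();
store.updateWorkflowResults('run-1', 'step1', { ok: true });
const snap = store.updateWorkflowResults('run-1', 'step2', { ok: true });
// snap.results now contains both step1 and step2; snap.version is 2
```

The key idea is per-step merging plus an atomicity guarantee from the backend, rather than read-modify-write of an entire snapshot blob.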
These changes pair nicely with the fix for parallel workflow tool calls, where each call now runs with its own workflow run context to avoid duplicated results and incorrect suspension/resumption behavior. (PR #11755, PR #12575, PR #13478)
Breaking Changes
`harness.sendMessage` uses files instead of images (PR #13574): `harness.sendMessage()` now accepts `files` instead of `images`. This supports any file type, preserves filenames end-to-end, and auto-decodes text-based files (`text/*`, `application/json`) so models can actually consume the content.
```typescript
// Before
await harness.sendMessage({
  content: 'Analyze this',
  images: [{ data: base64Data, mimeType: 'image/png' }],
});

// After
await harness.sendMessage({
  content: 'Analyze this',
  files: [
    { data: base64Data, mediaType: 'image/png', filename: 'screenshot.png' },
  ],
});
```

Other Notable Updates
- Network execution callbacks: Added `onStepFinish` and `onError` to `NetworkOptions` for per-step progress monitoring and custom error handling during `agent.network()` execution (PR #13370)
- Harness subagent loop controls: `HarnessSubagent` now supports `maxSteps` and `stopWhen` so spawned subagents can use custom loop limits (PR #13653)
- OpenAI WebSocket transport: Added WebSocket transport for streaming responses, with auto-close and manual transport access (PR #13531)
- Request-scoped context in Harness: Added `requestContext` passthrough to Harness runtime APIs so tools and subagents can receive request-scoped values (PR #13650)
- Unified observability type system: Introduced interfaces for structured logging, metrics, scores, and feedback that flow through contexts alongside tracing, with no-op defaults to avoid forcing migrations (PR #13058)
- Post-construction server configuration: Added `mastra.setServer()` so platform tooling can inject server defaults (for example, auth) after constructing a `Mastra` instance (PR #13729)
- Thread and memory fixes: `Harness.cloneThread()` now resolves dynamic memory factories before cloning, and `Memory.recall()` now consistently returns pagination metadata (PR #13569, PR #13278)
- Workspace and sandbox stability: Fixed detached process stdio issues on some Node versions, improved LocalSandbox spawn error handling, and fixed tilde path expansion in LocalFilesystem and LocalSandbox (PR #13697, PR #13734, PR #13739, PR #13741)
- Tooling correctness: Tool lifecycle hooks now fire correctly for tools created via `createTool()` (PR #13708)
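As a rough sketch of how the per-step callbacks in the first bullet behave: only `onStepFinish` and `onError` are names from the release notes; the execution loop, types, and step functions below are illustrative stand-ins for `agent.network()`, not its real signature.

```typescript
// Hypothetical callback shapes for illustration only.
interface StepResult {
  stepId: string;
  output: string;
}

interface NetworkCallbacks {
  onStepFinish?: (result: StepResult) => void;
  onError?: (stepId: string, error: Error) => void;
}

// Toy loop standing in for network execution: each step either
// finishes (onStepFinish) or fails (onError), and the loop continues.
function runSteps(steps: Record<string, () => string>, callbacks: NetworkCallbacks): void {
  for (const [stepId, step] of Object.entries(steps)) {
    try {
      callbacks.onStepFinish?.({ stepId, output: step() });
    } catch (error) {
      callbacks.onError?.(stepId, error as Error);
    }
  }
}

const finished: string[] = [];
const failed: string[] = [];
runSteps(
  {
    plan: () => 'planned',
    act: () => { throw new Error('boom'); },
  },
  {
    onStepFinish: ({ stepId }) => finished.push(stepId),
    onError: (stepId) => failed.push(stepId),
  },
);
// finished is ['plan'], failed is ['act']
```

The practical payoff is the same in both the sketch and the real API: you get progress reporting and error handling per step instead of a single opaque result at the end.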
That's all for @mastra/core@1.9.0!
Happy building! 🎉
