Mastra Changelog 2026-02-24

Background process management in workspaces and sandboxes, runtime tool configuration via Workspace.setToolsConfig(), and observational memory reliability and introspection improvements.

Shane Thomas · Feb 24, 2026

We’ve been busy smoothing out a few of the rough edges that show up once you start running real workloads, long-lived sandboxes, and production UIs. The result is a release that makes workspaces more powerful, more controllable, and easier to debug.

Release: @mastra/core@1.7.0

Let's dive in:

Background Process Management in Workspaces & Sandboxes

Workspaces can now spawn and manage long-running background processes inside sandbox environments, which is perfect for dev servers, file watchers, background jobs, and REPL-style workflows. Under the hood, core introduces a SandboxProcessManager abstraction plus a ProcessHandle API for streaming output, waiting on exit, and managing process lifecycle. (PR #13293)

That matters because previously you had two awkward choices: block on every command until it exits, or build your own process tracking outside the sandbox. Now you can start a process once, keep it running, and inspect or kill it later, all through a consistent sandbox API.

Here’s what it looks like at the sandbox layer:

```typescript
// Spawn a background process
const handle = await sandbox.processes.spawn('node server.js');

// Stream output and wait for exit
const result = await handle.wait({
  onStdout: data => console.log(data),
});

// List and manage running processes
const procs = await sandbox.processes.list();
await sandbox.processes.kill(handle.pid);
```

A few implementation details you can rely on:

  • SandboxProcessManager base class: spawn(), list(), get(pid), kill(pid)
  • ProcessHandle base class: stdout/stderr accumulation, streaming callbacks, and .wait()
  • LocalProcessManager for Node.js, wrapping child_process
  • Stream interop via handle.reader / handle.writer
  • executeCommand now has a default implementation built on the process manager (spawn + wait)
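
The last bullet is worth a closer look: once spawn and wait exist, a default executeCommand falls out of composing them. Here is a minimal sketch of that composition, using hypothetical demo classes rather than the actual base classes in core:

```typescript
// Hypothetical stand-ins for the real base classes in @mastra/core, showing
// how a default executeCommand can be composed from spawn() + wait().
interface ProcessResult {
  exitCode: number;
  stdout: string;
}

class DemoProcessHandle {
  constructor(private output: string) {}

  // Invoke the streaming callback, then resolve with the final result.
  async wait(opts?: { onStdout?: (chunk: string) => void }): Promise<ProcessResult> {
    opts?.onStdout?.(this.output);
    return { exitCode: 0, stdout: this.output };
  }
}

class DemoProcessManager {
  // Stand-in for spawning a real child process.
  async spawn(command: string): Promise<DemoProcessHandle> {
    return new DemoProcessHandle(`ran: ${command}`);
  }

  // Default executeCommand: spawn the process, then block until it exits.
  async executeCommand(command: string): Promise<ProcessResult> {
    const handle = await this.spawn(command);
    return handle.wait();
  }
}
```

The design benefit is that any sandbox which implements spawn gets blocking command execution for free, while still being able to override executeCommand if it has a cheaper native path.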

On top of that, workspace tooling got first-class support for background processes and a much nicer terminal-style execution display. execute_command now supports background: true, and there are new tools for inspecting output and terminating processes. (PR #13309)

```typescript
// Example: start a long-running process from a workspace tool call
// (the call returns { pid, ... })
await workspace.tools.execute_command({
  command: "npm run dev",
  background: true,
});

// Later, check output or block until it exits
await workspace.tools.get_process_output({
  pid: 12345,
  wait: false,
});

// Or stop it
await workspace.tools.kill_process({ pid: 12345 });
```

Along the way you also get practical UI improvements like output truncation helpers (tail lines), execution badges that show streaming output and exit codes, killed status, and workspace metadata, all aimed at making sandboxes feel like a real terminal instead of a black box. (PR #13309)
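
The tail-lines truncation idea is simple enough to sketch. This is a hypothetical helper, not the actual Mastra implementation: keep only the last N lines of accumulated output and note how much was dropped.

```typescript
// Hypothetical tail-lines truncation helper: keep the last `maxLines` lines
// of process output and prepend a note about what was dropped.
function tailLines(output: string, maxLines: number): string {
  const lines = output.split('\n');
  if (lines.length <= maxLines) return output;
  const dropped = lines.length - maxLines;
  return [`… (${dropped} earlier lines truncated)`, ...lines.slice(-maxLines)].join('\n');
}
```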

Runtime Tool Configuration Updates via Workspace.setToolsConfig()

You can now enable or disable workspace tools dynamically on an existing Workspace instance using Workspace.setToolsConfig(). Passing undefined resets to the default behavior and re-enables all tools. (PR #13439)

This is a big quality-of-life improvement if you switch modes at runtime, for example “plan only”, “read-only”, tenant-specific restrictions, or progressive enablement after a user confirmation. You no longer need to recreate the workspace (and everything attached to it) just to lock down a couple of capabilities.

```typescript
const workspace = new Workspace({ filesystem, sandbox });

// Disable write tools (e.g., in plan/read-only mode)
workspace.setToolsConfig({
  mastra_workspace_write_file: { enabled: false },
  mastra_workspace_edit_file: { enabled: false },
});

// Re-enable all tools
workspace.setToolsConfig(undefined);
```

If you’re building higher-level agent flows, this gives you a clean way to harden safety boundaries without juggling multiple workspace instances or custom tool registries. (PR #13439)

Observational Memory Reliability & Introspection Improvements

Observational Memory (OM) gets a strong set of improvements in both introspection and stability.

On the introspection side, core adds Harness.getObservationalMemoryRecord(), a public method that returns the full ObservationalMemoryRecord for the current thread, including activeObservations, generationCount, and observationTokenCount. This removes the need to reach into private storage internals just to debug OM behavior. (PR #13395)

```typescript
const record = await harness.getObservationalMemoryRecord();

if (record) {
  console.log(record.activeObservations);
  console.log(record.generationCount);
  console.log(record.observationTokenCount);
}
```

On the reliability side, @mastra/memory fixes several issues that could bite long-running processes hard:

  • A major OM memory leak that could lead to OOM crashes. The default Tiktoken encoder (often ~80–120 MB of heap) is now shared across OM instances instead of being allocated per request. This is the primary fix: previously each request allocated two encoders, which could be retained by async buffering promises. The same PR also fixes a cleanup bug where reflection cycle IDs were not removed correctly. (PR #13425)
  • A PostgreSQL deadlock scenario when parallel agents with different threadIds share the same resourceId. Thread scope now requires a valid threadId and throws a clearer error when it’s missing, plus the DB lock ordering was fixed to prevent lock inversions. (PR #13436)
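
The encoder-sharing fix follows a familiar pattern. Here is a rough sketch of it, with a fake encoder and hypothetical names standing in for the internals of @mastra/memory: allocate one encoder process-wide and hand every caller the same instance.

```typescript
// Fake encoder standing in for Tiktoken's large per-instance allocation;
// the counter just demonstrates that construction happens exactly once.
let constructions = 0;

class DemoEncoder {
  constructor() {
    constructions++;
  }
  countTokens(text: string): number {
    return text.split(/\s+/).length;
  }
}

// Lazily created, process-wide singleton: every caller reuses one instance.
let sharedEncoder: DemoEncoder | undefined;

function getSharedEncoder(): DemoEncoder {
  sharedEncoder ??= new DemoEncoder();
  return sharedEncoder;
}
```

Because the singleton is lazy, processes that never touch OM pay nothing, and processes that do pay the allocation cost once instead of on every request.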

If you’ve been pushing OM in production, these changes should translate directly to fewer mysterious crashes, fewer “stuck” workloads, and much easier debugging when you need to understand what OM is accumulating. (PR #13425, PR #13436, PR #13395)

Breaking Changes

None in @mastra/core@1.7.0.

Other Notable Updates

  • Canonical UI state via HarnessDisplayState: UIs can now read a single snapshot using harness.getDisplayState() instead of reconstructing state from 35+ granular events, reducing duplication and inconsistencies across TUI/web/desktop implementations. (PR #13427)

```typescript
import type { HarnessDisplayState } from '@mastra/core/harness';

harness.subscribe(() => {
  const ds: HarnessDisplayState = harness.getDisplayState();
  renderUI(ds);
});
```
  • Workspace instructions overhaul: Added Workspace.getInstructions() and WorkspaceInstructionsProcessor so agents receive accurate workspace context, injected directly into the agent system message (instead of being embedded into tool descriptions). Workspace.getPathContext() is deprecated in favor of .getInstructions(). Also added an instructions option to LocalFilesystem and LocalSandbox so you can replace or extend default instructions per request (for example by tenant or locale). (PR #13304)

```typescript
const filesystem = new LocalFilesystem({
  basePath: './workspace',
  instructions: ({ defaultInstructions, requestContext }) => {
    const locale = requestContext?.get('locale') ?? 'en';
    return `${defaultInstructions}\nLocale: ${locale}`;
  },
});
```
  • OpenAI schema compatibility fix for agents-as-tools + model router: Tool schemas are now post-processed to ensure all properties have valid type information, fixing OpenAI rejections caused by an auto-injected resumeData field producing a JSON Schema without a type key. (PR #13326)
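
A fix like this amounts to a defensive walk over the schema. The following is an illustrative sketch, not Mastra's actual post-processing code, showing the idea of defaulting any property that lacks a type key:

```typescript
// Simplified, illustrative JSON Schema walk: ensure every node carries a
// `type`, defaulting untyped nodes (like an injected `resumeData`) to 'object'.
type JsonSchemaNode = {
  type?: string;
  properties?: Record<string, JsonSchemaNode>;
  [key: string]: unknown;
};

function ensurePropertyTypes(schema: JsonSchemaNode): JsonSchemaNode {
  const out: JsonSchemaNode = { ...schema, type: schema.type ?? 'object' };
  if (out.properties) {
    out.properties = Object.fromEntries(
      Object.entries(out.properties).map(([name, prop]) => [name, ensurePropertyTypes(prop)]),
    );
  }
  return out;
}
```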

  • stopWhen now receives correct tool results: Fixed stopWhen callbacks seeing empty toolResults on steps; step.toolResults now reflects the tool results present in step.content. (PR #13319)

  • Scorer metadata for Studio: Added hasJudge metadata to scorer records so the Studio can distinguish code-based scorers from LLM-based scorers, and ensured it’s saved across all score-saving paths (evals, hooks, trace scoring, dataset experiments). (PR #13386)

  • Output processors can now stream during finish: Fixed a bug where the writer passed into output processors during final output processing was always undefined, blocking use cases like streaming moderation updates or custom UI events back to the client. (PR #13454)

  • Safer concurrent workspace file edits: Added per-file write locking to workspace tools (edit_file, write_file, ast_edit, delete) so concurrent tool calls targeting the same file are serialized, preventing silent overwrites during parallel edits. (PR #13302)
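
Per-file serialization like this is commonly built on a map of promise chains. A minimal sketch of the technique (illustrative only, not Mastra's implementation):

```typescript
// Illustrative per-path write lock: each path keeps a promise chain, so
// operations against the same path run strictly one after another.
const fileLocks = new Map<string, Promise<unknown>>();

function withFileLock<T>(path: string, fn: () => Promise<T>): Promise<T> {
  const previous = fileLocks.get(path) ?? Promise.resolve();
  // Run fn after the previous operation settles, whether it succeeded or failed.
  const next = previous.then(fn, fn);
  // Keep the chain alive even if fn rejects, so later writers still run.
  fileLocks.set(path, next.catch(() => undefined));
  return next;
}
```

Writes to different paths still proceed in parallel; only calls that target the same path queue behind each other.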

That's all for @mastra/core@1.7.0!

Happy building! 🚀

Shane Thomas

Shane Thomas is the founder and CPO of Mastra. He co-hosts AI Agents Hour, a weekly show covering news and topics around AI agents. Previously, he was in product and engineering at Netlify and Gatsby. He created the first course as an MCP server and is kind of a musician.
