We’ve been busy smoothing out the agent building experience, especially around code edits, tool visibility in the Harness UI, and memory that stays coherent when conversations get long.
Release: @mastra/core@1.6.0
Let's dive in:
## AST-Based Workspace Edit Tool (`mastra_workspace_ast_edit`)
String-based edits are fast, but they can be brittle when you’re doing real refactors. In 1.6.0, Mastra introduces an AST-powered workspace edit tool, `mastra_workspace_ast_edit`, built for structural code transformations.
It uses AST analysis to do things like identifier renames, import management (add, remove, merge), and pattern-based replacements with metavariables, which means edits are far less likely to break formatting or miss edge cases.
Even better, the tool is automatically available when `@ast-grep/napi` is installed in your project, so you can opt into AST refactors just by adding the dependency.
```typescript
const workspace = new Workspace({
  filesystem: new LocalFilesystem({ basePath: '/my/project' }),
});
const tools = createWorkspaceTools(workspace);

// Rename all occurrences of an identifier
await tools['mastra_workspace_ast_edit'].execute({
  path: '/src/utils.ts',
  transform: 'rename',
  targetName: 'oldName',
  newName: 'newName',
});

// Add an import (merges into existing imports from the same module)
await tools['mastra_workspace_ast_edit'].execute({
  path: '/src/app.ts',
  transform: 'add-import',
  importSpec: { module: 'react', names: ['useState', 'useEffect'] },
});

// Pattern-based replacement with metavariables
await tools['mastra_workspace_ast_edit'].execute({
  path: '/src/app.ts',
  pattern: 'console.log($ARG)',
  replacement: 'logger.debug($ARG)',
});
```

On the server side, workspace tools like `ast_edit` are now detected at runtime based on available dependencies, so tools that are not actually usable (because a dependency is missing) will not be advertised to agents.
If you want to build agents that can safely modernize codebases, perform sweeping renames, or standardize imports, AST edits are the building block you’ve been waiting for. (PR #13233)
## Harness UX + Built-ins: Streaming Tool Argument Previews and Task Tools
When an agent calls tools, the slowest part often isn’t the tool itself; it’s waiting to see what the model is even trying to do. In 1.6.0, all tool renderers in the Harness now stream argument previews in real time.
That means tool names, file paths, commands, and even edit diffs can appear immediately while the model is still generating the tool call, instead of showing up only after the full JSON is complete.
Under the hood, this is powered by partial JSON parsing, and it’s enabled automatically for all Harness-based agents, no configuration required.
A few examples of what you’ll see:
- Generic tools show live key/value argument previews as args stream in
- The edit tool renders a bordered diff preview as soon as `old_str` and `new_str` are available
- The write tool streams syntax-highlighted file content in a bordered box while args arrive
- Find files shows the glob pattern in the pending header
- Task write streams items directly into the pinned task list component in real time
This is especially helpful during debugging or demos, since you can spot incorrect paths, commands, or edit targets instantly. (PR #13328)
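To make the mechanism concrete, here’s a hedged sketch of partial-JSON previewing. This is not Mastra’s actual parser; `previewPartialJson` is a hypothetical helper that repairs an incomplete JSON argument string (closing open strings and brackets, trimming dangling keys) so it can be parsed and rendered before the model finishes generating the call.

```typescript
// Minimal sketch of partial-JSON argument previewing (illustrative only,
// not Mastra's implementation): repair an incomplete JSON string by
// closing any open strings/objects/arrays, then try to parse it.
function previewPartialJson(partial: string): Record<string, unknown> | null {
  let inString = false;
  let escaped = false;
  const stack: string[] = [];
  for (const ch of partial) {
    if (inString) {
      if (escaped) escaped = false;
      else if (ch === "\\") escaped = true;
      else if (ch === '"') inString = false;
      continue;
    }
    if (ch === '"') inString = true;
    else if (ch === "{") stack.push("}");
    else if (ch === "[") stack.push("]");
    else if (ch === "}" || ch === "]") stack.pop();
  }
  let repaired = partial;
  if (inString) repaired += '"'; // close an unterminated string
  repaired = repaired
    .replace(/,\s*"[^"]*"$/, "")    // drop a dangling, just-started key
    .replace(/"[^"]*"\s*:\s*$/, "") // drop a key still waiting for its value
    .replace(/,\s*$/, "");          // drop a trailing comma
  while (stack.length) repaired += stack.pop()!;
  try {
    return JSON.parse(repaired) as Record<string, unknown>;
  } catch {
    return null; // not recoverable yet; wait for the next chunk
  }
}
```

When repair fails mid-token, returning `null` lets the renderer simply wait for the next streamed chunk instead of showing broken output.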
### Built-in task tools: `task_write` and `task_check`
Mastra also makes structured task tracking a first-class Harness capability by adding `task_write` and `task_check` as built-in tools. They’re automatically injected into every agent call, so you no longer have to remember to register them manually.
That gives your agents a consistent way to keep a task list updated during longer workflows, and a reliable mechanism to check whether everything is done before concluding.
```typescript
// Agents can call task_write to create/update a task list
await tools['task_write'].execute({
  tasks: [
    {
      content: 'Fix authentication bug',
      status: 'in_progress',
      activeForm: 'Fixing authentication bug',
    },
    {
      content: 'Add unit tests',
      status: 'pending',
      activeForm: 'Adding unit tests',
    },
  ],
});

// Agents can call task_check to verify all tasks are complete before finishing
await tools['task_check'].execute({});
// Returns: { completed: 1, inProgress: 0, pending: 1, allDone: false, incomplete: [...] }
```

If you’re building agents that need to stay organized across many tool calls, this is a big quality-of-life upgrade, both in the UI and in how agents can self-manage progress. (PR #13344)
## Observational Memory Continuity Improvements (Suggested Continuation + Current Task)
Observational Memory (OM) is designed to keep agents useful even when context windows get tight. The painful moment is when activation happens and the agent loses its thread, especially the “what was I about to say?” part.
In 1.6.0, OM continuity is improved by preserving:
- `suggestedContinuation` (the agent’s intended next response)
- `currentTask` (the active task the agent thinks it is working on)
These fields now persist across activations, including through storage adapters, so the agent can pick up naturally even after older messages are compressed out of the window. This shows up most clearly in longer chats where activation triggers more often.
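To make this concrete, here’s a hedged sketch (the type and function names below are illustrative, not Mastra’s actual adapter API) of how preserved continuity fields let an agent resume naturally after activation:

```typescript
// Hypothetical shapes, for illustration only -- the real Mastra storage
// adapter API may differ. The point: continuity fields survive activation.
interface ActivationResult {
  compressedSummary: string;       // what replaced the compressed messages
  suggestedContinuation?: string;  // what the agent was about to say
  currentTask?: string;            // what the agent believes it is working on
}

// Rebuild a post-activation preamble so the agent can pick up its thread
// even though older messages were compressed out of the window.
function continuityPreamble(r: ActivationResult): string {
  const parts = [`Summary of earlier conversation: ${r.compressedSummary}`];
  if (r.currentTask) {
    parts.push(`You are currently working on: ${r.currentTask}`);
  }
  if (r.suggestedContinuation) {
    parts.push(`You were about to respond with: ${r.suggestedContinuation}`);
  }
  return parts.join("\n");
}
```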
There are also several improvements to make activation land closer to your retention target and reduce runaway observer output:
- priority handling is improved: user messages and task completions are always high priority
- chunk selection biases slightly toward overshooting rather than undershooting retention targets
- safeguards prevent activation from consuming too much context (95% ceiling and 1000-token floor)
- activation can aggressively reduce context when pending tokens exceed the `blockAfter` buffer
- `bufferActivation` now supports absolute token values (>= 1000) in addition to ratios
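The absolute-vs-ratio rule for `bufferActivation` might be interpreted like this. Only the 1000-token cutoff and the two value kinds come from the release notes; the function name and the `contextWindowTokens` parameter are assumptions for illustration:

```typescript
// Hedged sketch, not Mastra's internal code: interpret a bufferActivation
// setting. Values >= 1000 are read as an absolute token budget; smaller
// values keep the original ratio semantics (fraction of the context window).
function bufferActivationTokens(value: number, contextWindowTokens: number): number {
  return value >= 1000 ? value : Math.floor(value * contextWindowTokens);
}
```

So a setting like `2000` always means 2000 tokens regardless of model, while `0.1` scales with the context window.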
Storage adapters now return `suggestedContinuation` and `currentTask` on activation, and the in-memory adapter was updated to match the persistent implementations.
If you’ve ever had an agent “forget what it was doing” right after memory activation, these changes make that experience much rarer and far less disruptive. (PR #13354)
## Breaking Changes
Harness methods now use object parameters and standardized naming (PR #13353): All `Harness` class methods were refactored to accept object parameters instead of positional arguments, and several methods were renamed for consistency (for example, `getModes` becomes `listModes`). To migrate, update call sites to pass named fields, and replace renamed methods with their new equivalents.
```typescript
// Before
await harness.switchMode('build');
await harness.sendMessage('Hello', { images });
const modes = harness.getModes();
const models = await harness.getAvailableModels();
harness.resolveToolApprovalDecision('approve');

// After
await harness.switchMode({ modeId: 'build' });
await harness.sendMessage({ content: 'Hello', images });
const modes = harness.listModes();
const models = await harness.listAvailableModels();
harness.respondToToolApproval({ decision: 'approve' });
```

The `HarnessRequestContext` interface methods (`registerQuestion`, `registerPlanApproval`, `getSubagentModelId`) were also updated to use object parameters, so make sure any custom implementations follow the same pattern.
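If you maintain a custom context implementation, the migration looks roughly like this. The parameter shapes below are hypothetical (the release notes only say these methods now take object parameters), so check the actual `HarnessRequestContext` interface for the exact fields:

```typescript
// Hypothetical object-parameter signatures, for illustration only; the
// real HarnessRequestContext interface may carry different fields.
interface MyRequestContext {
  registerQuestion(params: { question: string }): void;
  registerPlanApproval(params: { planId: string }): void;
  getSubagentModelId(params: { subagentId: string }): string;
}

const ctx: MyRequestContext = {
  registerQuestion({ question }) {
    console.log(`question asked: ${question}`);
  },
  registerPlanApproval({ planId }) {
    console.log(`plan approved: ${planId}`);
  },
  getSubagentModelId({ subagentId }) {
    // Illustrative mapping; a real implementation would consult config.
    return subagentId === "researcher" ? "model-a" : "model-default";
  },
};
```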
## Other Notable Updates
- Opt-in cross-process thread locking: Added optional `threadLock` callbacks to `HarnessConfig` so you can prevent concurrent access to the same thread across processes during `.selectOrCreateThread()`, `.createThread()`, and `.switchThread()` (PR #13334)
- Vercel AI Gateway fixes: Removed conflicting overrides that caused incorrect API key resolution and fixed failures when using router string formats like `vercel/openai/gpt-oss-120b` (PR #13291, PR #13287)
- Processor graph schema warnings: Fixed recursive schema warnings by unrolling entries to a depth of 3 levels (PR #13292)
- Observational Memory TUI streaming updates: Fixed OM status not updating in real time by adding missing streaming handlers; also added `.switchObserverModel()` and `.switchReflectorModel()` so model changes emit events correctly (PR #13330)
- Git worktree thread scoping: Threads are now auto-tagged with project path and filtered on resume to avoid resuming the wrong worktree’s thread (PR #13343)
- Stream error crash fix: Prevented unhandled `Controller is already closed` errors when streams error after downstream cancellation by checking controller state before enqueue/close/error (PR #13142)
- Provider-executed tools in parallel: Fixed provider-executed tools (like Anthropic `web_search`) causing stream bail when running alongside regular tools by providing a fallback mapping result (PR #13126)
- Schema compatibility for Groq: Fixed the Groq provider missing optional-to-nullable transformations, which could cause 400s when models omitted optional tool params (PR #13303)
- Observability reliability: Fixed spans being dropped before exporter initialization, and fixed a `deepClean()` crash in bundler environments that rewrote `Set` construction (PR #12936, PR #13322)
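As an example of where the new `threadLock` hooks fit: the callback shape below is assumed for illustration (the release notes don’t spell out the `HarnessConfig` signature), and the in-process `Set` stands in for the file lock, Redis key, or database row a real cross-process implementation would use.

```typescript
// Assumed callback shape, for illustration only -- the actual threadLock
// API in HarnessConfig may differ. The idea: the Harness calls your lock
// around thread selection so two processes can't grab the same thread.
type ThreadLock = {
  acquire: (threadId: string) => Promise<boolean>; // false => already held
  release: (threadId: string) => Promise<void>;
};

// Naive in-process example; a real implementation must persist lock state
// outside the process for the cross-process guarantee to hold.
const held = new Set<string>();
const threadLock: ThreadLock = {
  async acquire(threadId) {
    if (held.has(threadId)) return false;
    held.add(threadId);
    return true;
  },
  async release(threadId) {
    held.delete(threadId);
  },
};
```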
That's all for @mastra/core@1.6.0!
Happy building! 🚀
