Upgrade to Mastra v1
This guide walks you through the breaking changes in pre-v1 versions of Mastra and helps you upgrade to Mastra v1.
Use your package manager to update your project's versions. Be sure to update all Mastra packages at the same time.
Versions mentioned in the headings refer to the `@mastra/core` package. Where other Mastra packages are affected, their versions are called out in the detailed description.
All Mastra packages have a peer dependency on `@mastra/core`, so your package manager can inform you about compatibility.
Migrate to v0.23 (unreleased)
This version isn't released yet but we're adding changes as we make them.
Changed: Thread Title Generation Location and Default
Breaking Changes:
- `generateTitle` has been moved from `threads.generateTitle` to the top level of the memory options
- The default value has changed from `true` to `false`
- Using `threads.generateTitle` will now throw an error
Why these changes?
- Simplify the API by moving `generateTitle` to the top level where it logically belongs
- Avoid unexpected LLM API calls and associated costs
- Give developers explicit control over when title generation occurs
- Improve predictability of memory behavior
Migration:
If your application uses `threads.generateTitle`, you must update it to use the top-level `generateTitle` option:
```typescript
// Before (v0.22 and earlier)
const agent = new Agent({
  memory: new Memory({
    options: {
      threads: {
        generateTitle: true, // ❌ This will now throw an error
      },
    },
  }),
});
```

```typescript
// After (v0.23+)
const agent = new Agent({
  memory: new Memory({
    options: {
      generateTitle: true, // ✅ Now at the top level
    },
  }),
});
```
If your application relied on the old default behavior (automatic title generation), explicitly enable it:
```typescript
// Before (implicit true in v0.22)
const agent = new Agent({
  memory: new Memory({
    // generateTitle was true by default
  }),
});
```

```typescript
// After (explicit true in v0.23+)
const agent = new Agent({
  memory: new Memory({
    options: {
      generateTitle: true, // Explicitly enable title generation
    },
  }),
});
```
You can also customize the model and instructions used for title generation:
```typescript
const agent = new Agent({
  memory: new Memory({
    options: {
      generateTitle: {
        model: openai("gpt-4o-mini"),
        instructions:
          "Generate a concise title (max 5 words) that summarizes the conversation",
      },
    },
  }),
});
```
Changed: Memory Default Settings
The default settings for semantic recall have been optimized based on RAG research:
- `topK` increased from `2` to `4`: retrieves the 4 most relevant messages instead of 2
- `messageRange` changed from `{ before: 2, after: 2 }` to `{ before: 1, after: 1 }`: includes 1 message before and after each retrieved message instead of 2
Impact:
When semantic recall is enabled, you'll now retrieve up to 12 messages total (4 × 3) instead of 10 messages (2 × 5). This provides ~8% better accuracy while only increasing message count by 20%.
Action Required:
If you were relying on the previous defaults and want to maintain the old behavior, explicitly set these values:
```typescript
const agent = new Agent({
  memory: new Memory({
    options: {
      semanticRecall: {
        topK: 2,
        messageRange: { before: 2, after: 2 },
      },
    },
  }),
});
```
If you're not using semantic recall (it's disabled by default), no changes are needed.
Removed: Deprecated APIs
Agent Execution Methods
- Removed the `generateVNext()` method. Use `generate()` instead.
- Removed the `streamVNext()` method. Use `stream()` instead. Both renames are sketched below.
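Both removals are straight renames of the call site. A minimal sketch, assuming an existing `agent` and a `messages` array:

```typescript
// Before: the VNext methods were the experimental entry points
const generated = await agent.generateVNext(messages);
const streamed = await agent.streamVNext(messages);

// After: generate() and stream() are now the current APIs
const generatedResult = await agent.generate(messages);
const streamedResult = await agent.stream(messages);
```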
Agent Default Options Methods
Methods have been renamed to clarify which are for legacy vs current APIs:
- `getDefaultGenerateOptions()` → `getDefaultGenerateOptionsLegacy()`
- `getDefaultStreamOptions()` → `getDefaultStreamOptionsLegacy()`
- `getDefaultVNextStreamOptions()` → `getDefaultStreamOptions()`
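Note the overlap in these renames: `getDefaultStreamOptions()` still exists, but it now returns defaults for the current `stream()` API rather than the legacy one, so a plain find-and-replace isn't enough. A minimal sketch, assuming an `agent` instance (`await` is used defensively in case these return promises in your version):

```typescript
// Defaults for the legacy generate()/stream() APIs moved behind the *Legacy names
const legacyGenerateDefaults = await agent.getDefaultGenerateOptionsLegacy();
const legacyStreamDefaults = await agent.getDefaultStreamOptionsLegacy();

// getDefaultStreamOptions() now covers the current stream() API
// (previously exposed as getDefaultVNextStreamOptions())
const streamDefaults = await agent.getDefaultStreamOptions();
```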
Output Options
- Removed the deprecated `output` field from `generate()`/`stream()` options. Use `structuredOutput.schema` instead:
```typescript
// Before
const result = await agent.generate(messages, {
  output: z.object({
    answer: z.string(),
  }),
});
```

```typescript
// After
const result = await agent.generate(messages, {
  structuredOutput: {
    schema: z.object({
      answer: z.string(),
    }),
  },
});
```
Abort Signal
- Removed `modelSettings.abortSignal`. Use the top-level `abortSignal` option instead:
```typescript
// Before
const result = await agent.stream(messages, {
  modelSettings: {
    abortSignal: controller.signal,
    temperature: 0.7,
  },
});
```

```typescript
// After
const result = await agent.stream(messages, {
  abortSignal: controller.signal,
  modelSettings: {
    temperature: 0.7,
  },
});
```
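The snippets above assume an `AbortController` created elsewhere. For completeness, a minimal sketch of wiring one up, with a hypothetical 10-second timeout:

```typescript
const controller = new AbortController();

// Abort the stream if it runs longer than 10 seconds (illustrative timeout)
const timeout = setTimeout(() => controller.abort(), 10_000);

const result = await agent.stream(messages, {
  abortSignal: controller.signal,
});

clearTimeout(timeout);
```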
MCP tools
- Removed `elicitation` and `extra` from the top-level tool execution arguments. Use the `mcp` property instead, as in the updated tool below:
```typescript
import { createTool } from "@mastra/core/tools";
import { z } from "zod";

// checkAuth and getAccountBalance are assumed to be defined elsewhere
export const accountBalance = createTool({
  id: "account-balance",
  description: "Fetches the current account balance for a given account",
  inputSchema: z.object({
    accountId: z.string(),
  }),
  outputSchema: z.object({
    balance: z.number(),
  }),
  execute: async ({ context, mcp }) => {
    // `mcp` is only defined when the tool is invoked over MCP
    if (!mcp) {
      throw new Error("This tool must be called via MCP");
    }
    await checkAuth(mcp.extra.authInfo);
    // Ask the MCP client for confirmation before fetching the balance
    const result = await mcp.elicitation.sendRequest({
      message: `Is it ok to fetch the account balance for account ${context.accountId}?`,
      requestedSchema: {
        type: "object",
        properties: {
          confirm: { type: "boolean" },
        },
        required: ["confirm"],
      },
    });
    if (result.action !== "accept") {
      throw new Error("User declined the balance request");
    }
    return {
      balance: getAccountBalance(context.accountId),
    };
  },
});
```
Input processors
- Removed the `@mastra/core/agent/input-processors` exports. They can now be found at `@mastra/core/processors`.
- Removed the `InputProcessor` type in favour of `Processor`, which implements a `processInput` function. See the sketch below.
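A minimal sketch of the new import path. The processor shape shown here is an assumption for illustration; check the types exported from `@mastra/core/processors` for the authoritative signature:

```typescript
import type { Processor } from "@mastra/core/processors";

// Hypothetical processor that inspects incoming messages; the exact
// processInput signature may differ from what is shown here.
const tagInput: Processor = {
  name: "tag-input",
  processInput: async ({ messages }) => {
    console.log(`processing ${messages.length} incoming messages`);
    return messages;
  },
};
```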
Migrate to v0.22
Changed: Memory Scope Defaults
@mastra/memory - 0.16.0
@mastra/core - 0.22.0
The default memory scope for both working memory and semantic recall has changed from 'thread' to 'resource'.
Before:
- Working memory defaulted to `scope: 'thread'` (isolated per conversation)
- Semantic recall defaulted to `scope: 'thread'` (search within the current conversation only)
After:
- Working memory defaults to `scope: 'resource'` (persists across all user conversations)
- Semantic recall defaults to `scope: 'resource'` (search across all user conversations)
Why this change?
- Better user experience: Most applications want to remember user information across conversations, not just within a single thread
- More intuitive default: Users expect AI agents to "remember" them, which requires resource-scoped memory
- Alignment with common use cases: The majority of production applications use resource-scoped memory
Migration:
If you want to maintain the old behavior where memory is isolated per conversation thread, explicitly set `scope: 'thread'` in your memory configuration:
```typescript
import { Memory } from "@mastra/memory";
import { LibSQLStore } from "@mastra/libsql";

const memory = new Memory({
  storage: new LibSQLStore({ url: "file:local.db" }),
  options: {
    workingMemory: {
      enabled: true,
      scope: "thread", // Explicitly set to thread-scoped
      template: `# User Profile
- **Name**:
- **Interests**:
`,
    },
    semanticRecall: {
      topK: 3,
      scope: "thread", // Explicitly set to thread-scoped
    },
  },
});
```
If you want to adopt the new default behavior, you can optionally remove explicit `scope: 'resource'` declarations, as they're now redundant.
Deprecated: `format: "aisdk"`
The `format: "aisdk"` option in the `stream()`/`generate()` methods is deprecated. Use the `@mastra/ai-sdk` package instead. Learn more in the Using Vercel AI SDK documentation.
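If you need to locate affected call sites, the deprecated usage looks like this; the replacement API lives in `@mastra/ai-sdk` and is covered in the linked documentation:

```typescript
// Deprecated: requesting AI SDK-formatted output directly from stream()
const result = await agent.stream(messages, {
  format: "aisdk",
});
```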
Removed: MCP Classes
@mastra/mcp - 0.14.0
- Removed the `MastraMCPClient` class. Use the `MCPClient` class instead.
- Removed the `MCPConfigurationOptions` type. Use the `MCPClientOptions` type instead. The API is identical.
- Removed the `MCPConfiguration` class. Use the `MCPClient` class instead, as sketched below.
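Because the options are identical, migrating is a rename at the construction site. A minimal sketch; the `servers` entry below is a hypothetical placeholder for whatever configuration you already pass:

```typescript
import { MCPClient } from "@mastra/mcp";

// Before: const mcp = new MCPConfiguration({ servers: { ... } });
// After: only the class name changes; the options object carries over unchanged.
const mcp = new MCPClient({
  servers: {
    // Hypothetical stdio server entry; keep your existing entries as-is
    weather: {
      command: "npx",
      args: ["-y", "@example/weather-mcp"],
    },
  },
});
```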
Removed: CLI flags & commands
mastra - 0.17.0
- Removed the `mastra deploy` CLI command. Use the deployment instructions for your individual platform instead.
- Removed the `--env` flag from the `mastra build` command. To start the build output with a custom env file, use `mastra start --env <env>` instead.
- Removed the `--port` flag from `mastra dev`. Use `server.port` on the `new Mastra()` class instead, as shown below.
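A minimal sketch of the `server.port` replacement for the removed `--port` flag, assuming your Mastra instance is exported from your project's entry file (the port value is illustrative):

```typescript
import { Mastra } from "@mastra/core";

export const mastra = new Mastra({
  server: {
    port: 4111, // Previously: mastra dev --port 4111
  },
});
```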
Migrate to v0.21
No changes needed.