Upgrade to Mastra v1

This guide walks you through the breaking changes in pre-v1 versions of Mastra and helps you upgrade to Mastra v1.

Use your package manager to update your project's versions. Be sure to update all Mastra packages at the same time.

info

Versions mentioned in the headings refer to the @mastra/core package. Where relevant, versions of other Mastra packages are called out in the detailed description.

All Mastra packages have a peer dependency on @mastra/core so your package manager can inform you about compatibility.

Migrate to v0.23 (unreleased)

note

This version isn't released yet, but we're documenting changes as we make them.

Changed: Thread Title Generation Location and Default

Breaking Changes:

  1. generateTitle has been moved from threads.generateTitle to the top-level of memory options
  2. The default value has changed from true to false
  3. Using threads.generateTitle will now throw an error

Why these changes?

  • Simplify the API by moving generateTitle to the top level where it logically belongs
  • Avoid unexpected LLM API calls and associated costs
  • Give developers explicit control over when title generation occurs
  • Improve predictability of memory behavior

Migration:

If your application uses threads.generateTitle, you must update it to use the top-level generateTitle option:

// Before (v0.22 and earlier)
const agent = new Agent({
  memory: new Memory({
    options: {
      threads: {
        generateTitle: true, // ❌ This will now throw an error
      },
    },
  }),
});

// After (v0.23+)
const agent = new Agent({
  memory: new Memory({
    options: {
      generateTitle: true, // ✅ Now at the top level
    },
  }),
});

If your application relied on the old default behavior (automatic title generation), explicitly enable it:

// Before (implicit true in v0.22)
const agent = new Agent({
  memory: new Memory({
    // generateTitle was true by default
  }),
});

// After (explicit true in v0.23+)
const agent = new Agent({
  memory: new Memory({
    options: {
      generateTitle: true, // Explicitly enable title generation
    },
  }),
});

You can also customize the model and instructions used for title generation:

const agent = new Agent({
  memory: new Memory({
    options: {
      generateTitle: {
        model: openai("gpt-4o-mini"),
        instructions:
          "Generate a concise title (max 5 words) that summarizes the conversation",
      },
    },
  }),
});

Changed: Memory Default Settings

The default settings for semantic recall have been optimized based on RAG research:

  • topK increased from 2 to 4 - Retrieves the 4 most relevant messages instead of 2
  • messageRange changed from {before: 2, after: 2} to {before: 1, after: 1} - Includes 1 message before and after each retrieved message instead of 2

Impact:

When semantic recall is enabled, you'll now retrieve up to 12 messages total (4 × 3) instead of 10 messages (2 × 5). This provides ~8% better accuracy while only increasing message count by 20%.
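As a quick sanity check on those totals: each of the topK matches is returned together with its context window, so the retrieved count is topK * (before + 1 + after). A small sketch (the helper name is mine, not a Mastra API):

```typescript
// Each semantic-recall match comes with its surrounding context window,
// so the upper bound on retrieved messages is topK * (before + 1 + after).
// (Overlapping windows can make the actual count lower.)
function totalRetrieved(topK: number, before: number, after: number): number {
  return topK * (before + 1 + after);
}

console.log(totalRetrieved(4, 1, 1)); // new defaults → 12
console.log(totalRetrieved(2, 2, 2)); // old defaults → 10
```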

Action Required:

If you were relying on the previous defaults and want to maintain the old behavior, explicitly set these values:

const agent = new Agent({
  memory: new Memory({
    options: {
      semanticRecall: {
        topK: 2,
        messageRange: { before: 2, after: 2 },
      },
    },
  }),
});

If you're not using semantic recall (it's disabled by default), no changes are needed.

Removed: Deprecated APIs

Agent Execution Methods

  • Removed generateVNext() method. Use generate() instead.
  • Removed streamVNext() method. Use stream() instead.

Agent Default Options Methods

Methods have been renamed to clarify which are for legacy vs current APIs:

  • getDefaultGenerateOptions() → getDefaultGenerateOptionsLegacy()
  • getDefaultStreamOptions() → getDefaultStreamOptionsLegacy()
  • getDefaultVNextStreamOptions() → getDefaultStreamOptions()

Output Options

  • Removed deprecated output field from generate()/stream() options. Use structuredOutput.schema instead:
// Before
const result = await agent.generate(messages, {
  output: z.object({
    answer: z.string(),
  }),
});

// After
const result = await agent.generate(messages, {
  structuredOutput: {
    schema: z.object({
      answer: z.string(),
    }),
  },
});

Abort Signal

  • Removed modelSettings.abortSignal. Use the top-level abortSignal option instead:
// Before
const result = await agent.stream(messages, {
  modelSettings: {
    abortSignal: controller.signal,
    temperature: 0.7,
  },
});

// After
const result = await agent.stream(messages, {
  abortSignal: controller.signal,
  modelSettings: {
    temperature: 0.7,
  },
});
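The controller in the example above is the platform-standard AbortController, not a Mastra API; a minimal sketch of wiring one up with a timeout:

```typescript
// Create a controller that aborts automatically after 30 seconds.
const controller = new AbortController();
const timer = setTimeout(() => controller.abort(), 30_000);

// Pass `controller.signal` as the top-level `abortSignal` option.
// You can also abort manually at any point, then clean up the timer:
controller.abort(); // idempotent; safe to call more than once
clearTimeout(timer);

console.log(controller.signal.aborted); // → true
```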

MCP tools

  • Removed elicitation and extra from the top-level arguments. Use the mcp property instead:
import { createTool } from "@mastra/core/tools";
import { z } from "zod";

export const accountBalance = createTool({
  id: "account-balance",
  description: "Fetches the current account balance for a given account",
  inputSchema: z.object({
    accountId: z.string(),
  }),
  outputSchema: z.object({
    balance: z.number(),
  }),
  execute: async ({ context, mcp }) => {
    if (mcp) {
      await checkAuth(mcp.extra.authInfo);

      const result = await mcp.elicitation.sendRequest({
        message: `Is it ok to fetch the account balance for account ${context.accountId}?`,
        requestedSchema: {
          type: "object",
          properties: {
            confirm: { type: "boolean" },
          },
          required: ["confirm"],
        },
      });

      if (result.action === "accept") {
        return {
          balance: getAccountBalance(context.accountId),
        };
      }
    }
  },
});

Input processors

  • Removed the @mastra/core/agent/input-processors exports. They can now be found at @mastra/core/processors.
  • Removed the InputProcessor type in favour of Processor, which implements a processInput function.
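For orientation, a processor under the new API is an object exposing a processInput function. The interface below is a simplified local stand-in for the shape described above, not the actual type exported from @mastra/core/processors:

```typescript
// Simplified, illustrative stand-in for the real Processor type.
interface Processor {
  name: string;
  processInput?: (args: { messages: string[] }) => string[];
}

// Example processor that redacts email addresses from incoming messages.
const redactEmails: Processor = {
  name: "redact-emails",
  processInput: ({ messages }) =>
    messages.map((m) => m.replace(/\S+@\S+\.\S+/g, "[redacted]")),
};

console.log(redactEmails.processInput!({ messages: ["mail me at a@b.com"] }));
// → ["mail me at [redacted]"]
```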

Migrate to v0.22

Changed: Memory Scope Defaults

@mastra/memory - 0.16.0
@mastra/core - 0.22.0

The default memory scope for both working memory and semantic recall has changed from 'thread' to 'resource'.

Before:

  • Working memory defaulted to scope: 'thread' (isolated per conversation)
  • Semantic recall defaulted to scope: 'thread' (search within current conversation only)

After:

  • Working memory defaults to scope: 'resource' (persists across all user conversations)
  • Semantic recall defaults to scope: 'resource' (search across all user conversations)

Why this change?

  1. Better user experience: Most applications want to remember user information across conversations, not just within a single thread
  2. More intuitive default: Users expect AI agents to "remember" them, which requires resource-scoped memory
  3. Alignment with common use cases: The majority of production applications use resource-scoped memory

Migration:

If you want to maintain the old behavior where memory is isolated per conversation thread, explicitly set scope: 'thread' in your memory configuration:

import { Memory } from "@mastra/memory";
import { LibSQLStore } from "@mastra/libsql";

const memory = new Memory({
  storage: new LibSQLStore({ url: "file:local.db" }),
  options: {
    workingMemory: {
      enabled: true,
      scope: "thread", // Explicitly set to thread-scoped
      template: `# User Profile
- **Name**:
- **Interests**:
`,
    },
    semanticRecall: {
      topK: 3,
      scope: "thread", // Explicitly set to thread-scoped
    },
  },
});

If you want to adopt the new default behavior, you can optionally remove explicit scope: 'resource' declarations as they're now redundant.

Deprecated: format: "aisdk"

The format: "aisdk" option in stream()/generate() methods is deprecated. Use the @mastra/ai-sdk package instead. Learn more in the Using Vercel AI SDK documentation.

Removed: MCP Classes

@mastra/mcp - 0.14.0

  • Removed MastraMCPClient class. Use MCPClient class instead.
  • Removed MCPConfigurationOptions type. Use MCPClientOptions type instead. The API is identical.
  • Removed MCPConfiguration class. Use MCPClient class instead.
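Since the API is identical, migrating is a rename at the import site and constructor call; a sketch assuming your existing options object is otherwise unchanged:

```typescript
// Before:
//   import { MCPConfiguration, type MCPConfigurationOptions } from "@mastra/mcp";
//   const mcp = new MCPConfiguration(options);

// After:
import { MCPClient, type MCPClientOptions } from "@mastra/mcp";

const options: MCPClientOptions = {
  servers: {
    // ...your existing server definitions, unchanged
  },
};
const mcp = new MCPClient(options);
```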

Removed: CLI flags & commands

mastra - 0.17.0

  • Removed the mastra deploy CLI command. Use the deploy instructions for your platform instead.
  • Removed the --env flag from the mastra build command. To start the build output with a custom env file, use mastra start --env <env> instead.
  • Removed the --port flag from mastra dev. Use server.port in the new Mastra() configuration instead.
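The server.port replacement mentioned above goes in the Mastra constructor; a minimal, hedged sketch (other configuration omitted, the port value is a placeholder):

```typescript
import { Mastra } from "@mastra/core";

export const mastra = new Mastra({
  server: {
    port: 4111, // replaces `mastra dev --port 4111`
  },
});
```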

Migrate to v0.21

No changes needed.

Migrate to v0.20