Announcing Mastra 0.10

Breaking changes merit breaking news. We are shipping a new version of Mastra with a handful of changes to simplify things, adjust naming conventions, and ensure that people can deploy Mastra anywhere. Announcing Mastra 0.10.

The following changes are going out today (May 21st) and are not backwards-compatible with previous Mastra releases (hence 'breaking' changes). Let’s dig in…

vNext workflows

We've given workflows a major overhaul based on user feedback. vNext brings several improvements that largely address three big needs: stronger control flow, better type safety, and multi-engine support. (We also wrote a whole blog post on vNext here.)

Before


Previously you got less type safety and had to access input values through the context, casting types to get IDE support.

import { Step, Workflow } from "@mastra/core/workflows";
import { z } from "zod";

const logCatName = new Step({
  id: "logCatName",
  outputSchema: z.object({
    rawText: z.string(),
  }),
  execute: async ({ context }) => {
    const name = context?.getStepResult<{ name: string }>("trigger")?.name;
    console.log(`Hello, ${name} 🐈`);
    return { rawText: `Hello ${name}` };
  },
});

export const logCatWorkflow = new Workflow({
  name: "log-cat-workflow",
  triggerSchema: z.object({
    name: z.string(),
  }),
});

logCatWorkflow.step(logCatName).commit();

After


Now you get strong typing and can access inputData directly, which means better IDE support, stronger control flow, and an improved dev experience.

import { createStep, createWorkflow } from "@mastra/core/workflows";
import { z } from "zod";

const logCatName = createStep({
  id: "logCatName",
  inputSchema: z.object({
    name: z.string(),
  }),
  outputSchema: z.object({
    rawText: z.string(),
  }),
  execute: async ({ inputData }) => {
    console.log(`Hello, ${inputData.name} 🐈`);
    return { rawText: `Hello ${inputData.name}` };
  },
});

export const logCatWorkflow = createWorkflow({
  id: "log-cat-workflow",
  inputSchema: z.object({
    name: z.string(),
  }),
  outputSchema: z.object({
    rawText: z.string(),
  }),
  steps: [logCatName],
})
  .then(logCatName)
  .commit();

Vector Store Changes

If your application uses any of our vector stores, you'll need to update your code to match the new API. These changes are designed to make the API more consistent, improve type safety, and provide better developer experience.

Positional Arguments

All vector stores that previously used positional arguments now use object parameters for their public-facing vector methods. This change makes the API more consistent and less error-prone.

Before



await vectorDB.createIndex(indexName, 3, "cosine");
await vectorDB.upsert(indexName, [[1, 2, 3]], [{ test: "data" }]);
await vectorDB.query(indexName, [1, 2, 3], 5);

After



await vectorDB.createIndex({
  indexName,
  dimension: 3,
  metric: "cosine",
});

await vectorDB.upsert({
  indexName,
  vectors: [[1, 2, 3]],
  metadata: [{ test: "data" }],
});

await vectorDB.query({
  indexName,
  queryVector: [1, 2, 3],
  topK: 5,
});

Method Renaming

The methods updateIndexById and deleteIndexById have been renamed to updateVector and deleteVector, respectively, to better reflect their purpose.

Before



await vectorDB.updateIndexById(testIndexName, idToBeUpdated, update);

After



await vectorDB.updateVector({
  indexName: testIndexName,
  id: idToBeUpdated,
  update,
});
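
deleteIndexById follows the same pattern. Here's a quick sketch, assuming deleteVector takes the same object-parameter shape as updateVector (the idToBeDeleted variable below is just illustrative):

await vectorDB.deleteVector({
  indexName: testIndexName,
  id: idToBeDeleted,
});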

PG Vector Specific Changes

Constructor Changes

The string constructor for PgVector has been removed in favor of object parameters, providing a more consistent and type-safe API.

Before



const pgVector = new PgVector(process.env.POSTGRES_CONNECTION_STRING!);

After



const pgVector = new PgVector({
  connectionString: process.env.POSTGRES_CONNECTION_STRING,
});

Removed Methods

The defineIndex method has been removed from PgVector; use buildIndex instead.

Before



await vectorDB.defineIndex(indexName, "cosine", { type: "flat" });

After



await vectorDB.buildIndex({
  indexName: indexName,
  metric: "cosine",
  indexConfig: { type: "flat" },
});

PG Storage Specific Changes

The schema parameter has been replaced with schemaName in the constructor for better clarity.

Before



const pgStore = new PostgresStore({
  connectionString: process.env.POSTGRES_CONNECTION_STRING,
  schema: customSchema,
});

After


const pgStore = new PostgresStore({
  connectionString: process.env.POSTGRES_CONNECTION_STRING,
  schemaName: customSchema,
});

@mastra/memory 0.3.x -> 0.10.0

If your application relies on the previous default Memory configuration, uses semantic recall, or utilizes working memory, you'll need to update your code. The goal of these changes is to reduce package size, make configuration more explicit, and lower resource usage and latency.

Default Embedder

The default embedder previously included in @mastra/memory has been moved to a separate package, @mastra/fastembed. This change reduces package size and addresses issues with deploying to platforms like Vercel and Cloudflare due to the built-in embedder's large dependencies.

If you were relying on the default embedder, you now need to explicitly configure an embedder.

Before



import { Memory } from "@mastra/memory";
const memory = new Memory({
  // this was the implicit default
  // embedder: fastembed
});

After


To continue using the FastEmbed model, first install the package (npm install @mastra/fastembed), then add it to your code:

import { Memory } from "@mastra/memory";
import { fastembed } from "@mastra/fastembed";

const memory = new Memory({
  embedder: fastembed,
});

Or use another embedder like OpenAI or Gemini:

import { Memory } from "@mastra/memory";
import { openai } from "@ai-sdk/openai";

const memory = new Memory({
  embedder: openai.embedding("text-embedding-3-small"),
});

Default Storage and Vector Adapters

Default storage and vector adapters have been removed from Memory, so you now need to configure these explicitly. If you omit storage, Memory will inherit the storage configuration from your Mastra instance. This change makes the configuration more explicit and helps avoid unexpected behavior when deploying to platforms like Cloudflare.

Before


LibSQL was previously used for both storage and vector adapters by default:

import { Memory } from "@mastra/memory";
const memory = new Memory({
  // these were added by default
  // storage: new LibSQLStore(),
  // vector: new LibSQLVector(),
});

After


Now you need to explicitly define them:

import { Memory } from "@mastra/memory";
import { LibSQLStore, LibSQLVector } from "@mastra/libsql";

// Option 1: Set storage and vector directly on memory
const memory = new Memory({
  storage: new LibSQLStore({ url: "file:./mastra.db" }),
  vector: new LibSQLVector({ connectionUrl: "file:./mastra.db" }),
  options: {
    semanticRecall: true,
  },
});

// Option 2: Let memory inherit storage from the Mastra instance, but configure vector explicitly if you're using semanticRecall
import { Mastra } from "@mastra/core";

const mastra = new Mastra({
  storage: new LibSQLStore({ url: "file:./mastra.db" }),
});

const memory = new Memory({
  vector: new LibSQLVector({ connectionUrl: "file:./mastra.db" }),
  options: {
    semanticRecall: true,
  },
});

If you're not using semanticRecall, you don't need to configure a vector store.
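
For example, a minimal Memory that only keeps recent conversation history could look like this (a sketch assuming local LibSQL storage; adapt the url to your setup):

import { Memory } from "@mastra/memory";
import { LibSQLStore } from "@mastra/libsql";

// No semanticRecall, so no vector store or embedder is required
const memory = new Memory({
  storage: new LibSQLStore({ url: "file:./mastra.db" }),
  options: {
    lastMessages: 10,
  },
});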

Working Memory

Working memory now uses tool calls by default instead of adding content to the text response. This change improves compatibility with data streaming and provides a more structured way for models to update working memory. It's especially important if you're using toDataStream(), as the previous text-stream mode was difficult to use with data streaming.

Before



import { Memory } from "@mastra/memory";

const memory = new Memory({
  options: {
    workingMemory: {
      enabled: true,
      use: "text-stream", // when working memory was enabled this was the default
      template: `...`,
    },
  },
});

After


Now when workingMemory is enabled, the previous use: "tool-calls" behavior is the only option, so the use setting has been removed.

import { Memory } from "@mastra/memory";
const memory = new Memory({
  options: {
    workingMemory: {
      enabled: true,
      template: `...`,
    },
  },
});

Implicit Memory Settings

The default settings have changed to more reasonable values, as the previous defaults often surprised users. lastMessages was previously set to 40 and is now lowered to 10. semanticRecall was enabled by default and must now be explicitly turned on. generateTitle was enabled by default and triggered an additional LLM call on the first user message, asking the LLM to generate a conversation thread title; this introduced extra latency that was hard to trace, so it's now disabled by default.

Before


These were the implicit defaults:

import { Memory } from "@mastra/memory";

const memory = new Memory({
  options: {
    lastMessages: 40,
    semanticRecall: {
      topK: 2,
      messageRange: 2,
    },
    threads: {
      generateTitle: true,
    },
  },
});

After


These are now the implicit defaults (if not specified):

import { Memory } from "@mastra/memory";

const memory = new Memory({
  options: {
    lastMessages: 10,
    semanticRecall: false,
    threads: {
      generateTitle: false,
    },
  },
});

@mastra/core 0.9.x -> 0.10.0

We aimed to provide good defaults for @mastra/core to ensure users had an excellent out-of-the-box experience. We chose SQLite as the default storage for traces, evals, and memory, and Pino as the default logger. Initially, this approach worked well, allowing users to quickly set up Mastra for development with a fully functional agent and workflow. However, it soon became evident that these defaults posed challenges when users attempted to deploy Mastra agents to production and cloud providers.

No more default storage

You now need to install a storage provider and connect it to the Mastra instance. Our npm create mastra experience will include @mastra/libsql by default.

Before



import { Mastra } from "@mastra/core";
import { weatherAgent } from "./agents";

export const mastra = new Mastra({
  agents: {
    weatherAgent,
  },
});

After


Install any supported storage provider and hook it into the Mastra instance.

import { Mastra } from "@mastra/core";
import { LibSQLStore } from "@mastra/libsql";
import { weatherAgent } from "./agents";

const storage = new LibSQLStore({
  url: "file:./mastra.db",
});

export const mastra = new Mastra({
  agents: {
    weatherAgent,
  },
  storage,
});

Removal of default Pino logger

Mastra previously used Pino, a fast, production-grade logger, as its default. Some providers, like Cloudflare, do not play well with Pino, so we've decided to remove it as the default logger and fall back to the native console logger, which means the log format will change. Our recommended logger for environments like Vercel, Netlify, or Mastra Cloud is still Pino, and Mastra Cloud will ship Pino as a default, similar to our getting-started experience with create mastra.

Before



import { Mastra } from "@mastra/core";
import { weatherAgent } from "./agents";

export const mastra = new Mastra({
  agents: {
    weatherAgent,
  },
});

After


Install any supported logger and hook it into the Mastra instance:

import { Mastra } from "@mastra/core";
import { weatherAgent } from "./agents";
import { PinoLogger } from "@mastra/loggers";

export const mastra = new Mastra({
  agents: {
    weatherAgent,
  },
  logger: new PinoLogger({ name: "Mastra", level: "debug" }),
});

@mastra/core moved to Peer Dependencies

We've noticed that users often end up with multiple copies of @mastra/core when upgrading their Mastra dependencies or mixing versions of Mastra components. By moving @mastra/core to a peer dependency across all packages, we ensure that the only copy of @mastra/core is the one installed in your project itself, which should already be the case.

@mastra/core default import will only include Mastra

Warnings have been added when importing components directly from @mastra/core instead of using a subpath, and we will be removing all exports except for the Mastra class itself. This change ensures that embedding Mastra into an existing application only pulls in the components you actually use.

Before



import { Mastra, MastraStorage } from "@mastra/core";
import { weatherAgent } from "./agents";

class CustomStorage extends MastraStorage {}

export const mastra = new Mastra({
  agents: {
    weatherAgent,
  },
});

After



import { Mastra } from "@mastra/core";
import { MastraStorage } from "@mastra/core/storage";
import { weatherAgent } from "./agents";

class CustomStorage extends MastraStorage {}

export const mastra = new Mastra({
  agents: {
    weatherAgent,
  },
});

@mastra/deployer 0.3.x -> 0.10.0

We are removing the deploy command from our deployers. Previously, you could deploy a project using mastra deploy, which would trigger the vendor's CLI to run the deployment. Most vendors build on each git commit and use their own tools for building and deploying. This change removes redundant tasks and simplifies the deployment process.
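
In practice, this means your deployer still lives on the Mastra instance; only the CLI deploy step goes away. A minimal sketch, assuming the Vercel deployer with default options (check your deployer package's docs for provider-specific constructor options):

import { Mastra } from "@mastra/core";
import { VercelDeployer } from "@mastra/deployer-vercel";
import { weatherAgent } from "./agents";

export const mastra = new Mastra({
  agents: {
    weatherAgent,
  },
  // The deployer prepares the build output; the actual deployment now runs
  // through the vendor's own tooling (for example, on each git push).
  deployer: new VercelDeployer(),
});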
