
Optimizing Mastra for Seamless Edge Deployments

Mar 25, 2025

Serverless functions come with strict size limits so they remain lightweight and responsive. Those limits pose a significant challenge when deploying feature-rich applications like Mastra to the edge.

To overcome this challenge, we recently made a handful of optimizations. Together, these optimizations helped us go from 90 MB to 8 MB (when using memory and our Postgres store), improve cold start times, and enhance Mastra’s overall performance.

Let’s take a look at how we did that…

The challenge: Node.js dependencies

One of the biggest hurdles in the Node.js ecosystem is the management of dependencies. A single npm install can inadvertently introduce hundreds—or even thousands!—of transitive dependencies.

Here’s how this often plays out in practice:

  • A simple utility package may pull in complex layers of dependencies.
  • These dependencies often contain code for features we’ll never utilize.
  • Every additional dependency increases the size of our deployment bundle.
  • Many packages ship both browser-specific and Node.js-specific code, even when only one is needed.

[Figure: a visualization of how node_modules become a nightmare]

This "dependency bloat" can quickly lead to exceeding edge function size limits, making successful deployment a challenge without effective optimization.

Our solution: smart bundling techniques

1. Clean up transitive dependencies by pre-bundling

As mentioned above, node_modules can become large and bloated. We optimize dependencies by analyzing which exports are actually used from each package and pre-bundling them, so we don’t need to install every transitive dependency.

Assume the following code:

import { Agent } from '@mastra/core/agent';
import { openai } from '@ai-sdk/openai';

export const myAgent = new Agent({
  name: 'my-agent',
  instructions: `You're a helpful assistant`,
  model: openai('gpt-4o'),
});

By pre-bundling the Agent class, we can remove all unused dependencies, such as @libsql/client, and reduce the bundle size. The pre-bundled code can be found in the .mastra/.build folder.
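To make this concrete, here is roughly what a pre-bundling step could look like as a standalone Rollup config. This is a simplified sketch rather than Mastra’s actual build pipeline: the config file name and the agent.mjs output file are placeholders, and the external list simply shows how a runtime-only package can be kept out of the pre-bundle.

// rollup.prebundle.config.mjs — hypothetical sketch, not Mastra's real build setup
import { nodeResolve } from '@rollup/plugin-node-resolve';
import commonjs from '@rollup/plugin-commonjs';

export default {
  // Bundle a single export path instead of installing its whole dependency tree.
  input: '@mastra/core/agent',
  output: { file: '.mastra/.build/agent.mjs', format: 'esm' },
  plugins: [
    nodeResolve({ preferBuiltins: true }), // resolve bare specifiers from node_modules
    commonjs(),                            // convert CommonJS dependencies to ESM
  ],
  // Packages that should stay as runtime installs are left external.
  external: ['@libsql/client'],
};

Only code reachable from the entry point’s exports ends up in the output file, which is what shrinks the transitive dependency tree.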

Check out the resulting bundle size for yourself!

2. Analyzing the Mastra instance

The deployer option on the Mastra class is only needed when deploying the Mastra server; it isn’t used by Mastra itself at runtime. We analyze the Mastra class and remove the deployer dependency entirely:

import { Mastra } from '@mastra/core/mastra';
import { VercelDeployer } from '@mastra/deployer-vercel';

export const mastra = new Mastra({
  deployer: new VercelDeployer({
    // args
  }),
});

//      ↓ ↓ ↓ ↓ ↓ ↓

import { Mastra } from '@mastra/core/mastra';

export const mastra = new Mastra({});
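Conceptually, this is an AST transform: find the Mastra constructor call, drop the deployer property, and let tree-shaking remove the now-unused import. The plugin below is only a rough sketch of that idea; the plugin name, structure, and helper libraries are our own illustration, not Mastra’s actual bundler code.

// rollup-plugin-strip-deployer.mjs — hypothetical sketch, not Mastra's real implementation
import { walk } from 'estree-walker';
import MagicString from 'magic-string';

export function stripDeployer() {
  return {
    name: 'strip-deployer',
    transform(code, id) {
      if (!code.includes('new Mastra(')) return null;

      const ast = this.parse(code); // Rollup's built-in acorn-based parser
      const s = new MagicString(code);

      walk(ast, {
        enter(node) {
          if (
            node.type === 'NewExpression' &&
            node.callee.type === 'Identifier' &&
            node.callee.name === 'Mastra' &&
            node.arguments[0]?.type === 'ObjectExpression'
          ) {
            for (const prop of node.arguments[0].properties) {
              if (prop.type === 'Property' && prop.key.name === 'deployer') {
                // Remove `deployer: ...` plus a trailing comma if present.
                const end = code[prop.end] === ',' ? prop.end + 1 : prop.end;
                s.remove(prop.start, end);
              }
            }
          }
        },
      });

      return { code: s.toString(), map: s.generateMap({ hires: true }) };
    },
  };
}

Once the property is gone, the VercelDeployer import is no longer referenced, so tree-shaking (covered next) can drop @mastra/deployer-vercel from the bundle entirely.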

3. Remove unused code

Using a bundler like Rollup, we can analyze the code, remove unused exports, and reduce the overall bundle size. This process, known as tree-shaking, eliminates dead code that isn’t used in your application.

Tree-shaking is particularly valuable in larger projects where dependencies might include many features that your application doesn't utilize. The bundler analyzes the import/export statements and only includes code that's actually referenced in the final bundle.

Consider the following example:

// file1.js
export function myFunction() {
  return 'Hello, world!';
}

export function myOtherFunction() {
  return 'Hello, world!';
}

// file2.js
import { myFunction } from './file1';

myFunction();

Without tree-shaking, the final bundled code looks like this:

function myFunction() {
  return 'Hello, world!';
}

function myOtherFunction() {
  return 'Hello, world!';
}

myFunction();

By enabling tree-shaking, we can remove the unused myOtherFunction function:

function myFunction() {
  return 'Hello, world!';
}

myFunction();
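Tree-shaking is enabled by default in Rollup, so a minimal config is enough to reproduce the example above (the file names match the example; the output path is arbitrary):

// rollup.config.mjs — minimal config for the two-file example above
export default {
  input: 'file2.js',
  output: { file: 'dist/bundle.js', format: 'esm' },
  treeshake: true, // Rollup's default; shown explicitly for clarity
};

Running npx rollup -c then emits a bundle that contains only myFunction and the call to it.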

Next steps

We believe there’s still room for improvement and are currently considering:

  • Removing default dependencies such as @libsql/client and fastembed when they aren’t used.
  • Improving native dependency detection and extraction.

Look out for future changelogs where we’ll share updates on our progress.

And if you’re interested in learning more about our optimization journey or have suggestions for improvement, we’d love to hear from you. Join our Discord or check out our documentation for more technical insights.
