The TypeScript Agent Framework
From the team that brought you Gatsby: prototype and productionize AI features with a modern JavaScript stack.
Loved by builders
It's the easiest & most dev-friendly SDK for building AI agents I've seen.
/agents
Build intelligent agents that execute tasks, access your data sources, and maintain persistent memory.
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";

// `memory` and `chefWorkflow` are configured elsewhere in the app.
const chefAgent = new Agent({
  name: 'Chef Agent',
  instructions:
    "You are Michel, a practical and experienced home chef " +
    "who helps people cook great meals.",
  model: openai('gpt-4o-mini'),
  memory,
  workflow: { chefWorkflow }
});
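Once constructed, the agent can be called directly. A minimal usage sketch, assuming Mastra's generate method on Agent (the exact response shape may vary by version):

// Ask the agent for help and read back its text reply.
const reply = await chefAgent.generate(
  "What can I cook with pasta, garlic, and olive oil?"
);
console.log(reply.text);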
Switch between AI providers by changing a single line of code using the AI SDK.
Combine long-term memory with recent messages for more robust agent recall.
Bootstrap, iterate, and eval prompts in a local playground with LLM assistance.
Allow agents to call your functions, interact with other systems, and trigger real-world actions, as sketched below.
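As a sketch of that last bullet, here is one way to hand the agent a custom function as a tool. It assumes Mastra's createTool helper from @mastra/core/tools, a tools field on the Agent config, and zod for the input schema; the recipe lookup itself is a hypothetical stand-in for your own code.

import { Agent } from "@mastra/core/agent";
import { createTool } from "@mastra/core/tools";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

// Hypothetical tool: the agent can call this to look up recipes.
const recipeLookup = createTool({
  id: "recipe-lookup",
  description: "Find recipes that use the given ingredients",
  inputSchema: z.object({ ingredients: z.array(z.string()) }),
  execute: async ({ context }) => ({
    // Swap in a real data source; hardcoded for the sketch.
    recipes: [`Something simple with ${context.ingredients.join(", ")}`],
  }),
});

const chefWithTools = new Agent({
  name: "Chef Agent",
  instructions: "You are Michel, a practical and experienced home chef.",
  model: openai("gpt-4o-mini"),
  tools: { recipeLookup },
});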
/workflows
Durable graph-based state machines with built-in tracing, designed to execute complex
sequences of LLM operations.
workflow
  .step(llm)
  .then(decider)
  .after(decider)
  .step(success)
  .step(retry)
  .after([success, retry])
  .step(finalize)
  .commit();
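To make the chain above concrete, here is a sketch of how those steps might be defined, assuming the Step and Workflow classes from Mastra's earlier graph-based workflow API; the step bodies and option names are illustrative and can differ between versions.

import { Workflow, Step } from "@mastra/core/workflows";
import { z } from "zod";

// Each node in the graph is a Step with an id and an execute function.
const llm = new Step({
  id: "llm",
  execute: async () => ({ draft: "generated text" }), // call your model here
});
const decider = new Step({
  id: "decider",
  execute: async () => ({ ok: true }), // decide which branch should run
});
const success = new Step({ id: "success", execute: async () => ({ done: true }) });
const retry = new Step({ id: "retry", execute: async () => ({ done: false }) });
const finalize = new Step({ id: "finalize", execute: async () => ({ status: "complete" }) });

const workflow = new Workflow({
  name: "content-pipeline",
  triggerSchema: z.object({ topic: z.string() }),
});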
[Workflow graph: llm → decider, then conditional (when) branches to success or retry, with finalize after both]
Simple semantics for branching, chaining, merging, and conditional execution, built on XState.
Pause execution at any step, persist state, and resume when a human in the loop triggers it (see the sketch after this list).
Stream step completion events to users for visibility into long-running tasks.
Create flexible architectures: embed your agents in a workflow; pass workflows as tools to your agents.
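The pause-and-resume bullet works roughly like this sketch: a step suspends itself, the run's state is persisted, and a later resume call picks it back up. It assumes a suspend callback passed to the step's execute function and createRun/start/resume methods on the workflow, in the spirit of Mastra's earlier API; treat every name and argument shape here as illustrative.

import { Workflow, Step } from "@mastra/core/workflows";

// A step that pauses the run until a reviewer approves.
const approval = new Step({
  id: "approval",
  execute: async ({ context, suspend }) => {
    if (!context?.approved) {
      await suspend(); // persist state and stop here until resumed
    }
    return { approved: true };
  },
});

const reviewFlow = new Workflow({ name: "review-flow" });
reviewFlow.step(approval).commit();

const run = reviewFlow.createRun();
await run.start();

// Later, when a human clicks "approve" in your UI:
await run.resume({ stepId: "approval", context: { approved: true } });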
/rag
Equip agents with the right context. Sync data from SaaS tools. Scrape the web.
Pipe it into a knowledge base and embed, query, and rerank.
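A minimal ingestion sketch for that pipeline, assuming Mastra's MDocument helper from @mastra/rag alongside the AI SDK's embedMany; the chunking options and embedding model are illustrative, and storing or reranking the vectors is left to your vector store of choice.

import { MDocument } from "@mastra/rag";
import { embedMany } from "ai";
import { openai } from "@ai-sdk/openai";

// 1. Wrap raw text (a scraped page, a SaaS export, ...) in a document.
const doc = MDocument.fromText("Slow-roasting tomatoes concentrates their flavor...");

// 2. Split it into retrieval-sized chunks.
const chunks = await doc.chunk({ strategy: "recursive", size: 512, overlap: 50 });

// 3. Embed each chunk, then write the vectors to your knowledge base.
const { embeddings } = await embedMany({
  model: openai.embedding("text-embedding-3-small"),
  values: chunks.map((chunk) => chunk.text),
});
console.log(`Embedded ${embeddings.length} chunks`);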
/ops
Track inputs and outputs for every step of every workflow run. See each agent tool call
and decision. Measure context, output, and accuracy in evals, or write your own.
Measure and track accuracy, relevance, token costs, latency, and other metrics.
Test agent and workflow outputs using rule-based and statistical evaluation methods (see the sketch below).
Agents emit OpenTelemetry traces for faster debugging and application performance monitoring.
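For the eval bullets, here is a small scoring sketch, assuming the AnswerRelevancyMetric from @mastra/evals/llm; the metric name, constructor, and result shape may differ by version, and the question and answer are made up.

import { openai } from "@ai-sdk/openai";
import { AnswerRelevancyMetric } from "@mastra/evals/llm";

// LLM-judged metric: how relevant is the output to the input?
const metric = new AnswerRelevancyMetric(openai("gpt-4o-mini"));

const result = await metric.measure(
  "What is a quick weeknight pasta?", // input
  "Aglio e olio: pasta, garlic, olive oil, and chili flakes." // agent output
);
console.log(result.score); // normalized score you can track over time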
Keep tabs on what we're shipping
The evolution of AgentNetwork
We've unified multi-agent coordination under a new .network() primitive: read about our journey, the experiments, and how you can use it to orchestrate collaborative agents simply.
Abhi Aiyer · Oct 10, 2025
Introducing Mastra Model Router: 600+ models, one API, zero package installs
Access 600+ LLM models from 40+ providers with a single string. Full TypeScript autocomplete turns your IDE into a model search engine.
Tyler Barnes · Oct 9, 2025
Announcing our $13m seed round from YC, pg, Gradient, Amjad, Guillermo, Balaji, and 120+ others
Mastra secures $13m from a coalition of legendary investors to fuel AI agent development and production.
Sam Bhagwat · Oct 8, 2025
Migration Guide: VNext to Standard APIs
Learn how to migrate from Mastra’s VNext streaming methods to the new standard APIs, with details on renaming, compatibility, and key differences between AI SDK v4 and v5.
Sam Bhagwat · Oct 6, 2025
Mastra Changelog 2025-10-03
New model router and automatic model fallbacks.
Shane Thomas · Oct 3, 2025