The TypeScript Agent Framework
From the team that brought you Gatsby: prototype and productionize AI features with a modern JavaScript stack.
import { Agent } from '@mastra/core/agent';
import { openai } from '@ai-sdk/openai';

const chefAgent = new Agent({
  name: 'Chef Agent',
  instructions:
    "You are Michel, a practical and experienced home chef" +
    " who helps people cook great meals.",
  model: openai('gpt-4o-mini'),
  memory,
  workflow: { chefWorkflow }
});
Loved by builders
It's the easiest & most dev-friendly SDK for building AI agents I've seen.
/agents
Build intelligent agents that execute tasks, access your data sources, and maintain persistent memory.
Switch between AI providers by changing a single line of code using the AI SDK.
Combine long-term memory with recent messages for more robust agent recall.
Bootstrap, iterate, and eval prompts in a local playground with LLM assistance.
Allow agents to call your functions, interact with other systems, and trigger real-world actions (see the sketch after this list).
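Putting those points together, here is a hedged sketch of an agent with a custom tool and a swappable model. The getRecipe tool, its schema, and the prompt are illustrative stand-ins rather than anything shipped with Mastra; Agent and createTool come from @mastra/core.

import { Agent } from '@mastra/core/agent';
import { createTool } from '@mastra/core/tools';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

// Hypothetical tool: agents can call it to fetch structured data.
const getRecipe = createTool({
  id: 'get-recipe',
  description: 'Look up a recipe by dish name',
  inputSchema: z.object({ dish: z.string() }),
  execute: async ({ context }) => {
    // Replace with a real lookup; context holds the validated input.
    return { dish: context.dish, ingredients: ['eggs', 'butter'] };
  },
});

const chefAgent = new Agent({
  name: 'Chef Agent',
  instructions: 'You are Michel, a practical and experienced home chef.',
  // Swap providers with one line, e.g. anthropic('claude-3-5-sonnet-latest') from '@ai-sdk/anthropic'.
  model: openai('gpt-4o-mini'),
  tools: { getRecipe },
});

const reply = await chefAgent.generate('What can I make with eggs and butter?');
console.log(reply.text);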
/workflows
Durable graph-based state machines with built-in tracing, designed to execute complex
sequences of LLM operations.
workflow
  .step(llm)
  .then(decider)
  .after(decider)
  .step(success)
  .step(retry)
  .after([success, retry])
  .step(finalize)
  .commit();
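For readers wondering where workflow, llm, decider, and the other identifiers come from, here is a minimal sketch of how they might be defined. It assumes the Workflow and Step classes behind the chained syntax above (newer Mastra releases expose a createWorkflow/createStep API instead), with illustrative step ids and placeholder execute bodies, so treat it as a shape rather than copy-paste code.

import { Workflow, Step } from '@mastra/core/workflows';
import { z } from 'zod';

// Illustrative step bodies; each step returns data that later steps can
// read back from the shared run context.
const llm = new Step({
  id: 'llm',
  execute: async () => ({ draft: 'first pass from the model' }),
});

const decider = new Step({
  id: 'decider',
  execute: async () => ({ decision: Math.random() > 0.5 ? 'success' : 'retry' }),
});

const success = new Step({ id: 'success', execute: async () => ({ ok: true }) });
const retry = new Step({ id: 'retry', execute: async () => ({ ok: false }) });
const finalize = new Step({ id: 'finalize', execute: async () => ({ done: true }) });

const workflow = new Workflow({
  name: 'llm-pipeline',
  triggerSchema: z.object({ prompt: z.string() }),
});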
Simple semantics for branching, chaining, merging, and conditional execution, built on XState.
Pause execution at any step, persist state, and continue when triggered by a human-in-the-loop.
Stream step completion events to users for visibility into long-running tasks.
Create flexible architectures: embed your agents in a workflow; pass workflows as tools to your agents (sketched just below).
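To make the last two points concrete, here is a hedged sketch of a step that wraps an agent and a step that pauses for human review. It reuses the same Step class assumed in the sketch above; the agent, prompts, and the exact suspend() signature are assumptions, so check the docs for your Mastra version.

import { Agent } from '@mastra/core/agent';
import { Step } from '@mastra/core/workflows';
import { openai } from '@ai-sdk/openai';

const chefAgent = new Agent({
  name: 'Chef Agent',
  instructions: 'You are Michel, a practical home chef.',
  model: openai('gpt-4o-mini'),
});

// Embed an agent in a workflow step: the step's output is the agent's answer.
const draftMenu = new Step({
  id: 'draft-menu',
  execute: async () => {
    const result = await chefAgent.generate('Draft a three-course dinner menu.');
    return { menu: result.text };
  },
});

// Human-in-the-loop: pause here, persist state, and continue once resumed.
const awaitApproval = new Step({
  id: 'await-approval',
  execute: async ({ suspend }) => {
    await suspend(); // run state is persisted until something resumes it
    return { approved: true };
  },
});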
/rag
Equip agents with the right context. Sync data from SaaS tools. Scrape the web.
Pipe it into a knowledge base and embed, query, and rerank.
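A minimal sketch of that ingestion path with Mastra's RAG helpers: chunk a document, embed the chunks through the AI SDK, and pair each embedding with its chunk before upserting into whichever vector store you use. The source text, chunking options, and embedding model are illustrative, and option names can differ between versions.

import { MDocument } from '@mastra/rag';
import { embedMany } from 'ai';
import { openai } from '@ai-sdk/openai';

// Turn raw text (a scraped page, a synced SaaS record, etc.) into a document.
const doc = MDocument.fromText('Long text pulled from your knowledge source...');

// Split it into retrieval-sized chunks.
const chunks = await doc.chunk({
  strategy: 'recursive',
  size: 512,
  overlap: 50,
});

// Embed every chunk in one call via the AI SDK.
const { embeddings } = await embedMany({
  model: openai.embedding('text-embedding-3-small'),
  values: chunks.map((chunk) => chunk.text),
});

// embeddings[i] pairs with chunks[i]; upsert them into your vector store,
// then query and rerank at retrieval time.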
/ops
Track inputs and outputs for every step of every workflow run. See each agent tool call
and decision. Measure context, output, and accuracy in evals, or write your own.
Measure and track accuracy, relevance, token costs, latency, and other metrics.
Test agent and workflow outputs using rule-based and statistical evaluation methods (one example is sketched after this list).
Agents emit OpenTelemetry traces for faster debugging and application performance monitoring.
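As a concrete example of the built-in evals, here is a hedged sketch using an LLM-judged relevancy metric from @mastra/evals; the example input and output are made up, and metric names and return shapes can vary between versions.

import { openai } from '@ai-sdk/openai';
import { AnswerRelevancyMetric } from '@mastra/evals/llm';

// An LLM-judged metric: a model scores how relevant the output is to the input.
const relevancy = new AnswerRelevancyMetric(openai('gpt-4o-mini'));

const result = await relevancy.measure(
  'What can I cook with eggs and butter?', // input given to the agent
  'Try a simple omelette: whisk the eggs, melt the butter, cook gently.', // agent output
);

console.log(result.score); // normalized score, higher means more relevant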
Keep tabs on what we're shipping
Introducing Scorers in Mastra
We're excited to announce the release of scorers in Mastra, a new way to evaluate and rank the quality of your agent's responses.
Yujohn Nattrass
Aug 6, 2025
Building low-latency guardrails to secure your agents
How we built a suite of out-of-the-box input processors and optimized them from 5000ms to under 500ms per request.
Daniel Lew
Jul 30, 2025
Mastra Changelog 2025-07-30
Improvements to Agents, Memory, and more.
Shane Thomas
Jul 30, 2025
How Index Built an AI-First Data Analytics Platform with Mastra
Index is building a data analyst agent that lets users query their data in natural language.
Nico Baier
Jul 25, 2025
Nested streaming support in Mastra
Mastra's nested streaming support provides real-time visibility into agent and workflow execution, with comprehensive cost tracking and a unified messaging interface.
Ward Peeters
Jul 25, 2025