The TypeScript Agent Framework
From the team that brought you Gatsby: prototype and productionize AI features with a modern JavaScript stack.
const chefAgent = new Agent({
  name: 'Chef Agent',
  instructions:
    'You are Michel, a practical and experienced home chef ' +
    'who helps people cook great meals.',
  model: openai('gpt-4o-mini'),
  memory,
  workflow: { chefWorkflow },
});
Loved by builders
"It's the easiest & most dev-friendly SDK for building AI agents I've seen."
/agents
Build intelligent agents that execute tasks, access knowledge bases, and maintain
persistent memory within threads.
Switch between AI providers by changing a single line of code using the AI SDK.
Combine long-term memory with recent messages for more robust agent recall.
Bootstrap, iterate, and eval prompts in a local playground with LLM assistance.
Allow agents to call your functions, interact with other systems, and trigger real-world actions.
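A hedged sketch of wiring a tool into an agent, roughly following Mastra's documented `createTool` / `Agent` shape (the import paths, the `get-pantry` tool, and its return value are assumptions for illustration and may vary by version):

```typescript
import { Agent } from '@mastra/core/agent';
import { createTool } from '@mastra/core/tools';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

// A tool the agent can call to reach out into the real world.
// Here it just returns a hard-coded pantry list for illustration.
const getPantry = createTool({
  id: 'get-pantry',
  description: 'List the ingredients currently in the pantry',
  inputSchema: z.object({}),
  execute: async () => ({ ingredients: ['eggs', 'flour', 'butter'] }),
});

const chefAgent = new Agent({
  name: 'Chef Agent',
  instructions: 'You are Michel, a practical and experienced home chef.',
  model: openai('gpt-4o-mini'),
  tools: { getPantry },
});
```

With the tool registered, the model can decide mid-conversation to call `getPantry` before suggesting a recipe.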
/workflows
Durable graph-based state machines with built-in tracing, designed to execute complex
sequences of LLM operations.
workflow
  .step(llm)
  .then(decider)
  .after(decider)
  .step(success)
  .step(retry)
  .after([success, retry])
  .step(finalize)
  .commit();
Simple semantics for branching, chaining, merging, and conditional execution, built on XState.
Pause execution at any step, persist state, and continue when triggered by a human-in-the-loop.
Stream step completion events to users for visibility into long-running tasks.
Create flexible architectures: embed your agents in a workflow; pass workflows as tools to your agents.
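The suspend-and-resume semantics can be illustrated in plain TypeScript; this is a generic sketch of the idea (not Mastra's API): a workflow runs steps in order, pauses when a step requests it, keeps its cursor, and continues when a human supplies input.

```typescript
// A step transforms a string, or signals that the workflow should pause.
const SUSPEND = Symbol('suspend');
type Step = (input: string) => string | typeof SUSPEND;

class ResumableWorkflow {
  private cursor = 0; // persisted position in the step list
  private value: string;

  constructor(private steps: Step[], input: string) {
    this.value = input;
  }

  // Run until done or until a step requests suspension.
  run(): { status: 'done' | 'suspended'; value: string } {
    while (this.cursor < this.steps.length) {
      const out = this.steps[this.cursor](this.value);
      if (out === SUSPEND) return { status: 'suspended', value: this.value };
      this.value = out;
      this.cursor++;
    }
    return { status: 'done', value: this.value };
  }

  // A human-in-the-loop supplies input; execution continues from the cursor.
  resume(humanInput: string): { status: 'done' | 'suspended'; value: string } {
    this.value = humanInput;
    this.cursor++; // move past the suspended step
    return this.run();
  }
}
```

In a real system the cursor and value would be persisted to storage between `run` and `resume`, which is what makes the workflow durable across process restarts.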
*rag
Equip agents with the right context. Sync data from SaaS tools. Scrape the web.
Pipe it into a knowledge base and embed, query, and rerank.
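A toy sketch of the embed, query, and rerank loop, using a bag-of-words vector in place of a real embedding model (every helper name here is hypothetical, not part of Mastra's RAG API):

```typescript
// "Embed" a text as word counts; a real pipeline would call an embedding model.
function embed(text: string): Map<string, number> {
  const vec = new Map<string, number>();
  for (const word of text.toLowerCase().split(/\W+/).filter(Boolean)) {
    vec.set(word, (vec.get(word) ?? 0) + 1);
  }
  return vec;
}

// Cosine similarity between two sparse vectors.
function cosine(a: Map<string, number>, b: Map<string, number>): number {
  let dot = 0, na = 0, nb = 0;
  for (const [w, x] of a) { dot += x * (b.get(w) ?? 0); na += x * x; }
  for (const [, y] of b) nb += y * y;
  return na && nb ? dot / Math.sqrt(na * nb) : 0;
}

// Knowledge base lookup: score every document against the query,
// rerank by similarity, and return the top-k passages.
function queryKnowledgeBase(docs: string[], q: string, topK = 2): string[] {
  const qv = embed(q);
  return docs
    .map((doc) => ({ doc, score: cosine(embed(doc), qv) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK)
    .map((r) => r.doc);
}
```

The agent then receives the top-k passages as context, which is the "right context" the section above describes.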
*ops
Track inputs and outputs for every step of every workflow run. See each agent tool call
and decision. Measure context, output, and accuracy with built-in evals, or write your own.
Measure and track accuracy, relevance, token costs, latency, and other metrics.
Test agent and workflow outputs using rule-based and statistical evaluation methods.
Agents emit OpenTelemetry traces for faster debugging and application performance monitoring.
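A minimal sketch of what a rule-based eval can look like in plain TypeScript (illustrative only, not Mastra's eval API): each rule checks one property of an output, and the score is the fraction of rules that pass.

```typescript
// A named pass/fail check over an agent or workflow output.
type Rule = { name: string; check: (output: string) => boolean };

// Score an output against a rule set; report which rules failed.
function evaluate(output: string, rules: Rule[]): { score: number; failures: string[] } {
  const failures = rules.filter((r) => !r.check(output)).map((r) => r.name);
  return { score: (rules.length - failures.length) / rules.length, failures };
}

// Example rules for a cooking assistant's reply (hypothetical checks).
const rules: Rule[] = [
  { name: 'mentions pasta', check: (o) => /pasta/i.test(o) },
  { name: 'concise', check: (o) => o.length < 200 },
  { name: 'no apology', check: (o) => !/sorry/i.test(o) },
];
```

Statistical methods (similarity to a reference answer, LLM-as-judge scoring) slot into the same shape: a check that returns pass/fail, aggregated into a score per run.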
Keep tabs on what we're shipping
Mastra Changelog 2025-06-20
We added sleep methods to workflows, structured memory to agents, and a new Gladia STT provider.
Shane Thomas · Jun 20, 2025
Why PLAID Japan builds agents on their Google Cloud infrastructure with Mastra
How PLAID Japan migrated from GUI-based AI tools to Mastra for better collaboration and productivity for their engineering team building on Google Cloud.
Sam Bhagwat · Jun 15, 2025
Mastra Changelog 2025-06-13
Cross-thread memory recall, universal schema support, and enhanced workflow observability.
Shane Thomas · Jun 13, 2025
Why Vetnio powers their AI veterinary technician with Mastra
How Vetnio uses Mastra's workflow orchestrator to build specialized veterinary AI assistants.
Shreeda Segan · Jun 11, 2025
Why Fireworks uses Mastra in their agentic runtime
Why Fireworks AI decided to ship AIML, a prompt-based agent framework, using Mastra.
Sam Bhagwat · Jun 10, 2025