The TypeScript Agent Framework
From the team that brought you Gatsby: prototype and productionize AI features with a modern JavaScript stack.
const chefAgent = new Agent({
  name: 'Chef Agent',
  instructions:
    "You are Michel, a practical and experienced home chef " +
    "who helps people cook great meals.",
  model: openai('gpt-4o-mini'),
  memory,
  workflow: { chefWorkflow }
});
Loved by builders
It's the easiest & most dev-friendly SDK for building AI agents I've seen.
/agents
Build intelligent agents that execute tasks, access your data sources, and maintain persistent memory.
const chefAgent = new Agent({
  name: 'Chef Agent',
  instructions:
    "You are Michel, a practical and experienced home chef " +
    "who helps people cook great meals.",
  model: openai('gpt-4o-mini'),
  memory,
  workflow: { chefWorkflow }
});
Switch between AI providers by changing a single line of code using the AI SDK (see the sketch after this list).
Combine long-term memory with recent messages for more robust agent recall.
Bootstrap, iterate, and eval prompts in a local playground with LLM assistance.
Allow agents to call your functions, interact with other systems, and trigger real-world actions.
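As an illustration of the single-line provider switch, here is a minimal sketch using the AI SDK's provider packages. The agent config is carried over from the snippet above, and the Claude model ID is a placeholder; swap in whichever model your provider offers.

import { Agent } from '@mastra/core/agent';
import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';

// Switching providers only touches the `model` line.
const gptChef = new Agent({
  name: 'Chef Agent',
  instructions: 'You are Michel, a practical and experienced home chef.',
  model: openai('gpt-4o-mini'),
});

const claudeChef = new Agent({
  name: 'Chef Agent',
  instructions: 'You are Michel, a practical and experienced home chef.',
  model: anthropic('claude-3-5-sonnet-latest'), // placeholder model ID
});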
/workflows
Durable graph-based state machines with built-in tracing, designed to execute complex
sequences of LLM operations.
workflow
  .step(llm)
  .then(decider)
  .after(decider)
  .step(success)
  .step(retry)
  .after([
    success,
    retry
  ])
  .step(finalize)
  .commit();
Simple semantics for branching, chaining, merging, and conditional execution, built on XState.
Pause execution at any step, persist state, and continue when triggered by a human-in-the-loop.
Stream step completion events to users for visibility into long-running tasks.
Create flexible architectures: embed your agents in a workflow (sketched below); pass workflows as tools to your agents.
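A minimal sketch of the embed-an-agent-in-a-workflow pattern, reusing chefAgent from the snippet above. The import path and the Step/Workflow constructor options are assumptions modeled on the chaining API shown earlier, not confirmed signatures.

import { Workflow, Step } from '@mastra/core/workflows'; // assumed import path

// A step that delegates its work to the agent defined earlier.
const planMeal = new Step({
  id: 'planMeal',
  execute: async () => {
    const result = await chefAgent.generate('Plan a three-course dinner for four.');
    return { plan: result.text };
  },
});

const dinnerWorkflow = new Workflow({ name: 'dinner-workflow' });
dinnerWorkflow
  .step(planMeal)
  .commit();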
/rag
Equip agents with the right context. Sync data from SaaS tools. Scrape the web.
Pipe it into a knowledge base and embed, query, and rerank.
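For the embed-and-query half of that pipeline, here is a minimal sketch using the AI SDK's embedding helpers from the 'ai' and '@ai-sdk/openai' packages. The chunk contents, the embedding model choice, and the vector-store wiring are placeholders.

import { openai } from '@ai-sdk/openai';
import { embed, embedMany } from 'ai';

// Embed document chunks before writing them to a knowledge base.
const chunks = [
  'Mastra agents can call tools and keep long-term memory.',
  'Workflows are durable graph-based state machines.',
];
const { embeddings } = await embedMany({
  model: openai.embedding('text-embedding-3-small'),
  values: chunks,
});

// At query time, embed the question and ask your vector store for the
// nearest chunks (store-specific upsert/query calls omitted here).
const { embedding: queryVector } = await embed({
  model: openai.embedding('text-embedding-3-small'),
  value: 'What are Mastra workflows?',
});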
/ops
Track inputs and outputs for every step of every workflow run. See each agent tool call
and decision. Measure context, output, and accuracy in evals, or write your own.
Measure and track accuracy, relevance, token costs, latency, and other metrics.
Test agent and workflow outputs using rule-based and statistical evaluation methods (a hand-rolled example follows this list).
Agents emit OpenTelemetry traces for faster debugging and application performance monitoring.
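To show how small a "write your own" eval can be, here is a hand-rolled, rule-based keyword-coverage scorer. It is a generic TypeScript sketch, not a metric shipped by Mastra.

type EvalResult = { score: number; missing: string[] };

// Score an agent's output by how many required keywords it mentions.
function keywordCoverage(output: string, required: string[]): EvalResult {
  const text = output.toLowerCase();
  const missing = required.filter((keyword) => !text.includes(keyword.toLowerCase()));
  return {
    score: (required.length - missing.length) / required.length,
    missing,
  };
}

// Example: every keyword is present, so the score is 1.
const check = keywordCoverage(
  'Mastra workflows are durable state machines with built-in tracing.',
  ['durable', 'state machine', 'tracing'],
);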
Keep tabs on what we're shipping
Mastra Changelog 2025-07-17
New memory improvements, new CLI templates, reasoning display in playground, and major improvements across the board.
Shane Thomas
Jul 17, 2025
How Kestral Uses Mastra to Turn Company Knowledge Into Action
Kestral leverages Mastra's multi-agent workflows to transform company knowledge into actionable tasks and projects.
Shreeda Segan
Jul 17, 2025
Yes, you can use RAG for agent memory
How we spent $8k to prove RAG isn’t dead.
Tyler Barnes
Jul 17, 2025
How Artifact is Creating 'Cursor for Hardware'
Artifact is building a comprehensive design environment for hardware engineers, including AI copilots to automate the complex, error-prone work of electrical system design.
Shreeda Segan
Jul 16, 2025
WorkOS: From Evaluating Mastra to Teaching It
How WorkOS went from discovering Mastra on GitHub to using it in production and teaching it to hundreds of engineers at AI conferences.
Sam Bhagwat
Jul 15, 2025