Loved by builders, backed by founders
/agents
Build intelligent agents that execute tasks, access knowledge bases, and maintain persistent memory within threads.
import { Agent } from '@mastra/core/agent';
import { openai } from '@ai-sdk/openai';

const chefAgent = new Agent({
  name: 'Chef Agent',
  instructions:
    "You are Michel, a practical and experienced home chef " +
    "who helps people cook great meals.",
  model: openai('gpt-4o-mini'),
  memory,
  workflow: { chefWorkflow },
});
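Once defined, the agent can be prompted directly. A minimal usage sketch (the prompt and logging are illustrative):

const response = await chefAgent.generate(
  'What can I make with pasta, garlic, and olive oil?'
);
console.log(response.text);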
Switch between AI providers by changing a single line of code using the AI SDK; a provider swap is sketched after this list.
Combine long-term memory with recent messages for more robust agent recall.
Bootstrap, iterate, and eval prompts in a local playground with LLM assistance.
Allow agents to call your functions, interact with other systems, and trigger real-world actions through tools, also sketched below.
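As a rough sketch of both ideas, assuming Mastra's createTool and Memory helpers and the AI SDK's Anthropic provider (the tool, its recipe logic, the memory options, and the model ID are illustrative, and exact option names may differ):

import { Agent } from '@mastra/core/agent';
import { createTool } from '@mastra/core/tools';
import { Memory } from '@mastra/memory';
import { anthropic } from '@ai-sdk/anthropic';
import { z } from 'zod';

// A hypothetical tool the agent can call to reach your own systems.
const findRecipe = createTool({
  id: 'find-recipe',
  description: 'Look up a recipe for a given dish',
  inputSchema: z.object({ dish: z.string() }),
  execute: async ({ context }) => {
    // Call your own API, database, or SaaS integration here.
    return { recipe: `A simple weeknight take on ${context.dish}.` };
  },
});

// The same chef agent, pointed at Anthropic instead of OpenAI:
// swapping providers only changes the `model` line.
const chefAgentClaude = new Agent({
  name: 'Chef Agent',
  instructions:
    'You are Michel, a practical and experienced home chef ' +
    'who helps people cook great meals.',
  model: anthropic('claude-3-5-sonnet-20240620'),
  // Blend recent messages with semantic recall over older ones.
  memory: new Memory({
    options: {
      lastMessages: 20,
      semanticRecall: { topK: 3, messageRange: 2 },
    },
  }),
  tools: { findRecipe },
});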
/workflows
Durable graph-based state machines with built-in tracing, designed to execute complex
sequences of LLM operations.
workflow
  .step(llm)
  .then(decider)
  .after(decider)
  .step(success)
  .step(retry)
  .after([success, retry])
  .step(finalize)
  .commit();
[Workflow graph: llm → decider → (when) success | (when) retry → finalize]
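Once committed, the workflow executes as a run. A rough sketch, assuming the workflow defined above and the createRun/start API from Mastra's workflow docs (the trigger data is illustrative):

const { runId, start } = workflow.createRun();

// Kick off the run; each step's result is traced and persisted under this runId.
const result = await start({
  triggerData: { topic: 'weeknight dinner ideas' },
});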

Simple semantics for branching, chaining, merging, and conditional execution, built on XState.
Pause execution at any step, persist state, and resume when triggered by a human in the loop, as sketched after this list.
Stream step completion events to users for visibility into long-running tasks.
Create flexible architectures: embed your agents in a workflow; pass workflows as tools to your agents.
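A rough sketch of that human-in-the-loop pause, assuming the Step and suspend APIs from Mastra's workflow docs (the step name, context shape, and resume call are illustrative and may differ from the exact signatures):

import { Step } from '@mastra/core/workflows';

const approval = new Step({
  id: 'approval',
  execute: async ({ context, suspend }) => {
    // Pause here; the run's state is persisted until someone resumes it.
    if (!context?.approved) {
      await suspend();
      return { status: 'waiting' };
    }
    return { status: 'approved' };
  },
});

// Later, when the reviewer signs off, resume the same run where it stopped:
// await workflow.resume({ runId, stepId: 'approval', context: { approved: true } });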
*rag
Equip agents with the right context. Sync data from SaaS tools. Scrape the web.
Pipe it into a knowledge base and embed, query, and rerank.
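An ingestion pass might look like the sketch below, assuming Mastra's MDocument helper and the AI SDK's embedMany (the source text, chunking options, and embedding model are illustrative):

import { MDocument } from '@mastra/rag';
import { embedMany } from 'ai';
import { openai } from '@ai-sdk/openai';

// Chunk a scraped page, embed the chunks, then upsert the vectors into
// whichever vector store backs your knowledge base.
const doc = MDocument.fromText('...scraped or synced source text...');
const chunks = await doc.chunk({ strategy: 'recursive', size: 512, overlap: 50 });

const { embeddings } = await embedMany({
  model: openai.embedding('text-embedding-3-small'),
  values: chunks.map((chunk) => chunk.text),
});

// embeddings[i] pairs with chunks[i]; store both so queries can be reranked
// against the original text at retrieval time.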
*ops
Track inputs and outputs for every step of every workflow run. See each agent tool call
and decision. Measure context, output, and accuracy in evals, or write your own.
Measure and track accuracy, relevance, token costs, latency, and other metrics; a relevancy check is sketched after this list.
Test agent and workflow outputs using rule-based and statistical evaluation methods.
Agents emit OpenTelemetry traces for faster debugging and application performance monitoring.
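As a rough sketch of such a check, assuming the AnswerRelevancyMetric from @mastra/evals with an OpenAI judge model (the sample question and answer are illustrative):

import { openai } from '@ai-sdk/openai';
import { AnswerRelevancyMetric } from '@mastra/evals/llm';

// Score how relevant an agent's answer is to the question it was asked.
const relevancy = new AnswerRelevancyMetric(openai('gpt-4o-mini'));

const result = await relevancy.measure(
  'How long should I roast a whole chicken?',
  'Roast a whole chicken at 425°F for roughly an hour, until it reaches 165°F inside.',
);

console.log(result.score); // normalized relevancy score for the pair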
Dive deeper, build smarter
Mastra Storage: a flexible storage system for AI Applications
A look at how Mastra's storage system simplifies building AI applications by managing workflows, memory, traces, and eval data with flexible storage backends.
Nik Aiyer
Apr 1, 2025
Mastra Changelog 2025-03-27
Mastra's Latest Updates: Nested Workflows, Voice to Voice, and more
Shane Thomas
Mar 27, 2025
The Voice Connection: Mastra's Speech-to-Speech Capabilities
We've expanded Mastra Voice with real-time speech-to-speech capabilities for Agents.
Yujohn Nattrass
Mar 26, 2025
Optimizing Mastra for Seamless Edge Deployments
Exploring a handful of recent Mastra optimizations that help it run smoothly on edge functions.
Ward Peeters
Mar 25, 2025
Mastra Changelog 2025-03-20
Mastra MCP documentation server, AgentNetwork updates, and more
Sam Bhagwat
Mar 20, 2025