Mastra Agents

Stateful AI agents with any LLM.

Mastra agents use any LLM to reason, decide and act on open-ended tasks. Every agent comes with memory, tool calling, MCP, logging, tracing and eval primitives built in. Build and test agents in a local playground, then deploy to production with the same configuration.

Built-in agent primitives

When you need agents that retain context, call tools and evaluate their own output, Mastra gives you every primitive built in. Tool calling connects agents to external APIs and services. Memory retains conversation history, user data and relevant context across interactions. Logging, tracing and evals surface what happened and why.

Orchestrate agent networks

Single agents handle open-ended tasks well. Mastra lets you compose agents with workflows for increased reliability, or assemble agent networks where agents collaborate on more complex problems. Specify execution paths for increased control or delegate to a team of agents for increased power.

Context and guardrails

Mastra gives you precise control over what agents know and what they can do. Dynamically inject context based on the request, configure access from user identity and enforce guardrails against prompt injection and data leaks. Suspend and resume agent execution for human review.

Python trains,
TypeScript ships.