Today we're launching the Mastra platform with tools to develop, run, manage, and monitor your agents at scale:
- Mastra Studio: evals, logs, traces, datasets, metrics
- Mastra Server: deploy agents and workflows
- Memory Gateway: state-of-the-art agent memory
These are three separate products, each a game changer in its own right.
## Studio
Studio is far and away the thing Mastra users praise the most. And it's now available in the cloud (as well as self-hostable). Here are some of the features:
Agent Editor lets you edit your Mastra agents without touching code. Edit prompts, swap tools, configure display conditions, save versions, compare side by side, and roll back.
Metrics track cost, errors, and latency across all your runs. See where your budget is going and spot issues before your users do.
Datasets are a collection of the runs you care about: ground truth, edge cases, failures, regressions. Experiments let you replay them against a new prompt, model, or tool config and compare side by side.
Logs and traces capture every agent run. Model calls, tool executions, workflow steps. The full story behind each request.
Studio is part of the open source framework and runs locally with `mastra studio`. On the platform, you can deploy Studio to a URL and share it with your team. Studio Auth secures access, and role-based access control determines what team members can view, edit, and execute.
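The dataset-and-experiment loop described above (replay recorded runs against a new config and compare) can be sketched in a few lines. The types and exact-match scoring here are illustrative assumptions for the sketch, not Mastra's actual API:

```typescript
// Illustrative shapes for a dataset entry and an experiment run.
// These names are assumptions for the sketch, not Mastra's types.
interface DatasetEntry {
  input: string;    // the prompt or request that was recorded
  expected: string; // ground truth or a previously approved output
}

type AgentFn = (input: string) => string;

// Replay every entry against a candidate agent config and report matches.
function runExperiment(entries: DatasetEntry[], candidate: AgentFn) {
  const results = entries.map((e) => ({
    input: e.input,
    expected: e.expected,
    actual: candidate(e.input),
  }));
  const passed = results.filter((r) => r.actual === r.expected).length;
  return { results, passed, total: entries.length };
}

// Usage: compare a trivial "candidate" against two recorded runs.
const dataset: DatasetEntry[] = [
  { input: "2+2", expected: "4" },
  { input: "capital of France", expected: "Paris" },
];
const report = runExperiment(dataset, (q) => (q === "2+2" ? "4" : "Paris"));
console.log(`${report.passed}/${report.total} matched`); // 2/2 matched
```

In practice the scoring step would be an eval (LLM-as-judge, regex, or a custom metric) rather than string equality, but the replay-and-diff shape is the same.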
## Server
A lot of teams told us the same thing: they'd built an agent, and now they wanted to deploy, host, and scale it.
Mastra Server deploys your agents and workflows as a production API. Your agents, workflows, and tools become REST endpoints with OpenAPI docs and Swagger UI. System prompts, tool definitions, and API keys are redacted from streaming responses by default.
```shell
mastra build   # Self-contained output
mastra start
```
Deploy from the CLI with `mastra server deploy`, or set up CI to deploy on every push. You get a domain URL, build logs, environment variable management, and managed storage. The same auth system that secures Studio can also be configured to protect your Server endpoints.
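Once deployed, calling an agent is an ordinary HTTP request. A minimal sketch, assuming a deployed server at a placeholder domain and an agent-generation route of the form `/api/agents/:id/generate` — check the Swagger UI of your own deployment for the exact paths and payload shapes:

```typescript
// Build the URL for a deployed agent's generate endpoint.
// The route shape is an assumption for this sketch; verify it
// against your deployment's OpenAPI docs.
function agentUrl(baseUrl: string, agentId: string): string {
  return `${baseUrl}/api/agents/${encodeURIComponent(agentId)}/generate`;
}

// Send a user message to the agent and return the parsed response.
async function askAgent(baseUrl: string, agentId: string, question: string) {
  const res = await fetch(agentUrl(baseUrl, agentId), {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ messages: [{ role: "user", content: question }] }),
  });
  if (!res.ok) throw new Error(`Agent call failed: ${res.status}`);
  return res.json();
}

// Usage (placeholder domain):
// await askAgent("https://your-app.example.com", "support-agent", "Hi!");
```

Because the endpoints are plain REST, the same call works from Python, curl, or any other HTTP client.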
## Memory Gateway
Unlike other memory providers, Mastra's state-of-the-art scores are no joke: observational memory runs in production at Fortune 500 companies and with ex-DeepMind researchers.
Now it's available outside the Mastra ecosystem through Memory Gateway. Point your LLM calls at the gateway and your agents get persistent memory. Python, TypeScript, or any framework.
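"Pointing your LLM calls at the gateway" typically means swapping the endpoint your client talks to. A hedged sketch of what such a request could look like — the gateway URL, the thread header, and the auth scheme here are all assumptions for illustration, not the documented Memory Gateway interface:

```typescript
// Sketch: route an OpenAI-style chat completion through a memory gateway.
// The URL, header names, and credential below are placeholders.
interface GatewayRequest {
  url: string;
  method: string;
  headers: Record<string, string>;
  body: string;
}

const GATEWAY_URL = "https://memory-gateway.example.com/v1/chat/completions";

function buildGatewayRequest(threadId: string, userMessage: string): GatewayRequest {
  return {
    url: GATEWAY_URL,
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: "Bearer YOUR_GATEWAY_KEY", // placeholder credential
      "X-Thread-Id": threadId, // hypothetical header naming the memory thread
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: userMessage }],
    }),
  };
}

// The request can then be sent with fetch(req.url, req), or the
// equivalent from Python or any other HTTP client.
const req = buildGatewayRequest("user-42", "What did we decide yesterday?");
```

The point of the proxy shape is that memory becomes a property of the endpoint, not the framework, which is why it works from Python, TypeScript, or anything else that speaks HTTP.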
Learn more about Memory Gateway
## Pricing
Studio and Server share the same pricing plan. Memory Gateway has its own.
Both start with a free Starter tier and both have a $250/team/month Teams tier, but what you get for that money is different.
### Studio and Server
Metered on observability events, CPU time, and data egress. Users and deployments are unlimited on every tier.
| | Starter | Teams | Enterprise |
|---|---|---|---|
| Price | Free | $250/team/month | Custom |
| Users | Unlimited | Unlimited | Unlimited |
| Deployments | Unlimited | Unlimited | Unlimited |
| Observability events | 100K | TBD | Custom |
| CPU time | 24 hours | 250 hours | Custom |
| Data egress | 10GB | 100GB | Custom |
### Memory Gateway
Metered on memory tokens, retrieval storage, and stale thread retention.
| | Starter | Teams | Enterprise |
|---|---|---|---|
| Price | Free | $250/team/month | Custom |
| Memory tokens | 100K | 1M | Custom |
| Add-on tokens | $10/1M | $10/1M | $10/1M |
| Retrieval storage | 250MB | 1GB | Custom |
| Stale thread retention | 15 days | 6 months | Custom |
We're putting agents in the hands of every developer. The framework gives you the primitives to build them. The platform gives you the tools to run them at scale. All three products are available today with a free Starter tier. See the full pricing for more details.
