Studio
Studio provides an interactive UI for building, testing, and managing your agents, workflows, and tools. Run it locally during development, or deploy it to production so your team can manage agents, monitor performance, and gain insights through built-in observability.
Add authentication to protect your deployed Studio with login screens, role-based access control, and permission-based UI rendering so you can control what each team member can see and do. You can also create a project in Mastra Cloud for a hosted option.
Start Studio
If you created your application with create mastra, start the development server using the dev script. You can also run it directly with mastra dev.
- npm: npm run dev
- pnpm: pnpm run dev
- Yarn: yarn dev
- Bun: bun run dev
Once the server is running, you can:
- Open the Studio UI at http://localhost:4111 to interact with your agents, workflows, and tools.
- Visit http://localhost:4111/swagger-ui to discover and interact with the underlying REST API.
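You can also exercise the REST API directly from the command line. The routes and agent name below are illustrative, a sketch assuming the default port; confirm the actual paths your version exposes in the Swagger UI:

```shell
# List the agents registered on your Mastra instance
# (path is illustrative; verify it at /swagger-ui).
curl http://localhost:4111/api/agents

# Send a message to an agent named "myAgent" (hypothetical name).
curl -X POST http://localhost:4111/api/agents/myAgent/generate \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello"}]}'
```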
To run Studio in production, see Studio deployment. To run Studio independently from your Mastra server, use mastra studio.
Primitives
Agents
Chat with your agent directly, dynamically switch models, and tweak settings like temperature and top-p to understand how they affect the output.
When you interact with your agent, you can follow each step of its reasoning, view tool call outputs, and observe traces and logs to see how responses are generated. You can also attach scorers to measure and compare response quality over time.
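The agents you chat with in Studio are the ones registered on your Mastra instance. A minimal sketch of such an agent, assuming the common Agent constructor shape; the name, instructions, and model are placeholders and option names may differ by version:

```typescript
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";

// A minimal agent; once registered on your Mastra instance it appears
// in Studio, where you can chat with it and swap the model at runtime.
export const weatherAgent = new Agent({
  name: "weatherAgent",
  instructions: "You are a helpful weather assistant.",
  model: openai("gpt-4o-mini"),
});
```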
Workflows
Visualize your workflow as a graph and run it step by step with a custom input. During execution, the interface updates in real time to show the active step and the path taken.
When running a workflow, you can also view detailed traces showing tool calls, raw JSON outputs, and any errors that might have occurred along the way.
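A workflow only shows up in the graph view once it is defined and registered. A sketch of a two-step workflow, assuming the createStep/createWorkflow API shape; the step ids and schemas are illustrative:

```typescript
import { createStep, createWorkflow } from "@mastra/core/workflows";
import { z } from "zod";

// Each step becomes a node in Studio's graph view; the active step is
// highlighted while the workflow runs.
const double = createStep({
  id: "double",
  inputSchema: z.object({ value: z.number() }),
  outputSchema: z.object({ value: z.number() }),
  execute: async ({ inputData }) => ({ value: inputData.value * 2 }),
});

const addOne = createStep({
  id: "add-one",
  inputSchema: z.object({ value: z.number() }),
  outputSchema: z.object({ value: z.number() }),
  execute: async ({ inputData }) => ({ value: inputData.value + 1 }),
});

export const mathWorkflow = createWorkflow({
  id: "math-workflow",
  inputSchema: z.object({ value: z.number() }),
  outputSchema: z.object({ value: z.number() }),
})
  .then(double)
  .then(addOne)
  .commit();
```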
Processors
View the input and output processors attached to each agent. The agent detail panel lists every processor by name and type, so you can verify your guardrails, token limiters, and custom processors are wired up correctly before testing.
See processors and guardrails for configuration details.
MCP servers
List the MCP servers attached to your Mastra instance and explore their available tools.
Tools
Run tools on their own to observe behavior and test them before assigning them to an agent. If something goes wrong, re-run a tool in isolation to debug the issue.
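A tool defined like the sketch below can be run standalone from the Tools page before you attach it to an agent. This assumes the createTool API shape; the id, schemas, and execute signature are illustrative:

```typescript
import { createTool } from "@mastra/core/tools";
import { z } from "zod";

// A small, self-contained tool you can invoke in isolation from
// Studio's Tools page with arbitrary test input.
export const addTool = createTool({
  id: "add-numbers",
  description: "Adds two numbers together",
  inputSchema: z.object({ a: z.number(), b: z.number() }),
  outputSchema: z.object({ sum: z.number() }),
  execute: async ({ context }) => ({ sum: context.a + context.b }),
});
```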
Workspaces
Browse the files in your agent's workspace filesystem using a built-in file browser. Switch between workspace mounts, create directories, and view file contents with syntax highlighting. Writable workspaces allow directory creation and file deletion; read-only workspaces are labeled accordingly. The Skills tab lists all discovered skills with their instructions, references, and metadata. Install community skills from skills.sh or remove existing ones.
See workspaces for configuration details.
Request context
Set runtime variables that flow into your agent's instructions and tools through dependency injection. Edit request context as JSON or use a schema-driven form when your agent defines a requestContextSchema. Values persist across test chats and experiments, so you can trigger conditional flows without restarting.
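For the schema-driven form, the agent needs a requestContextSchema. A sketch of what that might look like, assuming zod schemas and a getter-style context accessor; the field names and accessor shape are assumptions, so check the request context reference for the exact API:

```typescript
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

// With a requestContextSchema defined, Studio renders a typed form
// instead of a raw JSON editor. Fields here are placeholders.
export const supportAgent = new Agent({
  name: "supportAgent",
  model: openai("gpt-4o-mini"),
  requestContextSchema: z.object({
    userTier: z.enum(["free", "pro"]),
    locale: z.string(),
  }),
  // Instructions can read the injected context at request time
  // (accessor shape is an assumption).
  instructions: ({ requestContext }) =>
    `You support a ${requestContext.get("userTier")} user. ` +
    `Reply in ${requestContext.get("locale")}.`,
});
```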
See request context for configuration details.
Evaluation
Scorers
The Scorers tab displays the results of your agent's scorers as they run. When messages pass through your agent, the defined scorers evaluate each output asynchronously and render their results here. This allows you to understand how your scorers respond to different interactions, compare performance across test cases, and identify areas for improvement.
Datasets
Create and manage collections of test cases to evaluate your agents and workflows. Import items from CSV or JSON, define input and ground-truth schemas, and pin to specific versions so you can reproduce experiments exactly. Run experiments with scorers to compare quality across prompts, models, or code changes.
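For JSON import, a dataset is typically a list of items pairing an input with its expected answer. A hypothetical example, assuming input/groundTruth field names; the exact shape is defined by the schemas you configure:

```json
[
  {
    "input": { "question": "What is the capital of France?" },
    "groundTruth": { "answer": "Paris" }
  },
  {
    "input": { "question": "What is 2 + 2?" },
    "groundTruth": { "answer": "4" }
  }
]
```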
See datasets overview for the full API and versioning details.
Experiments
Run all items in a dataset against an agent, workflow, or scorer and collect the results in one place. Select a target, optionally attach scorers, and trigger the experiment. The results view shows each item's input, output, status, and individual score breakdowns. Compare two experiments side by side to measure the impact of prompt, model, or code changes.
See datasets overview for setup details.
Observability
Visit the Studio observability docs to learn more.
Settings
Configure the connection between Studio and your Mastra server. The settings page includes:
- Mastra instance URL: The base URL of your Mastra server (e.g. http://localhost:4111).
- API prefix: Optional path prefix for all API requests (defaults to /api).
- Custom headers: Key-value pairs sent with every request, useful for authentication tokens or routing headers.
- Theme: Switch between dark, light, or system theme.
Code configuration
In addition to the settings UI, you can also configure the local development server and Studio through the server option in your src/mastra/index.ts.
By default, Studio runs at http://localhost:4111. You can change the host and port.
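A minimal sketch of changing the host and port through the server option; the values are illustrative, and the available options may vary by version, so check the server reference:

```typescript
import { Mastra } from "@mastra/core";

// Serve Studio and the dev server on a different host and port
// than the default localhost:4111.
export const mastra = new Mastra({
  server: {
    host: "0.0.0.0",
    port: 3000,
  },
});
```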
Mastra also supports HTTPS development through the --https flag, which automatically creates and manages certificates for your project. When you run mastra dev --https, a private key and certificate are generated for localhost (or your configured host). Visit the HTTPS reference to learn more.
Next steps
- Learn how to deploy Studio for production use.
- Add authentication to control access to your deployed Studio.
- Explore Studio observability to monitor agent performance and gain insights through metrics, logs, and traces.