
Using Agents

Agents are one of the core Mastra primitives. Agents use a language model to decide on a sequence of actions. They can call functions (known as tools). You can compose them with workflows (the other main Mastra primitive), either by giving an agent a workflow as a tool, or by running an agent from within a workflow.

Agents can run autonomously in a loop, run once, or take turns with a user. You can give them short-term, long-term, and working memory of their user interactions. They can stream text or return structured output (i.e., JSON). They can access third-party APIs, query knowledge bases, and so on.

1. Creating an Agent

To create an agent in Mastra, you use the Agent class and define its properties:

src/mastra/agents/index.ts
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";

export const myAgent = new Agent({
  name: "My Agent",
  instructions: "You are a helpful assistant.",
  model: openai("gpt-4o-mini"),
});

Note: Ensure that you have set the necessary environment variables, such as your OpenAI API key, in your .env file:

.env
OPENAI_API_KEY=your_openai_api_key

Also, make sure you have the @mastra/core package installed:

npm install @mastra/core@latest

Registering the Agent

Register your agent with Mastra to enable logging and access to configured tools and integrations:

src/mastra/index.ts
import { Mastra } from "@mastra/core";
import { myAgent } from "./agents";

export const mastra = new Mastra({
  agents: { myAgent },
});
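
Once registered, an agent can also be retrieved from the Mastra instance by the key you used in the agents map, which is handy in server routes or scripts. A minimal sketch, assuming the mastra instance above:

// Elsewhere in your app
import { mastra } from "./mastra";

// Look up the registered agent by its key in the agents map
const agent = mastra.getAgent("myAgent");

const result = await agent.generate("What can you do?");
console.log(result.text);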

2. Generating and streaming text

Generating text

Use the .generate() method to have your agent produce text responses:

src/mastra/index.ts
const response = await myAgent.generate([
  { role: "user", content: "Hello, how can you assist me today?" },
]);

console.log("Agent:", response.text);
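
If you only have a single user message, .generate() also accepts a plain string prompt as shorthand for the message array:

// Shorthand: a plain string is treated as a single user message
const response = await myAgent.generate("Hello, how can you assist me today?");
console.log("Agent:", response.text);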

For more details about the generate method and its options, see the generate reference documentation.

Streaming responses

For more responsive, real-time output, you can stream the agent's response:

src/mastra/index.ts
const stream = await myAgent.stream([
  { role: "user", content: "Tell me a story." },
]);

console.log("Agent:");

for await (const chunk of stream.textStream) {
  process.stdout.write(chunk);
}

For more details about streaming responses, see the stream reference documentation.

3. Structured Output

Agents can return structured data by providing a JSON Schema or using a Zod schema.

Using JSON Schema

const schema = {
  type: "object",
  properties: {
    summary: { type: "string" },
    keywords: { type: "array", items: { type: "string" } },
  },
  additionalProperties: false,
  required: ["summary", "keywords"],
};

const response = await myAgent.generate(
  [
    {
      role: "user",
      content: "Please provide a summary and keywords for the following text: ...",
    },
  ],
  {
    output: schema,
  },
);

console.log("Structured Output:", response.object);

Using Zod

You can also use Zod schemas for type-safe structured outputs.

First, install Zod:

npm install zod

Then, define a Zod schema and use it with the agent:

src/mastra/index.ts
import { z } from "zod";

// Define the Zod schema
const schema = z.object({
  summary: z.string(),
  keywords: z.array(z.string()),
});

// Use the schema with the agent
const response = await myAgent.generate(
  [
    {
      role: "user",
      content: "Please provide a summary and keywords for the following text: ...",
    },
  ],
  {
    output: schema,
  },
);

console.log("Structured Output:", response.object);
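
Because the schema is a plain Zod object, you can also derive a static TypeScript type from it and use that type when passing the result around. A small sketch, assuming the schema and response above:

// Derive the output type from the schema
type Analysis = z.infer<typeof schema>;

// response.object conforms to the schema, so this assignment is type-safe
const analysis: Analysis = response.object;
console.log(analysis.keywords.join(", "));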

Using Tools

If you need to generate structured output alongside tool calls, you’ll need to use the experimental_output property instead of output. Here’s how:

const schema = z.object({
  summary: z.string(),
  keywords: z.array(z.string()),
});

const response = await myAgent.generate(
  [
    {
      role: "user",
      content: "Please analyze this repository and provide a summary and keywords...",
    },
  ],
  {
    // Use experimental_output to enable both structured output and tool calls
    experimental_output: schema,
  },
);

console.log("Structured Output:", response.object);

This allows you to have strong typing and validation for the structured data returned by the agent.

4. Multi-step tool use

Agents can be enhanced with tools: functions that extend their capabilities beyond text generation. Tools allow agents to perform calculations, access external systems, and process data. Agents not only decide whether to call the tools they're given; they also determine the parameters to pass to each tool.
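
As a quick illustration, here is a minimal sketch of a standalone tool built with createTool from @mastra/core/tools (the weather lookup is a hypothetical stand-in; tools can also be defined inline on the agent, as in the maxSteps example below):

import { createTool } from "@mastra/core/tools";
import { z } from "zod";

export const weatherTool = createTool({
  id: "get-weather",
  description: "Gets the current weather for a city",
  inputSchema: z.object({ city: z.string() }),
  execute: async ({ context }) => {
    // context is validated against inputSchema before execute runs
    // (a real tool would call a weather API here)
    return { forecast: `It is sunny in ${context.city}` };
  },
});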

For a detailed guide to creating and configuring tools, see the Adding Tools documentation, but below are the important things to know.

Using maxSteps

The maxSteps parameter controls the maximum number of sequential LLM calls an agent can make, which is particularly important when using tool calls. By default, it is set to 1 to prevent infinite loops in case of misconfigured tools. You can increase this limit based on your use case:

src/mastra/agents/index.ts
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";
import * as mathjs from "mathjs";
import { z } from "zod";

export const myAgent = new Agent({
  name: "My Agent",
  instructions: "You are a helpful assistant that can solve math problems.",
  model: openai("gpt-4o-mini"),
  tools: {
    calculate: {
      description: "Calculator for mathematical expressions",
      schema: z.object({ expression: z.string() }),
      execute: async ({ expression }) => mathjs.evaluate(expression),
    },
  },
});

const response = await myAgent.generate(
  [
    {
      role: "user",
      content: "If a taxi driver earns $41 per hour and works 12 hours a day, how much do they earn in one day?",
    },
  ],
  {
    maxSteps: 5, // Allow up to 5 tool usage steps
  },
);
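
After a multi-step run completes, the result also records the intermediate steps, which you can inspect. A small sketch, assuming the response above; the steps array mirrors what the onFinish callback receives (see below):

// Each step records the text, tool calls, and tool results it produced
for (const step of response.steps) {
  console.log("Tools called:", step.toolCalls.map((call) => call.toolName));
}

console.log("Final answer:", response.text);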

Streaming progress with onStepFinish

You can monitor the progress of multi-step operations using the onStepFinish callback. This is useful for debugging or providing progress updates to users.

onStepFinish is only available when streaming or generating text without structured output.

src/mastra/agents/index.ts
const response = await myAgent.generate(
  [{ role: "user", content: "Calculate the taxi driver's daily earnings." }],
  {
    maxSteps: 5,
    onStepFinish: ({ text, toolCalls, toolResults }) => {
      console.log("Step completed:", { text, toolCalls, toolResults });
    },
  },
);

Detecting completion with onFinish

The onFinish callback is available when streaming responses and provides detailed information about the completed interaction. It is called after the LLM has finished generating its response and all tool executions have completed. This callback receives the final response text, execution steps, token usage statistics, and other metadata that can be useful for monitoring and logging:

src/mastra/agents/index.ts
const stream = await myAgent.stream(
  [{ role: "user", content: "Calculate the taxi driver's daily earnings." }],
  {
    maxSteps: 5,
    onFinish: ({
      steps,
      text,
      finishReason, // 'stop', 'length', 'tool-calls', etc.
      usage, // token usage statistics
      reasoningDetails, // additional context about the agent's decisions
    }) => {
      console.log("Stream complete:", {
        totalSteps: steps.length,
        finishReason,
        usage,
      });
    },
  },
);

5. Testing agents locally

Mastra provides a CLI command, mastra dev, to run your agents behind an API. By default, it looks for exported agents in files in the src/mastra/agents directory. It generates endpoints for testing your agent (e.g., http://localhost:4111/api/agents/myAgent/generate) and provides a visual playground where you can chat with an agent and view traces.
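
Once mastra dev is running, you can call the generated endpoint directly. A minimal sketch, assuming the default port, the myAgent registration above, and a request body shaped like the message arrays used throughout this page:

const res = await fetch("http://localhost:4111/api/agents/myAgent/generate", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    messages: [{ role: "user", content: "Hello, agent!" }],
  }),
});

console.log(await res.json());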

For more details, see the Local Dev Playground docs.
