Using AI SDK UI

AI SDK UI is a library of React utilities and components for building AI-powered interfaces. In this guide, you'll learn how to use @mastra/ai-sdk to convert Mastra's output to AI SDK-compatible formats, enabling you to use its hooks and components in your frontend.

note

Migrating from AI SDK v4 to v5? See the migration guide.

tip

Want to see more examples? Visit Mastra's UI Dojo or the Next.js quickstart guide.

Getting Started

Use Mastra and AI SDK UI together by installing the @mastra/ai-sdk package. @mastra/ai-sdk provides custom API routes and utilities for streaming Mastra agents in AI SDK-compatible formats. This includes chat, workflow, and network route handlers, along with utilities and exported types for UI integrations.

@mastra/ai-sdk integrates with AI SDK UI's three main hooks: useChat(), useCompletion(), and useObject().

Install the required packages to get started:

npm install @mastra/ai-sdk@beta @ai-sdk/react ai

You're now ready to follow the integration guides and recipes below!

Integration Guides

Typically, you'll set up API routes that stream Mastra content in an AI SDK-compatible format, then consume those routes from AI SDK UI hooks like useChat(). Below you'll find the two main approaches to setting up these routes.

Once your API routes are in place, you can use them in the useChat() hook.

Mastra's server

Run Mastra as a standalone server and connect your frontend (e.g. using Vite + React) to its API endpoints. You'll be using Mastra's custom API routes feature for this.

info

Mastra's UI Dojo is an example of this setup.

You can use chatRoute(), workflowRoute(), and networkRoute() to create API routes that stream Mastra content in AI SDK-compatible format. Once implemented, you can use these API routes in useChat().

This example shows how to set up a chat route at the /chat endpoint that uses an agent with the ID weatherAgent.

src/mastra/index.ts
import { Mastra } from "@mastra/core";
import { chatRoute } from "@mastra/ai-sdk";

export const mastra = new Mastra({
  server: {
    apiRoutes: [
      chatRoute({
        path: "/chat",
        agent: "weatherAgent",
      }),
    ],
  },
});

You can also use dynamic agent routing; see the chatRoute() reference documentation for more details.

Framework-agnostic

If you don't want to run Mastra's server and instead use frameworks like Next.js or Express, you can use the handleChatStream(), handleWorkflowStream(), and handleNetworkStream() functions in your own API route handlers.

They return a ReadableStream that you can wrap with createUIMessageStreamResponse().

The examples below show you how to use them with Next.js App Router.

This example shows how to set up a chat route at the /chat endpoint that uses an agent with the ID weatherAgent.

app/chat/route.ts
import { handleChatStream } from '@mastra/ai-sdk';
import { createUIMessageStreamResponse } from 'ai';
import { mastra } from '@/src/mastra';

export async function POST(req: Request) {
  const params = await req.json();
  const stream = await handleChatStream({
    mastra,
    agentId: 'weatherAgent',
    params,
  });
  return createUIMessageStreamResponse({ stream });
}

useChat()

Whether you created API routes through Mastra's server or used a framework of your choice, you can now use the API endpoints in the useChat() hook.

Assuming you set up a route at /chat that uses a weather agent, you can ask it questions as shown below. Make sure the api URL in DefaultChatTransport points at your Mastra server.

import { useChat } from "@ai-sdk/react";
import { useState } from "react";
import { DefaultChatTransport } from "ai";

export default function Chat() {
  const [inputValue, setInputValue] = useState("");
  const { messages, sendMessage } = useChat({
    transport: new DefaultChatTransport({
      api: "http://localhost:4111/chat",
    }),
  });

  const handleFormSubmit = (e: React.FormEvent) => {
    e.preventDefault();
    sendMessage({ text: inputValue });
  };

  return (
    <div>
      <pre>{JSON.stringify(messages, null, 2)}</pre>
      <form onSubmit={handleFormSubmit}>
        <input value={inputValue} onChange={e => setInputValue(e.target.value)} placeholder="Name of the city" />
      </form>
    </div>
  );
}

Use prepareSendMessagesRequest to customize the request sent to the chat route, for example to pass additional configuration to the agent.
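As a sketch of that idea, here is a prepareSendMessagesRequest-style callback in plain TypeScript. The argument shape is reduced to the fields used here, and the `data` payload is a hypothetical example of extra agent configuration, not a required field:

```typescript
// Sketch of a prepareSendMessagesRequest-style callback. The argument shape
// is reduced to the fields used here; `data` is a hypothetical extra payload.
type PrepareArgs = { id: string; messages: unknown[] };

function prepareSendMessagesRequest({ id, messages }: PrepareArgs) {
  return {
    body: {
      id,
      messages,
      data: { temperatureUnit: "celsius" }, // hypothetical agent configuration
    },
  };
}
```

You would pass a function like this to DefaultChatTransport alongside the api option, so every request carries the extra fields.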

useCompletion()

The useCompletion() hook handles single-turn completions between your frontend and a Mastra agent, allowing you to send a prompt and receive a streamed response over HTTP.

Your frontend could look like this:

app/page.tsx
import { useCompletion } from '@ai-sdk/react';

export default function Page() {
  const { completion, input, handleInputChange, handleSubmit } = useCompletion({
    api: '/api/completion',
  });

  return (
    <form onSubmit={handleSubmit}>
      <input
        name="prompt"
        value={input}
        onChange={handleInputChange}
        id="input"
      />
      <button type="submit">Submit</button>
      <div>{completion}</div>
    </form>
  );
}

One way to implement the backend is to register a custom API route on Mastra's server:

src/mastra/index.ts
import { Mastra } from '@mastra/core/mastra';
import { registerApiRoute } from '@mastra/core/server';
import { handleChatStream } from '@mastra/ai-sdk';
import { createUIMessageStreamResponse } from 'ai';

export const mastra = new Mastra({
  server: {
    apiRoutes: [
      registerApiRoute('/completion', {
        method: 'POST',
        handler: async (c) => {
          const { prompt } = await c.req.json();
          const mastra = c.get('mastra');
          const stream = await handleChatStream({
            mastra,
            agentId: 'weatherAgent',
            params: {
              messages: [
                {
                  id: "1",
                  role: 'user',
                  parts: [
                    {
                      type: 'text',
                      text: prompt,
                    },
                  ],
                },
              ],
            },
          });

          return createUIMessageStreamResponse({ stream });
        },
      }),
    ],
  },
});

Custom UI

Custom UI (also known as Generative UI) allows you to render custom React components based on data streamed from Mastra. Instead of displaying raw text or JSON, you can create visual components for tool outputs, workflow progress, agent network execution, and custom events.

Use Custom UI when you want to:

  • Render tool outputs as visual components (e.g., a weather card instead of JSON)
  • Display workflow step progress with status indicators
  • Visualize agent network execution with step-by-step updates
  • Show progress indicators or status updates during long-running operations

Data part types

Mastra streams data to the frontend as "parts" within messages. Each part has a type that determines how to render it. The @mastra/ai-sdk package transforms Mastra streams into AI SDK-compatible UI Message DataParts.

| Data Part Type | Source | Description |
| --- | --- | --- |
| tool-{toolKey} | AI SDK built-in | Tool invocation with states: input-available, output-available, output-error |
| data-workflow | workflowRoute() | Workflow execution with step inputs, outputs, and status |
| data-network | networkRoute() | Agent network execution with ordered steps and outputs |
| data-tool-agent | Nested agent in tool | Agent output streamed from within a tool's execute() |
| data-tool-workflow | Nested workflow in tool | Workflow output streamed from within a tool's execute() |
| data-tool-network | Nested network in tool | Network output streamed from within a tool's execute() |
| data-{custom} | writer.custom() | Custom events for progress indicators, status updates, etc. |

Rendering tool outputs

AI SDK automatically creates tool-{toolKey} parts when an agent calls a tool. These parts include the tool's state and output, which you can use to render custom components.

The tool part cycles through states:

  • input-streaming: Tool input is being streamed (when tool call streaming is enabled)
  • input-available: Tool has been called with complete input, waiting for execution
  • output-available: Tool execution completed with output
  • output-error: Tool execution failed

Here's an example of rendering a weather tool's output as a custom WeatherCard component.

Define a tool with an outputSchema so the frontend knows the shape of the data to render.

src/mastra/tools/weather-tool.ts
import { createTool } from "@mastra/core/tools";
import { z } from "zod";

export const weatherTool = createTool({
  id: "get-weather",
  description: "Get current weather for a location",
  inputSchema: z.object({
    location: z.string().describe("The location to get the weather for"),
  }),
  outputSchema: z.object({
    temperature: z.number(),
    feelsLike: z.number(),
    humidity: z.number(),
    windSpeed: z.number(),
    conditions: z.string(),
    location: z.string(),
  }),
  execute: async ({ location }) => {
    const response = await fetch(
      `https://api.weatherapi.com/v1/current.json?key=${process.env.WEATHER_API_KEY}&q=${location}`
    );
    const data = await response.json();
    return {
      temperature: data.current.temp_c,
      feelsLike: data.current.feelslike_c,
      humidity: data.current.humidity,
      windSpeed: data.current.wind_kph,
      conditions: data.current.condition.text,
      location: data.location.name,
    };
  },
});
tip

The tool part type follows the pattern tool-{toolKey}, where toolKey is the key used when registering the tool with the agent. For example, if you register tools as tools: { weatherTool }, the part type will be tool-weatherTool.
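On the frontend, you can branch on the part's state before rendering. Here's a minimal, framework-free sketch of that branching for the weather tool; the part shape is an assumption modeled on AI SDK UI message parts and the states listed above:

```typescript
// Sketch: deciding what to render for a tool part, based on its state.
// The part shape is an assumption modeled on AI SDK UI message parts.
type WeatherOutput = { temperature: number; conditions: string; location: string };
type ToolPart =
  | { type: "tool-weatherTool"; state: "input-streaming" | "input-available" }
  | { type: "tool-weatherTool"; state: "output-available"; output: WeatherOutput }
  | { type: "tool-weatherTool"; state: "output-error"; errorText: string };

function describeToolPart(part: ToolPart): string {
  switch (part.state) {
    case "input-streaming":
    case "input-available":
      return "Fetching weather…";
    case "output-available":
      return `${part.output.location}: ${part.output.temperature}°C, ${part.output.conditions}`;
    case "output-error":
      return `Error: ${part.errorText}`;
  }
}
```

In a React component, you would map over message.parts and render a WeatherCard (or a spinner/error state) instead of returning strings.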

Rendering workflow data

When using workflowRoute() or handleWorkflowStream(), Mastra emits data-workflow parts that contain the workflow's execution state, including step statuses and outputs.

Define a workflow with multiple steps that will emit data-workflow parts as it executes.

src/mastra/workflows/activities-workflow.ts
import { createStep, createWorkflow } from "@mastra/core/workflows";
import { z } from "zod";

const fetchWeather = createStep({
  id: "fetch-weather",
  inputSchema: z.object({
    location: z.string(),
  }),
  outputSchema: z.object({
    temperature: z.number(),
    conditions: z.string(),
  }),
  execute: async ({ inputData }) => {
    // Fetch weather data...
    return { temperature: 22, conditions: "Sunny" };
  },
});

const planActivities = createStep({
  id: "plan-activities",
  inputSchema: z.object({
    temperature: z.number(),
    conditions: z.string(),
  }),
  outputSchema: z.object({
    activities: z.string(),
  }),
  execute: async ({ inputData, mastra }) => {
    const agent = mastra?.getAgent("activityAgent");
    const response = await agent?.generate(
      `Suggest activities for ${inputData.conditions} weather at ${inputData.temperature}°C`
    );
    return { activities: response?.text || "" };
  },
});

export const activitiesWorkflow = createWorkflow({
  id: "activities-workflow",
  inputSchema: z.object({
    location: z.string(),
  }),
  outputSchema: z.object({
    activities: z.string(),
  }),
})
  .then(fetchWeather)
  .then(planActivities);

activitiesWorkflow.commit();

Register the workflow with Mastra and expose it via workflowRoute() to stream workflow events to the frontend.

src/mastra/index.ts
import { Mastra } from "@mastra/core";
import { workflowRoute } from "@mastra/ai-sdk";
import { activitiesWorkflow } from "./workflows/activities-workflow";

export const mastra = new Mastra({
  workflows: { activitiesWorkflow },
  server: {
    apiRoutes: [
      workflowRoute({
        path: "/workflow/activitiesWorkflow",
        workflow: "activitiesWorkflow",
      }),
    ],
  },
});

For more details on workflow streaming, see Workflow Streaming.
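As a sketch of consuming these parts on the frontend, the function below reduces a data-workflow part to a one-line progress summary. All field names inside data are assumptions for illustration; check the actual part payload emitted by workflowRoute() in your app:

```typescript
// Sketch: summarizing a data-workflow part for display. The field names
// inside `data` are assumptions; inspect the real payload in your app.
interface WorkflowDataPart {
  type: "data-workflow";
  data: {
    status: string;
    steps: Record<string, { status: string; output?: unknown }>;
  };
}

function summarizeWorkflow(part: WorkflowDataPart): string {
  const steps = Object.entries(part.data.steps);
  const done = steps.filter(([, s]) => s.status === "success").length;
  return `${done}/${steps.length} steps complete (${part.data.status})`;
}
```

A progress UI would render this per data-workflow part, updating as new parts stream in.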

Rendering network data

When using networkRoute() or handleNetworkStream(), Mastra emits data-network parts that contain the agent network's execution state, including which agents were called and their outputs.

Register agents with Mastra and expose the routing agent via networkRoute() to stream network execution events to the frontend.

src/mastra/index.ts
import { Mastra } from "@mastra/core";
import { networkRoute } from "@mastra/ai-sdk";

// routingAgent, researchAgent, and weatherAgent are agents defined in your project
export const mastra = new Mastra({
  agents: { routingAgent, researchAgent, weatherAgent },
  server: {
    apiRoutes: [
      networkRoute({
        path: "/network",
        agent: "routingAgent",
      }),
    ],
  },
});

For more details on agent networks, see Agent Networks.

Custom events

Use writer.custom() within a tool's execute() function to emit custom data parts. This is useful for progress indicators, status updates, or any custom UI updates during tool execution.

Custom event types must start with data- to be recognized as data parts.

warning

You must await the writer.custom() call, otherwise you may encounter a WritableStream is locked error.

Use writer.custom() inside the tool's execute() function to emit custom data- prefixed events at different stages of execution.

src/mastra/tools/task-tool.ts
import { createTool } from "@mastra/core/tools";
import { z } from "zod";

export const taskTool = createTool({
  id: "process-task",
  description: "Process a task with progress updates",
  inputSchema: z.object({
    task: z.string().describe("The task to process"),
  }),
  outputSchema: z.object({
    result: z.string(),
    status: z.string(),
  }),
  execute: async (inputData, context) => {
    const { task } = inputData;

    // Emit "in progress" custom event
    await context?.writer?.custom({
      type: "data-tool-progress",
      data: {
        status: "in-progress",
        message: "Gathering information...",
      },
    });

    // Simulate work
    await new Promise((resolve) => setTimeout(resolve, 3000));

    // Emit "done" custom event
    await context?.writer?.custom({
      type: "data-tool-progress",
      data: {
        status: "done",
        message: `Successfully processed "${task}"`,
      },
    });

    return {
      result: `Task "${task}" has been completed successfully!`,
      status: "completed",
    };
  },
});
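On the frontend, these events arrive as data-tool-progress parts among the message's other parts. A minimal, framework-free sketch of picking out the latest one (the part shape mirrors the events emitted above):

```typescript
// Sketch: extracting the latest data-tool-progress part from a message's
// parts. The shape mirrors the custom events emitted by taskTool above.
type MessagePart = { type: string; data?: unknown };
type ProgressData = { status: string; message: string };

function latestProgress(parts: MessagePart[]): ProgressData | undefined {
  const progress = parts.filter(
    (p): p is { type: "data-tool-progress"; data: ProgressData } =>
      p.type === "data-tool-progress"
  );
  return progress.at(-1)?.data;
}
```

A progress indicator component would call this on message.parts each render and show the message and status of the most recent event.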

Tool streaming

Tools can also stream data using context.writer.write() for lower-level control, or pipe an agent's stream directly to the tool's writer. For more details, see Tool Streaming.

Examples

For live examples of Custom UI patterns, visit Mastra's UI Dojo, which includes working implementations of the patterns described above.

Recipes

Stream transformations

To manually transform Mastra's streams to AI SDK-compatible format, use the toAISdkStream() utility. See the examples for concrete usage patterns.

Loading historical messages

When loading messages from Mastra's memory to display in a chat UI, use toAISdkV5Messages() or toAISdkV4Messages() to convert them to the appropriate AI SDK format for useChat()'s initialMessages.

Passing additional data

sendMessage() allows you to pass additional data from the frontend to Mastra. This data can then be used on the server as RequestContext.

Here's an example of the frontend code:

import { useChat } from "@ai-sdk/react";
import { useState } from "react";
import { DefaultChatTransport } from 'ai';

export function ChatAdditional() {
  const [inputValue, setInputValue] = useState('');
  const { messages, sendMessage } = useChat({
    transport: new DefaultChatTransport({
      api: 'http://localhost:4111/chat-extra',
    }),
  });

  const handleFormSubmit = (e: React.FormEvent) => {
    e.preventDefault();
    sendMessage({ text: inputValue }, {
      body: {
        data: {
          userId: "user123",
          preferences: {
            language: "en",
            temperature: "celsius",
          },
        },
      },
    });
  };

  return (
    <div>
      <pre>{JSON.stringify(messages, null, 2)}</pre>
      <form onSubmit={handleFormSubmit}>
        <input value={inputValue} onChange={e => setInputValue(e.target.value)} placeholder="Name of the city" />
      </form>
    </div>
  );
}

Here's one way to implement the backend. Add a chatRoute() to your Mastra configuration as shown above, then register a server-level middleware that copies the extra data into the request context:

src/mastra/index.ts
import { Mastra } from "@mastra/core";

export const mastra = new Mastra({
  server: {
    middleware: [
      async (c, next) => {
        const requestContext = c.get("requestContext");

        if (c.req.method === "POST") {
          const clonedReq = c.req.raw.clone();
          const body = await clonedReq.json();

          if (body?.data) {
            for (const [key, value] of Object.entries(body.data)) {
              requestContext.set(key, value);
            }
          }
        }
        await next();
      },
    ],
  },
});
info

You can access this data in your tools via the requestContext parameter. See the Request Context documentation for more details.

Workflow suspend/resume with user approval

Workflows can suspend execution and wait for user input before continuing. This is useful for approval flows, confirmations, or any human-in-the-loop scenario.

The workflow uses:

  • suspendSchema / resumeSchema - Define the data structure for suspend payload and resume input
  • suspend() - Pauses the workflow and sends the suspend payload to the UI
  • resumeData - Contains the user's response when the workflow resumes
  • bail() - Exits the workflow early (e.g., when user rejects)

Create a workflow step that suspends for approval. The step checks resumeData to determine if it's resuming, and calls suspend() on first execution.

src/mastra/workflows/approval-workflow.ts
import { createStep, createWorkflow } from "@mastra/core/workflows";
import { z } from "zod";

const requestApproval = createStep({
  id: "request-approval",
  inputSchema: z.object({ requestId: z.string(), summary: z.string() }),
  outputSchema: z.object({
    approved: z.boolean(),
    requestId: z.string(),
    approvedBy: z.string().optional(),
  }),
  resumeSchema: z.object({
    approved: z.boolean(),
    approverName: z.string().optional(),
  }),
  suspendSchema: z.object({
    message: z.string(),
    requestId: z.string(),
  }),
  execute: async ({ inputData, resumeData, suspend, bail }) => {
    // User rejected - bail out
    if (resumeData?.approved === false) {
      return bail({ message: "Request rejected" });
    }
    // User approved - continue
    if (resumeData?.approved) {
      return {
        approved: true,
        requestId: inputData.requestId,
        approvedBy: resumeData.approverName || "User",
      };
    }
    // First execution - suspend and wait
    return await suspend({
      message: `Please approve: ${inputData.summary}`,
      requestId: inputData.requestId,
    });
  },
});

export const approvalWorkflow = createWorkflow({
  id: "approval-workflow",
  inputSchema: z.object({ requestId: z.string(), summary: z.string() }),
  outputSchema: z.object({
    approved: z.boolean(),
    requestId: z.string(),
    approvedBy: z.string().optional(),
  }),
})
  .then(requestApproval);

approvalWorkflow.commit();

Register the workflow. Storage is required for suspend/resume to persist state.

src/mastra/index.ts
import { Mastra } from "@mastra/core";
import { workflowRoute } from "@mastra/ai-sdk";
import { LibSQLStore } from "@mastra/libsql";
import { approvalWorkflow } from "./workflows/approval-workflow";

export const mastra = new Mastra({
  workflows: { approvalWorkflow },
  storage: new LibSQLStore({
    url: "file:../mastra.db",
  }),
  server: {
    apiRoutes: [
      workflowRoute({ path: "/workflow/approvalWorkflow", workflow: "approvalWorkflow" }),
    ],
  },
});

Key points:

  • The suspend payload is accessible via step.suspendPayload
  • To resume, send runId, step (the step ID), and resumeData in the request body
  • Storage must be configured for suspend/resume to persist workflow state
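As a sketch, the resume request body can be assembled from the three fields named above. The exact endpoint and envelope may differ in your setup; only runId, step, and resumeData come from the key points, and the helper itself is hypothetical:

```typescript
// Sketch: building a resume request body from the fields named in the key
// points (runId, step, resumeData). The helper and envelope are hypothetical.
function buildResumeBody(
  runId: string,
  stepId: string,
  resumeData: { approved: boolean; approverName?: string }
) {
  return { runId, step: stepId, resumeData };
}
```

You would POST a body like this back to the workflow route to resume the suspended run.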

For a complete implementation, see the workflow-suspend-resume example in UI Dojo.

Nested agent streams in tools

Tools can call agents internally and stream the agent's output back to the UI. This creates data-tool-agent parts that can be rendered alongside the tool's final output.

The pattern uses:

  • context.mastra.getAgent() - Get an agent instance from within a tool
  • agent.stream() - Stream the agent's response
  • stream.fullStream.pipeTo(context.writer) - Pipe the agent's stream to the tool's writer

Create a tool that calls an agent and pipes its stream to the tool's writer.

src/mastra/tools/nested-agent-tool.ts
import { createTool } from "@mastra/core/tools";
import { z } from "zod";

export const nestedAgentTool = createTool({
  id: "nested-agent-stream",
  description: "Analyze weather using a nested agent",
  inputSchema: z.object({
    city: z.string().describe("The city to analyze"),
  }),
  outputSchema: z.object({
    summary: z.string(),
  }),
  execute: async (inputData, context) => {
    const agent = context?.mastra?.getAgent("weatherAgent");
    if (!agent) {
      return { summary: "Weather agent not available" };
    }

    const stream = await agent.stream(
      `Analyze the weather in ${inputData.city} and provide a summary.`
    );

    // Pipe the agent's stream to emit data-tool-agent parts
    await stream.fullStream.pipeTo(context!.writer!);

    return { summary: (await stream.text) ?? "No summary available" };
  },
});

Create an agent that uses this tool.

src/mastra/agents/forecast-agent.ts
import { Agent } from "@mastra/core/agent";
import { nestedAgentTool } from "../tools/nested-agent-tool";

export const forecastAgent = new Agent({
  id: "forecast-agent",
  instructions: "Use the nested-agent-stream tool when asked about weather.",
  model: "openai/gpt-4o-mini",
  tools: { nestedAgentTool },
});

Key points:

  • Piping fullStream to context.writer creates data-tool-agent parts
  • The AgentDataPart has id (on the part) and data.text (the agent's streamed text)
  • The tool still returns its own output after the stream completes

For a complete implementation, see the tool-nested-streams example in UI Dojo.

Streaming agent text from workflow steps

Workflow steps can stream an agent's text output in real-time by piping the agent's stream to the step's writer. This lets users see the agent "thinking" while the workflow executes, rather than waiting for the step to complete.

The pattern uses:

  • writer in workflow step - Pipe the agent's fullStream to the step's writer
  • text and data-workflow parts - The frontend receives streaming text alongside step progress

Create a workflow step that streams an agent's response by piping to the step's writer.

src/mastra/workflows/weather-workflow.ts
import { createStep, createWorkflow } from "@mastra/core/workflows";
import { z } from "zod";
import { weatherAgent } from "../agents/weather-agent";

const analyzeWeather = createStep({
  id: "analyze-weather",
  inputSchema: z.object({ location: z.string() }),
  outputSchema: z.object({ analysis: z.string(), location: z.string() }),
  execute: async ({ inputData, writer }) => {
    const response = await weatherAgent.stream(
      `Analyze the weather in ${inputData.location} and provide insights.`
    );

    // Pipe agent stream to step writer for real-time text streaming
    await response.fullStream.pipeTo(writer);

    return {
      analysis: await response.text,
      location: inputData.location,
    };
  },
});

const calculateScore = createStep({
  id: "calculate-score",
  inputSchema: z.object({ analysis: z.string(), location: z.string() }),
  outputSchema: z.object({ score: z.number(), summary: z.string() }),
  execute: async ({ inputData }) => {
    const score = inputData.analysis.includes("sunny") ? 85 : 50;
    return { score, summary: `Comfort score for ${inputData.location}: ${score}/100` };
  },
});

export const weatherWorkflow = createWorkflow({
  id: "weather-workflow",
  inputSchema: z.object({ location: z.string() }),
  outputSchema: z.object({ score: z.number(), summary: z.string() }),
})
  .then(analyzeWeather)
  .then(calculateScore);

weatherWorkflow.commit();

Register the workflow with a workflowRoute(). Text streaming is enabled by default.

src/mastra/index.ts
import { Mastra } from "@mastra/core";
import { workflowRoute } from "@mastra/ai-sdk";
// paths assume the file layout used above
import { weatherAgent } from "./agents/weather-agent";
import { weatherWorkflow } from "./workflows/weather-workflow";

export const mastra = new Mastra({
  agents: { weatherAgent },
  workflows: { weatherWorkflow },
  server: {
    apiRoutes: [
      workflowRoute({ path: "/workflow/weather", workflow: "weatherWorkflow" }),
    ],
  },
});

Key points:

  • The step's writer is available in the execute function (not via context)
  • includeTextStreamParts defaults to true on workflowRoute(), so text streams by default
  • Text parts stream in real-time while data-workflow parts update with step status

For a complete implementation, see the workflow-agent-text-stream example in UI Dojo.

Multi-stage progress with branching workflows

For workflows with conditional branching (e.g., express vs. standard shipping), you can track progress across different branches by including an identifier in your custom events.

The UI Dojo example uses a stage field in the event data to identify which branch is executing (e.g., "validation", "standard-processing", "express-processing"). The frontend groups events by this field to show a pipeline-style progress UI.

See the branching-workflow.ts (backend) and workflow-custom-events.tsx (frontend) in UI Dojo.
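The grouping logic itself can be sketched in a few lines of plain TypeScript. The event shape below is an assumption modeled on the stage field described above:

```typescript
// Sketch: grouping custom events by their stage field, keeping the latest
// status per stage, as the UI Dojo frontend does. The event shape is assumed.
type StageEvent = { data: { stage: string; status: string; message: string } };

function latestByStage(events: StageEvent[]): Map<string, StageEvent["data"]> {
  const latest = new Map<string, StageEvent["data"]>();
  for (const e of events) latest.set(e.data.stage, e.data);
  return latest;
}
```

Rendering one row per map entry yields a pipeline-style progress UI where each branch shows its most recent status.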

Progress indicators in agent networks

When using agent networks, you can emit custom progress events from tools used by sub-agents to show which agent is currently active.

The UI Dojo example includes a stage field in the event data to identify which sub-agent is running (e.g., "report-generation", "report-review"). The frontend groups events by this field and displays the latest status for each.

See the report-generation-tool.ts (backend) and agent-network-custom-events.tsx (frontend) in UI Dojo.