Using AI SDK
AI SDK is a free, open-source library that gives you the tools you need to build AI-powered products. Mastra integrates closely with AI SDK, including model routing, streaming support, custom React hooks, custom tool and UI message parts, and more.
Migrating from AI SDK v4 to v5? See the migration guide.
Visit Mastra's UI Dojo to see real-world examples of AI SDK integrated with Mastra.
Streaming
The recommended way to use Mastra and AI SDK together is to install the @mastra/ai-sdk package, which provides custom API routes and utilities for streaming Mastra agents in AI SDK-compatible formats. It includes chat, workflow, and network route handlers, along with utilities and exported types for UI integrations.
npm install @mastra/ai-sdk@beta
pnpm add @mastra/ai-sdk@beta
yarn add @mastra/ai-sdk@beta
bun add @mastra/ai-sdk@beta
chatRoute()
When setting up a custom API route, use the chatRoute() utility to create a route handler that automatically formats the agent stream into an AI SDK-compatible format.
import { Mastra } from "@mastra/core";
import { chatRoute } from "@mastra/ai-sdk";

export const mastra = new Mastra({
  server: {
    apiRoutes: [
      chatRoute({
        path: "/chat",
        agent: "weatherAgent",
      }),
    ],
  },
});
Once you have your /chat API route set up, you can call the useChat() hook in your application.
const { error, status, sendMessage, messages, regenerate, stop } = useChat({
  transport: new DefaultChatTransport({
    api: "http://localhost:4111/chat",
  }),
});
Pass extra agent stream execution options:
const { error, status, sendMessage, messages, regenerate, stop } = useChat({
  transport: new DefaultChatTransport({
    api: "http://localhost:4111/chat",
    prepareSendMessagesRequest({ messages }) {
      return {
        body: {
          messages,
          // Pass memory config
          memory: {
            thread: "user-1",
            resource: "user-1",
          },
        },
      };
    },
  }),
});
workflowRoute()
Use the workflowRoute() utility to create a route handler that automatically formats the workflow stream into an AI SDK-compatible format.
import { Mastra } from "@mastra/core";
import { workflowRoute } from "@mastra/ai-sdk";

export const mastra = new Mastra({
  server: {
    apiRoutes: [
      workflowRoute({
        path: "/workflow",
        workflow: "weatherWorkflow",
      }),
    ],
  },
});
Once you have your /workflow API route set up, you can call the useChat() hook in your application.
const { error, status, sendMessage, messages, regenerate, stop } = useChat({
  transport: new DefaultChatTransport({
    api: "http://localhost:4111/workflow",
    prepareSendMessagesRequest({ messages }) {
      return {
        body: {
          inputData: {
            city: messages[messages.length - 1].parts[0].text,
          },
          // Or pass resumeData instead to resume a suspended workflow:
          // resumeData: {
          //   confirmation: messages[messages.length - 1].parts[0].text,
          // },
        },
      };
    },
  }),
});
networkRoute()
Use the networkRoute() utility to create a route handler that automatically formats the agent network stream into an AI SDK-compatible format.
import { Mastra } from "@mastra/core";
import { networkRoute } from "@mastra/ai-sdk";

export const mastra = new Mastra({
  server: {
    apiRoutes: [
      networkRoute({
        path: "/network",
        agent: "weatherAgent",
      }),
    ],
  },
});
Once you have your /network API route set up, you can call the useChat() hook in your application.
const { error, status, sendMessage, messages, regenerate, stop } = useChat({
  transport: new DefaultChatTransport({
    api: "http://localhost:4111/network",
  }),
});
Custom UI
The @mastra/ai-sdk package transforms Mastra streams (e.g. workflow and network streams) and emits them as AI SDK-compatible UI message data parts.

- Top-level parts: These are streamed via direct workflow and network stream transformations (e.g. in workflowRoute() and networkRoute()).
  - data-workflow: Aggregates a workflow run with step inputs/outputs and final usage.
  - data-network: Aggregates a routing/network run with ordered steps (agent/workflow/tool executions) and outputs.
- Nested parts: These are streamed via nested and merged streams from within a tool's execute() method.
  - data-tool-workflow: Nested workflow emitted from within a tool stream.
  - data-tool-network: Nested network emitted from within a tool stream.
  - data-tool-agent: Nested agent emitted from within a tool stream.

For example, for a nested agent stream within a tool, data-tool-agent UI message parts are emitted and can be rendered on the client as shown below:
"use client";
import { useChat } from "@ai-sdk/react";
import { AgentTool } from '../ui/agent-tool';
import { DefaultChatTransport } from 'ai';
import type { AgentDataPart } from "@mastra/ai-sdk";
export default function Page() {
const { messages } = useChat({
transport: new DefaultChatTransport({
api: 'http://localhost:4111/chat',
}),
});
return (
<div>
{messages.map((message) => (
<div key={message.id}>
{message.parts.map((part, i) => {
switch (part.type) {
case 'data-tool-agent':
return (
<AgentTool {...part.data as AgentDataPart} key={`${message.id}-${i}`} />
);
default:
return null;
}
})}
</div>
))}
</div>
);
}
import { Tool, ToolContent, ToolHeader, ToolOutput } from "../ai-elements/tool";
import type { AgentDataPart } from "@mastra/ai-sdk";

export const AgentTool = ({ id, text, status }: AgentDataPart) => {
  return (
    <Tool>
      <ToolHeader
        type={`${id}`}
        state={status === "finished" ? "output-available" : "input-available"}
      />
      <ToolContent>
        <ToolOutput output={text} />
      </ToolContent>
    </Tool>
  );
};
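Top-level parts can be handled the same way. Here is a minimal sketch that renders the data-workflow parts streamed by workflowRoute(); since the exact shape of part.data is not documented above, the sketch simply pretty-prints it:

"use client";

import { useChat } from "@ai-sdk/react";
import { DefaultChatTransport } from "ai";

export default function WorkflowPage() {
  const { messages } = useChat({
    transport: new DefaultChatTransport({
      api: "http://localhost:4111/workflow",
    }),
  });

  return (
    <div>
      {messages.map((message) =>
        message.parts.map((part, i) =>
          part.type === "data-workflow" ? (
            // part.data aggregates the workflow run; its shape is an
            // assumption here, so render it as JSON for inspection
            <pre key={`${message.id}-${i}`}>{JSON.stringify(part.data, null, 2)}</pre>
          ) : null,
        ),
      )}
    </div>
  );
}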
Custom Tool streaming
To stream custom data parts from within your tool execution function, use the writer.custom() method.
import { createTool } from "@mastra/core/tools";

export const testTool = createTool({
  // ...
  execute: async ({ context, writer }) => {
    const { value } = context;

    // Emit a custom data part before the long-running work starts
    await writer?.custom({
      type: "data-tool-progress",
      status: "pending",
    });

    const response = await fetch(...);

    // Emit another custom data part once the work completes
    await writer?.custom({
      type: "data-tool-progress",
      status: "success",
    });

    return {
      value: "",
    };
  },
});
For more information about tool streaming, see the Tool streaming documentation.
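On the client, these custom parts show up in useChat() messages alongside the built-in parts. Here is a hedged sketch for rendering them, assuming the data-tool-progress parts pass through to the UI message stream unchanged and their custom fields land on part.data (not confirmed by this guide):

"use client";

import { useChat } from "@ai-sdk/react";
import { DefaultChatTransport } from "ai";

export function ToolProgress() {
  const { messages } = useChat({
    transport: new DefaultChatTransport({
      api: "http://localhost:4111/chat",
    }),
  });

  return (
    <div>
      {messages.map((message) =>
        message.parts.map((part, i) =>
          part.type === "data-tool-progress" ? (
            // Assumption: the status field emitted by writer.custom()
            // is available on part.data
            <p key={`${message.id}-${i}`}>
              {(part.data as { status?: string })?.status === "success"
                ? "Tool finished"
                : "Tool running..."}
            </p>
          ) : null,
        ),
      )}
    </div>
  );
}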
Stream Transformations
To manually transform Mastra's streams to AI SDK-compatible format, use the toAISdkStream() utility.
import { mastra } from "../../mastra";
import { createUIMessageStream, createUIMessageStreamResponse } from "ai";
import { toAISdkStream } from "@mastra/ai-sdk";

export async function POST(req: Request) {
  const { messages } = await req.json();
  const myAgent = mastra.getAgent("weatherAgent");
  const stream = await myAgent.stream(messages);

  // Transform the stream into AI SDK format and create a UI message stream
  const uiMessageStream = createUIMessageStream({
    execute: async ({ writer }) => {
      for await (const part of toAISdkStream(stream, { from: "agent" })!) {
        writer.write(part);
      }
    },
  });

  // Create a Response that streams the UI message stream to the client
  return createUIMessageStreamResponse({
    stream: uiMessageStream,
  });
}
Stream Transformation Options
The toAISdkStream() function accepts the following options:
- from: The source of the stream: "agent", "workflow", or "network".
- lastMessageId?: The ID of the last message, used when continuing an existing message.
- sendStart?: Whether to emit message start parts.
- sendFinish?: Whether to emit message finish parts.
- sendReasoning?: Whether to stream reasoning content.
- sendSources?: Whether to stream source parts.
- messageMetadata?: A callback for attaching custom metadata to messages.
- onError?: A callback invoked when the stream errors.
Example with reasoning enabled:
import { mastra } from "../../mastra";
import { createUIMessageStream, createUIMessageStreamResponse } from "ai";
import { toAISdkStream } from "@mastra/ai-sdk";

export async function POST(req: Request) {
  const { messages } = await req.json();
  const myAgent = mastra.getAgent("reasoningAgent");

  const stream = await myAgent.stream(messages, {
    providerOptions: {
      openai: { reasoningEffort: "high" },
    },
  });

  const uiMessageStream = createUIMessageStream({
    execute: async ({ writer }) => {
      for await (const part of toAISdkStream(stream, {
        from: "agent",
        sendReasoning: true, // Enable reasoning content streaming
      })!) {
        writer.write(part);
      }
    },
  });

  return createUIMessageStreamResponse({
    stream: uiMessageStream,
  });
}
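The messageMetadata and onError options follow the same pattern. A hedged sketch (the callback signatures below are assumptions modeled on AI SDK's equivalent toUIMessageStream options, not confirmed @mastra/ai-sdk types):

import { mastra } from "../../mastra";
import { createUIMessageStream, createUIMessageStreamResponse } from "ai";
import { toAISdkStream } from "@mastra/ai-sdk";

export async function POST(req: Request) {
  const { messages } = await req.json();
  const myAgent = mastra.getAgent("weatherAgent");
  const stream = await myAgent.stream(messages);

  const uiMessageStream = createUIMessageStream({
    execute: async ({ writer }) => {
      for await (const part of toAISdkStream(stream, {
        from: "agent",
        // Assumed signatures: attach metadata to outgoing messages and
        // turn stream errors into a client-safe message
        messageMetadata: () => ({ model: "weather-agent" }),
        onError: (error) =>
          error instanceof Error ? error.message : "Unknown stream error",
      })!) {
        writer.write(part);
      }
    },
  });

  return createUIMessageStreamResponse({ stream: uiMessageStream });
}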
Client Side Stream Transformations
If you have a client-side response from agent.stream(...) and want AI SDK-formatted parts without custom SSE parsing, wrap response.processDataStream into a ReadableStream<ChunkType> and pipe it through toAISdkStream:
import { MastraClient } from "@mastra/client-js";
import { createUIMessageStream } from "ai";
import { toAISdkStream } from "@mastra/ai-sdk";
import type { ChunkType, MastraModelOutput } from "@mastra/core/stream";

// Client SDK agent stream
const client = new MastraClient({ baseUrl: "http://localhost:4111" });
const agent = client.getAgent("weatherAgent");

const response = await agent.stream({
  messages: "What is the weather in Tokyo",
});

// Wrap processDataStream into a ReadableStream of Mastra chunks
const chunkStream: ReadableStream<ChunkType> = new ReadableStream<ChunkType>({
  start(controller) {
    response
      .processDataStream({
        onChunk: async (chunk) => {
          controller.enqueue(chunk as ChunkType);
        },
      })
      .finally(() => controller.close());
  },
});

const uiMessageStream = createUIMessageStream({
  execute: async ({ writer }) => {
    for await (const part of toAISdkStream(
      chunkStream as unknown as MastraModelOutput,
      { from: "agent" },
    )) {
      writer.write(part);
    }
  },
});

for await (const part of uiMessageStream) {
  console.log(part);
}
UI Hooks
Mastra supports AI SDK UI hooks for connecting frontend components directly to agents using HTTP streams.
Install the required AI SDK React package:
npm install @ai-sdk/react
pnpm add @ai-sdk/react
yarn add @ai-sdk/react
bun add @ai-sdk/react
Using useChat()
The useChat() hook handles real-time chat interactions between your frontend and a Mastra agent, enabling you to send prompts and receive streaming responses over HTTP.
"use client";
import { useChat } from "@ai-sdk/react";
import { useState } from "react";
import { DefaultChatTransport } from 'ai';
export function Chat() {
const [inputValue, setInputValue] = useState('')
const { messages, sendMessage} = useChat({
transport: new DefaultChatTransport({
api: 'http://localhost:4111/chat',
}),
});
const handleFormSubmit = (e: React.FormEvent) => {
e.preventDefault();
sendMessage({ text: inputValue });
};
return (
<div>
<pre>{JSON.stringify(messages, null, 2)}</pre>
<form onSubmit={handleFormSubmit}>
<input value={inputValue} onChange={e=>setInputValue(e.target.value)} placeholder="Name of city" />
</form>
</div>
);
}
Requests sent using the useChat() hook are handled by a standard server route. This example shows how to define a POST route using a Next.js Route Handler.
import { mastra } from "../../mastra";

export async function POST(req: Request) {
  const { messages } = await req.json();
  const myAgent = mastra.getAgent("weatherAgent");
  const stream = await myAgent.stream(messages, { format: "aisdk" });

  return stream.toUIMessageStreamResponse();
}
When using useChat() with agent memory, refer to the Agent Memory section for key implementation details.
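For example, a minimal sketch of a route handler that scopes the conversation to a memory thread. The thread and resource values below are placeholders; in a real app, derive them from your own auth or session data:

import { mastra } from "../../mastra";

export async function POST(req: Request) {
  const { messages } = await req.json();
  const myAgent = mastra.getAgent("weatherAgent");

  // Placeholder identifiers: replace with your own user/session IDs
  const stream = await myAgent.stream(messages, {
    format: "aisdk",
    memory: {
      thread: "user-1",
      resource: "user-1",
    },
  });

  return stream.toUIMessageStreamResponse();
}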
Using useCompletion()
The useCompletion() hook handles single-turn completions between your frontend and a Mastra agent, allowing you to send a prompt and receive a streamed response over HTTP.
"use client";
import { useCompletion } from "@ai-sdk/react";
export function Completion() {
const { completion, input, handleInputChange, handleSubmit } = useCompletion({
api: "api/completion"
});
return (
<div>
<form onSubmit={handleSubmit}>
<input value={input} onChange={handleInputChange} placeholder="Name of city" />
</form>
<p>Completion result: {completion}</p>
</div>
);
}
Requests sent using the useCompletion() hook are handled by a standard server route. This example shows how to define a POST route using a Next.js Route Handler.
import { mastra } from "../../../mastra";

export async function POST(req: Request) {
  const { prompt } = await req.json();
  const myAgent = mastra.getAgent("weatherAgent");

  const stream = await myAgent.stream([{ role: "user", content: prompt }], {
    format: "aisdk",
  });

  return stream.toUIMessageStreamResponse();
}
Passing additional data
sendMessage() allows you to pass additional data from the frontend to Mastra. This data can then be read on the server and used to populate a RequestContext.
"use client";
import { useChat } from "@ai-sdk/react";
import { useState } from "react";
import { DefaultChatTransport } from 'ai';
export function ChatExtra() {
const [inputValue, setInputValue] = useState('')
const { messages, sendMessage } = useChat({
transport: new DefaultChatTransport({
api: 'http://localhost:4111/chat',
}),
});
const handleFormSubmit = (e: React.FormEvent) => {
e.preventDefault();
sendMessage({ text: inputValue }, {
body: {
data: {
userId: "user123",
preferences: {
language: "en",
temperature: "celsius"
}
}
}
});
};
return (
<div>
<pre>{JSON.stringify(messages, null, 2)}</pre>
<form onSubmit={handleFormSubmit}>
<input value={inputValue} onChange={e=>setInputValue(e.target.value)} placeholder="Name of city" />
</form>
</div>
);
}
import { mastra } from "../../../mastra";
import { RequestContext } from "@mastra/core/request-context";

export async function POST(req: Request) {
  const { messages, data } = await req.json();
  const myAgent = mastra.getAgent("weatherAgent");

  const requestContext = new RequestContext();
  if (data) {
    for (const [key, value] of Object.entries(data)) {
      requestContext.set(key, value);
    }
  }

  const stream = await myAgent.stream(messages, {
    requestContext,
    format: "aisdk",
  });

  return stream.toUIMessageStreamResponse();
}
Handling requestContext with server.middleware
You can also populate the RequestContext by reading custom data in a server middleware:
import { Mastra } from "@mastra/core";
import { weatherAgent } from "./agents/weather-agent"; // adjust to your project layout

export const mastra = new Mastra({
  agents: { weatherAgent },
  server: {
    middleware: [
      async (c, next) => {
        const requestContext = c.get("requestContext");

        if (c.req.method === "POST") {
          try {
            // Clone the request so the body can still be read downstream
            const clonedReq = c.req.raw.clone();
            const body = await clonedReq.json();

            if (body?.data) {
              for (const [key, value] of Object.entries(body.data)) {
                requestContext.set(key, value);
              }
            }
          } catch {
            // Ignore non-JSON bodies
          }
        }

        await next();
      },
    ],
  },
});
You can then access this data in your tools via the requestContext parameter. See the Request Context documentation for more details.
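For example, a minimal sketch of a tool that reads the values set above. The userId and preferences keys match the earlier examples; the tool itself is hypothetical:

import { createTool } from "@mastra/core/tools";
import { z } from "zod";

export const preferencesTool = createTool({
  id: "get-user-preferences",
  description: "Returns preferences for the calling user",
  inputSchema: z.object({}),
  execute: async ({ requestContext }) => {
    // Keys populated by the middleware / route handler above
    const userId = requestContext.get("userId");
    const preferences = requestContext.get("preferences");
    return { userId, preferences };
  },
});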