# Using Vercel AI SDK
Mastra integrates with Vercel’s AI SDK to support model routing, React Hooks, and data streaming methods.
**AI SDK v5:** Mastra also supports AI SDK v5. See the "Vercel AI SDK v5" section below for v5-specific methods.

The code examples on this page assume you're using the Next.js App Router at the root of your project, e.g. `app` rather than `src/app`.
## Model routing
When creating agents in Mastra, you can specify any AI SDK-supported model.
```typescript
import { openai } from "@ai-sdk/openai";
import { Agent } from "@mastra/core/agent";

export const weatherAgent = new Agent({
  name: "Weather Agent",
  instructions: "Instructions for the agent...",
  model: openai("gpt-4-turbo"),
});
```
See Model Providers and Model Capabilities for more information.
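Because model routing goes through the AI SDK provider interface, switching providers is a one-line change. As a hedged sketch, assuming you have installed `@ai-sdk/anthropic` (the model ID shown is illustrative):

```typescript
import { anthropic } from "@ai-sdk/anthropic";
import { Agent } from "@mastra/core/agent";

// Same Agent configuration as above; only the model factory changes.
export const weatherAgent = new Agent({
  name: "Weather Agent",
  instructions: "Instructions for the agent...",
  model: anthropic("claude-3-5-sonnet-20241022"),
});
```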
## React Hooks
Mastra supports AI SDK hooks for connecting frontend components directly to agents using HTTP streams.
Install the required AI SDK React package:

```bash
npm install @ai-sdk/react
```
### Using the `useChat()` Hook

The `useChat` hook handles real-time chat interactions between your frontend and a Mastra agent, enabling you to send prompts and receive streaming responses over HTTP.
"use client";
import { useChat } from "@ai-sdk/react";
export function Chat() {
const { messages, input, handleInputChange, handleSubmit } = useChat({
api: "api/chat"
});
return (
<div>
<pre>{JSON.stringify(messages, null, 2)}</pre>
<form onSubmit={handleSubmit}>
<input value={input} onChange={handleInputChange} placeholder="Name of city" />
</form>
</div>
);
}
Requests sent using the `useChat` hook are handled by a standard server route. This example shows how to define a POST route using a Next.js Route Handler.
```typescript
import { mastra } from "../../mastra";

export async function POST(req: Request) {
  const { messages } = await req.json();
  const myAgent = mastra.getAgent("weatherAgent");
  const stream = await myAgent.stream(messages);

  return stream.toDataStreamResponse();
}
```
When using `useChat` with agent memory, refer to the Agent Memory section for key implementation details.
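As a rough sketch of what that typically involves, assuming your agent is configured with memory and that `stream` accepts `threadId` and `resourceId` options (check the Agent Memory docs for the exact signature), the route handler passes identifiers so conversation history is scoped per user and per thread:

```typescript
import { mastra } from "../../mastra";

export async function POST(req: Request) {
  // `threadId` and `resourceId` are hypothetical request fields that your
  // frontend would need to send alongside the messages.
  const { messages, threadId, resourceId } = await req.json();
  const myAgent = mastra.getAgent("weatherAgent");

  // Scope the agent's memory to this conversation thread and user/resource.
  const stream = await myAgent.stream(messages, { threadId, resourceId });

  return stream.toDataStreamResponse();
}
```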
### Using the `useCompletion()` Hook

The `useCompletion` hook handles single-turn completions between your frontend and a Mastra agent, allowing you to send a prompt and receive a streamed response over HTTP.
"use client";
import { useCompletion } from "@ai-sdk/react";
export function Completion() {
const { completion, input, handleInputChange, handleSubmit } = useCompletion({
api: "api/completion"
});
return (
<div>
<form onSubmit={handleSubmit}>
<input value={input} onChange={handleInputChange} placeholder="Name of city" />
</form>
<p>Completion result: {completion}</p>
</div>
);
}
Requests sent using the `useCompletion` hook are handled by a standard server route. This example shows how to define a POST route using a Next.js Route Handler.
```typescript
import { mastra } from "../../../mastra";

export async function POST(req: Request) {
  const { prompt } = await req.json();
  const myAgent = mastra.getAgent("weatherAgent");
  const stream = await myAgent.stream([{ role: "user", content: prompt }]);

  return stream.toDataStreamResponse();
}
```
### Using the `useObject()` Hook

The `useObject` hook consumes streamed text from a Mastra agent and parses it into a structured JSON object based on a defined schema.
"use client";
import { experimental_useObject as useObject } from "@ai-sdk/react";
import { z } from "zod";
export function Object() {
const { object, submit } = useObject({
api: "api/object",
schema: z.object({
weather: z.string()
})
});
return (
<div>
<button onClick={() => submit("London")}>Generate</button>
{object ? <pre>{JSON.stringify(object, null, 2)}</pre> : null}
</div>
);
}
Requests sent using the `useObject` hook are handled by a standard server route. This example shows how to define a POST route using a Next.js Route Handler.
```typescript
import { mastra } from "../../../mastra";
import { z } from "zod";

export async function POST(req: Request) {
  const body = await req.json();
  const myAgent = mastra.getAgent("weatherAgent");
  const stream = await myAgent.stream(body, {
    output: z.object({
      weather: z.string()
    })
  });

  return stream.toTextStreamResponse();
}
```
### Passing additional data with `sendExtraMessageFields`

The `sendExtraMessageFields` option allows you to pass additional data from the frontend to Mastra. This data is available on the server as `RuntimeContext`.
"use client";
import { useChat } from "@ai-sdk/react";
export function ChatExtra() {
const { messages, input, handleInputChange, handleSubmit } = useChat({
api: "/api/chat-extra",
sendExtraMessageFields: true
});
const handleFormSubmit = (e: React.FormEvent) => {
e.preventDefault();
handleSubmit(e, {
data: {
userId: "user123",
preferences: {
language: "en",
temperature: "celsius"
}
}
});
};
return (
<div>
<pre>{JSON.stringify(messages, null, 2)}</pre>
<form onSubmit={handleFormSubmit}>
<input value={input} onChange={handleInputChange} placeholder="Name of city" />
</form>
</div>
);
}
Requests sent using `sendExtraMessageFields` are handled by a standard server route. This example shows how to extract the custom data and populate a `RuntimeContext` instance.
```typescript
import { mastra } from "../../../mastra";
import { RuntimeContext } from "@mastra/core/runtime-context";

export async function POST(req: Request) {
  const { messages, data } = await req.json();
  const myAgent = mastra.getAgent("weatherAgent");
  const runtimeContext = new RuntimeContext();

  if (data) {
    for (const [key, value] of Object.entries(data)) {
      runtimeContext.set(key, value);
    }
  }

  const stream = await myAgent.stream(messages, { runtimeContext });

  return stream.toDataStreamResponse();
}
```
### Handling `runtimeContext` with `server.middleware`

You can also populate the `RuntimeContext` by reading custom data in a server middleware:
```typescript
import { Mastra } from "@mastra/core/mastra";
// Adjust the import path to wherever your agent is defined.
import { weatherAgent } from "./agents/weatherAgent";

export const mastra = new Mastra({
  agents: { weatherAgent },
  server: {
    middleware: [
      async (c, next) => {
        const runtimeContext = c.get("runtimeContext");

        if (c.req.method === "POST") {
          try {
            const clonedReq = c.req.raw.clone();
            const body = await clonedReq.json();

            if (body?.data) {
              for (const [key, value] of Object.entries(body.data)) {
                runtimeContext.set(key, value);
              }
            }
          } catch {
            // No JSON body; continue without populating the runtime context.
          }
        }

        await next();
      },
    ],
  },
});
```
You can then access this data in your tools via the `runtimeContext` parameter. See the Runtime Context documentation for more details.
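As an illustrative sketch (the tool name and schema here are hypothetical; see the Runtime Context documentation for the authoritative API), a tool's `execute` function receives `runtimeContext` and can read the values set by the route handler or middleware:

```typescript
import { createTool } from "@mastra/core/tools";
import { z } from "zod";

export const weatherTool = createTool({
  id: "get-weather",
  description: "Gets the weather for a city",
  inputSchema: z.object({
    city: z.string(),
  }),
  execute: async ({ context, runtimeContext }) => {
    // Values set via runtimeContext.set(...) on the server are readable here.
    // "preferences" matches the top-level key sent by the frontend example.
    const preferences = runtimeContext.get("preferences") as
      | { language?: string; temperature?: string }
      | undefined;
    const unit = preferences?.temperature ?? "celsius";

    return { city: context.city, unit };
  },
});
```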
## Streaming data
The `ai` package provides utilities for managing custom data streams. In some cases, you may want to send structured updates or annotations to the client using an agent's `dataStream`.
Install the required package:

```bash
npm install ai
```
### Using `createDataStream()`

The `createDataStream` function allows you to stream additional data to the client.
```typescript
import { createDataStream } from "ai";
import { Agent } from "@mastra/core/agent";

export const weatherAgent = new Agent({...});

createDataStream({
  async execute(dataStream) {
    dataStream.writeData({ value: "Hello" });
    dataStream.writeMessageAnnotation({ type: "status", value: "processing" });

    const agentStream = await weatherAgent.stream("What is the weather");
    agentStream.mergeIntoDataStream(dataStream);
  },
  onError: (error) => `Custom error: ${error}`
});
```
### Using `createDataStreamResponse()`

The `createDataStreamResponse` function creates a response object that streams data to the client.
```typescript
import { mastra } from "../../../mastra";
import { createDataStreamResponse } from "ai";

export async function POST(req: Request) {
  const { messages } = await req.json();
  const myAgent = mastra.getAgent("weatherAgent");
  const agentStream = await myAgent.stream(messages);

  const response = createDataStreamResponse({
    status: 200,
    statusText: "OK",
    headers: {
      "Custom-Header": "value"
    },
    async execute(dataStream) {
      dataStream.writeData({ value: "Hello" });
      dataStream.writeMessageAnnotation({
        type: "status",
        value: "processing"
      });

      agentStream.mergeIntoDataStream(dataStream);
    },
    onError: (error) => `Custom error: ${error}`
  });

  return response;
}
```
## Vercel AI SDK v5
This guide covers Mastra-specific considerations when migrating from AI SDK v4 to v5. Please add any feedback or bug reports to the AI SDK v5 mega issue on GitHub.
### Experimental streamVNext Support

Mastra's experimental `streamVNext` method now includes native AI SDK v5 support through the `format` parameter. This provides seamless integration with AI SDK v5's streaming interfaces without requiring compatibility wrappers.
```typescript
// Use streamVNext with AI SDK v5 format
const stream = await agent.streamVNext(messages, {
  format: 'aisdk' // Enable AI SDK v5 compatibility
});

// The stream is now compatible with AI SDK v5 interfaces
return stream.toUIMessageStreamResponse();
```
### Official migration guide

Follow the official AI SDK v5 Migration Guide for all AI SDK core breaking changes, package updates, and API changes. This guide covers only the Mastra-specific aspects of the migration.

- **Data compatibility:** New data stored in v5 format will no longer work if you downgrade from v5 to v4.
- **Backup recommendation:** Keep DB backups from before you upgrade to v5.
### Memory and Storage

Mastra automatically handles AI SDK v4 data using its internal `MessageList` class, which manages format conversion, including v4 to v5. No database migrations are required; your existing messages are translated on the fly and continue working after you upgrade.
### Message Format Conversion

For cases where you need to manually convert messages between AI SDK and Mastra formats, use the `convertMessages` utility:
```typescript
import { convertMessages } from '@mastra/core/agent';

// Convert AI SDK v4 messages to v5
const aiv5UIMessages = convertMessages(aiv4Messages).to('AIV5.UI');

// Convert Mastra messages to AI SDK v5
const aiv5CoreMessages = convertMessages(mastraMessages).to('AIV5.Core');

// Supported output formats:
// 'Mastra.V2', 'AIV4.UI', 'AIV5.UI', 'AIV5.Core', 'AIV5.Model'
```
This utility is helpful when you want to fetch messages directly from your storage DB and convert them for use in AI SDK.
### Enabling stream compatibility

To enable AI SDK v5 compatibility, use the experimental `streamVNext` method with the `format` parameter:
```typescript
import { mastra } from "../../../mastra";

export async function POST(req: Request) {
  const { messages } = await req.json();
  const myAgent = mastra.getAgent("weatherAgent");

  // Use streamVNext with AI SDK v5 format (experimental)
  const stream = await myAgent.streamVNext(messages, {
    format: 'aisdk'
  });

  return stream.toUIMessageStreamResponse();
}
```
Note: The `streamVNext` method with format support is experimental and may change as we refine the feature based on feedback. See the Agent Streaming documentation for more details about `streamVNext`.