Example: AI SDK v5 Integration

This example demonstrates how to integrate Mastra agents with AI SDK v5 to build modern streaming chat interfaces. It walks through a complete Next.js application with real-time conversations, persistent memory, and tool integration, built on the stream method's AI SDK v5 format support.

Key Features

  • Streaming Chat Interface: Uses AI SDK v5's useChat hook for real-time conversations
  • Mastra Agent Integration: Weather agent with custom tools and OpenAI GPT-4o-mini
  • Persistent Memory: Conversation history stored with LibSQL
  • Compatibility Layer: Seamless integration between Mastra and AI SDK v5 streams
  • Tool Integration: Custom weather tool for real-time data fetching

Mastra Configuration

First, set up your Mastra agent with memory and tools:

src/mastra/index.ts
import { ConsoleLogger } from "@mastra/core/logger";
import { Mastra } from "@mastra/core/mastra";
import { weatherAgent } from "./agents";

export const mastra = new Mastra({
  agents: { weatherAgent },
  logger: new ConsoleLogger(),
  // aiSdkCompat: "v4", // Optional: for additional compatibility
});
src/mastra/agents/index.ts
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";
import { Memory } from "@mastra/memory";
import { LibSQLStore } from "@mastra/libsql";
import { weatherTool } from "../tools";

export const memory = new Memory({
  storage: new LibSQLStore({
    url: `file:./mastra.db`, // Local SQLite file for conversation storage
  }),
  options: {
    semanticRecall: false, // No vector-based recall in this example
    workingMemory: {
      enabled: false,
    },
    lastMessages: 5, // Include the five most recent messages as context
  },
});

export const weatherAgent = new Agent({
  name: "Weather Agent",
  instructions: `
    You are a helpful weather assistant that provides accurate weather information.

    Your primary function is to help users get weather details for specific locations. When responding:
    - Always ask for a location if none is provided
    - Include relevant details like humidity, wind conditions, and precipitation
    - Keep responses concise but informative

    Use the weatherTool to fetch current weather data.
  `,
  model: openai("gpt-4o-mini"),
  tools: {
    weatherTool,
  },
  memory,
});

Custom Weather Tool

Create a tool that fetches real-time weather data:

src/mastra/tools/index.ts
import { createTool } from "@mastra/core/tools";
import { z } from "zod";

export const weatherTool = createTool({
  id: "get-weather",
  description: "Get current weather for a location",
  inputSchema: z.object({
    location: z.string().describe("City name"),
  }),
  outputSchema: z.object({
    temperature: z.number(),
    feelsLike: z.number(),
    humidity: z.number(),
    windSpeed: z.number(),
    windGust: z.number(),
    conditions: z.string(),
    location: z.string(),
  }),
  execute: async ({ context }) => {
    return await getWeather(context.location);
  },
});

const getWeather = async (location: string) => {
  // Resolve the location name to coordinates via the Open-Meteo geocoding API
  const geocodingUrl = `https://geocoding-api.open-meteo.com/v1/search?name=${encodeURIComponent(location)}&count=1`;
  const geocodingResponse = await fetch(geocodingUrl);
  const geocodingData = await geocodingResponse.json();

  if (!geocodingData.results?.[0]) {
    throw new Error(`Location '${location}' not found`);
  }

  const { latitude, longitude, name } = geocodingData.results[0];

  // Fetch current conditions for those coordinates
  const weatherUrl = `https://api.open-meteo.com/v1/forecast?latitude=${latitude}&longitude=${longitude}&current=temperature_2m,apparent_temperature,relative_humidity_2m,wind_speed_10m,wind_gusts_10m,weather_code`;
  const response = await fetch(weatherUrl);
  const data = await response.json();

  return {
    temperature: data.current.temperature_2m,
    feelsLike: data.current.apparent_temperature,
    humidity: data.current.relative_humidity_2m,
    windSpeed: data.current.wind_speed_10m,
    windGust: data.current.wind_gusts_10m,
    conditions: getWeatherCondition(data.current.weather_code),
    location: name,
  };
};

// Map Open-Meteo WMO weather codes to human-readable descriptions
const getWeatherCondition = (code: number): string => {
  const conditions: Record<number, string> = {
    0: "Clear sky",
    1: "Mainly clear",
    2: "Partly cloudy",
    3: "Overcast",
    45: "Foggy",
    51: "Light drizzle",
    61: "Slight rain",
    63: "Moderate rain",
    65: "Heavy rain",
    71: "Slight snow",
    95: "Thunderstorm",
  };
  return conditions[code] ?? "Unknown";
};

Next.js API Routes

Streaming Chat Endpoint

Create an API route that streams responses from your Mastra agent using the stream method with AI SDK v5 format:

app/api/chat/route.ts
import { mastra } from "@/src/mastra";

const myAgent = mastra.getAgent("weatherAgent");

export async function POST(req: Request) {
  const { messages } = await req.json();

  // Use stream with AI SDK v5 format (experimental)
  const stream = await myAgent.stream(messages, {
    format: "aisdk", // Enable AI SDK v5 compatibility
    memory: {
      thread: "user-session", // Use an actual user/session ID in production
      resource: "weather-chat",
    },
  });

  // Stream is already in AI SDK v5 format
  return stream.toUIMessageStreamResponse();
}
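
In production, replace the hardcoded "user-session" thread with a real per-user identifier so each visitor gets an isolated history. A minimal sketch of the same handler, assuming a session_id cookie (the cookie name and fallback are illustrative, not part of the example):

export async function POST(req: Request) {
  const { messages } = await req.json();

  // Hypothetical: derive the memory thread from a session cookie
  const cookie = req.headers.get("cookie") ?? "";
  const sessionId = /session_id=([^;]+)/.exec(cookie)?.[1] ?? "anonymous";

  const stream = await myAgent.stream(messages, {
    format: "aisdk",
    memory: {
      thread: `weather-${sessionId}`, // One thread per session
      resource: "weather-chat",
    },
  });

  return stream.toUIMessageStreamResponse();
}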

Initial Chat History

Load conversation history from Mastra Memory:

app/api/initial-chat/route.ts
import { mastra } from "@/src/mastra";
import { NextResponse } from "next/server";
import { convertMessages } from "@mastra/core/agent";

const myAgent = mastra.getAgent("weatherAgent");

export async function GET() {
  // Query the stored history for the demo thread
  const result = await myAgent.getMemory()?.query({
    threadId: "user-session",
  });

  // Convert stored messages to the AI SDK v5 UI message shape
  const messages = convertMessages(result?.uiMessages || []).to("AIV5.UI");
  return NextResponse.json(messages);
}

React Chat Interface

Build the frontend using AI SDK v5's useChat hook:

app/page.tsx
"use client";

import { Message, useChat } from "@ai-sdk/react";
import useSWR from "swr";

const fetcher = (url: string) => fetch(url).then((res) => res.json());

export default function Chat() {
  // Load initial conversation history
  const { data: initialMessages = [] } = useSWR<Message[]>(
    "/api/initial-chat",
    fetcher,
  );

  // Set up streaming chat with AI SDK v5
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    initialMessages,
  });

  return (
    <div className="flex flex-col w-full max-w-md py-24 mx-auto stretch">
      {messages.map((m) => (
        <div
          key={m.id}
          className="whitespace-pre-wrap"
          style={{ marginTop: "1em" }}
        >
          <h3
            style={{
              fontWeight: "bold",
              color: m.role === "user" ? "green" : "yellow",
            }}
          >
            {m.role === "user" ? "User: " : "AI: "}
          </h3>
          {/* Render only the text parts of each message */}
          {m.parts
            .filter((p) => p.type === "text")
            .map((p) => p.text)
            .join("\n")}
        </div>
      ))}

      <form onSubmit={handleSubmit}>
        <input
          className="fixed dark:bg-zinc-900 bottom-0 w-full max-w-md p-2 mb-8 border border-zinc-300 dark:border-zinc-800 rounded shadow-xl"
          value={input}
          placeholder="Ask about the weather..."
          onChange={handleInputChange}
        />
      </form>
    </div>
  );
}

Package Configuration

Install the required dependencies:

NOTE: AI SDK v5 is still in beta. While it is, you'll need to install the beta AI SDK packages together with the matching Mastra beta releases pinned below.

package.json
{
  "dependencies": {
    "@ai-sdk/openai": "2.0.0-beta.1",
    "@ai-sdk/react": "2.0.0-beta.1",
    "@mastra/core": "0.0.0-ai-v5-20250625173645",
    "@mastra/libsql": "0.0.0-ai-v5-20250625173645",
    "@mastra/memory": "0.0.0-ai-v5-20250625173645",
    "next": "15.1.7",
    "react": "^19.0.0",
    "react-dom": "^19.0.0",
    "swr": "^2.3.3",
    "zod": "^3.25.67"
  }
}

Key Integration Points

Experimental stream Format Support

The experimental stream method with format: 'aisdk' provides native AI SDK v5 compatibility:

// Use stream with AI SDK v5 format
const stream = await agent.stream(messages, {
  format: "aisdk", // Returns AISDKV5OutputStream
});

// Direct compatibility with AI SDK v5 interfaces
return stream.toUIMessageStreamResponse();

Memory Persistence

Conversations are automatically persisted using Mastra Memory:

  • Each conversation uses a unique threadId
  • History is loaded on page refresh via /api/initial-chat
  • New messages are automatically stored by the agent (the sketch below reads them back directly)
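
To verify persistence outside the API routes, you can query the store directly. A minimal sketch, reusing the memory instance exported from src/mastra/agents/index.ts and the demo's hardcoded thread ID:

import { memory } from "@/src/mastra/agents";

// Read back everything stored under the demo thread; this is the same
// query that backs the /api/initial-chat route
const { uiMessages } = await memory.query({ threadId: "user-session" });
console.log(`Stored messages: ${uiMessages.length}`);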

Tool Integration

The weather tool is seamlessly integrated (a quick end-to-end check follows the list):

  • Agent automatically calls the tool when weather information is needed
  • Real-time data is fetched from external APIs
  • Structured output ensures consistent responses
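
To confirm the wiring, you can call the agent once from a script and inspect the result. A hedged sketch, assuming the generate result exposes text and toolCalls as in current Mastra releases:

import { mastra } from "@/src/mastra";

const agent = mastra.getAgent("weatherAgent");

// One-off, non-streaming call; the agent should decide to invoke get-weather
const result = await agent.generate("What's the weather in Berlin?");

console.log(result.text); // Natural-language answer
console.log(result.toolCalls); // Should include the get-weather invocation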

Running the Example

  1. Set your OpenAI API key:
     echo "OPENAI_API_KEY=your_key_here" > .env.local
  2. Start the development server:
     pnpm dev
  3. Visit http://localhost:3000 and ask about weather in different cities!




