Example: AI SDK useChat Hook with Mastra Server
This example shows how to integrate Mastra's memory with the Vercel AI SDK's `useChat` hook when using a Mastra Server deployment. The key is connecting your Next.js app to the Mastra Server via the `MastraClient` SDK.
Architecture Overview
When using a Mastra Server, you have two options:
Option 1: Via API Route (Recommended for production)
- Your React frontend uses the `useChat` hook to manage UI state
- Your API route uses `MastraClient` to communicate with the Mastra Server
- The Mastra Server handles agent execution, memory, and tools
Benefits:
- Keeps your Mastra Server URL and API keys secure on the backend
- Allows you to add additional authentication/authorization logic
- Enables request transformation and response handling
- Better for multi-tenant applications
Option 2: Direct Connection (Good for development/prototyping)
- Your React frontend uses the `useChat` hook to manage UI state
- The `useChat` hook connects directly to the Mastra Server's stream endpoint
- The Mastra Server handles agent execution, memory, and tools
Preventing Message Duplication with useChat
The default behavior of `useChat` sends the entire chat history with each request. Since Mastra's memory automatically retrieves history based on `threadId`, sending the full history from the client leads to duplicate messages in the context window and in storage.

Solution: Configure `useChat` to send only the latest message, along with your `threadId` and `resourceId`.
```tsx
import { useChat } from "ai/react";

export function Chat({ threadId, resourceId }) {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: "/api/chat", // Your backend endpoint
    // Pass only the latest message and your custom IDs
    experimental_prepareRequestBody: (request) => {
      // Ensure the messages array is not empty and get the last message
      const lastMessage =
        request.messages.length > 0
          ? request.messages[request.messages.length - 1]
          : null;
      // Return the structured body for your API route
      return {
        message: lastMessage, // Send only the most recent message content/role
        threadId,
        resourceId,
      };
    },
    // Optional: initial messages if loading history from the backend
    // initialMessages: loadedMessages,
  });

  // ... rest of your chat UI component
  return (
    <div>
      {/* Render messages */}
      <form onSubmit={handleSubmit}>
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Send a message..."
        />
        <button type="submit">Send</button>
      </form>
    </div>
  );
}
```
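The body-shaping logic inside `experimental_prepareRequestBody` can be pulled out into a plain function, which makes the "latest message only" rule easy to unit-test in isolation. This is an illustrative sketch — the `ChatMessage` type below is a simplified stand-in, not an AI SDK or Mastra type:

```typescript
// Simplified stand-in for the message shape useChat passes to
// experimental_prepareRequestBody.
interface ChatMessage {
  role: "user" | "assistant" | "system";
  content: string;
}

// Build the request body Mastra needs: only the latest message plus the
// thread/resource IDs. Returns null when there is no message to send.
function prepareChatRequestBody(
  messages: ChatMessage[],
  threadId: string,
  resourceId: string,
): { message: ChatMessage; threadId: string; resourceId: string } | null {
  if (messages.length === 0) return null;
  const lastMessage = messages[messages.length - 1];
  return { message: lastMessage, threadId, resourceId };
}
```

Keeping this as a pure function also makes it easy to verify that an empty history never produces a malformed request.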
Your API route then uses `MastraClient` to forward the single message to the Mastra Server:

```typescript
import { MastraClient } from "@mastra/client-js";
import type { CoreMessage } from "@mastra/core";

// Initialize the Mastra client to connect to your server
const mastraClient = new MastraClient({
  baseUrl: process.env.MASTRA_SERVER_URL || "http://localhost:4111",
  // Optional: add an API key if your server requires authentication
  headers: {
    "x-api-key": process.env.MASTRA_API_KEY,
  },
});

export async function POST(request: Request) {
  // Get the data structured by experimental_prepareRequestBody
  const {
    message,
    threadId,
    resourceId,
  }: { message: CoreMessage | null; threadId: string; resourceId: string } =
    await request.json();

  // Handle cases where the message might be null (e.g., initial load or error)
  if (!message || !message.content) {
    return new Response("Missing message content", { status: 400 });
  }

  // Get the agent from your Mastra Server
  const agent = mastraClient.getAgent("ChatAgent"); // Use your agent's ID

  // Stream the response with memory context
  const response = await agent.stream({
    messages: [{ role: message.role || "user", content: message.content }],
    threadId,
    resourceId,
  });

  // Return the streaming response to the frontend
  return new Response(response.body);
}
```
Alternative: Direct Server Route
If you’ve deployed your Mastra Server with a custom route handler for chat, you can also connect directly:
```tsx
import { useChat } from "ai/react";

export function Chat({ threadId, resourceId, agentId = "ChatAgent" }) {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    // Connect directly to your Mastra Server's stream endpoint
    api: `${process.env.NEXT_PUBLIC_MASTRA_SERVER_URL}/api/agents/${agentId}/stream`,
    experimental_prepareRequestBody: (request) => {
      const lastMessage =
        request.messages.length > 0
          ? request.messages[request.messages.length - 1]
          : null;
      // The stream endpoint expects a `messages` array (not a single
      // `message` field); send only the latest message to avoid duplication
      return {
        messages: lastMessage ? [lastMessage] : [],
        threadId,
        resourceId,
      };
    },
    headers: {
      // Include authentication if required. Note: NEXT_PUBLIC_ variables are
      // exposed to the browser, so prefer the API-route approach in production
      "x-api-key": process.env.NEXT_PUBLIC_MASTRA_API_KEY,
    },
  });

  // ... rest of component
}
```
Environment Variables
Make sure to configure your environment variables:
```bash
MASTRA_SERVER_URL=http://localhost:4111   # Your Mastra Server URL
MASTRA_API_KEY=your-api-key               # If authentication is enabled

# For direct client connection (if using the alternative approach)
NEXT_PUBLIC_MASTRA_SERVER_URL=http://localhost:4111
NEXT_PUBLIC_MASTRA_API_KEY=your-api-key
```
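A missing `MASTRA_SERVER_URL` or API key usually surfaces later as an opaque connection or authentication error at request time, so it can help to fail fast at startup. A small helper along these lines (illustrative, not part of Mastra) would do:

```typescript
// Read a required environment variable, throwing a descriptive error when it
// is missing instead of failing later with an opaque connection error.
function requireEnv(name: string, fallback?: string): string {
  const value = process.env[name] ?? fallback;
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Example: resolve the server URL, defaulting to a local dev server.
const serverUrl = requireEnv("MASTRA_SERVER_URL", "http://localhost:4111");
```

Calling `requireEnv` once at module load means a misconfigured deployment fails immediately with a clear message.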
See the AI SDK documentation on message persistence for more background.
Basic Thread Management UI
While this page focuses on `useChat`, you can also build UIs for managing threads (listing, creating, selecting). This typically involves backend API endpoints that interact with Mastra's memory functions, such as `memory.getThreadsByResourceId()` and `memory.createThread()`.
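Those backend endpoints might look like the following sketch. The memory calls here are stood in by a minimal in-memory stub so the request/response shape is clear — the stub's `getThreadsByResourceId`/`createThread` signatures are illustrative assumptions; in a real app you would call your configured Mastra memory instance (or the equivalent `MastraClient` methods) instead:

```typescript
// Minimal in-memory stand-in for Mastra's thread-related memory API.
// For illustration only; signatures are assumptions, not the real API.
interface Thread {
  id: string;
  resourceId: string;
  title: string;
}

class InMemoryThreads {
  private threads: Thread[] = [];

  async getThreadsByResourceId(args: { resourceId: string }): Promise<Thread[]> {
    return this.threads.filter((t) => t.resourceId === args.resourceId);
  }

  async createThread(args: { resourceId: string; title?: string }): Promise<Thread> {
    const thread: Thread = {
      id: `thread_${this.threads.length + 1}`,
      resourceId: args.resourceId,
      title: args.title ?? "New Conversation",
    };
    this.threads.push(thread);
    return thread;
  }
}

const memory = new InMemoryThreads();

// Handler logic for GET /api/threads?resourceId=... — list a user's threads
async function listThreads(resourceId: string): Promise<Thread[]> {
  return memory.getThreadsByResourceId({ resourceId });
}

// Handler logic for POST /api/threads — create a new thread for a user
async function createThread(resourceId: string, title?: string): Promise<Thread> {
  return memory.createThread({ resourceId, title });
}
```

The frontend sketch below then only needs `fetch` calls against these two endpoints.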
```tsx
import React, { useState, useEffect } from "react";

// Assume these call your backend thread endpoints
async function fetchThreads(userId: string): Promise<{ id: string; title: string }[]> { /* ... */ }
async function createNewThread(userId: string): Promise<{ id: string; title: string }> { /* ... */ }

function ThreadList({ userId, currentThreadId, onSelectThread }) {
  const [threads, setThreads] = useState<{ id: string; title: string }[]>([]);
  // ... loading and error states ...

  useEffect(() => {
    // Fetch the threads for this user
    fetchThreads(userId).then(setThreads);
  }, [userId]);

  const handleCreateThread = async () => {
    // Create the thread on the backend, add it to the list, and select it
    const newThread = await createNewThread(userId);
    setThreads((prev) => [...prev, newThread]);
    onSelectThread(newThread.id);
  };

  return (
    <div>
      <h2>Conversations</h2>
      <button onClick={handleCreateThread}>New Conversation</button>
      <ul>
        {threads.map((thread) => (
          <li key={thread.id}>
            <button
              onClick={() => onSelectThread(thread.id)}
              disabled={thread.id === currentThreadId}
            >
              {thread.title || `Chat ${thread.id.substring(0, 8)}...`}
            </button>
          </li>
        ))}
      </ul>
    </div>
  );
}
```
```tsx
// Example usage in a parent chat component
function ChatApp() {
  const userId = "user_123";
  const [currentThreadId, setCurrentThreadId] = useState<string | null>(null);

  return (
    <div style={{ display: "flex" }}>
      <ThreadList
        userId={userId}
        currentThreadId={currentThreadId}
        onSelectThread={setCurrentThreadId}
      />
      <div style={{ flexGrow: 1 }}>
        {currentThreadId ? (
          // Your useChat component from above
          <Chat threadId={currentThreadId} resourceId={userId} agentId="your-agent-id" />
        ) : (
          <div>Select or start a conversation.</div>
        )}
      </div>
    </div>
  );
}
```
Key Differences: Mastra Server vs Direct Integration
When using Mastra Server:
- Your Next.js app acts as a client to the Mastra Server
- Use `MastraClient` to communicate with the server
- Agent configuration, memory, and tools are managed on the server
- Better for production deployments and multi-client scenarios
When using direct integration (from the AI SDK docs):
- The agent runs directly in your Next.js server
- You manage agent configuration and dependencies locally
- Simpler for development, but you take on more setup yourself
Troubleshooting
Common Issues:
- Connection refused: Ensure your Mastra Server is running and accessible
- Authentication errors: Check your API key configuration
- Message duplication: Verify you’re only sending the latest message
- Missing thread history: Ensure `threadId` and `resourceId` are passed correctly
- CORS errors (direct connection): Configure your Mastra Server to allow requests from your frontend origin
Related
- MastraClient Overview: Learn more about the Mastra Client SDK
- Mastra Server: How to deploy and configure a Mastra Server
- Memory Overview: Core concepts of `resourceId` and `threadId`
- AI SDK Integration: General `useChat` documentation