AI SDK
If you're already using the Vercel AI SDK directly, withMastra() lets you wrap any AI SDK model with Mastra capabilities like processors and memory without switching to the full Mastra agent API. This is useful when you want to keep your existing AI SDK code but add input/output processing, conversation persistence, or content filtering.
If you want to use Mastra together with AI SDK UI (e.g. useChat()), visit the AI SDK UI guide.
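At a glance, you wrap the model once and keep the rest of your AI SDK calls unchanged. A minimal sketch of the shape (the options are shown as placeholder comments here; the examples below cover them concretely):

import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { withMastra } from '@mastra/ai-sdk';

// Wrap the model once; the result is used exactly like the original model.
const model = withMastra(openai('gpt-4o'), {
  // inputProcessors, outputProcessors, memory, ... (see Examples below)
});

const { text } = await generateText({ model, prompt: 'Hello!' });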
Installation
Install @mastra/ai-sdk to begin using the withMastra() function.
npm install @mastra/ai-sdk@beta
pnpm add @mastra/ai-sdk@beta
yarn add @mastra/ai-sdk@beta
bun add @mastra/ai-sdk@beta
Examples
With Processors
Processors let you transform messages before they're sent to the model (processInput) and after responses are received (processOutputResult). This example creates a logging processor that logs message counts at each stage, then wraps an OpenAI model with it.
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { withMastra } from '@mastra/ai-sdk';
import type { Processor } from '@mastra/core/processors';
const loggingProcessor: Processor<'logger'> = {
  id: 'logger',
  // Runs before the messages are sent to the model.
  async processInput({ messages }) {
    console.log('Input:', messages.length, 'messages');
    return messages;
  },
  // Runs after the model's response is received.
  async processOutputResult({ messages }) {
    console.log('Output:', messages.length, 'messages');
    return messages;
  },
};

const model = withMastra(openai('gpt-4o'), {
  inputProcessors: [loggingProcessor],
  outputProcessors: [loggingProcessor],
});

const { text } = await generateText({
  model,
  prompt: 'What is 2 + 2?',
});
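The same interface supports the content filtering mentioned in the introduction. A minimal sketch of a guard processor, assuming messages can be checked via their serialized form (the email regex and the JSON.stringify inspection are illustrative assumptions, not the actual message shape):

import type { Processor } from '@mastra/core/processors';

// Hypothetical guard: drops any input message that appears to contain
// an email address. Serializing the whole message is a blunt check used
// for illustration; a real processor would inspect message content directly.
const myGuardProcessor: Processor<'guard'> = {
  id: 'guard',
  async processInput({ messages }) {
    return messages.filter(
      (message) => !/\S+@\S+\.\S+/.test(JSON.stringify(message)),
    );
  },
};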
With Memory
Memory automatically loads previous messages from storage before the LLM call and saves new messages after. This example configures a LibSQL storage backend to persist conversation history, loading the last 10 messages for context.
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { withMastra } from '@mastra/ai-sdk';
import { LibSQLStore } from '@mastra/libsql';
const storage = new LibSQLStore({
  id: 'my-app',
  url: 'file:./data.db',
});
await storage.init();

const model = withMastra(openai('gpt-4o'), {
  memory: {
    storage,
    threadId: 'user-thread-123',
    resourceId: 'user-123',
    // Load up to the 10 most recent messages as context.
    lastMessages: 10,
  },
});

const { text } = await generateText({
  model,
  prompt: 'What did we talk about earlier?',
});
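Because new messages are saved to the thread after each call, a later call with the same wrapped model (and therefore the same threadId) sees the earlier exchange. A minimal sketch:

// First call: the exchange is stored in thread 'user-thread-123'.
await generateText({
  model,
  prompt: 'My favorite color is blue.',
});

// Later call in the same thread: the previous messages are loaded
// as context, so the model can answer from them.
const followUp = await generateText({
  model,
  prompt: 'What is my favorite color?',
});
console.log(followUp.text);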
With Processors & Memory
You can combine processors and memory together. Input processors run after memory loads historical messages, and output processors run before memory saves the response.
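This example reuses the hypothetical myGuardProcessor sketched in the With Processors section; myLoggingProcessor below is a minimal output-only variant of the earlier logging processor:

import type { Processor } from '@mastra/core/processors';

// Output-only logger, analogous to the logging processor above.
const myLoggingProcessor: Processor<'output-logger'> = {
  id: 'output-logger',
  async processOutputResult({ messages }) {
    console.log('Output:', messages.length, 'messages');
    return messages;
  },
};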
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { withMastra } from '@mastra/ai-sdk';
import { LibSQLStore } from '@mastra/libsql';
const storage = new LibSQLStore({ id: 'my-app', url: 'file:./data.db' });
await storage.init();
const model = withMastra(openai('gpt-4o'), {
  // Input processors run after memory loads history;
  // output processors run before memory saves the response.
  inputProcessors: [myGuardProcessor],
  outputProcessors: [myLoggingProcessor],
  memory: {
    storage,
    threadId: 'thread-123',
    resourceId: 'user-123',
    lastMessages: 10,
  },
});

const { text } = await generateText({
  model,
  prompt: 'Hello!',
});
Related
- withMastra() - API reference for withMastra()
- Processors - Learn about input and output processors
- Memory - Overview of Mastra's memory system
- AI SDK UI - Using AI SDK UI hooks with Mastra agents, workflows, and networks