# AI SDK

If you're already using the [Vercel AI SDK](https://sdk.vercel.ai) directly and want to add Mastra capabilities like [processors](https://mastra.ai/docs/agents/processors/llms.txt) or [memory](https://mastra.ai/docs/memory/memory-processors/llms.txt) without switching to the full Mastra agent API, [`withMastra()`](https://mastra.ai/reference/ai-sdk/with-mastra/llms.txt) lets you wrap any AI SDK model with these features. This is useful when you want to keep your existing AI SDK code but add input/output processing, conversation persistence, or content filtering.

If you want to use Mastra together with AI SDK UI (e.g. `useChat()`), visit the [AI SDK UI guide](https://mastra.ai/guides/build-your-ui/ai-sdk-ui/llms.txt).

## Installation

Install `@mastra/ai-sdk` to begin using the `withMastra()` function.

```bash
npm install @mastra/ai-sdk@latest
```

```bash
pnpm add @mastra/ai-sdk@latest
```

```bash
yarn add @mastra/ai-sdk@latest
```

```bash
bun add @mastra/ai-sdk@latest
```

## Examples

### With Processors

Processors let you transform messages before they're sent to the model (`processInput`) and after responses are received (`processOutputResult`). This example creates a logging processor that logs message counts at each stage, then wraps an OpenAI model with it.

```typescript
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { withMastra } from '@mastra/ai-sdk';
import type { Processor } from '@mastra/core/processors';

const loggingProcessor: Processor<'logger'> = {
  id: 'logger',
  async processInput({ messages }) {
    console.log('Input:', messages.length, 'messages');
    return messages;
  },
  async processOutputResult({ messages }) {
    console.log('Output:', messages.length, 'messages');
    return messages;
  },
};

const model = withMastra(openai('gpt-4o'), {
  inputProcessors: [loggingProcessor],
  outputProcessors: [loggingProcessor],
});

const { text } = await generateText({
  model,
  prompt: 'What is 2 + 2?',
});
```

### With Memory

Memory automatically loads previous messages from storage before the LLM call and saves new messages after. This example configures a libSQL storage backend to persist conversation history, loading the last 10 messages for context.

```typescript
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { withMastra } from '@mastra/ai-sdk';
import { LibSQLStore } from '@mastra/libsql';

const storage = new LibSQLStore({
  id: 'my-app',
  url: 'file:./data.db',
});
await storage.init();

const model = withMastra(openai('gpt-4o'), {
  memory: {
    storage,
    threadId: 'user-thread-123',
    resourceId: 'user-123',
    lastMessages: 10,
  },
});

const { text } = await generateText({
  model,
  prompt: 'What did we talk about earlier?',
});
```
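Because new messages are saved back to storage after each call, a follow-up call on the same thread can reference earlier turns. A minimal sketch of that round trip, reusing the `model` configured above:

```typescript
// First turn: the prompt and response are persisted to 'user-thread-123'.
await generateText({ model, prompt: 'My favorite color is blue.' });

// Second turn: the last 10 saved messages are loaded as context,
// so the model can recall the earlier statement.
const { text: recalled } = await generateText({
  model,
  prompt: 'What is my favorite color?',
});
```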
### With Processors & Memory

You can combine processors and memory. Input processors run after memory loads historical messages, and output processors run before memory saves the response.

```typescript
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { withMastra } from '@mastra/ai-sdk';
import { LibSQLStore } from '@mastra/libsql';

const storage = new LibSQLStore({ id: 'my-app', url: 'file:./data.db' });
await storage.init();

// myGuardProcessor and myLoggingProcessor are placeholders for Processor
// objects defined as in the logging example above.
const model = withMastra(openai('gpt-4o'), {
  inputProcessors: [myGuardProcessor], // runs after memory loads history
  outputProcessors: [myLoggingProcessor], // runs before memory saves the response
  memory: {
    storage,
    threadId: 'thread-123',
    resourceId: 'user-123',
    lastMessages: 10,
  },
});

const { text } = await generateText({
  model,
  prompt: 'Hello!',
});
```

## Related

- [`withMastra()`](https://mastra.ai/reference/ai-sdk/with-mastra/llms.txt) - API reference for `withMastra()`
- [Processors](https://mastra.ai/docs/agents/processors/llms.txt) - Learn about input and output processors
- [Memory](https://mastra.ai/docs/memory/overview/llms.txt) - Overview of Mastra's memory system
- [AI SDK UI](https://mastra.ai/guides/build-your-ui/ai-sdk-ui/llms.txt) - Using AI SDK UI hooks with Mastra agents, workflows, and networks