
withMastra()

Wraps an AI SDK model with Mastra processors and/or memory.

Usage example

src/example.ts
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { withMastra } from '@mastra/ai-sdk';
import type { Processor } from '@mastra/core/processors';

const loggingProcessor: Processor<'logger'> = {
  id: 'logger',
  async processInput({ messages }) {
    console.log('Input:', messages.length, 'messages');
    return messages;
  },
};

const model = withMastra(openai('gpt-4o'), {
  inputProcessors: [loggingProcessor],
});

const { text } = await generateText({
  model,
  prompt: 'What is 2 + 2?',
});

Parameters

model: LanguageModelV2
Any AI SDK language model (e.g., `openai('gpt-4o')`, `anthropic('claude-3-opus')`).

options?: WithMastraOptions
Configuration object for processors and memory.

options.inputProcessors?: InputProcessor[]
Input processors to run before the LLM call.

options.outputProcessors?: OutputProcessor[]
Output processors to run on the LLM response.

options.memory?: WithMastraMemoryOptions
Memory configuration; enables automatic persistence of message history.

options.memory.storage: MemoryStorage
Storage adapter for message persistence (e.g., LibSQLStore, PostgresStore).

options.memory.threadId: string
Thread ID for conversation persistence.

options.memory.resourceId?: string
Resource ID (user or session identifier).

options.memory.lastMessages?: number | false
Number of recent messages to retrieve, or false to disable retrieval.

options.memory.semanticRecall?: WithMastraSemanticRecallOptions
Semantic recall configuration (RAG-based memory retrieval).

options.memory.workingMemory?: MemoryConfig['workingMemory']
Working memory configuration (persistent user data).

options.memory.readOnly?: boolean
Read-only mode; prevents new messages from being saved.
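The memory options above can be combined in a single configuration object. The following is a minimal sketch, not a definitive setup: it assumes `LibSQLStore` from `@mastra/libsql` as the storage adapter, and the database URL, thread ID, and resource ID are illustrative placeholders.

```typescript
import { openai } from '@ai-sdk/openai';
import { withMastra } from '@mastra/ai-sdk';
import { LibSQLStore } from '@mastra/libsql'; // assumed storage adapter

const model = withMastra(openai('gpt-4o'), {
  memory: {
    storage: new LibSQLStore({ url: 'file:./memory.db' }), // placeholder URL
    threadId: 'thread-123', // conversation to persist into (placeholder)
    resourceId: 'user-456', // user/session identifier (placeholder)
    lastMessages: 10, // retrieve the 10 most recent messages
    readOnly: false, // allow new messages to be saved
  },
});
```

With this configuration, each call through the wrapped model retrieves recent history for `thread-123` before the LLM call and persists the new exchange afterward.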

Returns

A wrapped model compatible with `generateText`, `streamText`, `generateObject`, and `streamObject`.
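Output processors run on the wrapped model's response before it reaches the caller. As a hedged illustration, here is a processor that redacts email addresses; the `processOutputResult` hook name and the message shape are assumptions made by analogy with `processInput` above, so check the `Processor` interface in `@mastra/core/processors` before relying on them. The redaction helper itself is plain TypeScript.

```typescript
// Redact email addresses from a string. Pure helper, no Mastra types needed.
const redactEmails = (text: string): string =>
  text.replace(/[\w.+-]+@[\w-]+\.[\w.-]+/g, '[redacted]');

// Hedged sketch of an output processor built on the helper. The hook name
// `processOutputResult` and the `{ messages }` shape are assumptions, not
// confirmed API; adapt them to the actual Processor interface.
const redactionProcessor = {
  id: 'redact-emails',
  async processOutputResult({ messages }: { messages: Array<{ content: string }> }) {
    return messages.map((m) => ({ ...m, content: redactEmails(m.content) }));
  },
};

console.log(redactEmails('Reach me at alice@example.com for details.'));
// 'Reach me at [redacted] for details.'
```

A processor like this would be passed via `outputProcessors: [redactionProcessor]` in the options object, mirroring the `inputProcessors` usage shown earlier.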
