withMastra()

Wraps an AI SDK model with Mastra processors and/or memory.

Usage example

src/example.ts
import { openai } from '@ai-sdk/openai'
import { generateText } from 'ai'
import { withMastra } from '@mastra/ai-sdk'
import type { Processor } from '@mastra/core/processors'

const loggingProcessor: Processor<'logger'> = {
  id: 'logger',
  async processInput({ messages }) {
    console.log('Input:', messages.length, 'messages')
    return messages
  },
}

const model = withMastra(openai('gpt-4o'), {
  inputProcessors: [loggingProcessor],
})

const { text } = await generateText({
  model,
  prompt: 'What is 2 + 2?',
})
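Output processors follow the same shape. As a sketch, here is a hypothetical redaction processor; the `processOutputResult` hook name appears on this page, but its exact payload shape is an assumption for illustration:

```typescript
// Sketch of an output processor, mirroring the input processor above.
// The payload shape ({ messages }) is assumed, not confirmed by this page.
const redactingProcessor = {
  id: 'redactor',
  async processOutputResult({ messages }: { messages: Array<{ role: string; content: string }> }) {
    // Mask anything that looks like an email address in the model output
    return messages.map((message) => ({
      ...message,
      content: message.content.replace(/[\w.+-]+@[\w-]+\.[\w.]+/g, '[redacted]'),
    }))
  },
}

// Wired in the same way as input processors:
// const model = withMastra(openai('gpt-4o'), { outputProcessors: [redactingProcessor] })
```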

Parameters

model: LanguageModelV2
Any AI SDK language model (e.g., `openai('gpt-4o')`, `anthropic('claude-3-opus')`).

options?: WithMastraOptions
Configuration object for processors and memory.
WithMastraOptions

inputProcessors?: InputProcessor[]
Input processors to run before the LLM call.

outputProcessors?: OutputProcessor[]
Output processors to run on the LLM response.

memory?: WithMastraMemoryOptions
Memory configuration; enables automatic message history persistence.
WithMastraMemoryOptions

storage: MemoryStorage
Memory storage domain for message persistence. Get it from a composite store using `await storage.getStore('memory')`.

threadId: string
Thread ID for conversation persistence.

resourceId?: string
Resource ID (user/session identifier).

lastMessages?: number | false
Number of recent messages to retrieve, or `false` to disable.

semanticRecall?: WithMastraSemanticRecallOptions
Semantic recall configuration (RAG-based memory retrieval).

workingMemory?: MemoryConfig['workingMemory']
Working memory configuration (persistent user data).

readOnly?: boolean
Read-only mode; prevents saving new messages.
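Putting the memory options together, a minimal sketch might look like the following. `compositeStore` and the IDs are placeholders for values your app configures elsewhere; only the `getStore('memory')` call comes from this page:

```typescript
import { openai } from '@ai-sdk/openai'
import { withMastra } from '@mastra/ai-sdk'

// Placeholder for a composite store configured elsewhere in your app.
declare const compositeStore: { getStore(domain: 'memory'): Promise<unknown> }

const model = withMastra(openai('gpt-4o'), {
  memory: {
    storage: await compositeStore.getStore('memory'), // MemoryStorage domain
    threadId: 'thread-123',   // conversation to persist into
    resourceId: 'user-456',   // user/session identifier
    lastMessages: 10,         // load the 10 most recent messages
  },
})
```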

Returns

A wrapped model compatible with generateText, streamText, generateObject, and streamObject.

Streaming behavior

Output processors that implement `processOutputResult` run only after the stream finishes. Always consume the stream to completion; message history persistence and semantic recall do not run until the stream has ended.
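The requirement above can be sketched as a small drain helper. Here `textStream` stands in for the async iterable that `streamText` returns when called with a wrapped model:

```typescript
// Fully consume a text stream so that post-stream hooks
// (processOutputResult processors, memory persistence) get to run.
// With a real model: const { textStream } = streamText({ model, prompt: '...' })
async function drainTextStream(textStream: AsyncIterable<string>): Promise<string> {
  let fullText = ''
  for await (const chunk of textStream) {
    fullText += chunk // forward chunks to the client as they arrive
  }
  // Only at this point, with the stream fully consumed, do result-level
  // output processors and message persistence run.
  return fullText
}
```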
