Agent.generateVNext() (Experimental)
Experimental Feature
This is a new generation method with support for flexible output formats (including AI SDK v5). It will eventually replace `generate()` and may change based on community feedback.
The `.generateVNext()` method enables non-streaming response generation from an agent, with enhanced capabilities and flexible output formats. It accepts messages and optional generation options, supporting both Mastra’s native format and AI SDK v5 compatibility.
Usage example
// Default Mastra format
const mastraResult = await agent.generateVNext("message for agent");
// AI SDK v5 compatible format
const aiSdkResult = await agent.generateVNext("message for agent", {
  format: 'aisdk'
});
Parameters
messages:
string | string[] | CoreMessage[] | AiMessageType[] | UIMessageWithMetadata[]
The messages to send to the agent. Can be a single string, array of strings, or structured message objects.
options?:
AgentExecutionOptions<Output, StructuredOutput, Format>
Optional configuration for the generation process.
Options
format?:
'mastra' | 'aisdk'
= 'mastra'
Determines the output format. Use 'mastra' for Mastra's native format (default) or 'aisdk' for AI SDK v5 compatibility.
maxSteps?:
number
Maximum number of steps to run during execution.
scorers?:
MastraScorers | Record<string, { scorer: MastraScorer['name']; sampling?: ScoringSamplingConfig }>
Evaluation scorers to run on the execution results.
scorer:
string
Name of the scorer to use.
sampling?:
ScoringSamplingConfig
Sampling configuration for the scorer.
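As a sketch, the `scorers` record maps a key to a scorer name plus optional sampling. The scorer name `"relevance"` and the `{ type: "ratio", rate }` sampling shape below are assumptions for illustration, not guaranteed API:

```typescript
// Hypothetical scorers config: "relevance" is an assumed scorer name and the
// sampling shape ({ type: "ratio", rate }) is an assumption for illustration.
const scorers = {
  relevance: {
    scorer: "relevance",                    // name of a registered scorer
    sampling: { type: "ratio", rate: 0.5 }, // score roughly half of the runs
  },
};

// Typically passed alongside returnScorerData to get scores on the result:
// await agent.generateVNext("message", { scorers, returnScorerData: true });
console.log(Object.keys(scorers)); // → [ 'relevance' ]
```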
tracingContext?:
TracingContext
AI tracing context for span hierarchy and metadata.
returnScorerData?:
boolean
Whether to return detailed scoring data in the response.
onChunk?:
(chunk: ChunkType) => Promise<void> | void
Callback function called for each chunk during generation.
onError?:
({ error }: { error: Error | string }) => Promise<void> | void
Callback function called when an error occurs during generation.
onAbort?:
(event: any) => Promise<void> | void
Callback function called when the generation is aborted.
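A minimal sketch wiring the three callbacks together. The chunk and event shapes here are stand-ins, since the full `ChunkType` union varies by chunk kind:

```typescript
// Sketch of the three generation callbacks. The handlers are plain functions;
// the chunk/event shapes below are stand-ins, not the full ChunkType union.
const received: string[] = [];

const callbacks = {
  onChunk: (chunk: { type: string }) => {
    received.push(chunk.type); // e.g. collect chunk types as they arrive
  },
  onError: ({ error }: { error: Error | string }) => {
    console.error("generation failed:", error);
  },
  onAbort: () => {
    console.warn("generation aborted");
  },
};

// await agent.generateVNext("message", callbacks);
callbacks.onChunk({ type: "text-delta" }); // simulate one chunk arriving
```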
activeTools?:
Array<keyof ToolSet> | undefined
Array of tool names that should be active during execution. If undefined, all available tools are active.
abortSignal?:
AbortSignal
Signal object that allows you to abort the agent's execution. When the signal is aborted, all ongoing operations will be terminated.
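A sketch of cancellation with a standard `AbortController`; the generation call itself is shown commented out since it requires a configured agent:

```typescript
// Sketch: abort the agent's execution, either manually or on a deadline.
const controller = new AbortController();

// Manual cancellation, e.g. from a UI "stop" button:
// stopButton.onclick = () => controller.abort();

// The signal is passed through the options:
// await agent.generateVNext("Summarize this report", {
//   abortSignal: controller.signal,
// });

// AbortSignal.timeout() is a convenient alternative for deadlines:
// abortSignal: AbortSignal.timeout(10_000) // abort after 10 seconds
console.log(controller.signal.aborted); // → false
```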
prepareStep?:
PrepareStepFunction<any>
Callback function called before each step of multi-step execution.
context?:
ModelMessage[]
Additional context messages to provide to the agent.
structuredOutput?:
StructuredOutputOptions<S extends ZodTypeAny = ZodTypeAny>
Enables structured output generation with schema validation and typed results. Automatically creates and uses a StructuredOutputProcessor internally.
schema:
z.ZodSchema<S>
Zod schema defining the expected output structure.
model?:
MastraLanguageModel
Language model to use for structured output generation. If not provided, uses the agent's default model.
errorStrategy?:
'strict' | 'warn' | 'fallback'
Strategy for handling schema validation errors. 'strict' throws errors, 'warn' logs warnings, 'fallback' uses fallback values.
fallbackValue?:
z.infer<S>
Fallback value to use when schema validation fails and errorStrategy is 'fallback'.
instructions?:
string
Additional instructions for structured output generation.
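A sketch of a `structuredOutput` configuration using Zod. The schema, fallback value, and instructions are illustrative; with `errorStrategy: 'fallback'`, the fallback value is used when the model's output fails validation:

```typescript
import { z } from "zod";

// Sketch of the structuredOutput option. The schema and fallback value are
// illustrative; errorStrategy "fallback" substitutes fallbackValue when the
// model's output fails schema validation.
const structuredOutput = {
  schema: z.object({
    sentiment: z.enum(["positive", "negative", "neutral"]),
    confidence: z.number().min(0).max(1),
  }),
  errorStrategy: "fallback" as const,
  fallbackValue: { sentiment: "neutral" as const, confidence: 0 },
  instructions: "Classify the sentiment of the user's message.",
};

// const result = await agent.generateVNext("I love this product!", { structuredOutput });
// The structured result is typed from the schema when validation succeeds.
```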
outputProcessors?:
Processor[]
Output processors to use for this execution (overrides agent's default).
inputProcessors?:
Processor[]
Input processors to use for this execution (overrides agent's default).
instructions?:
string
Custom instructions that override the agent's default instructions for this execution.
output?:
Zod schema | JsonSchema7
Schema for structured output generation (Zod schema or JSON Schema).
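Since `output` also accepts plain JSON Schema, a dependency-free sketch is possible; the field names here are illustrative:

```typescript
// Sketch: the `output` option accepts plain JSON Schema (JsonSchema7) as an
// alternative to a Zod schema. Field names here are illustrative.
const output = {
  type: "object",
  properties: {
    title: { type: "string" },
    tags: { type: "array", items: { type: "string" } },
  },
  required: ["title"],
  additionalProperties: false,
};

// const result = await agent.generateVNext("Summarize and tag this article", { output });
console.log(output.required); // → [ 'title' ]
```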
memory?:
object
Memory configuration for conversation persistence and retrieval.
thread:
string | { id: string; metadata?: Record<string, any>; title?: string }
Thread identifier for conversation continuity. Can be a string ID or an object with ID and optional metadata/title.
resource:
string
Resource identifier for organizing conversations by user, session, or context.
options?:
MemoryConfig
Additional memory configuration options for conversation management.
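A sketch of the `memory` option: a thread for this conversation, scoped to a resource such as a user id. The ids and metadata are illustrative:

```typescript
// Sketch of the memory option. The thread holds this conversation; the
// resource scopes it (e.g. to a user). Ids and metadata are illustrative.
const memory = {
  thread: {
    id: "support-thread-123",
    title: "Billing question",
    metadata: { channel: "web" },
  },
  resource: "user-456",
};

// await agent.generateVNext("What was my last question?", { memory });
console.log(memory.thread.id); // → support-thread-123
```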
onFinish?:
StreamTextOnFinishCallback<any> | StreamObjectOnFinishCallback<OUTPUT>
Callback fired when generation completes. Type varies by format.
onStepFinish?:
StreamTextOnStepFinishCallback<any> | never
Callback fired after each generation step. Type varies by format.
resourceId?:
string
Deprecated. Use memory.resource instead. Identifier for the resource/user.
telemetry?:
TelemetrySettings
Telemetry collection settings for observability.
isEnabled?:
boolean
Whether telemetry collection is enabled.
recordInputs?:
boolean
Whether to record input data in telemetry.
recordOutputs?:
boolean
Whether to record output data in telemetry.
functionId?:
string
Identifier for the function being executed.
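A sketch of the telemetry settings, enabling collection while redacting raw inputs; the `functionId` value is illustrative:

```typescript
// Sketch of the telemetry settings: collect traces but redact raw inputs.
// The functionId value is illustrative.
const telemetry = {
  isEnabled: true,
  recordInputs: false, // do not store prompts (e.g. for privacy)
  recordOutputs: true,
  functionId: "support-agent.generateVNext",
};

// await agent.generateVNext("message", { telemetry });
```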
modelSettings?:
CallSettings
Model-specific settings like temperature, topP, etc.
temperature?:
number
Controls randomness in generation (0-2). Higher values make output more random.
maxRetries?:
number
Maximum number of retry attempts for failed requests.
topP?:
number
Nucleus sampling parameter (0-1). Controls diversity of generated text.
topK?:
number
Top-k sampling parameter. Limits vocabulary to k most likely tokens.
presencePenalty?:
number
Penalty for token presence (-2 to 2). Reduces repetition.
frequencyPenalty?:
number
Penalty for token frequency (-2 to 2). Reduces repetition of frequent tokens.
stopSequences?:
string[]
Array of strings that will stop generation when encountered.
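The settings above combine into a single `modelSettings` object. A sketch with conservative sampling; the values are illustrative, and providers may support only a subset of these knobs:

```typescript
// Sketch of modelSettings: conservative sampling for more deterministic
// answers. Values are illustrative; providers accept varying subsets.
const modelSettings = {
  temperature: 0.2,           // low randomness
  topP: 0.9,                  // nucleus sampling cutoff
  maxRetries: 2,              // retry transient provider failures
  stopSequences: ["\n\n###"], // stop when the model emits this marker
};

// await agent.generateVNext("message", { modelSettings });
```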
threadId?:
string
Deprecated. Use memory.thread instead. Thread identifier for conversation continuity.
toolChoice?:
'auto' | 'none' | 'required' | { type: 'tool'; toolName: string }
Controls how tools are selected during generation.
'auto':
string
Let the model decide when to use tools (default).
'none':
string
Disable tool usage entirely.
'required':
string
Force the model to use at least one tool.
{ type: 'tool'; toolName: string }:
object
Force the model to use a specific tool.
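The four `toolChoice` forms can be sketched as option objects; the tool name `"searchDocs"` is an assumed tool registered on the agent:

```typescript
// Sketch of the four toolChoice forms. "searchDocs" is an assumed tool name.
const auto = { toolChoice: "auto" as const };         // model decides (default)
const none = { toolChoice: "none" as const };         // never call tools
const required = { toolChoice: "required" as const }; // must call some tool
const specific = {
  toolChoice: { type: "tool" as const, toolName: "searchDocs" },
};

// await agent.generateVNext("Find the refund policy", specific);
console.log(specific.toolChoice.toolName); // → searchDocs
```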
toolsets?:
ToolsetsInput
Additional tool sets that can be used for this execution.
clientTools?:
ToolsInput
Client-side tools available during execution.
savePerStep?:
boolean
Save messages incrementally after each generation step completes (default: false).
providerOptions?:
Record<string, Record<string, JSONValue>>
Provider-specific options passed to the language model.
openai?:
Record<string, JSONValue>
OpenAI-specific options like reasoningEffort, responseFormat, etc.
anthropic?:
Record<string, JSONValue>
Anthropic-specific options like maxTokens, etc.
google?:
Record<string, JSONValue>
Google-specific options.
[providerName]?:
Record<string, JSONValue>
Any provider-specific options.
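A sketch of `providerOptions` as a per-provider passthrough; the keys and values below are assumptions drawn from common provider knobs, not a guaranteed list:

```typescript
// Sketch of providerOptions: provider-specific passthrough settings. The
// keys and values below are assumptions for illustration.
const providerOptions = {
  openai: { reasoningEffort: "low" },
  anthropic: { maxTokens: 1024 },
};

// await agent.generateVNext("message", { providerOptions });
console.log(Object.keys(providerOptions)); // → [ 'openai', 'anthropic' ]
```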
runId?:
string
Unique identifier for this execution run.
runtimeContext?:
RuntimeContext
Runtime context containing dynamic configuration and state.
stopWhen?:
StopCondition | StopCondition[]
Conditions for stopping execution (e.g., step count, token limit).
Returns
result:
Awaited<ReturnType<MastraModelOutput<Output>['getFullOutput']>> | Awaited<ReturnType<AISDKV5OutputStream<Output>['getFullOutput']>>
Returns the full output of the generation process. When format is 'mastra' (default), returns MastraModelOutput result. When format is 'aisdk', returns AISDKV5OutputStream result for AI SDK v5 compatibility.