Agent.generateVNext() (Experimental)
Experimental Feature
This is a new generation method with support for flexible output formats (including AI SDK v5). It will eventually replace `generate()` and may change based on community feedback.
The `.generateVNext()` method enables non-streaming response generation from an agent, with enhanced capabilities and flexible output formats. It accepts messages and optional generation options, and supports both Mastra's native format and AI SDK v5 compatibility.
Usage example
// Mastra's default format
const mastraResult = await agent.generateVNext("message for agent");
// AI SDK v5 compatible format
const aiSdkResult = await agent.generateVNext("message for agent", {
  format: 'aisdk'
});
Parameters
messages:
string | string[] | CoreMessage[] | AiMessageType[] | UIMessageWithMetadata[]
Messages to send to the agent. Accepts a single string, an array of strings, or structured message objects.
options?:
AgentExecutionOptions<Output, StructuredOutput, Format>
Optional configuration for the generation process.
Options
format?:
'mastra' | 'aisdk'
= 'mastra'
Determines the output format. Use 'mastra' for Mastra's native format (default) or 'aisdk' for AI SDK v5 compatibility.
maxSteps?:
number
Maximum number of steps to run during execution.
scorers?:
MastraScorers | Record<string, { scorer: MastraScorer['name']; sampling?: ScoringSamplingConfig }>
Evaluation scorers to run on the execution results.
scorer:
string
Name of the scorer to use.
sampling?:
ScoringSamplingConfig
Sampling configuration for the scorer.
tracingContext?:
TracingContext
AI tracing context for span hierarchy and metadata.
returnScorerData?:
boolean
Whether to return detailed scoring data in the response.
onChunk?:
(chunk: ChunkType) => Promise<void> | void
Callback function called for each chunk during generation.
onError?:
({ error }: { error: Error | string }) => Promise<void> | void
Callback function called when an error occurs during generation.
onAbort?:
(event: any) => Promise<void> | void
Callback function called when the generation is aborted.
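The three callbacks above can be supplied together. A minimal sketch, where the chunk shape is deliberately simplified (the real ChunkType from Mastra is richer) and `agent` is assumed to be an already-configured Mastra agent:

```typescript
// Simplified chunk shape for illustration; an assumption, not Mastra's full ChunkType.
type SimpleChunk = { type: string; payload?: unknown };

const callbackOptions = {
  onChunk: (chunk: SimpleChunk) => {
    console.log(`received chunk: ${chunk.type}`);
  },
  onError: ({ error }: { error: Error | string }) => {
    console.error("generation failed:", error);
  },
  onAbort: () => {
    console.warn("generation was aborted");
  },
};

// const result = await agent.generateVNext("message for agent", callbackOptions);
```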
activeTools?:
Array<keyof ToolSet> | undefined
Array of tool names that should be active during execution. If undefined, all available tools are active.
abortSignal?:
AbortSignal
Signal object that allows you to abort the agent's execution. When the signal is aborted, all ongoing operations will be terminated.
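A common pattern is to pair `abortSignal` with a timeout. A sketch, assuming `agent` is an existing Mastra agent:

```typescript
// Abort the generation if it runs longer than 10 seconds.
const controller = new AbortController();
const timeout = setTimeout(() => controller.abort(), 10_000);

// const result = await agent.generateVNext("long-running task", {
//   abortSignal: controller.signal,
// });

clearTimeout(timeout);
```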
prepareStep?:
PrepareStepFunction<any>
Callback function called before each step of multi-step execution.
context?:
ModelMessage[]
Additional context messages to provide to the agent.
structuredOutput?:
StructuredOutputOptions<S extends ZodTypeAny = ZodTypeAny>
Enables structured output generation with better developer experience. Automatically creates and uses a StructuredOutputProcessor internally.
schema:
z.ZodSchema<S>
Zod schema defining the expected output structure.
model?:
MastraLanguageModel
Language model to use for structured output generation. If not provided, uses the agent's default model.
errorStrategy?:
'strict' | 'warn' | 'fallback'
Strategy for handling schema validation errors. 'strict' throws errors, 'warn' logs warnings, 'fallback' uses fallback values.
fallbackValue?:
z.infer<S>
Fallback value to use when schema validation fails and errorStrategy is 'fallback'.
instructions?:
string
Additional instructions for structured output generation.
outputProcessors?:
Processor[]
Output processors to use for this execution (overrides agent's default).
inputProcessors?:
Processor[]
Input processors to use for this execution (overrides agent's default).
instructions?:
string
Custom instructions that override the agent's default instructions for this execution.
output?:
Zod schema | JsonSchema7
**Deprecated.** Use `structuredOutput` with `maxSteps: 1` to achieve the same result. Defines the expected structure of the output. Can be a JSON Schema object or a Zod schema.
memory?:
object
Memory configuration for conversation persistence and retrieval.
thread:
string | { id: string; metadata?: Record<string, any>; title?: string }
Thread identifier for conversation continuity. Can be a string ID or an object with ID and optional metadata/title.
resource:
string
Resource identifier for organizing conversations by user, session, or context.
options?:
MemoryConfig
Additional memory configuration options for conversation management.
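A sketch of the `memory` options; the thread and resource identifiers below are hypothetical, and `agent` is assumed to have memory configured:

```typescript
// Hypothetical IDs; in practice these come from your application's session state.
const memoryOptions = {
  memory: {
    thread: { id: "thread-123", title: "Support conversation" },
    resource: "user-456",
  },
};

// const result = await agent.generateVNext("Where did we leave off?", memoryOptions);
```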
onFinish?:
StreamTextOnFinishCallback<any> | StreamObjectOnFinishCallback<OUTPUT>
Callback fired when generation completes. Type varies by format.
onStepFinish?:
StreamTextOnStepFinishCallback<any> | never
Callback fired after each generation step. Type varies by format.
resourceId?:
string
Deprecated. Use memory.resource instead. Identifier for the resource/user.
telemetry?:
TelemetrySettings
Telemetry collection settings for observability.
isEnabled?:
boolean
Whether telemetry collection is enabled.
recordInputs?:
boolean
Whether to record input data in telemetry.
recordOutputs?:
boolean
Whether to record output data in telemetry.
functionId?:
string
Identifier for the function being executed.
modelSettings?:
CallSettings
Model-specific settings like temperature, topP, etc.
temperature?:
number
Controls randomness in generation (0-2). Higher values make output more random.
maxRetries?:
number
Maximum number of retry attempts for failed requests.
topP?:
number
Nucleus sampling parameter (0-1). Controls diversity of generated text.
topK?:
number
Top-k sampling parameter. Limits vocabulary to k most likely tokens.
presencePenalty?:
number
Penalty for token presence (-2 to 2). Reduces repetition.
frequencyPenalty?:
number
Penalty for token frequency (-2 to 2). Reduces repetition of frequent tokens.
stopSequences?:
string[]
Array of strings that will stop generation when encountered.
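The `modelSettings` fields above can be combined in one object; a sketch with illustrative values:

```typescript
const options = {
  modelSettings: {
    temperature: 0.2,        // low randomness for more deterministic output
    topP: 0.9,               // nucleus sampling: keep the top 90% probability mass
    frequencyPenalty: 0.5,   // discourage repeating frequent tokens
    maxRetries: 2,           // retry failed requests up to twice
    stopSequences: ["\n\n"], // stop at the first blank line
  },
};

// const result = await agent.generateVNext("message for agent", options);
```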
threadId?:
string
Deprecated. Use memory.thread instead. Thread identifier for conversation continuity.
toolChoice?:
'auto' | 'none' | 'required' | { type: 'tool'; toolName: string }
Controls how tools are selected during generation.
'auto':
string
Let the model decide when to use tools (default).
'none':
string
Disable tool usage entirely.
'required':
string
Force the model to use at least one tool.
{ type: 'tool'; toolName: string }:
object
Force the model to use a specific tool.
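The `toolChoice` variants expressed as option objects; `weatherTool` is a hypothetical tool name used only for illustration:

```typescript
// Let the model decide when to use tools (the default):
const auto = { toolChoice: "auto" as const };

// Force a specific, hypothetical tool:
const forced = {
  toolChoice: { type: "tool" as const, toolName: "weatherTool" },
};

// const result = await agent.generateVNext("What's the weather in Tokyo?", forced);
```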
toolsets?:
ToolsetsInput
Additional tool sets that can be used for this execution.
clientTools?:
ToolsInput
Client-side tools available during execution.
savePerStep?:
boolean
Save messages incrementally after each generation step completes (default: false).
providerOptions?:
Record<string, Record<string, JSONValue>>
Provider-specific options passed to the language model.
openai?:
Record<string, JSONValue>
OpenAI-specific options like reasoningEffort, responseFormat, etc.
anthropic?:
Record<string, JSONValue>
Anthropic-specific options like maxTokens, etc.
google?:
Record<string, JSONValue>
Google-specific options.
[providerName]?:
Record<string, JSONValue>
Any provider-specific options.
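A sketch of `providerOptions`; the keys shown are illustrative examples of provider-specific settings (taken from the descriptions above), not an exhaustive or guaranteed list — consult each provider's documentation for the keys it actually accepts:

```typescript
// Illustrative provider-specific options; keys are assumptions for this sketch.
const options = {
  providerOptions: {
    openai: { reasoningEffort: "low" },
    anthropic: { maxTokens: 1024 },
  },
};

// const result = await agent.generateVNext("message for agent", options);
```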
runId?:
string
Unique identifier for this execution run.
runtimeContext?:
RuntimeContext
Runtime context containing dynamic configuration and state.
stopWhen?:
StopCondition | StopCondition[]
Conditions for stopping execution (e.g., step count, token limit).
Returns
result:
Awaited<ReturnType<MastraModelOutput<Output>['getFullOutput']>> | Awaited<ReturnType<AISDKV5OutputStream<Output>['getFullOutput']>>
Returns the complete final output of the generation. When format is 'mastra' (the default), it returns a MastraModelOutput result; when format is 'aisdk', it returns an AISDKV5OutputStream result for AI SDK v5 compatibility.