Agent.generate()
.generate()
The `.generate()` method is used to interact with an agent and produce text or structured responses. It accepts messages and an optional set of generation options.
Usage example
await agent.generate("message for agent");
Parameters
messages:
string | string[] | CoreMessage[] | AiMessageType[] | UIMessageWithMetadata[]
The messages to send to the agent. Can be a single string, an array of strings, or structured message objects containing multimodal content (text, images, etc.).
options?:
AgentGenerateOptions
Optional configuration for the generation process.
Options parameters
abortSignal?:
AbortSignal
Signal object that allows you to abort the agent's execution. When the signal is aborted, all ongoing operations will be terminated.
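For example, a timeout signal can cancel a run that takes too long (a minimal sketch; the stub below stands in for a real configured Mastra agent):

```typescript
// Stub standing in for a configured Mastra agent (illustration only)
const agent = {
  async generate(_messages: string, options?: { abortSignal?: AbortSignal }) {
    if (options?.abortSignal?.aborted) throw new Error("Aborted");
    return { text: "done" };
  },
};

// AbortSignal.timeout() aborts the run if it exceeds 10 seconds;
// an AbortController can be used instead for manual cancellation.
const result = await agent.generate("message for agent", {
  abortSignal: AbortSignal.timeout(10_000),
});
console.log(result.text);
```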
context?:
CoreMessage[]
Additional context messages to provide to the agent.
structuredOutput?:
StructuredOutputOptions<S extends ZodTypeAny = ZodTypeAny>
Enables structured output generation with better developer experience. Automatically creates and uses a StructuredOutputProcessor internally.
schema:
z.ZodSchema<S>
Zod schema to validate the output against.
model:
MastraLanguageModel
Model to use for the internal structuring agent.
errorStrategy?:
'strict' | 'warn' | 'fallback'
Strategy when parsing or validation fails. Defaults to 'strict'.
fallbackValue?:
<S extends ZodTypeAny>
Fallback value when errorStrategy is 'fallback'.
instructions?:
string
Custom instructions for the structuring agent.
outputProcessors?:
Processor[]
Overrides the output processors set on the agent. Output processors can modify or validate messages from the agent before they are returned to the user; each must implement `processOutputResult`, `processOutputStream`, or both.
inputProcessors?:
Processor[]
Overrides the input processors set on the agent. Input processors can modify or validate messages before the agent processes them; each must implement the `processInput` function.
experimental_output?:
Zod schema | JsonSchema7
Note: the preferred approach is the `structuredOutput` property. Enables structured output generation alongside text generation and tool calls. The model will generate responses that conform to the provided schema.
instructions?:
string
Custom instructions that override the agent's default instructions for this specific generation. Useful for dynamically modifying agent behavior without creating a new agent instance.
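For instance, the same agent can be steered differently per call (a sketch; the stub below stands in for a real agent and simply echoes the instructions it receives):

```typescript
// Stub standing in for a configured Mastra agent (illustration only).
// A real agent would pass the instructions to the model as its system prompt.
const agent = {
  async generate(_messages: string, options?: { instructions?: string }) {
    return { text: `[system: ${options?.instructions ?? "default"}]` };
  },
};

// Override the agent's default instructions for this call only
const result = await agent.generate("Summarize this article", {
  instructions: "Respond in exactly three bullet points.",
});
```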
output?:
Zod schema | JsonSchema7
Defines the expected structure of the output. Can be a JSON Schema object or a Zod schema.
memory?:
object
Configuration for memory. This is the preferred way to manage memory.
thread:
string | { id: string; metadata?: Record<string, any>, title?: string }
The conversation thread, as a string ID or an object with an `id` and optional `metadata` and `title`.
resource:
string
Identifier for the user or resource associated with the thread.
options?:
MemoryConfig
Configuration for memory behavior, like message history and semantic recall. See `MemoryConfig` below.
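The object form of `thread` lets you attach a title and metadata to the conversation (a sketch; the stub agent and the `lastMessages` field are illustrative assumptions — check the `MemoryConfig` docs for the exact fields):

```typescript
// Stub standing in for a configured Mastra agent (illustration only)
const agent = {
  async generate(_messages: string, options?: { memory?: unknown }) {
    return { text: "done", memoryUsed: options?.memory !== undefined };
  },
};

// Memory options using the object form of `thread` (values are illustrative)
const memoryOptions = {
  thread: {
    id: "support-thread-123",
    title: "Billing question",      // optional
    metadata: { channel: "email" }, // optional
  },
  resource: "user-456",
  options: {
    lastMessages: 10, // assumed MemoryConfig field; see `MemoryConfig` docs
  },
};

const result = await agent.generate("Hi, I have a billing question", {
  memory: memoryOptions,
});
```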
maxSteps?:
number
= 5
Maximum number of execution steps allowed.
maxRetries?:
number
= 2
Maximum number of retries. Set to 0 to disable retries.
onStepFinish?:
GenerateTextOnStepFinishCallback<any> | never
Callback function called after each execution step. Receives step details as a JSON string. Not available when using structured output.
runId?:
string
Unique ID for this generation run. Useful for tracking and debugging purposes.
telemetry?:
TelemetrySettings
Settings for telemetry collection during generation.
isEnabled?:
boolean
Enable or disable telemetry. Disabled by default while experimental.
recordInputs?:
boolean
Enable or disable input recording. Enabled by default. You might want to disable input recording to avoid recording sensitive information.
recordOutputs?:
boolean
Enable or disable output recording. Enabled by default. You might want to disable output recording to avoid recording sensitive information.
functionId?:
string
Identifier for this function. Used to group telemetry data by function.
temperature?:
number
Controls randomness in the model's output. Higher values (e.g., 0.8) make the output more random, lower values (e.g., 0.2) make it more focused and deterministic.
toolChoice?:
'auto' | 'none' | 'required' | { type: 'tool'; toolName: string }
= 'auto'
Controls how the agent uses tools during generation.
'auto':
string
Let the model decide whether to use tools (default).
'none':
string
Do not use any tools.
'required':
string
Require the model to use at least one tool.
{ type: 'tool'; toolName: string }:
object
Require the model to use a specific tool by name.
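The object form forces a specific tool by name (a sketch; `weatherTool` is a hypothetical tool name and the stub stands in for a real agent):

```typescript
// Stub standing in for a configured Mastra agent (illustration only)
const agent = {
  async generate(_messages: string, options?: { toolChoice?: unknown }) {
    return { text: "done", toolChoice: options?.toolChoice };
  },
};

// Require the model to call "weatherTool" (hypothetical tool name)
const toolChoice = { type: "tool", toolName: "weatherTool" } as const;

const result = await agent.generate("What's the weather in Tokyo?", {
  toolChoice,
});
```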
toolsets?:
ToolsetsInput
Additional toolsets to make available to the agent during generation.
clientTools?:
ToolsInput
Tools that are executed on the 'client' side of the request. These tools do not have execute functions in the definition.
savePerStep?:
boolean
Save messages incrementally after each stream step completes (default: false).
providerOptions?:
Record<string, Record<string, JSONValue>>
Additional provider-specific options that are passed through to the underlying LLM provider. The structure is `{ providerName: { optionKey: value } }`. Since Mastra extends AI SDK, see the [AI SDK documentation](https://sdk.vercel.ai/docs/providers/ai-sdk-providers) for complete provider options.
openai?:
Record<string, JSONValue>
OpenAI-specific options. Example: `{ reasoningEffort: 'high' }`
anthropic?:
Record<string, JSONValue>
Anthropic-specific options. Example: `{ maxTokens: 1000 }`
google?:
Record<string, JSONValue>
Google-specific options. Example: `{ safetySettings: [...] }`
[providerName]?:
Record<string, JSONValue>
Other provider-specific options. The key is the provider name and the value is a record of provider-specific options.
runtimeContext?:
RuntimeContext
Runtime context for dependency injection and contextual information.
maxTokens?:
number
Maximum number of tokens to generate.
topP?:
number
Nucleus sampling. This is a number between 0 and 1. It is recommended to set either `temperature` or `topP`, but not both.
topK?:
number
Only sample from the top K options for each subsequent token. Used to remove 'long tail' low probability responses.
presencePenalty?:
number
Presence penalty setting. Affects how likely the model is to repeat information that is already in the prompt. A number between -1 (increase repetition) and 1 (maximum penalty, decrease repetition).
frequencyPenalty?:
number
Frequency penalty setting. Affects how likely the model is to repeatedly use the same words or phrases. A number between -1 (increase repetition) and 1 (maximum penalty, decrease repetition).
stopSequences?:
string[]
Stop sequences. If set, the model will stop generating text when one of the stop sequences is generated.
seed?:
number
The seed (integer) to use for random sampling. If set and supported by the model, calls will generate deterministic results.
headers?:
Record<string, string | undefined>
Additional HTTP headers to be sent with the request. Only applicable for HTTP-based providers.
Returns
text?:
string
The generated text response. Returned when `output` is 'text' (no schema provided).
object?:
object
The generated structured response. Returned when a schema is provided via `output`, `structuredOutput`, or `experimental_output`.
toolCalls?:
Array<ToolCall>
The tool calls made during the generation process. Returned in both text and object modes.
toolName:
string
The name of the invoked tool.
args:
any
The arguments passed to the tool.
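Putting the return shape together, a result can be handled like this (a sketch; the stub agent returns fixed values for illustration):

```typescript
// Stub standing in for a configured Mastra agent (illustration only)
const agent = {
  async generate(_messages: string) {
    return {
      text: "The weather in Tokyo is sunny.",
      toolCalls: [{ toolName: "weatherTool", args: { city: "Tokyo" } }],
    };
  },
};

const result = await agent.generate("What's the weather in Tokyo?");
console.log(result.text);
for (const call of result.toolCalls ?? []) {
  console.log(`${call.toolName} called with`, call.args);
}
```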
Extended usage example
import { z } from "zod";
import { openai } from "@ai-sdk/openai";
import { ModerationProcessor, TokenLimiterProcessor } from "@mastra/core/processors";
await agent.generate(
[
{ role: "user", content: "message for agent" },
{
role: "user",
content: [
{
type: "text",
text: "message for agent"
},
{
type: "image",
imageUrl: "https://example.com/image.jpg",
mimeType: "image/jpeg"
}
]
}
],
{
temperature: 0.7,
maxSteps: 3,
memory: {
thread: "user-123",
resource: "test-app"
},
toolChoice: "auto",
providerOptions: {
openai: {
reasoningEffort: "high"
}
},
// Structured output for a better developer experience
structuredOutput: {
schema: z.object({
sentiment: z.enum(['positive', 'negative', 'neutral']),
confidence: z.number(),
}),
model: openai("gpt-4o-mini"),
errorStrategy: 'warn',
},
// Output processors for response validation
outputProcessors: [
new ModerationProcessor({ model: openai("gpt-4.1-nano") }),
new TokenLimiterProcessor({ maxTokens: 1000 }),
],
}
);