# Agent.generateLegacy() (Legacy)

> **Warning:** **Deprecated**: This method only works with V1 models. For V2 models, use the new [`.generate()`](https://mastra.ai/reference/agents/generate) method instead.

The `.generateLegacy()` method is the legacy version of the agent generation API, used to interact with V1 model agents to produce text or structured responses. It accepts messages and optional generation options.

## Usage example

```typescript
await agent.generateLegacy("message for agent");
```

## Parameters

**messages:** (`string | string[] | CoreMessage[] | AiMessageType[] | UIMessageWithMetadata[]`): The messages to send to the agent. Can be a single string, an array of strings, or structured message objects with multimodal content (text, images, etc.).

**options?:** (`AgentGenerateOptions`): Optional configuration for the generation process.

### Options parameters

**abortSignal?:** (`AbortSignal`): Signal object that allows you to abort the agent's execution. When the signal is aborted, all ongoing operations are terminated. (A short abort example follows this parameter list.)

**context?:** (`CoreMessage[]`): Additional context messages to provide to the agent.

**structuredOutput?:** (`StructuredOutputOptions`): Options for generating a structured, schema-validated response:

- **schema:** (`z.ZodSchema`): Zod schema to validate the output against.
- **model:** (`MastraLanguageModel`): Model to use for the internal structuring agent.
- **errorStrategy?:** (`'strict' | 'warn' | 'fallback'`): Strategy when parsing or validation fails. Defaults to `'strict'`.
- **fallbackValue?:** Fallback value used when `errorStrategy` is `'fallback'`.
- **instructions?:** (`string`): Custom instructions for the structuring agent.

**outputProcessors?:** (`Processor[]`): Overrides the output processors set on the agent. Output processors can modify or validate messages from the agent before they are returned to the user. Must implement either (or both) of the `processOutputResult` and `processOutputStream` functions.

**inputProcessors?:** (`Processor[]`): Overrides the input processors set on the agent. Input processors can modify or validate messages before they are processed by the agent. Must implement the `processInput` function.

**experimental_output?:** (`Zod schema | JsonSchema7`): Enables structured output generation alongside text generation and tool calls. The model will generate responses that conform to the provided schema. Note that the preferred route is the `structuredOutput` property.

**instructions?:** (`string`): Custom instructions that override the agent's default instructions for this specific generation. Useful for dynamically modifying agent behavior without creating a new agent instance.

**output?:** (`Zod schema | JsonSchema7`): Defines the expected structure of the output. Can be a JSON Schema object or a Zod schema.

**memory?:** (`object`): Memory configuration for this generation:

- **thread:** (`string | { id: string; metadata?: Record<string, any>; title?: string }`): The conversation thread, as a string ID or an object with an `id` and optional `metadata`.
- **resource:** (`string`): Identifier for the user or resource associated with the thread.
- **options?:** (`MemoryConfig`): Configuration for memory behavior, like message history and semantic recall.

**maxSteps?:** (`number`): Maximum number of execution steps allowed. (Default: `5`)

**maxRetries?:** (`number`): Maximum number of retries. Set to 0 to disable retries. (Default: `2`)

**onStepFinish?:** (`GenerateTextOnStepFinishCallback | never`): Callback function called after each execution step. Receives step details as a JSON string. Not available when using structured output. (A step-callback example follows this parameter list.)

**runId?:** (`string`): Unique ID for this generation run. Useful for tracking and debugging purposes.

**telemetry?:** (`TelemetrySettings`): Settings for telemetry collection during generation:

- **isEnabled?:** (`boolean`): Enable or disable telemetry. Disabled by default while experimental.
- **recordInputs?:** (`boolean`): Enable or disable input recording. Enabled by default. You might want to disable input recording to avoid recording sensitive information.
- **recordOutputs?:** (`boolean`): Enable or disable output recording. Enabled by default. You might want to disable output recording to avoid recording sensitive information.
- **functionId?:** (`string`): Identifier for this function. Used to group telemetry data by function.

**temperature?:** (`number`): Controls randomness in the model's output. Higher values (e.g., 0.8) make the output more random, lower values (e.g., 0.2) make it more focused and deterministic.

**toolChoice?:** (`'auto' | 'none' | 'required' | { type: 'tool'; toolName: string }`): Controls how the model uses tools. (Default: `'auto'`) (A tool-choice example follows this parameter list.)

- **`'auto'`**: Let the model decide whether to use tools (default).
- **`'none'`**: Do not use any tools.
- **`'required'`**: Require the model to use at least one tool.
- **`{ type: 'tool'; toolName: string }`**: Require the model to use a specific tool by name.

**toolsets?:** (`ToolsetsInput`): Additional toolsets to make available to the agent during generation.

**clientTools?:** (`ToolsInput`): Tools that are executed on the 'client' side of the request. These tools do not have execute functions in the definition.

**savePerStep?:** (`boolean`): Save messages incrementally after each stream step completes. (Default: `false`)

**providerOptions?:** (`Record<string, Record<string, JSONValue>>`): Provider-specific options passed through to the underlying model:

- **openai?:** (`Record<string, JSONValue>`): OpenAI-specific options. Example: `{ reasoningEffort: 'high' }`
- **anthropic?:** (`Record<string, JSONValue>`): Anthropic-specific options. Example: `{ maxTokens: 1000 }`
- **google?:** (`Record<string, JSONValue>`): Google-specific options. Example: `{ safetySettings: [...] }`
- **[providerName]?:** (`Record<string, JSONValue>`): Other provider-specific options. The key is the provider name and the value is a record of provider-specific options.

**requestContext?:** (`RequestContext`): Request context for dependency injection and contextual information.

**maxTokens?:** (`number`): Maximum number of tokens to generate.

**topP?:** (`number`): Nucleus sampling. This is a number between 0 and 1. It is recommended to set either `temperature` or `topP`, but not both.

**topK?:** (`number`): Only sample from the top K options for each subsequent token. Used to remove 'long tail' low-probability responses.

**presencePenalty?:** (`number`): Presence penalty setting. Affects the likelihood of the model repeating information that is already in the prompt. A number between -1 (increase repetition) and 1 (maximum penalty, decrease repetition).

**frequencyPenalty?:** (`number`): Frequency penalty setting. Affects the likelihood of the model repeatedly using the same words or phrases. A number between -1 (increase repetition) and 1 (maximum penalty, decrease repetition).

**stopSequences?:** (`string[]`): Stop sequences. If set, the model will stop generating text when one of the stop sequences is generated.

**seed?:** (`number`): The seed (integer) to use for random sampling. If set and supported by the model, calls will generate deterministic results.

**headers?:** (`Record<string, string>`): Additional HTTP headers to be sent with the request. Only applicable for HTTP-based providers.
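A minimal sketch of aborting a run via `abortSignal`, assuming an existing `agent` instance backed by a V1 model; the prompt and timeout value are illustrative only.

```typescript
// Minimal sketch: cancelling a long-running generation with an AbortController.
// Assumes `agent` is an existing Agent instance backed by a V1 model.
const controller = new AbortController();

// Illustrative timeout: abort the run if it takes longer than 10 seconds.
const timeout = setTimeout(() => controller.abort(), 10_000);

try {
  const result = await agent.generateLegacy("message for agent", {
    abortSignal: controller.signal,
  });
  console.log(result.text);
} catch (error) {
  // An aborted run surfaces here as a rejected promise.
  console.error("Generation aborted or failed:", error);
} finally {
  clearTimeout(timeout);
}
```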
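A minimal sketch of observing a multi-step run with `onStepFinish` and `savePerStep`, under the same assumption of an existing V1-model `agent`.

```typescript
// Minimal sketch: logging each execution step of a multi-step run.
// Assumes `agent` is an existing Agent instance backed by a V1 model.
const result = await agent.generateLegacy("message for agent", {
  maxSteps: 3,
  savePerStep: true, // persist messages incrementally after each step
  onStepFinish: (step) => {
    // Per the option description above, step details arrive as a JSON string.
    console.log("step finished:", step);
  },
});

console.log(result.text);
```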
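A minimal sketch of forcing a specific tool with the object form of `toolChoice`; the `"weatherTool"` name is hypothetical and stands in for a tool already registered on the agent.

```typescript
// Minimal sketch: requiring the model to call one specific tool by name.
// Assumes `agent` has a tool registered as "weatherTool" (hypothetical name).
const result = await agent.generateLegacy("What is the weather in Berlin?", {
  toolChoice: { type: "tool", toolName: "weatherTool" },
});

// Inspect which tools were invoked and with what arguments.
console.log(result.toolCalls);
```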
## Returns

**text?:** (`string`): The generated text response. Present when output is 'text' (no schema provided).

**object?:** (`object`): The generated structured response. Present when a schema is provided via `output`, `structuredOutput`, or `experimental_output`.

**toolCalls?:** (`Array`): The tool calls made during generation:

- **toolName:** (`string`): The name of the tool invoked.
- **args:** (`any`): The arguments passed to the tool.
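As a rough sketch of consuming these fields (again assuming an existing V1-model `agent`): with no schema the result carries `text`, while passing a schema via `output` populates `object` instead.

```typescript
import { z } from "zod";

// Plain text generation: no schema, so the result exposes `text`.
const textResult = await agent.generateLegacy("message for agent");
console.log(textResult.text);

// Structured generation: a schema via `output` populates `object` instead.
const objectResult = await agent.generateLegacy("Classify this message", {
  output: z.object({
    category: z.enum(["question", "feedback", "other"]),
  }),
});
console.log(objectResult.object?.category);
```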
## Migration to New API

> **Info:** The new `.generate()` method offers enhanced capabilities including AI SDK v5+ compatibility, better structured output handling, and improved streaming support. See the [migration guide](https://mastra.ai/guides/migrations/vnext-to-standard-apis) for detailed migration instructions.

### Quick Migration Example

#### Before (Legacy)

```typescript
const result = await agent.generateLegacy("message", {
  temperature: 0.7,
  maxSteps: 3,
});
```

#### After (New API)

```typescript
const result = await agent.generate("message", {
  modelSettings: {
    temperature: 0.7,
  },
  maxSteps: 3,
});
```

## Extended usage example

```typescript
import { z } from "zod";
import {
  ModerationProcessor,
  TokenLimiterProcessor,
} from "@mastra/core/processors";

await agent.generateLegacy(
  [
    { role: "user", content: "message for agent" },
    {
      role: "user",
      content: [
        {
          type: "text",
          text: "message for agent",
        },
        {
          type: "image",
          imageUrl: "https://example.com/image.jpg",
          mimeType: "image/jpeg",
        },
      ],
    },
  ],
  {
    temperature: 0.7,
    maxSteps: 3,
    memory: {
      thread: "user-123",
      resource: "test-app",
    },
    toolChoice: "auto",
    providerOptions: {
      openai: {
        reasoningEffort: "high",
      },
    },
    // Structured output with better DX
    structuredOutput: {
      schema: z.object({
        sentiment: z.enum(["positive", "negative", "neutral"]),
        confidence: z.number(),
      }),
      model: "openai/gpt-5.1",
      errorStrategy: "warn",
    },
    // Output processors for response validation
    outputProcessors: [
      new ModerationProcessor({ model: "openai/gpt-4.1-nano" }),
      new TokenLimiterProcessor({ maxTokens: 1000 }),
    ],
  },
);
```

## Related

- [Migration Guide](https://mastra.ai/guides/migrations/vnext-to-standard-apis)
- [New .generate() method](https://mastra.ai/reference/agents/generate)
- [Generating responses](https://mastra.ai/docs/agents/overview)
- [Streaming responses](https://mastra.ai/docs/agents/overview)