# Agent.streamLegacy() (Legacy)

> **Warning:** **Deprecated**: This method is deprecated and only works with V1 models. For V2 models, use the new [`.stream()`](https://mastra.ai/reference/streaming/agents/stream) method instead. See the [migration guide](https://mastra.ai/guides/migrations/vnext-to-standard-apis) for details on upgrading.

The `.streamLegacy()` method is the legacy version of the agent streaming API, used for real-time streaming of responses from V1 model agents. It accepts messages and optional streaming options.

## Usage example

```typescript
await agent.streamLegacy('message for agent')
```

## Parameters

**messages:** (`string | string[] | CoreMessage[] | AiMessageType[] | UIMessageWithMetadata[]`): The messages to send to the agent. Can be a single string, an array of strings, or structured message objects.

**options?:** (`AgentStreamOptions`): Optional configuration for the streaming process.

### Options parameters

**abortSignal?:** (`AbortSignal`): Signal object that allows you to abort the agent's execution. When the signal is aborted, all ongoing operations are terminated.

**context?:** (`CoreMessage[]`): Additional context messages to provide to the agent.

**experimental_output?:** (`Zod schema | JsonSchema7`): Enables structured output generation alongside text generation and tool calls. The model generates responses that conform to the provided schema.

**instructions?:** (`string`): Custom instructions that override the agent's default instructions for this specific generation. Useful for dynamically modifying agent behavior without creating a new agent instance.

**output?:** (`Zod schema | JsonSchema7`): Defines the expected structure of the output. Can be a JSON Schema object or a Zod schema.
**memory?:** (`object`): Memory configuration for the conversation.

- `thread` (`string | { id: string; metadata?: Record<string, unknown>; title?: string }`): The conversation thread, as a string ID or an object with an `id` and optional `metadata`.
- `resource` (`string`): Identifier for the user or resource associated with the thread.
- `options?` (`MemoryConfig`): Configuration for memory behavior, like message history and semantic recall.

**maxSteps?:** (`number`): Maximum number of execution steps allowed. (Default: `5`)

**maxRetries?:** (`number`): Maximum number of retries. Set to 0 to disable retries. (Default: `2`)

**memoryOptions?:** (`MemoryConfig`): Configuration options for memory management.

- `lastMessages?` (`number | false`): Number of recent messages to include in context, or `false` to disable.
- `semanticRecall?` (`boolean | { topK: number; messageRange: number | { before: number; after: number }; scope?: 'thread' | 'resource' }`): Enable semantic recall to find relevant past messages. Can be a boolean or a detailed configuration.
- `workingMemory?` (`WorkingMemory`): Configuration for working memory functionality.
- `threads?` (`{ generateTitle?: boolean | { model: DynamicArgument<MastraLanguageModel>; instructions?: DynamicArgument<string> } }`): Thread-specific configuration, including automatic title generation.

**onFinish?:** (`StreamTextOnFinishCallback | StreamObjectOnFinishCallback`): Callback function called when streaming completes. Receives the final result.

**onStepFinish?:** (`StreamTextOnStepFinishCallback | never`): Callback function called after each execution step. Receives step details as a JSON string. Unavailable for structured output.

**resourceId?:** (`string`): **Deprecated.** Use `memory.resource` instead. Identifier for the user or resource interacting with the agent. Must be provided if `threadId` is provided.

**telemetry?:** (`TelemetrySettings`): Settings for telemetry collection during streaming.

- `isEnabled?` (`boolean`): Enable or disable telemetry. Disabled by default while experimental.
- `recordInputs?` (`boolean`): Enable or disable input recording. Enabled by default. You might want to disable input recording to avoid recording sensitive information.
- `recordOutputs?` (`boolean`): Enable or disable output recording. Enabled by default. You might want to disable output recording to avoid recording sensitive information.
- `functionId?` (`string`): Identifier for this function. Used to group telemetry data by function.

**temperature?:** (`number`): Controls randomness in the model's output. Higher values (e.g., 0.8) make the output more random; lower values (e.g., 0.2) make it more focused and deterministic.

**threadId?:** (`string`): **Deprecated.** Use `memory.thread` instead. Identifier for the conversation thread. Allows maintaining context across multiple interactions. Must be provided if `resourceId` is provided.

**toolChoice?:** (`'auto' | 'none' | 'required' | { type: 'tool'; toolName: string }`): Controls how the agent uses tools during streaming. (Default: `'auto'`)

- `'auto'`: Let the model decide whether to use tools (default).
- `'none'`: Do not use any tools.
- `'required'`: Require the model to use at least one tool.
- `{ type: 'tool'; toolName: string }`: Require the model to use a specific tool by name.

**toolsets?:** (`ToolsetsInput`): Additional toolsets to make available to the agent during streaming.

**clientTools?:** (`ToolsInput`): Tools that are executed on the 'client' side of the request. These tools do not have execute functions in their definitions.

**savePerStep?:** (`boolean`): Save messages incrementally after each stream step completes. (Default: `false`)

**providerOptions?:** (`Record<string, Record<string, any>>`): Provider-specific options passed through to the underlying model.

- `openai?` (`Record<string, any>`): OpenAI-specific options. Example: `{ reasoningEffort: 'high' }`
- `anthropic?` (`Record<string, any>`): Anthropic-specific options. Example: `{ maxTokens: 1000 }`
- `google?` (`Record<string, any>`): Google-specific options. Example: `{ safetySettings: [...] }`
- `[providerName]?` (`Record<string, any>`): Other provider-specific options. The key is the provider name and the value is a record of provider-specific options.

**runId?:** (`string`): Unique ID for this generation run. Useful for tracking and debugging purposes.
**requestContext?:** (`RequestContext`): Request context for dependency injection and contextual information.

**maxTokens?:** (`number`): Maximum number of tokens to generate.

**topP?:** (`number`): Nucleus sampling. This is a number between 0 and 1. It is recommended to set either `temperature` or `topP`, but not both.

**topK?:** (`number`): Only sample from the top K options for each subsequent token. Used to remove 'long tail' low-probability responses.

**presencePenalty?:** (`number`): Presence penalty setting. Affects how likely the model is to repeat information that is already in the prompt. A number between -1 (increase repetition) and 1 (maximum penalty, decrease repetition).

**frequencyPenalty?:** (`number`): Frequency penalty setting. Affects how likely the model is to repeatedly use the same words or phrases. A number between -1 (increase repetition) and 1 (maximum penalty, decrease repetition).

**stopSequences?:** (`string[]`): Stop sequences. If set, the model stops generating text when one of the stop sequences is generated.

**seed?:** (`number`): The seed (integer) to use for random sampling. If set and supported by the model, calls will generate deterministic results.

**headers?:** (`Record<string, string>`): Additional HTTP headers to be sent with the request. Only applicable for HTTP-based providers.

## Returns

**textStream?:** (`AsyncGenerator<string>`): Async generator that yields text chunks as they become available.

**fullStream?:** (`Promise<ReadableStream>`): Promise that resolves to a ReadableStream for the complete response.

**text?:** (`Promise<string>`): Promise that resolves to the complete text response.

**usage?:** (`Promise<{ totalTokens: number; promptTokens: number; completionTokens: number }>`): Promise that resolves to token usage information.

**finishReason?:** (`Promise<string>`): Promise that resolves to the reason why the stream finished.

**toolCalls?:** (`Promise<Array<{ toolName: string; args: any }>>`): Promise that resolves to the tool calls made during streaming.

- `toolName` (`string`): The name of the tool invoked.
- `args` (`any`): The arguments passed to the tool.
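The return shape above supports both incremental and whole-response consumption. A minimal self-contained sketch of the consumption pattern, using a hand-written async generator as a hypothetical stand-in for the `textStream` of a real `streamLegacy()` result (no Mastra APIs involved):

```typescript
// Hypothetical stand-in for the textStream returned by streamLegacy():
// an async generator that yields text chunks as they become available.
async function* fakeTextStream(): AsyncGenerator<string> {
  for (const chunk of ['Hello', ', ', 'agent']) {
    yield chunk
  }
}

// Consume chunks as they arrive and accumulate the full text,
// mirroring `for await (const chunk of result.textStream)`.
async function collect(stream: AsyncGenerator<string>): Promise<string> {
  let text = ''
  for await (const chunk of stream) {
    text += chunk
  }
  return text
}

collect(fakeTextStream()).then(text => console.log(text)) // logs 'Hello, agent'
```

Awaiting the `text` promise instead of iterating `textStream` gives the same final string without the intermediate chunks.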
## Extended usage example

```typescript
await agent.streamLegacy('message for agent', {
  temperature: 0.7,
  maxSteps: 3,
  memory: {
    thread: 'user-123',
    resource: 'test-app',
  },
  toolChoice: 'auto',
})
```

## Migration to New API

> **Info:** The new `.stream()` method offers enhanced capabilities including AI SDK v5+ compatibility, better structured output handling, and an improved callback system. See the [migration guide](https://mastra.ai/guides/migrations/vnext-to-standard-apis) for detailed migration instructions.

### Quick Migration Example

#### Before (Legacy)

```typescript
const result = await agent.streamLegacy('message', {
  temperature: 0.7,
  maxSteps: 3,
  onFinish: result => console.log(result),
})
```

#### After (New API)

```typescript
const result = await agent.stream('message', {
  modelSettings: {
    temperature: 0.7,
  },
  maxSteps: 3,
  onFinish: result => console.log(result),
})
```

## Related

- [Migration Guide](https://mastra.ai/guides/migrations/vnext-to-standard-apis)
- [New .stream() method](https://mastra.ai/reference/streaming/agents/stream)
- [Generating responses](https://mastra.ai/docs/agents/overview)
- [Streaming responses](https://mastra.ai/docs/agents/overview)
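As noted for the `abortSignal` option above, cancellation follows standard `AbortController` semantics: once the signal is aborted, ongoing work stops. A self-contained sketch of that contract (the looping worker is a hypothetical stand-in for an in-flight stream, not a Mastra API):

```typescript
// Hypothetical long-running worker that honours an AbortSignal,
// standing in for an in-flight streamLegacy() call.
async function streamUntilAborted(signal: AbortSignal): Promise<number> {
  let steps = 0
  while (!signal.aborted) {
    steps += 1
    if (steps >= 3) break // pretend the stream finished after 3 chunks
    await Promise.resolve() // yield to the event loop between chunks
  }
  return steps
}

const controller = new AbortController()
controller.abort() // abort before any work happens

streamUntilAborted(controller.signal).then(steps => console.log(steps)) // logs 0
```

Passing `controller.signal` as `abortSignal` and calling `controller.abort()` mid-stream follows the same pattern.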