# MastraModelOutput

The `MastraModelOutput` class is returned by [.stream()](https://mastra.ai/reference/streaming/agents/stream) and provides both streaming and promise-based access to model outputs. It supports structured output generation, tool calls, reasoning, and comprehensive usage tracking.

```typescript
// MastraModelOutput is returned by agent.stream()
const stream = await agent.stream('Hello world')
```

For setup and basic usage, see the [.stream()](https://mastra.ai/reference/streaming/agents/stream) method documentation.

## Streaming Properties

These properties provide real-time access to model outputs as they're generated:

**fullStream:** (`ReadableStream<ChunkType>`): Stream of all possible chunk types that can be emitted during streaming.

**textStream:** (`ReadableStream<string>`): Stream of incremental text content only. Filters out all metadata, tool calls, and control chunks to provide just the text being generated.

**objectStream:** (`ReadableStream<PartialSchemaOutput<OUTPUT>>`): Stream of partially completed objects matching the defined schema.

**elementStream:** (`ReadableStream`): Stream of individual array elements when the output schema defines an array type. Each element is emitted as it's completed rather than waiting for the entire array.

## Promise-based Properties

These properties resolve to final values after the stream completes:

**text:** (`Promise<string>`): The complete concatenated text response from the model. Resolves when text generation is finished.

**object:** (`Promise<InferSchemaOutput<OUTPUT>>`): Fully typed object matching the exact schema definition.

**reasoning:** (`Promise<string>`): Complete reasoning text for models that support reasoning (like OpenAI's o1 series). Returns an empty string for models without reasoning capability.

**reasoningText:** (`Promise<string | undefined>`): Alternative access to reasoning content. May be `undefined` for models that don't support reasoning, while `reasoning` returns an empty string.
**toolCalls:** (`Promise<ToolCallChunk[]>`): All tool call chunks made during the stream. Each `ToolCallChunk` contains:

- `type`: `'tool-call'` - Chunk type identifier
- `runId`: `string` - Execution run identifier
- `from`: `ChunkFrom` - Source of the chunk (AGENT, WORKFLOW, etc.)
- `payload`: `ToolCallPayload` - Tool call data including toolCallId, toolName, args, and execution details

**toolResults:** (`Promise<ToolResultChunk[]>`): All tool result chunks. Each `ToolResultChunk` contains:

- `type`: `'tool-result'` - Chunk type identifier
- `runId`: `string` - Execution run identifier
- `from`: `ChunkFrom` - Source of the chunk (AGENT, WORKFLOW, etc.)
- `payload`: `ToolResultPayload` - Tool result data including toolCallId, toolName, result, and error status

**usage:** (`Promise<Record<string, number>>`): Token usage statistics:

- `inputTokens`: `number` - Tokens consumed by the input prompt
- `outputTokens`: `number` - Tokens generated in the response
- `totalTokens`: `number` - Sum of input and output tokens
- `reasoningTokens?`: `number` - Hidden reasoning tokens (for reasoning models)
- `cachedInputTokens?`: `number` - Number of input tokens that were a cache hit

**finishReason:** (`Promise<string | undefined>`): Why generation stopped:

- `'stop'` - Model finished naturally
- `'length'` - Hit maximum token limit
- `'tool_calls'` - Model called tools
- `'content_filter'` - Content was filtered

**response:** (`Promise<Response>`): Response metadata and messages from the model provider:

- `id?`: `string` - Response ID from the model provider
- `timestamp?`: `Date` - Response timestamp
- `modelId?`: `string` - Model identifier used for this response
- `headers?`: `Record<string, string>` - Response headers from the model provider
- `messages?`: `ResponseMessage[]` - Response messages in model format
- `uiMessages?`: `UIMessage[]` - Response messages in UI format, includes any metadata added by output processors

## Error Properties

**error:** (`string | Error | { message: string; stack: string; } | undefined`): Error information if the stream encountered an error. Undefined if no errors occurred. Can be a string message, an Error object, or a serialized error with stack trace.
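Because `error` can surface in any of these three shapes, logging code typically normalizes it to a single message first. A minimal sketch, assuming only the union above; the `errorMessage` helper is illustrative and not part of Mastra:

```typescript
// The union resolved by `stream.error` (see Error Properties above).
type StreamError = string | Error | { message: string; stack: string } | undefined

// Hypothetical helper: collapse the three error shapes into one message.
function errorMessage(err: StreamError): string | undefined {
  if (err === undefined) return undefined // no error occurred
  if (typeof err === 'string') return err // plain string message
  return err.message // Error object or serialized error
}
```

With this in place, `console.error(errorMessage(stream.error))` logs consistently whichever shape the stream produced.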
## Methods

**getFullOutput:** (`() => Promise<FullOutput>`): Returns the complete output in a single call. `FullOutput` contains:

- `text`: `string` - Complete text response
- `object?`: `OUTPUT` - Structured output if schema was provided
- `toolCalls`: `ToolCallChunk[]` - All tool call chunks made
- `toolResults`: `ToolResultChunk[]` - All tool result chunks
- `usage`: `Record<string, number>` - Token usage statistics
- `reasoning?`: `string` - Reasoning text if available
- `finishReason?`: `string` - Why generation finished
- `response`: `Response` - Response metadata and messages from the model provider

**consumeStream:** (`(options?: ConsumeStreamOptions) => Promise<void>`): `ConsumeStreamOptions` contains:

- `onError?`: `(error: Error) => void` - Callback for handling stream errors

## Usage Examples

### Basic Text Streaming

```typescript
const stream = await agent.stream('Write a haiku')

// Stream text as it's generated
for await (const text of stream.textStream) {
  process.stdout.write(text)
}

// Or get the complete text
const fullText = await stream.text
console.log(fullText)
```

### Structured Output Streaming

```typescript
const stream = await agent.stream('Generate user data', {
  structuredOutput: {
    schema: z.object({
      name: z.string(),
      age: z.number(),
      email: z.string(),
    }),
  },
})

// Stream partial objects
for await (const partial of stream.objectStream) {
  console.log('Progress:', partial) // { name: "John" }, { name: "John", age: 30 }, ...
}

// Get final validated object
const user = await stream.object
console.log('Final:', user) // { name: "John", age: 30, email: "john@example.com" }
```

### Tool Calls and Results

```typescript
const stream = await agent.stream("What's the weather in NYC?", {
  tools: { weather: weatherTool },
})

// Monitor tool calls
const toolCalls = await stream.toolCalls
const toolResults = await stream.toolResults

console.log('Tools called:', toolCalls)
console.log('Results:', toolResults)
```

### Complete Output Access

```typescript
const stream = await agent.stream('Analyze this data')

const output = await stream.getFullOutput()

console.log({
  text: output.text,
  usage: output.usage,
  reasoning: output.reasoning,
  finishReason: output.finishReason,
})
```

### Full Stream Processing

```typescript
const stream = await agent.stream('Complex task')

for await (const chunk of stream.fullStream) {
  switch (chunk.type) {
    case 'text-delta':
      process.stdout.write(chunk.payload.text)
      break
    case 'tool-call':
      console.log(`Calling ${chunk.payload.toolName}...`)
      break
    case 'reasoning-delta':
      console.log(`Reasoning: ${chunk.payload.text}`)
      break
    case 'finish':
      console.log(`Done!
Reason: ${chunk.payload.stepResult.reason}`)
      // Access response messages with any metadata added by output processors
      const uiMessages = chunk.payload.response?.uiMessages
      if (uiMessages) {
        console.log('Response messages:', uiMessages)
      }
      break
  }
}
```

### Error Handling

```typescript
const stream = await agent.stream('Analyze this data')

try {
  // Option 1: Handle errors in consumeStream
  await stream.consumeStream({
    onError: error => {
      console.error('Stream error:', error)
    },
  })
  const result = await stream.text
  console.log('Result:', result)
} catch (error) {
  console.error('Failed to get result:', error)
}

// Option 2: Check the error property
const output = await stream.getFullOutput()
if (stream.error) {
  console.error('Stream had errors:', stream.error)
}
```

## Related Types

- [.stream()](https://mastra.ai/reference/streaming/agents/stream) - Method that returns MastraModelOutput
- [ChunkType](https://mastra.ai/reference/streaming/ChunkType) - All possible chunk types in the full stream
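As a final example, the record resolved by `await stream.usage` folds naturally into logs. A sketch, assuming only the token fields documented under `usage` above; the `summarizeUsage` helper is illustrative and not part of Mastra:

```typescript
// Shape of the record resolved by `await stream.usage` (fields documented above).
interface Usage {
  inputTokens: number
  outputTokens: number
  totalTokens: number
  reasoningTokens?: number // reasoning models only
  cachedInputTokens?: number // cache hits, when reported
}

// Hypothetical helper: render a one-line usage summary for logging.
function summarizeUsage(u: Usage): string {
  const cached = u.cachedInputTokens ? `, ${u.cachedInputTokens} cached` : ''
  return `${u.totalTokens} tokens (${u.inputTokens} in, ${u.outputTokens} out${cached})`
}

console.log(summarizeUsage({ inputTokens: 12, outputTokens: 30, totalTokens: 42 }))
// → "42 tokens (12 in, 30 out)"
```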