
MastraModelOutput

The MastraModelOutput class is returned by .stream() and provides both streaming and promise-based access to model outputs. It supports structured output generation, tool calls, reasoning, and comprehensive usage tracking.

// MastraModelOutput is returned by agent.stream()
const stream = await agent.stream('Hello world')

For setup and basic usage, see the .stream() method documentation.

Streaming Properties

These properties provide real-time access to model outputs as they're generated:

fullStream:

ReadableStream<ChunkType<OUTPUT>>
Complete stream of all chunk types including text, tool calls, reasoning, metadata, and control chunks. Provides granular access to every aspect of the model's response.
ReadableStream

ChunkType:

ChunkType<OUTPUT>
All possible chunk types that can be emitted during streaming

textStream:

ReadableStream<string>
Stream of incremental text content only. Filters out all metadata, tool calls, and control chunks to provide just the text being generated.

objectStream:

ReadableStream<Partial<OUTPUT>>
Stream of progressive structured object updates when using output schemas. Emits partial objects as they're built up, allowing real-time visualization of structured data generation.
ReadableStream

PartialSchemaOutput:

Partial<OUTPUT>
Partially completed object matching the defined schema

elementStream:

ReadableStream<OUTPUT extends (infer T)[] ? T : never>
Stream of individual array elements when the output schema defines an array type. Each element is emitted as it's completed rather than waiting for the entire array.
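As a sketch of how elementStream might be consumed, the generic helper below drains any web ReadableStream into an array using the standard reader API. The helper itself is hypothetical, not part of Mastra; the commented usage assumes a configured agent and a zod array schema.

```typescript
// Generic helper: drain a ReadableStream into an array.
// `stream.elementStream` is one possible source; any web ReadableStream works.
async function collectElements<T>(readable: ReadableStream<T>): Promise<T[]> {
  const reader = readable.getReader()
  const elements: T[] = []
  while (true) {
    const { done, value } = await reader.read()
    if (done) break
    elements.push(value)
  }
  return elements
}

// Hypothetical usage with an array output schema:
// const stream = await agent.stream('List three colors', {
//   structuredOutput: { schema: z.array(z.string()) },
// })
// const colors = await collectElements(stream.elementStream)
```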

Promise-based Properties

These properties resolve to final values after the stream completes:

text:

Promise<string>
The complete concatenated text response from the model. Resolves when text generation is finished.

object:

Promise<OUTPUT>
The complete structured object response when using output schemas. Validated against the schema before resolving. Rejects if validation fails.
Promise

InferSchemaOutput:

OUTPUT
Fully typed object matching the exact schema definition

reasoning:

Promise<string>
Complete reasoning text for models that support reasoning (like OpenAI's o1 series). Returns empty string for models without reasoning capability.

reasoningText:

Promise<string | undefined>
Alternative access to reasoning content. May be undefined for models that don't support reasoning, whereas 'reasoning' resolves to an empty string.
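That distinction matters when you need to tell "model doesn't support reasoning" apart from "no reasoning text was produced". A small guard like the one below (a hypothetical helper, not part of the Mastra API) makes the three cases explicit:

```typescript
// Hypothetical helper, not part of Mastra: classify the resolved
// reasoningText value into its three possible states.
function reasoningStatus(
  reasoningText: string | undefined,
): 'unsupported' | 'empty' | 'present' {
  if (reasoningText === undefined) return 'unsupported' // model has no reasoning capability
  return reasoningText.length > 0 ? 'present' : 'empty' // supported, but may be blank
}

// Usage sketch:
// console.log(reasoningStatus(await stream.reasoningText))
```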

toolCalls:

Promise<ToolCallChunk[]>
Array of all tool call chunks made during execution. Each chunk contains tool metadata and execution details.
ToolCallChunk

type:

'tool-call'
Chunk type identifier

runId:

string
Execution run identifier

from:

ChunkFrom
Source of the chunk (AGENT, WORKFLOW, etc.)

payload:

ToolCallPayload
Tool call data including toolCallId, toolName, args, and execution details

toolResults:

Promise<ToolResultChunk[]>
Array of all tool result chunks corresponding to the tool calls. Contains execution results and error information.
ToolResultChunk

type:

'tool-result'
Chunk type identifier

runId:

string
Execution run identifier

from:

ChunkFrom
Source of the chunk (AGENT, WORKFLOW, etc.)

payload:

ToolResultPayload
Tool result data including toolCallId, toolName, result, and error status

usage:

Promise<LanguageModelUsage>
Token usage statistics including input tokens, output tokens, total tokens, and reasoning tokens (for reasoning models).
Record

inputTokens:

number
Tokens consumed by the input prompt

outputTokens:

number
Tokens generated in the response

totalTokens:

number
Sum of input and output tokens

reasoningTokens?:

number
Hidden reasoning tokens (for reasoning models)

cachedInputTokens?:

number
Number of input tokens that were a cache hit
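A minimal sketch of working with the resolved usage object, using the field names from the table above. The formatting helper is hypothetical, not a Mastra API:

```typescript
// Field names follow the usage table above; optional fields may be absent.
interface LanguageModelUsage {
  inputTokens: number
  outputTokens: number
  totalTokens: number
  reasoningTokens?: number
  cachedInputTokens?: number
}

// Hypothetical helper: format usage statistics for logging.
function formatUsage(usage: LanguageModelUsage): string {
  const parts = [
    `in=${usage.inputTokens}`,
    `out=${usage.outputTokens}`,
    `total=${usage.totalTokens}`,
  ]
  if (usage.reasoningTokens !== undefined) parts.push(`reasoning=${usage.reasoningTokens}`)
  if (usage.cachedInputTokens !== undefined) parts.push(`cached=${usage.cachedInputTokens}`)
  return parts.join(' ')
}

// Usage sketch:
// console.log(formatUsage(await stream.usage))
```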

finishReason:

Promise<string | undefined>
Reason why generation stopped (e.g., 'stop', 'length', 'tool_calls', 'content_filter'). Undefined if the stream hasn't finished.
enum

stop:

'stop'
Model finished naturally

length:

'length'
Hit maximum token limit

tool_calls:

'tool_calls'
Model called tools

content_filter:

'content_filter'
Content was filtered
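The enum values above can be branched on once the promise resolves. The helper below is a hypothetical sketch (not part of Mastra) mapping each documented value to its meaning:

```typescript
// Finish reasons documented above.
type FinishReason = 'stop' | 'length' | 'tool_calls' | 'content_filter'

// Hypothetical helper: map a resolved finishReason to a human-readable note.
function describeFinishReason(reason: string | undefined): string {
  switch (reason as FinishReason | undefined) {
    case 'stop':
      return 'Model finished naturally'
    case 'length':
      return 'Hit maximum token limit'
    case 'tool_calls':
      return 'Model called tools'
    case 'content_filter':
      return 'Content was filtered'
    default:
      return 'Stream has not finished'
  }
}

// Usage sketch:
// console.log(describeFinishReason(await stream.finishReason))
```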

response:

Promise<Response>
Response metadata and messages from the model provider.
Response

id?:

string
Response ID from the model provider

timestamp?:

Date
Response timestamp

modelId?:

string
Model identifier used for this response

headers?:

Record<string, string>
Response headers from the model provider

messages?:

ResponseMessage[]
Response messages in model format

uiMessages?:

UIMessage[]
Response messages in UI format, includes any metadata added by output processors
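Since every field on the response object is optional, logging code should handle absent values. The label builder below is a hypothetical sketch using the field names from the Response shape above:

```typescript
// Subset of the Response shape documented above; every field is optional.
interface ResponseMeta {
  id?: string
  timestamp?: Date
  modelId?: string
}

// Hypothetical helper: build a one-line log label from response metadata,
// falling back to placeholders for any missing field.
function responseLabel(meta: ResponseMeta): string {
  const model = meta.modelId ?? 'unknown-model'
  const id = meta.id ?? 'no-id'
  const when = meta.timestamp ? meta.timestamp.toISOString() : 'no-timestamp'
  return `${model} ${id} @ ${when}`
}

// Usage sketch:
// const response = await stream.response
// console.log(responseLabel(response))
```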

Error Properties

error:

string | Error | { message: string; stack: string; } | undefined
Error information if the stream encountered an error. Undefined if no errors occurred. Can be a string message, Error object, or serialized error with stack trace.

Methods

getFullOutput:

() => Promise<FullOutput>
Returns a comprehensive output object containing all results: text, structured object, tool calls, usage statistics, reasoning, and metadata. Convenient single method to access all stream results.
FullOutput

text:

string
Complete text response

object?:

OUTPUT
Structured output if schema was provided

toolCalls:

ToolCallChunk[]
All tool call chunks made

toolResults:

ToolResultChunk[]
All tool result chunks

usage:

Record<string, number>
Token usage statistics

reasoning?:

string
Reasoning text if available

finishReason?:

string
Why generation finished

response:

Response
Response metadata and messages from the model provider

consumeStream:

(options?: ConsumeStreamOptions) => Promise<void>
Manually consume the entire stream without processing chunks. Useful when you only need the final promise-based results and want to trigger stream consumption.
ConsumeStreamOptions

onError?:

(error: Error) => void
Callback for handling stream errors

Usage Examples

Basic Text Streaming

const stream = await agent.stream('Write a haiku')

// Stream text as it's generated
for await (const text of stream.textStream) {
  process.stdout.write(text)
}

// Or get the complete text
const fullText = await stream.text
console.log(fullText)

Structured Output Streaming

const stream = await agent.stream('Generate user data', {
  structuredOutput: {
    schema: z.object({
      name: z.string(),
      age: z.number(),
      email: z.string(),
    }),
  },
})

// Stream partial objects
for await (const partial of stream.objectStream) {
  console.log('Progress:', partial) // { name: "John" }, { name: "John", age: 30 }, ...
}

// Get final validated object
const user = await stream.object
console.log('Final:', user) // { name: "John", age: 30, email: "john@example.com" }

Tool Calls and Results

const stream = await agent.stream("What's the weather in NYC?", {
  tools: { weather: weatherTool },
})

// Monitor tool calls
const toolCalls = await stream.toolCalls
const toolResults = await stream.toolResults

console.log('Tools called:', toolCalls)
console.log('Results:', toolResults)

Complete Output Access

const stream = await agent.stream('Analyze this data')

const output = await stream.getFullOutput()
console.log({
  text: output.text,
  usage: output.usage,
  reasoning: output.reasoning,
  finishReason: output.finishReason,
})

Full Stream Processing

const stream = await agent.stream('Complex task')

for await (const chunk of stream.fullStream) {
  switch (chunk.type) {
    case 'text-delta':
      process.stdout.write(chunk.payload.text)
      break
    case 'tool-call':
      console.log(`Calling ${chunk.payload.toolName}...`)
      break
    case 'reasoning-delta':
      console.log(`Reasoning: ${chunk.payload.text}`)
      break
    case 'finish': {
      console.log(`Done! Reason: ${chunk.payload.stepResult.reason}`)
      // Access response messages with any metadata added by output processors
      const uiMessages = chunk.payload.response?.uiMessages
      if (uiMessages) {
        console.log('Response messages:', uiMessages)
      }
      break
    }
  }
}

Error Handling

const stream = await agent.stream('Analyze this data')

try {
  // Option 1: Handle errors in consumeStream
  await stream.consumeStream({
    onError: error => {
      console.error('Stream error:', error)
    },
  })

  const result = await stream.text
} catch (error) {
  console.error('Failed to get result:', error)
}

// Option 2: Check error property
const result = await stream.getFullOutput()
if (stream.error) {
  console.error('Stream had errors:', stream.error)
}
Related

  • .stream() - Method that returns MastraModelOutput
  • ChunkType - All possible chunk types in the full stream