
# MastraModelOutput

The MastraModelOutput class is returned by .stream() and provides both streaming and promise-based access to model outputs. It supports structured output generation, tool calls, reasoning, and comprehensive usage tracking.

```typescript
// MastraModelOutput is returned by agent.stream()
const stream = await agent.stream("Hello world");
```

For setup and basic usage, see the .stream() method documentation.

## Streaming Properties

These properties provide real-time access to model outputs as they're generated:

- `fullStream`: `ReadableStream<ChunkType<OUTPUT>>`
  Complete stream of all chunk types, including text, tool calls, reasoning, metadata, and control chunks. Provides granular access to every aspect of the model's response. `ChunkType<OUTPUT>` covers all possible chunk types that can be emitted during streaming.
- `textStream`: `ReadableStream<string>`
  Stream of incremental text content only. Filters out all metadata, tool calls, and control chunks to provide just the text being generated.
- `objectStream`: `ReadableStream<PartialSchemaOutput<OUTPUT>>`
  Stream of progressive structured object updates when using output schemas. Emits partial objects as they're built up, allowing real-time visualization of structured data generation. Each `PartialSchemaOutput<OUTPUT>` is a partially completed object matching the defined schema.
- `elementStream`: `ReadableStream<InferSchemaOutput<OUTPUT> extends (infer T)[] ? T : never>`
  Stream of individual array elements when the output schema defines an array type. Each element is emitted as it's completed rather than waiting for the entire array.
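
The usage examples below don't cover `elementStream`, so here is a minimal sketch of the consumption pattern. The agent call is replaced by a hand-built stream (the `User` type and `makeElementStream` helper are invented for this sketch) so it runs on its own; with a real agent configured with an array schema you would iterate `stream.elementStream` the same way.

```typescript
// Sketch: consuming elementStream when the output schema is an array.
// A real call would be `agent.stream(..., { structuredOutput: { schema } })`;
// here the stream is stubbed so the pattern is runnable on its own.
type User = { name: string; age: number };

function makeElementStream(): ReadableStream<User> {
  return new ReadableStream<User>({
    start(controller) {
      controller.enqueue({ name: "Ada", age: 36 });
      controller.enqueue({ name: "Grace", age: 45 });
      controller.close();
    },
  });
}

async function collectElements(stream: ReadableStream<User>): Promise<User[]> {
  const elements: User[] = [];
  // Each completed array element is emitted as soon as it is finished,
  // rather than waiting for the whole array.
  for await (const element of stream) {
    elements.push(element);
  }
  return elements;
}

const users = await collectElements(makeElementStream());
console.log(users.map((u) => u.name)); // [ 'Ada', 'Grace' ]
```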

## Promise-based Properties

These properties resolve to final values after the stream completes:

- `text`: `Promise<string>`
  The complete concatenated text response from the model. Resolves when text generation is finished.
- `object`: `Promise<InferSchemaOutput<OUTPUT>>`
  The complete structured object response when using output schemas. Validated against the schema before resolving; rejects if validation fails. `InferSchemaOutput<OUTPUT>` is the fully typed object matching the exact schema definition.
- `reasoning`: `Promise<string>`
  Complete reasoning text for models that support reasoning (such as OpenAI's o1 series). Resolves to an empty string for models without reasoning capability.
- `reasoningText`: `Promise<string | undefined>`
  Alternative access to reasoning content. May be `undefined` for models that don't support reasoning, whereas `reasoning` resolves to an empty string.

- `toolCalls`: `Promise<ToolCallChunk[]>`
  Array of all tool call chunks made during execution. Each `ToolCallChunk` contains:
  - `type`: `'tool-call'`. Chunk type identifier.
  - `runId`: `string`. Execution run identifier.
  - `from`: `ChunkFrom`. Source of the chunk (`AGENT`, `WORKFLOW`, etc.).
  - `payload`: `ToolCallPayload`. Tool call data including `toolCallId`, `toolName`, `args`, and execution details.

- `toolResults`: `Promise<ToolResultChunk[]>`
  Array of all tool result chunks corresponding to the tool calls. Contains execution results and error information. Each `ToolResultChunk` contains:
  - `type`: `'tool-result'`. Chunk type identifier.
  - `runId`: `string`. Execution run identifier.
  - `from`: `ChunkFrom`. Source of the chunk (`AGENT`, `WORKFLOW`, etc.).
  - `payload`: `ToolResultPayload`. Tool result data including `toolCallId`, `toolName`, `result`, and error status.

- `usage`: `Promise<LanguageModelUsage>`
  Token usage statistics, with fields:
  - `inputTokens`: `number`. Tokens consumed by the input prompt.
  - `outputTokens`: `number`. Tokens generated in the response.
  - `totalTokens`: `number`. Sum of input and output tokens.
  - `reasoningTokens?`: `number`. Hidden reasoning tokens (for reasoning models).
  - `cachedInputTokens?`: `number`. Number of input tokens that were a cache hit.

- `finishReason`: `Promise<string | undefined>`
  Reason why generation stopped. `undefined` if the stream hasn't finished. Possible values:
  - `'stop'`: the model finished naturally.
  - `'length'`: the maximum token limit was hit.
  - `'tool_calls'`: the model called tools.
  - `'content_filter'`: content was filtered.
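
Finish reasons often end up in logs, so a small helper can translate them into the descriptions above. A minimal sketch (the `describeFinishReason` name is made up here, not part of the API):

```typescript
// Hypothetical helper: map finishReason values to human-readable messages.
function describeFinishReason(reason: string | undefined): string {
  switch (reason) {
    case "stop":
      return "Model finished naturally";
    case "length":
      return "Hit maximum token limit";
    case "tool_calls":
      return "Model called tools";
    case "content_filter":
      return "Content was filtered";
    default:
      return "Stream hasn't finished (or unknown reason)";
  }
}

console.log(describeFinishReason("length")); // Hit maximum token limit
```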

## Error Properties

- `error`: `string | Error | { message: string; stack: string; } | undefined`
  Error information if the stream encountered an error; `undefined` if no errors occurred. Can be a string message, an `Error` object, or a serialized error with a stack trace.
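
Because `error` is a union type, callers need to narrow it before logging. A minimal sketch of that narrowing (the `errorMessage` helper is hypothetical, not part of the API):

```typescript
// Mirrors the documented union type of the `error` property.
type StreamError = string | Error | { message: string; stack: string } | undefined;

// Hypothetical helper: normalize the union into a printable message.
function errorMessage(error: StreamError): string | undefined {
  if (error === undefined) return undefined;        // no error occurred
  if (typeof error === "string") return error;      // plain string message
  if (error instanceof Error) return error.message; // Error object
  return error.message;                             // serialized { message, stack }
}

console.log(errorMessage(new Error("boom"))); // boom
```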

## Methods

- `getFullOutput`: `() => Promise<FullOutput>`
  Returns a comprehensive output object containing all results: text, structured object, tool calls, usage statistics, reasoning, and metadata. A convenient single method to access all stream results. `FullOutput` contains:
  - `text`: `string`. Complete text response.
  - `object?`: `InferSchemaOutput<OUTPUT>`. Structured output, if a schema was provided.
  - `toolCalls`: `ToolCallChunk[]`. All tool call chunks made.
  - `toolResults`: `ToolResultChunk[]`. All tool result chunks.
  - `usage`: `Record<string, number>`. Token usage statistics.
  - `reasoning?`: `string`. Reasoning text, if available.
  - `finishReason?`: `string`. Why generation finished.

- `consumeStream`: `(options?: ConsumeStreamOptions) => Promise<void>`
  Manually consumes the entire stream without processing chunks. Useful when you only need the final promise-based results and want to trigger stream consumption. `ConsumeStreamOptions`:
  - `onError?`: `(error: Error) => void`. Callback for handling stream errors.

## Usage Examples

### Basic Text Streaming

```typescript
const stream = await agent.stream("Write a haiku");

// Stream text as it's generated
for await (const text of stream.textStream) {
  process.stdout.write(text);
}

// Or get the complete text
const fullText = await stream.text;
console.log(fullText);
```

### Structured Output Streaming

```typescript
const stream = await agent.stream("Generate user data", {
  structuredOutput: {
    schema: z.object({
      name: z.string(),
      age: z.number(),
      email: z.string(),
    }),
  },
});

// Stream partial objects
for await (const partial of stream.objectStream) {
  console.log("Progress:", partial); // { name: "John" }, { name: "John", age: 30 }, ...
}

// Get final validated object
const user = await stream.object;
console.log("Final:", user); // { name: "John", age: 30, email: "john@example.com" }
```

### Tool Calls and Results

```typescript
const stream = await agent.stream("What's the weather in NYC?", {
  tools: { weather: weatherTool },
});

// Monitor tool calls
const toolCalls = await stream.toolCalls;
const toolResults = await stream.toolResults;

console.log("Tools called:", toolCalls);
console.log("Results:", toolResults);
```

### Complete Output Access

```typescript
const stream = await agent.stream("Analyze this data");

const output = await stream.getFullOutput();
console.log({
  text: output.text,
  usage: output.usage,
  reasoning: output.reasoning,
  finishReason: output.finishReason,
});
```

### Full Stream Processing

```typescript
const stream = await agent.stream("Complex task");

for await (const chunk of stream.fullStream) {
  switch (chunk.type) {
    case "text-delta":
      process.stdout.write(chunk.payload.text);
      break;
    case "tool-call":
      console.log(`Calling ${chunk.payload.toolName}...`);
      break;
    case "reasoning-delta":
      console.log(`Reasoning: ${chunk.payload.text}`);
      break;
    case "finish":
      console.log(`Done! Reason: ${chunk.payload.stepResult.reason}`);
      break;
  }
}
```

### Error Handling

```typescript
const stream = await agent.stream("Analyze this data");

try {
  // Option 1: Handle errors in consumeStream
  await stream.consumeStream({
    onError: (error) => {
      console.error("Stream error:", error);
    },
  });

  const result = await stream.text;
} catch (error) {
  console.error("Failed to get result:", error);
}

// Option 2: Check error property
const result = await stream.getFullOutput();
if (stream.error) {
  console.error("Stream had errors:", stream.error);
}
```
- `.stream()`: the method that returns `MastraModelOutput`
- `ChunkType`: all possible chunk types in the full stream