ChunkType
Experimental API: This type is part of the experimental streamVNext()
method. The API may change as we refine the feature based on feedback.
The ChunkType type defines the Mastra format for stream chunks that can be emitted during streaming responses from agents.
Base Properties
All chunks include these base properties:
- type
- runId
- from:
  - AGENT
  - USER
  - SYSTEM
  - WORKFLOW
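
As a rough mental model, a chunk can be treated as a discriminated union over type, with runId and from present on every member. The TypeScript sketch below is illustrative only; the concrete field types are assumptions, not the published ChunkType definition.

```typescript
// Illustrative sketch only -- the field types here are assumptions,
// not the published ChunkType definition.
type ChunkSource = 'AGENT' | 'USER' | 'SYSTEM' | 'WORKFLOW';

interface BaseChunk {
  type: string;      // discriminator, e.g. 'text-delta', 'tool-call', 'finish'
  runId: string;     // assumed: identifies the run that produced this chunk
  from: ChunkSource; // which part of the system emitted the chunk
}
```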
Text Chunks
text-start
Signals the beginning of text generation.
- type: 'text-start'
- payload:
  - id
  - providerMetadata?
text-delta
Incremental text content during generation.
- type: 'text-delta'
- payload:
  - id
  - text
  - providerMetadata?
text-end
Signals the end of text generation.
- type: 'text-end'
- payload:
  - id
  - providerMetadata?
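
For example, the full text can be reassembled by concatenating text-delta payloads between the matching text-start and text-end chunks. The sketch below assumes an agent instance configured elsewhere and uses only the payload fields listed above; the prompt string is a placeholder.

```typescript
// Reassemble streamed text per text-block id (sketch; `agent` is assumed to be configured elsewhere).
const stream = await agent.streamVNext("Summarize the release notes");
const textBlocks = new Map<string, string>();

for await (const chunk of stream.fullStream) {
  if (chunk.type === 'text-start') {
    textBlocks.set(chunk.payload.id, '');
  } else if (chunk.type === 'text-delta') {
    textBlocks.set(chunk.payload.id, (textBlocks.get(chunk.payload.id) ?? '') + chunk.payload.text);
  } else if (chunk.type === 'text-end') {
    console.log('Completed text block:', textBlocks.get(chunk.payload.id));
  }
}
```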
Reasoning Chunks
reasoning-start
Signals the beginning of reasoning generation (for models that support reasoning).
- type: 'reasoning-start'
- payload:
  - id
  - signature?
  - providerMetadata?
reasoning-delta
Incremental reasoning text during generation.
- type: 'reasoning-delta'
- payload:
  - id
  - text
  - providerMetadata?
reasoning-end
Signals the end of reasoning generation.
- type: 'reasoning-end'
- payload:
  - id
  - signature?
  - providerMetadata?
reasoning-signature
Contains the reasoning signature from models that support advanced reasoning (like OpenAI’s o1 series). The signature represents metadata about the model’s internal reasoning process, such as effort level or reasoning approach, but not the actual reasoning content itself.
- type: 'reasoning-signature'
- payload:
  - id
  - signature
  - providerMetadata?
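
To surface reasoning separately from the answer, the reasoning-delta text and the final signature can be collected as they arrive. This is a sketch under the same assumptions as the usage example at the end of this page (an agent configured elsewhere); reasoning chunks only appear for models that emit reasoning.

```typescript
// Collect reasoning text and its signature separately from the visible answer (sketch).
const stream = await agent.streamVNext("Plan a three-day trip to Kyoto");
let reasoning = '';
let reasoningSignature: unknown;

for await (const chunk of stream.fullStream) {
  if (chunk.type === 'reasoning-delta') {
    reasoning += chunk.payload.text;              // incremental reasoning text
  } else if (chunk.type === 'reasoning-signature') {
    reasoningSignature = chunk.payload.signature; // metadata about the reasoning, not its content
  } else if (chunk.type === 'text-delta') {
    process.stdout.write(chunk.payload.text);     // the visible answer
  }
}

console.log('\nReasoning characters:', reasoning.length, 'signature:', reasoningSignature);
```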
Tool Chunks
tool-call
A tool is being called.
- type: 'tool-call'
- payload:
  - toolCallId
  - toolName
  - args?
  - providerExecuted?
  - output?
  - providerMetadata?
tool-result
Result from a tool execution.
- type: 'tool-result'
- payload:
  - toolCallId
  - toolName
  - result
  - isError?
  - providerExecuted?
  - args?
  - providerMetadata?
tool-call-input-streaming-start
Signals the start of streaming tool call arguments.
- type: 'tool-call-input-streaming-start'
- payload:
  - toolCallId
  - toolName
  - providerExecuted?
  - dynamic?
  - providerMetadata?
tool-call-delta
Incremental tool call arguments during streaming.
- type: 'tool-call-delta'
- payload:
  - argsTextDelta
  - toolCallId
  - toolName?
  - providerMetadata?
tool-call-input-streaming-end
Signals the end of streaming tool call arguments.
- type: 'tool-call-input-streaming-end'
- payload:
  - toolCallId
  - providerMetadata?
tool-error
An error occurred during tool execution.
- type: 'tool-error'
- payload:
  - id?
  - toolCallId
  - toolName
  - args?
  - error
  - providerExecuted?
  - providerMetadata?
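
A common pattern is to correlate tool-call chunks with their tool-result chunks by toolCallId, and to treat tool-error as the failure path. The sketch below assumes an agent with tools configured elsewhere and uses only the payload fields documented above.

```typescript
// Track tool calls by toolCallId and report results or errors (sketch).
const stream = await agent.streamVNext("What's the weather in Berlin?");
const pending = new Map<string, string>(); // toolCallId -> toolName, calls still awaiting a result

for await (const chunk of stream.fullStream) {
  switch (chunk.type) {
    case 'tool-call':
      pending.set(chunk.payload.toolCallId, chunk.payload.toolName);
      break;
    case 'tool-result':
      console.log(`${chunk.payload.toolName} returned:`, chunk.payload.result);
      pending.delete(chunk.payload.toolCallId);
      break;
    case 'tool-error':
      console.error(`${chunk.payload.toolName} failed:`, chunk.payload.error);
      pending.delete(chunk.payload.toolCallId);
      break;
  }
}

console.log('Calls that never resolved:', [...pending.values()]);
```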
Source and File Chunks
source
Contains source information for content.
- type: 'source'
- payload:
  - id
  - sourceType
  - title
  - mimeType?
  - filename?
  - url?
  - providerMetadata?
file
Contains file data.
- type: 'file'
- payload:
  - data
  - base64?
  - mimeType
  - providerMetadata?
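
Source and file chunks can be collected as they arrive, for example to render citations or attachments alongside the text. The sketch below touches only the payload fields listed above; the value types are assumptions and the prompt is a placeholder.

```typescript
// Collect cited sources and emitted files while streaming (sketch; payload value types are assumed).
const stream = await agent.streamVNext("Research recent Mastra releases");
const sources: unknown[] = [];

for await (const chunk of stream.fullStream) {
  if (chunk.type === 'source') {
    sources.push({ title: chunk.payload.title, url: chunk.payload.url });
  } else if (chunk.type === 'file') {
    console.log('Received file with MIME type:', chunk.payload.mimeType);
  }
}

console.log('Sources cited:', sources);
```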
Control Chunks
start
Signals the start of streaming.
- type: 'start'
- payload:
  - [key: string]:
step-start
Signals the start of a processing step.
- type: 'step-start'
- payload:
  - messageId?
  - request
  - warnings?
step-finish
Signals the completion of a processing step.
- type: 'step-finish'
- payload:
  - id?
  - messageId?
  - stepResult
  - output
  - metadata
  - totalUsage?
  - response?
  - providerMetadata?
raw
Contains raw data from the provider.
- type: 'raw'
- payload:
  - [key: string]:
finish
Stream has completed successfully.
- type: 'finish'
- payload:
  - stepResult
  - output
  - metadata
  - messages
error
An error occurred during streaming.
- type: 'error'
- payload:
  - error
abort
Stream was aborted.
- type: 'abort'
- payload:
  - [key: string]:
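
Control chunks are mostly useful for lifecycle handling: step-start and step-finish bracket each processing step, while finish, error, and abort indicate how the stream ended. A minimal sketch of handling the terminal chunks, using only the payload fields shown in the usage example at the end of this page:

```typescript
// Handle the terminal control chunks (sketch; assumes `agent` is configured elsewhere).
const stream = await agent.streamVNext("Hello");

for await (const chunk of stream.fullStream) {
  if (chunk.type === 'finish') {
    console.log('Finished:', chunk.payload.stepResult.reason);
    console.log('Usage:', chunk.payload.output.usage);
  } else if (chunk.type === 'error') {
    console.error('Stream failed:', chunk.payload.error);
  } else if (chunk.type === 'abort') {
    console.warn('Stream was aborted before completion');
  }
}
```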
Object and Output Chunks
object
Emitted when using output generation with defined schemas. Contains partial or complete structured data that conforms to the specified Zod or JSON schema. This chunk may be skipped in some execution contexts and is primarily used for streaming structured object generation.
- type: 'object'
- object
tool-output
Contains output from agent or workflow execution, particularly used for tracking usage statistics and completion events. Often wraps other chunk types (like finish chunks) to provide nested execution context.
- type: 'tool-output'
- payload:
  - output
step-output
Contains output from workflow step execution, used primarily for usage tracking and step completion events. Similar to tool-output but specifically for individual workflow steps.
- type: 'step-output'
- payload:
  - output
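
When structured output is enabled, the partial object can be observed as it fills in, while tool-output and step-output carry nested execution events. The sketch below is illustrative only: it assumes the streamVNext() call was configured with an output schema (that configuration is not shown) and reads only the fields documented above.

```typescript
// Observe partial structured output and nested output events (sketch).
// Note: object chunks only appear when an output schema is configured; that setup is omitted here.
const stream = await agent.streamVNext("Extract the key facts");

for await (const chunk of stream.fullStream) {
  if (chunk.type === 'object') {
    console.log('Partial object so far:', chunk.object); // grows toward the schema-conformant value
  } else if (chunk.type === 'tool-output' || chunk.type === 'step-output') {
    console.log('Nested output event:', chunk.payload.output);
  }
}
```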
Metadata and Special Chunks
response-metadata
Contains metadata about the LLM provider’s response. Emitted by some providers after text generation to provide additional context like model ID, timestamps, and response headers. This chunk is used internally for state tracking and doesn’t affect message assembly.
- type: 'response-metadata'
- payload:
  - signature?
  - [key: string]:
watch
Contains monitoring and observability data from agent execution. Can include workflow state information, execution progress, or other runtime details, depending on the context where streamVNext() is used.
- type: 'watch'
- payload:
  - workflowState?
  - eventTimestamp?
  - [key: string]:
tripwire
Emitted when the stream is forcibly terminated due to content being blocked by output processors. This acts as a safety mechanism to prevent harmful or inappropriate content from being streamed.
- type: 'tripwire'
- payload:
  - tripwireReason
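
These chunks are mainly informational, but tripwire is worth handling explicitly so a UI can explain why a response stopped. A minimal sketch, again assuming an agent configured elsewhere:

```typescript
// Surface the tripwire reason and stop consuming the stream (sketch).
const stream = await agent.streamVNext("Hello");

for await (const chunk of stream.fullStream) {
  if (chunk.type === 'tripwire') {
    console.warn('Blocked by an output processor:', chunk.payload.tripwireReason);
    break; // the stream has been terminated; stop reading
  }
}
```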
Usage Example
```typescript
const stream = await agent.streamVNext("Hello");

for await (const chunk of stream.fullStream) {
  switch (chunk.type) {
    case 'text-delta':
      console.log('Text:', chunk.payload.text);
      break;
    case 'tool-call':
      console.log('Calling tool:', chunk.payload.toolName);
      break;
    case 'tool-result':
      console.log('Tool result:', chunk.payload.result);
      break;
    case 'reasoning-delta':
      console.log('Reasoning:', chunk.payload.text);
      break;
    case 'finish':
      console.log('Finished:', chunk.payload.stepResult.reason);
      console.log('Usage:', chunk.payload.output.usage);
      break;
    case 'error':
      console.error('Error:', chunk.payload.error);
      break;
  }
}
```
Related Types
- MastraModelOutput - The stream object that emits these chunks
- Agent.streamVNext() - Method that returns streams emitting these chunks