# stream()

The `stream()` method enables real-time streaming of responses from an agent. It accepts `messages` and an optional `options` object as parameters, similar to `generate()`.
## Parameters

### messages

The `messages` parameter can be:

- A single string
- An array of strings
- An array of message objects with `role` and `content` properties
#### Message Object Structure

```typescript
interface Message {
  role: 'system' | 'user' | 'assistant';
  content: string;
}
```
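Since `messages` accepts three shapes, it can help to see how they all reduce to the object form above. The sketch below is illustrative only; the `normalizeMessages` helper is hypothetical and not part of the library's API, and the assumption that plain strings become `user` messages is mine:

```typescript
type Role = 'system' | 'user' | 'assistant';

interface Message {
  role: Role;
  content: string;
}

// Hypothetical helper: map all three accepted input shapes to Message[].
function normalizeMessages(input: string | string[] | Message[]): Message[] {
  if (typeof input === 'string') {
    // A single string is treated as one user message (assumption).
    return [{ role: 'user', content: input }];
  }
  return input.map(item =>
    typeof item === 'string' ? { role: 'user', content: item } : item
  );
}

console.log(normalizeMessages('Tell me a story.'));
// [{ role: 'user', content: 'Tell me a story.' }]
```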
### options (Optional)

An optional object that can include:

- `output?: string | JSONSchema7 | ZodSchema`
  Defines the output format. Can be `"text"` or a schema for structured output.
- `context?: CoreMessage[]`
  Additional context messages to provide to the agent.
- `threadId?: string`
  Identifier for the conversation thread, allowing context to be maintained across multiple interactions.
- `resourceId?: string`
  Identifier for the user or resource interacting with the agent.
- `onFinish?: (result: string) => Promise<void> | void`
  Callback function called when streaming is complete.
- `onStepFinish?: (step: string) => void`
  Callback function called after each step during streaming.
- `maxSteps?: number`
  Maximum number of steps allowed during streaming.
- `toolsets?: ToolsetsInput`
  Additional toolsets to make available to the agent during this stream.
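To see how these fields fit together, here is a sketch of an options object. The `StreamOptions` interface and the `CoreMessage`/`ToolsetsInput` stand-ins are written out here purely for illustration; the library defines its own versions of these types, and the thread id and step limit are hypothetical values:

```typescript
// Illustrative stand-ins for the library's types.
interface CoreMessage { role: string; content: string; }
type ToolsetsInput = Record<string, unknown>;

// Shape of the options object described above (illustrative, not the
// library's actual type declaration).
interface StreamOptions {
  output?: string | object;
  context?: CoreMessage[];
  threadId?: string;
  resourceId?: string;
  onFinish?: (result: string) => Promise<void> | void;
  onStepFinish?: (step: string) => void;
  maxSteps?: number;
  toolsets?: ToolsetsInput;
}

// Hypothetical values chosen for the example.
const options: StreamOptions = {
  threadId: 'support-thread-42',
  maxSteps: 5,
  context: [{ role: 'system', content: 'Be concise.' }],
  onStepFinish: step => console.log('step:', step),
};

console.log(options.threadId);
// 'support-thread-42'
```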
## Returns

The method returns a promise that resolves to an object containing one or more of the following properties:

- `textStream?: AsyncIterable<string>`
  An async iterable stream of text chunks. Present when the output is `"text"`.
- `objectStream?: AsyncIterable<object>`
  An async iterable stream of structured data. Present when a schema is provided.
- `object?: Promise<object>`
  A promise that resolves to the final structured output when a schema is used.
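To show how a consumer might handle the `textStream` property, here is a minimal sketch that substitutes a mock async iterable for a real agent call; `mockTextStream` and `consume` are hypothetical helpers used only for this illustration:

```typescript
// Mock stand-in for the textStream a real agent call would return.
async function* mockTextStream(): AsyncIterable<string> {
  yield 'Hello, ';
  yield 'world';
}

// Accumulate all text chunks from a result object, if a textStream exists.
async function consume(
  result: { textStream?: AsyncIterable<string> }
): Promise<string> {
  let full = '';
  if (result.textStream) {
    for await (const chunk of result.textStream) {
      full += chunk;
    }
  }
  return full;
}

consume({ textStream: mockTextStream() }).then(text => {
  console.log(text); // 'Hello, world'
});
```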
## Examples

### Basic Text Streaming

```typescript
const stream = await myAgent.stream([
  { role: "user", content: "Tell me a story." }
]);

for await (const chunk of stream.textStream) {
  process.stdout.write(chunk);
}
```
### Structured Output Streaming with Thread Context

```typescript
const schema = {
  type: 'object',
  properties: {
    summary: { type: 'string' },
    nextSteps: { type: 'array', items: { type: 'string' } }
  },
  required: ['summary', 'nextSteps']
};

const response = await myAgent.stream(
  "What should we do next?",
  {
    output: schema,
    threadId: "project-123",
    onFinish: result => console.log("Finished:", result)
  }
);

// A schema was provided, so consume objectStream rather than textStream.
for await (const chunk of response.objectStream) {
  console.log(chunk);
}

const result = await response.object;
console.log("Final structured result:", result);
```
The key difference between an Agent's `stream()` and an LLM's `stream()` is that Agents maintain conversation context through `threadId`, can access tools, and integrate with the agent's memory system.