# Agent.network()
> **Experimental Feature**
> This is an experimental API that may change in future versions. Use with caution in production environments.

The `.network()` method enables multi-agent collaboration, workflow orchestration, and routing. It accepts messages and optional execution options.
## Usage example
```typescript
import { Agent } from "@mastra/core/agent";
import { agent1, agent2 } from "./agents";
import { workflow1 } from "./workflows";
import { tool1, tool2 } from "./tools";

const agent = new Agent({
  id: "network-agent",
  name: "Network Agent",
  instructions:
    "You are a network agent that can help users with a variety of tasks.",
  model: "openai/gpt-5.1",
  agents: {
    agent1,
    agent2,
  },
  workflows: {
    workflow1,
  },
  tools: {
    tool1,
    tool2,
  },
});

await agent.network(`
  Find me the weather in Tokyo.
  Based on the weather, plan an activity for me.
`);
```
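The call returns a stream of network events that can also be awaited for the final outcome. A minimal consumption sketch, continuing from the example above (the chunk handling is illustrative, not an exhaustive list of chunk types):

```typescript
// `agent` is the Network Agent constructed above.
const stream = await agent.network(`
  Find me the weather in Tokyo.
  Based on the weather, plan an activity for me.
`);

for await (const chunk of stream) {
  // Each chunk describes a network event (routing, agent output, etc.).
  console.log(chunk.type);
}

// The stream also exposes promises for the run outcome (see Returns below).
console.log(await stream.status); // final workflow run status
console.log(await stream.result); // final workflow result
```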
## Parameters
- **messages:** `string | string[] | CoreMessage[] | AiMessageType[] | UIMessageWithMetadata[]`
  The messages to send to the agent. Can be a single string, an array of strings, or structured message objects.
- **options?:** `MultiPrimitiveExecutionOptions`
  Optional configuration for the network process.
## Options
- **maxSteps?:** `number`
  Maximum number of steps to run during execution.
- **memory?:** `object`
  Configuration for memory. This is the preferred way to manage memory.
  - **thread:** `string | { id: string; metadata?: Record<string, any>, title?: string }`
    The conversation thread, as a string ID or an object with an `id` and optional `metadata`.
  - **resource:** `string`
    Identifier for the user or resource associated with the thread.
  - **options?:** `MemoryConfig`
    Configuration for memory behavior, like message history and semantic recall.
- **tracingContext?:** `TracingContext`
  Tracing context for creating child spans and adding metadata. Automatically injected when using Mastra's tracing system.
  - **currentSpan?:** `Span`
    Current span for creating child spans and adding metadata. Use this to create custom child spans or update span attributes during execution.
- **tracingOptions?:** `TracingOptions`
  Options for tracing configuration.
  - **metadata?:** `Record<string, any>`
    Metadata to add to the root trace span. Useful for adding custom attributes like user IDs, session IDs, or feature flags.
  - **requestContextKeys?:** `string[]`
    Additional RequestContext keys to extract as metadata for this trace. Supports dot notation for nested values (e.g., `user.id`).
  - **traceId?:** `string`
    Trace ID to use for this execution (1-32 hexadecimal characters). If provided, this trace will be part of the specified trace.
  - **parentSpanId?:** `string`
    Parent span ID to use for this execution (1-16 hexadecimal characters). If provided, the root span will be created as a child of this span.
  - **tags?:** `string[]`
    Tags to apply to this trace. String labels for categorizing and filtering traces.
- **telemetry?:** `TelemetrySettings`
  Settings for OTLP telemetry collection during streaming (not Tracing).
  - **isEnabled?:** `boolean`
    Enable or disable telemetry. Disabled by default while experimental.
  - **recordInputs?:** `boolean`
    Enable or disable input recording. Enabled by default. You might want to disable input recording to avoid recording sensitive information.
  - **recordOutputs?:** `boolean`
    Enable or disable output recording. Enabled by default. You might want to disable output recording to avoid recording sensitive information.
  - **functionId?:** `string`
    Identifier for this function. Used to group telemetry data by function.
- **modelSettings?:** `CallSettings`
  Model-specific settings like `temperature`, `maxOutputTokens`, `topP`, etc. These settings control how the language model generates responses.
  - **temperature?:** `number`
    Controls randomness in generation (0-2). Higher values make output more random.
  - **maxOutputTokens?:** `number`
    Maximum number of tokens to generate in the response. Note: use `maxOutputTokens` (not `maxTokens`), per the AI SDK v5 convention.
  - **maxRetries?:** `number`
    Maximum number of retry attempts for failed requests.
  - **topP?:** `number`
    Nucleus sampling parameter (0-1). Controls diversity of generated text.
  - **topK?:** `number`
    Top-k sampling parameter. Limits vocabulary to the k most likely tokens.
  - **presencePenalty?:** `number`
    Penalty for token presence (-2 to 2). Reduces repetition.
  - **frequencyPenalty?:** `number`
    Penalty for token frequency (-2 to 2). Reduces repetition of frequent tokens.
  - **stopSequences?:** `string[]`
    Stop sequences. If set, the model stops generating text when one of the stop sequences is generated.
- **structuredOutput?:** `StructuredOutputOptions`
  Configuration for generating a typed structured output from the network result.
  - **schema:** `ZodSchema | JSONSchema7`
    The schema to validate the output against. Can be a Zod schema or a JSON Schema.
  - **model?:** `MastraModelConfig`
    Model to use for generating the structured output. Defaults to the agent's model.
  - **instructions?:** `string`
    Custom instructions for generating the structured output.
- **runId?:** `string`
  Unique ID for this generation run. Useful for tracking and debugging purposes.
- **requestContext?:** `RequestContext`
  Request context for dependency injection and contextual information.
- **traceId?:** `string`
  The trace ID associated with this execution when tracing is enabled. Use this to correlate logs and debug execution flow.
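Several of these options are commonly combined. The sketch below shows a call using `maxSteps`, `memory`, and `modelSettings` together; the thread and resource IDs are placeholders, and `agent` is the Network Agent from the usage example:

```typescript
// Hypothetical thread/resource identifiers for illustration only.
const stream = await agent.network("Plan a day trip near Tokyo.", {
  maxSteps: 10,
  memory: {
    thread: { id: "trip-planning-thread", title: "Trip planning" },
    resource: "user-123",
  },
  modelSettings: {
    temperature: 0.3,
    maxOutputTokens: 1024,
  },
});
```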
## Returns
- **stream:** `MastraAgentNetworkStream<NetworkChunkType>`
  A custom stream that extends `ReadableStream<NetworkChunkType>` with additional network-specific properties.
- **status:** `Promise<RunStatus>`
  A promise that resolves to the current workflow run status.
- **result:** `Promise<WorkflowResult<TState, TOutput, TSteps>>`
  A promise that resolves to the final workflow result.
- **usage:** `Promise<{ promptTokens: number; completionTokens: number; totalTokens: number }>`
  A promise that resolves to token usage statistics.
- **object:** `Promise<OUTPUT | undefined>`
  A promise that resolves to the structured output object. Only available when the `structuredOutput` option is provided. Resolves to `undefined` if no schema was specified.
- **objectStream:** `ReadableStream<Partial<OUTPUT>>`
  A stream of partial objects during structured output generation. Useful for streaming partial results as they're being generated.
## Structured Output
When you need typed, validated results from your network, use the `structuredOutput` option. The network will generate a response matching your schema after task completion.
```typescript
import { z } from "zod";

const resultSchema = z.object({
  summary: z.string().describe("A brief summary of the findings"),
  recommendations: z.array(z.string()).describe("List of recommendations"),
  confidence: z.number().min(0).max(1).describe("Confidence score"),
});

const stream = await agent.network("Research AI trends and summarize", {
  structuredOutput: {
    schema: resultSchema,
  },
});

// Consume the stream
for await (const chunk of stream) {
  // Handle streaming events
}

// Get the typed result
const result = await stream.object;

// result is typed as { summary: string; recommendations: string[]; confidence: number }
console.log(result?.summary);
console.log(result?.recommendations);
```
## Streaming Partial Objects
You can also stream partial objects as they're being generated:
```typescript
const stream = await agent.network("Analyze data", {
  structuredOutput: { schema: resultSchema },
});

// Stream partial objects
for await (const partial of stream.objectStream) {
  console.log("Partial result:", partial);
}

// Get final result
const final = await stream.object;
```
## Chunk Types
When using structured output, additional chunk types are emitted:
- `network-object`: Emitted with partial objects during streaming
- `network-object-result`: Emitted with the final structured object
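These chunk types can be distinguished while iterating the stream. The sketch below assumes each chunk carries a `type` discriminator matching the names above; any other fields on the chunk are not documented here, so only the chunk itself is logged. `agent` and `resultSchema` are taken from the Structured Output example:

```typescript
const stream = await agent.network("Research AI trends and summarize", {
  structuredOutput: { schema: resultSchema },
});

for await (const chunk of stream) {
  if (chunk.type === "network-object") {
    // Partial structured object emitted during streaming
    console.log("partial:", chunk);
  } else if (chunk.type === "network-object-result") {
    // Final structured object
    console.log("final:", chunk);
  }
}
```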