Agent.network()

Deprecated

Agent networks are deprecated and will be removed in a future release. Use the supervisor pattern with agent.stream() or agent.generate() instead. See the migration guide to upgrade.

The .network() method enables multi-agent collaboration and routing. It accepts messages and optional execution options.

Usage example

import { Agent } from '@mastra/core/agent'
import { agent1, agent2 } from './agents'
import { workflow1 } from './workflows'
import { tool1, tool2 } from './tools'

const agent = new Agent({
  id: 'network-agent',
  name: 'Network Agent',
  instructions: 'You are a network agent that can help users with a variety of tasks.',
  model: 'openai/gpt-5.1',
  agents: {
    agent1,
    agent2,
  },
  workflows: {
    workflow1,
  },
  tools: {
    tool1,
    tool2,
  },
})

await agent.network(`
  Find me the weather in Tokyo.
  Based on the weather, plan an activity for me.
`)

Parameters

messages:

string | string[] | CoreMessage[] | AiMessageType[] | UIMessageWithMetadata[]
The messages to send to the agent. Can be a single string, an array of strings, or an array of structured message objects.
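
For example, a prompt can be passed as a plain string or as structured messages (a minimal sketch; the role/content message shape follows the AI SDK CoreMessage convention and is an assumption here):

// Single string prompt
await agent.network('Find me the weather in Tokyo.')

// Structured messages (assumed CoreMessage shape with role and content)
await agent.network([
  { role: 'user', content: 'Find me the weather in Tokyo.' },
  { role: 'user', content: 'Based on the weather, plan an activity for me.' },
])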

options?:

MultiPrimitiveExecutionOptions
Optional configuration for the network process.
MultiPrimitiveExecutionOptions

maxSteps?:

number
Maximum number of steps to run during execution.

abortSignal?:

AbortSignal
Signal to abort the network execution. When aborted, the network stops routing, cancels any in-progress sub-agent, tool, or workflow execution, and skips saving partial results to memory.

onAbort?:

(event: { primitiveType: string; primitiveId: string; iteration: number }) => void | Promise<void>
Callback fired when the network is aborted. Receives an event with the type and ID of the primitive that was executing when the abort occurred.

memory?:

object
Configuration for memory. This is the preferred way to manage memory.
object

thread:

string | { id: string; metadata?: Record<string, any>, title?: string }
The conversation thread, as a string ID or an object with an `id` and optional `metadata` and `title`.

resource:

string
Identifier for the user or resource associated with the thread.

options?:

MemoryConfig
Configuration for memory behavior, like message history and semantic recall.
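
For example, a thread and resource can be passed like this (a minimal sketch; the IDs are placeholders and assume the agent is configured with memory):

await agent.network('Plan an activity for me', {
  memory: {
    thread: 'weather-thread',
    resource: 'user-123',
  },
})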

tracingContext?:

TracingContext
Tracing context for creating child spans and adding metadata. Automatically injected when using Mastra's tracing system.
TracingContext

currentSpan?:

Span
Current span for creating child spans and adding metadata. Use this to create custom child spans or update span attributes during execution.

tracingOptions?:

TracingOptions
Options for Tracing configuration.
TracingOptions

metadata?:

Record<string, any>
Metadata to add to the root trace span. Useful for adding custom attributes like user IDs, session IDs, or feature flags.

requestContextKeys?:

string[]
Additional RequestContext keys to extract as metadata for this trace. Supports dot notation for nested values (e.g., 'user.id').

traceId?:

string
Trace ID to use for this execution (1-32 hexadecimal characters). If provided, spans from this execution are recorded as part of the specified trace.

parentSpanId?:

string
Parent span ID to use for this execution (1-16 hexadecimal characters). If provided, the root span will be created as a child of this span.

tags?:

string[]
Tags to apply to this trace. String labels for categorizing and filtering traces.
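
For example, trace metadata and tags might be attached like this (an illustrative sketch; the metadata keys and tag names are placeholders):

await agent.network('Find me the weather in Tokyo.', {
  tracingOptions: {
    metadata: { userId: 'user-123', sessionId: 'session-456' },
    tags: ['weather', 'network'],
  },
})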

telemetry?:

TelemetrySettings
Settings for OTLP telemetry collection during streaming (not Tracing).
TelemetrySettings

isEnabled?:

boolean
Enable or disable telemetry. Disabled by default while experimental.

recordInputs?:

boolean
Enable or disable input recording. Enabled by default. You might want to disable input recording to avoid recording sensitive information.

recordOutputs?:

boolean
Enable or disable output recording. Enabled by default. You might want to disable output recording to avoid recording sensitive information.

functionId?:

string
Identifier for this function. Used to group telemetry data by function.
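
For example, telemetry can be enabled while skipping input recording (a minimal sketch; the functionId value is a placeholder):

await agent.network('Find me the weather in Tokyo.', {
  telemetry: {
    isEnabled: true,
    recordInputs: false,
    functionId: 'weather-network',
  },
})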

modelSettings?:

CallSettings
Model-specific settings like temperature, maxOutputTokens, topP, etc. These settings control how the language model generates responses.

temperature?:

number
Controls randomness in generation (0-2). Higher values make output more random.

maxOutputTokens?:

number
Maximum number of tokens to generate in the response. Note: Use maxOutputTokens (not maxTokens) as per AI SDK v5 convention.

maxRetries?:

number
Maximum number of retry attempts for failed requests.

topP?:

number
Nucleus sampling parameter (0-1). Controls diversity of generated text.

topK?:

number
Top-k sampling parameter. Limits vocabulary to k most likely tokens.

presencePenalty?:

number
Penalty for token presence (-2 to 2). Reduces repetition.

frequencyPenalty?:

number
Penalty for token frequency (-2 to 2). Reduces repetition of frequent tokens.

stopSequences?:

string[]
Stop sequences. If set, the model will stop generating text when one of the stop sequences is generated.
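
For example, generation can be tuned like this (an illustrative sketch; the values are arbitrary):

await agent.network('Find me the weather in Tokyo.', {
  modelSettings: {
    temperature: 0.2,
    maxOutputTokens: 1024,
    topP: 0.9,
  },
})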

structuredOutput?:

StructuredOutputOptions
Configuration for generating a typed structured output from the network result.
StructuredOutputOptions

schema:

ZodSchema | JSONSchema7
The schema to validate the output against. Can be a Zod schema or JSON Schema.

model?:

MastraModelConfig
Model to use for generating the structured output. Defaults to the agent's model.

instructions?:

string
Custom instructions for generating the structured output.

runId?:

string
Unique ID for this generation run. Useful for tracking and debugging purposes.

requestContext?:

RequestContext
Request Context for dependency injection and contextual information.

traceId?:

string
The trace ID associated with this execution when Tracing is enabled. Use this to correlate logs and debug execution flow.

onStepFinish?:

(event: any) => Promise<void> | void
Callback fired after each LLM step within a sub-agent execution. Receives step details including finish reason and token usage.

onError?:

({ error }: { error: Error | string }) => Promise<void> | void
Callback fired when an error occurs during sub-agent execution.
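
For example, the step and error callbacks can be combined with maxSteps to observe and bound a run (a sketch; the logged fields are illustrative):

await agent.network('Research this topic thoroughly', {
  maxSteps: 10,
  onStepFinish: (step) => {
    // Inspect finish reason, token usage, etc. for each LLM step
    console.log('step finished:', step)
  },
  onError: async ({ error }) => {
    console.error('network error:', error)
  },
})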

Returns

stream:

MastraAgentNetworkStream<NetworkChunkType>
A custom stream that extends ReadableStream<NetworkChunkType> with additional network-specific properties.

status:

Promise<RunStatus>
A promise that resolves to the current workflow run status.

result:

Promise<WorkflowResult<TState, TOutput, TSteps>>
A promise that resolves to the final workflow result.

usage:

Promise<{ promptTokens: number; completionTokens: number; totalTokens: number }>
A promise that resolves to token usage statistics.

object:

Promise<OUTPUT | undefined>
A promise that resolves to the structured output object. Only available when structuredOutput option is provided. Resolves to undefined if no schema was specified.

objectStream:

ReadableStream<Partial<OUTPUT>>
A stream of partial objects during structured output generation. Useful for streaming partial results as they're being generated.
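
For example, after consuming the stream, the run-level promises listed above can be awaited (a minimal sketch):

const stream = await agent.network('Find me the weather in Tokyo.')

for await (const chunk of stream) {
  // Handle streaming events
}

const status = await stream.status
const usage = await stream.usage
console.log(status, usage.totalTokens)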

Structured Output

When you need typed, validated results from your network, use the structuredOutput option. The network will generate a response matching your schema after task completion.

import { z } from 'zod'

const resultSchema = z.object({
  summary: z.string().describe('A brief summary of the findings'),
  recommendations: z.array(z.string()).describe('List of recommendations'),
  confidence: z.number().min(0).max(1).describe('Confidence score'),
})

const stream = await agent.network('Research AI trends and summarize', {
  structuredOutput: {
    schema: resultSchema,
  },
})

// Consume the stream
for await (const chunk of stream) {
  // Handle streaming events
}

// Get the typed result
const result = await stream.object
// result is typed as { summary: string; recommendations: string[]; confidence: number }
console.log(result?.summary)
console.log(result?.recommendations)

Streaming Partial Objects

You can also stream partial objects as they're being generated:

const stream = await agent.network('Analyze data', {
  structuredOutput: { schema: resultSchema },
})

// Stream partial objects
for await (const partial of stream.objectStream) {
  console.log('Partial result:', partial)
}

// Get final result
const final = await stream.object

Chunk Types

When using structured output, additional chunk types are emitted:

  • network-object: Emitted with partial objects during streaming
  • network-object-result: Emitted with the final structured object
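
As a sketch, these can be detected while iterating the stream (the whole chunk is logged here, since the exact payload shape is not documented above):

for await (const chunk of stream) {
  if (chunk.type === 'network-object') {
    // Partial structured object emitted during streaming
    console.log('partial object chunk:', chunk)
  }
  if (chunk.type === 'network-object-result') {
    // Final structured object
    console.log('final object chunk:', chunk)
  }
}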

Aborting a Network

Use abortSignal to cancel a running network. When aborted, the network stops routing, cancels any in-progress sub-agent, tool, or workflow execution, and doesn't save partial results to memory.

const controller = new AbortController()

// Abort after 30 seconds
setTimeout(() => controller.abort(), 30_000)

const stream = await agent.network('Research this topic thoroughly', {
  abortSignal: controller.signal,
  onAbort: ({ primitiveType, primitiveId, iteration }) => {
    console.log(`Aborted ${primitiveType} "${primitiveId}" at iteration ${iteration}`)
  },
})

for await (const chunk of stream) {
  if (
    chunk.type === 'routing-agent-abort' ||
    chunk.type === 'agent-execution-abort' ||
    chunk.type === 'tool-execution-abort' ||
    chunk.type === 'workflow-execution-abort'
  ) {
    console.log('Network was aborted')
  }
}