
Agent.network()

Experimental Feature

This is an experimental API that may change in future versions. Use it with caution in production environments.

The .network() method enables multi-agent collaboration, workflow orchestration, and routing. It accepts messages and optional execution options.

Usage example

import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";
import { agent1, agent2 } from "./agents";
import { workflow1 } from "./workflows";
import { tool1, tool2 } from "./tools";

const agent = new Agent({
  name: "network-agent",
  instructions:
    "You are a network agent that can help users with a variety of tasks.",
  model: openai("gpt-4o"),
  agents: {
    agent1,
    agent2,
  },
  workflows: {
    workflow1,
  },
  tools: {
    tool1,
    tool2,
  },
});

await agent.network(`
Find me the weather in Tokyo.
Based on the weather, plan an activity for me.
`);
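
The call resolves to a stream of network events (see Returns below). Since MastraAgentNetworkStream extends ReadableStream<NetworkChunkType>, one way to consume it, in a runtime where ReadableStream is async-iterable (Node.js 18+), is a for await loop, sketched here:

const stream = await agent.network(`
Find me the weather in Tokyo.
Based on the weather, plan an activity for me.
`);

// Log each chunk as it arrives; no specific NetworkChunkType fields are
// assumed here, so whole chunks are printed.
for await (const chunk of stream) {
  console.log(chunk);
}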

Parameters

messages: string | string[] | CoreMessage[] | AiMessageType[] | UIMessageWithMetadata[]
The messages to send to the agent. Can be a single string, an array of strings, or structured message objects (see the sketch below).

options?: MultiPrimitiveExecutionOptions
Optional configuration for the network process.
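
For illustration only, a sketch of the accepted message shapes; the CoreMessage form follows the AI SDK's role/content convention:

// A single string
await agent.network("Find me the weather in Tokyo.");

// An array of strings
await agent.network([
  "Find me the weather in Tokyo.",
  "Based on the weather, plan an activity for me.",
]);

// Structured messages (CoreMessage-style role/content objects)
await agent.network([{ role: "user", content: "Find me the weather in Tokyo." }]);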

Options

maxSteps?: number
Maximum number of steps to run during execution.

memory?: object
Configuration for memory. This is the preferred way to manage memory. A usage sketch follows this group.

  thread: string | { id: string; metadata?: Record<string, any>; title?: string }
  The conversation thread, as a string ID or an object with an `id` and optional `metadata` and `title`.

  resource: string
  Identifier for the user or resource associated with the thread.

  options?: MemoryConfig
  Configuration for memory behavior, such as message history and semantic recall.
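
A sketch of the memory options above; the thread and resource identifiers are hypothetical placeholders:

await agent.network("Plan an activity for me.", {
  memory: {
    // A plain string ID also works: thread: "trip-planning"
    thread: {
      id: "trip-planning",        // hypothetical thread ID
      title: "Trip planning",
      metadata: { locale: "en" }, // hypothetical metadata
    },
    resource: "user-123",         // hypothetical user identifier
  },
});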

tracingContext?: TracingContext
AI tracing context for creating child spans and adding metadata. Automatically injected when using Mastra's tracing system.

  currentSpan?: AISpan
  Current AI span for creating child spans and adding metadata. Use this to create custom child spans or update span attributes during execution.

tracingOptions?: TracingOptions
Options for AI tracing configuration.

  metadata?: Record<string, any>
  Metadata to add to the root trace span. Useful for adding custom attributes like user IDs, session IDs, or feature flags.

telemetry?: TelemetrySettings
Settings for OTLP telemetry collection during streaming (not AI tracing). A combined sketch follows this group.

  isEnabled?: boolean
  Enable or disable telemetry. Disabled by default while experimental.

  recordInputs?: boolean
  Enable or disable input recording. Enabled by default. You might want to disable input recording to avoid capturing sensitive information.

  recordOutputs?: boolean
  Enable or disable output recording. Enabled by default. You might want to disable output recording to avoid capturing sensitive information.

  functionId?: string
  Identifier for this function. Used to group telemetry data by function.
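
A sketch combining the tracing and telemetry options above; the metadata attributes and functionId are hypothetical:

await agent.network("Find me the weather in Tokyo.", {
  tracingOptions: {
    metadata: { userId: "user-123", sessionId: "abc" }, // hypothetical attributes
  },
  telemetry: {
    isEnabled: true,
    recordInputs: false,           // skip prompts that may contain sensitive data
    recordOutputs: true,
    functionId: "weather-network", // hypothetical grouping ID
  },
});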

modelSettings?: CallSettings
Model-specific settings like temperature, maxOutputTokens, and topP. These settings control how the language model generates responses. A usage sketch follows this group.

  temperature?: number
  Controls randomness in generation (0-2). Higher values make output more random.

  maxOutputTokens?: number
  Maximum number of tokens to generate in the response. Note: use maxOutputTokens (not maxTokens), per the AI SDK v5 convention.

  maxRetries?: number
  Maximum number of retry attempts for failed requests.

  topP?: number
  Nucleus sampling parameter (0-1). Controls diversity of generated text.

  topK?: number
  Top-k sampling parameter. Limits vocabulary to the k most likely tokens.

  presencePenalty?: number
  Penalty for token presence (-2 to 2). Reduces repetition.

  frequencyPenalty?: number
  Penalty for token frequency (-2 to 2). Reduces repetition of frequent tokens.

  stopSequences?: string[]
  Stop sequences. If set, the model stops generating text when one of the stop sequences is produced.
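
A sketch passing modelSettings together with maxSteps; the values are illustrative, not recommendations:

await agent.network("Find me the weather in Tokyo.", {
  maxSteps: 5,
  modelSettings: {
    temperature: 0.3,         // lower randomness for more predictable routing
    maxOutputTokens: 1024,    // AI SDK v5 name; not maxTokens
    topP: 0.9,
    stopSequences: ["<END>"], // hypothetical stop marker
  },
});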

runId?: string
Unique ID for this generation run. Useful for tracking and debugging purposes.

runtimeContext?: RuntimeContext
Runtime context for dependency injection and contextual information.

traceId?: string
The trace ID associated with this execution when AI tracing is enabled. Use this to correlate logs and debug execution flow.

Returns

stream: MastraAgentNetworkStream<NetworkChunkType>
A custom stream that extends ReadableStream<NetworkChunkType> with additional network-specific properties.

status: Promise<RunStatus>
A promise that resolves to the current workflow run status.

result: Promise<WorkflowResult<TState, TOutput, TSteps>>
A promise that resolves to the final workflow result.

usage: Promise<{ promptTokens: number; completionTokens: number; totalTokens: number }>
A promise that resolves to token usage statistics.
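
Reading the descriptions above as the stream carrying status, result, and usage as its additional network-specific properties, a consumption sketch might look like this:

const stream = await agent.network("Find me the weather in Tokyo.");

// Drain the chunks first (requires a runtime where ReadableStream is
// async-iterable, such as Node.js 18+).
for await (const chunk of stream) {
  console.log(chunk);
}

// The network-specific promises resolve as the run settles.
console.log(await stream.status); // workflow run status
console.log(await stream.result); // final workflow result
console.log(await stream.usage);  // { promptTokens, completionTokens, totalTokens }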
