Agent.network()

Experimental

This feature is experimental and the API may change in future releases.

The .network() method enables multi-agent collaboration and routing. This method accepts messages and optional execution options.

Usage example

import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";
import { agent1, agent2 } from "./agents";
import { workflow1 } from "./workflows";
import { tool1, tool2 } from "./tools";

const agent = new Agent({
  name: "network-agent",
  instructions:
    "You are a network agent that can help users with a variety of tasks.",
  model: openai("gpt-4o"),
  agents: {
    agent1,
    agent2,
  },
  workflows: {
    workflow1,
  },
  tools: {
    tool1,
    tool2,
  },
});

await agent.network(`
  Find me the weather in Tokyo.
  Based on the weather, plan an activity for me.
`);

Parameters

messages:

string | string[] | CoreMessage[] | AiMessageType[] | UIMessageWithMetadata[]
The messages to send to the agent. Can be a single string, an array of strings, or an array of structured message objects.

options?:

MultiPrimitiveExecutionOptions
Optional configuration for the network process.

Options

maxSteps?:

number
Maximum number of steps to run during execution.

memory?:

object
Configuration for memory. This is the preferred way to manage memory.

thread:

string | { id: string; metadata?: Record<string, any>, title?: string }
The conversation thread, as a string ID or an object with an `id` and optional `metadata` and `title`.

resource:

string
Identifier for the user or resource associated with the thread.

options?:

MemoryConfig
Configuration for memory behavior, like message history and semantic recall.
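
For example, persisting the conversation to a memory thread might look like this (a minimal sketch; the thread and resource IDs are placeholders):

await agent.network("Find me the weather in Tokyo.", {
  memory: {
    thread: { id: "thread-123", metadata: { topic: "travel" } },
    resource: "user-456",
  },
});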

tracingContext?:

TracingContext
AI tracing context for creating child spans and adding metadata. Automatically injected when using Mastra's tracing system.

currentSpan?:

AISpan
Current AI span for creating child spans and adding metadata. Use this to create custom child spans or update span attributes during execution.

tracingOptions?:

TracingOptions
Options for AI tracing configuration.

metadata?:

Record<string, any>
Metadata to add to the root trace span. Useful for adding custom attributes like user IDs, session IDs, or feature flags.
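
As an illustration, trace metadata can be attached like this (a sketch; the keys and values are arbitrary examples):

await agent.network("Plan an activity for me.", {
  tracingOptions: {
    metadata: { userId: "user-456", featureFlag: "beta" },
  },
});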

telemetry?:

TelemetrySettings
Settings for OTLP telemetry collection during streaming (not AI tracing).

isEnabled?:

boolean
Enable or disable telemetry. Disabled by default while experimental.

recordInputs?:

boolean
Enable or disable input recording. Enabled by default. You might want to disable input recording to avoid recording sensitive information.

recordOutputs?:

boolean
Enable or disable output recording. Enabled by default. You might want to disable output recording to avoid recording sensitive information.

functionId?:

string
Identifier for this function. Used to group telemetry data by function.
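
For instance, enabling telemetry while redacting inputs might look like this (a sketch; the functionId value is a placeholder):

await agent.network("Find me the weather in Tokyo.", {
  telemetry: {
    isEnabled: true,
    recordInputs: false,
    functionId: "network-weather",
  },
});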

modelSettings?:

CallSettings
Model-specific settings like temperature, maxTokens, topP, etc. These are passed to the underlying language model.

temperature?:

number
Controls randomness in the model's output. Higher values (e.g., 0.8) make the output more random, lower values (e.g., 0.2) make it more focused and deterministic.

maxRetries?:

number
Maximum number of retries for failed requests.

topP?:

number
Nucleus sampling. This is a number between 0 and 1. It is recommended to set either temperature or topP, but not both.

topK?:

number
Only sample from the top K options for each subsequent token. Used to remove 'long tail' low-probability responses.

presencePenalty?:

number
Presence penalty setting. It affects the likelihood that the model repeats information already present in the prompt. A number between -1 (increase repetition) and 1 (maximum penalty, decrease repetition).

frequencyPenalty?:

number
Frequency penalty setting. It affects the likelihood that the model repeatedly uses the same words or phrases. A number between -1 (increase repetition) and 1 (maximum penalty, decrease repetition).

stopSequences?:

string[]
Stop sequences. If set, the model will stop generating text when one of the stop sequences is generated.
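
As a sketch, constraining the model's sampling might look like this (values are illustrative; note that temperature and topP should not be set together):

await agent.network("Find me the weather in Tokyo.", {
  modelSettings: {
    temperature: 0.2,
    maxRetries: 3,
    stopSequences: ["END"],
  },
});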

runId?:

string
Unique ID for this generation run. Useful for tracking and debugging purposes.

runtimeContext?:

RuntimeContext
Runtime context for dependency injection and contextual information.
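
A runtime context can be constructed and populated before the call; a minimal sketch (the key name is an arbitrary example):

import { RuntimeContext } from "@mastra/core/runtime-context";

const runtimeContext = new RuntimeContext();
runtimeContext.set("user-tier", "pro");

await agent.network("Plan an activity for me.", { runtimeContext });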

traceId?:

string
The trace ID associated with this execution when AI tracing is enabled. Use this to correlate logs and debug execution flow.

Returns

stream:

MastraAgentNetworkStream<NetworkChunkType>
A custom stream that extends ReadableStream<NetworkChunkType> with additional network-specific properties.

status:

Promise<RunStatus>
A promise that resolves to the current workflow run status.

result:

Promise<WorkflowResult<TState, TOutput, TSteps>>
A promise that resolves to the final workflow result.

usage:

Promise<{ promptTokens: number; completionTokens: number; totalTokens: number }>
A promise that resolves to token usage statistics.
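
Putting these together, a consuming sketch (assuming the resolved value exposes the fields above, and that the stream is async-iterable, as ReadableStream is in Node 18+):

const network = await agent.network("Find me the weather in Tokyo.");

// Read network chunks as routing and execution progress
for await (const chunk of network.stream) {
  console.log(chunk);
}

console.log(await network.status); // final workflow run status
console.log(await network.usage); // { promptTokens, completionTokens, totalTokens }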