It's often the case that real-world agents need to be customized on a per-user basis: they may need access to user-specific data, API keys, or other runtime context.
Let's say you want to let your users choose the model, customize the system prompt on a per-user basis, or swap an agent's toolsets on a per-request basis.
Previously, Mastra only allowed you to define these fields statically. As of mastra@0.9.0, Mastra provides a runtime context system: a dependency injection pattern that lets you pass configuration at runtime in a type-safe way.
Use cases
Some things we've seen Mastra users do with runtime context so far:
- Feed user metadata into the system prompt so the agent is customized for the user it's responding to
- Build several dozen templated agents for a multi-location hospitality chain, using location details to customize the system prompt and tool selection
- Use a smaller model if a user is on a free plan and a larger one if they're on a paid plan
- Give the agent a different set of tools based on the user's role
How RuntimeContext works
Mastra lets you define a RuntimeContext and pass it to agent methods. You can set variables on the context and access them in your agent logic and tools.
import { RuntimeContext } from "@mastra/core/di";

// Define the shape of your runtime context
type WeatherContext = {
  "temperature-scale": "celsius" | "fahrenheit";
};

const runtimeContext = new RuntimeContext<WeatherContext>();
runtimeContext.set("temperature-scale", "celsius");

const response = await agent.generate("What's the weather like today?", {
  runtimeContext,
});
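Because the context travels with each call rather than with the agent definition, two requests can hit the same agent with different values. Here's a minimal sketch reusing the agent and context type from above:

// Another request can pass a different value without reconfiguring the agent
const fahrenheitContext = new RuntimeContext<WeatherContext>();
fahrenheitContext.set("temperature-scale", "fahrenheit");

await agent.generate("What's the weather like today?", {
  runtimeContext: fahrenheitContext,
});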
You can use runtimeContext in your agent configuration functions:
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";

// celsiusTools and fahrenheitTools are toolsets you've defined elsewhere
const dynamicAgent = new Agent({
  name: "dynamic-agent",
  tools: ({ runtimeContext }) => {
    // Use runtimeContext.get() to access variables
    const temperatureScale = runtimeContext.get("temperature-scale");
    return temperatureScale === "celsius" ? celsiusTools : fahrenheitTools;
  },
  instructions: ({ runtimeContext }) => {
    const temperatureScale = runtimeContext.get("temperature-scale");
    return `You are assisting with ${temperatureScale} temperature.`;
  },
  model: ({ runtimeContext }) => {
    return runtimeContext.get("preferFast")
      ? openai("gpt-4o-mini")
      : openai("gpt-4o");
  },
});
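The model function above reads a preferFast key that isn't part of WeatherContext. Here's a minimal sketch of calling this agent, assuming a context type that extends WeatherContext with a preferFast boolean (the DynamicAgentContext name is just for this example):

// Hypothetical context type: WeatherContext plus a preferFast flag
type DynamicAgentContext = WeatherContext & { preferFast: boolean };

const ctx = new RuntimeContext<DynamicAgentContext>();
ctx.set("temperature-scale", "celsius");
ctx.set("preferFast", true); // e.g. a free-plan user gets the smaller model

const reply = await dynamicAgent.generate("Is it warm outside?", {
  runtimeContext: ctx,
});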
And in tools:
import { createTool } from "@mastra/core/tools";
import { z } from "zod";

export const weatherTool = createTool({
  id: "getWeather",
  description: "Get the current weather for a location",
  inputSchema: z.object({
    location: z.string().describe("The location to get weather for"),
  }),
  execute: async ({ context, runtimeContext }) => {
    const temperatureUnit = runtimeContext.get("temperature-scale");
    // ... use temperatureUnit in your logic
  },
});
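A tool reads the same runtimeContext that was passed to the agent call, so the value the caller sets is what the tool sees. Here's a minimal sketch wiring the tool into an agent; the agent name, instructions, and model are placeholders, and RuntimeContext, WeatherContext, and weatherTool come from the snippets above:

import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";

const weatherAgent = new Agent({
  name: "weather-agent",
  instructions: "Answer weather questions using the getWeather tool.",
  model: openai("gpt-4o-mini"),
  tools: { weatherTool },
});

const runtimeContext = new RuntimeContext<WeatherContext>();
runtimeContext.set("temperature-scale", "fahrenheit");

// weatherTool's execute() will read "fahrenheit" from this context
await weatherAgent.generate("What's the weather in Boston?", { runtimeContext });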
Kudos
Thanks to @mozharovsky for filing a detailed issue and contributing an initial implementation.