It's often the case that real-world agents need to be customized on a per-user basis: they may need access to user-specific data, API keys, or other runtime context.
Say you want to let a user choose the model, customize the system prompt per user, or swap an agent's toolsets per request. Previously, Mastra only let you define these fields statically. As of mastra@0.9.0, Mastra provides a runtime context system: a dependency injection pattern that lets you pass configuration at runtime in a type-safe way.
Use cases
Some things we've seen Mastra users do with runtime context so far:
- Run user metadata through an LLM prompt to customize the system prompt for the agent responding to that user
- Build several dozen templated agents for a multi-location hospitality chain, using location details to customize the system prompt and tool selection
- Use a smaller model if a user is on a free plan and a larger one if they're on a paid plan
- Give the agent a different set of tools based on the user's role (both sketched below)
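The last two might look roughly like the sketch below. The `plan` and `role` context keys and the `adminTools` / `readOnlyTools` toolsets are placeholders for whatever your app defines, not part of Mastra:

```typescript
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";

const supportAgent = new Agent({
  name: "support-agent",
  instructions: "You are a helpful support agent.",
  // Smaller model on the free plan, larger model on the paid plan
  model: ({ runtimeContext }) =>
    runtimeContext.get("plan") === "paid" ? openai("gpt-4o") : openai("gpt-4o-mini"),
  // Different toolsets depending on the user's role
  // (adminTools and readOnlyTools are assumed to be defined elsewhere)
  tools: ({ runtimeContext }) =>
    runtimeContext.get("role") === "admin" ? adminTools : readOnlyTools,
});
```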
How RuntimeContext works
Mastra lets you define a RuntimeContext and pass it to agent methods. You can set variables on the context and access them in your agent logic and tools.
```typescript
import { RuntimeContext } from "@mastra/core/di";

// Define the shape of your runtime context
type WeatherContext = {
  "temperature-scale": "celsius" | "fahrenheit";
};

const runtimeContext = new RuntimeContext<WeatherContext>();
runtimeContext.set("temperature-scale", "celsius");

const response = await agent.generate("What's the weather like today?", {
  runtimeContext,
});
```
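Because the context is just an object you construct, a common pattern is to build a fresh one per request from your own user data. A minimal sketch, assuming a user record with a `prefersCelsius` flag (your own data, not part of Mastra):

```typescript
// Hypothetical per-request wiring: `user` comes from your own auth/session layer
async function answerForUser(user: { prefersCelsius: boolean }, message: string) {
  const runtimeContext = new RuntimeContext<WeatherContext>();
  runtimeContext.set("temperature-scale", user.prefersCelsius ? "celsius" : "fahrenheit");

  return agent.generate(message, { runtimeContext });
}
```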
You can use runtimeContext in your agent configuration functions:
```typescript
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";

const dynamicAgent = new Agent({
  name: "dynamic-agent",
  tools: ({ runtimeContext }) => {
    // Use runtimeContext.get() to access variables
    const temperatureScale = runtimeContext.get("temperature-scale");
    // celsiusTools and fahrenheitTools are toolsets defined elsewhere
    return temperatureScale === "celsius" ? celsiusTools : fahrenheitTools;
  },
  instructions: ({ runtimeContext }) => {
    const temperatureScale = runtimeContext.get("temperature-scale");
    return `You are assisting with ${temperatureScale} temperature.`;
  },
  model: ({ runtimeContext }) => {
    return runtimeContext.get("preferFast")
      ? openai("gpt-4o-mini")
      : openai("gpt-4o");
  },
});
```
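These functions are resolved on each call with the runtimeContext you pass in, so two requests with different contexts get a different model, prompt, and toolset. For example, assuming a context shape that also includes a `preferFast` boolean (an illustrative extension of the weather example):

```typescript
// Assumed context shape for this example
type AgentContext = {
  "temperature-scale": "celsius" | "fahrenheit";
  preferFast: boolean;
};

const runtimeContext = new RuntimeContext<AgentContext>();
runtimeContext.set("temperature-scale", "fahrenheit");
runtimeContext.set("preferFast", true); // routes to gpt-4o-mini in the model function above

const response = await dynamicAgent.generate("What's a comfortable room temperature?", {
  runtimeContext,
});
```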
And in tools:
```typescript
import { createTool } from "@mastra/core/tools";
import { z } from "zod";

export const weatherTool = createTool({
  id: "getWeather",
  description: "Get the current weather for a location",
  inputSchema: z.object({
    location: z.string().describe("The location to get weather for"),
  }),
  execute: async ({ context, runtimeContext }) => {
    const temperatureUnit = runtimeContext.get("temperature-scale");
    // ... use temperatureUnit in your logic
  },
});
```
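The runtimeContext you pass to generate() is the same one the tool's execute receives. End to end, that might look like the sketch below (the agent definition is illustrative):

```typescript
import { RuntimeContext } from "@mastra/core/di";
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";

const weatherAgent = new Agent({
  name: "weather-agent",
  instructions: "Answer weather questions using the getWeather tool.",
  model: openai("gpt-4o-mini"),
  tools: { weatherTool },
});

const runtimeContext = new RuntimeContext<WeatherContext>();
runtimeContext.set("temperature-scale", "celsius");

// When the agent calls weatherTool, its execute() receives this same runtimeContext
await weatherAgent.generate("What's the weather in Berlin?", { runtimeContext });
```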
Kudos
Thanks to @mozharovsky for filing a detailed issue and contributing an initial implementation.