Runtime Context

This example demonstrates how to use runtime context to create a single support agent that dynamically adapts its instructions, model selection, tools, memory configuration, input/output processing, and quality scoring based on the user's subscription tier.

Prerequisites

This example uses OpenAI models through Mastra's model router. Make sure to add OPENAI_API_KEY to your .env file.

.env
OPENAI_API_KEY=<your-api-key>

Creating the Dynamic Agent

Create an agent that adapts all its properties based on the user's subscription tier:

src/mastra/agents/support-agent.ts
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { LibSQLStore } from "@mastra/libsql";
import { TokenLimiterProcessor } from "@mastra/core/processors";
import { RuntimeContext } from "@mastra/core/runtime-context";
import {
  knowledgeBase,
  ticketSystem,
  advancedAnalytics,
  customIntegration,
} from "../tools/support-tools";
import { CharacterLimiterProcessor } from "../processors/character-limiter";
import { responseQualityScorer } from "../scorers/response-quality";

export type UserTier = "free" | "pro" | "enterprise";
export type SupportRuntimeContext = {
  "user-tier": UserTier;
  language: "en" | "es" | "ja" | "fr";
};

export const supportAgent = new Agent({
  name: "dynamic-support-agent",
  description: "AI support agent that adapts to user subscription tiers",

  instructions: async ({
    runtimeContext,
  }: {
    runtimeContext: RuntimeContext<SupportRuntimeContext>;
  }) => {
    const userTier = runtimeContext.get("user-tier");
    const language = runtimeContext.get("language");

    return `You are a customer support agent for our SaaS platform.
The current user is on the ${userTier} tier and prefers ${language} language.

Support guidance based on tier:
${userTier === "free" ? "- Provide basic support and documentation links" : ""}
${userTier === "pro" ? "- Offer detailed technical support and best practices" : ""}
${userTier === "enterprise" ? "- Provide priority support with custom solutions and dedicated assistance" : ""}

Always respond in ${language} language.
${userTier === "enterprise" ? "You have access to custom integrations and advanced analytics." : ""}`;
  },

  model: ({
    runtimeContext,
  }: {
    runtimeContext: RuntimeContext<SupportRuntimeContext>;
  }) => {
    const userTier = runtimeContext.get("user-tier");

    if (userTier === "enterprise") return "openai/gpt-5";
    if (userTier === "pro") return "openai/gpt-4o";
    return "openai/gpt-4o-mini";
  },

  tools: ({
    runtimeContext,
  }: {
    runtimeContext: RuntimeContext<SupportRuntimeContext>;
  }) => {
    const userTier = runtimeContext.get("user-tier");
    const baseTools = [knowledgeBase, ticketSystem];

    if (userTier === "pro" || userTier === "enterprise") {
      baseTools.push(advancedAnalytics);
    }

    if (userTier === "enterprise") {
      baseTools.push(customIntegration);
    }

    return baseTools;
  },

  memory: ({
    runtimeContext,
  }: {
    runtimeContext: RuntimeContext<SupportRuntimeContext>;
  }) => {
    const userTier = runtimeContext.get("user-tier");

    switch (userTier) {
      case "enterprise":
        return new Memory({
          storage: new LibSQLStore({ url: "file:enterprise.db" }),
          options: {
            semanticRecall: { topK: 15, messageRange: 8 },
            workingMemory: { enabled: true },
          },
        });
      case "pro":
        return new Memory({
          storage: new LibSQLStore({ url: "file:pro.db" }),
          options: {
            semanticRecall: { topK: 8, messageRange: 4 },
            workingMemory: { enabled: true },
          },
        });
      case "free":
      default:
        return new Memory({
          storage: new LibSQLStore({ url: "file:free.db" }),
          options: {
            semanticRecall: { topK: 3, messageRange: 2 },
            workingMemory: { enabled: false },
          },
        });
    }
  },

  inputProcessors: ({
    runtimeContext,
  }: {
    runtimeContext: RuntimeContext<SupportRuntimeContext>;
  }) => {
    const userTier = runtimeContext.get("user-tier");

    switch (userTier) {
      case "enterprise":
        return [];
      case "pro":
        return [new CharacterLimiterProcessor(2000)];
      case "free":
      default:
        return [new CharacterLimiterProcessor(500)];
    }
  },

  outputProcessors: ({
    runtimeContext,
  }: {
    runtimeContext: RuntimeContext<SupportRuntimeContext>;
  }) => {
    const userTier = runtimeContext.get("user-tier");

    switch (userTier) {
      case "enterprise":
        return [
          new TokenLimiterProcessor({ limit: 2000, strategy: "truncate" }),
        ];
      case "pro":
        return [
          new TokenLimiterProcessor({ limit: 500, strategy: "truncate" }),
        ];
      case "free":
      default:
        return [
          new TokenLimiterProcessor({ limit: 100, strategy: "truncate" }),
        ];
    }
  },

  scorers: ({
    runtimeContext,
  }: {
    runtimeContext: RuntimeContext<SupportRuntimeContext>;
  }) => {
    const userTier = runtimeContext.get("user-tier");

    if (userTier === "enterprise") {
      return [responseQualityScorer];
    }

    return [];
  },
});

See Agent for a full list of configuration options.
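The agent also imports its tools from ../tools/support-tools, along with a custom CharacterLimiterProcessor and responseQualityScorer, none of which are shown in this example. As a rough illustration only (the tool id, schemas, and stubbed lookup below are placeholder assumptions, not part of the original example), a tool such as knowledgeBase could be defined with Mastra's createTool helper:

src/mastra/tools/support-tools.ts
import { createTool } from "@mastra/core/tools";
import { z } from "zod";

// Illustrative sketch of a minimal knowledge-base lookup tool.
// The id, schemas, and stub result are assumptions for this sketch only.
export const knowledgeBase = createTool({
  id: "knowledge-base",
  description: "Search the documentation and knowledge base for relevant articles",
  inputSchema: z.object({
    query: z.string().describe("The user's question or search terms"),
  }),
  outputSchema: z.object({
    articles: z.array(z.string()),
  }),
  execute: async ({ context }) => {
    // Replace with a real lookup against your documentation index.
    return { articles: [`Stub result for: ${context.query}`] };
  },
});

// ticketSystem, advancedAnalytics, and customIntegration would follow the same pattern.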

Registering the Agent

Register the agent in your main Mastra instance:

src/mastra/index.ts
import { Mastra } from "@mastra/core/mastra";
import { supportAgent } from "./agents/support-agent";

export const mastra = new Mastra({
  agents: { supportAgent },
});
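In the usage examples that follow, the tier and language are set by hand on a RuntimeContext instance. If you serve the agent through Mastra's HTTP server instead, the same values can be populated per request in server middleware. The sketch below is an assumption rather than part of the original example: the x-user-tier header is a placeholder, and the middleware shape follows the Hono-style handlers Mastra's server accepts, so check the server middleware docs for your version.

src/mastra/index.ts (variant that derives the tier from the request)
import { Mastra } from "@mastra/core/mastra";
import { supportAgent, type UserTier } from "./agents/support-agent";

export const mastra = new Mastra({
  agents: { supportAgent },
  server: {
    middleware: [
      async (c, next) => {
        // Assumption: the client sends its tier in an "x-user-tier" header.
        const tier = (c.req.header("x-user-tier") ?? "free") as UserTier;
        const runtimeContext = c.get("runtimeContext");
        runtimeContext.set("user-tier", tier);
        await next();
      },
    ],
  },
});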

Usage Examples

Free Tier User

src/examples/free-tier-usage.ts
import "dotenv/config";
import { mastra } from "../mastra";
import { RuntimeContext } from "@mastra/core/runtime-context";
import type { SupportRuntimeContext } from "../mastra/agents/support-agent";

const agent = mastra.getAgent("supportAgent");
const runtimeContext = new RuntimeContext<SupportRuntimeContext>();

runtimeContext.set("user-tier", "free");
runtimeContext.set("language", "en");

const response = await agent.generate(
  "I'm having trouble with API rate limits. Can you help?",
  { runtimeContext },
);

console.log(response.text);

Pro Tier User

src/examples/pro-tier-usage.ts
import "dotenv/config";
import { mastra } from "../mastra";
import { RuntimeContext } from "@mastra/core/runtime-context";
import type { SupportRuntimeContext } from "../mastra/agents/support-agent";

const agent = mastra.getAgent("supportAgent");
const runtimeContext = new RuntimeContext<SupportRuntimeContext>();

runtimeContext.set("user-tier", "pro");
runtimeContext.set("language", "es");

const response = await agent.generate(
  "I need detailed analytics on my API usage patterns and optimization recommendations.",
  { runtimeContext },
);

console.log(response.text);

Enterprise Tier User

src/examples/enterprise-tier-usage.ts
import "dotenv/config";
import { mastra } from "../mastra";
import { RuntimeContext } from "@mastra/core/runtime-context";
import type { SupportRuntimeContext } from "../mastra/agents/support-agent";

const agent = mastra.getAgent("supportAgent");
const runtimeContext = new RuntimeContext<SupportRuntimeContext>();

runtimeContext.set("user-tier", "enterprise");
runtimeContext.set("language", "ja");

const response = await agent.generate(
  "I need to integrate our custom webhook system with your platform and get real-time analytics on our usage across multiple environments.",
  { runtimeContext },
);

console.log(response.text);
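All three scripts call the same registered agent; only the runtime context differs, which is what switches the model (openai/gpt-4o-mini, openai/gpt-4o, openai/gpt-5), the available tools, the input/output limits, the response language, and whether quality scoring runs. Assuming a TypeScript runner such as tsx is installed, you could execute any of them directly, for example:

npx tsx src/examples/enterprise-tier-usage.ts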

Key Benefits

This runtime context approach provides:

  • Cost Optimization: Enterprise users get premium models and features while free users get basic functionality
  • Resource Management: Input and output limits prevent abuse on lower subscription tiers
  • Quality Assurance: Response quality scoring only where it adds business value (enterprise tier)
  • Scalable Architecture: A single agent definition serves all user segments without code duplication
  • Personalization: Language preferences and tier-specific instructions create tailored experiences