
Agent.getLLM()

The .getLLM() method retrieves the language model instance configured for an agent, resolving it first if the model is defined as a function. It provides access to the underlying LLM that powers the agent's capabilities.

Usage example

const llm = await agent.getLLM();

Parameters

options?:

{ runtimeContext?: RuntimeContext; model?: MastraLanguageModel | DynamicArgument<MastraLanguageModel> }
= {}
Optional configuration object containing the runtime context and an optional model override.

Returns

llm:

MastraLLMV1 | Promise<MastraLLMV1>
The language model instance configured for the agent, either as a direct instance or a promise that resolves to the LLM.
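Because the return type is MastraLLMV1 | Promise<MastraLLMV1>, callers that don't know whether the agent's model was configured statically or as a function can normalize the result before use. The sketch below is illustrative only: resolveLLM and fakeLLM are hypothetical stand-ins, not part of the Mastra API; the pattern simply relies on Promise.resolve passing plain values through and awaiting promises alike.

```typescript
// Hypothetical helper: normalizes a `T | Promise<T>` value (such as the
// return of getLLM()) into a Promise<T>. Promise.resolve passes plain
// values through unchanged and flattens promises.
async function resolveLLM<T>(llm: T | Promise<T>): Promise<T> {
  return Promise.resolve(llm);
}

// Stand-in object playing the role of an LLM instance (assumption,
// not a real MastraLLMV1).
const fakeLLM = { modelId: "gpt-4" };

async function demo(): Promise<boolean> {
  const direct = await resolveLLM(fakeLLM); // plain value
  const deferred = await resolveLLM(Promise.resolve(fakeLLM)); // promise
  // Both paths yield the same underlying instance.
  return direct.modelId === deferred.modelId;
}
```

In practice `await agent.getLLM()` already covers both cases, since awaiting a non-promise value simply returns it.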

Extended usage example

const llm = await agent.getLLM({
runtimeContext: new RuntimeContext(),
model: openai("gpt-4"),
});

Options parameters

runtimeContext?:

RuntimeContext
= new RuntimeContext()
Runtime context for dependency injection and contextual information.

model?:

MastraLanguageModel | DynamicArgument<MastraLanguageModel>
Optional model override. If provided, this model will be used instead of the agent's configured model.
