Agent.getLLM()
The .getLLM() method retrieves the language model instance configured for an agent, resolving it first if the model was configured as a function. You can also pass a request-scoped model override, which applies to that call only and does not mutate the agent's configured model.
Usage example
await agent.getLLM()
await agent.getLLM({
model: 'openai/gpt-5.4',
})
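The resolution behavior described above can be sketched as follows. This is an illustrative sketch, not Mastra's actual internals: the type aliases and the resolveModel helper are assumptions made for the example.

```typescript
// Illustrative sketch only: these types and resolveModel are assumptions
// for the example, not Mastra's actual internals.
type MastraModelConfig = string;
type DynamicArgument<T> =
  | T
  | ((args: { requestContext: Map<string, unknown> }) => T | Promise<T>);

// Resolve a configured model: if it is a function, call it with the
// request context; otherwise return it as-is.
async function resolveModel(
  configured: DynamicArgument<MastraModelConfig>,
  requestContext: Map<string, unknown> = new Map(),
): Promise<MastraModelConfig> {
  return typeof configured === 'function'
    ? await configured({ requestContext })
    : configured;
}
```

A statically configured model resolves to itself, while a function is invoked per call with the request context, which is why getLLM may need to be awaited.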
Parameters
options?: { requestContext?: RequestContext; model?: MastraModelConfig | DynamicArgument<MastraModelConfig> } = {}
Optional configuration object containing the request context and an optional request-scoped model override.

requestContext?: RequestContext
Request context for dependency injection and contextual information.

model?: MastraModelConfig | DynamicArgument<MastraModelConfig>
Optional request-scoped model override. The agent's configured model is not mutated.
Returns
llm: MastraLLMV1 | Promise<MastraLLMV1>
The language model instance configured for the agent, either as a direct instance or as a promise that resolves to the LLM.
Extended usage example
await agent.getLLM({
requestContext: new RequestContext(),
model: 'openai/gpt-5.4',
})
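The override semantics in the extended example can be sketched with a minimal stand-in class. SketchAgent below is hypothetical, not Mastra's Agent class; it only illustrates that a per-call model override never reassigns the configured model.

```typescript
// Illustrative sketch only: SketchAgent is a hypothetical stand-in for
// Mastra's Agent, showing request-scoped override semantics.
class SketchAgent {
  constructor(private readonly configuredModel: string) {}

  // The per-call override, when provided, wins for this call only;
  // the configured model is never reassigned.
  async getLLM(options: { model?: string } = {}): Promise<{ model: string }> {
    return { model: options.model ?? this.configuredModel };
  }

  get model(): string {
    return this.configuredModel;
  }
}
```

After a call with an override, the agent's configured model still reads back unchanged, so concurrent requests with different overrides cannot interfere with each other.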