Agent.getLLM()
The .getLLM() method retrieves the language model instance configured for an agent, resolving it if it's a function. This method provides access to the underlying LLM that powers the agent's capabilities.
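The "resolving it if it's a function" behavior can be sketched in plain TypeScript. The `resolveDynamic` helper below is illustrative only, not Mastra's actual implementation; strings stand in for model instances so the sketch is self-contained:

```typescript
// Illustrative sketch: a DynamicArgument<T> is either a plain value
// or a function (sync or async) that produces one.
type DynamicArgument<T> = T | ((ctx: unknown) => T | Promise<T>);

// Hypothetical resolver mirroring what a method like getLLM() might do
// internally: call the argument if it is a function, otherwise return it as-is.
async function resolveDynamic<T>(
  arg: DynamicArgument<T>,
  ctx: unknown
): Promise<T> {
  return typeof arg === "function"
    ? await (arg as (ctx: unknown) => T | Promise<T>)(ctx)
    : arg;
}
```

Both `resolveDynamic("gpt-4", ctx)` and `resolveDynamic(() => "gpt-4", ctx)` resolve to the same value, which is why callers can pass either a model instance or a factory function.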
Usage example
const llm = await agent.getLLM();
Parameters
options?:
{ runtimeContext?: RuntimeContext; model?: MastraLanguageModel | DynamicArgument<MastraLanguageModel> }
= {}
Optional configuration object containing runtime context and optional model override.
Returns
llm:
MastraLLMV1 | Promise<MastraLLMV1>
The language model instance configured for the agent, either as a direct instance or a promise that resolves to the LLM.
Extended usage example
const llm = await agent.getLLM({
runtimeContext: new RuntimeContext(),
model: openai('gpt-4')
});
Options parameters
runtimeContext?:
RuntimeContext
= new RuntimeContext()
Runtime context for dependency injection and contextual information.
model?:
MastraLanguageModel | DynamicArgument<MastraLanguageModel>
Optional model override. If provided, this model will be used instead of the agent's configured model.
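Because `model` accepts a `DynamicArgument`, the override can also be a function that selects a model based on the runtime context. A hypothetical selector is sketched below; the `tier` key and the model names are illustrative, and strings stand in for `MastraLanguageModel` instances (in real usage you would return e.g. `openai('gpt-4')`) so the sketch stays self-contained:

```typescript
// Illustrative: choose a model identifier from values stored in the
// runtime context. A plain Map stands in for RuntimeContext here.
function pickModel(ctx: Map<string, unknown>): string {
  // Route premium-tier requests to a larger model, everything else
  // to a cheaper default.
  return ctx.get("tier") === "premium" ? "gpt-4" : "gpt-4o-mini";
}
```

A function like this could then be passed as the `model` option, letting `getLLM()` resolve the override per call rather than at agent construction time.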