getLLM()

The .getLLM() method retrieves the language model instance configured for an agent, resolving it first if it was configured as a function of the runtime context. This method provides access to the underlying LLM that powers the agent's capabilities.
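The phrase "resolving it if it's a function" refers to dynamic arguments: the model may be configured either as a plain value or as a function of the runtime context. The resolution step can be sketched roughly as follows (an illustration only; `resolveMaybeFn` and the simplified `DynamicArgument` shape here are assumptions, not Mastra's actual internals):

```typescript
// Simplified sketch (assumption): DynamicArgument<T> is either a plain value
// or a function of the runtime context returning the value (possibly async).
type DynamicArgument<T> =
  | T
  | ((args: { runtimeContext: unknown }) => T | Promise<T>);

// Hypothetical helper illustrating the kind of resolution getLLM() performs:
// a function argument is invoked with the runtime context and awaited;
// a plain value is returned as-is.
async function resolveMaybeFn<T>(
  value: DynamicArgument<T>,
  runtimeContext: unknown,
): Promise<T> {
  if (typeof value === "function") {
    return await (
      value as (args: { runtimeContext: unknown }) => T | Promise<T>
    )({ runtimeContext });
  }
  return value;
}
```

Either form resolves to the same thing: `resolveMaybeFn("gpt-4", ctx)` and `resolveMaybeFn(() => "gpt-4", ctx)` both yield `"gpt-4"`.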

Usage example

const llm = await agent.getLLM();

Parameters

options?: { runtimeContext?: RuntimeContext; model?: MastraLanguageModel | DynamicArgument<MastraLanguageModel> } = {}

Optional configuration object containing the runtime context and an optional model override.

Extended usage example

import { openai } from "@ai-sdk/openai";
import { RuntimeContext } from "@mastra/core/runtime-context";

const llm = await agent.getLLM({
  runtimeContext: new RuntimeContext(),
  model: openai("gpt-4"),
});

Options parameters

runtimeContext?: RuntimeContext = new RuntimeContext()

Runtime context for dependency injection and contextual information.

model?: MastraLanguageModel | DynamicArgument<MastraLanguageModel>

Optional model override. If provided, this model is used instead of the agent's configured model.

Returns

llm: MastraLLMBase | Promise<MastraLLMBase>

The language model instance configured for the agent, returned either directly or as a promise that resolves to the instance; callers should `await` the result, as in the examples above.