# Agent.getLLM()

The `.getLLM()` method retrieves the language model instance configured for an agent, resolving it if it is provided as a function. This method provides access to the underlying LLM that powers the agent's capabilities.

## Usage example

```typescript
await agent.getLLM();
```

## Parameters

**options?:** (`{ requestContext?: RequestContext; model?: MastraLanguageModel | DynamicArgument<MastraLanguageModel> }`): Optional configuration object containing the request context and an optional model override. (Default: `{}`)

## Returns

**llm:** (`MastraLLMV1 | Promise<MastraLLMV1>`): The language model instance configured for the agent, either as a direct instance or a promise that resolves to the LLM.

## Extended usage example

```typescript
await agent.getLLM({
  requestContext: new RequestContext(),
  model: "openai/gpt-5.1",
});
```

### Options parameters

**requestContext?:** (`RequestContext`): Request context for dependency injection and contextual information. (Default: `new RequestContext()`)

**model?:** (`MastraLanguageModel | DynamicArgument<MastraLanguageModel>`): Optional model override. If provided, this model is used instead of the agent's configured model.

## Related

- [Agents overview](https://mastra.ai/docs/agents/overview)
- [Request Context](https://mastra.ai/docs/server/request-context)
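To illustrate the "resolving it if it's a function" behavior described above, the sketch below shows how a dynamic model argument might be resolved. This is a simplified, self-contained approximation, not Mastra's actual implementation: the `RequestContext`, `MastraLanguageModel`, and `resolveModel` shapes here are hypothetical stand-ins.

```typescript
// Hypothetical stand-ins for the real Mastra types, for illustration only.
type RequestContext = Map<string, unknown>;
type MastraLanguageModel = { modelId: string };

// A dynamic argument is either a plain value or a function of the request
// context that returns the value (possibly asynchronously).
type DynamicArgument<T> =
  | T
  | ((args: { requestContext: RequestContext }) => T | Promise<T>);

// Sketch of the resolution step: if the model was configured as a function,
// call it with the request context; otherwise return it directly.
async function resolveModel(
  model: DynamicArgument<MastraLanguageModel>,
  requestContext: RequestContext = new Map()
): Promise<MastraLanguageModel> {
  return typeof model === "function" ? await model({ requestContext }) : model;
}
```

A static model is returned as-is, while a function is invoked per request, which is what allows the model to vary based on contextual information such as a tenant or feature flag stored in the request context.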