# Azure OpenAI

Azure OpenAI provides enterprise-grade access to OpenAI models through dedicated deployments with security, compliance, and SLA guarantees.

Unlike providers that use fixed model names, Azure uses deployment names that you configure in the Azure Portal.
## How Azure Deployments Work
Azure model IDs follow this pattern: `azure-openai/your-deployment-name`
The deployment name is specific to your Azure account and is chosen when you create a deployment in the Azure Portal. Common examples:

- `azure-openai/my-gpt4-deployment`
- `azure-openai/production-gpt-35-turbo`
- `azure-openai/staging-gpt-4o`
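The mapping from deployment name to model ID is purely mechanical, as this minimal sketch shows (`toModelId` is a hypothetical helper for illustration, not part of the Mastra API):

```typescript
// Hypothetical helper: a Mastra model ID for Azure is just the
// "azure-openai/" prefix followed by your deployment name.
function toModelId(deploymentName: string): string {
  return `azure-openai/${deploymentName}`;
}

console.log(toModelId("my-gpt4-deployment")); // azure-openai/my-gpt4-deployment
```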
## Setup
Create deployments in Azure OpenAI Studio. Your resource name and API key are listed in the Azure Portal under "Keys and Endpoint".
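To see how the resource name and deployment name fit together, here is a sketch of the chat-completions URL that requests against an Azure OpenAI resource target. The URL shape follows Azure's standard `https://<resource>.openai.azure.com` host and `api-version` query parameter; the helper name is hypothetical, and the default version shown matches the gateway default below.

```typescript
// Hypothetical sketch: Azure OpenAI routes requests by resource name and
// deployment name, with an api-version query parameter.
function chatCompletionsUrl(
  resourceName: string,
  deployment: string,
  apiVersion = "2024-04-01-preview",
): string {
  return (
    `https://${resourceName}.openai.azure.com/openai/deployments/` +
    `${deployment}/chat/completions?api-version=${apiVersion}`
  );
}

console.log(chatCompletionsUrl("my-openai-resource", "gpt-4-prod"));
// https://my-openai-resource.openai.azure.com/openai/deployments/gpt-4-prod/chat/completions?api-version=2024-04-01-preview
```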
## Configuration
Instantiate the gateway and pass it to Mastra. Three configuration modes are available.
### Static Deployments
Provide deployment names from the Azure Portal.
```typescript
import { Mastra } from "@mastra/core";
import { AzureOpenAIGateway } from "@mastra/core/llm";

export const mastra = new Mastra({
  gateways: [
    new AzureOpenAIGateway({
      resourceName: "my-openai-resource",
      apiKey: process.env.AZURE_API_KEY!,
      deployments: ["gpt-4-prod", "gpt-35-turbo-dev"],
    }),
  ],
});
```
### Dynamic Discovery

Provide Management API credentials. The gateway queries the Azure Management API to list your deployments.
```typescript
import { Mastra } from "@mastra/core";
import { AzureOpenAIGateway } from "@mastra/core/llm";

export const mastra = new Mastra({
  gateways: [
    new AzureOpenAIGateway({
      resourceName: "my-openai-resource",
      apiKey: process.env.AZURE_API_KEY!,
      management: {
        tenantId: process.env.AZURE_TENANT_ID!,
        clientId: process.env.AZURE_CLIENT_ID!,
        clientSecret: process.env.AZURE_CLIENT_SECRET!,
        subscriptionId: process.env.AZURE_SUBSCRIPTION_ID!,
        resourceGroup: "my-resource-group",
      },
    }),
  ],
});
```
The Service Principal requires the "Cognitive Services User" role. See the Azure documentation for assigning roles.
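For orientation, this sketch builds the Azure Resource Manager URL that a list-deployments call targets, which is what the `management.*` fields above identify. The `Microsoft.CognitiveServices` provider path is Azure's standard one for OpenAI resources; the helper name and the `api-version` value are assumptions for illustration.

```typescript
// Hypothetical sketch of the ARM endpoint dynamic discovery would query.
// The management credentials are exchanged for an Azure AD token that
// authorizes this call; the api-version shown is an assumption.
function listDeploymentsUrl(
  subscriptionId: string,
  resourceGroup: string,
  resourceName: string,
): string {
  return (
    `https://management.azure.com/subscriptions/${subscriptionId}` +
    `/resourceGroups/${resourceGroup}` +
    `/providers/Microsoft.CognitiveServices/accounts/${resourceName}` +
    `/deployments?api-version=2023-05-01`
  );
}
```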
### Manual Deployment Names

Provide only the resource name and API key, then specify deployment names when creating agents. This mode offers no IDE autocomplete.
```typescript
import { Mastra } from "@mastra/core";
import { AzureOpenAIGateway } from "@mastra/core/llm";

export const mastra = new Mastra({
  gateways: [
    new AzureOpenAIGateway({
      resourceName: "my-openai-resource",
      apiKey: process.env.AZURE_API_KEY!,
    }),
  ],
});
```
## Configuration Reference
| Option | Type | Required | Description |
|---|---|---|---|
| `resourceName` | `string` | Yes | Azure OpenAI resource name |
| `apiKey` | `string` | Yes | API key from "Keys and Endpoint" |
| `apiVersion` | `string` | No | API version (default: `2024-04-01-preview`) |
| `deployments` | `string[]` | No | Deployment names for static mode |
| `management` | `object` | No | Management API credentials |
| `management.tenantId` | `string` | Yes* | Azure AD tenant ID |
| `management.clientId` | `string` | Yes* | Service Principal client ID |
| `management.clientSecret` | `string` | Yes* | Service Principal secret |
| `management.subscriptionId` | `string` | Yes* | Azure subscription ID |
| `management.resourceGroup` | `string` | Yes* | Resource group name |
\* Required if `management` is provided
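The "Yes*" rule above can be expressed as a small validation sketch: if a `management` object is provided at all, every one of its fields must be present. The interface and helper here are illustrative assumptions, not Mastra's internal types.

```typescript
// Hypothetical validation sketch for the "Required if management is provided"
// rule: returns the names of any missing management fields.
interface ManagementConfig {
  tenantId?: string;
  clientId?: string;
  clientSecret?: string;
  subscriptionId?: string;
  resourceGroup?: string;
}

function missingManagementFields(management: ManagementConfig): string[] {
  const required = [
    "tenantId",
    "clientId",
    "clientSecret",
    "subscriptionId",
    "resourceGroup",
  ] as const;
  // Keep only the keys whose values are absent or empty.
  return required.filter((key) => !management[key]);
}

console.log(missingManagementFields({ tenantId: "t", clientId: "c" }));
// ["clientSecret", "subscriptionId", "resourceGroup"]
```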
## Usage
```typescript
import { Agent } from "@mastra/core/agent";

const agent = new Agent({
  id: "my-agent",
  name: "My Agent",
  instructions: "You are a helpful assistant",
  model: "azure-openai/my-gpt4-deployment", // Use your Azure deployment name (autocompleted in dev mode)
});

// Generate a response
const response = await agent.generate("Hello!");

// Stream a response
const stream = await agent.stream("Tell me a story");
for await (const chunk of stream) {
  console.log(chunk);
}
```
Check Azure OpenAI model availability for region-specific options.