GitHub Models
Access 55 models from GitHub Models through Mastra's model router. Authentication is handled automatically using the GITHUB_TOKEN environment variable.
Learn more in the GitHub Models documentation.
GITHUB_TOKEN=your-github-token
import { Agent } from "@mastra/core";
const agent = new Agent({
  name: "my-agent",
  instructions: "You are a helpful assistant",
  model: "github-models/ai21-labs/ai21-jamba-1.5-large",
});
// Generate a response
const response = await agent.generate("Hello!");
// Stream a response
const stream = await agent.stream("Tell me a story");
for await (const chunk of stream.textStream) {
  console.log(chunk);
}
OpenAI Compatibility
Mastra uses the OpenAI-compatible /chat/completions endpoint. Some provider-specific features may not be available. Check the GitHub Models documentation for details.
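For reference, the sketch below shows roughly the request shape Mastra sends on your behalf: a plain fetch against the OpenAI-compatible endpoint, using the https://models.github.ai/inference base URL that also appears in the custom-header example further down. It is illustrative only, not part of the Mastra API, and it assumes the standard chat-completions response payload.

// Illustrative only: a direct call to the OpenAI-compatible endpoint.
// Mastra handles this for you; shown here to clarify what "OpenAI-compatible" means.
const res = await fetch("https://models.github.ai/inference/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.GITHUB_TOKEN}`,
  },
  body: JSON.stringify({
    model: "ai21-labs/ai21-jamba-1.5-large",
    messages: [{ role: "user", content: "Hello!" }],
  }),
});

const data = await res.json();
console.log(data.choices[0].message.content);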
Models
| Model | Image | Audio | Video | Tools | Streaming | Context Window (tokens) |
|---|---|---|---|---|---|---|
| github-models/core42/jais-30b-chat | ✗ | ✗ | ✗ | ✓ | ✗ | 8,192 |
| github-models/xai/grok-3 | ✗ | ✗ | ✗ | ✓ | ✗ | 128,000 |
| github-models/xai/grok-3-mini | ✗ | ✗ | ✗ | ✓ | ✗ | 128,000 |
| github-models/cohere/cohere-command-r-08-2024 | ✗ | ✗ | ✗ | ✓ | ✗ | 128,000 |
| github-models/cohere/cohere-command-a | ✗ | ✗ | ✗ | ✓ | ✗ | 128,000 |
| github-models/cohere/cohere-command-r-plus-08-2024 | ✗ | ✗ | ✗ | ✓ | ✗ | 128,000 |
| github-models/cohere/cohere-command-r | ✗ | ✗ | ✗ | ✓ | ✗ | 128,000 |
| github-models/cohere/cohere-command-r-plus | ✗ | ✗ | ✗ | ✓ | ✗ | 128,000 |
| github-models/deepseek/deepseek-r1-0528 | ✗ | ✗ | ✗ | ✓ | ✗ | 65,536 |
| github-models/deepseek/deepseek-r1 | ✗ | ✗ | ✗ | ✓ | ✗ | 65,536 |
| github-models/deepseek/deepseek-v3-0324 | ✗ | ✗ | ✗ | ✓ | ✗ | 128,000 |
| github-models/mistral-ai/mistral-medium-2505 | ✓ | ✗ | ✗ | ✓ | ✗ | 128,000 |
| github-models/mistral-ai/ministral-3b | ✗ | ✗ | ✗ | ✓ | ✗ | 128,000 |
| github-models/mistral-ai/mistral-nemo | ✗ | ✗ | ✗ | ✓ | ✗ | 128,000 |
| github-models/mistral-ai/mistral-large-2411 | ✗ | ✗ | ✗ | ✓ | ✗ | 128,000 |
| github-models/mistral-ai/codestral-2501 | ✗ | ✗ | ✗ | ✓ | ✗ | 32,000 |
| github-models/mistral-ai/mistral-small-2503 | ✓ | ✗ | ✗ | ✓ | ✗ | 128,000 |
| github-models/microsoft/phi-3-medium-128k-instruct | ✗ | ✗ | ✗ | ✓ | ✗ | 128,000 |
| github-models/microsoft/phi-3-mini-4k-instruct | ✗ | ✗ | ✗ | ✓ | ✗ | 4,096 |
| github-models/microsoft/phi-3-small-128k-instruct | ✗ | ✗ | ✗ | ✓ | ✗ | 128,000 |
| github-models/microsoft/phi-3.5-vision-instruct | ✓ | ✗ | ✗ | ✓ | ✗ | 128,000 |
| github-models/microsoft/phi-4 | ✗ | ✗ | ✗ | ✓ | ✗ | 16,000 |
| github-models/microsoft/phi-4-mini-reasoning | ✗ | ✗ | ✗ | ✓ | ✗ | 128,000 |
| github-models/microsoft/phi-3-small-8k-instruct | ✗ | ✗ | ✗ | ✓ | ✗ | 8,192 |
| github-models/microsoft/phi-3.5-mini-instruct | ✗ | ✗ | ✗ | ✓ | ✗ | 128,000 |
| github-models/microsoft/phi-4-multimodal-instruct | ✓ | ✓ | ✗ | ✓ | ✗ | 128,000 |
| github-models/microsoft/phi-3-mini-128k-instruct | ✗ | ✗ | ✗ | ✓ | ✗ | 128,000 |
| github-models/microsoft/phi-3.5-moe-instruct | ✗ | ✗ | ✗ | ✓ | ✗ | 128,000 |
| github-models/microsoft/phi-4-mini-instruct | ✗ | ✗ | ✗ | ✓ | ✗ | 128,000 |
| github-models/microsoft/phi-3-medium-4k-instruct | ✗ | ✗ | ✗ | ✓ | ✗ | 4,096 |
| github-models/microsoft/phi-4-reasoning | ✗ | ✗ | ✗ | ✓ | ✗ | 128,000 |
| github-models/microsoft/mai-ds-r1 | ✗ | ✗ | ✗ | ✓ | ✗ | 65,536 |
| github-models/openai/gpt-4.1-nano | ✓ | ✗ | ✗ | ✓ | ✗ | 128,000 |
| github-models/openai/gpt-4.1-mini | ✓ | ✗ | ✗ | ✓ | ✗ | 128,000 |
| github-models/openai/o1-preview | ✗ | ✗ | ✗ | ✗ | ✗ | 128,000 |
| github-models/openai/o3-mini | ✗ | ✗ | ✗ | ✗ | ✗ | 200,000 |
| github-models/openai/gpt-4o | ✓ | ✓ | ✗ | ✓ | ✗ | 128,000 |
| github-models/openai/gpt-4.1 | ✓ | ✗ | ✗ | ✓ | ✗ | 128,000 |
| github-models/openai/o4-mini | ✓ | ✗ | ✗ | ✗ | ✗ | 200,000 |
| github-models/openai/o1 | ✓ | ✗ | ✗ | ✗ | ✗ | 200,000 |
| github-models/openai/o1-mini | ✗ | ✗ | ✗ | ✗ | ✗ | 128,000 |
| github-models/openai/o3 | ✓ | ✗ | ✗ | ✗ | ✗ | 200,000 |
| github-models/openai/gpt-4o-mini | ✓ | ✓ | ✗ | ✓ | ✗ | 128,000 |
| github-models/meta/llama-3.2-11b-vision-instruct | ✓ | ✓ | ✗ | ✓ | ✗ | 128,000 |
| github-models/meta/meta-llama-3.1-405b-instruct | ✗ | ✗ | ✗ | ✓ | ✗ | 128,000 |
| github-models/meta/llama-4-maverick-17b-128e-instruct-fp8 | ✓ | ✗ | ✗ | ✓ | ✗ | 128,000 |
| github-models/meta/meta-llama-3-70b-instruct | ✗ | ✗ | ✗ | ✓ | ✗ | 8,192 |
| github-models/meta/meta-llama-3.1-70b-instruct | ✗ | ✗ | ✗ | ✓ | ✗ | 128,000 |
| github-models/meta/llama-3.3-70b-instruct | ✗ | ✗ | ✗ | ✓ | ✗ | 128,000 |
| github-models/meta/llama-3.2-90b-vision-instruct | ✓ | ✓ | ✗ | ✓ | ✗ | 128,000 |
| github-models/meta/meta-llama-3-8b-instruct | ✗ | ✗ | ✗ | ✓ | ✗ | 8,192 |
| github-models/meta/llama-4-scout-17b-16e-instruct | ✓ | ✗ | ✗ | ✓ | ✗ | 128,000 |
| github-models/meta/meta-llama-3.1-8b-instruct | ✗ | ✗ | ✗ | ✓ | ✗ | 128,000 |
| github-models/ai21-labs/ai21-jamba-1.5-large | ✗ | ✗ | ✗ | ✓ | ✗ | 256,000 |
| github-models/ai21-labs/ai21-jamba-1.5-mini | ✗ | ✗ | ✗ | ✓ | ✗ | 256,000 |
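
Models with a ✓ in the Image column accept image parts in the prompt. The sketch below is a hedged example: it assumes Mastra accepts AI SDK-style multi-part messages, and the agent name and image URL are placeholders.

import { Agent } from "@mastra/core";

// Sketch: send an image to a vision-capable model from the table above.
// Assumes AI SDK-style message parts; the URL is a placeholder.
const visionAgent = new Agent({
  name: "vision-agent",
  instructions: "You describe images",
  model: "github-models/openai/gpt-4o",
});

const result = await visionAgent.generate([
  {
    role: "user",
    content: [
      { type: "text", text: "What is in this image?" },
      { type: "image", image: new URL("https://example.com/photo.jpg") },
    ],
  },
]);

console.log(result.text);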
Advanced Configuration
Custom Headers
const agent = new Agent({
  name: "custom-agent",
  model: {
    url: "https://models.github.ai/inference",
    modelId: "ai21-labs/ai21-jamba-1.5-large",
    apiKey: process.env.GITHUB_TOKEN,
    headers: {
      "X-Custom-Header": "value",
    },
  },
});
Dynamic Model Selection
const agent = new Agent({
  name: "dynamic-agent",
  model: ({ runtimeContext }) => {
    const useAdvanced = runtimeContext.get("task") === "complex";
    return useAdvanced
      ? "github-models/xai/grok-3-mini"
      : "github-models/ai21-labs/ai21-jamba-1.5-large";
  },
});
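To drive the selector at call time, pass a runtime context when invoking the agent. This is a hedged sketch, not a definitive recipe: it assumes RuntimeContext is exported from @mastra/core/runtime-context, that generate accepts a runtimeContext option, and that the selector above reads the "task" key via runtimeContext.get("task").

import { RuntimeContext } from "@mastra/core/runtime-context";

// Sketch: populate the context the model selector reads.
const runtimeContext = new RuntimeContext();
runtimeContext.set("task", "complex");

// Passing the context at call time routes this request to the "advanced" branch above.
const reply = await agent.generate("Plan a multi-step data migration", { runtimeContext });
console.log(reply.text);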