
Llama

Access 7 Llama models through Mastra's model router. Authentication is handled automatically via the LLAMA_API_KEY environment variable.

Learn more in the Llama documentation.

LLAMA_API_KEY=your-api-key
import { Agent } from "@mastra/core";

const agent = new Agent({
  name: "my-agent",
  instructions: "You are a helpful assistant",
  model: "llama/cerebras-llama-4-maverick-17b-128e-instruct",
});

// Generate a response
const response = await agent.generate("Hello!");

// Stream a response
const stream = await agent.stream("Tell me a story");
for await (const chunk of stream) {
  console.log(chunk);
}
OpenAI Compatibility

Mastra uses the OpenAI-compatible /chat/completions endpoint. Some provider-specific features may not be available. Check the Llama documentation for details.
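Because the router speaks the standard /chat/completions schema, the request Mastra issues can be sketched as a plain OpenAI-style payload against the compat endpoint. The helper below is illustrative only (buildChatRequest is not a Mastra API); the base URL is the one used in the Advanced Configuration example.

```typescript
// Illustrative sketch of the OpenAI-compatible request shape.
// buildChatRequest is a hypothetical helper, not part of Mastra.
const baseURL = "https://api.llama.com/compat/v1";

function buildChatRequest(model: string, prompt: string) {
  return {
    url: `${baseURL}/chat/completions`,
    body: {
      model,
      messages: [{ role: "user" as const, content: prompt }],
    },
  };
}

const req = buildChatRequest(
  "cerebras-llama-4-maverick-17b-128e-instruct",
  "Hello!",
);
// Sending it yourself would look like (requires LLAMA_API_KEY):
// await fetch(req.url, {
//   method: "POST",
//   headers: {
//     Authorization: `Bearer ${process.env.LLAMA_API_KEY}`,
//     "Content-Type": "application/json",
//   },
//   body: JSON.stringify(req.body),
// });
```

In practice you would let Mastra build and send this request for you; the sketch only shows why provider-specific features outside this schema may be unavailable.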

Models

| Model | Context Window |
| --- | --- |
| llama/llama-3.3-8b-instruct | 128,000 |
| llama/llama-4-maverick-17b-128e-instruct-fp8 | 128,000 |
| llama/llama-3.3-70b-instruct | 128,000 |
| llama/llama-4-scout-17b-16e-instruct-fp8 | 128,000 |
| llama/groq-llama-4-maverick-17b-128e-instruct | 128,000 |
| llama/cerebras-llama-4-scout-17b-16e-instruct | 128,000 |
| llama/cerebras-llama-4-maverick-17b-128e-instruct | 128,000 |
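Since a mistyped router ID only fails at request time, it can be handy to validate the model string up front. The list and guard below are a hypothetical helper (not part of Mastra), built from the seven IDs in the table above.

```typescript
// Hypothetical helper, not a Mastra API: the seven router IDs
// from the table above, plus a guard to catch typos early.
const LLAMA_MODEL_IDS = [
  "llama/llama-3.3-8b-instruct",
  "llama/llama-4-maverick-17b-128e-instruct-fp8",
  "llama/llama-3.3-70b-instruct",
  "llama/llama-4-scout-17b-16e-instruct-fp8",
  "llama/groq-llama-4-maverick-17b-128e-instruct",
  "llama/cerebras-llama-4-scout-17b-16e-instruct",
  "llama/cerebras-llama-4-maverick-17b-128e-instruct",
] as const;

type LlamaModelId = (typeof LLAMA_MODEL_IDS)[number];

// Narrows a plain string to a known router ID.
function isLlamaModelId(id: string): id is LlamaModelId {
  return (LLAMA_MODEL_IDS as readonly string[]).includes(id);
}
```

A config loader could call isLlamaModelId before constructing the Agent and fail fast with a clear error instead of a failed API call.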

Advanced Configuration

Custom Headers

const agent = new Agent({
  name: "custom-agent",
  model: {
    url: "https://api.llama.com/compat/v1/",
    modelId: "cerebras-llama-4-maverick-17b-128e-instruct",
    apiKey: process.env.LLAMA_API_KEY,
    headers: {
      "X-Custom-Header": "value",
    },
  },
});

Dynamic Model Selection

const agent = new Agent({
  name: "dynamic-agent",
  model: ({ runtimeContext }) => {
    const useAdvanced = runtimeContext.task === "complex";
    return useAdvanced
      ? "llama/llama-4-scout-17b-16e-instruct-fp8"
      : "llama/cerebras-llama-4-maverick-17b-128e-instruct";
  },
});
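The selection logic itself is a pure function, which makes it easy to unit-test outside the agent. The sketch below extracts it; the shape of runtimeContext here (a plain object with an optional task field) is an assumption for illustration, matching how the example above reads it.

```typescript
// Hedged sketch: the same branching as the model function above,
// pulled out so it can be tested in isolation. The runtimeContext
// shape is assumed (plain object with an optional `task` string).
function pickModel(runtimeContext: { task?: string }): string {
  return runtimeContext.task === "complex"
    ? "llama/llama-4-scout-17b-16e-instruct-fp8"
    : "llama/cerebras-llama-4-maverick-17b-128e-instruct";
}
```

Keeping selection logic pure like this lets you cover every branch with plain assertions before wiring it into the Agent.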