# ![Atomic Chat logo](https://models.dev/logos/atomic-chat.svg)Atomic Chat

Access 5 Atomic Chat models through Mastra's model router. Authentication is handled automatically using the `ATOMIC_CHAT_API_KEY` environment variable. Learn more in the [Atomic Chat documentation](https://atomic.chat).

```bash
ATOMIC_CHAT_API_KEY=your-api-key
```

```typescript
import { Agent } from "@mastra/core/agent";

const agent = new Agent({
  id: "my-agent",
  name: "My Agent",
  instructions: "You are a helpful assistant",
  model: "atomic-chat/Meta-Llama-3_1-8B-Instruct-GGUF"
});

// Generate a response
const response = await agent.generate("Hello!");

// Stream a response
const stream = await agent.stream("Tell me a story");
for await (const chunk of stream) {
  console.log(chunk);
}
```

> **Info:** Mastra uses the OpenAI-compatible `/chat/completions` endpoint. Some provider-specific features may not be available. Check the [Atomic Chat documentation](https://atomic.chat) for details.

## Models

| Model                                         | Context | Tools | Reasoning | Image | Audio | Video | Input $/1M | Output $/1M |
| --------------------------------------------- | ------- | ----- | --------- | ----- | ----- | ----- | ---------- | ----------- |
| `atomic-chat/gemma-4-E4B-it-IQ4_XS`           | 33K     |       |           |       |       |       | —          | —           |
| `atomic-chat/gemma-4-E4B-it-MLX-4bit`         | 33K     |       |           |       |       |       | —          | —           |
| `atomic-chat/Meta-Llama-3_1-8B-Instruct-GGUF` | 131K    |       |           |       |       |       | —          | —           |
| `atomic-chat/Qwen3_5-9B-MLX-4bit`             | 33K     |       |           |       |       |       | —          | —           |
| `atomic-chat/Qwen3_5-9B-Q4_K_M`               | 33K     |       |           |       |       |       | —          | —           |

## Advanced configuration

### Custom headers

```typescript
const agent = new Agent({
  id: "custom-agent",
  name: "custom-agent",
  model: {
    url: "http://127.0.0.1:1337/v1",
    id: "atomic-chat/Meta-Llama-3_1-8B-Instruct-GGUF",
    apiKey: process.env.ATOMIC_CHAT_API_KEY,
    headers: {
      "X-Custom-Header": "value"
    }
  }
});
```

### Dynamic model selection

```typescript
const agent = new Agent({
  id: "dynamic-agent",
  name: "Dynamic Agent",
  model: ({ requestContext }) => {
    const useAdvanced = requestContext.task === "complex";
    return useAdvanced
      ? "atomic-chat/gemma-4-E4B-it-MLX-4bit"
      : "atomic-chat/Meta-Llama-3_1-8B-Instruct-GGUF";
  }
});
```
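
The selector above keys off request context; another common approach is to select by prompt size using the context windows from the Models table. A minimal sketch — `pickModel`, the token estimate, and the hard-coded context sizes are illustrative and not part of Mastra's API:

```typescript
// Context windows from the Models table above (approximate token counts).
const MODELS: Record<string, number> = {
  "atomic-chat/gemma-4-E4B-it-IQ4_XS": 33_000,
  "atomic-chat/gemma-4-E4B-it-MLX-4bit": 33_000,
  "atomic-chat/Meta-Llama-3_1-8B-Instruct-GGUF": 131_000,
  "atomic-chat/Qwen3_5-9B-MLX-4bit": 33_000,
  "atomic-chat/Qwen3_5-9B-Q4_K_M": 33_000,
};

// Illustrative helper: pick the smallest-context model that still fits the
// estimated prompt size; fall back to the largest context if nothing fits.
function pickModel(promptTokens: number): string {
  const fitting = Object.entries(MODELS)
    .filter(([, ctx]) => ctx >= promptTokens)
    .sort(([, a], [, b]) => a - b);
  if (fitting.length > 0) return fitting[0][0];
  return Object.entries(MODELS).sort(([, a], [, b]) => b - a)[0][0];
}

console.log(pickModel(10_000));  // any 33K model is enough
console.log(pickModel(100_000)); // only the 131K Llama model fits
```

The returned string can then be used directly as the `model` value in a dynamic selector, since Mastra accepts the `provider/model-id` form shown throughout this page.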