Every time you want to try a new LLM provider, you face the same ritual: find the right package, install it, import it correctly, hunt through docs for the API endpoint, figure out the authentication, and hope there are TypeScript types for the model IDs.
That's why we shipped Model Router: access to 600+ models from 40+ providers with almost no configuration required.
How it works
Instead of installing provider packages and wiring up imports, specify the model as a string:
```typescript
import { Agent } from "@mastra/core";

const agent = new Agent({
  name: "...",
  instructions: "...",
  model: "openai/gpt-4o-mini", // <- here
});
```
Want to try Claude?
```typescript
const agent = new Agent({
  // ...
  model: "anthropic/claude-3-5-sonnet",
});
```
Want to use a gateway like OpenRouter? Same pattern:
```typescript
const agent = new Agent({
  // ...
  model: "openrouter/z-ai/glm-4.6",
});
```
The system handles all the routing, authentication, and API compatibility. If you're missing an API key, you get a clear error telling you exactly which environment variable to set (like ANTHROPIC_API_KEY).
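As a rough illustration of how a router can work out which key a model string needs, here is a standalone sketch. The provider-to-env-var table below is an assumption for the example, not Mastra's internal mapping:

```typescript
// Sketch: derive the required API-key env var from the provider prefix
// of a "provider/model" string. The mapping is illustrative only.
const ENV_VARS: Record<string, string> = {
  openai: "OPENAI_API_KEY",
  anthropic: "ANTHROPIC_API_KEY",
  openrouter: "OPENROUTER_API_KEY",
};

function requiredEnvVar(model: string): string | undefined {
  // The provider is everything before the first "/".
  const provider = model.split("/")[0];
  return ENV_VARS[provider];
}

console.log(requiredEnvVar("anthropic/claude-3-5-sonnet")); // "ANTHROPIC_API_KEY"
```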
Your model search engine
The Agent model field has full TypeScript autocomplete for all 600+ models. Start typing and your IDE shows you what's available:
So does our playground UI:
Dynamic model selection at runtime
Since models are just strings, you can select them dynamically based on runtime context:
```typescript
const agent = new Agent({
  name: "dynamic-assistant",
  model: ({ runtimeContext }) => {
    const provider = runtimeContext.get("provider-id");
    const model = runtimeContext.get("model-id");
    return `${provider}/${model}`;
  },
});
```
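Outside of Mastra, the same pattern is just a function from context to string. Here is a minimal standalone sketch, where a plain Map stands in for Mastra's RuntimeContext and the keys and defaults are illustrative:

```typescript
// Standalone sketch of context-driven model selection.
// The Map stands in for Mastra's RuntimeContext; defaults are illustrative.
function resolveModel(ctx: Map<string, string>): string {
  const provider = ctx.get("provider-id") ?? "openai";
  const model = ctx.get("model-id") ?? "gpt-4o-mini";
  return `${provider}/${model}`;
}

const ctx = new Map([
  ["provider-id", "anthropic"],
  ["model-id", "claude-3-5-sonnet"],
]);
console.log(resolveModel(ctx)); // "anthropic/claude-3-5-sonnet"
```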
This unlocks:
- A/B testing models in production
- User-selectable models in your apps
- Cost optimization (cheaper models for simple tasks)
- Multi-tenant apps where each customer brings their own API keys
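The cost-optimization case from the list above can be sketched as a plain function that returns a model string based on task complexity. The model IDs and the complexity split are illustrative assumptions:

```typescript
// Hypothetical helper: pick a cheaper model for simple tasks.
// Model IDs are illustrative; any "provider/model" string works here.
type Complexity = "simple" | "complex";

function pickModel(complexity: Complexity): string {
  return complexity === "simple"
    ? "openai/gpt-4o-mini" // cheaper and faster
    : "anthropic/claude-3-5-sonnet"; // more capable
}

console.log(pickModel("simple")); // "openai/gpt-4o-mini"
```

Because the return value is just a string, the same function can be passed straight into an Agent's model field.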
Before Model Router, this would have required importing every possible provider package and writing a big switch statement. Now it's just string concatenation.
How we built it
Unlike other solutions that force you through their gateway, Mastra routes directly to providers when possible. Use OpenAI's API directly for lowest latency. Use OpenRouter when you need their model selection.
The registry is dynamically fetched from sources like models.dev, OpenRouter, and Netlify. When new models are released, they appear in your IDE autocomplete automatically (yep, no package updates needed).
For major providers like Anthropic and Google, we use their official APIs directly. For the long tail of providers, we route through OpenAI-compatible endpoints. You get the best performance for major providers and maximum compatibility for everything else.
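The routing decision described above boils down to a small classification step. Here is a hedged sketch of the idea; the provider list is an assumption for illustration, not Mastra's actual registry:

```typescript
// Sketch of the routing split: major providers use their native APIs,
// the long tail goes through OpenAI-compatible endpoints.
// The set of "major" providers below is illustrative.
function routingStrategy(model: string): "native" | "openai-compatible" {
  const provider = model.split("/")[0];
  const majorProviders = new Set(["openai", "anthropic", "google"]);
  return majorProviders.has(provider) ? "native" : "openai-compatible";
}

console.log(routingStrategy("anthropic/claude-3-5-sonnet")); // "native"
console.log(routingStrategy("openrouter/z-ai/glm-4.6")); // "openai-compatible"
```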
What's next
We're adding more model sources and improving the discovery experience. The playground UI already lets you browse all available models, see which environment variables you need, and jump directly to provider documentation.
But the core is here now: 600+ models, 40+ providers, one API, zero package installs.
Model Router is available now in @mastra/core 0.19.0+. See the documentation for more details.