Vercel
Vercel's AI Gateway aggregates models from multiple providers and adds features such as rate limiting and failover. You can access 86 of these models through Mastra's model router.
Learn more in the Vercel documentation.
Usage
```typescript
import { Agent } from "@mastra/core/agent";

const agent = new Agent({
  id: "my-agent",
  name: "My Agent",
  instructions: "You are a helpful assistant",
  model: "vercel/alibaba/qwen3-coder-plus",
});
```
Info: Mastra uses the OpenAI-compatible `/chat/completions` endpoint, so some provider-specific features may not be available. Check the Vercel documentation for details.
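Once an agent is pointed at a gateway model, it can be called like any other Mastra agent. Below is a minimal sketch; it assumes the agent's `generate()` method and the `text` field on its result behave as in Mastra's standard Agent API, so verify the exact signatures against your Mastra version.

```typescript
import { Agent } from "@mastra/core/agent";

const agent = new Agent({
  id: "my-agent",
  name: "My Agent",
  instructions: "You are a helpful assistant",
  model: "vercel/alibaba/qwen3-coder-plus",
});

// Assumption: generate() accepts a prompt string and returns a result
// object whose `text` field holds the model's reply.
const result = await agent.generate("What is the Vercel AI Gateway?");
console.log(result.text);
```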
Configuration
```bash
# Use gateway API key
VERCEL_API_KEY=your-gateway-key

# Or use provider API keys directly
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=ant-...
```
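Because every gateway model is addressed by a `vercel/<provider>/<model>` string, switching providers is just a matter of changing that string. A hedged sketch using two model IDs from the table below, assuming `VERCEL_API_KEY` is set so both agents route through the gateway:

```typescript
import { Agent } from "@mastra/core/agent";

// Both agents go through the Vercel gateway when VERCEL_API_KEY is set;
// only the "vercel/<provider>/<model>" string differs.
const codingAgent = new Agent({
  id: "coding-agent",
  name: "Coding Agent",
  instructions: "You help write and review code",
  model: "vercel/anthropic/claude-4.5-sonnet",
});

const quickAgent = new Agent({
  id: "quick-agent",
  name: "Quick Agent",
  instructions: "You answer short questions concisely",
  model: "vercel/openai/gpt-4o-mini",
});
```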
Available Models
| Model |
|---|
| alibaba/qwen3-coder-plus |
| alibaba/qwen3-max |
| alibaba/qwen3-next-80b-a3b-instruct |
| alibaba/qwen3-next-80b-a3b-thinking |
| alibaba/qwen3-vl-instruct |
| alibaba/qwen3-vl-thinking |
| amazon/nova-lite |
| amazon/nova-micro |
| amazon/nova-pro |
| anthropic/claude-3-haiku |
| anthropic/claude-3-opus |
| anthropic/claude-3.5-haiku |
| anthropic/claude-3.5-sonnet |
| anthropic/claude-3.7-sonnet |
| anthropic/claude-4-1-opus |
| anthropic/claude-4-opus |
| anthropic/claude-4-sonnet |
| anthropic/claude-4.5-sonnet |
| anthropic/claude-haiku-4.5 |
| anthropic/claude-opus-4.5 |
| deepseek/deepseek-r1 |
| deepseek/deepseek-r1-distill-llama-70b |
| deepseek/deepseek-v3.1-terminus |
| deepseek/deepseek-v3.2-exp |
| deepseek/deepseek-v3.2-exp-thinking |
| google/gemini-2.0-flash |
| google/gemini-2.0-flash-lite |
| google/gemini-2.5-flash |
| google/gemini-2.5-flash-lite |
| google/gemini-2.5-flash-lite-preview-09-2025 |
| google/gemini-2.5-flash-preview-09-2025 |
| google/gemini-2.5-pro |
| google/gemini-3-pro-preview |
| meta/llama-3.3-70b |
| meta/llama-4-maverick |
| meta/llama-4-scout |
| minimax/minimax-m2 |
| mistral/codestral |
| mistral/magistral-medium |
| mistral/magistral-small |
| mistral/ministral-3b |
| mistral/ministral-8b |
| mistral/mistral-large |
| mistral/mistral-small |
| mistral/mixtral-8x22b-instruct |
| mistral/pixtral-12b |
| mistral/pixtral-large |
| moonshotai/kimi-k2 |
| morph/morph-v3-fast |
| morph/morph-v3-large |
| openai/gpt-4-turbo |
| openai/gpt-4.1 |
| openai/gpt-4.1-mini |
| openai/gpt-4.1-nano |
| openai/gpt-4o |
| openai/gpt-4o-mini |
| openai/gpt-5 |
| openai/gpt-5-codex |
| openai/gpt-5-mini |
| openai/gpt-5-nano |
| openai/gpt-oss-120b |
| openai/gpt-oss-20b |
| openai/o1 |
| openai/o3 |
| openai/o3-mini |
| openai/o4-mini |
| perplexity/sonar |
| perplexity/sonar-pro |
| perplexity/sonar-reasoning |
| perplexity/sonar-reasoning-pro |
| vercel/v0-1.0-md |
| vercel/v0-1.5-md |
| xai/grok-2 |
| xai/grok-2-vision |
| xai/grok-3 |
| xai/grok-3-fast |
| xai/grok-3-mini |
| xai/grok-3-mini-fast |
| xai/grok-4 |
| xai/grok-4-fast |
| xai/grok-4-fast-non-reasoning |
| xai/grok-code-fast-1 |
| zai/glm-4.5 |
| zai/glm-4.5-air |
| zai/glm-4.5v |
| zai/glm-4.6 |