# Providers and Models
Mastra supports a variety of language models from different providers, which fall into four categories:
- Most popular providers. OpenAI, Anthropic, and Google Gemini. These are the most widely used providers and are highly recommended for most use cases; we reference and use them throughout the docs and examples.
- Other natively supported providers. Mastra is built on the AI SDK and supports a number of AI SDK models out of the box; we use these in docs and examples where appropriate.
- Community supported providers. A number of other providers offer AI SDK integrations by publishing their own AI SDK provider packages.
- Custom providers through Portkey. If a provider has no AI SDK integration, you can still use it through Portkey, an open-source AI gateway.
## Most popular providers
| Provider | Supported Models |
| --- | --- |
| OpenAI | `gpt-4`, `gpt-4-turbo`, `gpt-3.5-turbo`, `gpt-4o`, `gpt-4o-mini` |
| Anthropic | `claude-3-5-sonnet-20241022`, `claude-3-5-sonnet-20240620`, `claude-3-5-haiku-20241022`, `claude-3-opus-20240229`, `claude-3-sonnet-20240229`, `claude-3-haiku-20240307` |
| Google Gemini | `gemini-1.5-pro-latest`, `gemini-1.5-pro`, `gemini-1.5-flash-latest`, `gemini-1.5-flash`, `gemini-2.0-flash-exp-latest`, `gemini-2.0-flash-thinking-exp-1219`, `gemini-exp-1206` |
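These providers are typically instantiated through their AI SDK packages. Below is a minimal sketch, assuming you have installed `@ai-sdk/openai` and `@ai-sdk/anthropic`; the resulting instances can be passed anywhere Mastra accepts a model, such as the `ModelConfig` shown in the Ollama example later on this page.

```typescript
import { openai } from "@ai-sdk/openai";
import { anthropic } from "@ai-sdk/anthropic";

// Both providers read their API keys (OPENAI_API_KEY and
// ANTHROPIC_API_KEY) from the environment by default.
const gpt4o = openai("gpt-4o");
const claude = anthropic("claude-3-5-sonnet-20241022");
```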
## Other natively supported providers
| Provider | Supported Models |
| --- | --- |
| Groq | `llama3-groq-70b-8192-tool-use-preview`, `llama3-groq-8b-8192-tool-use-preview`, `gemma2-9b-it`, `gemma-7b-it` |
| Perplexity | `llama-3.1-sonar-small-128k-online`, `llama-3.1-sonar-large-128k-online`, `llama-3.1-sonar-huge-128k-online`, `llama-3.1-sonar-small-128k-chat` |
| TogetherAI | `codellama/CodeLlama-34b-Instruct-hf`, `upstage/SOLAR-10.7B-Instruct-v1.0`, `mistralai/Mixtral-8x7B-v0.1`, `WhereIsAI/UAE-Large-V1` |
| LM Studio | `qwen2-7b-instruct-4bit`, `qwen2-math-1.5b`, `qwen2-0.5b`, `aya-23-8b`, `mistral-7b-v0.3` |
| Baseten | `llama-3.1-70b-instruct`, `qwen2.5-7b-math-instruct`, `qwen2.5-14b-instruct`, `qwen2.5-32b-coder-instruct` |
| Fireworks | `llama-3.1-405b-instruct`, `llama-3.1-70b-instruct`, `llama-3.1-8b-instruct`, `llama-3.2-3b-instruct` |
| Mistral | `pixtral-large-latest`, `mistral-large-latest`, `mistral-small-latest`, `ministral-3b-latest` |
| X Grok | `grok-beta`, `grok-vision-beta` |
| Cohere | `command-r-plus` |
| Azure | `gpt-35-turbo-instruct` |
| Amazon | `amazon-titan-tg1-large`, `amazon-titan-text-express-v1`, `anthropic-claude-3-5-sonnet-20241022-v2:0` |
| Anthropic Vertex | `claude-3-5-sonnet@20240620`, `claude-3-opus@20240229`, `claude-3-sonnet@20240229`, `claude-3-haiku@20240307` |
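The natively supported providers follow the same pattern, each through its own AI SDK package. Here is a minimal sketch using Mistral, assuming the `@ai-sdk/mistral` package; check the AI SDK docs for the package matching your provider.

```typescript
import { mistral } from "@ai-sdk/mistral";

// Reads MISTRAL_API_KEY from the environment by default.
const model = mistral("mistral-small-latest");
```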
## Community supported providers
You can see a list of Vercel's community-supported providers in the AI SDK documentation. You can also write your own provider if desired.
### Example: Custom Provider - Ollama
Here is an example of using a custom provider, Ollama, to create a model instance. First, install the provider package:
```bash
npm install ollama-ai-provider
```
Import and configure the Ollama model using `createOllama` from the `ollama-ai-provider` package:
```typescript
import { createOllama } from "ollama-ai-provider";

const ollama = createOllama({
  // optional settings, e.g.
  baseURL: "https://api.ollama.com",
});
```
After creating the instance, you can use it like any other model in Mastra.
src/mastra/index.ts

```typescript
import { Mastra, type ModelConfig } from "@mastra/core";
import { createOllama } from "ollama-ai-provider";

const ollama = createOllama({
  // optional settings, e.g.
  baseURL: "https://api.ollama.com",
});

const modelConfig: ModelConfig = {
  model: ollama.chat("gemma"), // The model instance created by the Ollama provider
  apiKey: process.env.OLLAMA_API_KEY,
  provider: "Ollama",
  toolChoice: "auto", // Controls how the model handles tool/function calling
};

const mastra = new Mastra({});
const llm = mastra.llm;

const response = await llm.generate(
  [
    {
      role: "user",
      content: "What is machine learning?",
    },
  ],
  { model: modelConfig },
);
```
## Portkey supported providers
Portkey is an open-source AI gateway with support for 200+ providers, so if the provider you want isn't available through the AI SDK, it is likely available through Portkey.
You can refer to the Portkey documentation for more details on how to implement custom models.
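Because Portkey's gateway exposes an OpenAI-compatible API, one way to reach it from the AI SDK is to point `createOpenAI` at the gateway. This is a sketch under assumptions: the `x-portkey-*` header names follow Portkey's documentation, and the gateway URL and model ID are placeholders for your own setup.

```typescript
import { createOpenAI } from "@ai-sdk/openai";

// Sketch: route requests through Portkey's OpenAI-compatible gateway.
// The x-portkey-* headers and gateway URL are assumptions taken from
// Portkey's docs; substitute values from your own Portkey account.
const portkey = createOpenAI({
  baseURL: "https://api.portkey.ai/v1",
  apiKey: process.env.OPENAI_API_KEY, // upstream provider key
  headers: {
    "x-portkey-api-key": process.env.PORTKEY_API_KEY ?? "",
    "x-portkey-provider": "openai",
  },
});

const model = portkey("gpt-4o-mini");
```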