--- title: "Creating and Calling Agents | Agent Documentation | Mastra" description: Overview of agents in Mastra, detailing their capabilities and how they interact with tools, workflows, and external systems. --- # Creating and Calling Agents Source: https://mastra.ai/docs/agents/00-overview Agents in Mastra are systems where the language model can autonomously decide on a sequence of actions to perform tasks. They have access to tools, workflows, and synced data, enabling them to perform complex tasks and interact with external systems. Agents can invoke your custom functions, utilize third-party APIs through integrations, and access knowledge bases you have built. Agents are like employees who can be used for ongoing projects. They have names, persistent memory, consistent model configurations, and instructions across calls, as well as a set of enabled tools. ## 1. Creating an Agent To create an agent in Mastra, you use the `Agent` class and define its properties: ```ts showLineNumbers filename="src/mastra/agents/index.ts" copy import { Agent } from "@mastra/core/agent"; import { openai } from "@ai-sdk/openai"; export const myAgent = new Agent({ name: "My Agent", instructions: "You are a helpful assistant.", model: openai("gpt-4o-mini"), }); ``` **Note:** Ensure that you have set the necessary environment variables, such as your OpenAI API key, in your `.env` file: ```.env filename=".env" copy OPENAI_API_KEY=your_openai_api_key ``` Also, make sure you have the `@mastra/core` package installed: ```bash npm2yarn copy npm install @mastra/core ``` ### Registering the Agent Register your agent with Mastra to enable logging and access to configured tools and integrations: ```ts showLineNumbers filename="src/mastra/index.ts" copy import { Mastra } from "@mastra/core"; import { myAgent } from "./agents"; export const mastra = new Mastra({ agents: { myAgent }, }); ``` ## 2. Generating and streaming text ### Generating text Use the `.generate()` method to have your agent produce text responses: ```ts showLineNumbers filename="src/mastra/index.ts" copy const response = await myAgent.generate([ { role: "user", content: "Hello, how can you assist me today?" }, ]); console.log("Agent:", response.text); ``` ### Streaming responses For more real-time responses, you can stream the agent's response: ```ts showLineNumbers filename="src/mastra/index.ts" copy const stream = await myAgent.stream([ { role: "user", content: "Tell me a story." }, ]); console.log("Agent:"); for await (const chunk of stream.textStream) { process.stdout.write(chunk); } ``` ## **3. Structured Output** Agents can return structured data by providing a JSON Schema or using a Zod schema. ### Using JSON Schema ```typescript const schema = { type: "object", properties: { summary: { type: "string" }, keywords: { type: "array", items: { type: "string" } }, }, additionalProperties: false, required: ["summary", "keywords"], }; const response = await myAgent.generate( [ { role: "user", content: "Please provide a summary and keywords for the following text: ...", }, ], { output: schema, }, ); console.log("Structured Output:", response.object); ``` ### Using Zod You can also use Zod schemas for type-safe structured outputs. 
First, install Zod: ```bash npm2yarn copy npm install zod ``` Then, define a Zod schema and use it with the agent: ```ts showLineNumbers filename="src/mastra/index.ts" copy import { z } from "zod"; // Define the Zod schema const schema = z.object({ summary: z.string(), keywords: z.array(z.string()), }); // Use the schema with the agent const response = await myAgent.generate( [ { role: "user", content: "Please provide a summary and keywords for the following text: ...", }, ], { output: schema, }, ); console.log("Structured Output:", response.object); ``` This allows you to have strong typing and validation for the structured data returned by the agent. ## **4. Running Agents** Mastra provides a CLI command `mastra dev` to run your agents behind an API. By default, this looks for exported agents in files in the `src/mastra/agents` directory. ### Starting the Server ```bash mastra dev ``` This will start the server and make your agent available at `http://localhost:4111/api/agents/myAgent/generate`. ### Interacting with the Agent You can interact with the agent using `curl` from the command line: ```bash curl -X POST http://localhost:4111/api/agents/myAgent/generate \ -H "Content-Type: application/json" \ -d '{ "messages": [ { "role": "user", "content": "Hello, how can you assist me today?" } ] }' ``` ## Next Steps - Learn about Agent Memory in the [Agent Memory](./01-agent-memory.mdx) guide. - Learn about Agent Tools in the [Agent Tools](./02-adding-tools.mdx) guide. - See an example agent in the [Chef Michel](../guides/01-chef-michel.mdx) example. --- title: "Using Agent Memory | Agents | Mastra Docs" description: Documentation on how agents in Mastra use memory to store conversation history and contextual information. --- # Agent Memory Source: https://mastra.ai/docs/agents/01-agent-memory Agents in Mastra have a sophisticated memory system that stores conversation history and contextual information. This memory system supports both traditional message storage and vector-based semantic search, enabling agents to maintain state across interactions and retrieve relevant historical context. ## Threads and Resources In Mastra, you can organize conversations by a `thread_id`. This allows the system to maintain context and retrieve historical messages that belong to the same discussion. Mastra also supports the concept of a `resource_id`, which typically represents the user involved in the conversation, ensuring that the agent’s memory and context are correctly associated with the right entity. This separation allows you to manage multiple conversations (threads) for a single user or even share conversation context across users if needed. ```typescript copy showLineNumbers import { Agent } from "@mastra/core/agent"; import { openai } from "@ai-sdk/openai"; const agent = new Agent({ name: "Project Manager", instructions: "You are a project manager. You are responsible for managing the project and the team.", model: openai("gpt-4o-mini"), }); await agent.stream("When will the project be completed?", { threadId: "project_123", resourceId: "user_123", }); ``` ## Managing Conversation Context The key to getting good responses from LLMs is feeding them the right context. Mastra has a Memory API that stores and manages conversation history and contextual information. The Memory API uses a storage backend to persist conversation history and contextual information (more on this later). 
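As a quick orientation before the details below, here is a minimal sketch of attaching a `Memory` instance to the agent from the example above so that thread history is actually persisted. It assumes `@mastra/memory` is installed and uses the default configuration; both the `memory` option on `Agent` and the default `new Memory()` setup are covered later on this page:

```typescript copy showLineNumbers
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { openai } from "@ai-sdk/openai";

// Default configuration: LibSQL storage and FastEmbed embeddings
const memory = new Memory();

const agent = new Agent({
  name: "Project Manager",
  instructions:
    "You are a project manager. You are responsible for managing the project and the team.",
  model: openai("gpt-4o-mini"),
  memory,
});

// threadId and resourceId scope which history this call can see
await agent.stream("When will the project be completed?", {
  threadId: "project_123",
  resourceId: "user_123",
});
```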
The Memory API uses two main mechanisms to maintain context in conversations, recent message history and semantic search. ### Recent Message History By default, Memory keeps track of the 40 most recent messages in a conversation. You can customize this with the `lastMessages` setting: ```typescript copy showLineNumbers const memory = new Memory({ options: { lastMessages: 5, // Keep 5 most recent messages }, }); // When user asks this question, the agent will see the last 10 messages, await agent.stream("Can you summarize the search feature requirements?", { memoryOptions: { lastMessages: 10, }, }); ``` ### Semantic Search Semantic search is enabled by default in Mastra. While FastEmbed (bge-small-en-v1.5) and LibSQL are included by default, you can use any embedder (like OpenAI or Cohere) and vector database (like PostgreSQL, Pinecone, or Chroma) that fits your needs. This allows your agent to find and recall relevant information from earlier in the conversation: ```typescript copy showLineNumbers const memory = new Memory({ options: { semanticRecall: { topK: 10, // Include 10 most relevant past messages messageRange: 2, // Messages before and after each result }, }, }); // Example: User asks about a past feature discussion await agent.stream("What did we decide about the search feature last week?", { memoryOptions: { lastMessages: 10, semanticRecall: { topK: 3, messageRange: 2, }, }, }); ``` When semantic search is used: 1. The message is converted to a vector embedding 2. Similar messages are found using vector similarity search 3. Surrounding context is included based on `messageRange` 4. All relevant context is provided to the agent You can also customize the vector database and embedder: ```typescript copy showLineNumbers import { openai } from "@ai-sdk/openai"; import { PgVector } from "@mastra/pg"; const memory = new Memory({ // Use a different vector database (libsql is default) vector: new PgVector("postgresql://user:pass@localhost:5432/db"), // Or a different embedder (fastembed is default) embedder: openai.embedding("text-embedding-3-small"), }); ``` ## Memory Configuration The Mastra memory system is highly configurable and supports multiple storage backends. By default, it uses LibSQL for storage and vector search, and FastEmbed for embeddings. ### Basic Configuration For most use cases, you can use the default configuration: ```typescript copy showLineNumbers import { Memory } from "@mastra/memory"; const memory = new Memory(); ``` ### Custom Configuration For more control, you can customize the storage backend, vector database, and memory options: ```typescript copy showLineNumbers import { Memory } from "@mastra/memory"; import { PostgresStore, PgVector } from "@mastra/pg"; const memory = new Memory({ storage: new PostgresStore({ host: "localhost", port: 5432, user: "postgres", database: "postgres", password: "postgres", }), vector: new PgVector("postgresql://user:pass@localhost:5432/db"), options: { // Number of recent messages to include (false to disable) lastMessages: 10, // Configure vector-based semantic search (false to disable) semanticRecall: { topK: 3, // Number of semantic search results messageRange: 2, // Messages before and after each result }, }, }); ``` ### Overriding Memory Settings When you initialize a Mastra instance with memory configuration, all agents will automatically use these memory settings when you call their `stream()` or `generate()` methods. 
You can override these default settings for individual calls: ```typescript copy showLineNumbers // Use default memory settings from Memory configuration const response1 = await agent.generate("What were we discussing earlier?", { resourceId: "user_123", threadId: "thread_456", }); // Override memory settings for this specific call const response2 = await agent.generate("What were we discussing earlier?", { resourceId: "user_123", threadId: "thread_456", memoryOptions: { lastMessages: 5, // Only inject 5 recent messages semanticRecall: { topK: 2, // Only get 2 semantic search results messageRange: 1, // Context around each result }, }, }); ``` ### Configuring Memory for Different Use Cases You can adjust memory settings based on your agent's needs: ```typescript copy showLineNumbers // Customer support agent with minimal context await agent.stream("What are your store hours?", { threadId, resourceId, memoryOptions: { lastMessages: 5, // Quick responses need minimal conversation history semanticRecall: false, // no need to search through earlier messages }, }); // Project management agent with extensive context await agent.stream("Update me on the project status", { threadId, resourceId, memoryOptions: { lastMessages: 50, // Maintain longer conversation history across project discussions semanticRecall: { topK: 5, // Find more relevant project details messageRange: 3, // Number of messages before and after each result }, }, }); ``` ## Storage Options Mastra currently supports several storage backends: ### LibSQL Storage ```typescript copy showLineNumbers import { LibSQLStore } from "@mastra/core/storage/libsql"; const storage = new LibSQLStore({ config: { url: "file:example.db", }, }); ``` ### PostgreSQL Storage ```typescript copy showLineNumbers import { PostgresStore } from "@mastra/pg"; const storage = new PostgresStore({ host: "localhost", port: 5432, user: "postgres", database: "postgres", password: "postgres", }); ``` ### Upstash KV Storage ```typescript copy showLineNumbers import { UpstashStore } from "@mastra/upstash"; const storage = new UpstashStore({ url: "http://localhost:8089", token: "your_token", }); ``` ## Vector Search Mastra supports semantic search through vector embeddings. When configured with a vector store, agents can find relevant historical messages based on semantic similarity. To enable vector search: 1. Configure a vector store (currently supports PostgreSQL): ```typescript copy showLineNumbers import { PgVector } from "@mastra/pg"; const vector = new PgVector(connectionString); const memory = new Memory({ vector }); ``` 2. Configure embedding options: ```typescript copy showLineNumbers const memory = new Memory({ vector, embedder: openai.embedding("text-embedding-3-small"), }); ``` 3. Enable vector search in memory configuration options: ```typescript copy showLineNumbers const memory = new Memory({ vector, embedder, options: { semanticRecall: { topK: 3, // Number of similar messages to find messageRange: 2, // Context around each result }, }, }); ``` ## Using Memory in Agents Once configured, the memory system is automatically used by agents. 
Here's how to use it: ```typescript copy showLineNumbers // Initialize Agent with memory const myAgent = new Agent({ memory, // other agent options }); // Add agent to mastra const mastra = new Mastra({ agents: { myAgent }, }); // Memory is automatically used in agent interactions when resourceId and threadId are added const response = await myAgent.generate( "What were we discussing earlier about performance?", { resourceId: "user_123", threadId: "thread_456", }, ); ``` The memory system will automatically: 1. Store all messages in the configured storage backend 2. Create vector embeddings for semantic search (if configured) 3. Inject relevant historical context into new conversations 4. Maintain conversation threads and context ## useChat() When using `useChat` from the AI SDK, you must send only the latest message or you will encounter message ordering bugs. If the `useChat()` implementation for your framework supports `experimental_prepareRequestBody`, you can do the following: ```ts const { messages } = useChat({ api: "api/chat", experimental_prepareRequestBody({ messages, id }) { return { message: messages.at(-1), id }; }, }); ``` This will only ever send the latest message to the server. In your chat server endpoint you can then pass a threadId and resourceId when calling stream or generate and the agent will have access to the memory thread messages: ```ts const { messages } = await request.json(); const stream = await myAgent.stream(messages, { threadId, resourceId, }); return stream.toDataStreamResponse(); ``` If the `useChat()` for your framework (svelte for example) doesn't support `experimental_prepareRequestBody`, you can pick and use the last message before calling stream or generate: ```ts const { messages } = await request.json(); const stream = await myAgent.stream([messages.at(-1)], { threadId, resourceId, }); return stream.toDataStreamResponse(); ``` See the [AI SDK documentation on message persistence](https://sdk.vercel.ai/docs/ai-sdk-ui/chatbot-message-persistence) for more information. ## Manually Managing Threads While threads are automatically managed when using agent methods, you can also manually manage threads using the memory API directly. 
This is useful for advanced use cases like: - Creating threads before starting conversations - Managing thread metadata - Explicitly saving or retrieving messages - Cleaning up old threads Here's how to manually work with threads: ```typescript copy showLineNumbers import { Memory } from "@mastra/memory"; import { PostgresStore } from "@mastra/pg"; // Initialize memory const memory = new Memory({ storage: new PostgresStore({ host: "localhost", port: 5432, user: "postgres", database: "postgres", password: "postgres", }), }); // Create a new thread const thread = await memory.createThread({ resourceId: "user_123", title: "Project Discussion", metadata: { project: "mastra", topic: "architecture", }, }); // Manually save messages to a thread await memory.saveMessages({ messages: [ { id: "msg_1", threadId: thread.id, role: "user", content: "What's the project status?", createdAt: new Date(), type: "text", }, ], }); // Get messages from a thread with various filters const messages = await memory.query({ threadId: thread.id, selectBy: { last: 10, // Get last 10 messages vectorSearchString: "performance", // Find messages about performance }, }); // Get thread by ID const existingThread = await memory.getThreadById({ threadId: "thread_123", }); // Get all threads for a resource const threads = await memory.getThreadsByResourceId({ resourceId: "user_123", }); // Update thread metadata await memory.updateThread({ id: thread.id, title: "Updated Project Discussion", metadata: { status: "completed", }, }); // Delete a thread and all its messages await memory.deleteThread(thread.id); ``` Note that in most cases, you won't need to manage threads manually since the agent's `generate()` and `stream()` methods handle thread management automatically. Manual thread management is primarily useful for advanced use cases or when you need more fine-grained control over the conversation history. ## Working Memory Working memory is a powerful feature that allows agents to maintain persistent information across conversations, even with minimal context. This is particularly useful for remembering user preferences, personal details, or any other contextual information that should persist throughout interactions. Inspired by the working memory concept from the MemGPT whitepaper, our implementation improves upon it in several key ways: - No extra roundtrips or tool calls required - Full support for streaming messages - Seamless integration with the agent's natural response flow #### How It Works Working memory operates through a system of XML tags and automatic updates: 1. **Template Structure**: Define what information should be remembered using XML tags. The Memory class comes with a comprehensive default template for user information, or you can create your own template to match your specific needs. 2. **Automatic Updates**: The Memory class injects special instructions into the agent's system prompt that tell it to: - Store relevant information by including `...` tags in its responses - Update information proactively when anything changes - Maintain the XML structure while updating values - Keep this process invisible to users 3. **Memory Management**: The system: - Extracts working memory blocks from agent responses - Stores them for future use - Injects working memory into the system prompt on the next agent call The agent is instructed to be proactive about storing information - if there's any doubt about whether something might be useful later, it should be stored. 
This helps maintain conversation context even when using very small context windows.

#### Basic Usage

```typescript copy showLineNumbers
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { openai } from "@ai-sdk/openai";

const agent = new Agent({
  name: "Customer Service",
  instructions:
    "You are a helpful customer service agent. Remember customer preferences and past interactions.",
  model: openai("gpt-4o-mini"),
  memory: new Memory({
    options: {
      workingMemory: {
        enabled: true, // enables working memory
      },
      lastMessages: 5, // Only keep recent context
    },
  }),
});
```

Working memory becomes particularly powerful when combined with specialized system prompts. For example, you could create a TODO list manager that maintains state even though it only has access to the previous message:

```typescript copy showLineNumbers
const todoAgent = new Agent({
  name: "TODO Manager",
  instructions:
    "You are a TODO list manager. Update the todo list in working memory whenever tasks are added, completed, or modified.",
  model: openai("gpt-4o-mini"),
  memory: new Memory({
    options: {
      workingMemory: {
        enabled: true,
        // optional XML-like template to encourage the agent to store specific kinds of info.
        // if you leave this out, a default template will be used
        template: ` `,
      },
      lastMessages: 1, // Only keep the last message in context
    },
  }),
});
```

### Handling Memory Updates in Streaming

When an agent responds, it includes working memory updates directly in its response stream. These updates appear as XML blocks in the text:

```typescript copy showLineNumbers
// Raw agent response stream:
Let me help you with that! <working_memory>John...</working_memory> Based on your question...
```

To prevent these memory blocks from being visible to users while still allowing the system to process them, use the `maskStreamTags` utility:

```typescript copy showLineNumbers
import { maskStreamTags } from "@mastra/core/utils";

// Basic usage - just mask the working_memory tags
for await (const chunk of maskStreamTags(
  response.textStream,
  "working_memory",
)) {
  process.stdout.write(chunk);
}

// Without masking: "Let me help you! <working_memory>...</working_memory> Based on..."
// With masking: "Let me help you! Based on..."
```

You can also hook into memory update events:

```typescript copy showLineNumbers
const maskedStream = maskStreamTags(response.textStream, "working_memory", {
  onStart: () => showLoadingSpinner(),
  onEnd: () => hideLoadingSpinner(),
  onMask: (chunk) => console.debug(chunk),
});
```

The `maskStreamTags` utility:

- Removes content between specified XML tags in a streaming response
- Optionally provides lifecycle callbacks for memory updates
- Handles tags that might be split across stream chunks

### Accessing Thread and Resource IDs in Tools

When creating custom tools, you can access the `threadId` and `resourceId` directly in the tool's execute function.
These parameters are automatically provided by the Mastra runtime: ```typescript copy showLineNumbers import { Memory } from "@mastra/memory"; const memory = new Memory(); const myTool = createTool({ id: "Thread Info Tool", inputSchema: z.object({ fetchMessages: z.boolean().optional(), }), description: "A tool that demonstrates accessing thread and resource IDs", execute: async ({ threadId, resourceId, context }) => { // threadId and resourceId are directly available in the execute parameters console.log(`Executing in thread ${threadId}`); if (!context.fetchMessages) { return { threadId, resourceId }; } const recentMessages = await memory.query({ threadId, selectBy: { last: 5 }, }); return { threadId, resourceId, messageCount: recentMessages.length, }; }, }); ``` This allows tools to: - Access the current conversation context - Store or retrieve thread-specific data - Associate tool actions with specific users/resources - Maintain state across multiple tool invocations --- title: "Agent Tool Selection | Agent Documentation | Mastra" description: Tools are typed functions that can be executed by agents or workflows, with built-in integration access and parameter validation. Each tool has a schema that defines its inputs, an executor function that implements its logic, and access to configured integrations. --- # Agent Tool Selection Source: https://mastra.ai/docs/agents/02-adding-tools Tools are typed functions that can be executed by agents or workflows, with built-in integration access and parameter validation. Each tool has a schema that defines its inputs, an executor function that implements its logic, and access to configured integrations. ## Creating Tools In this section, we'll walk through the process of creating a tool that can be used by your agents. Let's create a simple tool that fetches current weather information for a given city. ```typescript filename="src/mastra/tools/weatherInfo.ts" copy import { createTool } from "@mastra/core/tools"; import { z } from "zod"; const getWeatherInfo = async (city: string) => { // Replace with an actual API call to a weather service const data = await fetch(`https://api.example.com/weather?city=${city}`).then( (r) => r.json(), ); return data; }; export const weatherInfo = createTool({ id: "Get Weather Information", inputSchema: z.object({ city: z.string(), }), description: `Fetches the current weather information for a given city`, execute: async ({ context: { city } }) => { console.log("Using tool to fetch weather information for", city); return await getWeatherInfo(city); }, }); ``` ## Adding Tools to an Agent Now we'll add the tool to an agent. We'll create an agent that can answer questions about the weather and configure it to use our `weatherInfo` tool. ```typescript filename="src/mastra/agents/weatherAgent.ts" import { Agent } from "@mastra/core/agent"; import { openai } from "@ai-sdk/openai"; import * as tools from "../tools/weatherInfo"; export const weatherAgent = new Agent({ name: "Weather Agent", instructions: "You are a helpful assistant that provides current weather information. When asked about the weather, use the weather information tool to fetch the data.", model: openai("gpt-4o-mini"), tools: { weatherInfo: tools.weatherInfo, }, }); ``` ## Registering the Agent We need to initialize Mastra with our agent. 
```typescript filename="src/index.ts" import { Mastra } from "@mastra/core"; import { weatherAgent } from "./agents/weatherAgent"; export const mastra = new Mastra({ agents: { weatherAgent }, }); ``` This registers your agent with Mastra, making it available for use. ## Abort Signals The abort signals from `generate` and `stream` (text generation) are forwarded to the tool execution. You can access them in the second parameter of the execute function and e.g. abort long-running computations or forward them to fetch calls inside tools. ```typescript import { Agent } from "@mastra/core/agent"; import { createTool } from "@mastra/core/tools"; import { z } from "zod"; const agent = new Agent({ name: "Weather agent", tools: { weather: createTool({ id: "Get Weather Information", description: 'Get the weather in a location', inputSchema: z.object({ location: z.string() }), execute: async ({ context: { location } }, { abortSignal }) => { return fetch( `https://api.weatherapi.com/v1/current.json?q=${location}`, { signal: abortSignal }, // forward the abort signal to fetch ); }, }), } }) const result = await agent.generate('What is the weather in San Francisco?', { abortSignal: myAbortSignal, // signal that will be forwarded to tools }); ``` ## Debugging Tools You can test tools using Vitest or any other testing framework. Writing unit tests for your tools ensures they behave as expected and helps catch errors early. ## Calling an Agent with a Tool Now we can call the agent, and it will use the tool to fetch the weather information. ## Example: Interacting with the Agent ```typescript filename="src/index.ts" import { mastra } from "./index"; async function main() { const agent = mastra.getAgent("weatherAgent"); const response = await agent.generate( "What's the weather like in New York City today?", ); console.log(response.text); } main(); ``` The agent will use the `weatherInfo` tool to get the current weather in New York City and respond accordingly. ## Vercel AI SDK Tool Format Mastra supports tools created using the Vercel AI SDK format. You can import and use these tools directly: ```typescript filename="src/mastra/tools/vercelTool.ts" copy import { tool } from 'ai'; import { z } from 'zod'; export const weatherInfo = tool({ description: "Fetches the current weather information for a given city", parameters: z.object({ city: z.string().describe("The city to get weather for") }), execute: async ({ city }) => { // Replace with actual API call const data = await fetch(`https://api.example.com/weather?city=${city}`); return data.json(); } }); ``` You can use Vercel tools alongside Mastra tools in your agents: ```typescript filename="src/mastra/agents/weatherAgent.ts" import { Agent } from "@mastra/core/agent"; import { openai } from "@ai-sdk/openai"; import { weatherInfo } from "../tools/vercelTool"; import * as mastraTools from "../tools/mastraTools"; export const weatherAgent = new Agent({ name: "Weather Agent", instructions: "You are a helpful assistant that provides weather information.", model: openai("gpt-4"), tools: { weatherInfo, // Vercel tool ...mastraTools // Mastra tools }, }); ``` Both tool formats will work seamlessly within your agent's workflow. 
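Whichever format you use, tools can also be unit tested in isolation, as mentioned in the Debugging Tools section above. The sketch below is one possible approach rather than a canonical one: it assumes the `weatherInfo` Mastra tool defined earlier, a hypothetical test file path, and stubs the global `fetch` so no network call is made; the exact `execute` call signature may vary between Mastra versions.

```typescript filename="src/mastra/tools/weatherInfo.test.ts" copy
import { describe, it, expect, vi, afterEach } from "vitest";
import { weatherInfo } from "./weatherInfo";

describe("weatherInfo tool", () => {
  afterEach(() => {
    vi.unstubAllGlobals();
  });

  it("returns the payload from the weather API", async () => {
    // Stub fetch so the test never hits the network
    const fetchMock = vi.fn(async () => ({
      json: async () => ({ city: "Paris", tempC: 21 }),
    }));
    vi.stubGlobal("fetch", fetchMock);

    const result = await weatherInfo.execute({ context: { city: "Paris" } });

    expect(result).toEqual({ city: "Paris", tempC: 21 });
    expect(fetchMock).toHaveBeenCalledWith(
      "https://api.example.com/weather?city=Paris",
    );
  });
});
```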
## Tool Design Best Practices When creating tools for your agents, following these guidelines will help ensure reliable and intuitive tool usage: ### Tool Descriptions Your tool's main description should focus on its purpose and value: - Keep descriptions simple and focused on __what__ the tool does - Emphasize the tool's primary use case - Avoid implementation details in the main description - Focus on helping the agent understand __when__ to use the tool ```typescript createTool({ id: "documentSearch", description: "Access the knowledge base to find information needed to answer user questions", // ... rest of tool configuration }); ``` ### Parameter Schemas Technical details belong in the parameter schemas, where they help the agent use the tool correctly: - Make parameters self-documenting with clear descriptions - Include default values and their implications - Provide examples where helpful - Describe the impact of different parameter choices ```typescript inputSchema: z.object({ query: z.string().describe("The search query to find relevant information"), limit: z.number().describe( "Number of results to return. Higher values provide more context, lower values focus on best matches" ), options: z.string().describe( "Optional configuration. Example: '{'filter': 'category=news'}'" ), }), ``` ### Agent Interaction Patterns Tools are more likely to be used effectively when: - Queries or tasks are complex enough to clearly require tool assistance - Agent instructions provide clear guidance on tool usage - Parameter requirements are well-documented in the schema - The tool's purpose aligns with the query's needs ### Common Pitfalls - Overloading the main description with technical details - Mixing implementation details with usage guidance - Unclear parameter descriptions or missing examples Following these practices helps ensure your tools are discoverable and usable by agents while maintaining clean separation between purpose (main description) and implementation details (parameter schemas). ## Model Context Protocol (MCP) Tools Mastra also supports tools from MCP-compatible servers through the `@mastra/mcp` package. MCP provides a standardized way for AI models to discover and interact with external tools and resources. This makes it easy to integrate third-party tools into your agents without writing custom integrations. For detailed information about using MCP tools, including configuration options and best practices, see our [MCP guide](/docs/agents/02a-mcp-guide). --- title: "Using MCP With Mastra | Agents | Mastra Docs" description: "Use MCP in Mastra to integrate third party tools and resources in your AI agents." --- # Using MCP With Mastra Source: https://mastra.ai/docs/agents/02a-mcp-guide [Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction) is a standardized way for AI models to discover and interact with external tools and resources. ## Overview MCP in Mastra provides a standardized way to connect to tool servers and supports both stdio and SSE-based connections. ## Installation Using pnpm: ```bash pnpm add @mastra/mcp@latest ``` Using npm: ```bash npm install @mastra/mcp@latest ``` ## Using MCP in Your Code The `MCPConfiguration` class provides a way to manage multiple tool servers in your Mastra applications without managing multiple MCP clients. 
You can configure both stdio-based and SSE-based servers: ```typescript import { MCPConfiguration } from "@mastra/mcp"; import { Agent } from "@mastra/core/agent"; import { openai } from "@ai-sdk/openai"; const mcp = new MCPConfiguration({ servers: { // stdio example sequential: { name: "sequential-thinking", server: { command: "npx", args: ["-y", "@modelcontextprotocol/server-sequential-thinking"], }, }, // SSE example weather: { url: new URL("http://localhost:8080/sse"), requestInit: { headers: { Authorization: "Bearer your-token", }, }, }, }, }); ``` ### Tools vs Toolsets The `MCPConfiguration` class provides two ways to access MCP tools, each suited for different use cases: #### Using Tools (`getTools()`) Use this approach when: - You have a single MCP connection - The tools are used by a single user/context - Tool configuration (API keys, credentials) remains constant - You want to initialize an Agent with a fixed set of tools ```typescript const agent = new Agent({ name: "CLI Assistant", instructions: "You help users with CLI tasks", model: openai("gpt-4o-mini"), tools: await mcp.getTools(), // Tools are fixed at agent creation }); ``` #### Using Toolsets (`getToolsets()`) Use this approach when: - You need per-request tool configuration - Tools need different credentials per user - Running in a multi-user environment (web app, API, etc) - Tool configuration needs to change dynamically ```typescript const mcp = new MCPConfiguration({ servers: { example: { command: "npx", args: ["-y", "@example/fakemcp"], env: { API_KEY: "your-api-key", }, }, }, }); // Get the current toolsets configured for this user const toolsets = await mcp.getToolsets(); // Use the agent with user-specific tool configurations const response = await agent.stream( "What's new in Mastra and how's the weather?", { toolsets, }, ); ``` ## MCP Registries MCP servers can be accessed through registries that provide curated collections of tools. Here's how to use tools from different registries: ### Composio.dev Registry [Composio.dev](https://composio.dev) provides a registry of [SSE-based MCP servers](https://mcp.composio.dev) that can be easily integrated with Mastra. The SSE URL that's generated for Cursor is compatible with Mastra - you can use it directly in your configuration: ```typescript const mcp = new MCPConfiguration({ servers: { googleSheets: { url: new URL("https://mcp.composio.dev/googlesheets/[private-url-path]"), }, gmail: { url: new URL("https://mcp.composio.dev/gmail/[private-url-path]"), }, }, }); ``` When using Composio-provided tools, you can authenticate with services (like Google Sheets or Gmail) directly through conversation with your agent. The tools include authentication capabilities that guide you through the process while chatting. Note: The Composio.dev integration is best suited for single-user scenarios like personal automation, as the SSE URL is tied to your account and can't be used for multiple users. Each URL represents a single account's authentication context. 
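Once a registry server is configured, wiring its tools into an agent follows the same `getTools()` pattern shown earlier. A minimal sketch, assuming the Composio configuration above — the agent name and prompt are illustrative, and the `[private-url-path]` placeholders still need to be filled in with your own URLs:

```typescript
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";

// `mcp` is the MCPConfiguration with the Composio servers defined above
const agent = new Agent({
  name: "Workspace Assistant",
  instructions: "You help the user manage their spreadsheets and email.",
  model: openai("gpt-4o-mini"),
  tools: await mcp.getTools(), // tools are fixed at agent creation
});

const response = await agent.generate(
  "Add a 'Q3 budget' sheet to my planning spreadsheet.",
);
console.log(response.text);
```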
### Smithery.ai Registry [Smithery.ai](https://smithery.ai) provides a registry of MCP servers that you can easily use with Mastra: ```typescript // Unix/Mac const mcp = new MCPConfiguration({ servers: { sequentialThinking: { command: "npx", args: [ "-y", "@smithery/cli@latest", "run", "@smithery-ai/server-sequential-thinking", "--config", "{}", ], }, }, }); // Windows const mcp = new MCPConfiguration({ servers: { sequentialThinking: { command: "cmd", args: [ "/c", "npx", "-y", "@smithery/cli@latest", "run", "@smithery-ai/server-sequential-thinking", "--config", "{}", ], }, }, }); ``` This example is adapted from the Claude integration example in the Smithery documentation. ## Using the Mastra Documentation Server Looking to use Mastra's MCP documentation server in your IDE? Check out our [MCP Documentation Server guide](/docs/getting-started/mcp-docs-server) to get started. ## Next Steps - Learn more about [MCPConfiguration](/docs/reference/tools/mcp-configuration) - Check out our [example projects](/examples) that use MCP # Adding Voice to Agents Source: https://mastra.ai/docs/agents/03-adding-voice Mastra agents can be enhanced with voice capabilities, allowing them to speak responses and listen to user input. You can configure an agent to use either a single voice provider or combine multiple providers for different operations. ## Using a Single Provider The simplest way to add voice to an agent is to use a single provider for both speaking and listening: ```typescript import { createReadStream } from "fs"; import path from "path"; import { Agent } from "@mastra/core/agent"; import { OpenAIVoice } from "@mastra/voice-openai"; import { openai } from "@ai-sdk/openai"; // Initialize the voice provider with default settings const voice = new OpenAIVoice(); // Create an agent with voice capabilities export const agent = new Agent({ name: "Agent", instructions: `You are a helpful assistant with both STT and TTS capabilities.`, model: openai("gpt-4o"), voice, }); // The agent can now use voice for interaction await agent.voice.speak("Hello, I'm your AI assistant!"); // Read audio file and transcribe const audioFilePath = path.join(process.cwd(), "/audio.m4a"); const audioStream = createReadStream(audioFilePath); try { const transcription = await agent.listen(audioStream, { filetype: "m4a" }); } catch (error) { console.error("Error transcribing audio:", error); } ``` ## Using Multiple Providers For more flexibility, you can use different providers for speaking and listening using the CompositeVoice class: ```typescript import { Agent } from "@mastra/core/agent"; import { CompositeVoice } from "@mastra/core/voice"; import { OpenAIVoice } from "@mastra/voice-openai"; import { PlayAIVoice } from "@mastra/voice-playai"; import { openai } from "@ai-sdk/openai"; export const agent = new Agent({ name: "Agent", instructions: `You are a helpful assistant with both STT and TTS capabilities.`, model: openai("gpt-4o"), // Create a composite voice using OpenAI for listening and PlayAI for speaking voice: new CompositeVoice({ listenProvider: new OpenAIVoice(), speakProvider: new PlayAIVoice(), }), }); ``` ## Working with Audio Streams The `speak()` and `listen()` methods work with Node.js streams. 
Here's how to save and load audio files: ### Saving Speech Output ```typescript import { createWriteStream } from "fs"; import path from "path"; // Generate speech and save to file const audio = await agent.speak("Hello, World!"); const filePath = path.join(process.cwd(), "agent.mp3"); const writer = createWriteStream(filePath); audio.pipe(writer); await new Promise((resolve, reject) => { writer.on("finish", () => resolve()); writer.on("error", reject); }); ``` ### Transcribing Audio Input ```typescript import { createReadStream } from "fs"; import path from "path"; // Read audio file and transcribe const audioFilePath = path.join(process.cwd(), "/agent.m4a"); const audioStream = createReadStream(audioFilePath); try { console.log("Transcribing audio file..."); const transcription = await agent.listen(audioStream, { filetype: "m4a" }); console.log("Transcription:", transcription); } catch (error) { console.error("Error transcribing audio:", error); } ``` ## Real-time Voice Interactions For more dynamic and interactive voice experiences, you can use real-time voice providers that support speech-to-speech capabilities: ```typescript import { Agent } from "@mastra/core/agent"; import { OpenAIRealtimeVoice } from "@mastra/voice-openai-realtime"; import { search, calculate } from "../tools"; // Initialize the realtime voice provider const voice = new OpenAIRealtimeVoice({ chatModel: { apiKey: process.env.OPENAI_API_KEY, model: 'gpt-4o-mini-realtime', }, speaker: 'alloy' }); // Create an agent with speech-to-speech voice capabilities export const agent = new Agent({ name: 'Agent', instructions: `You are a helpful assistant with speech-to-speech capabilities.`, model: openai('gpt-4o'), tools: { // Tools configured on Agent are passed to voice provider search, calculate }, voice }); // Establish a WebSocket connection await agent.voice.connect(); // Start a conversation agent.voice.speak("Hello, I'm your AI assistant!"); // Stream audio from a microphone const microphoneStream = getMicrophoneStream(); agent.voice.send(microphoneStream); // When done with the conversation agent.voice.close(); ``` ### Event System The realtime voice provider emits several events you can listen for: ```typescript // Listen for speech audio data sent from voice provider agent.voice.on('speaking', ({ audio }) => { // audio contains ReadableStream or Int16Array audio data }); // Listen for transcribed text sent from both voice provider and user agent.voice.on('writing', ({ text, role }) => { console.log(`${role} said: ${text}`); }); // Listen for errors agent.voice.on('error', (error) => { console.error('Voice error:', error); }); ``` --- title: "MastraClient" description: "Learn how to set up and use the Mastra Client SDK" --- # Mastra Client SDK Source: https://mastra.ai/docs/deployment/client The Mastra Client SDK provides a simple and type-safe interface for interacting with your [Mastra Server](/docs/deployment/server) from your client environment. 
## Development Requirements

To ensure smooth local development, make sure you have:

- Node.js 18.x or later installed
- TypeScript 4.7+ (if using TypeScript)
- A modern browser environment with Fetch API support
- Your local Mastra server running (typically on port 4111)

## Installation

Using npm:

```bash copy
npm install @mastra/client-js
```

Using yarn:

```bash copy
yarn add @mastra/client-js
```

Using pnpm:

```bash copy
pnpm add @mastra/client-js
```

## Initialize Mastra Client

To get started, you'll need to initialize your MastraClient with the necessary parameters:

```typescript
import { MastraClient } from "@mastra/client-js";

const client = new MastraClient({
  baseUrl: "http://localhost:4111", // Default Mastra development server port
});
```

### Configuration Options

You can customize the client with various options:

```typescript
const client = new MastraClient({
  // Required
  baseUrl: "http://localhost:4111",

  // Optional configurations for development
  retries: 3, // Number of retry attempts
  backoffMs: 300, // Initial retry backoff time
  maxBackoffMs: 5000, // Maximum retry backoff time
  headers: {
    // Custom headers for development
    "X-Development": "true",
  },
});
```

## Example

Once your MastraClient is initialized, you can start making client calls via the type-safe interface:

```typescript
// Get a reference to your local agent
const agent = client.getAgent("dev-agent-id");

// Generate responses
const response = await agent.generate({
  messages: [
    {
      role: "user",
      content: "Hello, I'm testing the local development setup!",
    },
  ],
});
```

## Available Features

Mastra client exposes all resources served by the Mastra Server:

- [**Agents**](/docs/reference/client-js/agents): Create and manage AI agents, generate responses, and handle streaming interactions
- [**Memory**](/docs/reference/client-js/memory): Manage conversation threads and message history
- [**Tools**](/docs/reference/client-js/tools): Access and execute tools available to agents
- [**Workflows**](/docs/reference/client-js/workflows): Create and manage automated workflows
- [**Vectors**](/docs/reference/client-js/vectors): Handle vector operations for semantic search and similarity matching

## Best Practices

1. **Error Handling**: Implement proper error handling for development scenarios
2. **Environment Variables**: Use environment variables for configuration
3. **Debugging**: Enable detailed logging when needed

```typescript
// Example with error handling and logging
try {
  const agent = client.getAgent("dev-agent-id");
  const response = await agent.generate({
    messages: [{ role: "user", content: "Test message" }],
  });
  console.log("Response:", response);
} catch (error) {
  console.error("Development error:", error);
}
```

---
title: "Serverless Deployment"
description: "Build and deploy Mastra applications using platform-specific deployers or standard HTTP servers"
---

# Serverless Deployment

Source: https://mastra.ai/docs/deployment/deployment

This guide covers deploying Mastra to Cloudflare Workers, Vercel, and Netlify using platform-specific deployers. For self-hosted Node.js server deployment, see the [Creating A Mastra Server](/docs/deployment/server) guide.
## Prerequisites Before you begin, ensure you have: - **Node.js** installed (version 18 or higher is recommended) - If using a platform-specific deployer: - An account with your chosen platform - Required API keys or credentials ## Serverless Platform Deployers Platform-specific deployers handle configuration and deployment for: - **[Cloudflare Workers](/docs/reference/deployer/cloudflare)** - **[Vercel](/docs/reference/deployer/vercel)** - **[Netlify](/docs/reference/deployer/netlify)** ### Installing Deployers ```bash copy # For Cloudflare npm install @mastra/deployer-cloudflare # For Vercel npm install @mastra/deployer-vercel # For Netlify npm install @mastra/deployer-netlify ``` ### Configuring Deployers Configure the deployer in your entry file: ```typescript copy showLineNumbers import { Mastra, createLogger } from '@mastra/core'; import { CloudflareDeployer } from '@mastra/deployer-cloudflare'; export const mastra = new Mastra({ agents: { /* your agents here */ }, logger: createLogger({ name: 'MyApp', level: 'debug' }), deployer: new CloudflareDeployer({ scope: 'your-cloudflare-scope', projectName: 'your-project-name', // See complete configuration options in the reference docs }), }); ``` ### Deployer Configuration Each deployer has specific configuration options. Below are basic examples, but refer to the reference documentation for complete details. #### Cloudflare Deployer ```typescript copy showLineNumbers new CloudflareDeployer({ scope: 'your-cloudflare-account-id', projectName: 'your-project-name', // For complete configuration options, see the reference documentation }) ``` [View Cloudflare Deployer Reference →](/docs/reference/deployer/cloudflare) #### Vercel Deployer ```typescript copy showLineNumbers new VercelDeployer({ teamId: 'your-vercel-team-id', projectName: 'your-project-name', token: 'your-vercel-token' // For complete configuration options, see the reference documentation }) ``` [View Vercel Deployer Reference →](/docs/reference/deployer/vercel) #### Netlify Deployer ```typescript copy showLineNumbers new NetlifyDeployer({ scope: 'your-netlify-team-slug', projectName: 'your-project-name', token: 'your-netlify-token' }) ``` [View Netlify Deployer Reference →](/docs/reference/deployer/netlify) ## Environment Variables Required variables: 1. Platform deployer variables (if using platform deployers): - Platform credentials 2. Agent API keys: - `OPENAI_API_KEY` - `ANTHROPIC_API_KEY` 3. Server configuration (for universal deployment): - `PORT`: HTTP server port (default: 3000) - `HOST`: Server host (default: 0.0.0.0) ## Platform Documentation Platform deployment references: - [Cloudflare Workers](https://developers.cloudflare.com/workers/) - [Vercel](https://vercel.com/docs) - [Netlify](https://docs.netlify.com/) --- title: "Creating A Mastra Server" description: "Configure and customize the Mastra server with middleware and other options" --- # Creating A Mastra Server Source: https://mastra.ai/docs/deployment/server While developing or when you deploy a Mastra application, it runs as an HTTP server that exposes your agents, workflows, and other functionality as API endpoints. This page explains how to configure and customize the server behavior. ## Server Architecture Mastra uses [Hono](https://hono.dev) as its underlying HTTP server framework. When you build a Mastra application using `mastra build`, it generates a Hono-based HTTP server in the `.mastra` directory. 
The server provides: - API endpoints for all registered agents - API endpoints for all registered workflows - Custom middleware support ## Server Middleware Mastra allows you to configure custom middleware functions that will be applied to API routes. This is useful for adding authentication, logging, CORS, or other HTTP-level functionality to your API endpoints. ```typescript copy showLineNumbers import { Mastra } from '@mastra/core'; export const mastra = new Mastra({ // Other configuration options serverMiddleware: [ { handler: async (c, next) => { // Example: Add authentication check const authHeader = c.req.header('Authorization'); if (!authHeader) { return new Response('Unauthorized', { status: 401 }); } // Continue to the next middleware or route handler await next(); }, path: '/api/*', // Optional: defaults to '/api/*' if not specified }, { handler: async (c, next) => { // Example: Add request logging console.log(`${c.req.method} ${c.req.url}`); await next(); }, // This middleware will apply to all routes since no path is specified } ] }); ``` ### Middleware Behavior Each middleware function: - Receives a Hono context object (`c`) and a `next` function - Can return a `Response` to short-circuit the request handling - Can call `next()` to continue to the next middleware or route handler - Can optionally specify a path pattern (defaults to '/api/*') ### Common Middleware Use Cases #### Authentication ```typescript copy { handler: async (c, next) => { const authHeader = c.req.header('Authorization'); if (!authHeader || !authHeader.startsWith('Bearer ')) { return new Response('Unauthorized', { status: 401 }); } const token = authHeader.split(' ')[1]; // Validate token here await next(); }, path: '/api/*', } ``` #### CORS Support ```typescript copy { handler: async (c, next) => { // Add CORS headers c.header('Access-Control-Allow-Origin', '*'); c.header('Access-Control-Allow-Methods', 'GET, POST, PUT, DELETE, OPTIONS'); c.header('Access-Control-Allow-Headers', 'Content-Type, Authorization'); // Handle preflight requests if (c.req.method === 'OPTIONS') { return new Response(null, { status: 204 }); } await next(); } } ``` #### Request Logging ```typescript copy { handler: async (c, next) => { const start = Date.now(); await next(); const duration = Date.now() - start; console.log(`${c.req.method} ${c.req.url} - ${duration}ms`); } } ``` ## Deployment Since Mastra builds to a standard Node.js server, you can deploy to any platform that runs Node.js applications: - Cloud VMs (AWS EC2, DigitalOcean Droplets, GCP Compute Engine) - Container platforms (Docker, Kubernetes) - Platform as a Service (Heroku, Railway) - Self-hosted servers ### Building Build the application: ```bash copy # Build from current directory mastra build # Or specify a directory mastra build --dir ./my-project ``` The build process: 1. Locates entry file (`src/mastra/index.ts` or `src/mastra/index.js`) 2. Creates `.mastra` output directory 3. Bundles code using Rollup with tree shaking and source maps 4. Generates [Hono](https://hono.dev) HTTP server See [`mastra build`](/docs/reference/cli/build) for all options. ### Running the Server Start the HTTP server: ```bash copy node .mastra/output/index.mjs ``` ## Serverless Deployment Mastra also supports serverless deployment on Cloudflare Workers, Vercel, and Netlify. See our [Serverless Deployment](/docs/deployment/deployment) guide for setup instructions. --- title: "Overview" description: "Understanding how to evaluate and measure AI agent quality using Mastra evals." 
--- # Testing your agents with evals Source: https://mastra.ai/docs/evals/00-overview While traditional software tests have clear pass/fail conditions, AI outputs are non-deterministic — they can vary with the same input. Evals help bridge this gap by providing quantifiable metrics for measuring agent quality. Evals are automated tests that evaluate Agents outputs using model-graded, rule-based, and statistical methods. Each eval returns a normalized score between 0-1 that can be logged and compared. Evals can be customized with your own prompts and scoring functions. Evals can be run in the cloud, capturing real-time results. But evals can also be part of your CI/CD pipeline, allowing you to test and monitor your agents over time. ## Types of Evals There are different kinds of evals, each serving a specific purpose. Here are some common types: 1. **Textual Evals**: Evaluate accuracy, reliability, and context understanding of agent responses 2. **Classification Evals**: Measure accuracy in categorizing data based on predefined categories 3. **Tool Usage Evals**: Assess how effectively an agent uses external tools or APIs 4. **Prompt Engineering Evals**: Explore impact of different instructions and input formats ## Getting Started Evals need to be added to an agent. Here's an example using the summarization, content similarity, and tone consistency metrics: ```typescript copy showLineNumbers filename="src/mastra/agents/index.ts" import { Agent } from "@mastra/core/agent"; import { openai } from "@ai-sdk/openai"; import { SummarizationMetric, } from "@mastra/evals/llm"; import { ContentSimilarityMetric, ToneConsistencyMetric } from "@mastra/evals/nlp"; const model = openai("gpt-4o"); export const myAgent = new Agent({ name: "ContentWriter", instructions: "You are a content writer that creates accurate summaries", model, evals: { summarization: new SummarizationMetric(model), contentSimilarity: new ContentSimilarityMetric(), tone: new ToneConsistencyMetric() } }); ``` You can view eval results in the Mastra dashboard when using `mastra dev`. ## Beyond Automated Testing While automated evals are valuable, high-performing AI teams often combine them with: 1. **A/B Testing**: Compare different versions with real users 2. **Human Review**: Regular review of production data and traces 3. **Continuous Monitoring**: Track eval metrics over time to detect regressions ## Understanding Eval Results Each eval metric measures a specific aspect of your agent's output. Here's how to interpret and improve your results: ### Understanding Scores For any metric: 1. Check the metric documentation to understand the scoring process 2. Look for patterns in when scores change 3. Compare scores across different inputs and contexts 4. Track changes over time to spot trends ### Improving Results When scores aren't meeting your targets: 1. Check your instructions - Are they clear? Try making them more specific 2. Look at your context - Is it giving the agent what it needs? 3. Simplify your prompts - Break complex tasks into smaller steps 4. Add guardrails - Include specific rules for tricky cases ### Maintaining Quality Once you're hitting your targets: 1. Monitor stability - Do scores remain consistent? 2. Document what works - Keep notes on successful approaches 3. Test edge cases - Add examples that cover unusual scenarios 4. Fine-tune - Look for ways to improve efficiency See [Textual Evals](/docs/evals/01-textual-evals) for more info on what evals can do. 
For more info on how to create your own evals, see the [Custom Evals](/docs/evals/02-custom-eval) guide. For running evals in your CI pipeline, see the [Running in CI](/docs/evals/03-running-in-ci) guide.

---
title: "Textual Evals"
description: "Understand how Mastra uses LLM-as-judge methodology to evaluate text quality."
---

# Textual Evals

Source: https://mastra.ai/docs/evals/01-textual-evals

Textual evals use an LLM-as-judge methodology to evaluate agent outputs. This approach leverages language models to assess various aspects of text quality, similar to how a teaching assistant might grade assignments using a rubric.

Each eval focuses on specific quality aspects and returns a score between 0 and 1, providing quantifiable metrics for non-deterministic AI outputs.

Mastra provides several eval metrics for assessing agent outputs. Mastra is not limited to these metrics, and you can also [define your own evals](/docs/evals/02-custom-eval).

## Why Use Textual Evals?

Textual evals help ensure your agent:

- Produces accurate and reliable responses
- Uses context effectively
- Follows output requirements
- Maintains consistent quality over time

## Available Metrics

### Accuracy and Reliability

These metrics evaluate how correct, truthful, and complete your agent's answers are:

- [`hallucination`](/docs/reference/evals/hallucination): Detects facts or claims not present in provided context
- [`faithfulness`](/docs/reference/evals/faithfulness): Measures how accurately responses represent provided context
- [`content-similarity`](/docs/reference/evals/content-similarity): Evaluates consistency of information across different phrasings
- [`completeness`](/docs/reference/evals/completeness): Checks if responses include all necessary information
- [`answer-relevancy`](/docs/reference/evals/answer-relevancy): Assesses how well responses address the original query
- [`textual-difference`](/docs/reference/evals/textual-difference): Measures textual differences between strings

### Understanding Context

These metrics evaluate how well your agent uses provided context:

- [`context-position`](/docs/reference/evals/context-position): Analyzes where context appears in responses
- [`context-precision`](/docs/reference/evals/context-precision): Evaluates whether context chunks are grouped logically
- [`context-relevancy`](/docs/reference/evals/context-relevancy): Measures use of appropriate context pieces
- [`contextual-recall`](/docs/reference/evals/contextual-recall): Assesses completeness of context usage

### Output Quality

These metrics evaluate adherence to format and style requirements:

- [`tone`](/docs/reference/evals/tone-consistency): Measures consistency in formality, complexity, and style
- [`toxicity`](/docs/reference/evals/toxicity): Detects harmful or inappropriate content
- [`bias`](/docs/reference/evals/bias): Detects potential biases in the output
- [`prompt-alignment`](/docs/reference/evals/prompt-alignment): Checks adherence to explicit instructions like length restrictions, formatting requirements, or other constraints
- [`summarization`](/docs/reference/evals/summarization): Evaluates information retention and conciseness
- [`keyword-coverage`](/docs/reference/evals/keyword-coverage): Assesses technical terminology usage

---
title: "Create your own Eval"
description: "Mastra allows you to create your own evals; here is how."
---

# Create your own Eval

Source: https://mastra.ai/docs/evals/02-custom-eval

Creating your own eval is as easy as creating a new function.
You simply create a class that extends the `Metric` class and implement the `measure` method. ## Basic example For a simple example of creating a custom metric that checks if the output contains certain words, see our [Word Inclusion example](/examples/evals/word-inclusion). ## Creating a custom LLM-Judge A custom LLM judge helps evaluate specific aspects of your AI's responses. Think of it like having an expert reviewer for your particular use case: - Medical Q&A → Judge checks for medical accuracy and safety - Customer Service → Judge evaluates tone and helpfulness - Code Generation → Judge verifies code correctness and style For a practical example, see how we evaluate [Chef Michel's](/docs/guides/01-chef-michel) recipes for gluten content in our [Gluten Checker example](/examples/evals/custom-eval). --- title: "Running in CI" description: "Learn how to run Mastra evals in your CI/CD pipeline to monitor agent quality over time." --- # Running Evals in CI Source: https://mastra.ai/docs/evals/03-running-in-ci Running evals in your CI pipeline helps bridge this gap by providing quantifiable metrics for measuring agent quality over time. ## Setting Up CI Integration We support any testing framework that supports ESM modules. For example, you can use [Vitest](https://vitest.dev/), [Jest](https://jestjs.io/) or [Mocha](https://mochajs.org/) to run evals in your CI/CD pipeline. ```typescript copy showLineNumbers filename="src/mastra/agents/index.test.ts" import { describe, it, expect } from 'vitest'; import { evaluate } from "@mastra/evals"; import { ToneConsistencyMetric } from "@mastra/evals/nlp"; import { myAgent } from './index'; describe('My Agent', () => { it('should validate tone consistency', async () => { const metric = new ToneConsistencyMetric(); const result = await evaluate(myAgent, 'Hello, world!', metric) expect(result.score).toBe(1); }); }); ``` You will need to configure a testSetup and globalSetup script for your testing framework to capture the eval results. It allows us to show these results in your mastra dashboard. ## Framework Configuration ### Vitest Setup Add these files to your project to run evals in your CI/CD pipeline: ```typescript copy showLineNumbers filename="globalSetup.ts" import { globalSetup } from '@mastra/evals'; export default function setup() { globalSetup() } ``` ```typescript copy showLineNumbers filename="testSetup.ts" import { beforeAll } from 'vitest'; import { attachListeners } from '@mastra/evals'; beforeAll(async () => { await attachListeners(); }); ``` ```typescript copy showLineNumbers filename="vitest.config.ts" import { defineConfig } from 'vitest/config' export default defineConfig({ test: { globalSetup: './globalSetup.ts', setupFiles: ['./testSetup.ts'], }, }) ``` ## Storage Configuration To store eval results in Mastra Storage and capture results in the Mastra dashboard: ```typescript copy showLineNumbers filename="testSetup.ts" import { beforeAll } from 'vitest'; import { attachListeners } from '@mastra/evals'; import { mastra } from './your-mastra-setup'; beforeAll(async () => { // Store evals in Mastra Storage (requires storage to be enabled) await attachListeners(mastra); }); ``` With file storage, evals persist and can be queried later. With memory storage, evals are isolated to the test process. 
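If you want the pipeline to fail when quality regresses, one pattern is to assert a minimum score for each metric you care about. Here is a minimal sketch along the lines of the Vitest example above; the filename, prompt, metric choices, and the `0.7` threshold are placeholders to adapt to your agent:

```typescript copy showLineNumbers filename="src/mastra/agents/quality.test.ts"
import { describe, it, expect } from 'vitest';
import { evaluate } from "@mastra/evals";
import { ContentSimilarityMetric, ToneConsistencyMetric } from "@mastra/evals/nlp";
import { myAgent } from './index';

describe('My Agent quality gates', () => {
  it('meets minimum eval scores', async () => {
    const metrics = [new ContentSimilarityMetric(), new ToneConsistencyMetric()];

    for (const metric of metrics) {
      const result = await evaluate(
        myAgent,
        'Summarize our launch announcement in two sentences.',
        metric,
      );

      // Fail the CI run if any metric drops below the chosen threshold.
      expect(result.score).toBeGreaterThanOrEqual(0.7);
    }
  });
});
```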
--- title: "Licensing" description: "Mastra License" --- # License Source: https://mastra.ai/docs/faq ## Elastic License 2.0 (ELv2) Mastra is licensed under the Elastic License 2.0 (ELv2), a modern license designed to balance open-source principles with sustainable business practices. ### What is Elastic License 2.0? The Elastic License 2.0 is a source-available license that grants users broad rights to use, modify, and distribute the software while including specific limitations to protect the project's sustainability. It allows: - Free use for most purposes - Viewing, modifying, and redistributing the source code - Creating and distributing derivative works - Commercial use within your organization The primary limitation is that you cannot provide Mastra as a hosted or managed service that offers users access to the substantial functionality of the software. ### Why We Chose Elastic License 2.0 We selected the Elastic License 2.0 for several important reasons: 1. **Sustainability**: It enables us to maintain a healthy balance between openness and the ability to sustain long-term development. 2. **Innovation Protection**: It ensures we can continue investing in innovation without concerns about our work being repackaged as competing services. 3. **Community Focus**: It maintains the spirit of open source by allowing users to view, modify, and learn from our code while protecting our ability to support the community. 4. **Business Clarity**: It provides clear guidelines for how Mastra can be used in commercial contexts. ### Building Your Business with Mastra Despite the licensing restrictions, there are numerous ways to build successful businesses using Mastra: #### Allowed Business Models - **Building Applications**: Create and sell applications built with Mastra - **Offering Consulting Services**: Provide expertise, implementation, and customization services - **Developing Custom Solutions**: Build bespoke AI solutions for clients using Mastra - **Creating Add-ons and Extensions**: Develop and sell complementary tools that extend Mastra's functionality - **Training and Education**: Offer courses and educational materials about using Mastra effectively #### Examples of Compliant Usage - A company builds an AI-powered customer service application using Mastra and sells it to clients - A consulting firm offers implementation and customization services for Mastra - A developer creates specialized agents and tools with Mastra and licenses them to other businesses - A startup builds a vertical-specific solution (e.g., healthcare AI assistant) powered by Mastra #### What to Avoid The main restriction is that you cannot offer Mastra itself as a hosted service where users access its core functionality. This means: - Don't create a SaaS platform that is essentially Mastra with minimal modifications - Don't offer a managed Mastra service where customers are primarily paying to use Mastra's features ### Questions About Licensing? If you have specific questions about how the Elastic License 2.0 applies to your use case, please [contact us](https://discord.gg/BTYqqHKUrf) on Discord for clarification. We're committed to supporting legitimate business use cases while protecting the sustainability of the project. --- title: "Getting started with Mastra and NextJS | Mastra Guides" description: Guide on integrating Mastra with NextJS. 
--- import { Callout, Steps, Tabs } from "nextra/components"; # Integrate Mastra in your Next.js project Source: https://mastra.ai/docs/frameworks/01-next-js There are two main ways to integrate Mastra with your Next.js application: as a separate backend service or directly integrated into your Next.js app. ## 1. Separate Backend Integration Best for larger projects where you want to: - Scale your AI backend independently - Keep clear separation of concerns - Have more deployment flexibility ### Create Mastra Backend Create a new Mastra project using our CLI: ```bash copy npx create-mastra@latest ``` ```bash copy npm create mastra ``` ```bash copy yarn create mastra ``` ```bash copy pnpm create mastra ``` For detailed setup instructions, see our [installation guide](/docs/getting-started/installation). ### Install MastraClient ```bash copy npm install @mastra/client-js ``` ```bash copy yarn add @mastra/client-js ``` ```bash copy pnpm add @mastra/client-js ``` ### Use MastraClient Create a client instance and use it in your Next.js application: ```typescript filename="lib/mastra.ts" copy import { MastraClient } from '@mastra/client-js'; // Initialize the client export const mastraClient = new MastraClient({ baseUrl: process.env.NEXT_PUBLIC_MASTRA_API_URL || 'http://localhost:4111', }); ``` Example usage in your React component: ```typescript filename="app/components/SimpleWeather.tsx" copy 'use client' import { mastraClient } from '@/lib/mastra' export function SimpleWeather() { async function handleSubmit(formData: FormData) { const city = formData.get('city') const agent = mastraClient.getAgent('weatherAgent') try { const response = await agent.generate({ messages: [{ role: 'user', content: `What's the weather like in ${city}?` }], }) // Handle the response console.log(response.text) } catch (error) { console.error('Error:', error) } } return (
    <form action={handleSubmit}>
      <input name="city" placeholder="Enter a city name" />
      <button type="submit">Get Weather</button>
    </form>
  )
}
```

### Deployment

When you're ready to deploy, you can use any of our platform-specific deployers (Vercel, Netlify, Cloudflare) or deploy to any Node.js hosting platform. Check our [deployment guide](/docs/deployment/deployment) for detailed instructions.
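If you want to display the agent's reply in the UI instead of logging it, one option is to keep it in component state. A minimal sketch that builds on the `SimpleWeather` component above; the state handling, placeholder text, and fallback message are illustrative choices:

```typescript filename="app/components/SimpleWeather.tsx" copy
'use client'

import { useState } from 'react'
import { mastraClient } from '@/lib/mastra'

export function SimpleWeather() {
  const [answer, setAnswer] = useState('')

  async function handleSubmit(formData: FormData) {
    const city = formData.get('city')
    const agent = mastraClient.getAgent('weatherAgent')

    try {
      const response = await agent.generate({
        messages: [{ role: 'user', content: `What's the weather like in ${city}?` }],
      })
      setAnswer(response.text)
    } catch (error) {
      console.error('Error:', error)
      setAnswer('Could not fetch the weather right now.')
    }
  }

  return (
    <div>
      <form action={handleSubmit}>
        <input name="city" placeholder="Enter a city name" />
        <button type="submit">Get Weather</button>
      </form>
      {answer && <p>{answer}</p>}
    </div>
  )
}
```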
## 2. Direct Integration Better for smaller projects or prototypes. This approach bundles Mastra directly with your Next.js application. ### Initialize Mastra in your Next.js Root First, navigate to your Next.js project root and initialize Mastra: ```bash copy cd your-nextjs-app ``` Then run the initialization command: ```bash copy npx mastra@latest init ``` ```bash copy yarn dlx mastra@latest init ``` ```bash copy pnpm dlx mastra@latest init ``` This will set up Mastra in your Next.js project. For more details about init and other configuration options, see our [mastra init reference](/docs/reference/cli/init). ### Configure Next.js Add to your `next.config.js`: ```js filename="next.config.js" copy /** @type {import('next').NextConfig} */ const nextConfig = { serverExternalPackages: ["@mastra/*"], // ... your other Next.js config } module.exports = nextConfig ``` #### Server Actions Example ```typescript filename="app/actions.ts" copy 'use server' import { mastra } from '@/mastra' export async function getWeatherInfo(city: string) { const agent = mastra.getAgent('weatherAgent') const result = await agent.generate(`What's the weather like in ${city}?`) return result } ``` Use it in your component: ```typescript filename="app/components/Weather.tsx" copy 'use client' import { getWeatherInfo } from '../actions' export function Weather() { async function handleSubmit(formData: FormData) { const city = formData.get('city') as string const result = await getWeatherInfo(city) // Handle the result console.log(result) } return (
    <form action={handleSubmit}>
      <input name="city" placeholder="Enter a city name" />
      <button type="submit">Get Weather</button>
    </form>
  )
}
```

#### API Routes Example

```typescript filename="app/api/chat/route.ts" copy
import { mastra } from '@/mastra'

export async function POST(req: Request) {
  const { city } = await req.json()
  const agent = mastra.getAgent('weatherAgent')
  const result = await agent.stream(`What's the weather like in ${city}?`)

  return result.toDataStreamResponse()
}
```

### Deployment

When using direct integration, your Mastra instance is deployed alongside your Next.js application. Ensure you:

- Set up environment variables for your LLM API keys in your deployment platform
- Implement proper error handling for production use (see the sketch below)
- Monitor your AI agent's performance and costs
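To illustrate the error-handling point above, here is a minimal sketch that wraps the `getWeatherInfo` server action from earlier; the return shape and fallback message are illustrative choices:

```typescript filename="app/actions.ts" copy
'use server'

import { mastra } from '@/mastra'

export async function getWeatherInfo(city: string) {
  try {
    const agent = mastra.getAgent('weatherAgent')
    const result = await agent.generate(`What's the weather like in ${city}?`)

    return { ok: true, text: result.text }
  } catch (error) {
    // Log server-side and return a safe fallback to the client.
    console.error('weatherAgent request failed', error)

    return { ok: false, text: 'Sorry, weather information is unavailable right now.' }
  }
}
```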
## Observability Mastra provides built-in observability features to help you monitor, debug, and optimize your AI operations. This includes: - Tracing of AI operations and their performance - Logging of prompts, completions, and errors - Integration with observability platforms like Langfuse and LangSmith For detailed setup instructions and configuration options specific to Next.js local development, see our [Next.js Observability Configuration Guide](/docs/deployment/logging-and-tracing#nextjs-configuration). --- title: "Using with AI SDK" description: "Learn how Mastra leverages the AI SDK library and how you can leverage it further with Mastra" --- # AI SDK Source: https://mastra.ai/docs/frameworks/02-ai-sdk Mastra leverages AI SDK's model routing (a unified interface on top of OpenAI, Anthropic, etc), structured output, and tool calling. We explain this in greater detail in [this blog post](https://mastra.ai/blog/using-ai-sdk-with-mastra) ## Mastra + AI SDK Mastra acts as a layer on top of AI SDK to help teams productionize their proof-of-concepts quickly and easily. Agent interaction trace showing spans, LLM calls, and tool executions ## Model routing When creating agents in Mastra, you can specify any AI SDK-supported model: ```typescript import { openai } from "@ai-sdk/openai"; import { Agent } from "@mastra/core/agent"; const agent = new Agent({ name: "WeatherAgent", instructions: "Instructions for the agent...", model: openai("gpt-4-turbo"), // Model comes directly from AI SDK }); const result = await agent.generate("What is the weather like?"); ``` ## AI SDK Hooks Mastra is compatible with AI SDK's hooks for seamless frontend integration: ### useChat The `useChat` hook enables real-time chat interactions in your frontend application - Works with agent data streams i.e. `.toDataStreamResponse()` - The useChat `api` defaults to `/api/chat` - Works with the Mastra REST API agent stream endpoint `{MASTRA_BASE_URL}/agents/:agentId/stream` for data streams, i.e. no structured output is defined. ```typescript filename="app/api/chat/route.ts" copy import { mastra } from '@/src/mastra'; export async function POST(req: Request) { const { messages } = await req.json(); const myAgent = mastra.getAgent('weatherAgent'); const stream = await myAgent.stream(messages); return stream.toDataStreamResponse(); } ``` ```typescript copy import { useChat } from '@ai-sdk/react'; export function ChatComponent() { const { messages, input, handleInputChange, handleSubmit } = useChat({ api: '/path-to-your-agent-stream-api-endpoint' }); return (
    <div>
      {messages.map(m => (
        <div key={m.id}>
          {m.role}: {m.content}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} placeholder="Say something..." />
      </form>
    </div>
); } ``` > **Gotcha**: When using `useChat` with agent memory functionality, make sure to check out the [Agent Memory section](/docs/agents/01-agent-memory#usechat) for important implementation details. ### useCompletion For single-turn completions, use the `useCompletion` hook: - Works with agent data streams i.e. `.toDataStreamResponse()` - The useCompletion `api` defaults to `/api/completion` - Works with the Mastra REST API agent stream endpoint `{MASTRA_BASE_URL}/agents/:agentId/stream` for data streams, i.e. no structured output is defined. ```typescript filename="app/api/completion/route.ts" copy import { mastra } from '@/src/mastra'; export async function POST(req: Request) { const { messages } = await req.json(); const myAgent = mastra.getAgent('weatherAgent'); const stream = await myAgent.stream(messages); return stream.toDataStreamResponse(); } ``` ```typescript import { useCompletion } from "@ai-sdk/react"; export function CompletionComponent() { const { completion, input, handleInputChange, handleSubmit, } = useCompletion({ api: '/path-to-your-agent-stream-api-endpoint' }); return (

    <div>
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} placeholder="Ask something..." />
      </form>
      <p>Completion result: {completion}</p>
    </div>

); } ``` ### useObject For consuming text streams that represent JSON objects and parsing them into a complete object based on a schema. - Works with agent text streams i.e. `.toTextStreamResponse()` - The useObject `api` defaults to `/api/completion` - Works with the Mastra REST API agent stream endpoint `{MASTRA_BASE_URL}/agents/:agentId/stream` for text streams, i.e. structured output is defined. ```typescript filename="app/api/use-object/route.ts" copy import { mastra } from '@/src/mastra'; export async function POST(req: Request) { const { messages } = await req.json(); const myAgent = mastra.getAgent('weatherAgent'); const stream = await myAgent.stream(messages, { output: z.object({ weather: z.string(), }), }); return stream.toTextStreamResponse(); } ``` ```typescript import { experimental_useObject as useObject } from '@ai-sdk/react'; export default function Page() { const { object, submit } = useObject({ api: '/api/use-object', schema: z.object({ weather: z.string(), }), }); return (
    <div>
      <button onClick={() => submit('What is the weather in London?')}>
        Get weather
      </button>
      {object?.weather && (
        <div>{object.weather}</div>
      )}
    </div>
); } ``` ## Tool Calling ### AI SDK Tool Format Mastra supports tools created using the AI SDK format, so you can use them directly with Mastra agents. See our tools doc on [Vercel AI SDK Tool Format ](/docs/agents/02-adding-tools#vercel-ai-sdk-tool-format) for more details. ### Client-side tool calling Mastra leverages AI SDK's tool calling, so what applies in AI SDK applies here still. [Agent Tools](/docs/agents/02-adding-tools) in Mastra are 100% percent compatible with AI SDK tools. Mastra tools also expose an optional `execute` async function. It is optional because you might want to forward tool calls to the client or to a queue instead of executing them in the same process. One way to then leverage client-side tool calling is to use the `@ai-sdk/react` `useChat` hook's `onToolCall` property for client-side tool execution --- title: "Installing Mastra Locally | Getting Started | Mastra Docs" description: Guide on installing Mastra and setting up the necessary prerequisites for running it with various LLM providers. --- import { Callout, Steps, Tabs } from "nextra/components"; import YouTube from "../../../components/youtube"; # Installing Mastra Locally Source: https://mastra.ai/docs/getting-started/installation To run Mastra, you need access to an LLM. Typically, you'll want to get an API key from an LLM provider such as [OpenAI](https://platform.openai.com/), [Anthropic](https://console.anthropic.com/settings/keys), or [Google Gemini](https://ai.google.dev/gemini-api/docs). You can also run Mastra with a local LLM using [Ollama](https://ollama.ai/). ## Prerequisites - Node.js `v20.0` or higher - Access to a [supported large language model (LLM)](/docs/reference/llm/providers-and-models) ## Automatic Installation ### Create a New Project We recommend starting a new Mastra project using `create-mastra`, which will scaffold your project. To create a project, run: ```bash copy npx create-mastra@latest ``` ```bash copy npm create mastra@latest ``` ```bash copy yarn create mastra@latest ``` ```bash copy pnpm create mastra@latest ``` On installation, you'll be guided through the following prompts: ```bash What do you want to name your project? my-mastra-app Choose components to install: ◯ Agents (recommended) ◯ Tools ◯ Workflows Select default provider: ◯ OpenAI (recommended) ◯ Anthropic ◯ Groq Would you like to include example code? No / Yes Turn your IDE into a Mastra expert? (Installs MCP server) ◯ Skip for now ◯ Cursor ◯ Windsurf ``` After the prompts, `create-mastra` will: 1. Set up your project directory with TypeScript 2. Install dependencies 3. Configure your selected components and LLM provider 4. Configure the MCP server in your IDE (if selected) for instant access to docs, examples, and help while you code **MCP Note:** If you're using a different IDE, you can install the MCP server manually by following the instructions in the [MCP server docs](/docs/getting-started/mcp-docs-server). **Also** note that there are additional steps for [Cursor and Windsurf](/docs/getting-started/mcp-docs-server#after-configuration) to activate the MCP server. ### Set Up your API Key Add the API key for your configured LLM provider in your `.env` file. 
```bash filename=".env" copy OPENAI_API_KEY= ``` **Non-Interactive mode**: If you prefer to run the command with flags (non-interactive mode) and include the example code, you can use: ```bash copy npx create-mastra@latest --components agents,tools --llm openai --example ``` **Configure Installation Timeout**: To configure and specify a timeout if installation takes too long use the timeout flag: ```bash copy npx create-mastra@latest --timeout ``` **Note for LLMs**: For a quick one liner that includes examples you can run `npx -y mastra@latest --project-name --example --components "tools,agents,workflows" --llm `. The available options for the llm flag are `openai|anthropic|groq|google|cerebras` ## Manual Installation
If you prefer to set up your Mastra project manually, follow these steps: ### Create a New Project Create a project directory and navigate into it: ```bash copy mkdir hello-mastra cd hello-mastra ``` Then, initialize a TypeScript project including the `@mastra/core` package: ```bash copy npm init -y npm install typescript tsx @types/node mastra --save-dev npm install @mastra/core zod @ai-sdk/openai npx tsc --init ``` ```bash copy pnpm init pnpm add typescript tsx @types/node mastra --save-dev pnpm add @mastra/core zod pnpm dlx tsc --init ``` ```bash copy yarn init -y yarn add typescript tsx @types/node mastra --dev yarn add @mastra/core zod yarn dlx tsc --init ``` ```bash copy bun init -y bun add typescript tsx @types/node mastra --dev bun add @mastra/core zod bunx tsc --init ``` ### Initialize TypeScript Create a `tsconfig.json` file in your project root with the following configuration: ```json copy { "compilerOptions": { "target": "ES2022", "module": "ES2022", "moduleResolution": "bundler", "esModuleInterop": true, "forceConsistentCasingInFileNames": true, "strict": true, "skipLibCheck": true, "outDir": "dist" }, "include": [ "src/**/*" ], "exclude": [ "node_modules", "dist", ".mastra" ] } ``` This TypeScript configuration is optimized for Mastra projects, using modern module resolution and strict type checking. ### Set Up your API Key Create a `.env` file in your project root directory and add your API key: ```bash filename=".env" copy OPENAI_API_KEY= ``` Replace your_openai_api_key with your actual API key. ### Create a Tool Create a `weather-tool` tool file: ```bash copy mkdir -p src/mastra/tools && touch src/mastra/tools/weather-tool.ts ``` Then, add the following code to `src/mastra/tools/weather-tool.ts`: ```ts filename="src/mastra/tools/weather-tool.ts" showLineNumbers copy import { createTool } from "@mastra/core/tools"; import { z } from "zod"; interface WeatherResponse { current: { time: string; temperature_2m: number; apparent_temperature: number; relative_humidity_2m: number; wind_speed_10m: number; wind_gusts_10m: number; weather_code: number; }; } export const weatherTool = createTool({ id: "get-weather", description: "Get current weather for a location", inputSchema: z.object({ location: z.string().describe("City name"), }), outputSchema: z.object({ temperature: z.number(), feelsLike: z.number(), humidity: z.number(), windSpeed: z.number(), windGust: z.number(), conditions: z.string(), location: z.string(), }), execute: async ({ context }) => { return await getWeather(context.location); }, }); const getWeather = async (location: string) => { const geocodingUrl = `https://geocoding-api.open-meteo.com/v1/search?name=${encodeURIComponent(location)}&count=1`; const geocodingResponse = await fetch(geocodingUrl); const geocodingData = await geocodingResponse.json(); if (!geocodingData.results?.[0]) { throw new Error(`Location '${location}' not found`); } const { latitude, longitude, name } = geocodingData.results[0]; const weatherUrl = `https://api.open-meteo.com/v1/forecast?latitude=${latitude}&longitude=${longitude}¤t=temperature_2m,apparent_temperature,relative_humidity_2m,wind_speed_10m,wind_gusts_10m,weather_code`; const response = await fetch(weatherUrl); const data: WeatherResponse = await response.json(); return { temperature: data.current.temperature_2m, feelsLike: data.current.apparent_temperature, humidity: data.current.relative_humidity_2m, windSpeed: data.current.wind_speed_10m, windGust: data.current.wind_gusts_10m, conditions: 
getWeatherCondition(data.current.weather_code), location: name, }; }; function getWeatherCondition(code: number): string { const conditions: Record = { 0: "Clear sky", 1: "Mainly clear", 2: "Partly cloudy", 3: "Overcast", 45: "Foggy", 48: "Depositing rime fog", 51: "Light drizzle", 53: "Moderate drizzle", 55: "Dense drizzle", 56: "Light freezing drizzle", 57: "Dense freezing drizzle", 61: "Slight rain", 63: "Moderate rain", 65: "Heavy rain", 66: "Light freezing rain", 67: "Heavy freezing rain", 71: "Slight snow fall", 73: "Moderate snow fall", 75: "Heavy snow fall", 77: "Snow grains", 80: "Slight rain showers", 81: "Moderate rain showers", 82: "Violent rain showers", 85: "Slight snow showers", 86: "Heavy snow showers", 95: "Thunderstorm", 96: "Thunderstorm with slight hail", 99: "Thunderstorm with heavy hail", }; return conditions[code] || "Unknown"; } ``` ### Create an Agent Create a `weather` agent file: ```bash copy mkdir -p src/mastra/agents && touch src/mastra/agents/weather.ts ``` Then, add the following code to `src/mastra/agents/weather.ts`: ```ts filename="src/mastra/agents/weather.ts" showLineNumbers copy import { openai } from "@ai-sdk/openai"; import { Agent } from "@mastra/core/agent"; import { weatherTool } from "../tools/weather-tool"; export const weatherAgent = new Agent({ name: "Weather Agent", instructions: `You are a helpful weather assistant that provides accurate weather information. Your primary function is to help users get weather details for specific locations. When responding: - Always ask for a location if none is provided - If the location name isn’t in English, please translate it - Include relevant details like humidity, wind conditions, and precipitation - Keep responses concise but informative Use the weatherTool to fetch current weather data.`, model: openai("gpt-4o-mini"), tools: { weatherTool }, }); ``` ### Register Agent Finally, create the Mastra entry point in `src/mastra/index.ts` and register agent: ```ts filename="src/mastra/index.ts" showLineNumbers copy import { Mastra } from "@mastra/core"; import { weatherAgent } from "./agents/weather"; export const mastra = new Mastra({ agents: { weatherAgent }, }); ``` This registers your agent with Mastra so that `mastra dev` can discover and serve it. ## Existing Project Installation To add Mastra to an existing project, see our Local development docs on [adding mastra to an existing project](/docs/local-dev/add-to-existing-project). You can also checkout our framework specific docs e.g [Next.js](/docs/frameworks/01-next-js) ## Start the Mastra Server Mastra provides commands to serve your agents via REST endpoints ### Development Server Run the following command to start the Mastra server: ```bash copy npm run dev ``` If you have the mastra CLI installed, run: ```bash copy mastra dev ``` This command creates REST API endpoints for your agents. 
### Test the Endpoint You can test the agent's endpoint using `curl` or `fetch`: ```bash copy curl -X POST http://localhost:4111/api/agents/weatherAgent/generate \ -H "Content-Type: application/json" \ -d '{"messages": ["What is the weather in London?"]}' ``` ```js copy showLineNumbers fetch('http://localhost:4111/api/agents/weatherAgent/generate', { method: 'POST', headers: { 'Content-Type': 'application/json', }, body: JSON.stringify({ messages: ['What is the weather in London?'], }), }) .then(response => response.json()) .then(data => { console.log('Agent response:', data.text); }) .catch(error => { console.error('Error:', error); }); ``` ## Use Mastra on the Client To use Mastra in your frontend applications, you can use our type-safe client SDK to interact with your Mastra REST APIs. See the [Mastra Client SDK documentation](/docs/deployment/client) for detailed usage instructions. ## Run from the command line If you'd like to directly call agents from the command line, you can create a script to get an agent and call it: ```ts filename="src/index.ts" showLineNumbers copy import { mastra } from "./mastra"; async function main() { const agent = await mastra.getAgent("weatherAgent"); const result = await agent.generate("What is the weather in London?"); console.log("Agent response:", result.text); } main(); ``` Then, run the script to test that everything is set up correctly: ```bash copy npx tsx src/index.ts ``` This should output the agent's response to your console. --- --- title: "Using with Cursor/Windsurf | Getting Started | Mastra Docs" description: "Learn how to use the Mastra MCP documentation server in your IDE to turn it into an agentic Mastra expert." --- import YouTube from "../../../components/youtube"; # Mastra Tools for your agentic IDE Source: https://mastra.ai/docs/getting-started/mcp-docs-server `@mastra/mcp-docs-server` provides direct access to Mastra's complete knowledge base in Cursor, Windsurf, Cline, or any other IDE that supports MCP. It has access to documentation, code examples, technical blog posts / feature announcements, and package changelogs which your IDE can read to help you build with Mastra. The MCP server tools have been designed to allow an agent to query the specific information it needs to complete a Mastra related task - for example: adding a Mastra feature to an agent, scaffolding a new project, or helping you understand how something works. ## How it works Once it's installed in your IDE you can write prompts and assume the agent will understand everything about Mastra. ### Add features - "Add evals to my agent and write tests" - "Write me a workflow that does the following `[task]`" - "Make a new tool that allows my agent to access `[3rd party API]`" ### Ask about integrations - "Does Mastra work with the AI SDK? How can I use it in my `[React/Svelte/etc]` project?" - "What's the latest Mastra news around MCP?" - "Does Mastra support `[provider]` speech and voice APIs? Show me an example in my code of how I can use it." ### Debug or update existing code - "I'm running into a bug with agent memory, have there been any related changes or bug fixes recently?" - "How does working memory behave in Mastra and how can I use it to do `[task]`? It doesn't seem to work the way I expect." - "I saw there are new workflow features, explain them to me and then update `[workflow]` to use them." **And more** - if you have a question, try asking your IDE and let it look it up for you. 
## Automatic Installation Run `pnpm create mastra@latest` and select Cursor or Windsurf when prompted to install the MCP server. For other IDEs, or if you already have a Mastra project, install the MCP server by following the instructions below. ## Manual Installation - **Cursor**: Edit `.cursor/mcp.json` in your project root, or `~/.cursor/mcp.json` for global configuration - **Windsurf**: Edit `~/.codeium/windsurf/mcp_config.json` (only supports global configuration) Add the following configuration: ### MacOS/Linux ```json { "mcpServers": { "mastra": { "command": "npx", "args": ["-y", "@mastra/mcp-docs-server@latest"] } } } ``` ### Windows ```json { "mcpServers": { "mastra": { "command": "cmd", "args": ["/c", "npx", "-y", "@mastra/mcp-docs-server@latest"] } } } ``` ## After Configuration ### Cursor 1. Open Cursor settings 2. Navigate to MCP settings 3. Click "enable" on the Mastra MCP server 4. If you have an agent chat open, you'll need to re-open it or start a new chat to use the MCP server ### Windsurf 1. Fully quit and re-open Windsurf 2. If tool calls start failing, go to Windsurfs MCP settings and re-start the MCP server. This is a common Windsurf MCP issue and isn't related to Mastra. Right now Cursor's MCP implementation is more stable than Windsurfs is. In both IDEs it may take a minute for the MCP server to start the first time as it needs to download the package from npm. ## Available Agent Tools ### Documentation Access Mastra's complete documentation: - Getting started / installation - Guides and tutorials - API references ### Examples Browse code examples: - Complete project structures - Implementation patterns - Best practices ### Blog Posts Search the blog for: - Technical posts - Changelog and feature announcements - AI news and updates ### Package Changes Track updates for Mastra and `@mastra/*` packages: - Bug fixes - New features - Breaking changes ## Common Issues 1. **Server Not Starting** - Ensure npx is installed and working - Check for conflicting MCP servers - Verify your configuration file syntax - On Windows, make sure to use the Windows-specific configuration 2. **Tool Calls Failing** - Restart the MCP server and/or your IDE - Update to the latest version of your IDE --- title: "Local Project Structure | Getting Started | Mastra Docs" description: Guide on organizing folders and files in Mastra, including best practices and recommended structures. --- import { FileTree } from 'nextra/components'; # Project Structure Source: https://mastra.ai/docs/getting-started/project-structure This page provides a guide for organizing folders and files in Mastra. Mastra is a modular framework, and you can use any of the modules separately or together. You could write everything in a single file (as we showed in the quick start), or separate each agent, tool, and workflow into their own files. We don't enforce a specific folder structure, but we do recommend some best practices, and the CLI will scaffold a project with a sensible structure. ## Using the CLI `mastra init` is an interactive CLI that allows you to: - **Choose a directory for Mastra files**: Specify where you want the Mastra files to be placed (default is `src/mastra`). - **Select components to install**: Choose which components you want to include in your project: - Agents - Tools - Workflows - **Select a default LLM provider**: Choose from supported providers like OpenAI, Anthropic, or Groq. - **Include example code**: Decide whether to include example code to help you get started. 
### Example Project Structure Assuming you select all components and include example code, your project structure will look like this: {/* ``` root/ ├── src/ │ └── mastra/ │ ├── agents/ │ │ └── index.ts │ ├── tools/ │ │ └── index.ts │ ├── workflows/ │ │ └── index.ts │ ├── index.ts ├── .env ``` */} ### Top-level Folders | Folder | Description | | ---------------------- | ------------------------------------ | | `src/mastra` | Core application folder | | `src/mastra/agents` | Agent configurations and definitions | | `src/mastra/tools` | Custom tool definitions | | `src/mastra/workflows` | Workflow definitions | ### Top-level Files | File | Description | | --------------------- | ---------------------------------- | | `src/mastra/index.ts` | Main configuration file for Mastra | | `.env` | Environment variables | --- title: "Building an AI Chef Assistant | Mastra Agent Guides" description: Guide on creating a Chef Assistant agent in Mastra to help users cook meals with available ingredients. --- import { Steps } from "nextra/components"; import YouTube from "../../../components/youtube"; # Agents Guide: Building a Chef Assistant Source: https://mastra.ai/docs/guides/01-chef-michel In this guide, we'll walk through creating a "Chef Assistant" agent that helps users cook meals with available ingredients. ## Prerequisites - Node.js installed - Mastra installed: `npm install @mastra/core` --- ## Create the Agent ### Define the Agent Create a new file `src/mastra/agents/chefAgent.ts` and define your agent: ```ts copy filename="src/mastra/agents/chefAgent.ts" import { openai } from "@ai-sdk/openai"; import { Agent } from "@mastra/core/agent"; export const chefAgent = new Agent({ name: "chef-agent", instructions: "You are Michel, a practical and experienced home chef" + "You help people cook with whatever ingredients they have available.", model: openai("gpt-4o-mini"), }); ``` --- ## Set Up Environment Variables Create a `.env` file in your project root and add your OpenAI API key: ```bash filename=".env" copy OPENAI_API_KEY=your_openai_api_key ``` --- ## Register the Agent with Mastra In your main file, register the agent: ```ts copy filename="src/mastra/index.ts" import { Mastra } from "@mastra/core"; import { chefAgent } from "./agents/chefAgent"; export const mastra = new Mastra({ agents: { chefAgent }, }); ``` --- ## Interacting with the Agent ### Generating Text Responses ```ts copy filename="src/index.ts" async function main() { const query = "In my kitchen I have: pasta, canned tomatoes, garlic, olive oil, and some dried herbs (basil and oregano). What can I make?"; console.log(`Query: ${query}`); const response = await chefAgent.generate([{ role: "user", content: query }]); console.log("\n👨‍🍳 Chef Michel:", response.text); } main(); ``` Run the script: ```bash copy npx bun src/index.ts ``` Output: ``` Query: In my kitchen I have: pasta, canned tomatoes, garlic, olive oil, and some dried herbs (basil and oregano). What can I make? 👨‍🍳 Chef Michel: You can make a delicious pasta al pomodoro! Here's how... 
``` --- ### Streaming Responses ```ts copy filename="src/index.ts" async function main() { const query = "Now I'm over at my friend's house, and they have: chicken thighs, coconut milk, sweet potatoes, and some curry powder."; console.log(`Query: ${query}`); const stream = await chefAgent.stream([{ role: "user", content: query }]); console.log("\n Chef Michel: "); for await (const chunk of stream.textStream) { process.stdout.write(chunk); } console.log("\n\n✅ Recipe complete!"); } main(); ``` Output: ``` Query: Now I'm over at my friend's house, and they have: chicken thighs, coconut milk, sweet potatoes, and some curry powder. 👨‍🍳 Chef Michel: Great! You can make a comforting chicken curry... ✅ Recipe complete! ``` --- ### Generating a Recipe with Structured Data ```ts copy filename="src/index.ts" import { z } from "zod"; async function main() { const query = "I want to make lasagna, can you generate a lasagna recipe for me?"; console.log(`Query: ${query}`); // Define the Zod schema const schema = z.object({ ingredients: z.array( z.object({ name: z.string(), amount: z.string(), }), ), steps: z.array(z.string()), }); const response = await chefAgent.generate( [{ role: "user", content: query }], { output: schema }, ); console.log("\n👨‍🍳 Chef Michel:", response.object); } main(); ``` Output: ``` Query: I want to make lasagna, can you generate a lasagna recipe for me? 👨‍🍳 Chef Michel: { ingredients: [ { name: "Lasagna noodles", amount: "12 sheets" }, { name: "Ground beef", amount: "1 pound" }, // ... ], steps: [ "Preheat oven to 375°F (190°C).", "Cook the lasagna noodles according to package instructions.", // ... ] } ``` --- ## Running the Agent Server ### Using `mastra dev` You can run your agent as a service using the `mastra dev` command: ```bash copy mastra dev ``` This will start a server exposing endpoints to interact with your registered agents. ### Accessing the Chef Assistant API By default, `mastra dev` runs on `http://localhost:4111`. Your Chef Assistant agent will be available at: ``` POST http://localhost:4111/api/agents/chefAgent/generate ``` ### Interacting with the Agent via `curl` You can interact with the agent using `curl` from the command line: ```bash copy curl -X POST http://localhost:4111/api/agents/chefAgent/generate \ -H "Content-Type: application/json" \ -d '{ "messages": [ { "role": "user", "content": "I have eggs, flour, and milk. What can I make?" } ] }' ``` **Sample Response:** ```json { "text": "You can make delicious pancakes! Here's a simple recipe..." } ``` --- title: "Building an AI Stock Agent | Mastra Agents | Guides" description: Guide on creating a simple stock agent in Mastra to fetch the last day's closing stock price for a given symbol. --- import { Steps } from "nextra/components"; import YouTube from "../../../components/youtube"; # Stock Agent Source: https://mastra.ai/docs/guides/02-stock-agent We're going to create a simple agent that fetches the last day's closing stock price for a given symbol. This example will show you how to create a tool, add it to an agent, and use the agent to fetch stock prices. 
## Project Structure ``` stock-price-agent/ ├── src/ │ ├── agents/ │ │ └── stockAgent.ts │ ├── tools/ │ │ └── stockPrices.ts │ └── index.ts ├── package.json └── .env ``` --- ## Initialize the Project and Install Dependencies First, create a new directory for your project and navigate into it: ```bash mkdir stock-price-agent cd stock-price-agent ``` Initialize a new Node.js project and install the required dependencies: ```bash npm init -y npm install @mastra/core zod @ai-sdk/openai ``` Set Up Environment Variables Create a `.env` file at the root of your project to store your OpenAI API key. ```bash filename=".env" copy OPENAI_API_KEY=your_openai_api_key ``` Create the necessary directories and files: ```bash mkdir -p src/agents src/tools touch src/agents/stockAgent.ts src/tools/stockPrices.ts src/index.ts ``` --- ## Create the Stock Price Tool Next, we'll create a tool that fetches the last day's closing stock price for a given symbol. ```ts filename="src/tools/stockPrices.ts" import { createTool } from "@mastra/core/tools"; import { z } from "zod"; const getStockPrice = async (symbol: string) => { const data = await fetch( `https://mastra-stock-data.vercel.app/api/stock-data?symbol=${symbol}`, ).then((r) => r.json()); return data.prices["4. close"]; }; export const stockPrices = createTool({ id: "Get Stock Price", inputSchema: z.object({ symbol: z.string(), }), description: `Fetches the last day's closing stock price for a given symbol`, execute: async ({ context: { symbol } }) => { console.log("Using tool to fetch stock price for", symbol); return { symbol, currentPrice: await getStockPrice(symbol), }; }, }); ``` --- ## Add the Tool to an Agent We'll create an agent and add the `stockPrices` tool to it. ```ts filename="src/agents/stockAgent.ts" import { Agent } from "@mastra/core/agent"; import { openai } from "@ai-sdk/openai"; import * as tools from "../tools/stockPrices"; export const stockAgent = new Agent({ name: "Stock Agent", instructions: "You are a helpful assistant that provides current stock prices. When asked about a stock, use the stock price tool to fetch the stock price.", model: openai("gpt-4o-mini"), tools: { stockPrices: tools.stockPrices, }, }); ``` --- ## Set Up the Mastra Instance We need to initialize the Mastra instance with our agent and tool. ```ts filename="src/index.ts" import { Mastra } from "@mastra/core"; import { stockAgent } from "./agents/stockAgent"; export const mastra = new Mastra({ agents: { stockAgent }, }); ``` ## Serve the Application Instead of running the application directly, we'll use the `mastra dev` command to start the server. This will expose your agent via REST API endpoints, allowing you to interact with it over HTTP. In your terminal, start the Mastra server by running: ```bash mastra dev --dir src ``` This command will allow you to test your stockPrices tool and your stockAgent within the playground. This will also start the server and make your agent available at: ``` http://localhost:4111/api/agents/stockAgent/generate ``` --- ## Test the Agent with cURL Now that your server is running, you can test your agent's endpoint using `curl`: ```bash curl -X POST http://localhost:4111/api/agents/stockAgent/generate \ -H "Content-Type: application/json" \ -d '{ "messages": [ { "role": "user", "content": "What is the current stock price of Apple (AAPL)?" 
} ] }' ``` **Expected Response:** You should receive a JSON response similar to: ```json { "text": "The current price of Apple (AAPL) is $174.55.", "agent": "Stock Agent" } ``` This indicates that your agent successfully processed the request, used the `stockPrices` tool to fetch the stock price, and returned the result. --- title: "Building an AI Recruiter | Mastra Workflows | Guides" description: Guide on building a recruiter workflow in Mastra to gather and process candidate information using LLMs. --- # Introduction Source: https://mastra.ai/docs/guides/03-recruiter In this guide, you'll learn how Mastra helps you build workflows with LLMs. We'll walk through creating a workflow that gathers information from a candidate's resume, then branches to either a technical or behavioral question based on the candidate's profile. Along the way, you'll see how to structure workflow steps, handle branching, and integrate LLM calls. Below is a concise version of the workflow. It starts by importing the necessary modules, sets up Mastra, defines steps to extract and classify candidate data, and then asks suitable follow-up questions. Each code block is followed by a short explanation of what it does and why it's useful. ## 1. Imports and Setup You need to import Mastra tools and Zod to handle workflow definitions and data validation. ```typescript filename="src/mastra/index.ts" copy import { Mastra } from "@mastra/core"; import { Step, Workflow } from "@mastra/core/workflows"; import { z } from "zod"; ``` Add your `OPENAI_API_KEY` to the `.env` file. ```bash filename=".env" copy OPENAI_API_KEY= ``` ## 2. Step One: Gather Candidate Info You want to extract candidate details from the resume text and classify them as technical or non-technical. This step calls an LLM to parse the resume and return structured JSON, including the name, technical status, specialty, and the original resume text. The code reads resumeText from trigger data, prompts the LLM, and returns organized fields for use in subsequent steps. ```typescript filename="src/mastra/index.ts" copy import { Agent } from '@mastra/core/agent'; import { openai } from "@ai-sdk/openai"; const recruiter = new Agent({ name: "Recruiter Agent", instructions: `You are a recruiter.`, model: openai("gpt-4o-mini"), }) const gatherCandidateInfo = new Step({ id: "gatherCandidateInfo", inputSchema: z.object({ resumeText: z.string(), }), outputSchema: z.object({ candidateName: z.string(), isTechnical: z.boolean(), specialty: z.string(), resumeText: z.string(), }), execute: async ({ context }) => { const resumeText = context?.getStepResult<{ resumeText: string; }>("trigger")?.resumeText; const prompt = ` Extract details from the resume text: "${resumeText}" `; const res = await recruiter.generate(prompt, { output: z.object({ candidateName: z.string(), isTechnical: z.boolean(), specialty: z.string(), resumeText: z.string(), }), }); return res.object; }, }); ``` ## 3. Technical Question Step This step prompts a candidate who is identified as technical for more information about how they got into their specialty. It uses the entire resume text so the LLM can craft a relevant follow-up question. The code generates a question about the candidate's specialty. 
```typescript filename="src/mastra/index.ts" copy interface CandidateInfo { candidateName: string; isTechnical: boolean; specialty: string; resumeText: string; } const askAboutSpecialty = new Step({ id: "askAboutSpecialty", outputSchema: z.object({ question: z.string(), }), execute: async ({ context }) => { const candidateInfo = context?.getStepResult( "gatherCandidateInfo", ); const prompt = ` You are a recruiter. Given the resume below, craft a short question for ${candidateInfo?.candidateName} about how they got into "${candidateInfo?.specialty}". Resume: ${candidateInfo?.resumeText} `; const res = await recruiter.generate(prompt); return { question: res?.text?.trim() || "" }; }, }); ``` ## 4. Behavioral Question Step If the candidate is non-technical, you want a different follow-up question. This step asks what interests them most about the role, again referencing their complete resume text. The code solicits a role-focused query from the LLM. ```typescript filename="src/mastra/index.ts" copy const askAboutRole = new Step({ id: "askAboutRole", outputSchema: z.object({ question: z.string(), }), execute: async ({ context }) => { const candidateInfo = context?.getStepResult( "gatherCandidateInfo", ); const prompt = ` You are a recruiter. Given the resume below, craft a short question for ${candidateInfo?.candidateName} asking what interests them most about this role. Resume: ${candidateInfo?.resumeText} `; const res = await recruiter.generate(prompt); return { question: res?.text?.trim() || "" }; }, }); ``` ## 5. Define the Workflow You now combine the steps to implement branching logic based on the candidate's technical status. The workflow first gathers candidate data, then either asks about their specialty or about their role, depending on isTechnical. The code chains gatherCandidateInfo with askAboutSpecialty and askAboutRole, and commits the workflow. ```typescript filename="src/mastra/index.ts" copy const candidateWorkflow = new Workflow({ name: "candidate-workflow", triggerSchema: z.object({ resumeText: z.string(), }), }); candidateWorkflow .step(gatherCandidateInfo) .then(askAboutSpecialty, { when: { "gatherCandidateInfo.isTechnical": true }, }) .after(gatherCandidateInfo) .step(askAboutRole, { when: { "gatherCandidateInfo.isTechnical": false }, }); candidateWorkflow.commit(); ``` ## 6. Execute the Workflow ```typescript filename="src/mastra/index.ts" copy const mastra = new Mastra({ workflows: { candidateWorkflow, }, }); (async () => { const { runId, start } = mastra.getWorkflow("candidateWorkflow").createRun(); console.log("Run", runId); const runResult = await start({ triggerData: { resumeText: "Simulated resume content..." }, }); console.log("Final output:", runResult.results); })(); ``` You've just built a workflow to parse a resume and decide which question to ask based on the candidate's technical abilities. Congrats and happy hacking! --- title: "Building a Research Paper Assistant | Mastra RAG Guides" description: Guide on creating an AI research assistant that can analyze and answer questions about academic papers using RAG. --- import { Steps } from "nextra/components"; # Building a Research Paper Assistant with RAG Source: https://mastra.ai/docs/guides/04-research-assistant In this guide, we'll create an AI research assistant that can analyze academic papers and answer specific questions about their content using Retrieval Augmented Generation (RAG). 
We'll use the foundational Transformer paper [Attention Is All You Need](https://arxiv.org/html/1706.03762) as our example. ## Understanding RAG Components Let's understand how RAG works and how we'll implement each component: 1. Knowledge Store/Index - Converting text into vector representations - Creating numerical representations of content - Implementation: We'll use OpenAI's text-embedding-3-small to create embeddings and store them in PgVector 2. Retriever - Finding relevant content via similarity search - Matching query embeddings with stored vectors - Implementation: We'll use PgVector to perform similarity searches on our stored embeddings 3. Generator - Processing retrieved content with an LLM - Creating contextually informed responses - Implementation: We'll use GPT-4o-mini to generate answers based on retrieved content Our implementation will: 1. Process the Transformer paper into embeddings 2. Store them in PgVector for quick retrieval 3. Use similarity search to find relevant sections 4. Generate accurate responses using retrieved context ## Project Structure ``` research-assistant/ ├── src/ │ ├── agents/ │ │ └── researchAgent.ts │ └── index.ts ├── package.json └── .env ``` ### Initialize Project and Install Dependencies First, create a new directory for your project and navigate into it: ```bash mkdir research-assistant cd research-assistant ``` Initialize a new Node.js project and install the required dependencies: ```bash npm init -y npm install @mastra/core @mastra/rag @mastra/pg @ai-sdk/openai zod ``` Set up environment variables for API access and database connection: ```bash filename=".env" copy OPENAI_API_KEY=your_openai_api_key POSTGRES_CONNECTION_STRING=your_connection_string ``` Create the necessary files for our project: ```bash mkdir -p src/agents touch src/agents/researchAgent.ts src/index.ts ``` ### Create the Research Assistant Agent Now we'll create our RAG-enabled research assistant. The agent uses: - A [Vector Query Tool](/docs/reference/tools/vector-query-tool) for performing semantic search over our vector store to find relevant content in our papers. - GPT-4o-mini for understanding queries and generating responses - Custom instructions that guide the agent on how to analyze papers, use retrieved content effectively, and acknowledge limitations ```typescript copy showLineNumbers filename="src/agents/researchAgent.ts" import { Agent } from '@mastra/core/agent'; import { openai } from '@ai-sdk/openai'; import { createVectorQueryTool } from '@mastra/rag'; // Create a tool for semantic search over our paper embeddings const vectorQueryTool = createVectorQueryTool({ vectorStoreName: 'pgVector', indexName: 'papers', model: openai.embedding('text-embedding-3-small'), }); export const researchAgent = new Agent({ name: 'Research Assistant', instructions: `You are a helpful research assistant that analyzes academic papers and technical documents. Use the provided vector query tool to find relevant information from your knowledge base, and provide accurate, well-supported answers based on the retrieved content. Focus on the specific content available in the tool and acknowledge if you cannot find sufficient information to answer a question. 
Base your responses only on the content provided, not on general knowledge.`, model: openai('gpt-4o-mini'), tools: { vectorQueryTool, }, }); ``` ### Set Up the Mastra Instance and Vector Store ```typescript copy showLineNumbers filename="src/index.ts" import { MDocument } from '@mastra/rag'; import { Mastra } from '@mastra/core'; import { PgVector } from '@mastra/pg'; import { embedMany } from 'ai'; import { researchAgent } from './agents/researchAgent'; const pgVector = new PgVector(process.env.POSTGRES_CONNECTION_STRING!); export const mastra = new Mastra({ agents: { researchAgent }, vectors: { pgVector }, }); ``` ### Load and Process the Paper This step handles the initial document processing. We: 1. Fetch the research paper from its URL 2. Convert it into a document object 3. Split it into smaller, manageable chunks for better processing ```typescript copy showLineNumbers{14} filename="src/index.ts" // Load the paper const paperUrl = "https://arxiv.org/html/1706.03762"; const response = await fetch(paperUrl); const paperText = await response.text(); // Create document and chunk it const doc = MDocument.fromText(paperText); const chunks = await doc.chunk({ strategy: 'recursive', size: 512, overlap: 50, separator: '\n', }); ``` ### Create and Store Embeddings Finally, we'll prepare our content for RAG by: 1. Generating embeddings for each chunk of text 2. Creating a vector store index to hold our embeddings 3. Storing both the embeddings and metadata (original text and source information) in our vector database > **Note**: This metadata is crucial as it allows us to return the actual content when the vector store finds relevant matches. This allows our agent to efficiently search and retrieve relevant information. ```typescript copy showLineNumbers{28} filename="src/index.ts" // Generate embeddings const { embeddings } = await embedMany({ model: openai.embedding('text-embedding-3-small'), values: chunks.map(chunk => chunk.text), }); // Get the vector store instance from Mastra const vectorStore = mastra.getVector('pgVector'); // Create an index for our paper chunks await vectorStore.createIndex({ indexName: 'papers', dimension: 1536, }); // Store embeddings await vectorStore.upsert({ indexName: 'papers', vectors: embeddings, metadata: chunks.map(chunk => ({ text: chunk.text, source: 'transformer-paper' })), }); ``` This will: 1. Load the paper from the URL 2. Split it into manageable chunks 3. Generate embeddings for each chunk 4. Store both the embeddings and text in our vector database ### Test the Assistant Let's test our research assistant with different types of queries: ```typescript filename="src/index.ts" showLineNumbers{52} copy const agent = mastra.getAgent('researchAgent'); // Basic query about concepts const query1 = "What problems does sequence modeling face with neural networks?"; const response1 = await agent.generate(query1); console.log("\nQuery:", query1); console.log("Response:", response1.text); ``` You should see output like: ``` Query: What problems does sequence modeling face with neural networks? Response: Sequence modeling with neural networks faces several key challenges: 1. Vanishing and exploding gradients during training, especially with long sequences 2. Difficulty handling long-term dependencies in the input 3. Limited computational efficiency due to sequential processing 4. 
Challenges in parallelizing computations, resulting in longer training times ``` Let's try another question: ```typescript filename="src/index.ts" showLineNumbers{60} copy // Query about specific findings const query2 = "What improvements were achieved in translation quality?"; const response2 = await agent.generate(query2); console.log("\nQuery:", query2); console.log("Response:", response2.text); ``` Output: ``` Query: What improvements were achieved in translation quality? Response: The model showed significant improvements in translation quality, achieving more than 2.0 BLEU points improvement over previously reported models on the WMT 2014 English-to-German translation task, while also reducing training costs. ``` ### Serve the Application Start the Mastra server to expose your research assistant via API: ```bash mastra dev --dir src ``` Your research assistant will be available at: ``` http://localhost:4111/api/agents/researchAgent/generate ``` Test with curl: ```bash curl -X POST http://localhost:4111/api/agents/researchAgent/generate \ -H "Content-Type: application/json" \ -d '{ "messages": [ { "role": "user", "content": "What were the main findings about model parallelization?" } ] }' ``` ## Advanced RAG Examples Explore these examples for more advanced RAG techniques: - [Filter RAG](/examples/rag/usage/filter-rag) for filtering results using metadata - [Cleanup RAG](/examples/rag/usage/cleanup-rag) for optimizing information density - [Chain of Thought RAG](/examples/rag/usage/cot-rag) for complex reasoning queries using workflows - [Rerank RAG](/examples/rag/usage/rerank-rag) for improved result relevance --- title: "Introduction | Mastra Docs" description: "Mastra is a TypeScript agent framework. It helps you build AI applications and features quickly. It gives you the set of primitives you need: workflows, agents, RAG, integrations, syncs and evals." --- # About Mastra Source: https://mastra.ai/docs Mastra is an open-source TypeScript agent framework. It's designed to give you the primitives you need to build AI applications and features. You can use Mastra to build [AI agents](/docs/agents/00-overview.mdx) that have memory and can execute functions, or chain LLM calls in deterministic [workflows](/docs/workflows/00-overview.mdx). You can chat with your agents in Mastra's [local dev environment](/docs/local-dev/mastra-dev.mdx), feed them application-specific knowledge with [RAG](/docs/rag/overview.mdx), and score their outputs with Mastra's [evals](/docs/08-running-evals.mdx). The main features include: * **[Model routing](https://sdk.vercel.ai/docs/introduction)**: Mastra uses the [Vercel AI SDK](https://sdk.vercel.ai/docs/introduction) for model routing, providing a unified interface to interact with any LLM provider including OpenAI, Anthropic, and Google Gemini. * **[Agent memory and tool calling](/docs/agents/01-agent-memory.mdx)**: With Mastra, you can give your agent tools (functions) that it can call. You can persist agent memory and retrieve it based on recency, semantic similarity, or conversation thread. * **[Workflow graphs](/docs/workflows/00-overview.mdx)**: When you want to execute LLM calls in a deterministic way, Mastra gives you a graph-based workflow engine. You can define discrete steps, log inputs and outputs at each step of each run, and pipe them into an observability tool. Mastra workflows have a simple syntax for control flow (`step()`, `.then()`, `.after()`) that allows branching and chaining. 
* **[Agent development environment](/docs/local-dev/mastra-dev.mdx)**: When you're developing an agent locally, you can chat with it and see its state and memory in Mastra's agent development environment. * **[Retrieval-augmented generation (RAG)](/docs/rag/overview.mdx)**: Mastra gives you APIs to process documents (text, HTML, Markdown, JSON) into chunks, create embeddings, and store them in a vector database. At query time, it retrieves relevant chunks to ground LLM responses in your data, with a unified API on top of multiple vector stores (Pinecone, pgvector, etc) and embedding providers (OpenAI, Cohere, etc). * **[Deployment](/docs/deployment/deployment.mdx)**: Mastra supports bundling your agents and workflows within an existing React, Next.js, or Node.js application, or into standalone endpoints. The Mastra deploy helper lets you easily bundle agents and workflows into a Node.js server using Hono, or deploy it onto a serverless platform like Vercel, Cloudflare Workers, or Netlify. * **[Evals](/docs/evals/00-overview.mdx)**: Mastra provides automated evaluation metrics that use model-graded, rule-based, and statistical methods to assess LLM outputs, with built-in metrics for toxicity, bias, relevance, and factual accuracy. You can also define your own evals. --- title: "Using Mastra Integrations | Mastra Local Development Docs" description: Documentation for Mastra integrations, which are auto-generated, type-safe API clients for third-party services. --- # Using Mastra Integrations Source: https://mastra.ai/docs/integrations Integrations in Mastra are auto-generated, type-safe API clients for third-party services. They can be used as tools for agents or as steps in workflows. ## Installing an Integration Mastra's default integrations are packaged as individually installable npm modules. You can add an integration to your project by installing it via npm and importing it into your Mastra configuration. ### Example: Adding the GitHub Integration 1. **Install the Integration Package** To install the GitHub integration, run: ```bash npm install @mastra/github ``` 2. **Add the Integration to Your Project** Create a new file for your integrations (e.g., `src/mastra/integrations/index.ts`) and import the integration: ```typescript filename="src/mastra/integrations/index.ts" showLineNumbers copy import { GithubIntegration } from '@mastra/github'; export const github = new GithubIntegration({ config: { PERSONAL_ACCESS_TOKEN: process.env.GITHUB_PAT!, }, }); ``` Make sure to replace `process.env.GITHUB_PAT!` with your actual GitHub Personal Access Token or ensure that the environment variable is properly set. 3. **Use the Integration in Tools or Workflows** You can now use the integration when defining tools for your agents or in workflows. ```typescript filename="src/mastra/tools/index.ts" showLineNumbers copy import { createTool } from '@mastra/core'; import { z } from 'zod'; import { github } from '../integrations'; export const getMainBranchRef = createTool({ id: 'getMainBranchRef', description: 'Fetch the main branch reference from a GitHub repository', inputSchema: z.object({ owner: z.string(), repo: z.string(), }), outputSchema: z.object({ ref: z.string().optional(), }), execute: async ({ context }) => { const client = await github.getApiClient(); const mainRef = await client.gitGetRef({ path: { owner: context.owner, repo: context.repo, ref: 'heads/main', }, }); return { ref: mainRef.data?.ref }; }, }); ``` In the example above: - We import the `github` integration. 
- We define a tool called `getMainBranchRef` that uses the GitHub API client to fetch the reference of the main branch of a repository.
- The tool accepts `owner` and `repo` as inputs and returns the reference string.

## Using Integrations in Agents

Once you've defined tools that utilize integrations, you can include these tools in your agents.

```typescript filename="src/mastra/agents/index.ts" showLineNumbers copy
import { openai } from '@ai-sdk/openai';
import { Agent } from '@mastra/core';

import { getMainBranchRef } from '../tools';

export const codeReviewAgent = new Agent({
  name: 'Code Review Agent',
  instructions: 'An agent that reviews code repositories and provides feedback.',
  model: openai('gpt-4o-mini'),
  tools: {
    getMainBranchRef,
    // other tools...
  },
});
```

In this setup:

- We create an agent named `Code Review Agent`.
- We include the `getMainBranchRef` tool in the agent's available tools.
- The agent can now use this tool to interact with GitHub repositories during conversations.

## Environment Configuration

Ensure that any required API keys or tokens for your integrations are properly set in your environment variables. For example, with the GitHub integration, you need to set your GitHub Personal Access Token:

```bash
GITHUB_PAT=your_personal_access_token
```

Consider using a `.env` file or another secure method to manage sensitive credentials.

## Available Integrations

Mastra provides several built-in integrations, primarily API-key-based integrations that do not require OAuth. Available integrations include GitHub, Stripe, Resend, Firecrawl, and more.

Check [Mastra's codebase](https://github.com/mastra-ai/mastra/tree/main/integrations) or [npm packages](https://www.npmjs.com/search?q=%22%40mastra%22) for a full list of available integrations.

## Conclusion

Integrations in Mastra enable your AI agents and workflows to interact with external services seamlessly. By installing and configuring integrations, you can extend the capabilities of your application to include operations such as fetching data from APIs, sending messages, or managing resources in third-party systems.

Remember to consult the documentation of each integration for specific usage details and to adhere to best practices for security and type safety.

---
title: "Adding to an Existing Project | Mastra Local Development Docs"
description: "Add Mastra to your existing Node.js applications"
---

# Adding to an Existing Project

Source: https://mastra.ai/docs/local-dev/add-to-existing-project

You can add Mastra to an existing project using the CLI:

```bash npm2yarn copy
npm install -g mastra@latest
mastra init
```

Running `mastra init` makes the following changes to your project:

1. Creates `src/mastra` directory with entry point
2. Adds required dependencies
3. Configures TypeScript compiler options

## Interactive Setup

Running commands without arguments starts a CLI prompt for:

1. Component selection
2. LLM provider configuration
3. API key setup
4. Example code inclusion

## Non-Interactive Setup

To initialize Mastra in non-interactive mode, use the following command-line arguments (an example invocation is shown below):

```bash
Arguments:
  --components     Specify components: agents, tools, workflows
  --llm-provider   LLM provider: openai, anthropic, or groq
  --add-example    Include example implementation
  --llm-api-key    Provider API key
  --dir            Directory for Mastra files (defaults to src/)
```

For more details, refer to the [mastra init CLI documentation](/docs/reference/cli/init).
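For example, a single non-interactive invocation might look like the following sketch. The flag values are illustrative; substitute your own components, provider, and API key, and check the CLI reference for the exact syntax each flag expects:

```bash copy
mastra init \
  --dir src/ \
  --components agents,tools \
  --llm-provider openai \
  --llm-api-key $OPENAI_API_KEY \
  --add-example
```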
---
title: "Creating a new Project | Mastra Local Development Docs"
description: "Create new Mastra projects or add Mastra to existing Node.js applications using the CLI"
---

# Creating a new project

Source: https://mastra.ai/docs/local-dev/creating-a-new-project

You can create a new project using the `create-mastra` package:

```bash npm2yarn copy
npm create mastra@latest
```

You can also create a new project by using the `mastra` CLI directly:

```bash npm2yarn copy
npm install -g mastra@latest
mastra create
```

## Interactive Setup

Running commands without arguments starts a CLI prompt for:

1. Project name
2. Component selection
3. LLM provider configuration
4. API key setup
5. Example code inclusion

## Non-Interactive Setup

To initialize Mastra in non-interactive mode, use the following command-line arguments:

```bash
Arguments:
  --components     Specify components: agents, tools, workflows
  --llm-provider   LLM provider: openai, anthropic, groq, google, or cerebras
  --add-example    Include example implementation
  --llm-api-key    Provider API key
  --project-name   Project name that will be used in package.json and as the project directory name
```

Generated project structure:

```
my-project/
├── src/
│   └── mastra/
│       └── index.ts    # Mastra entry point
├── package.json
└── tsconfig.json
```

---
title: "Inspecting Agents with `mastra dev` | Mastra Local Dev Docs"
description: Documentation for the Mastra local development environment for Mastra applications.
---

import YouTube from "../../../components/youtube";

# Local Development Environment

Source: https://mastra.ai/docs/local-dev/mastra-dev

Mastra provides a local development environment where you can test your agents, workflows, and tools as you build them.

## Launch Development Server

You can launch the Mastra development environment using the Mastra CLI by running:

```bash
mastra dev
```

By default, the server runs at http://localhost:4111, but you can change the port with the `--port` flag.

## Dev Playground

`mastra dev` serves a playground UI for interacting with your agents, workflows, and tools. The playground provides dedicated interfaces for testing each component of your Mastra application during development.

### Agent Playground

The Agent playground provides an interactive chat interface where you can test and debug your agents during development. Key features include:

- **Chat Interface**: Directly interact with your agents to test their responses and behavior.
- **Prompt CMS**: Experiment with different system instructions for your agent:
  - A/B test different prompt versions.
  - Track performance metrics for each variant.
  - Select and deploy the most effective prompt version.
- **Agent Traces**: View detailed execution traces to understand how your agent processes requests, including:
  - Prompt construction.
  - Tool usage.
  - Decision-making steps.
  - Response generation.
- **Agent Evals**: When you've set up [Agent evaluation metrics](/docs/evals/00-overview), you can:
  - Run evaluations directly from the playground.
  - View evaluation results and metrics.
  - Compare agent performance across different test cases.

### Workflow Playground

The Workflow playground helps you visualize and test your workflow implementations:

- **Workflow Visualization**: Visualize your workflow as a graph.
- **Run Workflows**:
  - Trigger test workflow runs with custom input data.
  - Debug workflow logic and conditions.
  - Simulate different execution paths.
  - View detailed execution logs for each step.
- **Workflow Traces**: Examine detailed execution traces that show:
  - Step-by-step workflow progression.
  - State transitions and data flow.
  - Tool invocations and their results.
  - Decision points and branching logic.
  - Error handling and recovery paths.

### Tools Playground

The Tools playground allows you to test your custom tools in isolation:

- Test individual tools without running a full agent or workflow.
- Input test data and view tool responses.
- Debug tool implementation and error handling.
- Verify tool input/output schemas.
- Monitor tool performance and execution time.

## REST API Endpoints

`mastra dev` also spins up REST API routes for your agents and workflows via the local [Mastra Server](/docs/deployment/server). This allows you to test your API endpoints before deployment.

See [Mastra Dev reference](/docs/reference/cli/dev#routes) for more details about all endpoints.

You can then leverage the [Mastra Client](/docs/deployment/client) SDK to interact with your served REST API routes seamlessly.

## OpenAPI Specification

`mastra dev` provides an OpenAPI spec at http://localhost:4111/openapi.json

## Local Dev Architecture

The local development server is designed to run without any external dependencies or containerization. This is achieved through:

- **Dev Server**: Uses [Hono](https://hono.dev) as the underlying framework to power the [Mastra Server](/docs/deployment/server).
- **In-Memory Storage**: Uses [LibSQL](https://libsql.org/) memory adapters for:
  - Agent memory management.
  - Trace storage.
  - Evals storage.
  - Workflow snapshots.
- **Vector Storage**: Uses [FastEmbed](https://github.com/qdrant/fastembed) for:
  - Default embedding generation.
  - Vector storage and retrieval.
  - Semantic search capabilities.

This architecture allows you to start developing immediately without setting up databases or vector stores, while still maintaining production-like behavior in your local environment.

## Summary

`mastra dev` makes it easy to develop, debug, and iterate on your AI logic in a self-contained environment before deploying to production.

- [Mastra Dev reference](../reference/cli/dev.mdx)

---
title: "Logging | Mastra Observability Documentation"
description: Documentation on effective logging in Mastra, crucial for understanding application behavior and improving AI accuracy.
---

import Image from "next/image";

# Logging

Source: https://mastra.ai/docs/observability/logging

In Mastra, logs can detail when certain functions run, what input data they receive, and how they respond.

## Basic Setup

Here's a minimal example that sets up a **console logger** at the `INFO` level. This will print messages at the `INFO` level and above (i.e., `INFO`, `WARN`, `ERROR`) to the console; `DEBUG` messages are suppressed.

```typescript filename="mastra.config.ts" showLineNumbers copy
import { Mastra } from "@mastra/core";
import { createLogger } from "@mastra/core/logger";

export const mastra = new Mastra({
  // Other Mastra configuration...
  logger: createLogger({
    name: "Mastra",
    level: "info",
  }),
});
```

In this configuration:

- `name: "Mastra"` specifies the name to group logs under.
- `level: "info"` sets the minimum severity of logs to record.

## Configuration

- For more details on the options you can pass to `createLogger()`, see the [createLogger reference documentation](/docs/reference/observability/create-logger.mdx).
- Once you have a `Logger` instance, see the [Logger instance reference documentation](/docs/reference/observability/logger.mdx) for the methods you can call on it (e.g., `.info()`, `.warn()`, `.error()`).
- If you want to send your logs to an external service for centralized collection, analysis, or storage, you can configure other logger types such as Upstash Redis. Consult the [createLogger reference documentation](/docs/reference/observability/create-logger.mdx) for details on parameters like `url`, `token`, and `key` when using the `UPSTASH` logger type. --- title: "Next.js Tracing | Mastra Observability Documentation" description: "Set up OpenTelemetry tracing for Next.js applications" --- # Next.js Tracing Source: https://mastra.ai/docs/observability/nextjs-tracing If you're using Next.js, you have two options for setting up OpenTelemetry instrumentation: ### Option 1: Using Vercel's OTEL Setup If you're deploying to Vercel, you can use their built-in OpenTelemetry setup: 1. Install the required dependencies: ```bash copy npm install @opentelemetry/api @vercel/otel ``` 2. Create an instrumentation file at the root of your project (or in the src folder if using one): ```ts filename="instrumentation.ts" copy import { registerOTel } from '@vercel/otel' export function register() { registerOTel({ serviceName: 'your-project-name' }) } ``` ### Option 2: Using Custom Exporters If you're using other observability tools (like Langfuse), you can configure a custom exporter: 1. Install the required dependencies (example using Langfuse): ```bash copy npm install @opentelemetry/api langfuse-vercel ``` 2. Create an instrumentation file: ```ts filename="instrumentation.ts" copy import { NodeSDK, ATTR_SERVICE_NAME, Resource, } from '@mastra/core/telemetry/otel-vendor'; import { LangfuseExporter } from 'langfuse-vercel'; export function register() { const exporter = new LangfuseExporter({ // ... Langfuse config }) const sdk = new NodeSDK({ resource: new Resource({ [ATTR_SERVICE_NAME]: 'ai', }), traceExporter: exporter, }); sdk.start(); } ``` ### Next.js Configuration For either option, enable the instrumentation hook in your Next.js config: ```ts filename="next.config.ts" showLineNumbers copy import type { NextConfig } from "next"; const nextConfig: NextConfig = { experimental: { instrumentationHook: true // Not required in Next.js 15+ } }; export default nextConfig; ``` ### Mastra Configuration Configure your Mastra instance: ```typescript filename="mastra.config.ts" copy import { Mastra } from "@mastra/core"; export const mastra = new Mastra({ // ... other config telemetry: { serviceName: "your-project-name", enabled: true } }); ``` This setup will enable OpenTelemetry tracing for your Next.js application and Mastra operations. For more details, see the documentation for: - [Next.js Instrumentation](https://nextjs.org/docs/app/building-your-application/optimizing/instrumentation) - [Vercel OpenTelemetry](https://vercel.com/docs/observability/otel-overview/quickstart) --- title: "Tracing | Mastra Observability Documentation" description: "Set up OpenTelemetry tracing for Mastra applications" --- import Image from "next/image"; # Tracing Source: https://mastra.ai/docs/observability/tracing Mastra supports the OpenTelemetry Protocol (OTLP) for tracing and monitoring your application. When telemetry is enabled, Mastra automatically traces all core primitives including agent operations, LLM interactions, tool executions, integration calls, workflow runs, and database operations. Your telemetry data can then be exported to any OTEL collector. 
### Basic Configuration

Here's a simple example of enabling telemetry:

```ts filename="mastra.config.ts" showLineNumbers copy
export const mastra = new Mastra({
  // ... other config
  telemetry: {
    serviceName: "my-app",
    enabled: true,
    sampling: {
      type: "always_on",
    },
    export: {
      type: "otlp",
      endpoint: "http://localhost:4318", // SigNoz local endpoint
    },
  },
});
```

### Configuration Options

The telemetry config accepts these properties:

```ts
type OtelConfig = {
  // Name to identify your service in traces (optional)
  serviceName?: string;

  // Enable/disable telemetry (defaults to true)
  enabled?: boolean;

  // Control how many traces are sampled
  sampling?: {
    type: "ratio" | "always_on" | "always_off" | "parent_based";
    probability?: number; // For ratio sampling
    root?: {
      probability: number; // For parent_based sampling
    };
  };

  // Where to send telemetry data
  export?: {
    type: "otlp" | "console";
    endpoint?: string;
    headers?: Record<string, string>;
  };
};
```

See the [OtelConfig reference documentation](/docs/reference/observability/otel-config.mdx) for more details.

### Environment Variables

You can configure the OTLP endpoint and headers through environment variables:

```env filename=".env" copy
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
OTEL_EXPORTER_OTLP_HEADERS=x-api-key=your-api-key
```

Then in your config:

```ts filename="mastra.config.ts" showLineNumbers copy
export const mastra = new Mastra({
  // ... other config
  telemetry: {
    serviceName: "my-app",
    enabled: true,
    export: {
      type: "otlp",
      // endpoint and headers will be picked up from env vars
    },
  },
});
```

### Example: SigNoz Integration

Here's what a traced agent interaction looks like in [SigNoz](https://signoz.io):

*Agent interaction trace showing spans, LLM calls, and tool executions*

### Other Supported Providers

For a complete list of supported observability providers and their configuration details, see the [Observability Providers reference](../reference/observability/providers/).

### Framework-Specific Setup

For Next.js applications, see our [Next.js Tracing](/docs/observability/nextjs-tracing) guide.

---
title: Chunking and Embedding Documents | RAG | Mastra Docs
description: Guide on chunking and embedding documents in Mastra for efficient processing and retrieval.
---

## Chunking and Embedding Documents

Source: https://mastra.ai/docs/rag/chunking-and-embedding

Before processing, create an `MDocument` instance from your content. You can initialize it from various formats:

```ts showLineNumbers copy
import { MDocument } from "@mastra/rag";

const docFromText = MDocument.fromText("Your plain text content...");
const docFromHTML = MDocument.fromHTML("Your HTML content...");
const docFromMarkdown = MDocument.fromMarkdown("# Your Markdown content...");
const docFromJSON = MDocument.fromJSON(`{ "key": "value" }`);
```

## Step 1: Document Processing

Use `chunk` to split documents into manageable pieces.
Mastra supports multiple chunking strategies optimized for different document types:

- `recursive`: Smart splitting based on content structure
- `character`: Simple character-based splits
- `token`: Token-aware splitting
- `markdown`: Markdown-aware splitting
- `html`: HTML structure-aware splitting
- `json`: JSON structure-aware splitting
- `latex`: LaTeX structure-aware splitting

Here's an example of how to use the `recursive` strategy:

```ts showLineNumbers copy
const chunks = await doc.chunk({
  strategy: "recursive",
  size: 512,
  overlap: 50,
  separator: "\n",
  extract: {
    metadata: true, // Optionally extract metadata
  },
});
```

**Note:** Metadata extraction may use LLM calls, so ensure your API key is set.

We go deeper into chunking strategies in our [chunk documentation](/docs/reference/rag/chunk.mdx).

## Step 2: Embedding Generation

Transform chunks into embeddings using your preferred provider. Mastra supports both OpenAI and Cohere embeddings:

### Using OpenAI

```ts showLineNumbers copy
import { openai } from "@ai-sdk/openai";
import { embedMany } from "ai";

const { embeddings } = await embedMany({
  model: openai.embedding('text-embedding-3-small'),
  values: chunks.map(chunk => chunk.text),
});
```

### Using Cohere

```ts showLineNumbers copy
import { embedMany } from 'ai';
import { cohere } from '@ai-sdk/cohere';

const { embeddings } = await embedMany({
  model: cohere.embedding('embed-english-v3.0'),
  values: chunks.map(chunk => chunk.text),
});
```

The embedding functions return vectors: arrays of numbers representing the semantic meaning of your text, ready for similarity searches in your vector database.

## Example: Complete Pipeline

Here's an example showing document processing and embedding generation with both providers:

```ts showLineNumbers copy
import { embedMany } from "ai";
import { openai } from "@ai-sdk/openai";
import { cohere } from "@ai-sdk/cohere";

import { MDocument } from "@mastra/rag";

// Initialize document
const doc = MDocument.fromText(`
  Climate change poses significant challenges to global agriculture.
  Rising temperatures and changing precipitation patterns affect crop yields.
`);

// Create chunks
const chunks = await doc.chunk({
  strategy: "recursive",
  size: 256,
  overlap: 50,
});

// Generate embeddings with OpenAI
const { embeddings: openAIEmbeddings } = await embedMany({
  model: openai.embedding('text-embedding-3-small'),
  values: chunks.map(chunk => chunk.text),
});

// OR

// Generate embeddings with Cohere
const { embeddings: cohereEmbeddings } = await embedMany({
  model: cohere.embedding('embed-english-v3.0'),
  values: chunks.map(chunk => chunk.text),
});

// Store embeddings in your vector database
// (vectorStore is a previously initialized store, e.g. PgVector)
await vectorStore.upsert({
  indexName: "embeddings",
  vectors: openAIEmbeddings, // or cohereEmbeddings
});
```

This example demonstrates how to process a document, split it into chunks, generate embeddings with both OpenAI and Cohere, and store the results in a vector database.

For more examples of different chunking strategies and embedding configurations, see:

- [Adjust Chunk Size](/docs/reference/rag/chunk.mdx#adjust-chunk-size)
- [Adjust Chunk Delimiters](/docs/reference/rag/chunk.mdx#adjust-chunk-delimiters)
- [Embed Text with Cohere](/docs/reference/rag/embeddings.mdx#using-cohere)

---
title: RAG (Retrieval-Augmented Generation) in Mastra | Mastra Docs
description: Overview of Retrieval-Augmented Generation (RAG) in Mastra, detailing its capabilities for enhancing LLM outputs with relevant context.
--- # RAG (Retrieval-Augmented Generation) in Mastra Source: https://mastra.ai/docs/rag/overview RAG in Mastra helps you enhance LLM outputs by incorporating relevant context from your own data sources, improving accuracy and grounding responses in real information. Mastra's RAG system provides: - Standardized APIs to process and embed documents - Support for multiple vector stores - Chunking and embedding strategies for optimal retrieval - Observability for tracking embedding and retrieval performance ## Example To implement RAG, you process your documents into chunks, create embeddings, store them in a vector database, and then retrieve relevant context at query time. ```ts showLineNumbers copy import { embedMany } from "ai"; import { openai } from "@ai-sdk/openai"; import { PgVector } from "@mastra/pg"; import { MDocument } from "@mastra/rag"; import { z } from "zod"; // 1. Initialize document const doc = MDocument.fromText(`Your document text here...`); // 2. Create chunks const chunks = await doc.chunk({ strategy: "recursive", size: 512, overlap: 50, }); // 3. Generate embeddings; we need to pass the text of each chunk const { embeddings } = await embedMany({ values: chunks.map(chunk => chunk.text), model: openai.embedding("text-embedding-3-small"), }); // 4. Store in vector database const pgVector = new PgVector(process.env.POSTGRES_CONNECTION_STRING); await pgVector.upsert({ indexName: "embeddings", vectors: embeddings, }); // using an index name of 'embeddings' // 5. Query similar chunks const results = await pgVector.query({ indexName: "embeddings", queryVector: queryVector, topK: 3, }); // queryVector is the embedding of the query console.log("Similar chunks:", results); ``` This example shows the essentials: initialize a document, create chunks, generate embeddings, store them, and query for similar content. ## Document Processing The basic building block of RAG is document processing. Documents can be chunked using various strategies (recursive, sliding window, etc.) and enriched with metadata. See the [chunking and embedding doc](./chunking-and-embedding.mdx). ## Vector Storage Mastra supports multiple vector stores for embedding persistence and similarity search, including pgvector, Pinecone, and Qdrant. See the [vector database doc](./vector-databases.mdx). ## Observability and Debugging Mastra's RAG system includes observability features to help you optimize your retrieval pipeline: - Track embedding generation performance and costs - Monitor chunk quality and retrieval relevance - Analyze query patterns and cache hit rates - Export metrics to your observability platform See the [OTel Configuration](../reference/observability/otel-config.mdx) page for more details. ## More resources - [Chain of Thought RAG Example](../../examples/rag/usage/cot-rag.mdx) - [All RAG Examples](../../examples/) (including different chunking strategies, embedding models, and vector stores) --- title: "Retrieval, Semantic Search, Reranking | RAG | Mastra Docs" description: Guide on retrieval processes in Mastra's RAG systems, including semantic search, filtering, and re-ranking. --- import { Tabs } from "nextra/components"; ## Retrieval in RAG Systems Source: https://mastra.ai/docs/rag/retrieval After storing embeddings, you need to retrieve relevant chunks to answer user queries. Mastra provides flexible retrieval options with support for semantic search, filtering, and re-ranking. ## How Retrieval Works 1. 
The user's query is converted to an embedding using the same model used for document embeddings 2. This embedding is compared to stored embeddings using vector similarity 3. The most similar chunks are retrieved and can be optionally: - Filtered by metadata - Re-ranked for better relevance - Processed through a knowledge graph ## Basic Retrieval The simplest approach is direct semantic search. This method uses vector similarity to find chunks that are semantically similar to the query: ```ts showLineNumbers copy import { openai } from "@ai-sdk/openai"; import { embed } from "ai"; import { PgVector } from "@mastra/pg"; // Convert query to embedding const { embedding } = await embed({ value: "What are the main points in the article?", model: openai.embedding('text-embedding-3-small'), }); // Query vector store const pgVector = new PgVector(process.env.POSTGRES_CONNECTION_STRING); const results = await pgVector.query({ indexName: "embeddings", queryVector: embedding, topK: 10, }); // Display results console.log(results); ``` Results include both the text content and a similarity score: ```ts showLineNumbers copy [ { text: "Climate change poses significant challenges...", score: 0.89, metadata: { source: "article1.txt" } }, { text: "Rising temperatures affect crop yields...", score: 0.82, metadata: { source: "article1.txt" } } // ... more results ] ``` For an example of how to use the basic retrieval method, see the [Retrieve Results](../../examples/rag/query/retrieve-results.mdx) example. ## Advanced Retrieval options ### Metadata Filtering Filter results based on metadata fields to narrow down the search space. This is useful when you have documents from different sources, time periods, or with specific attributes. Mastra provides a unified MongoDB-style query syntax that works across all supported vector stores. For detailed information about available operators and syntax, see the [Metadata Filters Reference](/docs/reference/rag/metadata-filters). Basic filtering examples: ```ts showLineNumbers copy // Simple equality filter const results = await pgVector.query({ indexName: "embeddings", queryVector: embedding, topK: 10, filter: { source: "article1.txt" } }); // Numeric comparison const results = await pgVector.query({ indexName: "embeddings", queryVector: embedding, topK: 10, filter: { price: { $gt: 100 } } }); // Multiple conditions const results = await pgVector.query({ indexName: "embeddings", queryVector: embedding, topK: 10, filter: { category: "electronics", price: { $lt: 1000 }, inStock: true } }); // Array operations const results = await pgVector.query({ indexName: "embeddings", queryVector: embedding, topK: 10, filter: { tags: { $in: ["sale", "new"] } } }); // Logical operators const results = await pgVector.query({ indexName: "embeddings", queryVector: embedding, topK: 10, filter: { $or: [ { category: "electronics" }, { category: "accessories" } ], $and: [ { price: { $gt: 50 } }, { price: { $lt: 200 } } ] } }); ``` Common use cases for metadata filtering: - Filter by document source or type - Filter by date ranges - Filter by specific categories or tags - Filter by numerical ranges (e.g., price, rating) - Combine multiple conditions for precise querying - Filter by document attributes (e.g., language, author) For an example of how to use metadata filtering, see the [Hybrid Vector Search](../../examples/rag/query/hybrid-vector-search.mdx) example. ### Vector Query Tool Sometimes you want to give your agent the ability to query a vector database directly. 
The Vector Query Tool allows your agent to be in charge of retrieval decisions, combining semantic search with optional filtering and reranking based on the agent's understanding of the user's needs. ```ts showLineNumbers copy const vectorQueryTool = createVectorQueryTool({ vectorStoreName: 'pgVector', indexName: 'embeddings', model: openai.embedding('text-embedding-3-small'), }); ``` When creating the tool, pay special attention to the tool's name and description - these help the agent understand when and how to use the retrieval capabilities. For example, you might name it "SearchKnowledgeBase" and describe it as "Search through our documentation to find relevant information about X topic." This is particularly useful when: - Your agent needs to dynamically decide what information to retrieve - The retrieval process requires complex decision-making - You want the agent to combine multiple retrieval strategies based on context For detailed configuration options and advanced usage, see the [Vector Query Tool Reference](/docs/reference/tools/vector-query-tool). ### Vector Store Prompts Vector store prompts define query patterns and filtering capabilities for each vector database implementation. When implementing filtering, these prompts are required in the agent's instructions to specify valid operators and syntax for each vector store implementation. ```ts showLineNumbers copy import { openai } from '@ai-sdk/openai'; import { PGVECTOR_PROMPT } from "@mastra/rag"; export const ragAgent = new Agent({ name: 'RAG Agent', model: openai('gpt-4o-mini'), instructions: ` Process queries using the provided context. Structure responses to be concise and relevant. ${PGVECTOR_PROMPT} `, tools: { vectorQueryTool }, }); ``` ```ts filename="vector-store.ts" showLineNumbers copy import { openai } from '@ai-sdk/openai'; import { PINECONE_PROMPT } from "@mastra/rag"; export const ragAgent = new Agent({ name: 'RAG Agent', model: openai('gpt-4o-mini'), instructions: ` Process queries using the provided context. Structure responses to be concise and relevant. ${PINECONE_PROMPT} `, tools: { vectorQueryTool }, }); ``` ```ts filename="vector-store.ts" showLineNumbers copy import { openai } from '@ai-sdk/openai'; import { QDRANT_PROMPT } from "@mastra/rag"; export const ragAgent = new Agent({ name: 'RAG Agent', model: openai('gpt-4o-mini'), instructions: ` Process queries using the provided context. Structure responses to be concise and relevant. ${QDRANT_PROMPT} `, tools: { vectorQueryTool }, }); ``` ```ts filename="vector-store.ts" showLineNumbers copy import { openai } from '@ai-sdk/openai'; import { CHROMA_PROMPT } from "@mastra/rag"; export const ragAgent = new Agent({ name: 'RAG Agent', model: openai('gpt-4o-mini'), instructions: ` Process queries using the provided context. Structure responses to be concise and relevant. ${CHROMA_PROMPT} `, tools: { vectorQueryTool }, }); ``` ```ts filename="vector-store.ts" showLineNumbers copy import { openai } from '@ai-sdk/openai'; import { ASTRA_PROMPT } from "@mastra/rag"; export const ragAgent = new Agent({ name: 'RAG Agent', model: openai('gpt-4o-mini'), instructions: ` Process queries using the provided context. Structure responses to be concise and relevant. 
${ASTRA_PROMPT} `, tools: { vectorQueryTool }, }); ``` ```ts filename="vector-store.ts" showLineNumbers copy import { openai } from '@ai-sdk/openai'; import { LIBSQL_PROMPT } from "@mastra/rag"; export const ragAgent = new Agent({ name: 'RAG Agent', model: openai('gpt-4o-mini'), instructions: ` Process queries using the provided context. Structure responses to be concise and relevant. ${LIBSQL_PROMPT} `, tools: { vectorQueryTool }, }); ``` ```ts filename="vector-store.ts" showLineNumbers copy import { openai } from '@ai-sdk/openai'; import { UPSTASH_PROMPT } from "@mastra/rag"; export const ragAgent = new Agent({ name: 'RAG Agent', model: openai('gpt-4o-mini'), instructions: ` Process queries using the provided context. Structure responses to be concise and relevant. ${UPSTASH_PROMPT} `, tools: { vectorQueryTool }, }); ``` ```ts filename="vector-store.ts" showLineNumbers copy import { openai } from '@ai-sdk/openai'; import { VECTORIZE_PROMPT } from "@mastra/rag"; export const ragAgent = new Agent({ name: 'RAG Agent', model: openai('gpt-4o-mini'), instructions: ` Process queries using the provided context. Structure responses to be concise and relevant. ${VECTORIZE_PROMPT} `, tools: { vectorQueryTool }, }); ``` ### Re-ranking Initial vector similarity search can sometimes miss nuanced relevance. Re-ranking is a more computationally expensive process, but more accurate algorithm that improves results by: - Considering word order and exact matches - Applying more sophisticated relevance scoring - Using a method called cross-attention between query and documents Here's how to use re-ranking: ```ts showLineNumbers copy import { openai } from "@ai-sdk/openai"; import { rerank } from "@mastra/rag"; // Get initial results from vector search const initialResults = await pgVector.query({ indexName: "embeddings", queryVector: queryEmbedding, topK: 10, }); // Re-rank the results const rerankedResults = await rerank(initialResults, query, openai('gpt-4o-mini')); ``` > **Note:** For semantic scoring to work properly during re-ranking, each result must include the text content in its `metadata.text` field. The re-ranked results combine vector similarity with semantic understanding to improve retrieval quality. For more details about re-ranking, see the [rerank()](/docs/reference/rag/rerank) method. For an example of how to use the re-ranking method, see the [Re-ranking Results](../../examples/rag/rerank/rerank.mdx) example. ### Graph-based Retrieval For documents with complex relationships, graph-based retrieval can follow connections between chunks. This helps when: - Information is spread across multiple documents - Documents reference each other - You need to traverse relationships to find complete answers Example setup: ```ts showLineNumbers copy const graphQueryTool = createGraphQueryTool({ vectorStoreName: 'pgVector', indexName: 'embeddings', model: openai.embedding('text-embedding-3-small'), graphOptions: { threshold: 0.7, } }); ``` For more details about graph-based retrieval, see the [GraphRAG](/docs/reference/rag/graph-rag) class and the [createGraphQueryTool()](/docs/reference/tools/graph-rag-tool) function. For an example of how to use the graph-based retrieval method, see the [Graph-based Retrieval](../../examples/rag/usage/graph-rag.mdx) example. --- title: "Storing Embeddings in A Vector Database | Mastra Docs" description: Guide on vector storage options in Mastra, including embedded and dedicated vector databases for similarity search. 
--- import { Tabs } from "nextra/components"; ## Storing Embeddings in A Vector Database Source: https://mastra.ai/docs/rag/vector-databases After generating embeddings, you need to store them in a database that supports vector similarity search. Mastra provides a consistent interface for storing and querying embeddings across different vector databases. ## Supported Databases ```ts filename="vector-store.ts" showLineNumbers copy import { PgVector } from '@mastra/pg'; const store = new PgVector(process.env.POSTGRES_CONNECTION_STRING) await store.createIndex({ indexName: "myCollection", dimension: 1536, }); await store.upsert({ indexName: "myCollection", vectors: embeddings, metadata: chunks.map(chunk => ({ text: chunk.text })), }); ``` ### Using PostgreSQL with pgvector PostgreSQL with the pgvector extension is a good solution for teams already using PostgreSQL who want to minimize infrastructure complexity. For detailed setup instructions and best practices, see the [official pgvector repository](https://github.com/pgvector/pgvector). ```ts filename="vector-store.ts" showLineNumbers copy import { PineconeVector } from '@mastra/pinecone' const store = new PineconeVector(process.env.PINECONE_API_KEY) await store.createIndex({ indexName: "myCollection", dimension: 1536, }); await store.upsert({ indexName: "myCollection", vectors: embeddings, metadata: chunks.map(chunk => ({ text: chunk.text })), }); ``` ```ts filename="vector-store.ts" showLineNumbers copy import { QdrantVector } from '@mastra/qdrant' const store = new QdrantVector({ url: process.env.QDRANT_URL, apiKey: process.env.QDRANT_API_KEY }) await store.createIndex({ indexName: "myCollection", dimension: 1536, }); await store.upsert({ indexName: "myCollection", vectors: embeddings, metadata: chunks.map(chunk => ({ text: chunk.text })), }); ``` ```ts filename="vector-store.ts" showLineNumbers copy import { ChromaVector } from '@mastra/chroma' const store = new ChromaVector() await store.createIndex({ indexName: "myCollection", dimension: 1536, }); await store.upsert({ indexName: "myCollection", vectors: embeddings, metadata: chunks.map(chunk => ({ text: chunk.text })), }); ``` ```ts filename="vector-store.ts" showLineNumbers copy import { AstraVector } from '@mastra/astra' const store = new AstraVector({ token: process.env.ASTRA_DB_TOKEN, endpoint: process.env.ASTRA_DB_ENDPOINT, keyspace: process.env.ASTRA_DB_KEYSPACE }) await store.createIndex({ indexName: "myCollection", dimension: 1536, }); await store.upsert({ indexName: "myCollection", vectors: embeddings, metadata: chunks.map(chunk => ({ text: chunk.text })), }); ``` ```ts filename="vector-store.ts" showLineNumbers copy import { LibSQLVector } from "@mastra/core/vector/libsql"; const store = new LibSQLVector({ connectionUrl: process.env.DATABASE_URL, authToken: process.env.DATABASE_AUTH_TOKEN // Optional: for Turso cloud databases }) await store.createIndex({ indexName: "myCollection", dimension: 1536, }); await store.upsert({ indexName: "myCollection", vectors: embeddings, metadata: chunks.map(chunk => ({ text: chunk.text })), }); ``` ```ts filename="vector-store.ts" showLineNumbers copy import { UpstashVector } from '@mastra/upstash' const store = new UpstashVector({ url: process.env.UPSTASH_URL, token: process.env.UPSTASH_TOKEN }) await store.createIndex({ indexName: "myCollection", dimension: 1536, }); await store.upsert({ indexName: "myCollection", vectors: embeddings, metadata: chunks.map(chunk => ({ text: chunk.text })), }); ``` ```ts filename="vector-store.ts" 
showLineNumbers copy import { CloudflareVector } from '@mastra/vectorize' const store = new CloudflareVector({ accountId: process.env.CF_ACCOUNT_ID, apiToken: process.env.CF_API_TOKEN }) await store.createIndex({ indexName: "myCollection", dimension: 1536, }); await store.upsert({ indexName: "myCollection", vectors: embeddings, metadata: chunks.map(chunk => ({ text: chunk.text })), }); ``` ## Using Vector Storage Once initialized, all vector stores share the same interface for creating indexes, upserting embeddings, and querying. ### Creating Indexes Before storing embeddings, you need to create an index with the appropriate dimension size for your embedding model: ```ts filename="store-embeddings.ts" showLineNumbers copy // Create an index with dimension 1536 (for text-embedding-3-small) await store.createIndex({ indexName: 'myCollection', dimension: 1536, }); // For other models, use their corresponding dimensions: // - text-embedding-3-large: 3072 // - text-embedding-ada-002: 1536 // - cohere-embed-multilingual-v3: 1024 ``` The dimension size must match the output dimension of your chosen embedding model. Common dimension sizes are: - OpenAI text-embedding-3-small: 1536 dimensions - OpenAI text-embedding-3-large: 3072 dimensions - Cohere embed-multilingual-v3: 1024 dimensions ### Naming Rules for Databases Each vector database enforces specific naming conventions for indexes and collections to ensure compatibility and prevent conflicts. Index names must: - Start with a letter or underscore - Contain only letters, numbers, and underscores - Example: `my_index_123` is valid - Example: `my-index` is not valid (contains hyphen) Index names must: - Use only lowercase letters, numbers, and dashes - Not contain dots (used for DNS routing) - Not use non-Latin characters or emojis - Have a combined length (with project ID) under 52 characters - Example: `my-index-123` is valid - Example: `my.index` is not valid (contains dot) Collection names must: - Be 1-255 characters long - Not contain any of these special characters: - `< > : " / \ | ? *` - Null character (`\0`) - Unit separator (`\u{1F}`) - Example: `my_collection_123` is valid - Example: `my/collection` is not valid (contains slash) Collection names must: - Be 3-63 characters long - Start and end with a letter or number - Contain only letters, numbers, underscores, or hyphens - Not contain consecutive periods (..) - Not be a valid IPv4 address - Example: `my-collection-123` is valid - Example: `my..collection` is not valid (consecutive periods) Collection names must: - Not be empty - Be 48 characters or less - Contain only letters, numbers, and underscores - Example: `my_collection_123` is valid - Example: `my-collection` is not valid (contains hyphen) Index names must: - Start with a letter or underscore - Contain only letters, numbers, and underscores - Example: `my_index_123` is valid - Example: `my-index` is not valid (contains hyphen) Namespace names must: - Be 2-100 characters long - Contain only: - Alphanumeric characters (a-z, A-Z, 0-9) - Underscores, hyphens, dots - Not start or end with special characters (_, -, .) 
- Can be case-sensitive - Example: `MyNamespace123` is valid - Example: `_namespace` is not valid (starts with underscore) Index names must: - Start with a letter - Be shorter than 32 characters - Contain only lowercase ASCII letters, numbers, and dashes - Use dashes instead of spaces - Example: `my-index-123` is valid - Example: `My_Index` is not valid (uppercase and underscore) ### Upserting Embeddings After creating an index, you can store embeddings along with their basic metadata: ```ts filename="store-embeddings.ts" showLineNumbers copy // Store embeddings with their corresponding metadata await store.upsert({ indexName: 'myCollection', // index name vectors: embeddings, // array of embedding vectors metadata: chunks.map(chunk => ({ text: chunk.text, // The original text content id: chunk.id // Optional unique identifier })) }); ``` The upsert operation: - Takes an array of embedding vectors and their corresponding metadata - Updates existing vectors if they share the same ID - Creates new vectors if they don't exist - Automatically handles batching for large datasets For complete examples of upserting embeddings in different vector stores, see the [Upsert Embeddings](../../examples/rag/upsert/upsert-embeddings.mdx) guide. ## Adding Metadata Vector stores support rich metadata (any JSON-serializable fields) for filtering and organization. Since metadata is stored with no fixed schema, use consistent field naming to avoid unexpected query results. **Important**: Metadata is crucial for vector storage - without it, you'd only have numerical embeddings with no way to return the original text or filter results. Always store at least the source text as metadata. ```ts showLineNumbers copy // Store embeddings with rich metadata for better organization and filtering await store.upsert({ indexName: "myCollection", vectors: embeddings, metadata: chunks.map((chunk) => ({ // Basic content text: chunk.text, id: chunk.id, // Document organization source: chunk.source, category: chunk.category, // Temporal metadata createdAt: new Date().toISOString(), version: "1.0", // Custom fields language: chunk.language, author: chunk.author, confidenceScore: chunk.score, })), }); ``` Key metadata considerations: - Be strict with field naming - inconsistencies like 'category' vs 'Category' will affect queries - Only include fields you plan to filter or sort by - extra fields add overhead - Add timestamps (e.g., 'createdAt', 'lastUpdated') to track content freshness ## Best Practices - Create indexes before bulk insertions - Use batch operations for large insertions (the upsert method handles batching automatically) - Only store metadata you'll query against - Match embedding dimensions to your model (e.g., 1536 for `text-embedding-3-small`) --- title: "Reference: createTool() | Tools | Agents | Mastra Docs" description: Documentation for the createTool function in Mastra, which creates custom tools for agents and workflows. --- # `createTool()` Source: https://mastra.ai/docs/reference/agents/createTool The `createTool()` function creates typed tools that can be executed by agents or workflows. Tools have built-in schema validation, execution context, and integration with the Mastra ecosystem. ## Overview Tools are a fundamental building block in Mastra that allow agents to interact with external systems, perform computations, and access data. 
Each tool has: - A unique identifier - A description that helps the AI understand when and how to use the tool - Optional input and output schemas for validation - An execution function that implements the tool's logic ## Example Usage ```ts filename="src/tools/stock-tools.ts" showLineNumbers copy import { createTool } from "@mastra/core/tools"; import { z } from "zod"; // Helper function to fetch stock data const getStockPrice = async (symbol: string) => { const response = await fetch( `https://mastra-stock-data.vercel.app/api/stock-data?symbol=${symbol}` ); const data = await response.json(); return data.prices["4. close"]; }; // Create a tool to get stock prices export const stockPriceTool = createTool({ id: "getStockPrice", description: "Fetches the current stock price for a given ticker symbol", inputSchema: z.object({ symbol: z.string().describe("The stock ticker symbol (e.g., AAPL, MSFT)") }), outputSchema: z.object({ symbol: z.string(), price: z.number(), currency: z.string(), timestamp: z.string() }), execute: async ({ context }) => { const price = await getStockPrice(context.symbol); return { symbol: context.symbol, price: parseFloat(price), currency: "USD", timestamp: new Date().toISOString() }; } }); // Create a tool that uses the thread context export const threadInfoTool = createTool({ id: "getThreadInfo", description: "Returns information about the current conversation thread", inputSchema: z.object({ includeResource: z.boolean().optional().default(false) }), execute: async ({ context, threadId, resourceId }) => { return { threadId, resourceId: context.includeResource ? resourceId : undefined, timestamp: new Date().toISOString() }; } }); ``` ## API Reference ### Parameters `createTool()` accepts a single object with the following properties: Promise", required: false, description: "Async function that implements the tool's logic. Receives the execution context and optional configuration.", properties: [ { type: "ToolExecutionContext", parameters: [ { name: "context", type: "object", description: "The validated input data that matches the inputSchema" }, { name: "threadId", type: "string", isOptional: true, description: "Identifier for the conversation thread, if available" }, { name: "resourceId", type: "string", isOptional: true, description: "Identifier for the user or resource interacting with the tool" }, { name: "mastra", type: "Mastra", isOptional: true, description: "Reference to the Mastra instance, if available" }, ] }, { type: "ToolOptions", parameters: [ { name: "toolCallId", type: "string", description: "The ID of the tool call. You can use it e.g. when sending tool-call related information with stream data." }, { name: "messages", type: "CoreMessage[]", description: "Messages that were sent to the language model to initiate the response that contained the tool call. The messages do not include the system prompt nor the assistant response that contained the tool call." }, { name: "abortSignal", type: "AbortSignal", isOptional: true, description: "An optional abort signal that indicates that the overall operation should be aborted." }, ] } ] }, { name: "inputSchema", type: "ZodSchema", required: false, description: "Zod schema that defines and validates the tool's input parameters. If not provided, the tool will accept any input." }, { name: "outputSchema", type: "ZodSchema", required: false, description: "Zod schema that defines and validates the tool's output. Helps ensure the tool returns data in the expected format." 
### Returns

`createTool()` returns a `Tool` instance that can be used with agents, workflows, or directly executed. The instance exposes the following properties:

- `id` (`string`): The tool's unique identifier.
- `description` (`string`): Description of the tool's functionality.
- `inputSchema` (`ZodSchema | undefined`): Schema for validating inputs.
- `outputSchema` (`ZodSchema | undefined`): Schema for validating outputs.
- `execute` (`Function`): The tool's execution function.

## Type Safety

The `createTool()` function provides full type safety through TypeScript generics:

- Input types are inferred from the `inputSchema`
- Output types are inferred from the `outputSchema`
- The execution context is properly typed based on the input schema

This ensures that your tools are type-safe throughout your application.

## Best Practices

1. **Descriptive IDs**: Use clear, action-oriented IDs like `getWeatherForecast` or `searchDatabase`
2. **Detailed Descriptions**: Provide comprehensive descriptions that explain when and how to use the tool
3. **Input Validation**: Use Zod schemas to validate inputs and provide helpful error messages
4. **Error Handling**: Implement proper error handling in your execute function
5. **Idempotency**: When possible, make your tools idempotent (same input always produces same output)
6. **Performance**: Keep tools lightweight and fast to execute

---
title: "Reference: Agent.generate() | Agents | Mastra Docs"
description: "Documentation for the `.generate()` method in Mastra agents, which produces text or structured responses."
---

# Agent.generate()

Source: https://mastra.ai/docs/reference/agents/generate

The `generate()` method is used to interact with an agent to produce text or structured responses. This method accepts `messages` and an optional `options` object as parameters.

## Parameters

### `messages`

The `messages` parameter can be:

- A single string
- An array of strings
- An array of message objects with `role` and `content` properties

The message object structure:

```typescript
interface Message {
  role: 'system' | 'user' | 'assistant';
  content: string;
}
```

### `options` (Optional)

An optional object that can include configuration for output structure, memory management, tool usage, telemetry, and more. The options include:

- `onStepFinish` (optional): Callback function called after each execution step. Receives step details as a JSON string.
- `resourceId` (`string`, optional): Identifier for the user or resource interacting with the agent. Must be provided if `threadId` is provided.
- `telemetry` (`TelemetrySettings`, optional): Settings for telemetry collection during generation. See the TelemetrySettings section below for details.
- `temperature` (`number`, optional): Controls randomness in the model's output. Higher values (e.g., 0.8) make the output more random, lower values (e.g., 0.2) make it more focused and deterministic.
- `threadId` (`string`, optional): Identifier for the conversation thread. Allows for maintaining context across multiple interactions. Must be provided if `resourceId` is provided.
- `toolChoice` (`'auto' | 'none' | 'required' | { type: 'tool'; toolName: string }`, optional, default `'auto'`): Controls how the agent uses tools during generation.
- `toolsets` (`ToolsetsInput`, optional): Additional toolsets to make available to the agent during generation.

#### MemoryConfig

Configuration options for memory management:

#### TelemetrySettings

Settings for telemetry collection during generation:

- `metadata` (`Record<string, AttributeValue>`, optional): Additional information to include in the telemetry data. AttributeValue can be string, number, boolean, array of these types, or null.
- `tracer` (`Tracer`, optional): A custom OpenTelemetry tracer instance to use for the telemetry data. See OpenTelemetry documentation for details.

## Returns

The return value of the `generate()` method depends on the options provided, specifically the `output` option. The returned properties include:

- `toolCalls` (optional): The tool calls made during the generation process. Present in both text and object modes.

#### ToolCall Structure

## Related Methods

For real-time streaming responses, see the [`stream()`](./stream.mdx) method documentation.

---
title: "Reference: getAgent() | Agent Config | Agents | Mastra Docs"
description: API Reference for getAgent.
---

# `getAgent()`

Source: https://mastra.ai/docs/reference/agents/getAgent

Retrieve an agent based on the provided configuration.

```ts showLineNumbers copy
// The concrete types of the `agent` and `apis` records are not specified here,
// so `Record<string, any>` is used as a placeholder.
async function getAgent({
  connectionId,
  agent,
  apis,
  logger,
}: {
  connectionId: string;
  agent: Record<string, any>;
  apis: Record<string, any>;
  logger: any;
}): Promise<(props: { prompt: string }) => Promise<{ message: string }>> {
  return async (props: { prompt: string }) => {
    return { message: "Hello, world!" };
  };
}
```

## API Signature

### Parameters

- `agent` (`Record`): The agent configuration object.
- `apis` (`Record`): A map of API names to their respective API objects.

### Returns

---
title: "Reference: Agent.stream() | Streaming | Agents | Mastra Docs"
description: Documentation for the `.stream()` method in Mastra agents, which enables real-time streaming of responses.
---

# `stream()`

Source: https://mastra.ai/docs/reference/agents/stream

The `stream()` method enables real-time streaming of responses from an agent. This method accepts `messages` and an optional `options` object as parameters, similar to `generate()`.

## Parameters

### `messages`

The `messages` parameter can be:

- A single string
- An array of strings
- An array of message objects with `role` and `content` properties

The message object structure:

```typescript
interface Message {
  role: 'system' | 'user' | 'assistant';
  content: string;
}
```

### `options` (Optional)

An optional object that can include configuration for output structure, memory management, tool usage, telemetry, and more. The options include:

- `onFinish` (optional): Callback function called when streaming is complete.
- `onStepFinish` (`(step: string) => void`, optional): Callback function called after each step during streaming.
- `output` (`Zod schema | JsonSchema7`, optional): Defines the expected structure of the output. Can be a JSON Schema object or a Zod schema.
- `resourceId` (`string`, optional): Identifier for the user or resource interacting with the agent. Must be provided if `threadId` is provided.
- `telemetry` (`TelemetrySettings`, optional): Settings for telemetry collection during streaming. See the TelemetrySettings section below for details.
- `temperature` (`number`, optional): Controls randomness in the model's output. Higher values (e.g., 0.8) make the output more random, lower values (e.g., 0.2) make it more focused and deterministic.
- `threadId` (`string`, optional): Identifier for the conversation thread. Allows for maintaining context across multiple interactions. Must be provided if `resourceId` is provided.
- `toolChoice` (`'auto' | 'none' | 'required' | { type: 'tool'; toolName: string }`, optional, default `'auto'`): Controls how the agent uses tools during streaming.
- `toolsets` (`ToolsetsInput`, optional): Additional toolsets to make available to the agent during this stream.

#### MemoryConfig

Configuration options for memory management:

#### TelemetrySettings

Settings for telemetry collection during streaming:

- `metadata` (`Record<string, AttributeValue>`, optional): Additional information to include in the telemetry data. AttributeValue can be string, number, boolean, array of these types, or null.
- `tracer` (`Tracer`, optional): A custom OpenTelemetry tracer instance to use for the telemetry data. See OpenTelemetry documentation for details.

## Returns

The return value of the `stream()` method depends on the options provided, specifically the `output` option. The returned properties include:

- `textStream` (optional): Stream of text chunks. Present when output is 'text' (no schema provided) or when using `experimental_output`.
- `objectStream` (`AsyncIterable`, optional): Stream of structured data. Present only when using the `output` option with a schema.
- `partialObjectStream` (`AsyncIterable`, optional): Stream of structured data. Present only when using the `experimental_output` option.
- `object` (`Promise`, optional): Promise that resolves to the final structured output. Present when using either the `output` or `experimental_output` options.

## Examples

### Basic Text Streaming

```typescript
const stream = await myAgent.stream([
  { role: "user", content: "Tell me a story."
} ]); for await (const chunk of stream.textStream) { process.stdout.write(chunk); } ``` ### Structured Output Streaming with Thread Context ```typescript const schema = { type: 'object', properties: { summary: { type: 'string' }, nextSteps: { type: 'array', items: { type: 'string' } } }, required: ['summary', 'nextSteps'] }; const response = await myAgent.stream( "What should we do next?", { output: schema, threadId: "project-123", onFinish: text => console.log("Finished:", text) } ); for await (const chunk of response.textStream) { console.log(chunk); } const result = await response.object; console.log("Final structured result:", result); ``` The key difference between Agent's `stream()` and LLM's `stream()` is that Agents maintain conversation context through `threadId`, can access tools, and integrate with the agent's memory system. --- title: "mastra build" description: "Build your Mastra project for production deployment" --- The `mastra build` command bundles your Mastra project into a production-ready Hono server. Hono is a lightweight web framework that provides type-safe routing and middleware support, making it ideal for deploying Mastra agents as HTTP endpoints. ## Usage Source: https://mastra.ai/docs/reference/cli/build ```bash mastra build [options] ``` ## Options - `--dir `: Directory containing your Mastra project (default: current directory) ## What It Does 1. Locates your Mastra entry file (either `src/mastra/index.ts` or `src/mastra/index.js`) 2. Creates a `.mastra` output directory 3. Bundles your code using Rollup with: - Tree shaking for optimal bundle size - Node.js environment targeting - Source map generation for debugging ## Example ```bash # Build from current directory mastra build # Build from specific directory mastra build --dir ./my-mastra-project ``` ## Output The command generates a production bundle in the `.mastra` directory, which includes: - A Hono-based HTTP server with your Mastra agents exposed as endpoints - Bundled JavaScript files optimized for production - Source maps for debugging - Required dependencies This output is suitable for: - Deploying to cloud servers (EC2, Digital Ocean) - Running in containerized environments - Using with container orchestration systems --- title: "`mastra deploy` Reference | Deployment | Mastra CLI" description: Documentation for the mastra deploy command, which deploys Mastra projects to platforms like Vercel and Cloudflare. --- # `mastra deploy` Reference Source: https://mastra.ai/docs/reference/cli/deploy ## `mastra deploy vercel` Deploy your Mastra project to Vercel. ## `mastra deploy cloudflare` Deploy your Mastra project to Cloudflare. ## `mastra deploy netlify` Deploy your Mastra project to Netlify. ### Flags - `-d, --dir `: Path to your mastra folder --- title: "`mastra dev` Reference | Local Development | Mastra CLI" description: Documentation for the mastra dev command, which starts a development server for agents, tools, and workflows. --- # `mastra dev` Reference Source: https://mastra.ai/docs/reference/cli/dev The `mastra dev` command starts a development server that exposes REST routes for your agents, tools, and workflows, ## Parameters ## Routes Starting the server with `mastra dev` exposes a set of REST routes by default: ### System Routes - **GET `/api`**: Get API status. ### Agent Routes Agents are expected to be exported from `src/mastra/agents`. - **GET `/api/agents`**: Lists the registered agents found in your Mastra folder. - **GET `/api/agents/:agentId`**: Get agent by ID. 
- **GET `/api/agents/:agentId/evals/ci`**: Get CI evals by agent ID. - **GET `/api/agents/:agentId/evals/live`**: Get live evals by agent ID. - **POST `/api/agents/:agentId/generate`**: Sends a text-based prompt to the specified agent, returning the agent's response. - **POST `/api/agents/:agentId/stream`**: Stream a response from an agent. - **POST `/api/agents/:agentId/instructions`**: Update an agent's instructions. - **POST `/api/agents/:agentId/instructions/enhance`**: Generate an improved system prompt from instructions. - **GET `/api/agents/:agentId/speakers`**: Get available speakers for an agent. - **POST `/api/agents/:agentId/speak`**: Convert text to speech using the agent's voice provider. - **POST `/api/agents/:agentId/listen`**: Convert speech to text using the agent's voice provider. - **POST `/api/agents/:agentId/tools/:toolId/execute`**: Execute a tool through an agent. ### Tool Routes Tools are expected to be exported from `src/mastra/tools` (or the configured tools directory). - **GET `/api/tools`**: Get all tools. - **GET `/api/tools/:toolId`**: Get tool by ID. - **POST `/api/tools/:toolId/execute`**: Invokes a specific tool by name, passing input data in the request body. ### Workflow Routes Workflows are expected to be exported from `src/mastra/workflows` (or the configured workflows directory). - **GET `/api/workflows`**: Get all workflows. - **GET `/api/workflows/:workflowId`**: Get workflow by ID. - **POST `/api/workflows/:workflowName/start`**: Starts the specified workflow. - **POST `/api/workflows/:workflowName/:instanceId/event`**: Sends an event or trigger signal to an existing workflow instance. - **GET `/api/workflows/:workflowName/:instanceId/status`**: Returns status info for a running workflow instance. - **POST `/api/workflows/:workflowId/resume`**: Resume a suspended workflow step. - **POST `/api/workflows/:workflowId/resumeAsync`**: Resume a suspended workflow step asynchronously. - **POST `/api/workflows/:workflowId/createRun`**: Create a new workflow run. - **POST `/api/workflows/:workflowId/startAsync`**: Execute/Start a workflow asynchronously. - **GET `/api/workflows/:workflowId/watch`**: Watch workflow transitions in real-time. ### Memory Routes - **GET `/api/memory/status`**: Get memory status. - **GET `/api/memory/threads`**: Get all threads. - **GET `/api/memory/threads/:threadId`**: Get thread by ID. - **GET `/api/memory/threads/:threadId/messages`**: Get messages for a thread. - **POST `/api/memory/threads`**: Create a new thread. - **PATCH `/api/memory/threads/:threadId`**: Update a thread. - **DELETE `/api/memory/threads/:threadId`**: Delete a thread. - **POST `/api/memory/save-messages`**: Save messages. ### Telemetry Routes - **GET `/api/telemetry`**: Get all traces. ### Log Routes - **GET `/api/logs`**: Get all logs. - **GET `/api/logs/transports`**: List of all log transports. - **GET `/api/logs/:runId`**: Get logs by run ID. ### Vector Routes - **POST `/api/vector/:vectorName/upsert`**: Upsert vectors into an index. - **POST `/api/vector/:vectorName/create-index`**: Create a new vector index. - **POST `/api/vector/:vectorName/query`**: Query vectors from an index. - **GET `/api/vector/:vectorName/indexes`**: List all indexes for a vector store. - **GET `/api/vector/:vectorName/indexes/:indexName`**: Get details about a specific index. - **DELETE `/api/vector/:vectorName/indexes/:indexName`**: Delete a specific index. 
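All of these routes accept and return JSON, so they can be exercised with any HTTP client. As a rough sketch, here is how the vector routes could be called with `fetch` against a locally running dev server; the store name `myVectorStore` and index name `my-index` are placeholders, and the request body fields are assumed to mirror the parameters of the client SDK's vector methods documented later in this reference:

```typescript
// Sketch: calling the vector routes of a locally running `mastra dev` server.
// "myVectorStore" and "my-index" are placeholder names; the body fields are
// assumed to follow the client SDK's upsert/query parameters.
const baseUrl = "http://localhost:4111";

// Upsert two vectors into an index
await fetch(`${baseUrl}/api/vector/myVectorStore/upsert`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    indexName: "my-index",
    vectors: [
      [0.1, 0.2, 0.3],
      [0.4, 0.5, 0.6],
    ],
    metadata: [{ label: "first" }, { label: "second" }],
  }),
});

// Query the index for the closest matches
const res = await fetch(`${baseUrl}/api/vector/myVectorStore/query`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    indexName: "my-index",
    queryVector: [0.1, 0.2, 0.3],
    topK: 5,
  }),
});
console.log(await res.json());
```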
### OpenAPI Specification - **GET `/openapi.json`**: Returns an auto-generated OpenAPI specification for your project's routes. - **GET `/swagger-ui`**: Access Swagger UI for API documentation. ## Additional Notes The port defaults to 4111. Make sure you have your environment variables set up in your `.env.development` or `.env` file for any providers you use (e.g., `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, etc.). ### Example request To test an agent after running `mastra dev`: ```bash curl -X POST http://localhost:4111/api/agents/myAgent/generate \ -H "Content-Type: application/json" \ -d '{ "messages": [ { "role": "user", "content": "Hello, how can you assist me today?" } ] }' ``` --- title: "`mastra init` reference | Project Creation | Mastra CLI" description: Documentation for the mastra init command, which creates a new Mastra project with interactive setup options. --- # `mastra init` Reference Source: https://mastra.ai/docs/reference/cli/init ## `mastra init` This creates a new Mastra project. You can run it in three different ways: 1. **Interactive Mode (Recommended)** Run without flags to use the interactive prompt, which will guide you through: - Choosing a directory for Mastra files - Selecting components to install (Agents, Tools, Workflows) - Choosing a default LLM provider (OpenAI, Anthropic, or Groq) - Deciding whether to include example code 2. **Quick Start with Defaults** ```bash mastra init --default ``` This sets up a project with: - Source directory: `src/` - All components: agents, tools, workflows - OpenAI as the default provider - No example code 3. **Custom Setup** ```bash mastra init --dir src/mastra --components agents,tools --llm openai --example ``` Options: - `-d, --dir`: Directory for Mastra files (defaults to src/mastra) - `-c, --components`: Comma-separated list of components (agents, tools, workflows) - `-l, --llm`: Default model provider (openai, anthropic, or groq) - `-k, --llm-api-key`: API key for the selected LLM provider (will be added to .env file) - `-e, --example`: Include example code - `-ne, --no-example`: Skip example code # Agents API Source: https://mastra.ai/docs/reference/client-js/agents The Agents API provides methods to interact with Mastra AI agents, including generating responses, streaming interactions, and managing agent tools. 
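The examples below assume a `client` instance has already been created. A minimal setup sketch (the `@mastra/client-js` import path is an assumption; the configuration options are the ones documented in the Error Handling section of this reference):

```typescript
// Minimal client setup sketch. The import path is an assumption; the options
// (baseUrl, retries) are the ones shown in the Error Handling section below.
import { MastraClient } from "@mastra/client-js";

const client = new MastraClient({
  baseUrl: "http://localhost:4111", // URL of your `mastra dev` or deployed server
  retries: 3,
});
```

All `client.*` calls in the following sections are made against an instance like this one.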
## Getting All Agents Retrieve a list of all available agents: ```typescript const agents = await client.getAgents(); ``` ## Working with a Specific Agent Get an instance of a specific agent: ```typescript const agent = client.getAgent("agent-id"); ``` ## Agent Methods ### Get Agent Details Retrieve detailed information about an agent: ```typescript const details = await agent.details(); ``` ### Generate Response Generate a response from the agent: ```typescript const response = await agent.generate({ messages: [ { role: "user", content: "Hello, how are you?", }, ], threadId: "thread-1", // Optional: Thread ID for conversation context resourceid: "resource-1", // Optional: Resource ID output: {}, // Optional: Output configuration }); ``` ### Stream Response Stream a response from the agent for real-time interactions: ```typescript const response = await agent.stream({ messages: [ { role: "user", content: "Tell me a story", }, ], }); // Read from response body const reader = response.body.getReader(); while (true) { const { done, value } = await reader.read(); if (done) break; console.log(new TextDecoder().decode(value)); } ``` ### Get Agent Tool Retrieve information about a specific tool available to the agent: ```typescript const tool = await agent.getTool("tool-id"); ``` ### Get Agent Evaluations Get evaluation results for the agent: ```typescript // Get CI evaluations const evals = await agent.evals(); // Get live evaluations const liveEvals = await agent.liveEvals(); ``` # Error Handling Source: https://mastra.ai/docs/reference/client-js/error-handling The Mastra Client SDK includes built-in retry mechanism and error handling capabilities. ## Error Handling All API methods can throw errors that you can catch and handle: ```typescript try { const agent = client.getAgent("agent-id"); const response = await agent.generate({ messages: [{ role: "user", content: "Hello" }], }); } catch (error) { console.error("An error occurred:", error.message); } ``` ## Retry Mechanism The client automatically retries failed requests with exponential backoff: ```typescript const client = new MastraClient({ baseUrl: "http://localhost:4111", retries: 3, // Number of retry attempts backoffMs: 300, // Initial backoff time maxBackoffMs: 5000, // Maximum backoff time }); ``` ### How Retries Work 1. First attempt fails → Wait 300ms 2. Second attempt fails → Wait 600ms 3. Third attempt fails → Wait 1200ms 4. Final attempt fails → Throw error # Logs API Source: https://mastra.ai/docs/reference/client-js/logs The Logs API provides methods to access and query system logs and debugging information in Mastra. ## Getting Logs Retrieve system logs with optional filtering: ```typescript const logs = await client.getLogs({ transportId: "transport-1", }); ``` ## Getting Logs for a Specific Run Retrieve logs for a specific execution run: ```typescript const runLogs = await client.getLogForRun({ runId: "run-1", transportId: "transport-1", }); ``` # Memory API Source: https://mastra.ai/docs/reference/client-js/memory The Memory API provides methods to manage conversation threads and message history in Mastra. 
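The operations below compose into a simple flow: create a thread for a user, save messages into it, and list that user's threads. A minimal sketch using the calls documented in the following sections (all identifiers are placeholders, and the assumption that the created thread exposes an `id` is noted in the code):

```typescript
// Sketch: create a thread, save a message to it, then list the user's threads.
// "support-agent" and "user-123" are placeholder identifiers.
const thread = await client.createMemoryThread({
  title: "Support conversation",
  metadata: { category: "support" },
  resourceid: "user-123",
  agentId: "support-agent",
});

await client.saveMessageToMemory({
  messages: [
    {
      role: "user",
      content: "My order hasn't arrived yet.",
      id: "msg-1",
      threadId: thread.id, // assumes the created thread object exposes its id
      createdAt: new Date(),
      type: "text",
    },
  ],
  agentId: "support-agent",
});

// Later: list this user's threads
const threads = await client.getMemoryThreads({
  resourceId: "user-123",
  agentId: "support-agent",
});
console.log(threads);
```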
## Memory Thread Operations ### Get All Threads Retrieve all memory threads for a specific resource: ```typescript const threads = await client.getMemoryThreads({ resourceId: "resource-1", agentId: "agent-1" }); ``` ### Create a New Thread Create a new memory thread: ```typescript const thread = await client.createMemoryThread({ title: "New Conversation", metadata: { category: "support" }, resourceid: "resource-1", agentId: "agent-1" }); ``` ### Working with a Specific Thread Get an instance of a specific memory thread: ```typescript const thread = client.getMemoryThread("thread-id", "agent-id"); ``` ## Thread Methods ### Get Thread Details Retrieve details about a specific thread: ```typescript const details = await thread.get(); ``` ### Update Thread Update thread properties: ```typescript const updated = await thread.update({ title: "Updated Title", metadata: { status: "resolved" }, resourceid: "resource-1", }); ``` ### Delete Thread Delete a thread and its messages: ```typescript await thread.delete(); ``` ## Message Operations ### Save Messages Save messages to memory: ```typescript const savedMessages = await client.saveMessageToMemory({ messages: [ { role: "user", content: "Hello!", id: "1", threadId: "thread-1", createdAt: new Date(), type: "text", }, ], agentId: "agent-1" }); ``` ### Get Memory Status Check the status of the memory system: ```typescript const status = await client.getMemoryStatus("agent-id"); ``` # Telemetry API Source: https://mastra.ai/docs/reference/client-js/telemetry The Telemetry API provides methods to retrieve and analyze traces from your Mastra application. This helps you monitor and debug your application's behavior and performance. ## Getting Traces Retrieve traces with optional filtering and pagination: ```typescript const telemetry = await client.getTelemetry({ name: "trace-name", // Optional: Filter by trace name scope: "scope-name", // Optional: Filter by scope page: 1, // Optional: Page number for pagination perPage: 10, // Optional: Number of items per page attribute: { // Optional: Filter by custom attributes key: "value", }, }); ``` # Tools API Source: https://mastra.ai/docs/reference/client-js/tools The Tools API provides methods to interact with and execute tools available in the Mastra platform. ## Getting All Tools Retrieve a list of all available tools: ```typescript const tools = await client.getTools(); ``` ## Working with a Specific Tool Get an instance of a specific tool: ```typescript const tool = client.getTool("tool-id"); ``` ## Tool Methods ### Get Tool Details Retrieve detailed information about a tool: ```typescript const details = await tool.details(); ``` ### Execute Tool Execute a tool with specific arguments: ```typescript const result = await tool.execute({ args: { param1: "value1", param2: "value2", }, threadId: "thread-1", // Optional: Thread context resourceid: "resource-1", // Optional: Resource identifier }); ``` # Vectors API Source: https://mastra.ai/docs/reference/client-js/vectors The Vectors API provides methods to work with vector embeddings for semantic search and similarity matching in Mastra. 
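In practice, the raw number arrays shown below are produced by an embedding model. A sketch of embedding a document with the AI SDK and storing it through the client (the `embed` helper and the embedding model name are assumptions not covered by this reference; the store and index names are placeholders):

```typescript
// Sketch: embed a document with the AI SDK, then upsert it through the Vectors API.
// The `embed` helper and model name are assumptions outside this reference;
// "myVectorStore" and "docs" are placeholder names.
import { embed } from "ai";
import { openai } from "@ai-sdk/openai";

const { embedding } = await embed({
  model: openai.embedding("text-embedding-3-small"),
  value: "Mastra agents can call tools and workflows.",
});

const vector = client.getVector("myVectorStore");
await vector.upsert({
  indexName: "docs", // the index dimension must match the embedding model's output size
  vectors: [embedding],
  metadata: [{ source: "docs-overview" }],
});
```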
## Working with Vectors

Get an instance of a vector store:

```typescript
const vector = client.getVector("vector-name");
```

## Vector Methods

### Get Vector Index Details

Retrieve information about a specific vector index:

```typescript
const details = await vector.details("index-name");
```

### Create Vector Index

Create a new vector index:

```typescript
const result = await vector.createIndex({
  indexName: "new-index",
  dimension: 128,
  metric: "cosine", // 'cosine', 'euclidean', or 'dotproduct'
});
```

### Upsert Vectors

Add or update vectors in an index:

```typescript
const ids = await vector.upsert({
  indexName: "my-index",
  vectors: [
    [0.1, 0.2, 0.3], // First vector
    [0.4, 0.5, 0.6], // Second vector
  ],
  metadata: [{ label: "first" }, { label: "second" }],
  ids: ["id1", "id2"], // Optional: Custom IDs
});
```

### Query Vectors

Search for similar vectors:

```typescript
const results = await vector.query({
  indexName: "my-index",
  queryVector: [0.1, 0.2, 0.3],
  topK: 10,
  filter: { label: "first" }, // Optional: Metadata filter
  includeVector: true, // Optional: Include vectors in results
});
```

### Get All Indexes

List all available indexes:

```typescript
const indexes = await vector.getIndexes();
```

### Delete Index

Delete a vector index:

```typescript
const result = await vector.delete("index-name");
```

# Workflows API

Source: https://mastra.ai/docs/reference/client-js/workflows

The Workflows API provides methods to interact with and execute automated workflows in Mastra.

## Getting All Workflows

Retrieve a list of all available workflows:

```typescript
const workflows = await client.getWorkflows();
```

## Working with a Specific Workflow

Get an instance of a specific workflow:

```typescript
const workflow = client.getWorkflow("workflow-id");
```

## Workflow Methods

### Get Workflow Details

Retrieve detailed information about a workflow:

```typescript
const details = await workflow.details();
```

### Start Workflow Run Asynchronously

Start a workflow run with `triggerData` and await the full run result:

```typescript
const { runId } = workflow.createRun();

const result = await workflow.startAsync({
  runId,
  triggerData: {
    param1: "value1",
    param2: "value2",
  },
});
```

### Resume Workflow Run Asynchronously

Resume a suspended workflow step and await the full run result:

```typescript
const { runId } = workflow.createRun({ runId: prevRunId });

const result = await workflow.resumeAsync({
  runId,
  stepId: "step-id",
  contextData: { key: "value" },
});
```

### Watch Workflow

Watch workflow transitions:

```typescript
try {
  // Get workflow instance
  const workflow = client.getWorkflow("workflow-id");

  // Create a workflow run
  const { runId } = workflow.createRun();

  // Watch the workflow run
  workflow.watch({ runId }, (record) => {
    // Every new record is the latest transition state of the workflow run
    console.log({
      activePaths: record.activePaths,
      context: record.context,
      timestamp: record.timestamp,
      runId: record.runId,
    });
  });

  // Start the workflow run
  workflow.start({
    runId,
    triggerData: {
      city: "New York",
    },
  });
} catch (e) {
  console.error(e);
}
```

### Resume Workflow

Resume a workflow run and watch its step transitions:

```typescript
try {
  // To resume a workflow run when a step is suspended
  const { runId } = workflow.createRun({ runId: prevRunId });

  // Watch the run
  workflow.watch({ runId }, (record) => {
    // Every new record is the latest transition state of the workflow run
    console.log({
      activePaths: record.activePaths,
      context: record.context,
      timestamp: record.timestamp,
      runId: record.runId,
    });
  });

  // Resume the run
  workflow.resume({
    runId,
    stepId: "step-id",
    contextData:
{ key: "value" }, }); }catch(e){ console.error(e); } ``` ### Workflow run result A workflow run result yields the following: | Field | Type | Description | |-------|------|-------------| | `activePaths` | `Array<{ stepId: string; stepPath: string[]; status: 'completed' \| 'suspended' \| 'pending' }>` | Currently active paths in the workflow with their execution status | | `context` | `{ steps: Record }` | Current workflow context including step statuses and additional step data | | `timestamp` | `number` | Unix timestamp of when this transition occurred | | `runId` | `string` | Unique identifier for this workflow run instance | | `suspendedSteps` | `Record` | Map of currently suspended steps and their suspension data | --- title: "Mastra Core" description: Documentation for the Mastra Class, the core entry point for managing agents, workflows, and server endpoints. --- # The Mastra Class Source: https://mastra.ai/docs/reference/core/mastra-class The Mastra class is the core entry point for your application. It manages agents, workflows, and server endpoints. ## Constructor Options ", description: "Custom tools to register. Structured as a key-value pair, with keys being the tool name and values being the tool function.", isOptional: true, defaultValue: "{}", }, { name: "storage", type: "MastraStorage", description: "Storage engine instance for persisting data", isOptional: true, }, { name: "vectors", type: "Record", description: "Vector store instance, used for semantic search and vector-based tools (eg Pinecone, PgVector or Qdrant)", isOptional: true, }, { name: "logger", type: "Logger", description: "Logger instance created with createLogger()", isOptional: true, defaultValue: "Console logger with INFO level", }, { name: "workflows", type: "Record", description: "Workflows to register. Structured as a key-value pair, with keys being the workflow name and values being the workflow instance.", isOptional: true, defaultValue: "{}", }, { name: "serverMiddleware", type: "Array<{ handler: (c: any, next: () => Promise) => Promise; path?: string; }>", description: "Server middleware functions to be applied to API routes. Each middleware can specify a path pattern (defaults to '/api/*').", isOptional: true, defaultValue: "[]", }, ]} /> ## Initialization The Mastra class is typically initialized in your `src/mastra/index.ts` file: ```typescript copy filename=src/mastra/index.ts import { Mastra } from "@mastra/core"; import { createLogger } from "@mastra/core/logger"; // Basic initialization export const mastra = new Mastra({}); // Full initialization with all options export const mastra = new Mastra({ agents: {}, workflows: [], integrations: [], logger: createLogger({ name: "My Project", level: "info", }), storage: {}, tools: {}, vectors: {}, }); ``` You can think of the `Mastra` class as a top-level registry. When you register tools with Mastra, your registered agents and workflows can use them. When you register integrations with Mastra, agents, workflows, and tools can use them. ## Methods ", description: "Returns all registered agents as a key-value object.", example: 'const agents = mastra.getAgents();', }, { name: "getWorkflow(id, { serialized })", type: "Workflow", description: "Returns a workflow instance by id. The serialized option (default: false) returns a simplified representation with just the name.", example: 'const workflow = mastra.getWorkflow("myWorkflow");', }, { name: "getWorkflows({ serialized })", type: "Record", description: "Returns all registered workflows. 
The serialized option (default: false) returns simplified representations.", example: 'const workflows = mastra.getWorkflows();', }, { name: "getVector(name)", type: "MastraVector", description: "Returns a vector store instance by name. Throws if not found.", example: 'const vectorStore = mastra.getVector("myVectorStore");', }, { name: "getVectors()", type: "Record", description: "Returns all registered vector stores as a key-value object.", example: 'const vectorStores = mastra.getVectors();', }, { name: "getDeployer()", type: "MastraDeployer | undefined", description: "Returns the configured deployer instance, if any.", example: 'const deployer = mastra.getDeployer();', }, { name: "getStorage()", type: "MastraStorage | undefined", description: "Returns the configured storage instance.", example: 'const storage = mastra.getStorage();', }, { name: "getMemory()", type: "MastraMemory | undefined", description: "Returns the configured memory instance. Note: This is deprecated, memory should be added to agents directly.", example: 'const memory = mastra.getMemory();', }, { name: "getServerMiddleware()", type: "Array<{ handler: Function; path: string; }>", description: "Returns the configured server middleware functions.", example: 'const middleware = mastra.getServerMiddleware();', }, { name: "setStorage(storage)", type: "void", description: "Sets the storage instance for the Mastra instance.", example: 'mastra.setStorage(new DefaultStorage());', }, { name: "setLogger({ logger })", type: "void", description: "Sets the logger for all components (agents, workflows, etc.).", example: 'mastra.setLogger({ logger: createLogger({ name: "MyLogger" }) });', }, { name: "setTelemetry(telemetry)", type: "void", description: "Sets the telemetry configuration for all components.", example: 'mastra.setTelemetry({ export: { type: "console" } });', }, { name: "getLogger()", type: "Logger", description: "Gets the configured logger instance.", example: 'const logger = mastra.getLogger();', }, { name: "getTelemetry()", type: "Telemetry | undefined", description: "Gets the configured telemetry instance.", example: 'const telemetry = mastra.getTelemetry();', }, { name: "getLogsByRunId({ runId, transportId })", type: "Promise", description: "Retrieves logs for a specific run ID and transport ID.", example: 'const logs = await mastra.getLogsByRunId({ runId: "123", transportId: "456" });', }, { name: "getLogs(transportId)", type: "Promise", description: "Retrieves all logs for a specific transport ID.", example: 'const logs = await mastra.getLogs("transportId");', }, ]} /> ## Error Handling The Mastra class methods throw typed errors that can be caught: ```typescript copy try { const tool = mastra.getTool("nonexistentTool"); } catch (error) { if (error instanceof Error) { console.log(error.message); // "Tool with name nonexistentTool not found" } } ``` --- title: "Cloudflare Deployer" description: "Documentation for the CloudflareDeployer class, which deploys Mastra applications to Cloudflare Workers." --- # CloudflareDeployer Source: https://mastra.ai/docs/reference/deployer/cloudflare The CloudflareDeployer deploys Mastra applications to Cloudflare Workers, handling configuration, environment variables, and route management. It extends the abstract Deployer class to provide Cloudflare-specific deployment functionality. 
## Usage Example ```typescript import { Mastra } from '@mastra/core'; import { CloudflareDeployer } from '@mastra/deployer-cloudflare'; const mastra = new Mastra({ deployer: new CloudflareDeployer({ scope: 'your-account-id', projectName: 'your-project-name', routes: [ { pattern: 'example.com/*', zone_name: 'example.com', custom_domain: true, }, ], workerNamespace: 'your-namespace', auth: { apiToken: 'your-api-token', apiEmail: 'your-email', }, }), // ... other Mastra configuration options }); ``` ## Parameters ### Constructor Parameters ", description: "Environment variables to be included in the worker configuration.", isOptional: true, }, { name: "auth", type: "object", description: "Cloudflare authentication details.", isOptional: false, }, ]} /> ### auth Object ### CFRoute Object ### Wrangler Configuration The CloudflareDeployer automatically generates a `wrangler.json` configuration file with the following settings: ```json { "name": "your-project-name", "main": "./output/index.mjs", "compatibility_date": "2024-12-02", "compatibility_flags": ["nodejs_compat"], "observability": { "logs": { "enabled": true } }, "vars": { // Environment variables from .env files and configuration }, "routes": [ // Route configurations if specified ] } ``` ### Route Configuration Routes can be configured to direct traffic to your worker based on URL patterns and domains: ```typescript const routes = [ { pattern: 'api.example.com/*', zone_name: 'example.com', custom_domain: true, }, { pattern: 'example.com/api/*', zone_name: 'example.com', }, ]; ``` ### Environment Variables The CloudflareDeployer handles environment variables from multiple sources: 1. **Environment Files**: Variables from `.env.production` and `.env` files. 2. **Configuration**: Variables passed through the `env` parameter. --- title: "Mastra Deployer" description: Documentation for the Deployer abstract class, which handles packaging and deployment of Mastra applications. --- # Deployer Source: https://mastra.ai/docs/reference/deployer/deployer The Deployer handles the deployment of Mastra applications by packaging code, managing environment files, and serving applications using the Hono framework. Concrete implementations must define the deploy method for specific deployment targets. ## Usage Example ```typescript import { Deployer } from "@mastra/deployer"; // Create a custom deployer by extending the abstract Deployer class class CustomDeployer extends Deployer { constructor() { super({ name: 'custom-deployer' }); } // Implement the abstract deploy method async deploy(outputDirectory: string): Promise { // Prepare the output directory await this.prepare(outputDirectory); // Bundle the application await this._bundle('server.ts', 'mastra.ts', outputDirectory); // Custom deployment logic // ... } } ``` ## Parameters ### Constructor Parameters ### deploy Parameters ## Methods Promise", description: "Returns a list of environment files to be used during deployment. By default, it looks for '.env.production' and '.env' files.", }, { name: "deploy", type: "(outputDirectory: string) => Promise", description: "Abstract method that must be implemented by subclasses. 
Handles the deployment process to the specified output directory.", }, ]} /> ## Inherited Methods from Bundler The Deployer class inherits the following key methods from the Bundler class: Promise", description: "Prepares the output directory by cleaning it and creating necessary subdirectories.", }, { name: "writeInstrumentationFile", type: "(outputDirectory: string) => Promise", description: "Writes an instrumentation file to the output directory for telemetry purposes.", }, { name: "writePackageJson", type: "(outputDirectory: string, dependencies: Map) => Promise", description: "Generates a package.json file in the output directory with the specified dependencies.", }, { name: "_bundle", type: "(serverFile: string, mastraEntryFile: string, outputDirectory: string, bundleLocation?: string) => Promise", description: "Bundles the application using the specified server and Mastra entry files.", }, ]} /> ## Core Concepts ### Deployment Lifecycle The Deployer abstract class implements a structured deployment lifecycle: 1. **Initialization**: The deployer is initialized with a name and creates a Deps instance for dependency management. 2. **Environment Setup**: The `getEnvFiles` method identifies environment files (.env.production, .env) to be used during deployment. 3. **Preparation**: The `prepare` method (inherited from Bundler) cleans the output directory and creates necessary subdirectories. 4. **Bundling**: The `_bundle` method (inherited from Bundler) packages the application code and its dependencies. 5. **Deployment**: The abstract `deploy` method is implemented by subclasses to handle the actual deployment process. ### Environment File Management The Deployer class includes built-in support for environment file management through the `getEnvFiles` method. This method: - Looks for environment files in a predefined order (.env.production, .env) - Uses the FileService to find the first existing file - Returns an array of found environment files - Returns an empty array if no environment files are found ```typescript getEnvFiles(): Promise { const possibleFiles = ['.env.production', '.env.local', '.env']; try { const fileService = new FileService(); const envFile = fileService.getFirstExistingFile(possibleFiles); return Promise.resolve([envFile]); } catch {} return Promise.resolve([]); } ``` ### Bundling and Deployment Relationship The Deployer class extends the Bundler class, establishing a clear relationship between bundling and deployment: 1. **Bundling as a Prerequisite**: Bundling is a prerequisite step for deployment, where the application code is packaged into a deployable format. 2. **Shared Infrastructure**: Both bundling and deployment share common infrastructure like dependency management and file system operations. 3. **Specialized Deployment Logic**: While bundling focuses on code packaging, deployment adds environment-specific logic for deploying the bundled code. 4. **Extensibility**: The abstract `deploy` method allows for creating specialized deployers for different target environments. --- title: "Netlify Deployer" description: "Documentation for the NetlifyDeployer class, which deploys Mastra applications to Netlify Functions." --- # NetlifyDeployer Source: https://mastra.ai/docs/reference/deployer/netlify The NetlifyDeployer deploys Mastra applications to Netlify Functions, handling site creation, configuration, and deployment processes. It extends the abstract Deployer class to provide Netlify-specific deployment functionality. 
## Usage Example ```typescript import { Mastra } from '@mastra/core'; import { NetlifyDeployer } from '@mastra/deployer-netlify'; const mastra = new Mastra({ deployer: new NetlifyDeployer({ scope: 'your-team-slug', projectName: 'your-project-name', token: 'your-netlify-token' }), // ... other Mastra configuration options }); ``` ## Parameters ### Constructor Parameters ### Netlify Configuration The NetlifyDeployer automatically generates a `netlify.toml` configuration file with the following settings: ```toml [functions] node_bundler = "esbuild" directory = "netlify/functions" [[redirects]] force = true from = "/*" status = 200 to = "/.netlify/functions/api/:splat" ``` ### Environment Variables The NetlifyDeployer handles environment variables from multiple sources: 1. **Environment Files**: Variables from `.env.production` and `.env` files. 2. **Configuration**: Variables passed through the Mastra configuration. 3. **Netlify Dashboard**: Variables can also be managed through Netlify's web interface. ### Project Structure The deployer creates the following structure in your output directory: ``` output-directory/ ├── netlify/ │ └── functions/ │ └── api/ │ └── index.mjs # Application entry point with Hono server integration └── netlify.toml # Deployment configuration ``` --- title: "Vercel Deployer" description: "Documentation for the VercelDeployer class, which deploys Mastra applications to Vercel." --- # VercelDeployer Source: https://mastra.ai/docs/reference/deployer/vercel The VercelDeployer deploys Mastra applications to Vercel, handling configuration, environment variable synchronization, and deployment processes. It extends the abstract Deployer class to provide Vercel-specific deployment functionality. ## Usage Example ```typescript import { Mastra } from '@mastra/core'; import { VercelDeployer } from '@mastra/deployer-vercel'; const mastra = new Mastra({ deployer: new VercelDeployer({ teamId: 'your-team-id', projectName: 'your-project-name', token: 'your-vercel-token' }), // ... other Mastra configuration options }); ``` ## Parameters ### Constructor Parameters ### Vercel Configuration The VercelDeployer automatically generates a `vercel.json` configuration file with the following settings: ```json { "version": 2, "installCommand": "npm install --omit=dev", "builds": [ { "src": "index.mjs", "use": "@vercel/node", "config": { "includeFiles": ["**"] } } ], "routes": [ { "src": "/(.*)", "dest": "index.mjs" } ] } ``` ### Environment Variables The VercelDeployer handles environment variables from multiple sources: 1. **Environment Files**: Variables from `.env.production` and `.env` files. 2. **Configuration**: Variables passed through the Mastra configuration. 3. **Vercel Dashboard**: Variables can also be managed through Vercel's web interface. The deployer automatically synchronizes environment variables between your local development environment and Vercel's environment variable system, ensuring consistency across all deployment environments (production, preview, and development). ### Project Structure The deployer creates the following structure in your output directory: ``` output-directory/ ├── vercel.json # Deployment configuration └── index.mjs # Application entry point with Hono server integration ``` --- title: "Reference: Answer Relevancy | Metrics | Evals | Mastra Docs" description: Documentation for the Answer Relevancy Metric in Mastra, which evaluates how well LLM outputs address the input query. 
--- # AnswerRelevancyMetric Source: https://mastra.ai/docs/reference/evals/answer-relevancy The `AnswerRelevancyMetric` class evaluates how well an LLM's output answers or addresses the input query. It uses a judge-based system to determine relevancy and provides detailed scoring and reasoning. ## Basic Usage ```typescript import { openai } from "@ai-sdk/openai"; import { AnswerRelevancyMetric } from "@mastra/evals/llm"; // Configure the model for evaluation const model = openai("gpt-4o-mini"); const metric = new AnswerRelevancyMetric(model, { uncertaintyWeight: 0.3, scale: 1, }); const result = await metric.measure( "What is the capital of France?", "Paris is the capital of France.", ); console.log(result.score); // Score from 0-1 console.log(result.info.reason); // Explanation of the score ``` ## Constructor Parameters ### AnswerRelevancyMetricOptions ## measure() Parameters ## Returns ## Scoring Details The metric evaluates relevancy through query-answer alignment, considering completeness, accuracy, and detail level. ### Scoring Process 1. Statement Analysis: - Breaks output into meaningful statements while preserving context - Evaluates each statement against query requirements 2. Evaluates relevance of each statement: - "yes": Full weight for direct matches - "unsure": Partial weight (default: 0.3) for approximate matches - "no": Zero weight for irrelevant content Final score: `((direct + uncertainty * partial) / total_statements) * scale` ### Score interpretation (0 to scale, default 0-1) - 1.0: Perfect relevance - complete and accurate - 0.7-0.9: High relevance - minor gaps or imprecisions - 0.4-0.6: Moderate relevance - significant gaps - 0.1-0.3: Low relevance - major issues - 0.0: No relevance - incorrect or off-topic ## Example with Custom Configuration ```typescript import { openai } from "@ai-sdk/openai"; import { AnswerRelevancyMetric } from "@mastra/evals/llm"; // Configure the model for evaluation const model = openai("gpt-4o-mini"); const metric = new AnswerRelevancyMetric( model, { uncertaintyWeight: 0.5, // Higher weight for uncertain verdicts scale: 5, // Use 0-5 scale instead of 0-1 }, ); const result = await metric.measure( "What are the benefits of exercise?", "Regular exercise improves cardiovascular health, builds strength, and boosts mental wellbeing.", ); // Example output: // { // score: 4.5, // info: { // reason: "The score is 4.5 out of 5 because the response directly addresses the query // with specific, accurate benefits of exercise. It covers multiple aspects // (cardiovascular, muscular, and mental health) in a clear and concise manner. // The answer is highly relevant and provides appropriate detail without // including unnecessary information." // } // } ``` ## Related - [Prompt Alignment Metric](./prompt-alignment) - [Context Precision Metric](./context-precision) - [Faithfulness Metric](./faithfulness) --- title: "Reference: Bias | Output Metrics | Evals | Mastra Docs" description: Documentation for the Bias Metric in Mastra, which evaluates LLM outputs for various forms of bias, including gender, political, racial/ethnic, or geographical bias. --- # BiasMetric Source: https://mastra.ai/docs/reference/evals/bias The `BiasMetric` class evaluates whether an LLM's output contains various forms of bias, including gender, political, racial/ethnic, or geographical bias. This metric is particularly useful for detecting unintended biases that may emerge after fine-tuning models or applying optimizations. 
## Basic Usage ```typescript import { openai } from "@ai-sdk/openai"; import { BiasMetric } from "@mastra/evals/llm"; // Configure the model for evaluation const model = openai("gpt-4o-mini"); const metric = new BiasMetric(model, { scale: 1 }); const result = await metric.measure( "Describe leadership qualities in the workplace", "Strong leaders are typically men who can make tough decisions..." ); console.log(result.score); // Score from 0-1, higher means more biased ``` ## Constructor Parameters ### BiasMetricOptions ## measure() Parameters ## Returns ## Bias Categories The metric evaluates several types of bias: 1. **Gender Bias**: Discrimination or stereotypes based on gender 2. **Political Bias**: Prejudice against political ideologies or beliefs 3. **Racial/Ethnic Bias**: Discrimination based on race, ethnicity, or national origin 4. **Geographical Bias**: Prejudice based on location or regional stereotypes ## Scoring Details The metric evaluates bias through opinion analysis based on: - Opinion identification and extraction - Presence of discriminatory language - Use of stereotypes or generalizations - Balance in perspective presentation - Loaded or prejudicial terminology ### Scoring Process 1. Extracts opinions from text: - Identifies subjective statements - Excludes factual claims - Includes cited opinions 2. Evaluates each opinion: - Checks for discriminatory language - Assesses stereotypes and generalizations - Analyzes perspective balance Final score: `(biased_opinions / total_opinions) * scale` ### Score interpretation (0 to scale, default 0-1) - 1.0: Complete bias - all opinions contain bias - 0.7-0.9: Significant bias - majority of opinions show bias - 0.4-0.6: Moderate bias - mix of biased and neutral opinions - 0.1-0.3: Minimal bias - most opinions show balanced perspective - 0.0: No detectable bias - opinions are balanced and neutral ## Example with Different Types of Bias ```typescript import { openai } from "@ai-sdk/openai"; import { BiasMetric } from "@mastra/evals/llm"; // Configure the model for evaluation const model = openai("gpt-4o-mini"); const metric = new BiasMetric(model); // Testing for different types of bias const results = await Promise.all([ metric.measure( "Describe voting patterns", "These radical right-wing voters consistently vote against their interests..." ), metric.measure( "Describe workplace dynamics", "Modern offices have diverse teams working together based on merit..." ) ]); // Example outputs: // Political bias example: { score: 1.0 } // Unbiased example: { score: 0.0 } ``` ## Related - [Toxicity Metric](./toxicity) - [Faithfulness Metric](./faithfulness) - [Hallucination Metric](./hallucination) - [Context Relevancy Metric](./context-relevancy) --- title: "Reference: Completeness | Metrics | Evals | Mastra Docs" description: Documentation for the Completeness Metric in Mastra, which evaluates how thoroughly LLM outputs cover key elements present in the input. --- # CompletenessMetric Source: https://mastra.ai/docs/reference/evals/completeness The `CompletenessMetric` class evaluates how thoroughly an LLM's output covers the key elements present in the input. It analyzes nouns, verbs, topics, and terms to determine coverage and provides a detailed completeness score. 
## Basic Usage ```typescript import { CompletenessMetric } from "@mastra/evals/nlp"; const metric = new CompletenessMetric(); const result = await metric.measure( "Explain how photosynthesis works in plants using sunlight, water, and carbon dioxide.", "Plants use sunlight to convert water and carbon dioxide into glucose through photosynthesis." ); console.log(result.score); // Coverage score from 0-1 console.log(result.info); // Object containing detailed metrics about element coverage ``` ## measure() Parameters ## Returns ## Element Extraction Details The metric extracts and analyzes several types of elements: - Nouns: Key objects, concepts, and entities - Verbs: Actions and states (converted to infinitive form) - Topics: Main subjects and themes - Terms: Individual significant words The extraction process includes: - Normalization of text (removing diacritics, converting to lowercase) - Splitting camelCase words - Handling of word boundaries - Special handling of short words (3 characters or less) - Deduplication of elements ## Scoring Details The metric evaluates completeness through linguistic element coverage analysis. ### Scoring Process 1. Extracts key elements: - Nouns and named entities - Action verbs - Topic-specific terms - Normalized word forms 2. Calculates coverage of input elements: - Exact matches for short terms (≤3 chars) - Substantial overlap (>60%) for longer terms Final score: `(covered_elements / total_input_elements) * scale` ### Score interpretation (0 to scale, default 0-1) - 1.0: Complete coverage - contains all input elements - 0.7-0.9: High coverage - includes most key elements - 0.4-0.6: Partial coverage - contains some key elements - 0.1-0.3: Low coverage - missing most key elements - 0.0: No coverage - output lacks all input elements ## Example with Analysis ```typescript import { CompletenessMetric } from "@mastra/evals/nlp"; const metric = new CompletenessMetric(); const result = await metric.measure( "The quick brown fox jumps over the lazy dog", "A brown fox jumped over a dog" ); // Example output: // { // score: 0.75, // info: { // inputElements: ["quick", "brown", "fox", "jump", "lazy", "dog"], // outputElements: ["brown", "fox", "jump", "dog"], // missingElements: ["quick", "lazy"], // elementCounts: { input: 6, output: 4 } // } // } ``` ## Related - [Answer Relevancy Metric](./answer-relevancy) - [Content Similarity Metric](./content-similarity) - [Textual Difference Metric](./textual-difference) - [Keyword Coverage Metric](./keyword-coverage) --- title: "Reference: Content Similarity | Evals | Mastra Docs" description: Documentation for the Content Similarity Metric in Mastra, which measures textual similarity between strings and provides a matching score. --- # ContentSimilarityMetric Source: https://mastra.ai/docs/reference/evals/content-similarity The `ContentSimilarityMetric` class measures the textual similarity between two strings, providing a score that indicates how closely they match. It supports configurable options for case sensitivity and whitespace handling. 
## Basic Usage ```typescript import { ContentSimilarityMetric } from "@mastra/evals/nlp"; const metric = new ContentSimilarityMetric({ ignoreCase: true, ignoreWhitespace: true }); const result = await metric.measure( "Hello, world!", "hello world" ); console.log(result.score); // Similarity score from 0-1 console.log(result.info); // Detailed similarity metrics ``` ## Constructor Parameters ### ContentSimilarityOptions ## measure() Parameters ## Returns ## Scoring Details The metric evaluates textual similarity through character-level matching and configurable text normalization. ### Scoring Process 1. Normalizes text: - Case normalization (if ignoreCase: true) - Whitespace normalization (if ignoreWhitespace: true) 2. Compares processed strings using string-similarity algorithm: - Analyzes character sequences - Aligns word boundaries - Considers relative positions - Accounts for length differences Final score: `similarity_value * scale` ### Score interpretation (0 to scale, default 0-1) - 1.0: Perfect match - identical texts - 0.7-0.9: High similarity - mostly matching content - 0.4-0.6: Moderate similarity - partial matches - 0.1-0.3: Low similarity - few matching patterns - 0.0: No similarity - completely different texts ## Example with Different Options ```typescript import { ContentSimilarityMetric } from "@mastra/evals/nlp"; // Case-sensitive comparison const caseSensitiveMetric = new ContentSimilarityMetric({ ignoreCase: false, ignoreWhitespace: true }); const result1 = await caseSensitiveMetric.measure( "Hello World", "hello world" ); // Lower score due to case difference // Example output: // { // score: 0.75, // info: { similarity: 0.75 } // } // Strict whitespace comparison const strictWhitespaceMetric = new ContentSimilarityMetric({ ignoreCase: true, ignoreWhitespace: false }); const result2 = await strictWhitespaceMetric.measure( "Hello World", "Hello World" ); // Lower score due to whitespace difference // Example output: // { // score: 0.85, // info: { similarity: 0.85 } // } ``` ## Related - [Completeness Metric](./completeness) - [Textual Difference Metric](./textual-difference) - [Answer Relevancy Metric](./answer-relevancy) - [Keyword Coverage Metric](./keyword-coverage) --- title: "Reference: Context Position | Metrics | Evals | Mastra Docs" description: Documentation for the Context Position Metric in Mastra, which evaluates the ordering of context nodes based on their relevance to the query and output. --- # ContextPositionMetric Source: https://mastra.ai/docs/reference/evals/context-position The `ContextPositionMetric` class evaluates how well context nodes are ordered based on their relevance to the query and output. It uses position-weighted scoring to emphasize the importance of having the most relevant context pieces appear earlier in the sequence. 
## Basic Usage ```typescript import { openai } from "@ai-sdk/openai"; import { ContextPositionMetric } from "@mastra/evals/llm"; // Configure the model for evaluation const model = openai("gpt-4o-mini"); const metric = new ContextPositionMetric(model, { context: [ "Photosynthesis is a biological process used by plants to create energy from sunlight.", "The process of photosynthesis produces oxygen as a byproduct.", "Plants need water and nutrients from the soil to grow.", ], }); const result = await metric.measure( "What is photosynthesis?", "Photosynthesis is the process by which plants convert sunlight into energy.", ); console.log(result.score); // Position score from 0-1 console.log(result.info.reason); // Explanation of the score ``` ## Constructor Parameters ### ContextPositionMetricOptions ## measure() Parameters ## Returns ## Scoring Details The metric evaluates context positioning through binary relevance assessment and position-based weighting. ### Scoring Process 1. Evaluates context relevance: - Assigns binary verdict (yes/no) to each piece - Records position in sequence - Documents relevance reasoning 2. Applies position weights: - Earlier positions weighted more heavily (weight = 1/(position + 1)) - Sums weights of relevant pieces - Normalizes by maximum possible score Final score: `(weighted_sum / max_possible_sum) * scale` ### Score interpretation (0 to scale, default 0-1) - 1.0: Optimal - most relevant context first - 0.7-0.9: Good - relevant context mostly early - 0.4-0.6: Mixed - relevant context scattered - 0.1-0.3: Suboptimal - relevant context mostly later - 0.0: Poor ordering - relevant context at end or missing ## Example with Analysis ```typescript import { openai } from "@ai-sdk/openai"; import { ContextPositionMetric } from "@mastra/evals/llm"; // Configure the model for evaluation const model = openai("gpt-4o-mini"); const metric = new ContextPositionMetric(model, { context: [ "A balanced diet is important for health.", "Exercise strengthens the heart and improves blood circulation.", "Regular physical activity reduces stress and anxiety.", "Exercise equipment can be expensive.", ], }); const result = await metric.measure( "What are the benefits of exercise?", "Regular exercise improves cardiovascular health and mental wellbeing.", ); // Example output: // { // score: 0.5, // info: { // reason: "The score is 0.5 because while the second and third contexts are highly // relevant to the benefits of exercise, they are not optimally positioned at // the beginning of the sequence. The first and last contexts are not relevant // to the query, which impacts the position-weighted scoring." // } // } ``` ## Related - [Context Precision Metric](./context-precision) - [Answer Relevancy Metric](./answer-relevancy) - [Completeness Metric](./completeness) + [Context Relevancy Metric](./context-relevancy) --- title: "Reference: Context Precision | Metrics | Evals | Mastra Docs" description: Documentation for the Context Precision Metric in Mastra, which evaluates the relevance and precision of retrieved context nodes for generating expected outputs. --- # ContextPrecisionMetric Source: https://mastra.ai/docs/reference/evals/context-precision The `ContextPrecisionMetric` class evaluates how relevant and precise the retrieved context nodes are for generating the expected output. It uses a judge-based system to analyze each context piece's contribution and provides weighted scoring based on position. 
## Basic Usage ```typescript import { openai } from "@ai-sdk/openai"; import { ContextPrecisionMetric } from "@mastra/evals/llm"; // Configure the model for evaluation const model = openai("gpt-4o-mini"); const metric = new ContextPrecisionMetric(model, { context: [ "Photosynthesis is a biological process used by plants to create energy from sunlight.", "Plants need water and nutrients from the soil to grow.", "The process of photosynthesis produces oxygen as a byproduct.", ], }); const result = await metric.measure( "What is photosynthesis?", "Photosynthesis is the process by which plants convert sunlight into energy.", ); console.log(result.score); // Precision score from 0-1 console.log(result.info.reason); // Explanation of the score ``` ## Constructor Parameters ### ContextPrecisionMetricOptions ## measure() Parameters ## Returns ## Scoring Details The metric evaluates context precision through binary relevance assessment and Mean Average Precision (MAP) scoring. ### Scoring Process 1. Assigns binary relevance scores: - Relevant context: 1 - Irrelevant context: 0 2. Calculates Mean Average Precision: - Computes precision at each position - Weights earlier positions more heavily - Normalizes to configured scale Final score: `Mean Average Precision * scale` ### Score interpretation (0 to scale, default 0-1) - 1.0: All relevant context in optimal order - 0.7-0.9: Mostly relevant context with good ordering - 0.4-0.6: Mixed relevance or suboptimal ordering - 0.1-0.3: Limited relevance or poor ordering - 0.0: No relevant context ## Example with Analysis ```typescript import { openai } from "@ai-sdk/openai"; import { ContextPrecisionMetric } from "@mastra/evals/llm"; // Configure the model for evaluation const model = openai("gpt-4o-mini"); const metric = new ContextPrecisionMetric(model, { context: [ "Exercise strengthens the heart and improves blood circulation.", "A balanced diet is important for health.", "Regular physical activity reduces stress and anxiety.", "Exercise equipment can be expensive.", ], }); const result = await metric.measure( "What are the benefits of exercise?", "Regular exercise improves cardiovascular health and mental wellbeing.", ); // Example output: // { // score: 0.75, // info: { // reason: "The score is 0.75 because the first and third contexts are highly relevant // to the benefits mentioned in the output, while the second and fourth contexts // are not directly related to exercise benefits. The relevant contexts are well-positioned // at the beginning and middle of the sequence." // } // } ``` ## Related - [Answer Relevancy Metric](./answer-relevancy) - [Context Position Metric](./context-position) - [Completeness Metric](./completeness) - [Context Relevancy Metric](./context-relevancy) --- title: "Reference: Context Relevancy | Evals | Mastra Docs" description: Documentation for the Context Relevancy Metric, which evaluates the relevance of retrieved context in RAG pipelines. --- # ContextRelevancyMetric Source: https://mastra.ai/docs/reference/evals/context-relevancy The `ContextRelevancyMetric` class evaluates the quality of your RAG (Retrieval-Augmented Generation) pipeline's retriever by measuring how relevant the retrieved context is to the input query. It uses an LLM-based evaluation system that first extracts statements from the context and then assesses their relevance to the input. 
## Basic Usage ```typescript import { openai } from "@ai-sdk/openai"; import { ContextRelevancyMetric } from "@mastra/evals/llm"; // Configure the model for evaluation const model = openai("gpt-4o-mini"); const metric = new ContextRelevancyMetric(model, { context: [ "All data is encrypted at rest and in transit", "Two-factor authentication is mandatory", "The platform supports multiple languages", "Our offices are located in San Francisco" ] }); const result = await metric.measure( "What are our product's security features?", "Our product uses encryption and requires 2FA.", ); console.log(result.score); // Score from 0-1 console.log(result.info.reason); // Explanation of the relevancy assessment ``` ## Constructor Parameters ### ContextRelevancyMetricOptions ## measure() Parameters ## Returns ## Scoring Details The metric evaluates how well retrieved context matches the query through binary relevance classification. ### Scoring Process 1. Extracts statements from context: - Breaks down context into meaningful units - Preserves semantic relationships 2. Evaluates statement relevance: - Assesses each statement against query - Counts relevant statements - Calculates relevance ratio Final score: `(relevant_statements / total_statements) * scale` ### Score interpretation (0 to scale, default 0-1) - 1.0: Perfect relevancy - all retrieved context is relevant - 0.7-0.9: High relevancy - most context is relevant with few irrelevant pieces - 0.4-0.6: Moderate relevancy - a mix of relevant and irrelevant context - 0.1-0.3: Low relevancy - mostly irrelevant context - 0.0: No relevancy - completely irrelevant context ## Example with Custom Configuration ```typescript import { openai } from "@ai-sdk/openai"; import { ContextRelevancyMetric } from "@mastra/evals/llm"; // Configure the model for evaluation const model = openai("gpt-4o-mini"); const metric = new ContextRelevancyMetric(model, { scale: 100, // Use 0-100 scale instead of 0-1 context: [ "Basic plan costs $10/month", "Pro plan includes advanced features at $30/month", "Enterprise plan has custom pricing", "Our company was founded in 2020", "We have offices worldwide" ] }); const result = await metric.measure( "What are our pricing plans?", "We offer Basic, Pro, and Enterprise plans.", ); // Example output: // { // score: 60, // info: { // reason: "3 out of 5 statements are relevant to pricing plans. The statements about // company founding and office locations are not relevant to the pricing query." // } // } ``` ## Related - [Contextual Recall Metric](./contextual-recall) - [Context Precision Metric](./context-precision) - [Context Position Metric](./context-position) --- title: "Reference: Contextual Recall | Metrics | Evals | Mastra Docs" description: Documentation for the Contextual Recall Metric, which evaluates the completeness of LLM responses in incorporating relevant context. --- # ContextualRecallMetric Source: https://mastra.ai/docs/reference/evals/contextual-recall The `ContextualRecallMetric` class evaluates how effectively an LLM's response incorporates all relevant information from the provided context. It measures whether important information from the reference documents was successfully included in the response, focusing on completeness rather than precision. 
## Basic Usage ```typescript import { openai } from "@ai-sdk/openai"; import { ContextualRecallMetric } from "@mastra/evals/llm"; // Configure the model for evaluation const model = openai("gpt-4o-mini"); const metric = new ContextualRecallMetric(model, { context: [ "Product features: cloud synchronization capability", "Offline mode available for all users", "Supports multiple devices simultaneously", "End-to-end encryption for all data" ] }); const result = await metric.measure( "What are the key features of the product?", "The product includes cloud sync, offline mode, and multi-device support.", ); console.log(result.score); // Score from 0-1 ``` ## Constructor Parameters ### ContextualRecallMetricOptions ## measure() Parameters ## Returns ## Scoring Details The metric evaluates recall through comparison of response content against relevant context items. ### Scoring Process 1. Evaluates information recall: - Identifies relevant items in context - Tracks correctly recalled information - Measures completeness of recall 2. Calculates recall score: - Counts correctly recalled items - Compares against total relevant items - Computes coverage ratio Final score: `(correctly_recalled_items / total_relevant_items) * scale` ### Score interpretation (0 to scale, default 0-1) - 1.0: Perfect recall - all relevant information included - 0.7-0.9: High recall - most relevant information included - 0.4-0.6: Moderate recall - some relevant information missed - 0.1-0.3: Low recall - significant information missed - 0.0: No recall - no relevant information included ## Example with Custom Configuration ```typescript import { openai } from "@ai-sdk/openai"; import { ContextualRecallMetric } from "@mastra/evals/llm"; // Configure the model for evaluation const model = openai("gpt-4o-mini"); const metric = new ContextualRecallMetric( model, { scale: 100, // Use 0-100 scale instead of 0-1 context: [ "All data is encrypted at rest and in transit", "Two-factor authentication (2FA) is mandatory", "Regular security audits are performed", "Incident response team available 24/7" ] } ); const result = await metric.measure( "Summarize the company's security measures", "The company implements encryption for data protection and requires 2FA for all users.", ); // Example output: // { // score: 50, // Only half of the security measures were mentioned // info: { // reason: "The score is 50 because only half of the security measures were mentioned // in the response. The response missed the regular security audits and incident // response team information." // } // } ``` ## Related - [Context Relevancy Metric](./context-relevancy) - [Completeness Metric](./completeness) - [Summarization Metric](./summarization) --- title: "Reference: Faithfulness | Metrics | Evals | Mastra Docs" description: Documentation for the Faithfulness Metric in Mastra, which evaluates the factual accuracy of LLM outputs compared to the provided context. --- # FaithfulnessMetric Reference Source: https://mastra.ai/docs/reference/evals/faithfulness The `FaithfulnessMetric` in Mastra evaluates how factually accurate an LLM's output is compared to the provided context. It extracts claims from the output and verifies them against the context, making it essential for measuring the reliability of RAG pipeline responses.
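The score itself is simple arithmetic over claim verdicts. The sketch below shows that calculation with hypothetical verdicts; it is not the metric's internal code, which relies on an LLM judge to produce the verdicts in the first place.

```typescript
// Illustrative only: faithfulness = supported claims / total claims, scaled.
// "unsure" claims count toward the total but not toward the supported count.
type Verdict = "yes" | "no" | "unsure";

function faithfulnessScore(verdicts: Verdict[], scale = 1): number {
  if (verdicts.length === 0) return 0;
  const supported = verdicts.filter((v) => v === "yes").length;
  return (supported / verdicts.length) * scale;
}

// Two supported claims plus one unverifiable claim, as in the Advanced Example below:
console.log(faithfulnessScore(["yes", "yes", "unsure"])); // ≈ 0.67
```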
## Basic Usage ```typescript import { openai } from "@ai-sdk/openai"; import { FaithfulnessMetric } from "@mastra/evals/llm"; // Configure the model for evaluation const model = openai("gpt-4o-mini"); const metric = new FaithfulnessMetric(model, { context: [ "The company was established in 1995.", "Currently employs around 450-550 people.", ], }); const result = await metric.measure( "Tell me about the company.", "The company was founded in 1995 and has 500 employees.", ); console.log(result.score); // 1.0 console.log(result.info.reason); // "All claims are supported by the context." ``` ## Constructor Parameters ### FaithfulnessMetricOptions ## measure() Parameters ## Returns ## Scoring Details The metric evaluates faithfulness through claim verification against provided context. ### Scoring Process 1. Analyzes claims and context: - Extracts all claims (factual and speculative) - Verifies each claim against context - Assigns one of three verdicts: - "yes" - claim supported by context - "no" - claim contradicts context - "unsure" - claim unverifiable 2. Calculates faithfulness score: - Counts supported claims - Divides by total claims - Scales to configured range Final score: `(supported_claims / total_claims) * scale` ### Score interpretation (0 to scale, default 0-1) - 1.0: All claims supported by context - 0.7-0.9: Most claims supported, few unverifiable - 0.4-0.6: Mixed support with some contradictions - 0.1-0.3: Limited support, many contradictions - 0.0: No supported claims ## Advanced Example ```typescript import { openai } from "@ai-sdk/openai"; import { FaithfulnessMetric } from "@mastra/evals/llm"; // Configure the model for evaluation const model = openai("gpt-4o-mini"); const metric = new FaithfulnessMetric(model, { context: [ "The company had 100 employees in 2020.", "Current employee count is approximately 500.", ], }); // Example with mixed claim types const result = await metric.measure( "What's the company's growth like?", "The company has grown from 100 employees in 2020 to 500 now, and might expand to 1000 by next year.", ); // Example output: // { // score: 0.67, // info: { // reason: "The score is 0.67 because two claims are supported by the context // (initial employee count of 100 in 2020 and current count of 500), // while the future expansion claim is marked as unsure as it cannot // be verified against the context." // } // } ``` ### Related - [Answer Relevancy Metric](./answer-relevancy) - [Hallucination Metric](./hallucination) - [Context Relevancy Metric](./context-relevancy) --- title: "Reference: Hallucination | Metrics | Evals | Mastra Docs" description: Documentation for the Hallucination Metric in Mastra, which evaluates the factual correctness of LLM outputs by identifying contradictions with provided context. --- # HallucinationMetric Source: https://mastra.ai/docs/reference/evals/hallucination The `HallucinationMetric` evaluates whether an LLM generates factually correct information by comparing its output against the provided context. This metric measures hallucination by identifying direct contradictions between the context and the output. 
## Basic Usage ```typescript import { openai } from "@ai-sdk/openai"; import { HallucinationMetric } from "@mastra/evals/llm"; // Configure the model for evaluation const model = openai("gpt-4o-mini"); const metric = new HallucinationMetric(model, { context: [ "Tesla was founded in 2003 by Martin Eberhard and Marc Tarpenning in San Carlos, California.", ], }); const result = await metric.measure( "Tell me about Tesla's founding.", "Tesla was founded in 2004 by Elon Musk in California.", ); console.log(result.score); // Score from 0-1 console.log(result.info.reason); // Explanation of the score // Example output: // { // score: 0.67, // info: { // reason: "The score is 0.67 because two out of three statements from the context // (founding year and founders) were contradicted by the output, while the // location statement was not contradicted." // } // } ``` ## Constructor Parameters ### HallucinationMetricOptions ## measure() Parameters ## Returns ## Scoring Details The metric evaluates hallucination through contradiction detection and unsupported claim analysis. ### Scoring Process 1. Analyzes factual content: - Extracts statements from context - Identifies numerical values and dates - Maps statement relationships 2. Analyzes output for hallucinations: - Compares against context statements - Marks direct conflicts as hallucinations - Identifies unsupported claims as hallucinations - Evaluates numerical accuracy - Considers approximation context 3. Calculates hallucination score: - Counts hallucinated statements (contradictions and unsupported claims) - Divides by total statements - Scales to configured range Final score: `(hallucinated_statements / total_statements) * scale` ### Important Considerations - Claims not present in context are treated as hallucinations - Subjective claims are hallucinations unless explicitly supported - Speculative language ("might", "possibly") about facts IN context is allowed - Speculative language about facts NOT in context is treated as hallucination - Empty outputs result in zero hallucinations - Numerical evaluation considers: - Scale-appropriate precision - Contextual approximations - Explicit precision indicators ### Score interpretation (0 to scale, default 0-1) - 1.0: Complete hallucination - contradicts all context statements - 0.75: High hallucination - contradicts 75% of context statements - 0.5: Moderate hallucination - contradicts half of context statements - 0.25: Low hallucination - contradicts 25% of context statements - 0.0: No hallucination - output aligns with all context statements **Note:** The score represents the degree of hallucination - lower scores indicate better factual alignment with the provided context. ## Example with Analysis ```typescript import { openai } from "@ai-sdk/openai"; import { HallucinationMetric } from "@mastra/evals/llm"; // Configure the model for evaluation const model = openai("gpt-4o-mini"); const metric = new HallucinationMetric(model, { context: [ "OpenAI was founded in December 2015 by Sam Altman, Greg Brockman, and others.", "The company launched with a $1 billion investment commitment.", "Elon Musk was an early supporter but left the board in 2018.", ], }); const result = await metric.measure( "What are the key details about OpenAI?", "OpenAI was founded in 2015 by Elon Musk and Sam Altman with a $2 billion investment.", ); // Example output: // { // score: 0.33, // info: { // reason: "The score is 0.33 because one out of three statements from the context // was contradicted (the
investment amount was stated as $2 billion instead // of $1 billion). The founding date was correct, and while the output's // description of founders was incomplete, it wasn't strictly contradictory." // } // } ``` ## Related - [Faithfulness Metric](./faithfulness) - [Answer Relevancy Metric](./answer-relevancy) - [Context Precision Metric](./context-precision) - [Context Relevancy Metric](./context-relevancy) --- title: "Reference: Keyword Coverage | Metrics | Evals | Mastra Docs" description: Documentation for the Keyword Coverage Metric in Mastra, which evaluates how well LLM outputs cover important keywords from the input. --- # KeywordCoverageMetric Source: https://mastra.ai/docs/reference/evals/keyword-coverage The `KeywordCoverageMetric` class evaluates how well an LLM's output covers the important keywords from the input. It analyzes keyword presence and matches while ignoring common words and stop words. ## Basic Usage ```typescript import { KeywordCoverageMetric } from "@mastra/evals/nlp"; const metric = new KeywordCoverageMetric(); const result = await metric.measure( "What are the key features of Python programming language?", "Python is a high-level programming language known for its simple syntax and extensive libraries." ); console.log(result.score); // Coverage score from 0-1 console.log(result.info); // Object containing detailed metrics about keyword coverage ``` ## measure() Parameters ## Returns ## Scoring Details The metric evaluates keyword coverage by matching keywords with the following features: - Common word and stop word filtering (e.g., "the", "a", "and") - Case-insensitive matching - Word form variation handling - Special handling of technical terms and compound words ### Scoring Process 1. Processes keywords from input and output: - Filters out common words and stop words - Normalizes case and word forms - Handles special terms and compounds 2. 
Calculates keyword coverage: - Matches keywords between texts - Counts successful matches - Computes coverage ratio Final score: `(matched_keywords / total_keywords) * scale` ### Score interpretation (0 to scale, default 0-1) - 1.0: Perfect keyword coverage - 0.7-0.9: Good coverage with most keywords present - 0.4-0.6: Moderate coverage with some keywords missing - 0.1-0.3: Poor coverage with many keywords missing - 0.0: No keyword matches ## Examples with Analysis ```typescript import { KeywordCoverageMetric } from "@mastra/evals/nlp"; const metric = new KeywordCoverageMetric(); // Perfect coverage example const result1 = await metric.measure( "The quick brown fox jumps over the lazy dog", "A quick brown fox jumped over a lazy dog" ); // { // score: 1.0, // info: { // matchedKeywords: 6, // totalKeywords: 6 // } // } // Partial coverage example const result2 = await metric.measure( "Python features include easy syntax, dynamic typing, and extensive libraries", "Python has simple syntax and many libraries" ); // { // score: 0.67, // info: { // matchedKeywords: 4, // totalKeywords: 6 // } // } // Technical terms example const result3 = await metric.measure( "Discuss React.js component lifecycle and state management", "React components have lifecycle methods and manage state" ); // { // score: 1.0, // info: { // matchedKeywords: 4, // totalKeywords: 4 // } // } ``` ## Special Cases The metric handles several special cases: - Empty input/output: Returns score of 1.0 if both empty, 0.0 if only one is empty - Single word: Treated as a single keyword - Technical terms: Preserves compound technical terms (e.g., "React.js", "machine learning") - Case differences: "JavaScript" matches "javascript" - Common words: Ignored in scoring to focus on meaningful keywords ## Related - [Completeness Metric](./completeness) - [Content Similarity Metric](./content-similarity) - [Answer Relevancy Metric](./answer-relevancy) - [Textual Difference Metric](./textual-difference) - [Context Relevancy Metric](./context-relevancy) --- title: "Reference: Prompt Alignment | Metrics | Evals | Mastra Docs" description: Documentation for the Prompt Alignment Metric in Mastra, which evaluates how well LLM outputs adhere to given prompt instructions. --- # PromptAlignmentMetric Source: https://mastra.ai/docs/reference/evals/prompt-alignment The `PromptAlignmentMetric` class evaluates how strictly an LLM's output follows a set of given prompt instructions. It uses a judge-based system to verify each instruction is followed exactly and provides detailed reasoning for any deviations. ## Basic Usage ```typescript import { openai } from "@ai-sdk/openai"; import { PromptAlignmentMetric } from "@mastra/evals/llm"; // Configure the model for evaluation const model = openai("gpt-4o-mini"); const instructions = [ "Start sentences with capital letters", "End each sentence with a period", "Use present tense", ]; const metric = new PromptAlignmentMetric(model, { instructions, scale: 1, }); const result = await metric.measure( "describe the weather", "The sun is shining. Clouds float in the sky. 
A gentle breeze blows.", ); console.log(result.score); // Alignment score from 0-1 console.log(result.info.reason); // Explanation of the score ``` ## Constructor Parameters ### PromptAlignmentOptions ## measure() Parameters ## Returns ## Scoring Details The metric evaluates instruction alignment through: - Applicability assessment for each instruction - Strict compliance evaluation for applicable instructions - Detailed reasoning for all verdicts - Proportional scoring based on applicable instructions ### Instruction Verdicts Each instruction receives one of three verdicts: - "yes": Instruction is applicable and completely followed - "no": Instruction is applicable but not followed or only partially followed - "n/a": Instruction is not applicable to the given context ### Scoring Process 1. Evaluates instruction applicability: - Determines if each instruction applies to the context - Marks irrelevant instructions as "n/a" - Considers domain-specific requirements 2. Assesses compliance for applicable instructions: - Evaluates each applicable instruction independently - Requires complete compliance for "yes" verdict - Documents specific reasons for all verdicts 3. Calculates alignment score: - Counts followed instructions ("yes" verdicts) - Divides by total applicable instructions (excluding "n/a") - Scales to configured range Final score: `(followed_instructions / applicable_instructions) * scale` ### Important Considerations - Empty outputs: - All formatting instructions are considered applicable - Marked as "no" since they cannot satisfy requirements - Domain-specific instructions: - Always applicable if about the queried domain - Marked as "no" if not followed, not "n/a" - "n/a" verdicts: - Only used for completely different domains - Do not affect the final score calculation ### Score interpretation (0 to scale, default 0-1) - 1.0: All applicable instructions followed perfectly - 0.7-0.9: Most applicable instructions followed - 0.4-0.6: Mixed compliance with applicable instructions - 0.1-0.3: Limited compliance with applicable instructions - 0.0: No applicable instructions followed ## Example with Analysis ```typescript import { openai } from "@ai-sdk/openai"; import { PromptAlignmentMetric } from "@mastra/evals/llm"; // Configure the model for evaluation const model = openai("gpt-4o-mini"); const metric = new PromptAlignmentMetric(model, { instructions: [ "Use bullet points for each item", "Include exactly three examples", "End each point with a semicolon" ], scale: 1 }); const result = await metric.measure( "List three fruits", "• Apple is red and sweet; • Banana is yellow and curved; • Orange is citrus and round." ); // Example output: // { // score: 1.0, // info: { // reason: "The score is 1.0 because all instructions were followed exactly: // bullet points were used, exactly three examples were provided, and // each point ends with a semicolon." // } // } const result2 = await metric.measure( "List three fruits", "1. Apple 2. Banana 3. Orange and Grape" ); // Example output: // { // score: 0.33, // info: { // reason: "The score is 0.33 because: numbered lists were used instead of bullet points, // no semicolons were used, and four fruits were listed instead of exactly three." 
// } // } ``` ## Related - [Answer Relevancy Metric](./answer-relevancy) - [Keyword Coverage Metric](./keyword-coverage) --- title: "Reference: Summarization | Metrics | Evals | Mastra Docs" description: Documentation for the Summarization Metric in Mastra, which evaluates the quality of LLM-generated summaries for content and factual accuracy. --- # SummarizationMetric Source: https://mastra.ai/docs/reference/evals/summarization The `SummarizationMetric` evaluates how well an LLM's summary captures the original text's content while maintaining factual accuracy. It combines two aspects: alignment (factual correctness) and coverage (inclusion of key information), taking the minimum of the two scores so that both qualities are required for a good summary. ## Basic Usage ```typescript import { openai } from "@ai-sdk/openai"; import { SummarizationMetric } from "@mastra/evals/llm"; // Configure the model for evaluation const model = openai("gpt-4o-mini"); const metric = new SummarizationMetric(model); const result = await metric.measure( "The company was founded in 1995 by John Smith. It started with 10 employees and grew to 500 by 2020. The company is based in Seattle.", "Founded in 1995 by John Smith, the company grew from 10 to 500 employees by 2020.", ); console.log(result.score); // Score from 0-1 console.log(result.info); // Object containing detailed metrics about the summary ``` ## Constructor Parameters ### SummarizationMetricOptions ## measure() Parameters ## Returns ## Scoring Details The metric evaluates summaries through two essential components: 1. **Alignment Score**: Measures factual correctness - Extracts claims from the summary - Verifies each claim against the original text - Assigns "yes", "no", or "unsure" verdicts 2. **Coverage Score**: Measures inclusion of key information - Generates key questions from the original text - Checks whether the summary answers these questions - Checks information inclusion and assesses comprehensiveness ### Scoring Process 1. Calculates alignment score: - Extracts claims from summary - Verifies against source text - Computes: `supported_claims / total_claims` 2. Determines coverage score: - Generates questions from source - Checks summary for answers - Evaluates completeness - Calculates: `answerable_questions / total_questions` Final score: `min(alignment_score, coverage_score) * scale` ### Score interpretation (0 to scale, default 0-1) - 1.0: Perfect summary - completely factual and covers all key information - 0.7-0.9: Strong summary with minor omissions or slight inaccuracies - 0.4-0.6: Moderate quality with significant gaps or inaccuracies - 0.1-0.3: Poor summary with major omissions or factual errors - 0.0: Invalid summary - either completely inaccurate or missing critical information ## Example with Analysis ```typescript import { openai } from "@ai-sdk/openai"; import { SummarizationMetric } from "@mastra/evals/llm"; // Configure the model for evaluation const model = openai("gpt-4o-mini"); const metric = new SummarizationMetric(model); const result = await metric.measure( "The electric car company Tesla was founded in 2003 by Martin Eberhard and Marc Tarpenning. Elon Musk joined in 2004 as the largest investor and became CEO in 2008.
The company's first car, the Roadster, was launched in 2008.", "Tesla, founded by Elon Musk in 2003, revolutionized the electric car industry starting with the Roadster in 2008.", ); // Example output: // { // score: 0.5, // info: { // reason: "The score is 0.5 because while the coverage is good (0.75) - mentioning the founding year, // first car model, and launch date - the alignment score is lower (0.5) due to incorrectly // attributing the company's founding to Elon Musk instead of Martin Eberhard and Marc Tarpenning. // The final score takes the minimum of these two scores to ensure both factual accuracy and // coverage are necessary for a good summary." // alignmentScore: 0.5, // coverageScore: 0.75, // } // } ``` ## Related - [Faithfulness Metric](./faithfulness) - [Completeness Metric](./completeness) - [Contextual Recall Metric](./contextual-recall) - [Hallucination Metric](./hallucination) --- title: "Reference: Textual Difference | Evals | Mastra Docs" description: Documentation for the Textual Difference Metric in Mastra, which measures textual differences between strings using sequence matching. --- # TextualDifferenceMetric Source: https://mastra.ai/docs/reference/evals/textual-difference The `TextualDifferenceMetric` class uses sequence matching to measure the textual differences between two strings. It provides detailed information about changes, including the number of operations needed to transform one text into another. ## Basic Usage ```typescript import { TextualDifferenceMetric } from "@mastra/evals/nlp"; const metric = new TextualDifferenceMetric(); const result = await metric.measure( "The quick brown fox", "The fast brown fox" ); console.log(result.score); // Similarity ratio from 0-1 console.log(result.info); // Detailed change metrics ``` ## measure() Parameters ## Returns ## Scoring Details The metric calculates several measures: - **Similarity Ratio**: Based on sequence matching between texts (0-1) - **Changes**: Count of non-matching operations needed - **Length Difference**: Normalized difference in text lengths - **Confidence**: Inversely proportional to length difference ### Scoring Process 1. Analyzes textual differences: - Performs sequence matching between input and output - Counts the number of change operations required - Measures length differences 2. Calculates metrics: - Computes similarity ratio - Determines confidence score - Combines into weighted score Final score: `(similarity_ratio * confidence) * scale` ### Score interpretation (0 to scale, default 0-1) - 1.0: Identical texts - no differences - 0.7-0.9: Minor differences - few changes needed - 0.4-0.6: Moderate differences - significant changes - 0.1-0.3: Major differences - extensive changes - 0.0: Completely different texts ## Example with Analysis ```typescript import { TextualDifferenceMetric } from "@mastra/evals/nlp"; const metric = new TextualDifferenceMetric(); const result = await metric.measure( "Hello world! How are you?", "Hello there! How is it going?" ); // Example output: // { // score: 0.65, // info: { // confidence: 0.95, // ratio: 0.65, // changes: 2, // lengthDiff: 0.05 // } // } ``` ## Related - [Content Similarity Metric](./content-similarity) - [Completeness Metric](./completeness) - [Keyword Coverage Metric](./keyword-coverage) --- title: "Reference: Tone Consistency | Metrics | Evals | Mastra Docs" description: Documentation for the Tone Consistency Metric in Mastra, which evaluates emotional tone and sentiment consistency in text. 
--- # ToneConsistencyMetric Source: https://mastra.ai/docs/reference/evals/tone-consistency The `ToneConsistencyMetric` class evaluates the text's emotional tone and sentiment consistency. It can operate in two modes: comparing tone between input/output pairs or analyzing tone stability within a single text. ## Basic Usage ```typescript import { ToneConsistencyMetric } from "@mastra/evals/nlp"; const metric = new ToneConsistencyMetric(); // Compare tone between input and output const result1 = await metric.measure( "I love this amazing product!", "This product is wonderful and fantastic!" ); // Analyze tone stability in a single text const result2 = await metric.measure( "The service is excellent. The staff is friendly. The atmosphere is perfect.", "" // Empty string for single-text analysis ); console.log(result1.score); // Tone consistency score from 0-1 console.log(result2.score); // Tone stability score from 0-1 ``` ## measure() Parameters ## Returns ### info Object (Tone Comparison) ### info Object (Tone Stability) ## Scoring Details The metric evaluates sentiment consistency through tone pattern analysis and mode-specific scoring. ### Scoring Process 1. Analyzes tone patterns: - Extracts sentiment features - Computes sentiment scores - Measures tone variations 2. Calculates mode-specific score: **Tone Consistency** (input and output): - Compares sentiment between texts - Calculates sentiment difference - Score = 1 - (sentiment_difference / max_difference) **Tone Stability** (single input): - Analyzes sentiment across sentences - Calculates sentiment variance - Score = 1 - (sentiment_variance / max_variance) Final score: `mode_specific_score * scale` ### Score interpretation (0 to scale, default 0-1) - 1.0: Perfect tone consistency/stability - 0.7-0.9: Strong consistency with minor variations - 0.4-0.6: Moderate consistency with noticeable shifts - 0.1-0.3: Poor consistency with major tone changes - 0.0: No consistency - completely different tones ## Example with Both Modes ```typescript import { ToneConsistencyMetric } from "@mastra/evals/nlp"; const metric = new ToneConsistencyMetric(); // Tone Consistency Mode const consistencyResult = await metric.measure( "This product is fantastic and amazing!", "The product is excellent and wonderful!" ); // Example output: // { // score: 0.95, // info: { // responseSentiment: 0.8, // referenceSentiment: 0.75, // difference: 0.05 // } // } // Tone Stability Mode const stabilityResult = await metric.measure( "Great service! Friendly staff. Perfect atmosphere.", "" ); // Example output: // { // score: 0.9, // info: { // avgSentiment: 0.6, // sentimentVariance: 0.1 // } // } ``` ## Related - [Content Similarity Metric](./content-similarity) - [Toxicity Metric](./toxicity) --- title: "Reference: Toxicity | Metrics | Evals | Mastra Docs" description: Documentation for the Toxicity Metric in Mastra, which evaluates LLM outputs for racist, biased, or toxic elements. --- # ToxicityMetric Source: https://mastra.ai/docs/reference/evals/toxicity The `ToxicityMetric` class evaluates whether an LLM's output contains racist, biased, or toxic elements. It uses a judge-based system to analyze responses for various forms of toxicity including personal attacks, mockery, hate speech, dismissive statements, and threats. 
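Besides calling `.measure()` directly, metrics like this one are typically attached to an agent so that every response is scored automatically. A minimal sketch, assuming the `Agent` configuration accepts an `evals` map of named metrics; the agent name and instructions are placeholders.

```typescript
import { openai } from "@ai-sdk/openai";
import { Agent } from "@mastra/core/agent";
import { ToxicityMetric } from "@mastra/evals/llm";

const model = openai("gpt-4o-mini");

export const supportAgent = new Agent({
  name: "Support Agent",
  instructions: "You answer customer questions politely and factually.",
  model,
  // Assumed: each metric registered here is run against the agent's responses.
  evals: {
    toxicity: new ToxicityMetric(model),
  },
});
```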
## Basic Usage ```typescript import { openai } from "@ai-sdk/openai"; import { ToxicityMetric } from "@mastra/evals/llm"; // Configure the model for evaluation const model = openai("gpt-4o-mini"); const metric = new ToxicityMetric(model, { scale: 1, // Default scale is 0-1 }); const result = await metric.measure( "How is Sarah as a person?", "Sarah is a dedicated team member who always delivers quality work.", ); console.log(result.score); // Score from 0-1 (0 = not toxic, 1 = toxic) console.log(result.info.reason); // Explanation of the toxicity assessment ``` ## Constructor Parameters ### ToxicityMetricOptions ## measure() Parameters ## Returns ## Scoring Details The metric evaluates toxicity through multiple aspects: - Personal attacks - Mockery or sarcasm - Hate speech - Dismissive statements - Threats or intimidation ### Scoring Process 1. Analyzes toxic elements: - Identifies personal attacks and mockery - Detects hate speech and threats - Evaluates dismissive statements - Assesses severity levels 2. Calculates toxicity score: - Weighs detected elements - Combines severity ratings - Normalizes to scale Final score: `(toxicity_weighted_sum / max_toxicity) * scale` ### Score interpretation (0 to scale, default 0-1) - 0.8-1.0: Severe toxicity - 0.4-0.7: Moderate toxicity - 0.1-0.3: Mild toxicity - 0.0: No toxic elements detected ## Example with Custom Configuration ```typescript import { openai } from "@ai-sdk/openai"; const model = openai("gpt-4o-mini"); const metric = new ToxicityMetric(model, { scale: 10, // Use 0-10 scale instead of 0-1 }); const result = await metric.measure( "What do you think about the new team member?", "The new team member shows promise but needs significant improvement in basic skills.", ); ``` ## Related - [Tone Consistency Metric](./tone-consistency) - [Bias Metric](./bias) --- title: "API Reference" description: "Mastra API Reference" --- # Reference Source: https://mastra.ai/docs/reference The Reference section provides documentation of Mastra's API, including parameters, types and usage examples. # Memory Class Reference Source: https://mastra.ai/docs/reference/memory/Memory The `Memory` class provides a robust system for managing conversation history and thread-based message storage in Mastra. It enables persistent storage of conversations, semantic search capabilities, and efficient message retrieval. By default, it uses LibSQL for storage and vector search, and FastEmbed for embeddings. 
## Basic Usage ```typescript copy showLineNumbers import { Memory } from "@mastra/memory"; import { Agent } from "@mastra/core/agent"; const agent = new Agent({ memory: new Memory(), ...otherOptions, }); ``` ## Custom Configuration ```typescript copy showLineNumbers import { Memory } from "@mastra/memory"; import { LibSQLStore } from "@mastra/core/storage/libsql"; import { LibSQLVector } from "@mastra/core/vector/libsql"; import { Agent } from "@mastra/core/agent"; const memory = new Memory({ // Optional storage configuration - libsql will be used by default storage: new LibSQLStore({ url: "file:memory.db", }), // Optional vector database for semantic search - libsql will be used by default vector: new LibSQLVector({ url: "file:vector.db", }), // Memory configuration options options: { // Number of recent messages to include lastMessages: 20, // Semantic search configuration semanticRecall: { topK: 3, // Number of similar messages to retrieve messageRange: { // Messages to include around each result before: 2, after: 1, }, }, // Working memory configuration workingMemory: { enabled: true, template: "", }, }, }); const agent = new Agent({ memory, ...otherOptions, }); ``` ## Parameters ### options - `threads` (`{ generateTitle?: boolean }`, optional, default `{ generateTitle: true }`): Settings related to memory thread creation. `generateTitle` causes `thread.title` to be generated from an LLM summary of the user's first message. ### Working Memory The working memory feature allows agents to maintain persistent information across conversations. When enabled, the Memory class will automatically manage XML-based working memory updates through either text stream tags or tool calls. There are two modes for handling working memory updates: 1. **text-stream** (default): The agent includes working memory updates directly in its responses using XML-like tags (`...`). These tags are automatically processed and stripped from the visible output. 2. **tool-call**: The agent uses a dedicated tool to update working memory. This mode should be used when working with `toDataStream()` as text-stream mode is not compatible with data streaming. Additionally, this mode provides more explicit control over memory updates and may be preferred when working with agents that are better at using tools than managing text tags. Example configuration: ```typescript copy showLineNumbers const memory = new Memory({ options: { workingMemory: { enabled: true, template: "", use: "tool-call", // or 'text-stream' }, }, }); ``` If no template is provided, the Memory class uses a default template that includes fields for user details, preferences, goals, and other contextual information. See the [Agent Memory Guide](/docs/agents/01-agent-memory#working-memory) for detailed usage examples and best practices. ### embedder By default, Memory uses FastEmbed with the `bge-small-en-v1.5` model, which provides a good balance of performance and model size (~130MB). You only need to specify an embedder if you want to use a different model or provider. ### Related - [createThread](/docs/reference/memory/createThread.mdx) - [query](/docs/reference/memory/query.mdx) # createThread Source: https://mastra.ai/docs/reference/memory/createThread Creates a new conversation thread in the memory system. Each thread represents a distinct conversation or context and can contain multiple messages.
## Usage Example ```typescript import { Memory } from "@mastra/memory"; const memory = new Memory({ /* config */ }); const thread = await memory.createThread({ resourceId: "user-123", title: "Support Conversation", metadata: { category: "support", priority: "high" } }); ``` ## Parameters - `metadata` (optional): Metadata to associate with the thread ## Returns - `metadata`: Additional metadata associated with the thread ### Related - [Memory](/docs/reference/memory/Memory.mdx) - [getThreadById](/docs/reference/memory/getThreadById.mdx) - [getThreadsByResourceId](/docs/reference/memory/getThreadsByResourceId.mdx) # getThreadById Reference Source: https://mastra.ai/docs/reference/memory/getThreadById The `getThreadById` function retrieves a specific thread by its ID from storage. ## Usage Example ```typescript import { Memory } from "@mastra/memory"; const memory = new Memory(config); const thread = await memory.getThreadById({ threadId: "thread-123" }); ``` ## Parameters ## Returns ### Related - [Memory](/docs/reference/memory/Memory.mdx) # getThreadsByResourceId Reference Source: https://mastra.ai/docs/reference/memory/getThreadsByResourceId The `getThreadsByResourceId` function retrieves all threads associated with a specific resource ID from storage. ## Usage Example ```typescript import { Memory } from "@mastra/memory"; const memory = new Memory(config); const threads = await memory.getThreadsByResourceId({ resourceId: "resource-123", }); ``` ## Parameters ## Returns ### Related - [Memory](/docs/reference/memory/Memory.mdx) # query Source: https://mastra.ai/docs/reference/memory/query Retrieves messages from a specific thread, with support for pagination and filtering options. ## Usage Example ```typescript import { Memory } from "@mastra/memory"; const memory = new Memory({ /* config */ }); // Get last 50 messages const { messages, uiMessages } = await memory.query({ threadId: "thread-123", selectBy: { last: 50, }, }); // Get messages with context around specific messages const { messages: contextMessages } = await memory.query({ threadId: "thread-123", selectBy: { include: [ { id: "msg-123", // Get just this message (no context) }, { id: "msg-456", // Get this message with custom context withPreviousMessages: 3, // 3 messages before withNextMessages: 1, // 1 message after }, ], }, }); // Semantic search in messages const { messages } = await memory.query({ threadId: "thread-123", selectBy: { vectorSearchString: "What was discussed about deployment?", }, threadConfig: { historySearch: true, }, }); ``` ## Parameters ### selectBy ### include ## Returns ## Additional Notes The `query` function returns two different message formats: - `messages`: Core message format used internally - `uiMessages`: Formatted messages suitable for UI display, including proper threading of tool calls and results ### Related - [Memory](/docs/reference/memory/Memory.mdx) --- title: 'AgentNetwork (Experimental)' description: 'Reference documentation for the AgentNetwork class' --- # AgentNetwork (Experimental) Source: https://mastra.ai/docs/reference/networks/agent-network > **Note:** The AgentNetwork feature is experimental and may change in future releases. The `AgentNetwork` class provides a way to create a network of specialized agents that can collaborate to solve complex tasks. Unlike Workflows, which require explicit control over execution paths, AgentNetwork uses an LLM-based router to dynamically determine which agent to call next.
## Key Concepts - **LLM-based Routing**: AgentNetwork exclusively uses an LLM to figure out the best way to use your agents - **Agent Collaboration**: Multiple specialized agents can work together to solve complex tasks - **Dynamic Decision Making**: The router decides which agent to call based on the task requirements ## Usage ```typescript import { AgentNetwork } from '@mastra/core'; import { Agent } from '@mastra/core/agent'; import { openai } from '@ai-sdk/openai'; // Create specialized agents const webSearchAgent = new Agent({ name: 'Web Search Agent', instructions: 'You search the web for information.', model: openai('gpt-4o'), tools: { /* web search tools */ }, }); const dataAnalysisAgent = new Agent({ name: 'Data Analysis Agent', instructions: 'You analyze data and provide insights.', model: openai('gpt-4o'), tools: { /* data analysis tools */ }, }); // Create the network const researchNetwork = new AgentNetwork({ name: 'Research Network', instructions: 'Coordinate specialized agents to research topics thoroughly.', model: openai('gpt-4o'), agents: [webSearchAgent, dataAnalysisAgent], }); // Use the network const result = await researchNetwork.generate('Research the impact of climate change on agriculture'); console.log(result.text); ``` ## Constructor ```typescript constructor(config: AgentNetworkConfig) ``` ### Parameters - `config`: Configuration object for the AgentNetwork - `name`: Name of the network - `instructions`: Instructions for the routing agent - `model`: Language model to use for routing - `agents`: Array of specialized agents in the network ## Methods ### generate() Generates a response using the agent network. This method has replaced the deprecated `run()` method for consistency with the rest of the codebase. ```typescript async generate( messages: string | string[] | CoreMessage[], args?: AgentGenerateOptions ): Promise ``` ### stream() Streams a response using the agent network. ```typescript async stream( messages: string | string[] | CoreMessage[], args?: AgentStreamOptions ): Promise ``` ### getRoutingAgent() Returns the routing agent used by the network. ```typescript getRoutingAgent(): Agent ``` ### getAgents() Returns the array of specialized agents in the network. ```typescript getAgents(): Agent[] ``` ### getAgentHistory() Returns the history of interactions for a specific agent. ```typescript getAgentHistory(agentId: string): Array<{ input: string; output: string; timestamp: string; }> ``` ### getAgentInteractionHistory() Returns the history of all agent interactions that have occurred in the network. ```typescript getAgentInteractionHistory(): Record< string, Array<{ input: string; output: string; timestamp: string; }> > ``` ### getAgentInteractionSummary() Returns a formatted summary of agent interactions in chronological order. ```typescript getAgentInteractionSummary(): string ``` ## When to Use AgentNetwork vs Workflows - **Use AgentNetwork when:** You want the AI to figure out the best way to use your agents, with dynamic routing based on the task requirements. - **Use Workflows when:** You need explicit control over execution paths, with predetermined sequences of agent calls and conditional logic. ## Internal Tools The AgentNetwork uses a special `transmit` tool that allows the routing agent to call specialized agents.
This tool handles: - Single agent calls - Multiple parallel agent calls - Context sharing between agents ## Limitations - The AgentNetwork approach may use more tokens than a well-designed Workflow for the same task - Debugging can be more challenging as the routing decisions are made by the LLM - Performance may vary based on the quality of the routing instructions and the capabilities of the specialized agents --- title: "Reference: createLogger() | Mastra Observability Docs" description: Documentation for the createLogger function, which instantiates a logger based on a given configuration. --- # createLogger() Source: https://mastra.ai/docs/reference/observability/create-logger The `createLogger()` function is used to instantiate a logger based on a given configuration. You can create console-based, file-based, or Upstash Redis-based loggers by specifying the type and any additional parameters relevant to that type. ### Usage #### Console Logger (Development) ```typescript showLineNumbers copy const consoleLogger = createLogger({ name: "Mastra", level: "debug" }); consoleLogger.info("App started"); ``` #### File Transport (Structured Logs) ```typescript showLineNumbers copy import { FileTransport } from "@mastra/loggers/file"; const fileLogger = createLogger({ name: "Mastra", transports: { file: new FileTransport({ path: "test-dir/test.log" }) }, level: "warn", }); fileLogger.warn("Low disk space", { destinationPath: "system", type: "WORKFLOW", }); ``` #### Upstash Logger (Remote Log Drain) ```typescript showLineNumbers copy import { UpstashTransport } from "@mastra/loggers/upstash"; const logger = createLogger({ name: "Mastra", transports: { upstash: new UpstashTransport({ listName: "production-logs", upstashUrl: process.env.UPSTASH_URL!, upstashToken: process.env.UPSTASH_TOKEN!, }), }, level: "info", }); logger.info({ message: "User signed in", destinationPath: "auth", type: "AGENT", runId: "run_123", }); ``` ### Parameters --- title: "Reference: Logger Instance | Mastra Observability Docs" description: Documentation for Logger instances, which provide methods to record events at various severity levels. --- # Logger Instance Source: https://mastra.ai/docs/reference/observability/logger A Logger instance is created by `createLogger()` and provides methods to record events at various severity levels. Depending on the logger type, messages may be written to the console, file, or an external service. ## Example ```typescript showLineNumbers copy // Using a console logger const logger = createLogger({ name: 'Mastra', level: 'info' }); logger.debug('Debug message'); // Won't be logged because level is INFO logger.info({ message: 'User action occurred', destinationPath: 'user-actions', type: 'AGENT' }); // Logged logger.error('An error occurred'); // Logged as ERROR ``` ## Methods void | Promise', description: 'Write a DEBUG-level log. Only recorded if level ≤ DEBUG.', }, { name: 'info', type: '(message: BaseLogMessage | string, ...args: any[]) => void | Promise', description: 'Write an INFO-level log. Only recorded if level ≤ INFO.', }, { name: 'warn', type: '(message: BaseLogMessage | string, ...args: any[]) => void | Promise', description: 'Write a WARN-level log. Only recorded if level ≤ WARN.', }, { name: 'error', type: '(message: BaseLogMessage | string, ...args: any[]) => void | Promise', description: 'Write an ERROR-level log. 
Only recorded if level ≤ ERROR.', }, { name: 'cleanup', type: '() => Promise', isOptional: true, description: 'Cleanup resources held by the logger (e.g., network connections for Upstash). Not all loggers implement this.', }, ]} /> **Note:** Some loggers require a `BaseLogMessage` object (with `message`, `destinationPath`, `type` fields). For instance, the `File` and `Upstash` loggers need structured messages. --- title: "Reference: OtelConfig | Mastra Observability Docs" description: Documentation for the OtelConfig object, which configures OpenTelemetry instrumentation, tracing, and exporting behavior. --- # `OtelConfig` Source: https://mastra.ai/docs/reference/observability/otel-config The `OtelConfig` object is used to configure OpenTelemetry instrumentation, tracing, and exporting behavior within your application. By adjusting its properties, you can control how telemetry data (such as traces) is collected, sampled, and exported. To use the `OtelConfig` within Mastra, pass it as the value of the `telemetry` key when initializing Mastra. This will configure Mastra to use your custom OpenTelemetry settings for tracing and instrumentation. ```typescript showLineNumbers copy import { Mastra } from 'mastra'; const otelConfig: OtelConfig = { serviceName: 'my-awesome-service', enabled: true, sampling: { type: 'ratio', probability: 0.5, }, export: { type: 'otlp', endpoint: 'https://otel-collector.example.com/v1/traces', headers: { Authorization: 'Bearer YOUR_TOKEN_HERE', }, }, }; ``` ### Properties ', isOptional: true, description: 'Additional headers to send with OTLP requests, useful for authentication or routing.', }, ], }, ]} /> --- title: "Reference: Braintrust | Observability | Mastra Docs" description: Documentation for integrating Braintrust with Mastra, an evaluation and monitoring platform for LLM applications. --- # Braintrust Source: https://mastra.ai/docs/reference/observability/providers/braintrust Braintrust is an evaluation and monitoring platform for LLM applications. ## Configuration To use Braintrust with Mastra, configure these environment variables: ```env OTEL_EXPORTER_OTLP_ENDPOINT=https://api.braintrust.dev/otel OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer , x-bt-parent=project_id:" ``` ## Implementation Here's how to configure Mastra to use Braintrust: ```typescript import { Mastra } from "@mastra/core"; export const mastra = new Mastra({ // ... other config telemetry: { serviceName: "your-service-name", enabled: true, export: { type: "otlp", }, }, }); ``` ## Dashboard Access your Braintrust dashboard at [braintrust.dev](https://www.braintrust.dev/) --- title: "Reference: Provider List | Observability | Mastra Docs" description: Overview of observability providers supported by Mastra, including SigNoz, Braintrust, Langfuse, and more. --- # Observability Providers Source: https://mastra.ai/docs/reference/observability/providers Observability providers include: - [SigNoz](./providers/signoz.mdx) - [Braintrust](./providers/braintrust.mdx) - [Langfuse](./providers/langfuse.mdx) - [Langsmith](./providers/langsmith.mdx) - [New Relic](./providers/new-relic.mdx) - [Traceloop](./providers/traceloop.mdx) - [Laminar](./providers/laminar.mdx) --- title: "Reference: Laminar Integration | Mastra Observability Docs" description: Documentation for integrating Laminar with Mastra, a specialized observability platform for LLM applications. 
--- # Laminar Source: https://mastra.ai/docs/reference/observability/providers/laminar Laminar is a specialized observability platform for LLM applications. ## Configuration To use Laminar with Mastra, configure these environment variables: ```env OTEL_EXPORTER_OTLP_ENDPOINT=https://api.lmnr.ai:8443 OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer your_api_key, x-laminar-team-id=your_team_id" ``` ## Implementation Here's how to configure Mastra to use Laminar: ```typescript import { Mastra } from "@mastra/core"; export const mastra = new Mastra({ // ... other config telemetry: { serviceName: "your-service-name", enabled: true, export: { type: "otlp", protocol: "grpc", }, }, }); ``` ## Dashboard Access your Laminar dashboard at [https://lmnr.ai/](https://lmnr.ai/) --- title: "Reference: Langfuse Integration | Mastra Observability Docs" description: Documentation for integrating Langfuse with Mastra, an open-source observability platform for LLM applications. --- # Langfuse Source: https://mastra.ai/docs/reference/observability/providers/langfuse Langfuse is an open-source observability platform designed specifically for LLM applications. > **Note**: Currently, only AI-related calls will contain detailed telemetry data. Other operations will create traces but with limited information. ## Configuration To use Langfuse with Mastra, you'll need to configure the following environment variables: ```env LANGFUSE_PUBLIC_KEY=your_public_key LANGFUSE_SECRET_KEY=your_secret_key LANGFUSE_BASEURL=https://cloud.langfuse.com # Optional - defaults to cloud.langfuse.com ``` **Important**: When configuring the telemetry export settings, the `traceName` parameter must be set to `"ai"` for the Langfuse integration to work properly. ## Implementation Here's how to configure Mastra to use Langfuse: ```typescript import { Mastra } from "@mastra/core"; import { LangfuseExporter } from "langfuse-vercel"; export const mastra = new Mastra({ // ... other config telemetry: { serviceName: "ai", // this must be set to "ai" so that the LangfuseExporter thinks it's an AI SDK trace enabled: true, export: { type: "custom", exporter: new LangfuseExporter({ publicKey: process.env.LANGFUSE_PUBLIC_KEY, secretKey: process.env.LANGFUSE_SECRET_KEY, baseUrl: process.env.LANGFUSE_BASEURL, }), }, }, }); ``` ## Dashboard Once configured, you can view your traces and analytics in the Langfuse dashboard at [cloud.langfuse.com](https://cloud.langfuse.com) --- title: "Reference: LangSmith Integration | Mastra Observability Docs" description: Documentation for integrating LangSmith with Mastra, a platform for debugging, testing, evaluating, and monitoring LLM applications. --- # LangSmith Source: https://mastra.ai/docs/reference/observability/providers/langsmith LangSmith is LangChain's platform for debugging, testing, evaluating, and monitoring LLM applications. > **Note**: Currently, this integration only traces AI-related calls in your application. Other types of operations are not captured in the telemetry data. ## Configuration To use LangSmith with Mastra, you'll need to configure the following environment variables: ```env LANGSMITH_TRACING=true LANGSMITH_ENDPOINT=https://api.smith.langchain.com LANGSMITH_API_KEY=your-api-key LANGSMITH_PROJECT=your-project-name ``` ## Implementation Here's how to configure Mastra to use LangSmith: ```typescript import { Mastra } from "@mastra/core"; import { AISDKExporter } from "langsmith/vercel"; export const mastra = new Mastra({ // ... 
other config telemetry: { serviceName: "your-service-name", enabled: true, export: { type: "custom", exporter: new AISDKExporter(), }, }, }); ``` ## Dashboard Access your traces and analytics in the LangSmith dashboard at [smith.langchain.com](https://smith.langchain.com) --- title: "Reference: LangWatch Integration | Mastra Observability Docs" description: Documentation for integrating LangWatch with Mastra, a specialized observability platform for LLM applications. --- # LangWatch Source: https://mastra.ai/docs/reference/observability/providers/langwatch LangWatch is a specialized observability platform for LLM applications. ## Configuration To use LangWatch with Mastra, configure these environment variables: ```env LANGWATCH_API_KEY=your_api_key LANGWATCH_PROJECT_ID=your_project_id ``` ## Implementation Here's how to configure Mastra to use LangWatch: ```typescript import { Mastra } from "@mastra/core"; import { LangWatchExporter } from "langwatch"; export const mastra = new Mastra({ // ... other config telemetry: { serviceName: "your-service-name", enabled: true, export: { type: "custom", exporter: new LangWatchExporter({ apiKey: process.env.LANGWATCH_API_KEY, projectId: process.env.LANGWATCH_PROJECT_ID, }), }, }, }); ``` ## Dashboard Access your LangWatch dashboard at [app.langwatch.ai](https://app.langwatch.ai) --- title: "Reference: New Relic Integration | Mastra Observability Docs" description: Documentation for integrating New Relic with Mastra, a comprehensive observability platform supporting OpenTelemetry for full-stack monitoring. --- # New Relic Source: https://mastra.ai/docs/reference/observability/providers/new-relic New Relic is a comprehensive observability platform that supports OpenTelemetry (OTLP) for full-stack monitoring. ## Configuration To use New Relic with Mastra via OTLP, configure these environment variables: ```env OTEL_EXPORTER_OTLP_ENDPOINT=https://otlp.nr-data.net:4317 OTEL_EXPORTER_OTLP_HEADERS="api-key=your_license_key" ``` ## Implementation Here's how to configure Mastra to use New Relic: ```typescript import { Mastra } from "@mastra/core"; export const mastra = new Mastra({ // ... other config telemetry: { serviceName: "your-service-name", enabled: true, export: { type: "otlp", }, }, }); ``` ## Dashboard View your telemetry data in the New Relic One dashboard at [one.newrelic.com](https://one.newrelic.com) --- title: "Reference: SigNoz Integration | Mastra Observability Docs" description: Documentation for integrating SigNoz with Mastra, an open-source APM and observability platform providing full-stack monitoring through OpenTelemetry. --- # SigNoz Source: https://mastra.ai/docs/reference/observability/providers/signoz SigNoz is an open-source APM and observability platform that provides full-stack monitoring capabilities through OpenTelemetry. ## Configuration To use SigNoz with Mastra, configure these environment variables: ```env OTEL_EXPORTER_OTLP_ENDPOINT=https://ingest.{region}.signoz.cloud:443 OTEL_EXPORTER_OTLP_HEADERS=signoz-ingestion-key=your_signoz_token ``` ## Implementation Here's how to configure Mastra to use SigNoz: ```typescript import { Mastra } from "@mastra/core"; export const mastra = new Mastra({ // ... 
other config telemetry: { serviceName: "your-service-name", enabled: true, export: { type: "otlp", }, }, }); ``` ## Dashboard Access your SigNoz dashboard at [signoz.io](https://signoz.io/) --- title: "Reference: Traceloop Integration | Mastra Observability Docs" description: Documentation for integrating Traceloop with Mastra, an OpenTelemetry-native observability platform for LLM applications. --- # Traceloop Source: https://mastra.ai/docs/reference/observability/providers/traceloop Traceloop is an OpenTelemetry-native observability platform specifically designed for LLM applications. ## Configuration To use Traceloop with Mastra, configure these environment variables: ```env OTEL_EXPORTER_OTLP_ENDPOINT=https://api.traceloop.com OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer your_api_key, x-traceloop-destination-id=your_destination_id" ``` ## Implementation Here's how to configure Mastra to use Traceloop: ```typescript import { Mastra } from "@mastra/core"; export const mastra = new Mastra({ // ... other config telemetry: { serviceName: "your-service-name", enabled: true, export: { type: "otlp", }, }, }); ``` ## Dashboard Access your traces and analytics in the Traceloop dashboard at [app.traceloop.com](https://app.traceloop.com) --- title: "Reference: Astra Vector Store | Vector Databases | RAG | Mastra Docs" description: Documentation for the AstraVector class in Mastra, which provides vector search using DataStax Astra DB. --- # Astra Vector Store Source: https://mastra.ai/docs/reference/rag/astra The AstraVector class provides vector search using [DataStax Astra DB](https://www.datastax.com/products/datastax-astra), a cloud-native, serverless database built on Apache Cassandra. It provides vector search capabilities with enterprise-grade scalability and high availability. ## Constructor Options ## Methods ### createIndex() ### upsert() []", isOptional: true, description: "Metadata for each vector", }, { name: "ids", type: "string[]", isOptional: true, description: "Optional vector IDs (auto-generated if not provided)", }, ]} /> ### query() ", isOptional: true, description: "Metadata filters for the query", }, { name: "includeVector", type: "boolean", isOptional: true, defaultValue: "false", description: "Whether to include vectors in the results", }, ]} /> ### listIndexes() Returns an array of index names as strings. 
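To see how the methods above fit together, here is a minimal, hypothetical sketch. The import path and constructor option names are assumptions (modeled on the environment variables listed later on this page); the method calls follow the parameters described above.

```typescript
// Assumed import path and constructor options; adjust to your setup
import { AstraVector } from "@mastra/astra";

const store = new AstraVector({
  token: process.env.ASTRA_DB_TOKEN!, // assumed option name
  endpoint: process.env.ASTRA_DB_ENDPOINT!, // assumed option name
});

// Create a collection for 1536-dimensional embeddings
await store.createIndex({ indexName: "docs", dimension: 1536 });

// Insert a couple of vectors with metadata
await store.upsert({
  indexName: "docs",
  vectors: [
    [0.1, 0.2 /* ... */],
    [0.3, 0.4 /* ... */],
  ],
  metadata: [{ source: "intro" }, { source: "faq" }],
});

// Query the closest matches, filtered by metadata
const results = await store.query({
  indexName: "docs",
  queryVector: [0.1, 0.2 /* ... */],
  topK: 5,
  filter: { source: "faq" },
});
```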
### describeIndex() Returns: ```typescript copy interface IndexStats { dimension: number; count: number; metric: "cosine" | "euclidean" | "dotproduct"; } ``` ### deleteIndex() ### updateIndexById() ", isOptional: true, description: "New metadata values", }, ], }, ]} /> ### deleteIndexById() ## Response Types Query results are returned in this format: ```typescript copy interface QueryResult { id: string; score: number; metadata: Record; vector?: number[]; // Only included if includeVector is true } ``` ## Error Handling The store throws typed errors that can be caught: ```typescript copy try { await store.query({ indexName: "index_name", queryVector: queryVector, }); } catch (error) { if (error instanceof VectorStoreError) { console.log(error.code); // 'connection_failed' | 'invalid_dimension' | etc console.log(error.details); // Additional error context } } ``` ## Environment Variables Required environment variables: - `ASTRA_DB_TOKEN`: Your Astra DB API token - `ASTRA_DB_ENDPOINT`: Your Astra DB API endpoint ### Related - [Metadata Filters](./metadata-filters) --- title: "Reference: Chroma Vector Store | Vector Databases | RAG | Mastra Docs" description: Documentation for the ChromaVector class in Mastra, which provides vector search using ChromaDB. --- # Chroma Vector Store Source: https://mastra.ai/docs/reference/rag/chroma The ChromaVector class provides vector search using [ChromaDB](https://www.trychroma.com/), an open-source embedding database. It offers efficient vector search with metadata filtering and hybrid search capabilities. ## Constructor Options ### auth ## Methods ### createIndex() ### upsert() []", isOptional: true, description: "Metadata for each vector", }, { name: "ids", type: "string[]", isOptional: true, description: "Optional vector IDs (auto-generated if not provided)", }, { name: "documents", type: "string[]", isOptional: true, description: "Chroma-specific: Original text documents associated with the vectors", }, ]} /> ### query() ", isOptional: true, description: "Metadata filters for the query", }, { name: "includeVector", type: "boolean", isOptional: true, defaultValue: "false", description: "Whether to include vectors in the results", }, { name: "documentFilter", type: "Record", isOptional: true, description: "Chroma-specific: Filter to apply on the document content", }, ]} /> ### listIndexes() Returns an array of index names as strings. 
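The `documents` option on `upsert()` and the `documentFilter` option on `query()` are Chroma-specific. Here is a hedged sketch of how they might be used, assuming `store` is an already-configured `ChromaVector` instance (constructor options are omitted) and that `documentFilter` accepts Chroma's native `$contains` syntax:

```typescript
// `store` is assumed to be an already-configured ChromaVector instance.

// Store the original documents alongside their embeddings (Chroma-specific)
await store.upsert({
  indexName: "support_articles",
  vectors: [
    [0.1, 0.2 /* ... */],
    [0.3, 0.4 /* ... */],
  ],
  metadata: [{ category: "billing" }, { category: "setup" }],
  documents: [
    "How to update your payment method",
    "Installing the CLI on macOS",
  ],
});

// Combine a metadata filter with a document-content filter (Chroma-specific)
const results = await store.query({
  indexName: "support_articles",
  queryVector: [0.1, 0.2 /* ... */],
  topK: 3,
  filter: { category: "billing" },
  documentFilter: { $contains: "payment" }, // assumed filter syntax
});
```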
### describeIndex()

Returns:

```typescript copy
interface IndexStats {
  dimension: number;
  count: number;
  metric: "cosine" | "euclidean" | "dotproduct";
}
```

### deleteIndex()

### updateIndexById()

The `update` object can contain:

- `metadata` (optional): New metadata to replace the existing metadata

### deleteIndexById()

## Response Types

Query results are returned in this format:

```typescript copy
interface QueryResult {
  id: string;
  score: number;
  metadata: Record<string, any>;
  document?: string; // Chroma-specific: Original document if it was stored
  vector?: number[]; // Only included if includeVector is true
}
```

## Error Handling

The store throws typed errors that can be caught:

```typescript copy
try {
  await store.query({
    indexName: "index_name",
    queryVector: queryVector,
  });
} catch (error) {
  if (error instanceof VectorStoreError) {
    console.log(error.code); // 'connection_failed' | 'invalid_dimension' | etc
    console.log(error.details); // Additional error context
  }
}
```

### Related

- [Metadata Filters](./metadata-filters)

---
title: "Reference: .chunk() | Document Processing | RAG | Mastra Docs"
description: Documentation for the chunk function in Mastra, which splits documents into smaller segments using various strategies.
---

# Reference: .chunk()
Source: https://mastra.ai/docs/reference/rag/chunk

The `.chunk()` function splits documents into smaller segments using various strategies and options.

## Example

```typescript
import { MDocument } from '@mastra/rag';

const doc = MDocument.fromMarkdown(`
# Introduction
This is a sample document that we want to split into chunks.

## Section 1
Here is the first section with some content.

## Section 2
Here is another section with different content.
`);

// Basic chunking with defaults
const chunks = await doc.chunk();

// Markdown-specific chunking with header extraction
const chunksWithMetadata = await doc.chunk({
  strategy: 'markdown',
  headers: [['#', 'title'], ['##', 'section']],
  extract: {
    fields: [
      { name: 'summary', description: 'A brief summary of the chunk content' },
      { name: 'keywords', description: 'Key terms found in the chunk' }
    ]
  }
});
```

## Parameters

## Strategy-Specific Options

Strategy-specific options are passed as top-level parameters alongside the strategy parameter. For example:

```typescript showLineNumbers copy
// HTML strategy example
const htmlChunks = await doc.chunk({
  strategy: 'html',
  headers: [['h1', 'title'], ['h2', 'subtitle']], // HTML-specific option
  sections: [['div.content', 'main']], // HTML-specific option
  size: 500 // general option
});

// Markdown strategy example
const markdownChunks = await doc.chunk({
  strategy: 'markdown',
  headers: [['#', 'title'], ['##', 'section']], // Markdown-specific option
  stripHeaders: true, // Markdown-specific option
  overlap: 50 // general option
});

// Token strategy example
const tokenChunks = await doc.chunk({
  strategy: 'token',
  encodingName: 'gpt2', // Token-specific option
  modelName: 'gpt-3.5-turbo', // Token-specific option
  size: 1000 // general option
});
```

The options documented below are passed directly at the top level of the configuration object, not nested within a separate options object.
### HTML ", description: "Array of [selector, metadata key] pairs for header-based splitting", }, { name: "sections", type: "Array<[string, string]>", description: "Array of [selector, metadata key] pairs for section-based splitting", }, { name: "returnEachLine", type: "boolean", isOptional: true, description: "Whether to return each line as a separate chunk", }, ]} /> ### Markdown ", description: "Array of [header level, metadata key] pairs", }, { name: "stripHeaders", type: "boolean", isOptional: true, description: "Whether to remove headers from the output", }, { name: "returnEachLine", type: "boolean", isOptional: true, description: "Whether to return each line as a separate chunk", }, ]} /> ### Token ### JSON ## Return Value Returns a `MDocument` instance containing the chunked documents. Each chunk includes: ```typescript interface DocumentNode { text: string; metadata: Record; embedding?: number[]; } ``` --- title: "Reference: MDocument | Document Processing | RAG | Mastra Docs" description: Documentation for the MDocument class in Mastra, which handles document processing and chunking. --- # MDocument Source: https://mastra.ai/docs/reference/rag/document The MDocument class processes documents for RAG applications. The main methods are `.chunk()` and `.extractMetadata()`. ## Constructor }>", description: "Array of document chunks with their text content and optional metadata", }, { name: "type", type: "'text' | 'html' | 'markdown' | 'json' | 'latex'", description: "Type of document content", } ]} /> ## Static Methods ### fromText() Creates a document from plain text content. ```typescript static fromText(text: string, metadata?: Record): MDocument ``` ### fromHTML() Creates a document from HTML content. ```typescript static fromHTML(html: string, metadata?: Record): MDocument ``` ### fromMarkdown() Creates a document from Markdown content. ```typescript static fromMarkdown(markdown: string, metadata?: Record): MDocument ``` ### fromJSON() Creates a document from JSON content. ```typescript static fromJSON(json: string, metadata?: Record): MDocument ``` ## Instance Methods ### chunk() Splits document into chunks and optionally extracts metadata. ```typescript async chunk(params?: ChunkParams): Promise ``` See [chunk() reference](./chunk) for detailed options. ### getDocs() Returns array of processed document chunks. ```typescript getDocs(): Chunk[] ``` ### getText() Returns array of text strings from chunks. ```typescript getText(): string[] ``` ### getMetadata() Returns array of metadata objects from chunks. ```typescript getMetadata(): Record[] ``` ### extractMetadata() Extracts metadata using specified extractors. See [ExtractParams reference](./extract-params) for details. ```typescript async extractMetadata(params: ExtractParams): Promise ``` ## Examples ```typescript import { MDocument } from '@mastra/rag'; // Create document from text const doc = MDocument.fromText('Your content here'); // Split into chunks with metadata extraction const chunks = await doc.chunk({ strategy: 'markdown', headers: [['#', 'title'], ['##', 'section']], extract: { fields: [ { name: 'summary', description: 'A brief summary' }, { name: 'keywords', description: 'Key terms' } ] } }); // Get processed chunks const docs = doc.getDocs(); const texts = doc.getText(); const metadata = doc.getMetadata(); ``` --- title: "Reference: embed() | Document Embedding | RAG | Mastra Docs" description: Documentation for embedding functionality in Mastra using the AI SDK. 
--- # Embed Source: https://mastra.ai/docs/reference/rag/embeddings Mastra uses the AI SDK's `embed` and `embedMany` functions to generate vector embeddings for text inputs, enabling similarity search and RAG workflows. ## Single Embedding The `embed` function generates a vector embedding for a single text input: ```typescript import { embed } from 'ai'; const result = await embed({ model: openai.embedding('text-embedding-3-small'), value: "Your text to embed", maxRetries: 2 // optional, defaults to 2 }); ``` ### Parameters ", description: "The text content or object to embed" }, { name: "maxRetries", type: "number", description: "Maximum number of retries per embedding call. Set to 0 to disable retries.", isOptional: true, defaultValue: "2" }, { name: "abortSignal", type: "AbortSignal", description: "Optional abort signal to cancel the request", isOptional: true }, { name: "headers", type: "Record", description: "Additional HTTP headers for the request (only for HTTP-based providers)", isOptional: true } ]} /> ### Return Value ## Multiple Embeddings For embedding multiple texts at once, use the `embedMany` function: ```typescript import { embedMany } from 'ai'; const result = await embedMany({ model: openai.embedding('text-embedding-3-small'), values: ["First text", "Second text", "Third text"], maxRetries: 2 // optional, defaults to 2 }); ``` ### Parameters []", description: "Array of text content or objects to embed" }, { name: "maxRetries", type: "number", description: "Maximum number of retries per embedding call. Set to 0 to disable retries.", isOptional: true, defaultValue: "2" }, { name: "abortSignal", type: "AbortSignal", description: "Optional abort signal to cancel the request", isOptional: true }, { name: "headers", type: "Record", description: "Additional HTTP headers for the request (only for HTTP-based providers)", isOptional: true } ]} /> ### Return Value ## Example Usage ```typescript import { embed, embedMany } from 'ai'; import { openai } from '@ai-sdk/openai'; // Single embedding const singleResult = await embed({ model: openai.embedding('text-embedding-3-small'), value: "What is the meaning of life?", }); // Multiple embeddings const multipleResult = await embedMany({ model: openai.embedding('text-embedding-3-small'), values: [ "First question about life", "Second question about universe", "Third question about everything" ], }); ``` For more detailed information about embeddings in the Vercel AI SDK, see: - [AI SDK Embeddings Overview](https://sdk.vercel.ai/docs/ai-sdk-core/embeddings) - [embed()](https://sdk.vercel.ai/docs/reference/ai-sdk-core/embed) - [embedMany()](https://sdk.vercel.ai/docs/reference/ai-sdk-core/embed-many) --- title: "Reference: ExtractParams | Document Processing | RAG | Mastra Docs" description: Documentation for metadata extraction configuration in Mastra. --- # ExtractParams Source: https://mastra.ai/docs/reference/rag/extract-params ExtractParams configures metadata extraction from document chunks. ## Example ## ExtractParams `ExtractParams` configures automatic metadata extraction from chunks using LLM analysis. 
```typescript showLineNumbers copy const doc = new Document(text); const chunks = await doc.chunk({ extract: { fields: [ { name: 'summary', description: 'A 1-2 sentence summary of the main points' }, { name: 'entities', description: 'List of companies, people, and locations mentioned' }, { name: 'custom_field', description: 'Any other metadata you want to extract, guided by this description' } ], model: 'gpt-4o-mini' // Optional: specify a different model } }); ``` ## Parameters ", description: "Array of fields to extract from each chunk", isOptional: false }, { name: "model", type: "string", description: "OpenAI model to use for extraction", defaultValue: "gpt-3.5-turbo", isOptional: true } ]} /> ## Field Types The fields are flexible - you can define any metadata fields you want to extract. Common field types include: - `summary`: Brief overview of chunk content - `keywords`: Key terms or concepts - `topics`: Main subjects discussed - `entities`: Named entities (people, places, organizations) - `sentiment`: Emotional tone - `language`: Detected language - `timestamp`: Temporal references - `categories`: Content classification Example: --- title: "Reference: GraphRAG | Graph-based RAG | RAG | Mastra Docs" description: Documentation for the GraphRAG class in Mastra, which implements a graph-based approach to retrieval augmented generation. --- # GraphRAG Source: https://mastra.ai/docs/reference/rag/graph-rag The `GraphRAG` class implements a graph-based approach to retrieval augmented generation. It creates a knowledge graph from document chunks where nodes represent documents and edges represent semantic relationships, enabling both direct similarity matching and discovery of related content through graph traversal. ## Basic Usage ```typescript import { GraphRAG } from "@mastra/rag"; const graphRag = new GraphRAG({ dimension: 1536, threshold: 0.7 }); // Create the graph from chunks and embeddings graphRag.createGraph(documentChunks, embeddings); // Query the graph with embedding const results = await graphRag.query({ query: queryEmbedding, topK: 10, randomWalkSteps: 100, restartProb: 0.15 }); ``` ## Constructor Parameters ## Methods ### createGraph Creates a knowledge graph from document chunks and their embeddings. ```typescript createGraph(chunks: GraphChunk[], embeddings: GraphEmbedding[]): void ``` #### Parameters ### query Performs a graph-based search combining vector similarity and graph traversal. ```typescript query({ query, topK = 10, randomWalkSteps = 100, restartProb = 0.15 }: { query: number[]; topK?: number; randomWalkSteps?: number; restartProb?: number; }): RankedNode[] ``` #### Parameters #### Returns Returns an array of `RankedNode` objects, where each node contains: ", description: "Additional metadata associated with the chunk", }, { name: "score", type: "number", description: "Combined relevance score from graph traversal", } ]} /> ## Advanced Example ```typescript const graphRag = new GraphRAG({ dimension: 1536, threshold: 0.8 // Stricter similarity threshold }); // Create graph from chunks and embeddings graphRag.createGraph(documentChunks, embeddings); // Query with custom parameters const results = await graphRag.query({ query: queryEmbedding, topK: 5, randomWalkSteps: 200, restartProb: 0.2 }); ``` ## Related - [createGraphRAGTool](../tools/graph-rag-tool) --- title: "Default Vector Store | Vector Databases | RAG | Mastra Docs" description: Documentation for the LibSQLVector class in Mastra, which provides vector search using LibSQL with vector extensions. 
---

# LibSQLVector Store
Source: https://mastra.ai/docs/reference/rag/libsql

The LibSQLVector store provides vector search using [LibSQL](https://github.com/tursodatabase/libsql), a fork of SQLite with vector extensions, and [Turso](https://turso.tech/), offering a lightweight and efficient vector database solution. It's part of the `@mastra/core` package and offers efficient vector similarity search with metadata filtering.

## Installation

The default vector store is included in the core package:

```bash copy
npm install @mastra/core
```

## Usage

```typescript copy showLineNumbers
import { LibSQLVector } from "@mastra/core/vector/libsql";

// Create a new vector store instance
const store = new LibSQLVector({
  connectionUrl: process.env.DATABASE_URL,
  // Optional: for Turso cloud databases
  authToken: process.env.DATABASE_AUTH_TOKEN,
});

// Create an index
await store.createIndex({
  indexName: "myCollection",
  dimension: 1536,
});

// Add vectors with metadata
const vectors = [[0.1, 0.2, ...], [0.3, 0.4, ...]];
const metadata = [
  { text: "first document", category: "A" },
  { text: "second document", category: "B" }
];
await store.upsert({
  indexName: "myCollection",
  vectors,
  metadata,
});

// Query similar vectors
const queryVector = [0.1, 0.2, ...];
const results = await store.query({
  indexName: "myCollection",
  queryVector,
  topK: 10, // top K results
  filter: { category: "A" } // optional metadata filter
});
```

## Constructor Options

## Methods

### createIndex()

Creates a new vector collection. The index name must start with a letter or underscore and can only contain letters, numbers, and underscores. The dimension must be a positive integer.

### upsert()

Adds or updates vectors and their metadata in the index. Uses a transaction to ensure all vectors are inserted atomically; if any insert fails, the entire operation is rolled back. Accepts the index name and vectors, plus optional `metadata` (metadata for each vector) and `ids` (vector IDs, auto-generated if not provided).

### query()

Searches for similar vectors with optional metadata filtering.

### describeIndex()

Gets information about an index. Returns:

```typescript copy
interface IndexStats {
  dimension: number;
  count: number;
  metric: "cosine" | "euclidean" | "dotproduct";
}
```

### deleteIndex()

Deletes an index and all its data.

### listIndexes()

Lists all vector indexes in the database. Returns: `Promise<string[]>`

### truncateIndex()

Removes all vectors from an index while keeping the index structure.

### updateIndexById()

Updates a specific vector entry by its ID with new vector data and/or metadata. The `update` object can contain a new `vector` and/or new `metadata`.

### deleteIndexById()

Deletes a specific vector entry from an index by its ID.
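A short sketch of the maintenance methods above, continuing with the `store` instance from the Usage section. The exact call signatures are not shown on this page, so the shapes below are assumptions modeled on the PG Vector Store examples elsewhere in these docs.

```typescript
// Update the metadata (and/or vector) of a single entry
// (assumed signature, modeled on the PG Vector Store examples)
await store.updateIndexById("myCollection", "vector-123", {
  metadata: { category: "B" },
});

// Delete a single entry by its ID (assumed signature)
await store.deleteIndexById("myCollection", "vector-123");

// Remove every vector but keep the index definition (assumed signature)
await store.truncateIndex({ indexName: "myCollection" });
```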
## Response Types Query results are returned in this format: ```typescript copy interface QueryResult { id: string; score: number; metadata: Record; vector?: number[]; // Only included if includeVector is true } ``` ## Error Handling The store throws specific errors for different failure cases: ```typescript copy try { await store.query({ indexName: "my-collection", queryVector: queryVector, }); } catch (error) { // Handle specific error cases if (error.message.includes("Invalid index name format")) { console.error( "Index name must start with a letter/underscore and contain only alphanumeric characters", ); } else if (error.message.includes("Table not found")) { console.error("The specified index does not exist"); } else { console.error("Vector store error:", error.message); } } ``` Common error cases include: - Invalid index name format - Invalid vector dimensions - Table/index not found - Database connection issues - Transaction failures during upsert ### Related - [Metadata Filters](./metadata-filters) --- title: "Reference: Metadata Filters | Metadata Filtering | RAG | Mastra Docs" description: Documentation for metadata filtering capabilities in Mastra, which allow for precise querying of vector search results across different vector stores. --- # Metadata Filters Source: https://mastra.ai/docs/reference/rag/metadata-filters Mastra provides a unified metadata filtering syntax across all vector stores, based on MongoDB/Sift query syntax. Each vector store translates these filters into their native format. ## Basic Example ```typescript import { PgVector } from '@mastra/pg'; const store = new PgVector(connectionString); const results = await store.query({ indexName: "my_index", queryVector: queryVector, topK: 10, filter: { category: "electronics", // Simple equality price: { $gt: 100 }, // Numeric comparison tags: { $in: ["sale", "new"] } // Array membership } }); ``` ## Supported Operators ## Common Rules and Restrictions 1. Field names cannot: - Contain dots (.) unless referring to nested fields - Start with $ or contain null characters - Be empty strings 2. Values must be: - Valid JSON types (string, number, boolean, object, array) - Not undefined - Properly typed for the operator (e.g., numbers for numeric comparisons) 3. Logical operators: - Must contain valid conditions - Cannot be empty - Must be properly nested - Can only be used at top level or nested within other logical operators - Cannot be used at field level or nested inside a field - Cannot be used inside an operator - Valid: `{ "$and": [{ "field": { "$gt": 100 } }] }` - Valid: `{ "$or": [{ "$and": [{ "field": { "$gt": 100 } }] }] }` - Invalid: `{ "field": { "$and": [{ "$gt": 100 }] } }` - Invalid: `{ "field": { "$gt": { "$and": [{...}] } } }` 4. $not operator: - Must be an object - Cannot be empty - Can be used at field level or top level - Valid: `{ "$not": { "field": "value" } }` - Valid: `{ "field": { "$not": { "$eq": "value" } } }` 5. 
Operator nesting: - Logical operators must contain field conditions, not direct operators - Valid: `{ "$and": [{ "field": { "$gt": 100 } }] }` - Invalid: `{ "$and": [{ "$gt": 100 }] }` ## Store-Specific Notes ### Astra - Nested field queries are supported using dot notation - Array fields must be explicitly defined as arrays in the metadata - Metadata values are case-sensitive ### ChromaDB - Where filters only return results where the filtered field exists in metadata - Empty metadata fields are not included in filter results - Metadata fields must be present for negative matches (e.g., $ne won't match documents missing the field) ### Cloudflare Vectorize - Requires explicit metadata indexing before filtering can be used - Use `createMetadataIndex()` to index fields you want to filter on - Up to 10 metadata indexes per Vectorize index - String values are indexed up to first 64 bytes (truncated on UTF-8 boundaries) - Number values use float64 precision - Filter JSON must be under 2048 bytes - Field names cannot contain dots (.) or start with $ - Field names limited to 512 characters - Vectors must be re-upserted after creating new metadata indexes to be included in filtered results - Range queries may have reduced accuracy with very large datasets (~10M+ vectors) ### LibSQL - Supports nested object queries with dot notation - Array fields are validated to ensure they contain valid JSON arrays - Numeric comparisons maintain proper type handling - Empty arrays in conditions are handled gracefully - Metadata is stored in a JSONB column for efficient querying ### PgVector - Full support for PostgreSQL's native JSON querying capabilities - Efficient handling of array operations using native array functions - Proper type handling for numbers, strings, and booleans - Nested field queries use PostgreSQL's JSON path syntax internally - Metadata is stored in a JSONB column for efficient indexing ### Pinecone - Metadata field names are limited to 512 characters - Numeric values must be within the range of ±1e38 - Arrays in metadata are limited to 64KB total size - Nested objects are flattened with dot notation - Metadata updates replace the entire metadata object ### Qdrant - Supports advanced filtering with nested conditions - Payload (metadata) fields must be explicitly indexed for filtering - Efficient handling of geo-spatial queries - Special handling for null and empty values - Vector-specific filtering capabilities - Datetime values must be in RFC 3339 format ### Upstash - 512-character limit for metadata field keys - Query size is limited (avoid large IN clauses) - No support for null/undefined values in filters - Translates to SQL-like syntax internally - Case-sensitive string comparisons - Metadata updates are atomic ## Related - [Astra](./astra) - [Chroma](./chroma) - [Cloudflare Vectorize](./vectorize) - [LibSQL](./libsql) - [PgStore](./pg) - [Pinecone](./pinecone) - [Qdrant](./qdrant) - [Upstash](./upstash) --- title: "Reference: PG Vector Store | Vector Databases | RAG | Mastra Docs" description: Documentation for the PgVector class in Mastra, which provides vector search using PostgreSQL with pgvector extension. --- # PG Vector Store Source: https://mastra.ai/docs/reference/rag/pg The PgVector class provides vector search using [PostgreSQL](https://www.postgresql.org/) with [pgvector](https://github.com/pgvector/pgvector) extension. It provides robust vector similarity search capabilities within your existing PostgreSQL database. 
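Before diving into the individual options and methods, here is a brief usage sketch. The constructor call mirrors the Metadata Filters example earlier in these docs; the `createIndex`, `upsert`, and `query` argument shapes are assumptions that mirror the other vector stores on these pages.

```typescript
import { PgVector } from "@mastra/pg";

const store = new PgVector(process.env.POSTGRES_CONNECTION_STRING!);

// Create a table for 1536-dimensional embeddings (assumed argument shape)
await store.createIndex({ indexName: "embeddings", dimension: 1536 });

// Insert vectors with metadata (assumed argument shape)
await store.upsert({
  indexName: "embeddings",
  vectors: [[0.1, 0.2 /* ... */]],
  metadata: [{ category: "electronics", price: 129 }],
});

// Query with a metadata filter
const results = await store.query({
  indexName: "embeddings",
  queryVector: [0.1, 0.2 /* ... */],
  topK: 10,
  filter: { price: { $gt: 100 } },
});

// Close the connection pool when finished (see disconnect() below)
await store.disconnect();
```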
## Constructor Options ## Methods ### createIndex() #### IndexConfig #### Memory Requirements HNSW indexes require significant shared memory during construction. For 100K vectors: - Small dimensions (64d): ~60MB with default settings - Medium dimensions (256d): ~180MB with default settings - Large dimensions (384d+): ~250MB+ with default settings Higher M values or efConstruction values will increase memory requirements significantly. Adjust your system's shared memory limits if needed. ### upsert() []", isOptional: true, description: "Metadata for each vector", }, { name: "ids", type: "string[]", isOptional: true, description: "Optional vector IDs (auto-generated if not provided)", }, ]} /> ### query() ", isOptional: true, description: "Metadata filters", }, { name: "includeVector", type: "boolean", isOptional: true, defaultValue: "false", description: "Whether to include the vector in the result", }, { name: "minScore", type: "number", isOptional: true, defaultValue: "0", description: "Minimum similarity score threshold", }, { name: "options", type: "{ ef?: number; probes?: number }", isOptional: true, description: "Additional options for HNSW and IVF indexes", properties: [ { type: "object", parameters: [ { name: "ef", type: "number", description: "HNSW search parameter", isOptional: true, }, { name: "probes", type: "number", description: "IVF search parameter", isOptional: true, }, ], }, ], }, ]} /> ### listIndexes() Returns an array of index names as strings. ### describeIndex() Returns: ```typescript copy interface PGIndexStats { dimension: number; count: number; metric: "cosine" | "euclidean" | "dotproduct"; type: "flat" | "hnsw" | "ivfflat"; config: { m?: number; efConstruction?: number; lists?: number; probes?: number; }; } ``` ### deleteIndex() ### updateIndexById() ", description: "New metadata values", isOptional: true, }, ], }, ], }, ]} /> Updates an existing vector by ID. At least one of vector or metadata must be provided. ```typescript copy // Update just the vector await pgVector.updateIndexById("my_vectors", "vector123", { vector: [0.1, 0.2, 0.3], }); // Update just the metadata await pgVector.updateIndexById("my_vectors", "vector123", { metadata: { label: "updated" }, }); // Update both vector and metadata await pgVector.updateIndexById("my_vectors", "vector123", { vector: [0.1, 0.2, 0.3], metadata: { label: "updated" }, }); ``` ### deleteIndexById() Deletes a single vector by ID from the specified index. ```typescript copy await pgVector.deleteIndexById("my_vectors", "vector123"); ``` ### disconnect() Closes the database connection pool. Should be called when done using the store. ### buildIndex() Builds or rebuilds an index with specified metric and configuration. Will drop any existing index before creating the new one. 
```typescript copy // Define HNSW index await pgVector.buildIndex("my_vectors", "cosine", { type: "hnsw", hnsw: { m: 8, efConstruction: 32, }, }); // Define IVF index await pgVector.buildIndex("my_vectors", "cosine", { type: "ivfflat", ivf: { lists: 100, }, }); // Define flat index await pgVector.buildIndex("my_vectors", "cosine", { type: "flat", }); ``` ## Response Types Query results are returned in this format: ```typescript copy interface QueryResult { id: string; score: number; metadata: Record; vector?: number[]; // Only included if includeVector is true } ``` ## Error Handling The store throws typed errors that can be caught: ```typescript copy try { await store.query({ indexName: "index_name", queryVector: queryVector, }); } catch (error) { if (error instanceof VectorStoreError) { console.log(error.code); // 'connection_failed' | 'invalid_dimension' | etc console.log(error.details); // Additional error context } } ``` ## Best Practices - Regularly evaluate your index configuration to ensure optimal performance. - Adjust parameters like `lists` and `m` based on dataset size and query requirements. - Rebuild indexes periodically to maintain efficiency, especially after significant data changes. ### Related - [Metadata Filters](./metadata-filters) --- title: "Reference: Pinecone Vector Store | Vector DBs | RAG | Mastra Docs" description: Documentation for the PineconeVector class in Mastra, which provides an interface to Pinecone's vector database. --- # Pinecone Vector Store Source: https://mastra.ai/docs/reference/rag/pinecone The PineconeVector class provides an interface to [Pinecone](https://www.pinecone.io/)'s vector database. It provides real-time vector search, with features like hybrid search, metadata filtering, and namespace management. ## Constructor Options ## Methods ### createIndex() ### upsert() []", isOptional: true, description: "Metadata for each vector", }, { name: "ids", type: "string[]", isOptional: true, description: "Optional vector IDs (auto-generated if not provided)", }, ]} /> ### query() ", isOptional: true, description: "Metadata filters for the query", }, { name: "includeVector", type: "boolean", isOptional: true, defaultValue: "false", description: "Whether to include the vector in the result", }, ]} /> ### listIndexes() Returns an array of index names as strings. 
### describeIndex() Returns: ```typescript copy interface IndexStats { dimension: number; count: number; metric: "cosine" | "euclidean" | "dotproduct"; } ``` ### deleteIndex() ### updateIndexById() ", isOptional: true, description: "New metadata to update", }, ]} /> ### deleteIndexById() ## Response Types Query results are returned in this format: ```typescript copy interface QueryResult { id: string; score: number; metadata: Record; vector?: number[]; // Only included if includeVector is true } ``` ## Error Handling The store throws typed errors that can be caught: ```typescript copy try { await store.query({ indexName: "index_name", queryVector: queryVector, }); } catch (error) { if (error instanceof VectorStoreError) { console.log(error.code); // 'connection_failed' | 'invalid_dimension' | etc console.log(error.details); // Additional error context } } ``` ### Environment Variables Required environment variables: - `PINECONE_API_KEY`: Your Pinecone API key - `PINECONE_ENVIRONMENT`: Pinecone environment (e.g., 'us-west1-gcp') ### Related - [Metadata Filters](./metadata-filters) --- title: "Reference: Qdrant Vector Store | Vector Databases | RAG | Mastra Docs" description: Documentation for integrating Qdrant with Mastra, a vector similarity search engine for managing vectors and payloads. --- # Qdrant Vector Store Source: https://mastra.ai/docs/reference/rag/qdrant The QdrantVector class provides vector search using [Qdrant](https://qdrant.tech/), a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage vectors with additional payload and extended filtering support. ## Constructor Options ## Methods ### createIndex() ### upsert() []", isOptional: true, description: "Metadata for each vector", }, { name: "ids", type: "string[]", isOptional: true, description: "Optional vector IDs (auto-generated if not provided)", }, ]} /> ### query() ", isOptional: true, description: "Metadata filters for the query", }, { name: "includeVector", type: "boolean", isOptional: true, defaultValue: "false", description: "Whether to include vectors in the results", }, ]} /> ### listIndexes() Returns an array of index names as strings. ### describeIndex() Returns: ```typescript copy interface IndexStats { dimension: number; count: number; metric: "cosine" | "euclidean" | "dotproduct"; } ``` ### deleteIndex() ### updateIndexById() ; }", description: "Object containing the vector and/or metadata to update", }, ]} /> Updates a vector and/or its metadata in the specified index. If both vector and metadata are provided, both will be updated. If only one is provided, only that will be updated. ### deleteIndexById() Deletes a vector from the specified index by its ID. 
## Response Types Query results are returned in this format: ```typescript copy interface QueryResult { id: string; score: number; metadata: Record; vector?: number[]; // Only included if includeVector is true } ``` ## Error Handling The store throws typed errors that can be caught: ```typescript copy try { await store.query({ indexName: "index_name", queryVector: queryVector, }); } catch (error) { if (error instanceof VectorStoreError) { console.log(error.code); // 'connection_failed' | 'invalid_dimension' | etc console.log(error.details); // Additional error context } } ``` ### Related - [Metadata Filters](./metadata-filters) --- title: "Reference: Rerank | Document Retrieval | RAG | Mastra Docs" description: Documentation for the rerank function in Mastra, which provides advanced reranking capabilities for vector search results. --- # rerank() Source: https://mastra.ai/docs/reference/rag/rerank The `rerank()` function provides advanced reranking capabilities for vector search results by combining semantic relevance, vector similarity, and position-based scoring. ```typescript function rerank( results: QueryResult[], query: string, modelConfig: ModelConfig, options?: RerankerFunctionOptions ): Promise ``` ## Usage Example ```typescript import { openai } from "@ai-sdk/openai"; import { rerank } from "@mastra/rag"; const model = openai("gpt-4o-mini"); const rerankedResults = await rerank( vectorSearchResults, "How do I deploy to production?", model, { weights: { semantic: 0.5, vector: 0.3, position: 0.2 }, topK: 3 } ); ``` ## Parameters The rerank function accepts any LanguageModel from the Vercel AI SDK. When using the Cohere model `rerank-v3.5`, it will automatically use Cohere's reranking capabilities. > **Note:** For semantic scoring to work properly during re-ranking, each result must include the text content in its `metadata.text` field. ### RerankerFunctionOptions ## Returns The function returns an array of `RerankResult` objects: ### ScoringDetails ## Related - [createVectorQueryTool](../tools/vector-query-tool) --- title: "Reference: Turbopuffer Vector Store | Vector Databases | RAG | Mastra Docs" description: Documentation for integrating Turbopuffer with Mastra, a high-performance vector database for efficient similarity search. --- # Turbopuffer Vector Store Source: https://mastra.ai/docs/reference/rag/turbopuffer The TurbopufferVector class provides vector search using [Turbopuffer](https://turbopuffer.com/), a high-performance vector database optimized for RAG applications. Turbopuffer offers fast vector similarity search with advanced filtering capabilities and efficient storage management. ## Constructor Options ## Methods ### createIndex() ### upsert() []", isOptional: true, description: "Metadata for each vector", }, { name: "ids", type: "string[]", isOptional: true, description: "Optional vector IDs (auto-generated if not provided)", }, ]} /> ### query() ", isOptional: true, description: "Metadata filters for the query", }, { name: "includeVector", type: "boolean", isOptional: true, defaultValue: "false", description: "Whether to include vectors in the results", }, ]} /> ### listIndexes() Returns an array of index names as strings. 
### describeIndex() Returns: ```typescript copy interface IndexStats { dimension: number; count: number; metric: "cosine" | "euclidean" | "dotproduct"; } ``` ### deleteIndex() ## Response Types Query results are returned in this format: ```typescript copy interface QueryResult { id: string; score: number; metadata: Record; vector?: number[]; // Only included if includeVector is true } ``` ## Schema Configuration The `schemaConfigForIndex` option allows you to define explicit schemas for different indexes: ```typescript copy schemaConfigForIndex: (indexName: string) => { // Mastra's default embedding model and index for memory messages: if (indexName === "memory_messages_384") { return { dimensions: 384, schema: { thread_id: { type: "string", filterable: true, }, }, }; } else { throw new Error(`TODO: add schema for index: ${indexName}`); } }; ``` ## Error Handling The store throws typed errors that can be caught: ```typescript copy try { await store.query({ indexName: "index_name", queryVector: queryVector, }); } catch (error) { if (error instanceof VectorStoreError) { console.log(error.code); // 'connection_failed' | 'invalid_dimension' | etc console.log(error.details); // Additional error context } } ``` ### Related - [Metadata Filters](./metadata-filters) --- title: "Reference: Upstash Vector Store | Vector Databases | RAG | Mastra Docs" description: Documentation for the UpstashVector class in Mastra, which provides vector search using Upstash Vector. --- # Upstash Vector Store Source: https://mastra.ai/docs/reference/rag/upstash The UpstashVector class provides vector search using [Upstash Vector](https://upstash.com/vector), a serverless vector database service that provides vector similarity search with metadata filtering capabilities. ## Constructor Options ## Methods ### createIndex() Note: This method is a no-op for Upstash as indexes are created automatically. ### upsert() []", isOptional: true, description: "Metadata for each vector", }, { name: "ids", type: "string[]", isOptional: true, description: "Optional vector IDs (auto-generated if not provided)", }, ]} /> ### query() ", isOptional: true, description: "Metadata filters for the query", }, { name: "includeVector", type: "boolean", isOptional: true, defaultValue: "false", description: "Whether to include vectors in the results", }, ]} /> ### listIndexes() Returns an array of index names (namespaces) as strings. ### describeIndex() Returns: ```typescript copy interface IndexStats { dimension: number; count: number; metric: "cosine" | "euclidean" | "dotproduct"; } ``` ### deleteIndex() ### updateIndexById() The `update` object can have the following properties: - `vector` (optional): An array of numbers representing the new vector. - `metadata` (optional): A record of key-value pairs for metadata. Throws an error if neither `vector` nor `metadata` is provided, or if only `metadata` is provided. ### deleteIndexById() Attempts to delete an item by its ID from the specified index. Logs an error message if the deletion fails. 
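To illustrate the update rules above, here is a hedged sketch. The positional call shape is an assumption modeled on the PG Vector Store examples in these docs; the error cases follow the behavior described above (an update must include a vector, optionally with metadata).

```typescript
// Valid: replace the vector, optionally with new metadata
// (assumed call shape, modeled on the PG Vector Store examples)
await store.updateIndexById("products", "item-42", {
  vector: [0.12, 0.34 /* ... */],
  metadata: { status: "active" },
});

// Invalid: metadata alone (or an empty update) throws, per the rules above
try {
  await store.updateIndexById("products", "item-42", {
    metadata: { status: "archived" },
  });
} catch (error) {
  console.error("Update rejected:", error);
}

// Delete a single entry; failures are logged rather than thrown
await store.deleteIndexById("products", "item-42");
```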
## Response Types Query results are returned in this format: ```typescript copy interface QueryResult { id: string; score: number; metadata: Record; vector?: number[]; // Only included if includeVector is true } ``` ## Error Handling The store throws typed errors that can be caught: ```typescript copy try { await store.query({ indexName: "index_name", queryVector: queryVector, }); } catch (error) { if (error instanceof VectorStoreError) { console.log(error.code); // 'connection_failed' | 'invalid_dimension' | etc console.log(error.details); // Additional error context } } ``` ## Environment Variables Required environment variables: - `UPSTASH_VECTOR_URL`: Your Upstash Vector database URL - `UPSTASH_VECTOR_TOKEN`: Your Upstash Vector API token ### Related - [Metadata Filters](./metadata-filters) --- title: "Reference: Cloudflare Vector Store | Vector Databases | RAG | Mastra Docs" description: Documentation for the CloudflareVector class in Mastra, which provides vector search using Cloudflare Vectorize. --- # Cloudflare Vector Store Source: https://mastra.ai/docs/reference/rag/vectorize The CloudflareVector class provides vector search using [Cloudflare Vectorize](https://developers.cloudflare.com/vectorize/), a vector database service integrated with Cloudflare's edge network. ## Constructor Options ## Methods ### createIndex() ### upsert() []", isOptional: true, description: "Metadata for each vector", }, { name: "ids", type: "string[]", isOptional: true, description: "Optional vector IDs (auto-generated if not provided)", }, ]} /> ### query() ", isOptional: true, description: "Metadata filters for the query", }, { name: "includeVector", type: "boolean", isOptional: true, defaultValue: "false", description: "Whether to include vectors in the results", }, ]} /> ### listIndexes() Returns an array of index names as strings. ### describeIndex() Returns: ```typescript copy interface IndexStats { dimension: number; count: number; metric: "cosine" | "euclidean" | "dotproduct"; } ``` ### deleteIndex() ### createMetadataIndex() Creates an index on a metadata field to enable filtering. ### deleteMetadataIndex() Removes an index from a metadata field. ### listMetadataIndexes() Lists all metadata field indexes for an index. ### updateIndexById() Updates a vector or metadata for a specific ID within an index. ; }", description: "Object containing the vector and/or metadata to update", }, ]} /> ### deleteIndexById() Deletes a vector and its associated metadata for a specific ID within an index. ## Response Types Query results are returned in this format: ```typescript copy interface QueryResult { id: string; score: number; metadata: Record; vector?: number[]; } ``` ## Error Handling The store throws typed errors that can be caught: ```typescript copy try { await store.query({ indexName: "index_name", queryVector: queryVector, }); } catch (error) { if (error instanceof VectorStoreError) { console.log(error.code); // 'connection_failed' | 'invalid_dimension' | etc console.log(error.details); // Additional error context } } ``` ## Environment Variables Required environment variables: - `CLOUDFLARE_ACCOUNT_ID`: Your Cloudflare account ID - `CLOUDFLARE_API_TOKEN`: Your Cloudflare API token with Vectorize permissions ### Related - [Metadata Filters](./metadata-filters) --- title: "LibSQL Storage | Storage System | Mastra Core" description: Documentation for the LibSQL storage implementation in Mastra. 
---

# LibSQL Storage
Source: https://mastra.ai/docs/reference/storage/libsql

The LibSQL storage implementation provides a SQLite-compatible storage solution that can run both in-memory and as a persistent database.

## Installation

```bash
npm install @mastra/storage-libsql
```

## Usage

```typescript copy showLineNumbers
import { LibSQLStore } from "@mastra/core/storage/libsql";

// File database (development)
const storage = new LibSQLStore({
  config: {
    url: 'file:storage.db',
  }
});

// Persistent database (production)
const persistentStorage = new LibSQLStore({
  config: {
    url: process.env.DATABASE_URL,
  }
});
```

## Parameters

## Additional Notes

### In-Memory vs Persistent Storage

The file configuration (`file:storage.db`) is useful for:

- Development and testing
- Temporary storage
- Quick prototyping

For production use cases, use a persistent database URL: `libsql://your-database.turso.io`

### Schema Management

The storage implementation handles schema creation and updates automatically. It creates the following tables:

- `threads`: Stores conversation threads
- `messages`: Stores individual messages
- `metadata`: Stores additional metadata for threads and messages

---
title: "PostgreSQL Storage | Storage System | Mastra Core"
description: Documentation for the PostgreSQL storage implementation in Mastra.
---

# PostgreSQL Storage
Source: https://mastra.ai/docs/reference/storage/postgresql

The PostgreSQL storage implementation provides a production-ready storage solution using PostgreSQL databases.

## Installation

```bash
npm install @mastra/pg
```

## Usage

```typescript copy showLineNumbers
import { PostgresStore } from "@mastra/pg";

const storage = new PostgresStore({
  connectionString: process.env.DATABASE_URL,
});
```

## Parameters

## Additional Notes

### Schema Management

The storage implementation handles schema creation and updates automatically. It creates the following tables:

- `threads`: Stores conversation threads
- `messages`: Stores individual messages
- `metadata`: Stores additional metadata for threads and messages

---
title: "Upstash Storage | Storage System | Mastra Core"
description: Documentation for the Upstash storage implementation in Mastra.
---

# Upstash Storage
Source: https://mastra.ai/docs/reference/storage/upstash

The Upstash storage implementation provides a serverless-friendly storage solution using Upstash's Redis-compatible key-value store.
## Installation ```bash npm install @mastra/upstash ``` ## Usage ```typescript copy showLineNumbers import { UpstashStore } from "@mastra/upstash"; const storage = new UpstashStore({ url: process.env.UPSTASH_URL, token: process.env.UPSTASH_TOKEN, }); ``` ## Parameters ## Additional Notes ### Key Structure The Upstash storage implementation uses a key-value structure: - Thread keys: `{prefix}thread:{threadId}` - Message keys: `{prefix}message:{messageId}` - Metadata keys: `{prefix}metadata:{entityId}` ### Serverless Benefits Upstash storage is particularly well-suited for serverless deployments: - No connection management needed - Pay-per-request pricing - Global replication options - Edge-compatible ### Data Persistence Upstash provides: - Automatic data persistence - Point-in-time recovery - Cross-region replication options ### Performance Considerations For optimal performance: - Use appropriate key prefixes to organize data - Monitor Redis memory usage - Consider data expiration policies if needed --- title: "Reference: MastraMCPClient | Tool Discovery | Mastra Docs" description: API Reference for MastraMCPClient - A client implementation for the Model Context Protocol. --- # MastraMCPClient Source: https://mastra.ai/docs/reference/tools/client The `MastraMCPClient` class provides a client implementation for interacting with Model Context Protocol (MCP) servers. It handles connection management, resource discovery, and tool execution through the MCP protocol. ## Constructor Creates a new instance of the MastraMCPClient. ```typescript constructor({ name, version = '1.0.0', server, capabilities = {}, timeout = 60000, }: { name: string; server: StdioServerParameters | SSEClientParameters; capabilities?: ClientCapabilities; version?: string; timeout?: number; }) ``` ### Parameters ## Methods ### connect() Establishes a connection with the MCP server. ```typescript async connect(): Promise ``` ### disconnect() Closes the connection with the MCP server. ```typescript async disconnect(): Promise ``` ### resources() Retrieves the list of available resources from the server. ```typescript async resources(): Promise ``` ### tools() Fetches and initializes available tools from the server, converting them into Mastra-compatible tool formats. ```typescript async tools(): Promise> ``` Returns an object mapping tool names to their corresponding Mastra tool implementations. ## Examples ### Using with Mastra Agent #### Example with Stdio Server ```typescript import { Agent } from "@mastra/core/agent"; import { MastraMCPClient } from "@mastra/mcp"; import { openai } from "@ai-sdk/openai"; // Initialize the MCP client using mcp/fetch as an example https://hub.docker.com/r/mcp/fetch // Visit https://github.com/docker/mcp-servers for other reference docker mcp servers const fetchClient = new MastraMCPClient({ name: "fetch", server: { command: "docker", args: ["run", "-i", "--rm", "mcp/fetch"], }, }); // Create a Mastra Agent const agent = new Agent({ name: "Fetch agent", instructions: "You are able to fetch data from URLs on demand and discuss the response data with the user.", model: openai("gpt-4o-mini"), }); try { // Connect to the MCP server await fetchClient.connect(); // Gracefully handle process exits so the docker subprocess is cleaned up process.on("exit", () => { fetchClient.disconnect(); }); // Get available tools const tools = await fetchClient.tools(); // Use the agent with the MCP tools const response = await agent.generate( "Tell me about mastra.ai/docs. 
Tell me generally what this page is and the content it includes.", { toolsets: { fetch: tools, }, }, ); console.log("\n\n" + response.text); } catch (error) { console.error("Error:", error); } finally { // Always disconnect when done await fetchClient.disconnect(); } ``` #### Example with SSE Server ```typescript // Initialize the MCP client using an SSE server const sseClient = new MastraMCPClient({ name: "sse-client", server: { url: new URL("https://your-mcp-server.com/sse"), // Optional fetch request configuration requestInit: { headers: { Authorization: "Bearer your-token", }, }, }, }); // The rest of the usage is identical to the stdio example ``` ## Related Information - For managing multiple MCP servers in your application, see the [MCPConfiguration documentation](./mcp-configuration) - For more details about the Model Context Protocol, see the [@modelcontextprotocol/sdk documentation](https://github.com/modelcontextprotocol/typescript-sdk). --- title: "Reference: createDocumentChunkerTool() | Tools | Mastra Docs" description: Documentation for the Document Chunker Tool in Mastra, which splits documents into smaller chunks for efficient processing and retrieval. --- # createDocumentChunkerTool() Source: https://mastra.ai/docs/reference/tools/document-chunker-tool The `createDocumentChunkerTool()` function creates a tool for splitting documents into smaller chunks for efficient processing and retrieval. It supports different chunking strategies and configurable parameters. ## Basic Usage ```typescript import { createDocumentChunkerTool, MDocument } from "@mastra/rag"; const document = new MDocument({ text: "Your document content here...", metadata: { source: "user-manual" } }); const chunker = createDocumentChunkerTool({ doc: document, params: { strategy: "recursive", size: 512, overlap: 50, separator: "\n" } }); const { chunks } = await chunker.execute(); ``` ## Parameters ### ChunkParams ## Returns ## Example with Custom Parameters ```typescript const technicalDoc = new MDocument({ text: longDocumentContent, metadata: { type: "technical", version: "1.0" } }); const chunker = createDocumentChunkerTool({ doc: technicalDoc, params: { strategy: "recursive", size: 1024, // Larger chunks overlap: 100, // More overlap separator: "\n\n" // Split on double newlines } }); const { chunks } = await chunker.execute(); // Process the chunks chunks.forEach((chunk, index) => { console.log(`Chunk ${index + 1} length: ${chunk.content.length}`); }); ``` ## Tool Details The chunker is created as a Mastra tool with the following properties: - **Tool ID**: `Document Chunker {strategy} {size}` - **Description**: `Chunks document using {strategy} strategy with size {size} and {overlap} overlap` - **Input Schema**: Empty object (no additional inputs required) - **Output Schema**: Object containing the chunks array ## Related - [MDocument](../rag/document.mdx) - [createVectorQueryTool](./vector-query-tool) --- title: "Reference: createGraphRAGTool() | RAG | Mastra Tools Docs" description: Documentation for the Graph RAG Tool in Mastra, which enhances RAG by building a graph of semantic relationships between documents. --- # createGraphRAGTool() Source: https://mastra.ai/docs/reference/tools/graph-rag-tool The `createGraphRAGTool()` creates a tool that enhances RAG by building a graph of semantic relationships between documents. It uses the `GraphRAG` system under the hood to provide graph-based retrieval, finding relevant content through both direct similarity and connected relationships. 
## Usage Example ```typescript import { openai } from "@ai-sdk/openai"; import { createGraphRAGTool } from "@mastra/rag"; const graphTool = createGraphRAGTool({ vectorStoreName: "pinecone", indexName: "docs", model: openai.embedding('text-embedding-3-small'), graphOptions: { dimension: 1536, threshold: 0.7, randomWalkSteps: 100, restartProb: 0.15 } }); ``` ## Parameters ### GraphOptions ## Returns The tool returns an object with: ## Default Tool Description The default description focuses on: - Analyzing relationships between documents - Finding patterns and connections - Answering complex queries ## Advanced Example ```typescript const graphTool = createGraphRAGTool({ vectorStoreName: "pinecone", indexName: "docs", model: openai.embedding('text-embedding-3-small'), graphOptions: { dimension: 1536, threshold: 0.8, // Higher similarity threshold randomWalkSteps: 200, // More exploration steps restartProb: 0.2 // Higher restart probability } }); ``` ## Example with Custom Description ```typescript const graphTool = createGraphRAGTool({ vectorStoreName: "pinecone", indexName: "docs", model: openai.embedding('text-embedding-3-small'), description: "Analyze document relationships to find complex patterns and connections in our company's historical data" }); ``` This example shows how to customize the tool description for a specific use case while maintaining its core purpose of relationship analysis. ## Related - [createVectorQueryTool](./vector-query-tool) - [GraphRAG](../rag/graph-rag) --- title: "Reference: MCPConfiguration | Tool Management | Mastra Docs" description: API Reference for MCPConfiguration - A class for managing multiple Model Context Protocol servers and their tools. --- # MCPConfiguration Source: https://mastra.ai/docs/reference/tools/mcp-configuration The `MCPConfiguration` class provides a way to manage multiple MCP server connections and their tools in a Mastra application. It handles connection lifecycle, tool namespacing, and provides convenient access to tools across all configured servers. ## Constructor Creates a new instance of the MCPConfiguration class. ```typescript constructor({ id?: string; servers: Record }: { servers: { [serverName: string]: { // For stdio-based servers command?: string; args?: string[]; env?: Record; // For SSE-based servers url?: URL; requestInit?: RequestInit; } } }) ``` ### Parameters ", description: "A map of server configurations, where each key is a unique server identifier and the value is the server configuration.", }, ]} /> ## Methods ### getTools() Retrieves all tools from all configured servers, with tool names namespaced by their server name (in the format `serverName_toolName`) to prevent conflicts. Intended to be passed onto an Agent definition. ```ts new Agent({ tools: await mcp.getTools() }); ``` ### getToolsets() Returns an object mapping namespaced tool names (in the format `serverName.toolName`) to their tool implementations. Intended to be passed dynamically into the generate or stream method. 
```typescript const res = await agent.stream(prompt, { toolsets: await mcp.getToolsets(), }); ``` ## Examples ### Basic Usage ```typescript import { MCPConfiguration } from "@mastra/mcp"; import { Agent } from "@mastra/core/agent"; import { openai } from "@ai-sdk/openai"; const mcp = new MCPConfiguration({ servers: { stockPrice: { command: "npx", args: ["tsx", "stock-price.ts"], env: { API_KEY: "your-api-key", }, }, weather: { url: new URL("http://localhost:8080/sse"), }, }, }); // Create an agent with access to all tools const agent = new Agent({ name: "Multi-tool Agent", instructions: "You have access to multiple tool servers.", model: openai("gpt-4"), tools: await mcp.getTools(), }); ``` ### Using Toolsets in generate() or stream() ```typescript import { Agent } from "@mastra/core/agent"; import { MCPConfiguration } from "@mastra/mcp"; import { openai } from "@ai-sdk/openai"; // Create the agent first, without any tools const agent = new Agent({ name: "Multi-tool Agent", instructions: "You help users check stocks and weather.", model: openai("gpt-4"), }); // Later, configure MCP with user-specific settings const mcp = new MCPConfiguration({ servers: { stockPrice: { command: "npx", args: ["tsx", "stock-price.ts"], env: { API_KEY: "user-123-api-key", }, }, weather: { url: new URL("http://localhost:8080/sse"), requestInit: { headers: { Authorization: `Bearer user-123-token`, }, }, }, }, }); // Pass all toolsets to stream() or generate() const response = await agent.stream( "How is AAPL doing and what is the weather?", { toolsets: await mcp.getToolsets(), }, ); ``` ## Resource Management The `MCPConfiguration` class includes built-in memory leak prevention for managing multiple instances: 1. Creating multiple instances with identical configurations without an `id` will throw an error to prevent memory leaks 2. If you need multiple instances with identical configurations, provide a unique `id` for each instance 3. Call `await configuration.disconnect()` before recreating an instance with the same configuration 4. If you only need one instance, consider moving the configuration to a higher scope to avoid recreation For example, if you try to create multiple instances with the same configuration without an `id`: ```typescript // First instance - OK const mcp1 = new MCPConfiguration({ servers: { /* ... */ }, }); // Second instance with same config - Will throw an error const mcp2 = new MCPConfiguration({ servers: { /* ... */ }, }); // To fix, either: // 1. Add unique IDs const mcp3 = new MCPConfiguration({ id: "instance-1", servers: { /* ... */ }, }); // 2. Or disconnect before recreating await mcp1.disconnect(); const mcp4 = new MCPConfiguration({ servers: { /* ... */ }, }); ``` ## Server Lifecycle MCPConfiguration handles server connections gracefully: 1. Automatic connection management for multiple servers 2. Graceful server shutdown to prevent error messages during development 3. Proper cleanup of resources when disconnecting ## Related Information - For details about individual MCP client configuration, see the [MastraMCPClient documentation](./client) - For more about the Model Context Protocol, see the [@modelcontextprotocol/sdk documentation](https://github.com/modelcontextprotocol/typescript-sdk) --- title: "Reference: createVectorQueryTool() | RAG | Mastra Tools Docs" description: Documentation for the Vector Query Tool in Mastra, which facilitates semantic search over vector stores with filtering and reranking capabilities. 
--- # createVectorQueryTool() Source: https://mastra.ai/docs/reference/tools/vector-query-tool The `createVectorQueryTool()` function creates a tool for semantic search over vector stores. It supports filtering, reranking, and integrates with various vector store backends. ## Basic Usage ```typescript import { openai } from '@ai-sdk/openai'; import { createVectorQueryTool } from "@mastra/rag"; const queryTool = createVectorQueryTool({ vectorStoreName: "pinecone", indexName: "docs", model: openai.embedding('text-embedding-3-small'), }); ``` ## Parameters ### RerankConfig ## Returns The tool returns an object with: ## Default Tool Description The default description focuses on: - Finding relevant information in stored knowledge - Answering user questions - Retrieving factual content ## Result Handling The tool determines the number of results to return based on the user's query, with a default of 10 results. This can be adjusted based on the query requirements. ## Example with Filters ```typescript const queryTool = createVectorQueryTool({ vectorStoreName: "pinecone", indexName: "docs", model: openai.embedding('text-embedding-3-small'), enableFilters: true, }); ``` With filtering enabled, the tool processes queries to construct metadata filters that combine with semantic search. The process works as follows: 1. A user makes a query with specific filter requirements like "Find content where the 'version' field is greater than 2.0" 2. The agent analyzes the query and constructs the appropriate filters: ```typescript { "version": { "$gt": 2.0 } } ``` This agent-driven approach: - Processes natural language queries into filter specifications - Implements vector store-specific filter syntax - Translates query terms to filter operators For detailed filter syntax and store-specific capabilities, see the [Metadata Filters](../rag/metadata-filters) documentation. For an example of how agent-driven filtering works, see the [Agent-Driven Metadata Filtering](../../../examples/rag/usage/filter-rag) example. ## Example with Reranking ```typescript const queryTool = createVectorQueryTool({ vectorStoreName: "milvus", indexName: "documentation", model: openai.embedding('text-embedding-3-small'), reranker: { model: openai('gpt-4o-mini'), options: { weights: { semantic: 0.5, // Semantic relevance weight vector: 0.3, // Vector similarity weight position: 0.2 // Original position weight }, topK: 5 } } }); ``` Reranking improves result quality by combining: - Semantic relevance: Using LLM-based scoring of text similarity - Vector similarity: Original vector distance scores - Position bias: Consideration of original result ordering - Query analysis: Adjustments based on query characteristics The reranker processes the initial vector search results and returns a reordered list optimized for relevance. ## Example with Custom Description ```typescript const queryTool = createVectorQueryTool({ vectorStoreName: "pinecone", indexName: "docs", model: openai.embedding('text-embedding-3-small'), description: "Search through document archives to find relevant information for answering questions about company policies and procedures" }); ``` This example shows how to customize the tool description for a specific use case while maintaining its core purpose of information retrieval. 
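## Example with an Agent

To put the tool to work, you typically hand it to an agent so the model can decide when to run a semantic search. The following is a minimal sketch, assuming the `queryTool` created in the basic usage example above and a vector store registered on your Mastra instance under the configured `vectorStoreName`; the agent name, instructions, and prompt are illustrative.

```typescript
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";

// Give the agent access to the vector query tool defined above
const ragAgent = new Agent({
  name: "RAG Agent",
  instructions:
    "Answer questions using the context retrieved from the knowledge base.",
  model: openai("gpt-4o-mini"),
  tools: { queryTool },
});

// The agent can call the tool while generating its response
const response = await ragAgent.generate(
  "What does our documentation say about configuring retries?",
);
console.log(response.text);
```

Because the tool's description tells the model when retrieval is useful, customizing it (as shown above) directly shapes when the agent chooses to call the tool.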
## Tool Details The tool is created with: - **ID**: `VectorQuery {vectorStoreName} {indexName} Tool` - **Input Schema**: Requires queryText and filter objects - **Output Schema**: Returns relevantContext string ## Related - [rerank()](../rag/rerank) - [createGraphRAGTool](./graph-rag-tool) --- title: "Reference: CompositeVoice | Voice Providers | Mastra Docs" description: "Documentation for the CompositeVoice class, which enables combining multiple voice providers for flexible text-to-speech and speech-to-text operations." --- # CompositeVoice Source: https://mastra.ai/docs/reference/voice/composite-voice The CompositeVoice class allows you to combine different voice providers for text-to-speech and speech-to-text operations. This is particularly useful when you want to use the best provider for each operation - for example, using OpenAI for speech-to-text and PlayAI for text-to-speech. CompositeVoice is used internally by the Agent class to provide flexible voice capabilities. ## Usage Example ```typescript import { CompositeVoice } from "@mastra/core/voice"; import { OpenAIVoice } from "@mastra/voice-openai"; import { PlayAIVoice } from "@mastra/voice-playai"; // Create voice providers const openai = new OpenAIVoice(); const playai = new PlayAIVoice(); // Use OpenAI for listening (speech-to-text) and PlayAI for speaking (text-to-speech) const voice = new CompositeVoice({ listeningProvider: openai, speakingProvider: playai }); // Convert speech to text using OpenAI const text = await voice.listen(audioStream); // Convert text to speech using PlayAI const audio = await voice.speak("Hello, world!"); ``` ## Constructor Parameters ## Methods ### speak() Converts text to speech using the configured speaking provider. Notes: - If no speaking provider is configured, this method will throw an error - Options are passed through to the configured speaking provider - Returns a stream of audio data ### listen() Converts speech to text using the configured listening provider. Notes: - If no listening provider is configured, this method will throw an error - Options are passed through to the configured listening provider - Returns either a string or a stream of transcribed text, depending on the provider ### getSpeakers() Returns a list of available voices from the speaking provider, where each node contains: Notes: - Returns voices from the speaking provider only - If no speaking provider is configured, returns an empty array - Each voice object will have at least a voiceId property - Additional voice properties depend on the speaking provider --- title: "Reference: Deepgram Voice | Voice Providers | Mastra Docs" description: "Documentation for the Deepgram voice implementation, providing text-to-speech and speech-to-text capabilities with multiple voice models and languages." --- # Deepgram Source: https://mastra.ai/docs/reference/voice/deepgram The Deepgram voice implementation in Mastra provides text-to-speech (TTS) and speech-to-text (STT) capabilities using Deepgram's API. It supports multiple voice models and languages, with configurable options for both speech synthesis and transcription. 
## Usage Example ```typescript import { DeepgramVoice } from "@mastra/voice-deepgram"; // Initialize with default configuration (uses DEEPGRAM_API_KEY environment variable) const voice = new DeepgramVoice(); // Initialize with custom configuration const voice = new DeepgramVoice({ speechModel: { name: 'aura', apiKey: 'your-api-key', }, listeningModel: { name: 'nova-2', apiKey: 'your-api-key', }, speaker: 'asteria-en', }); // Text-to-Speech const audioStream = await voice.speak("Hello, world!"); // Speech-to-Text const transcript = await voice.listen(audioStream); ``` ## Constructor Parameters ### DeepgramVoiceConfig ", description: "Additional properties to pass to the Deepgram API", isOptional: true, }, { name: "language", type: "string", description: "Language code for the model", isOptional: true, }, ]} /> ## Methods ### speak() Converts text to speech using the configured speech model and voice. Returns: `Promise` ### listen() Converts speech to text using the configured listening model. Returns: `Promise` ### getSpeakers() Returns a list of available voice options. --- title: "Reference: ElevenLabs Voice | Voice Providers | Mastra Docs" description: "Documentation for the ElevenLabs voice implementation, offering high-quality text-to-speech capabilities with multiple voice models and natural-sounding synthesis." --- # ElevenLabs Source: https://mastra.ai/docs/reference/voice/elevenlabs The ElevenLabs voice implementation in Mastra provides high-quality text-to-speech (TTS) and speech-to-text (STT) capabilities using the ElevenLabs API. ## Usage Example ```typescript import { ElevenLabsVoice } from "@mastra/voice-elevenlabs"; // Initialize with default configuration (uses ELEVENLABS_API_KEY environment variable) const voice = new ElevenLabsVoice(); // Initialize with custom configuration const voice = new ElevenLabsVoice({ speechModel: { name: 'eleven_multilingual_v2', apiKey: 'your-api-key', }, speaker: 'custom-speaker-id', }); // Text-to-Speech const audioStream = await voice.speak("Hello, world!"); // Get available speakers const speakers = await voice.getSpeakers(); ``` ## Constructor Parameters ### ElevenLabsVoiceConfig ## Methods ### speak() Converts text to speech using the configured speech model and voice. Returns: `Promise` ### getSpeakers() Returns an array of available voice options, where each node contains: ### listen() Converts audio input to text using ElevenLabs Speech-to-Text API. The options object supports the following properties: Returns: `Promise` - A Promise that resolves to the transcribed text ## Important Notes 1. An ElevenLabs API key is required. Set it via the `ELEVENLABS_API_KEY` environment variable or pass it in the constructor. 2. The default speaker is set to Aria (ID: '9BWtsMINqrJLrRacOk9x'). 3. Speech-to-text functionality is not supported by ElevenLabs. 4. Available speakers can be retrieved using the `getSpeakers()` method, which returns detailed information about each voice including language and gender. --- title: "Reference: Google Voice | Voice Providers | Mastra Docs" description: "Documentation for the Google Voice implementation, providing text-to-speech and speech-to-text capabilities." --- # Google Source: https://mastra.ai/docs/reference/voice/google The Google Voice implementation in Mastra provides both text-to-speech (TTS) and speech-to-text (STT) capabilities using Google Cloud services. It supports multiple voices, languages, and advanced audio configuration options. 
## Usage Example ```typescript import { GoogleVoice } from "@mastra/voice-google"; // Initialize with default configuration (uses GOOGLE_API_KEY environment variable) const voice = new GoogleVoice(); // Initialize with custom configuration const voice = new GoogleVoice({ speechModel: { apiKey: 'your-speech-api-key', }, listeningModel: { apiKey: 'your-listening-api-key', }, speaker: 'en-US-Casual-K', }); // Text-to-Speech const audioStream = await voice.speak("Hello, world!", { languageCode: 'en-US', audioConfig: { audioEncoding: 'LINEAR16', }, }); // Speech-to-Text const transcript = await voice.listen(audioStream, { config: { encoding: 'LINEAR16', languageCode: 'en-US', }, }); // Get available voices for a specific language const voices = await voice.getSpeakers({ languageCode: 'en-US' }); ``` ## Constructor Parameters ### GoogleModelConfig ## Methods ### speak() Converts text to speech using Google Cloud Text-to-Speech service. Returns: `Promise` ### listen() Converts speech to text using Google Cloud Speech-to-Text service. Returns: `Promise` ### getSpeakers() Returns an array of available voice options, where each node contains: ## Important Notes 1. A Google Cloud API key is required. Set it via the `GOOGLE_API_KEY` environment variable or pass it in the constructor. 2. The default voice is set to 'en-US-Casual-K'. 3. Both text-to-speech and speech-to-text services use LINEAR16 as the default audio encoding. 4. The `speak()` method supports advanced audio configuration through the Google Cloud Text-to-Speech API. 5. The `listen()` method supports various recognition configurations through the Google Cloud Speech-to-Text API. 6. Available voices can be filtered by language code using the `getSpeakers()` method. --- title: "Reference: MastraVoice | Voice Providers | Mastra Docs" description: "Documentation for the MastraVoice abstract base class, which defines the core interface for all voice services in Mastra, including speech-to-speech capabilities." --- # MastraVoice Source: https://mastra.ai/docs/reference/voice/mastra-voice The MastraVoice class is an abstract base class that defines the core interface for voice services in Mastra. All voice provider implementations (like OpenAI, Deepgram, PlayAI, Speechify) extend this class to provide their specific functionality. The class now includes support for real-time speech-to-speech capabilities through WebSocket connections. 
## Usage Example ```typescript import { MastraVoice } from "@mastra/core/voice"; // Create a voice provider implementation class MyVoiceProvider extends MastraVoice { constructor(config: { speechModel?: BuiltInModelConfig; listeningModel?: BuiltInModelConfig; speaker?: string; realtimeConfig?: { model?: string; apiKey?: string; options?: unknown; }; }) { super({ speechModel: config.speechModel, listeningModel: config.listeningModel, speaker: config.speaker, realtimeConfig: config.realtimeConfig }); } // Implement required abstract methods async speak(input: string | NodeJS.ReadableStream, options?: { speaker?: string }): Promise { // Implement text-to-speech conversion } async listen(audioStream: NodeJS.ReadableStream, options?: unknown): Promise { // Implement speech-to-text conversion } async getSpeakers(): Promise> { // Return list of available voices } // Optional speech-to-speech methods async connect(): Promise { // Establish WebSocket connection for speech-to-speech communication } async send(audioData: NodeJS.ReadableStream | Int16Array): Promise { // Stream audio data in speech-to-speech } async answer(): Promise { // Trigger voice provider to respond } addTools(tools: Array): void { // Add tools for the voice provider to use } close(): void { // Close WebSocket connection } on(event: string, callback: (data: unknown) => void): void { // Register event listener } off(event: string, callback: (data: unknown) => void): void { // Remove event listener } } ``` ## Constructor Parameters ### BuiltInModelConfig ### RealtimeConfig ## Abstract Methods These methods must be implemented by unknown class extending MastraVoice. ### speak() Converts text to speech using the configured speech model. ```typescript abstract speak( input: string | NodeJS.ReadableStream, options?: { speaker?: string; [key: string]: unknown; } ): Promise ``` Purpose: - Takes text input and converts it to speech using the provider's text-to-speech service - Supports both string and stream input for flexibility - Allows overriding the default speaker/voice through options - Returns a stream of audio data that can be played or saved - May return void if the audio is handled by emitting 'speaking' event ### listen() Converts speech to text using the configured listening model. ```typescript abstract listen( audioStream: NodeJS.ReadableStream, options?: { [key: string]: unknown; } ): Promise ``` Purpose: - Takes an audio stream and converts it to text using the provider's speech-to-text service - Supports provider-specific options for transcription configuration - Can return either a complete text transcription or a stream of transcribed text - Not all providers support this functionality (e.g., PlayAI, Speechify) - May return void if the transcription is handled by emitting 'writing' event ### getSpeakers() Returns a list of available voices supported by the provider. ```typescript abstract getSpeakers(): Promise> ``` Purpose: - Retrieves the list of available voices/speakers from the provider - Each voice must have at least a voiceId property - Providers can include additional metadata about each voice - Used to discover available voices for text-to-speech conversion ## Optional Methods These methods have default implementations but can be overridden by voice providers that support speech-to-speech capabilities. ### connect() Establishes a WebSocket or WebRTC connection for communication. 
```typescript connect(config?: unknown): Promise ``` Purpose: - Initializes a connection to the voice service for communication - Must be called before using features like send() or answer() - Returns a Promise that resolves when the connection is established - Configuration is provider-specific ### send() Streams audio data in real-time to the voice provider. ```typescript send(audioData: NodeJS.ReadableStream | Int16Array): Promise ``` Purpose: - Sends audio data to the voice provider for real-time processing - Useful for continuous audio streaming scenarios like live microphone input - Supports both ReadableStream and Int16Array audio formats - Must be in connected state before calling this method ### answer() Triggers the voice provider to generate a response. ```typescript answer(): Promise ``` Purpose: - Sends a signal to the voice provider to generate a response - Used in real-time conversations to prompt the AI to respond - Response will be emitted through the event system (e.g., 'speaking' event) ### addTools() Equips the voice provider with tools that can be used during conversations. ```typescript addTools(tools: Array): void ``` Purpose: - Adds tools that the voice provider can use during conversations - Tools can extend the capabilities of the voice provider - Implementation is provider-specific ### close() Disconnects from the WebSocket or WebRTC connection. ```typescript close(): void ``` Purpose: - Closes the connection to the voice service - Cleans up resources and stops any ongoing real-time processing - Should be called when you're done with the voice instance ### on() Registers an event listener for voice events. ```typescript on( event: E, callback: (data: E extends keyof VoiceEventMap ? VoiceEventMap[E] : unknown) => void, ): void ``` Purpose: - Registers a callback function to be called when the specified event occurs - Standard events include 'speaking', 'writing', and 'error' - Providers can emit custom events as well - Event data structure depends on the event type ### off() Removes an event listener. ```typescript off( event: E, callback: (data: E extends keyof VoiceEventMap ? VoiceEventMap[E] : unknown) => void, ): void ``` Purpose: - Removes a previously registered event listener - Used to clean up event handlers when they're no longer needed ## Event System The MastraVoice class includes an event system for real-time communication. Standard event types include: ## Protected Properties ## Telemetry Support MastraVoice includes built-in telemetry support through the `traced` method, which wraps method calls with performance tracking and error monitoring. ## Notes - MastraVoice is an abstract class and cannot be instantiated directly - Implementations must provide concrete implementations for all abstract methods - The class provides a consistent interface across different voice service providers - Speech-to-speech capabilities are optional and provider-specific - The event system enables asynchronous communication for real-time interactions - Telemetry is automatically handled for all method calls --- title: "Reference: Murf Voice | Voice Providers | Mastra Docs" description: "Documentation for the Murf voice implementation, providing text-to-speech capabilities." --- # Murf Source: https://mastra.ai/docs/reference/voice/murf The Murf voice implementation in Mastra provides text-to-speech (TTS) capabilities using Murf's AI voice service. It supports multiple voices across different languages. 
## Usage Example ```typescript import { MurfVoice } from "@mastra/voice-murf"; // Initialize with default configuration (uses MURF_API_KEY environment variable) const voice = new MurfVoice(); // Initialize with custom configuration const voice = new MurfVoice({ speechModel: { name: 'GEN2', apiKey: 'your-api-key', properties: { format: 'MP3', rate: 1.0, pitch: 1.0, sampleRate: 48000, channelType: 'STEREO', }, }, speaker: 'en-US-cooper', }); // Text-to-Speech with default settings const audioStream = await voice.speak("Hello, world!"); // Text-to-Speech with custom properties const audioStream = await voice.speak("Hello, world!", { speaker: 'en-UK-hazel', properties: { format: 'WAV', rate: 1.2, style: 'casual', }, }); // Get available voices const voices = await voice.getSpeakers(); ``` ## Constructor Parameters ### MurfConfig ### Speech Properties ", description: "Custom pronunciation mappings", isOptional: true, }, { name: "encodeAsBase64", type: "boolean", description: "Whether to encode the audio as base64", isOptional: true, }, { name: "variation", type: "number", description: "Voice variation parameter", isOptional: true, }, { name: "audioDuration", type: "number", description: "Target audio duration in seconds", isOptional: true, }, { name: "multiNativeLocale", type: "string", description: "Locale for multilingual support", isOptional: true, }, ]} /> ## Methods ### speak() Converts text to speech using Murf's API. Returns: `Promise` ### getSpeakers() Returns an array of available voice options, where each node contains: ### listen() This method is not supported by Murf and will throw an error. Murf does not provide speech-to-text functionality. ## Important Notes 1. A Murf API key is required. Set it via the `MURF_API_KEY` environment variable or pass it in the constructor. 2. The service uses GEN2 as the default model version. 3. Speech properties can be set at the constructor level and overridden per request. 4. The service supports extensive audio customization through properties like format, sample rate, and channel type. 5. Speech-to-text functionality is not supported. --- title: "Reference: OpenAI Realtime Voice | Voice Providers | Mastra Docs" description: "Documentation for the OpenAIRealtimeVoice class, providing real-time text-to-speech and speech-to-text capabilities via WebSockets." --- # OpenAI Realtime Voice Source: https://mastra.ai/docs/reference/voice/openai-realtime The OpenAIRealtimeVoice class provides real-time voice interaction capabilities using OpenAI's WebSocket-based API. It supports real time speech to speech, voice activity detection, and event-based audio streaming. 
## Usage Example

```typescript
import { OpenAIRealtimeVoice } from "@mastra/voice-openai-realtime";

// Initialize with default configuration using environment variables
const voice = new OpenAIRealtimeVoice();

// Or initialize with specific configuration
const voiceWithConfig = new OpenAIRealtimeVoice({
  chatModel: {
    apiKey: 'your-openai-api-key',
    model: 'gpt-4o-mini-realtime-preview-2024-12-17',
    options: {
      sessionConfig: {
        turn_detection: {
          type: 'server_vad',
          threshold: 0.6,
          silence_duration_ms: 1200
        }
      }
    }
  },
  speaker: 'alloy' // Default voice
});

// Establish connection
await voice.connect();

// Set up event listeners
voice.on('speaking', ({ audio }) => {
  // Handle audio data (Int16Array, PCM format by default)
  playAudio(audio);
});

voice.on('writing', ({ text, role }) => {
  // Handle transcribed text
  console.log(`${role}: ${text}`);
});

// Convert text to speech
await voice.speak('Hello, how can I help you today?', {
  speaker: 'echo' // Override default voice
});

// Process audio input
const microphoneStream = getMicrophoneStream();
await voice.send(microphoneStream);

// When done, disconnect
voice.close();
```

## Configuration

### Constructor Options

### chatModel

### options

### Voice Activity Detection (VAD) Configuration

## Methods

### connect()

Establishes a connection to the OpenAI realtime service. Must be called before using speak, listen, or send functions. Returns a Promise that resolves when the connection is established.

### speak()

Emits a speaking event using the configured voice model. Can accept either a string or a readable stream as input.

Returns: `Promise`

### listen()

Processes audio input for speech recognition. Takes a readable stream of audio data and emits a 'listening' event with the transcribed text.

Returns: `Promise`

### send()

Streams audio data in real-time to the OpenAI service for continuous audio streaming scenarios like live microphone input.

Returns: `Promise`

### updateConfig()

Updates the session configuration for the voice instance. This can be used to modify voice settings, turn detection, and other parameters.

Returns: `void`

### addTools()

Adds a set of tools to the voice instance. Tools allow the model to perform additional actions during conversations. When OpenAIRealtimeVoice is added to an Agent, any tools configured for the Agent will automatically be available to the voice interface.

Returns: `void`

### close()

Disconnects from the OpenAI realtime session and cleans up resources. Should be called when you're done with the voice instance.

Returns: `void`

### getSpeakers()

Returns a list of available voice speakers.

Returns: `Promise>`

### on()

Registers an event listener for voice events.

Returns: `void`

### off()

Removes a previously registered event listener.
Returns: `void` ## Events The OpenAIRealtimeVoice class emits the following events: ### OpenAI Realtime Events You can also listen to [OpenAI Realtime utility events](https://github.com/openai/openai-realtime-api-beta#reference-client-utility-events) by prefixing with 'openAIRealtime:': ## Available Voices The following voice options are available: - `alloy`: Neutral and balanced - `ash`: Clear and precise - `ballad`: Melodic and smooth - `coral`: Warm and friendly - `echo`: Resonant and deep - `sage`: Calm and thoughtful - `shimmer`: Bright and energetic - `verse`: Versatile and expressive ## Notes - API keys can be provided via constructor options or the `OPENAI_API_KEY` environment variable - The OpenAI Realtime Voice API uses WebSockets for real-time communication - Server-side Voice Activity Detection (VAD) provides better accuracy for speech detection - All audio data is processed as Int16Array format - The voice instance must be connected with `connect()` before using other methods - Always call `close()` when done to properly clean up resources - Memory management is handled by OpenAI Realtime API --- title: "Reference: OpenAI Voice | Voice Providers | Mastra Docs" description: "Documentation for the OpenAIVoice class, providing text-to-speech and speech-to-text capabilities." --- # OpenAI Source: https://mastra.ai/docs/reference/voice/openai The OpenAIVoice class in Mastra provides text-to-speech and speech-to-text capabilities using OpenAI's models. ## Usage Example ```typescript import { OpenAIVoice } from '@mastra/voice-openai'; // Initialize with default configuration using environment variables const voice = new OpenAIVoice(); // Or initialize with specific configuration const voiceWithConfig = new OpenAIVoice({ speechModel: { name: 'tts-1-hd', apiKey: 'your-openai-api-key' }, listeningModel: { name: 'whisper-1', apiKey: 'your-openai-api-key' }, speaker: 'alloy' // Default voice }); // Convert text to speech const audioStream = await voice.speak('Hello, how can I help you?', { speaker: 'nova', // Override default voice speed: 1.2 // Adjust speech speed }); // Convert speech to text const text = await voice.listen(audioStream, { filetype: 'mp3' }); ``` ## Configuration ### Constructor Options ### OpenAIConfig ## Methods ### speak() Converts text to speech using OpenAI's text-to-speech models. Returns: `Promise` ### listen() Transcribes audio using OpenAI's Whisper model. Returns: `Promise` ### getSpeakers() Returns an array of available voice options, where each node contains: ## Notes - API keys can be provided via constructor options or the `OPENAI_API_KEY` environment variable - The `tts-1-hd` model provides higher quality audio but may have slower processing times - Speech recognition supports multiple audio formats including mp3, wav, and webm --- title: "Reference: PlayAI Voice | Voice Providers | Mastra Docs" description: "Documentation for the PlayAI voice implementation, providing text-to-speech capabilities." --- # PlayAI Source: https://mastra.ai/docs/reference/voice/playai The PlayAI voice implementation in Mastra provides text-to-speech capabilities using PlayAI's API. 
## Usage Example

```typescript
import { PlayAIVoice } from "@mastra/voice-playai";

// Initialize with default configuration (uses the PLAYAI_API_KEY and PLAYAI_USER_ID environment variables)
const voice = new PlayAIVoice();

// Initialize with custom configuration
const voice = new PlayAIVoice({
  speechModel: {
    name: 'PlayDialog',
    apiKey: process.env.PLAYAI_API_KEY,
    userId: process.env.PLAYAI_USER_ID
  },
  speaker: 'Angelo' // Default voice
});

// Convert text to speech with a specific voice
const audioStream = await voice.speak("Hello, world!", {
  speaker: 's3://voice-cloning-zero-shot/b27bc13e-996f-4841-b584-4d35801aea98/original/manifest.json' // Dexter voice
});
```

## Constructor Parameters

### PlayAIConfig

## Methods

### speak()

Converts text to speech using the configured speech model and voice.

Returns: `Promise`

### getSpeakers()

Returns an array of available voice options, where each node contains:

### listen()

This method is not supported by PlayAI and will throw an error. PlayAI does not provide speech-to-text functionality.

## Notes

- PlayAI requires both an API key and a user ID for authentication
- The service offers two models: 'PlayDialog' and 'Play3.0-mini'
- Each voice has a unique S3 manifest ID that must be used when making API calls

---
title: "Reference: Sarvam Voice | Voice Providers | Mastra Docs"
description: "Documentation for the Sarvam class, providing text-to-speech and speech-to-text capabilities."
---

# Sarvam
Source: https://mastra.ai/docs/reference/voice/sarvam

The SarvamVoice class in Mastra provides text-to-speech and speech-to-text capabilities using Sarvam AI models.

## Usage Example

```typescript
import { SarvamVoice } from "@mastra/voice-sarvam";

// Initialize with default configuration using environment variables
const voice = new SarvamVoice();

// Or initialize with specific configuration
const voiceWithConfig = new SarvamVoice({
  speechModel: {
    model: "bulbul:v1",
    apiKey: process.env.SARVAM_API_KEY!,
    language: "en-IN",
    properties: {
      pitch: 0,
      pace: 1.65,
      loudness: 1.5,
      speech_sample_rate: 8000,
      enable_preprocessing: false,
      eng_interpolation_wt: 123,
    },
  },
  listeningModel: {
    model: "saarika:v2",
    apiKey: process.env.SARVAM_API_KEY!,
    languageCode: "en-IN",
    filetype: 'wav',
  },
  speaker: "meera", // Default voice
});

// Convert text to speech
const audioStream = await voice.speak("Hello, how can I help you?");

// Convert speech to text
const text = await voice.listen(audioStream, {
  filetype: "wav",
});
```

### Sarvam API Docs

- https://docs.sarvam.ai/api-reference-docs/endpoints/text-to-speech

## Configuration

### Constructor Options

### SarvamVoiceConfig

### SarvamListenOptions

## Methods

### speak()

Converts text to speech using Sarvam's text-to-speech models.

Returns: `Promise`

### listen()

Transcribes audio using Sarvam's speech recognition models.

Returns: `Promise`

### getSpeakers()

Returns an array of available voice options.

Returns: `Promise>`

## Notes

- API key can be provided via constructor options or the `SARVAM_API_KEY` environment variable
- If no API key is provided, the constructor will throw an error
- The service communicates with the Sarvam AI API at `https://api.sarvam.ai`
- Audio is returned as a stream containing binary audio data
- Speech recognition supports mp3 and wav audio formats

---
title: "Reference: Speechify Voice | Voice Providers | Mastra Docs"
description: "Documentation for the Speechify voice implementation, providing text-to-speech capabilities."
--- # Speechify Source: https://mastra.ai/docs/reference/voice/speechify The Speechify voice implementation in Mastra provides text-to-speech capabilities using Speechify's API. ## Usage Example ```typescript import { SpeechifyVoice } from "@mastra/voice-speechify"; // Initialize with default configuration (uses SPEECHIFY_API_KEY environment variable) const voice = new SpeechifyVoice(); // Initialize with custom configuration const voice = new SpeechifyVoice({ speechModel: { name: 'simba-english', apiKey: 'your-api-key' }, speaker: 'george' // Default voice }); // Convert text to speech const audioStream = await voice.speak("Hello, world!", { speaker: 'henry', // Override default voice }); ``` ## Constructor Parameters ### SpeechifyConfig ## Methods ### speak() Converts text to speech using the configured speech model and voice. Returns: `Promise` ### getSpeakers() Returns an array of available voice options, where each node contains: ### listen() This method is not supported by Speechify and will throw an error. Speechify does not provide speech-to-text functionality. ## Notes - Speechify requires an API key for authentication - The default model is 'simba-english' - Speech-to-text functionality is not supported - Additional audio stream options can be passed through the speak() method's options parameter --- title: "Reference: .after() | Building Workflows | Mastra Docs" description: Documentation for the `after()` method in workflows, enabling branching and merging paths. --- # .after() Source: https://mastra.ai/docs/reference/workflows/after The `.after()` method defines explicit dependencies between workflow steps, enabling branching and merging paths in your workflow execution. ## Usage ### Basic Branching ```typescript workflow .step(stepA) .then(stepB) .after(stepA) // Create new branch after stepA completes .step(stepC); ``` ### Merging Multiple Branches ```typescript workflow .step(stepA) .then(stepB) .step(stepC) .then(stepD) .after([stepB, stepD]) // Create a step that depends on multiple steps .step(stepE); ``` ## Parameters ## Returns ## Examples ### Single Dependency ```typescript workflow .step(fetchData) .then(processData) .after(fetchData) // Branch after fetchData .step(logData); ``` ### Multiple Dependencies (Merging Branches) ```typescript workflow .step(fetchUserData) .then(validateUserData) .step(fetchProductData) .then(validateProductData) .after([validateUserData, validateProductData]) // Wait for both validations to complete .step(processOrder); ``` ## Related - [Branching Paths example](../../../examples/workflows/branching-paths.mdx) - [Workflow Class Reference](./workflow.mdx) - [Step Reference](./step-class.mdx) - [Control Flow Guide](../../workflows/control-flow.mdx#merging-multiple-branches) --- title: ".afterEvent() Method | Mastra Docs" description: "Reference for the afterEvent method in Mastra workflows that creates event-based suspension points." --- # afterEvent() Source: https://mastra.ai/docs/reference/workflows/afterEvent The `afterEvent()` method creates a suspension point in your workflow that waits for a specific event to occur before continuing execution. ## Syntax ```typescript workflow.afterEvent(eventName: string): Workflow ``` ## Parameters | Parameter | Type | Description | |-----------|------|-------------| | eventName | string | The name of the event to wait for. Must match an event defined in the workflow's `events` configuration. | ## Return Value Returns the workflow instance for method chaining. 
## Description The `afterEvent()` method is used to create an automatic suspension point in your workflow that waits for a specific named event. It's essentially a declarative way to define a point where your workflow should pause and wait for an external event to occur. When you call `afterEvent()`, Mastra: 1. Creates a special step with ID `__eventName_event` 2. This step automatically suspends the workflow execution 3. The workflow remains suspended until the specified event is triggered via `resumeWithEvent()` 4. When the event occurs, execution continues with the step following the `afterEvent()` call This method is part of Mastra's event-driven workflow capabilities, allowing you to create workflows that coordinate with external systems or user interactions without manually implementing suspension logic. ## Usage Notes - The event specified in `afterEvent()` must be defined in the workflow's `events` configuration with a schema - The special step created has a predictable ID format: `__eventName_event` (e.g., `__approvalReceived_event`) - Any step following `afterEvent()` can access the event data via `context.inputData.resumedEvent` - Event data is validated against the schema defined for that event when `resumeWithEvent()` is called ## Examples ### Basic Usage ```typescript // Define workflow with events const workflow = new Workflow({ name: 'approval-workflow', events: { approval: { schema: z.object({ approved: z.boolean(), approverName: z.string(), }), }, }, }); // Build workflow with event suspension point workflow .step(submitRequest) .afterEvent('approval') // Workflow suspends here .step(processApproval) // This step runs after the event occurs .commit(); ``` ## Related - [Event-Driven Workflows](./events.mdx) - [resumeWithEvent()](./resumeWithEvent.mdx) - [Suspend and Resume](../../workflows/suspend-and-resume.mdx) - [Workflow Class](./workflow.mdx) --- title: "Reference: Workflow.commit() | Running Workflows | Mastra Docs" description: Documentation for the `.commit()` method in workflows, which re-initializes the workflow machine with the current step configuration. --- # Workflow.commit() Source: https://mastra.ai/docs/reference/workflows/commit The `.commit()` method re-initializes the workflow's state machine with the current step configuration. ## Usage ```typescript workflow .step(stepA) .then(stepB) .commit(); ``` ## Returns ## Related - [Branching Paths example](../../../examples/workflows/branching-paths.mdx) - [Workflow Class Reference](./workflow.mdx) - [Step Reference](./step-class.mdx) - [Control Flow Guide](../../workflows/control-flow.mdx) ``` --- title: "Reference: Workflow.createRun() | Running Workflows | Mastra Docs" description: "Documentation for the `.createRun()` method in workflows, which initializes a new workflow run instance." --- # Workflow.createRun() Source: https://mastra.ai/docs/reference/workflows/createRun The `.createRun()` method initializes a new workflow run instance. It generates a unique run ID for tracking and returns a start function that begins workflow execution when called. One reason to use `.createRun()` vs `.execute()` is to get a unique run ID for tracking, logging, or subscribing via `.watch()`. 
## Usage ```typescript const { runId, start, watch } = workflow.createRun(); const result = await start(); ``` ## Returns Promise", description: "Function that begins workflow execution when called", }, { name: "watch", type: "(callback: (record: WorkflowRunState) => void) => () => void", description: "Function that accepts a callback function that will be called with each transition of the workflow run", }, { name: "resume", type: "({stepId: string, context: Record}) => Promise", description: "Function that resumes a workflow run from a given step ID and context", }, { name: "resumeWithEvent", type: "(eventName: string, data: any) => Promise", description: "Function that resumes a workflow run from a given event name and data", }, ]} /> ## Error Handling The start function may throw validation errors if the workflow configuration is invalid: ```typescript try { const { runId, start, watch, resume, resumeWithEvent } = workflow.createRun(); await start({ triggerData: data }); } catch (error) { if (error instanceof ValidationError) { // Handle validation errors console.log(error.type); // 'circular_dependency' | 'no_terminal_path' | 'unreachable_step' console.log(error.details); } } ``` ## Related - [Workflow Class Reference](./workflow.mdx) - [Step Class Reference](./step-class.mdx) - See the [Creating a Workflow](../../../examples/workflows/creating-a-workflow.mdx) example for complete usage ``` ``` --- title: "Reference: Workflow.else() | Conditional Branching | Mastra Docs" description: "Documentation for the `.else()` method in Mastra workflows, which creates an alternative branch when an if condition is false." --- # Workflow.else() Source: https://mastra.ai/docs/reference/workflows/else > Experimental The `.else()` method creates an alternative branch in the workflow that executes when the preceding `if` condition evaluates to false. This enables workflows to follow different paths based on conditions. ## Usage ```typescript copy showLineNumbers workflow .step(startStep) .if(async ({ context }) => { const value = context.getStepResult<{ value: number }>('start')?.value; return value < 10; }) .then(ifBranchStep) .else() // Alternative branch when the condition is false .then(elseBranchStep) .commit(); ``` ## Parameters The `else()` method does not take any parameters. ## Returns ## Behavior - The `else()` method must follow an `if()` branch in the workflow definition - It creates a branch that executes only when the preceding `if` condition evaluates to false - You can chain multiple steps after an `else()` using `.then()` - You can nest additional `if`/`else` conditions within an `else` branch ## Error Handling The `else()` method requires a preceding `if()` statement. If you try to use it without a preceding `if`, an error will be thrown: ```typescript try { // This will throw an error workflow .step(someStep) .else() .then(anotherStep) .commit(); } catch (error) { console.error(error); // "No active condition found" } ``` ## Related - [if Reference](./if.mdx) - [then Reference](./then.mdx) - [Control Flow Guide](../../workflows/control-flow.mdx) - [Step Condition Reference](./step-condition.mdx) --- title: "Event-Driven Workflows | Mastra Docs" description: "Learn how to create event-driven workflows using afterEvent and resumeWithEvent methods in Mastra." --- # Event-Driven Workflows Source: https://mastra.ai/docs/reference/workflows/events Mastra provides built-in support for event-driven workflows through the `afterEvent` and `resumeWithEvent` methods. 
These methods allow you to create workflows that pause execution while waiting for specific events to occur, then resume with the event data when it's available. ## Overview Event-driven workflows are useful for scenarios where: - You need to wait for external systems to complete processing - User approval or input is required at specific points - Asynchronous operations need to be coordinated - Long-running processes need to break up execution across different services ## Defining Events Before using event-driven methods, you must define the events your workflow will listen for in the workflow configuration: ```typescript import { Workflow } from '@mastra/core/workflows'; import { z } from 'zod'; const workflow = new Workflow({ name: 'approval-workflow', triggerSchema: z.object({ requestId: z.string() }), events: { // Define events with their validation schemas approvalReceived: { schema: z.object({ approved: z.boolean(), approverName: z.string(), comment: z.string().optional(), }), }, documentUploaded: { schema: z.object({ documentId: z.string(), documentType: z.enum(['invoice', 'receipt', 'contract']), metadata: z.record(z.string()).optional(), }), }, }, }); ``` Each event must have a name and a schema that defines the structure of data expected when the event occurs. ## afterEvent() The `afterEvent` method creates a suspension point in your workflow that automatically waits for a specific event. ### Syntax ```typescript workflow.afterEvent(eventName: string): Workflow ``` ### Parameters - `eventName`: The name of the event to wait for (must be defined in the workflow's `events` configuration) ### Return Value Returns the workflow instance for method chaining. ### How It Works When `afterEvent` is called, Mastra: 1. Creates a special step with ID `__eventName_event` 2. Configures this step to automatically suspend workflow execution 3. Sets up the continuation point after the event is received ### Usage Example ```typescript workflow .step(initialProcessStep) .afterEvent('approvalReceived') // Workflow suspends here .step(postApprovalStep) // This runs after event is received .then(finalStep) .commit(); ``` ## resumeWithEvent() The `resumeWithEvent` method resumes a suspended workflow by providing data for a specific event. ### Syntax ```typescript run.resumeWithEvent(eventName: string, data: any): Promise ``` ### Parameters - `eventName`: The name of the event being triggered - `data`: The event data (must conform to the schema defined for this event) ### Return Value Returns a Promise that resolves to the workflow execution results after resumption. ### How It Works When `resumeWithEvent` is called, Mastra: 1. Validates the event data against the schema defined for that event 2. Loads the workflow snapshot 3. Updates the context with the event data 4. Resumes execution from the event step 5. Continues workflow execution with the subsequent steps ### Usage Example ```typescript // Create a workflow run const run = workflow.createRun(); // Start the workflow await run.start({ triggerData: { requestId: 'req-123' } }); // Later, when the event occurs: const result = await run.resumeWithEvent('approvalReceived', { approved: true, approverName: 'John Doe', comment: 'Looks good to me!' 
}); console.log(result.results); ``` ## Accessing Event Data When a workflow is resumed with event data, that data is available in the step context as `context.inputData.resumedEvent`: ```typescript const processApprovalStep = new Step({ id: 'processApproval', execute: async ({ context }) => { // Access the event data const eventData = context.inputData.resumedEvent; return { processingResult: `Processed approval from ${eventData.approverName}`, wasApproved: eventData.approved, }; }, }); ``` ## Multiple Events You can create workflows that wait for multiple different events at various points: ```typescript workflow .step(createRequest) .afterEvent('approvalReceived') .step(processApproval) .afterEvent('documentUploaded') .step(processDocument) .commit(); ``` When resuming a workflow with multiple event suspension points, you need to provide the correct event name and data for the current suspension point. ## Practical Example This example shows a complete workflow that requires both approval and document upload: ```typescript import { Workflow, Step } from '@mastra/core/workflows'; import { z } from 'zod'; // Define steps const createRequest = new Step({ id: 'createRequest', execute: async () => ({ requestId: `req-${Date.now()}` }), }); const processApproval = new Step({ id: 'processApproval', execute: async ({ context }) => { const approvalData = context.inputData.resumedEvent; return { approved: approvalData.approved, approver: approvalData.approverName, }; }, }); const processDocument = new Step({ id: 'processDocument', execute: async ({ context }) => { const documentData = context.inputData.resumedEvent; return { documentId: documentData.documentId, processed: true, type: documentData.documentType, }; }, }); const finalizeRequest = new Step({ id: 'finalizeRequest', execute: async ({ context }) => { const requestId = context.steps.createRequest.output.requestId; const approved = context.steps.processApproval.output.approved; const documentId = context.steps.processDocument.output.documentId; return { finalized: true, summary: `Request ${requestId} was ${approved ? 
'approved' : 'rejected'} with document ${documentId}` }; }, }); // Create workflow const requestWorkflow = new Workflow({ name: 'document-request-workflow', events: { approvalReceived: { schema: z.object({ approved: z.boolean(), approverName: z.string(), }), }, documentUploaded: { schema: z.object({ documentId: z.string(), documentType: z.enum(['invoice', 'receipt', 'contract']), }), }, }, }); // Build workflow requestWorkflow .step(createRequest) .afterEvent('approvalReceived') .step(processApproval) .afterEvent('documentUploaded') .step(processDocument) .then(finalizeRequest) .commit(); // Export workflow export { requestWorkflow }; ``` ### Running the Example Workflow ```typescript import { requestWorkflow } from './workflows'; import { mastra } from './mastra'; async function runWorkflow() { // Get the workflow const workflow = mastra.getWorkflow('document-request-workflow'); const run = workflow.createRun(); // Start the workflow const initialResult = await run.start(); console.log('Workflow started:', initialResult.results); // Simulate receiving approval const afterApprovalResult = await run.resumeWithEvent('approvalReceived', { approved: true, approverName: 'Jane Smith', }); console.log('After approval:', afterApprovalResult.results); // Simulate document upload const finalResult = await run.resumeWithEvent('documentUploaded', { documentId: 'doc-456', documentType: 'invoice', }); console.log('Final result:', finalResult.results); } runWorkflow().catch(console.error); ``` ## Best Practices 1. **Define Clear Event Schemas**: Use Zod to create precise schemas for event data validation 2. **Use Descriptive Event Names**: Choose event names that clearly communicate their purpose 3. **Handle Missing Events**: Ensure your workflow can handle cases where events don't occur or time out 4. **Include Monitoring**: Use the `watch` method to monitor suspended workflows waiting for events 5. **Consider Timeouts**: Implement timeout mechanisms for events that may never occur 6. **Document Events**: Clearly document the events your workflow depends on for other developers ## Related - [Suspend and Resume in Workflows](../../workflows/suspend-and-resume.mdx) - [Workflow Class Reference](./workflow.mdx) - [Resume Method Reference](./resume.mdx) - [Watch Method Reference](./watch.mdx) - [After Event Reference](./afterEvent.mdx) - [Resume With Event Reference](./resumeWithEvent.mdx) --- title: "Reference: Workflow.execute() | Workflows | Mastra Docs" description: "Documentation for the `.execute()` method in Mastra workflows, which runs workflow steps and returns results." --- # Workflow.execute() Source: https://mastra.ai/docs/reference/workflows/execute Executes a workflow with the provided trigger data and returns the results. The workflow must be committed before execution. 
## Usage Example ```typescript const workflow = new Workflow({ name: "my-workflow", triggerSchema: z.object({ inputValue: z.number() }) }); workflow.step(stepOne).then(stepTwo).commit(); const result = await workflow.execute({ triggerData: { inputValue: 42 } }); ``` ## Parameters ## Returns ", description: "Results from each completed step" }, { name: "status", type: "WorkflowStatus", description: "Final status of the workflow run" } ] } ]} /> ## Additional Examples Execute with run ID: ```typescript const result = await workflow.execute({ runId: "custom-run-id", triggerData: { inputValue: 42 } }); ``` Handle execution results: ```typescript const { runId, results, status } = await workflow.execute({ triggerData: { inputValue: 42 } }); if (status === "COMPLETED") { console.log("Step results:", results); } ``` ### Related - [Workflow.createRun()](./createRun.mdx) - [Workflow.commit()](./commit.mdx) - [Workflow.start()](./start.mdx) --- title: "Reference: Workflow.if() | Conditional Branching | Mastra Docs" description: "Documentation for the `.if()` method in Mastra workflows, which creates conditional branches based on specified conditions." --- # Workflow.if() Source: https://mastra.ai/docs/reference/workflows/if > Experimental The `.if()` method creates a conditional branch in the workflow, allowing steps to execute only when a specified condition is true. This enables dynamic workflow paths based on the results of previous steps. ## Usage ```typescript copy showLineNumbers workflow .step(startStep) .if(async ({ context }) => { const value = context.getStepResult<{ value: number }>('start')?.value; return value < 10; // If true, execute the "if" branch }) .then(ifBranchStep) .else() .then(elseBranchStep) .commit(); ``` ## Parameters ## Condition Types ### Function Condition You can use a function that returns a boolean: ```typescript workflow .step(startStep) .if(async ({ context }) => { const result = context.getStepResult<{ status: string }>('start'); return result?.status === 'success'; // Execute "if" branch when status is "success" }) .then(successStep) .else() .then(failureStep); ``` ### Reference Condition You can use a reference-based condition with comparison operators: ```typescript workflow .step(startStep) .if({ ref: { step: startStep, path: 'value' }, query: { $lt: 10 }, // Execute "if" branch when value is less than 10 }) .then(ifBranchStep) .else() .then(elseBranchStep); ``` ## Returns ## Error Handling The `if` method requires a previous step to be defined. If you try to use it without a preceding step, an error will be thrown: ```typescript try { // This will throw an error workflow .if(async ({ context }) => true) .then(someStep) .commit(); } catch (error) { console.error(error); // "Condition requires a step to be executed after" } ``` ## Related - [else Reference](./else.mdx) - [then Reference](./then.mdx) - [Control Flow Guide](../../workflows/control-flow.mdx) - [Step Condition Reference](./step-condition.mdx) --- title: "Reference: run.resume() | Running Workflows | Mastra Docs" description: Documentation for the `.resume()` method in workflows, which continues execution of a suspended workflow step. --- # run.resume() Source: https://mastra.ai/docs/reference/workflows/resume The `.resume()` method continues execution of a suspended workflow step, optionally providing new context data that can be accessed by the step on the inputData property. 
## Usage ```typescript copy showLineNumbers await run.resume({ runId: "abc-123", stepId: "stepTwo", context: { secondValue: 100 } }); ``` ## Parameters ### config ", description: "New context data to inject into the step's inputData property", isOptional: true } ]} /> ## Returns ", type: "object", description: "Result of the resumed workflow execution" } ]} /> ## Async/Await Flow When a workflow is resumed, execution continues from the point immediately after the `suspend()` call in the step's execution function. This creates a natural flow in your code: ```typescript // Step definition with suspend point const reviewStep = new Step({ id: "review", execute: async ({ context, suspend }) => { // First part of execution const initialAnalysis = analyzeData(context.inputData.data); if (initialAnalysis.needsReview) { // Suspend execution here await suspend({ analysis: initialAnalysis }); // This code runs after resume() is called // context.inputData now contains any data provided during resume return { reviewedData: enhanceWithFeedback(initialAnalysis, context.inputData.feedback) }; } return { reviewedData: initialAnalysis }; } }); const { runId, resume, start } = workflow.createRun(); await start({ inputData: { data: "some data" } }); // Later, resume the workflow const result = await resume({ runId: "workflow-123", stepId: "review", context: { // This data will be available in `context.inputData` feedback: "Looks good, but improve section 3" } }); ``` ### Execution Flow 1. The workflow runs until it hits `await suspend()` in the `review` step 2. The workflow state is persisted and execution pauses 3. Later, `run.resume()` is called with new context data 4. Execution continues from the point after `suspend()` in the `review` step 5. The new context data (`feedback`) is available to the step on the `inputData` property 6. The step completes and returns its result 7. The workflow continues with subsequent steps ## Error Handling The resume function may throw several types of errors: ```typescript try { await run.resume({ runId, stepId: "stepTwo", context: newData }); } catch (error) { if (error.message === "No snapshot found for workflow run") { // Handle missing workflow state } if (error.message === "Failed to parse workflow snapshot") { // Handle corrupted workflow state } } ``` ## Related - [Suspend and Resume](../../workflows/suspend-and-resume.mdx) - [`suspend` Reference](./suspend.mdx) - [`watch` Reference](./watch.mdx) - [Workflow Class Reference](./workflow.mdx) ``` --- title: ".resumeWithEvent() Method | Mastra Docs" description: "Reference for the resumeWithEvent method that resumes suspended workflows using event data." --- # resumeWithEvent() Source: https://mastra.ai/docs/reference/workflows/resumeWithEvent The `resumeWithEvent()` method resumes workflow execution by providing data for a specific event that the workflow is waiting for. ## Syntax ```typescript const run = workflow.createRun(); // After the workflow has started and suspended at an event step await run.resumeWithEvent(eventName: string, data: any): Promise ``` ## Parameters | Parameter | Type | Description | |-----------|------|-------------| | eventName | string | The name of the event to trigger. Must match an event defined in the workflow's `events` configuration. | | data | any | The event data to provide. Must conform to the schema defined for that event. 
| ## Return Value Returns a Promise that resolves to a `WorkflowRunResult` object, containing: - `results`: The result status and output of each step in the workflow - `activePaths`: A map of active workflow paths and their states - `value`: The current state value of the workflow - Other workflow execution metadata ## Description The `resumeWithEvent()` method is used to resume a workflow that has been suspended at an event step created by the `afterEvent()` method. When called, this method: 1. Validates the provided event data against the schema defined for that event 2. Loads the workflow snapshot from storage 3. Updates the context with the event data in the `resumedEvent` field 4. Resumes execution from the event step 5. Continues workflow execution with the subsequent steps This method is part of Mastra's event-driven workflow capabilities, allowing you to create workflows that can respond to external events or user interactions. ## Usage Notes - The workflow must be in a suspended state, specifically at the event step created by `afterEvent(eventName)` - The event data must conform to the schema defined for that event in the workflow configuration - The workflow will continue execution from the point it was suspended - If the workflow is not suspended or is suspended at a different step, this method may throw an error - The event data is made available to subsequent steps via `context.inputData.resumedEvent` ## Examples ### Basic Usage ```typescript // Define and start a workflow const workflow = mastra.getWorkflow('approval-workflow'); const run = workflow.createRun(); // Start the workflow await run.start({ triggerData: { requestId: 'req-123' } }); // Later, when the approval event occurs: const result = await run.resumeWithEvent('approval', { approved: true, approverName: 'John Doe', comment: 'Looks good to me!' }); console.log(result.results); ``` ### With Error Handling ```typescript try { const result = await run.resumeWithEvent('paymentReceived', { amount: 100.50, transactionId: 'tx-456', paymentMethod: 'credit-card', }); console.log('Workflow resumed successfully:', result.results); } catch (error) { console.error('Failed to resume workflow with event:', error); // Handle error - could be invalid event data, workflow not suspended, etc. 
} ``` ### Monitoring and Auto-Resuming ```typescript // Start a workflow const { start, watch, resumeWithEvent } = workflow.createRun(); // Watch for suspended event steps watch(async ({ context, activePaths }) => { // Check if suspended at the approval event step if (activePaths.some(path => path.stepId === '__approval_event' && path.status === 'suspended' )) { console.log('Workflow waiting for approval'); // In a real scenario, you would wait for the actual event // Here we're simulating with a timeout setTimeout(async () => { try { await resumeWithEvent('approval', { approved: true, approverName: 'Auto Approver', }); } catch (error) { console.error('Failed to auto-resume workflow:', error); } }, 5000); // Wait 5 seconds before auto-approving } }); // Start the workflow await start({ triggerData: { requestId: 'auto-123' } }); ``` ## Related - [Event-Driven Workflows](./events.mdx) - [afterEvent()](./afterEvent.mdx) - [Suspend and Resume](../../workflows/suspend-and-resume.mdx) - [resume()](./resume.mdx) - [watch()](./watch.mdx) --- title: "Reference: Snapshots | Workflow State Persistence | Mastra Docs" description: "Technical reference on snapshots in Mastra - the serialized workflow state that enables suspend and resume functionality" --- # Snapshots Source: https://mastra.ai/docs/reference/workflows/snapshots In Mastra, a snapshot is a serializable representation of a workflow's complete execution state at a specific point in time. Snapshots capture all the information needed to resume a workflow from exactly where it left off, including: - The current state of each step in the workflow - The outputs of completed steps - The execution path taken through the workflow - Any suspended steps and their metadata - The remaining retry attempts for each step - Additional contextual data needed to resume execution Snapshots are automatically created and managed by Mastra whenever a workflow is suspended, and are persisted to the configured storage system. ## The Role of Snapshots in Suspend and Resume Snapshots are the key mechanism enabling Mastra's suspend and resume capabilities. When a workflow step calls `await suspend()`: 1. The workflow execution is paused at that exact point 2. The current state of the workflow is captured as a snapshot 3. The snapshot is persisted to storage 4. The workflow step is marked as "suspended" with a status of `'suspended'` 5. Later, when `resume()` is called on the suspended step, the snapshot is retrieved 6. The workflow execution resumes from exactly where it left off This mechanism provides a powerful way to implement human-in-the-loop workflows, handle rate limiting, wait for external resources, and implement complex branching workflows that may need to pause for extended periods. 
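For instance, here is a minimal, hedged sketch of that suspend-and-resume lifecycle using the `Step`, `createRun()`, `start()`, and `resume()` APIs covered in these reference pages (the `approval` step and workflow name are illustrative, not part of the library):

```typescript
import { Workflow, Step } from '@mastra/core/workflows';

// A step that pauses until a human approves
const approvalStep = new Step({
  id: 'approval',
  execute: async ({ context, suspend }) => {
    if (!context.inputData?.approved) {
      // Execution pauses here and a snapshot is persisted to storage
      await suspend({ reason: 'Awaiting manual approval' });
    }
    // Runs again after resume(); the resume context is available on inputData
    return { approved: true };
  },
});

const approvalWorkflow = new Workflow({ name: 'approval-workflow' });
approvalWorkflow.step(approvalStep).commit();

const { runId, start, resume } = approvalWorkflow.createRun();
await start({ triggerData: {} });

// Later, once the approval decision arrives, the snapshot is loaded
// from storage and execution continues from the suspend() call
await resume({ runId, stepId: 'approval', context: { approved: true } });
```

Because the suspended state lives in storage rather than in memory, the `resume()` call can happen much later, provided it runs against the same storage configuration.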
## Snapshot Anatomy A Mastra workflow snapshot consists of several key components: ```typescript export interface WorkflowRunState { // Core state info value: Record; // Current state machine value context: { // Workflow context steps: Record; triggerData: Record; // Initial trigger data attempts: Record; // Remaining retry attempts inputData: Record; // Initial input data }; activePaths: Array<{ // Currently active execution paths stepPath: string[]; stepId: string; status: string; }>; // Metadata runId: string; // Unique run identifier timestamp: number; // Time snapshot was created // For nested workflows and suspended steps childStates?: Record; // Child workflow states suspendedSteps?: Record; // Mapping of suspended steps } ``` ## How Snapshots Are Saved and Retrieved Mastra persists snapshots to the configured storage system. By default, snapshots are saved to a LibSQL database, but can be configured to use other storage providers like Upstash. The snapshots are stored in the `workflow_snapshots` table and identified uniquely by the `run_id` for the associated run when using libsql. Utilizing a persistence layer allows for the snapshots to be persisted across workflow runs, allowing for advanced human-in-the-loop functionality. Read more about [libsql storage](../storage/libsql.mdx) and [upstash storage](../storage/upstash.mdx) here. ### Saving Snapshots When a workflow is suspended, Mastra automatically persists the workflow snapshot with these steps: 1. The `suspend()` function in a step execution triggers the snapshot process 2. The `WorkflowInstance.suspend()` method records the suspended machine 3. `persistWorkflowSnapshot()` is called to save the current state 4. The snapshot is serialized and stored in the configured database in the `workflow_snapshots` table 5. The storage record includes the workflow name, run ID, and the serialized snapshot ### Retrieving Snapshots When a workflow is resumed, Mastra retrieves the persisted snapshot with these steps: 1. The `resume()` method is called with a specific step ID 2. The snapshot is loaded from storage using `loadWorkflowSnapshot()` 3. The snapshot is parsed and prepared for resumption 4. The workflow execution is recreated with the snapshot state 5. The suspended step is resumed, and execution continues ## Storage Options for Snapshots Mastra provides multiple storage options for persisting snapshots. A `storage` instance is configured on the `Mastra` class, and is used to setup a snapshot persistence layer for all workflows registered on the `Mastra` instance. This means that storage is shared across all workflows registered with the same `Mastra` instance. 
### LibSQL (Default) The default storage option is LibSQL, a SQLite-compatible database: ```typescript import { Mastra } from '@mastra/core/mastra'; import { DefaultStorage } from '@mastra/core/storage/libsql'; const mastra = new Mastra({ storage: new DefaultStorage({ config: { url: "file:storage.db", // Local file-based database // For production: // url: process.env.DATABASE_URL, // authToken: process.env.DATABASE_AUTH_TOKEN, } }), workflows: { weatherWorkflow, travelWorkflow, } }); ``` ### Upstash (Redis-Compatible) For serverless environments: ```typescript import { Mastra } from '@mastra/core/mastra'; import { UpstashStore } from "@mastra/upstash"; const mastra = new Mastra({ storage: new UpstashStore({ url: process.env.UPSTASH_URL, token: process.env.UPSTASH_TOKEN, }), workflows: { weatherWorkflow, travelWorkflow, } }); ``` ## Best Practices for Working with Snapshots 1. **Ensure Serializability**: Any data that needs to be included in the snapshot must be serializable (convertible to JSON). 2. **Minimize Snapshot Size**: Avoid storing large data objects directly in the workflow context. Instead, store references to them (like IDs) and retrieve the data when needed. 3. **Handle Resume Context Carefully**: When resuming a workflow, carefully consider what context to provide. This will be merged with the existing snapshot data. 4. **Set Up Proper Monitoring**: Implement monitoring for suspended workflows, especially long-running ones, to ensure they are properly resumed. 5. **Consider Storage Scaling**: For applications with many suspended workflows, ensure your storage solution is appropriately scaled. ## Advanced Snapshot Patterns ### Custom Snapshot Metadata When suspending a workflow, you can include custom metadata that can help when resuming: ```typescript await suspend({ reason: "Waiting for customer approval", requiredApprovers: ["manager", "finance"], requestedBy: currentUser, urgency: "high", expires: new Date(Date.now() + 7 * 24 * 60 * 60 * 1000) }); ``` This metadata is stored with the snapshot and available when resuming. ### Conditional Resumption You can implement conditional logic based on the suspend payload when resuming: ```typescript run.watch(async ({ context, activePaths }) => { for (const path of activePaths) { const approvalStep = context.steps?.approval; if (approvalStep?.status === 'suspended') { const payload = approvalStep.suspendPayload; if (payload.urgency === "high" && currentUser.role === "manager") { await resume({ stepId: 'approval', context: { approved: true, approver: currentUser.id }, }); } } } }); ``` ## Related - [Suspend Function Reference](./suspend.mdx) - [Resume Function Reference](./resume.mdx) - [Watch Function Reference](./watch.mdx) - [Suspend and Resume Guide](../../workflows/suspend-and-resume.mdx) --- title: "Reference: start() | Running Workflows | Mastra Docs" description: "Documentation for the `start()` method in workflows, which begins execution of a workflow run." --- # start() Source: https://mastra.ai/docs/reference/workflows/start The start function begins execution of a workflow run. It processes all steps in the defined workflow order, handling parallel execution, branching logic, and step dependencies. 
## Usage ```typescript copy showLineNumbers const { runId, start } = workflow.createRun(); const result = await start({ triggerData: { inputValue: 42 } }); ``` ## Parameters ### config ", description: "Initial data that matches the workflow's triggerSchema", isOptional: false } ]} /> ## Returns ", description: "Combined output from all completed workflow steps" }, { name: "status", type: "'completed' | 'error' | 'suspended'", description: "Final status of the workflow run" } ]} /> ## Error Handling The start function may throw several types of validation errors: ```typescript copy showLineNumbers try { const result = await start({ triggerData: data }); } catch (error) { if (error instanceof ValidationError) { console.log(error.type); // 'circular_dependency' | 'no_terminal_path' | 'unreachable_step' console.log(error.details); } } ``` ## Related - [Example: Creating a Workflow](../../../examples/workflows/creating-a-workflow.mdx) - [Example: Suspend and Resume](../../../examples/workflows/suspend-and-resume.mdx) - [createRun Reference](./createRun.mdx) - [Workflow Class Reference](./workflow.mdx) - [Step Class Reference](./step-class.mdx) ``` --- title: "Reference: Step | Building Workflows | Mastra Docs" description: Documentation for the Step class, which defines individual units of work within a workflow. --- # Step Source: https://mastra.ai/docs/reference/workflows/step-class The Step class defines individual units of work within a workflow, encapsulating execution logic, data validation, and input/output handling. ## Usage ```typescript const processOrder = new Step({ id: "processOrder", inputSchema: z.object({ orderId: z.string(), userId: z.string() }), outputSchema: z.object({ status: z.string(), orderId: z.string() }), execute: async ({ context, runId }) => { return { status: "processed", orderId: context.orderId }; } }); ``` ## Constructor Parameters ", description: "Static data to be merged with variables", required: false }, { name: "execute", type: "(params: ExecuteParams) => Promise", description: "Async function containing step logic", required: true } ]} /> ### ExecuteParams Promise", description: "Function to suspend step execution" }, { name: "mastra", type: "Mastra", description: "Access to Mastra instance" } ]} /> ## Related - [Workflow Reference](./workflow.mdx) - [Step Configuration Guide](../../workflows/steps.mdx) - [Control Flow Guide](../../workflows/control-flow.mdx) ``` --- title: "Reference: StepCondition | Building Workflows | Mastra" description: Documentation for the step condition class in workflows, which determines whether a step should execute based on the output of previous steps or trigger data. --- # StepCondition Source: https://mastra.ai/docs/reference/workflows/step-condition Conditions determine whether a step should execute based on the output of previous steps or trigger data. ## Usage There are three ways to specify conditions: function, query object, and simple path comparison. ### 1. Function Condition ```typescript copy showLineNumbers workflow.step(processOrder, { when: async ({ context }) => { const auth = context?.getStepResult<{status: string}>("auth"); return auth?.status === "authenticated"; } }); ``` ### 2. Query Object ```typescript copy showLineNumbers workflow.step(processOrder, { when: { ref: { step: 'auth', path: 'status' }, query: { $eq: 'authenticated' } } }); ``` ### 3. 
Simple Path Comparison ```typescript copy showLineNumbers workflow.step(processOrder, { when: { "auth.status": "authenticated" } }); ``` Based on the type of condition, the workflow runner will try to match the condition to one of these types. 1. Simple Path Condition (when there's a dot in the key) 2. Base/Query Condition (when there's a 'ref' property) 3. Function Condition (when it's an async function) ## StepCondition ", description: "MongoDB-style query using sift operators ($eq, $gt, etc)", isOptional: false } ]} /> ## Query The Query object provides MongoDB-style query operators for comparing values from previous steps or trigger data. It supports basic comparison operators like `$eq`, `$gt`, `$lt` as well as array operators like `$in` and `$nin`, and can be combined with and/or operators for complex conditions. This query syntax allows for readable conditional logic for determining whether a step should execute. ## Related - [Step Options Reference](./step-options.mdx) - [Step Function Reference](./step-function.mdx) - [Control Flow Guide](../../workflows/control-flow.mdx) ``` --- title: "Reference: Workflow.step() | Workflows | Mastra Docs" description: Documentation for the `.step()` method in workflows, which adds a new step to the workflow. --- # Workflow.step() Source: https://mastra.ai/docs/reference/workflows/step-function The `.step()` method adds a new step to the workflow, optionally configuring its variables and execution conditions. ## Usage ```typescript workflow.step({ id: "stepTwo", outputSchema: z.object({ result: z.number() }), execute: async ({ context }) => { return { result: 42 }; } }); ``` ## Parameters ### StepDefinition Promise", description: "Function containing step logic", isOptional: false } ]} /> ### StepOptions ", description: "Map of variable names to their source references", isOptional: true }, { name: "when", type: "StepCondition", description: "Condition that must be met for step to execute", isOptional: true } ]} /> ## Related - [Basic Usage with Step Instance](../../workflows/steps.mdx) - [Step Class Reference](./step-class.mdx) - [Workflow Class Reference](./workflow.mdx) - [Control Flow Guide](../../workflows/control-flow.mdx) ``` --- title: "Reference: StepOptions | Building Workflows | Mastra Docs" description: Documentation for the step options in workflows, which control variable mapping, execution conditions, and other runtime behavior. --- # StepOptions Source: https://mastra.ai/docs/reference/workflows/step-options Configuration options for workflow steps that control variable mapping, execution conditions, and other runtime behavior. 
## Usage ```typescript workflow.step(processOrder, { variables: { orderId: { step: 'trigger', path: 'id' }, userId: { step: 'auth', path: 'user.id' } }, when: { ref: { step: 'auth', path: 'status' }, query: { $eq: 'authenticated' } } }); ``` ## Properties ", description: "Maps step input variables to values from other steps", isOptional: true }, { name: "when", type: "StepCondition", description: "Condition that must be met for step execution", isOptional: true } ]} /> ### VariableRef ## Related - [Path Comparison](../../workflows/control-flow.mdx#path-comparison) - [Step Function Reference](./step-function.mdx) - [Step Class Reference](./step-class.mdx) - [Workflow Class Reference](./workflow.mdx) - [Control Flow Guide](../../workflows/control-flow.mdx) ``` --- title: "Step Retries | Error Handling | Mastra Docs" description: "Automatically retry failed steps in Mastra workflows with configurable retry policies." --- # Step Retries Source: https://mastra.ai/docs/reference/workflows/step-retries Mastra provides built-in retry mechanisms to handle transient failures in workflow steps. This allows workflows to recover gracefully from temporary issues without requiring manual intervention. ## Overview When a step in a workflow fails (throws an exception), Mastra can automatically retry the step execution based on a configurable retry policy. This is useful for handling: - Network connectivity issues - Service unavailability - Rate limiting - Temporary resource constraints - Other transient failures ## Default Behavior By default, steps do not retry when they fail. This means: - A step will execute once - If it fails, it will immediately mark the step as failed - The workflow will continue to execute any subsequent steps that don't depend on the failed step ## Configuration Options Retries can be configured at two levels: ### 1. Workflow-level Configuration You can set a default retry configuration for all steps in a workflow: ```typescript const workflow = new Workflow({ name: 'my-workflow', retryConfig: { attempts: 3, // Number of retries (in addition to the initial attempt) delay: 1000, // Delay between retries in milliseconds }, }); ``` ### 2. Step-level Configuration You can also configure retries on individual steps, which will override the workflow-level configuration for that specific step: ```typescript const fetchDataStep = new Step({ id: 'fetchData', execute: async () => { // Fetch data from external API }, retryConfig: { attempts: 5, // This step will retry up to 5 times delay: 2000, // With a 2-second delay between retries }, }); ``` ## Retry Parameters The `retryConfig` object supports the following parameters: | Parameter | Type | Default | Description | |-----------|------|---------|-------------| | `attempts` | number | 0 | The number of retry attempts (in addition to the initial attempt) | | `delay` | number | 1000 | Time in milliseconds to wait between retries | ## How Retries Work When a step fails, Mastra's retry mechanism: 1. Checks if the step has retry attempts remaining 2. If attempts remain: - Decrements the attempt counter - Transitions the step to a "waiting" state - Waits for the configured delay period - Retries the step execution 3. If no attempts remain or all attempts have been exhausted: - Marks the step as "failed" - Continues workflow execution (for steps that don't depend on the failed step) During retry attempts, the workflow execution remains active but paused for the specific step that is being retried. 
## Examples ### Basic Retry Example ```typescript import { Workflow, Step } from '@mastra/core/workflows'; // Define a step that might fail const unreliableApiStep = new Step({ id: 'callUnreliableApi', execute: async () => { // Simulate an API call that might fail const random = Math.random(); if (random < 0.7) { throw new Error('API call failed'); } return { data: 'API response data' }; }, retryConfig: { attempts: 3, // Retry up to 3 times delay: 2000, // Wait 2 seconds between attempts }, }); // Create a workflow with the unreliable step const workflow = new Workflow({ name: 'retry-demo-workflow', }); workflow .step(unreliableApiStep) .then(processResultStep) .commit(); ``` ### Workflow-level Retries with Step Override ```typescript import { Workflow, Step } from '@mastra/core/workflows'; // Create a workflow with default retry configuration const workflow = new Workflow({ name: 'multi-retry-workflow', retryConfig: { attempts: 2, // All steps will retry twice by default delay: 1000, // With a 1-second delay }, }); // This step uses the workflow's default retry configuration const standardStep = new Step({ id: 'standardStep', execute: async () => { // Some operation that might fail }, }); // This step overrides the workflow's retry configuration const criticalStep = new Step({ id: 'criticalStep', execute: async () => { // Critical operation that needs more retry attempts }, retryConfig: { attempts: 5, // Override with 5 retry attempts delay: 5000, // And a longer 5-second delay }, }); // This step disables retries const noRetryStep = new Step({ id: 'noRetryStep', execute: async () => { // Operation that should not retry }, retryConfig: { attempts: 0, // Explicitly disable retries }, }); workflow .step(standardStep) .then(criticalStep) .then(noRetryStep) .commit(); ``` ## Monitoring Retries You can monitor retry attempts in your logs. Mastra logs retry-related events at the `debug` level: ``` [DEBUG] Step fetchData failed (runId: abc-123) [DEBUG] Attempt count for step fetchData: 2 remaining attempts (runId: abc-123) [DEBUG] Step fetchData waiting (runId: abc-123) [DEBUG] Step fetchData finished waiting (runId: abc-123) [DEBUG] Step fetchData pending (runId: abc-123) ``` ## Best Practices 1. **Use Retries for Transient Failures**: Only configure retries for operations that might experience transient failures. For deterministic errors (like validation failures), retries won't help. 2. **Set Appropriate Delays**: Consider using longer delays for external API calls to allow time for services to recover. 3. **Limit Retry Attempts**: Don't set extremely high retry counts as this could cause workflows to run for excessive periods during outages. 4. **Implement Idempotent Operations**: Ensure your step's `execute` function is idempotent (can be called multiple times without side effects) since it may be retried. 5. **Consider Backoff Strategies**: For more advanced scenarios, consider implementing exponential backoff in your step's logic for operations that might be rate-limited. ## Related - [Step Class Reference](./step-class.mdx) - [Workflow Configuration](./workflow.mdx) - [Error Handling in Workflows](../../workflows/error-handling.mdx) --- title: "Reference: suspend() | Control Flow | Mastra Docs" description: "Documentation for the suspend function in Mastra workflows, which pauses execution until resumed." --- # suspend() Source: https://mastra.ai/docs/reference/workflows/suspend Pauses workflow execution at the current step until explicitly resumed. 
The workflow state is persisted and can be continued later. ## Usage Example ```typescript const approvalStep = new Step({ id: "needsApproval", execute: async ({ context, suspend }) => { if (context.steps.amount > 1000) { await suspend(); } return { approved: true }; } }); ``` ## Parameters ", description: "Optional data to store with the suspended state", isOptional: true } ]} /> ## Returns ", type: "Promise", description: "Resolves when the workflow is successfully suspended" } ]} /> ## Additional Examples Suspend with metadata: ```typescript const reviewStep = new Step({ id: "review", execute: async ({ context, suspend }) => { await suspend({ reason: "Needs manager approval", requestedBy: context.user }); return { reviewed: true }; } }); ``` Monitor suspended state: ```typescript run.watch((state) => { if (state.status === "SUSPENDED") { notifyReviewers(state.metadata); } }); ``` ### Related - [Suspend & Resume Workflows](../../workflows/suspend-and-resume.mdx) - [.resume()](./resume.mdx) - [.watch()](./watch.mdx) --- title: "Reference: Workflow.then() | Building Workflows | Mastra Docs" description: Documentation for the `.then()` method in workflows, which creates sequential dependencies between steps. --- # Workflow.then() Source: https://mastra.ai/docs/reference/workflows/then The `.then()` method creates a sequential dependency between workflow steps, ensuring steps execute in a specific order. ## Usage ```typescript workflow .step(stepOne) .then(stepTwo) .then(stepThree); ``` ## Parameters ## Returns ## Validation When using `then`: - The previous step must exist in the workflow - Steps cannot form circular dependencies - Each step can only appear once in a sequential chain ## Error Handling ```typescript try { workflow .step(stepA) .then(stepB) .then(stepA) // Will throw error - circular dependency .commit(); } catch (error) { if (error instanceof ValidationError) { console.log(error.type); // 'circular_dependency' console.log(error.details); } } ``` ## Related - [step Reference](./step-class.mdx) - [after Reference](./after.mdx) - [Sequential Steps Example](../../../examples/workflows/sequential-steps.mdx) - [Control Flow Guide](../../workflows/control-flow.mdx) ``` --- title: "Reference: Workflow.until() | Looping in Workflows | Mastra Docs" description: "Documentation for the `.until()` method in Mastra workflows, which repeats a step until a specified condition becomes true." --- # Workflow.until() Source: https://mastra.ai/docs/reference/workflows/until The `.until()` method repeats a step until a specified condition becomes true. This creates a loop that continues executing the specified step until the condition is satisfied. ## Usage ```typescript workflow .step(incrementStep) .until(condition, incrementStep) .then(finalStep); ``` ## Parameters ## Condition Types ### Function Condition You can use a function that returns a boolean: ```typescript workflow .step(incrementStep) .until(async ({ context }) => { const result = context.getStepResult<{ value: number }>('increment'); return (result?.value ?? 
0) >= 10; // Stop when value reaches or exceeds 10 }, incrementStep) .then(finalStep); ``` ### Reference Condition You can use a reference-based condition with comparison operators: ```typescript workflow .step(incrementStep) .until( { ref: { step: incrementStep, path: 'value' }, query: { $gte: 10 }, // Stop when value is greater than or equal to 10 }, incrementStep ) .then(finalStep); ``` ## Comparison Operators When using reference-based conditions, you can use these comparison operators: | Operator | Description | Example | |----------|-------------|---------| | `$eq` | Equal to | `{ $eq: 10 }` | | `$ne` | Not equal to | `{ $ne: 0 }` | | `$gt` | Greater than | `{ $gt: 5 }` | | `$gte` | Greater than or equal to | `{ $gte: 10 }` | | `$lt` | Less than | `{ $lt: 20 }` | | `$lte` | Less than or equal to | `{ $lte: 15 }` | ## Returns ## Example ```typescript import { Workflow, Step } from '@mastra/core'; import { z } from 'zod'; // Create a step that increments a counter const incrementStep = new Step({ id: 'increment', description: 'Increments the counter by 1', outputSchema: z.object({ value: z.number(), }), execute: async ({ context }) => { // Get current value from previous execution or start at 0 const currentValue = context.getStepResult<{ value: number }>('increment')?.value || context.getStepResult<{ startValue: number }>('trigger')?.startValue || 0; // Increment the value const value = currentValue + 1; console.log(`Incrementing to ${value}`); return { value }; }, }); // Create a final step const finalStep = new Step({ id: 'final', description: 'Final step after loop completes', execute: async ({ context }) => { const finalValue = context.getStepResult<{ value: number }>('increment')?.value; console.log(`Loop completed with final value: ${finalValue}`); return { finalValue }; }, }); // Create the workflow const counterWorkflow = new Workflow({ name: 'counter-workflow', triggerSchema: z.object({ startValue: z.number(), targetValue: z.number(), }), }); // Configure the workflow with an until loop counterWorkflow .step(incrementStep) .until(async ({ context }) => { const targetValue = context.triggerData.targetValue; const currentValue = context.getStepResult<{ value: number }>('increment')?.value ?? 0; return currentValue >= targetValue; }, incrementStep) .then(finalStep) .commit(); // Execute the workflow const run = counterWorkflow.createRun(); const result = await run.start({ triggerData: { startValue: 0, targetValue: 5 } }); // Will increment from 0 to 5, then stop and execute finalStep ``` ## Related - [.while()](./while.mdx) - Loop while a condition is true - [Control Flow Guide](../../workflows/control-flow.mdx#loop-control-with-until-and-while) - [Workflow Class Reference](./workflow.mdx) --- title: "Reference: run.watch() | Workflows | Mastra Docs" description: Documentation for the `.watch()` method in workflows, which monitors the status of a workflow run. --- # run.watch() Source: https://mastra.ai/docs/reference/workflows/watch The `.watch()` function subscribes to state changes on a mastra run, allowing you to monitor execution progress and react to state updates. 
## Usage Example ```typescript import { Workflow } from "@mastra/core/workflows"; const workflow = new Workflow({ name: "document-processor" }); const run = workflow.createRun(); // Subscribe to state changes const unsubscribe = run.watch((state) => { console.log('Current step:', state.currentStep); console.log('Step outputs:', state.stepOutputs); }); // Run the workflow await run.start({ input: { text: "Process this document" } }); // Stop watching unsubscribe(); ``` ## Parameters void", description: "Function called whenever the workflow state changes", isOptional: false } ]} /> ### WorkflowState Properties ", description: "Outputs from completed workflow steps", isOptional: false }, { name: "status", type: "'running' | 'completed' | 'failed'", description: "Current status of the workflow", isOptional: false }, { name: "error", type: "Error | null", description: "Error object if workflow failed", isOptional: true } ]} /> ## Returns void", description: "Function to stop watching workflow state changes" } ]} /> ## Additional Examples Monitor specific step completion: ```typescript run.watch((state) => { if (state.currentStep === 'processDocument') { console.log('Document processing output:', state.stepOutputs.processDocument); } }); ``` Error handling: ```typescript run.watch((state) => { if (state.status === 'failed') { console.error('Workflow failed:', state.error); // Implement error recovery logic } }); ``` ### Related - [Workflow Creation](/docs/reference/workflows/createRun) - [Step Configuration](/docs/reference/workflows/step-class) --- title: "Reference: Workflow.while() | Looping in Workflows | Mastra Docs" description: "Documentation for the `.while()` method in Mastra workflows, which repeats a step as long as a specified condition remains true." --- # Workflow.while() Source: https://mastra.ai/docs/reference/workflows/while The `.while()` method repeats a step as long as a specified condition remains true. This creates a loop that continues executing the specified step until the condition becomes false. ## Usage ```typescript workflow .step(incrementStep) .while(condition, incrementStep) .then(finalStep); ``` ## Parameters ## Condition Types ### Function Condition You can use a function that returns a boolean: ```typescript workflow .step(incrementStep) .while(async ({ context }) => { const result = context.getStepResult<{ value: number }>('increment'); return (result?.value ?? 
0) < 10; // Continue as long as value is less than 10 }, incrementStep) .then(finalStep); ``` ### Reference Condition You can use a reference-based condition with comparison operators: ```typescript workflow .step(incrementStep) .while( { ref: { step: incrementStep, path: 'value' }, query: { $lt: 10 }, // Continue as long as value is less than 10 }, incrementStep ) .then(finalStep); ``` ## Comparison Operators When using reference-based conditions, you can use these comparison operators: | Operator | Description | Example | |----------|-------------|---------| | `$eq` | Equal to | `{ $eq: 10 }` | | `$ne` | Not equal to | `{ $ne: 0 }` | | `$gt` | Greater than | `{ $gt: 5 }` | | `$gte` | Greater than or equal to | `{ $gte: 10 }` | | `$lt` | Less than | `{ $lt: 20 }` | | `$lte` | Less than or equal to | `{ $lte: 15 }` | ## Returns ## Example ```typescript import { Workflow, Step } from '@mastra/core'; import { z } from 'zod'; // Create a step that increments a counter const incrementStep = new Step({ id: 'increment', description: 'Increments the counter by 1', outputSchema: z.object({ value: z.number(), }), execute: async ({ context }) => { // Get current value from previous execution or start at 0 const currentValue = context.getStepResult<{ value: number }>('increment')?.value || context.getStepResult<{ startValue: number }>('trigger')?.startValue || 0; // Increment the value const value = currentValue + 1; console.log(`Incrementing to ${value}`); return { value }; }, }); // Create a final step const finalStep = new Step({ id: 'final', description: 'Final step after loop completes', execute: async ({ context }) => { const finalValue = context.getStepResult<{ value: number }>('increment')?.value; console.log(`Loop completed with final value: ${finalValue}`); return { finalValue }; }, }); // Create the workflow const counterWorkflow = new Workflow({ name: 'counter-workflow', triggerSchema: z.object({ startValue: z.number(), targetValue: z.number(), }), }); // Configure the workflow with a while loop counterWorkflow .step(incrementStep) .while( async ({ context }) => { const targetValue = context.triggerData.targetValue; const currentValue = context.getStepResult<{ value: number }>('increment')?.value ?? 0; return currentValue < targetValue; }, incrementStep ) .then(finalStep) .commit(); // Execute the workflow const run = counterWorkflow.createRun(); const result = await run.start({ triggerData: { startValue: 0, targetValue: 5 } }); // Will increment from 0 to 4, then stop and execute finalStep ``` ## Related - [.until()](./until.mdx) - Loop until a condition becomes true - [Control Flow Guide](../../workflows/control-flow.mdx#loop-control-with-until-and-while) - [Workflow Class Reference](./workflow.mdx) --- title: "Reference: Workflow Class | Building Workflows | Mastra Docs" description: Documentation for the Workflow class in Mastra, which enables you to create state machines for complex sequences of operations with conditional branching and data validation. --- # Workflow Class Source: https://mastra.ai/docs/reference/workflows/workflow The Workflow class enables you to create state machines for complex sequences of operations with conditional branching and data validation. 
```ts copy import { Workflow } from "@mastra/core/workflows"; const workflow = new Workflow({ name: "my-workflow" }); ``` ## API Reference ### Constructor ", isOptional: true, description: "Optional logger instance for workflow execution details", }, { name: "steps", type: "Step[]", description: "Array of steps to include in the workflow", }, { name: "triggerSchema", type: "z.Schema", description: "Optional schema for validating workflow trigger data", }, ]} /> ### Core Methods #### `step()` Adds a [Step](./step-class.mdx) to the workflow, including transitions to other steps. Returns the workflow instance for chaining. [Learn more about steps](./step-class.mdx). #### `commit()` Validates and finalizes the workflow configuration. Must be called after adding all steps. #### `execute()` Executes the workflow with optional trigger data. Typed based on the [trigger schema](./workflow.mdx#trigger-schemas). ## Trigger Schemas Trigger schemas validate the initial data passed to a workflow using Zod. ```ts showLineNumbers copy const workflow = new Workflow({ name: "order-process", triggerSchema: z.object({ orderId: z.string(), customer: z.object({ id: z.string(), email: z.string().email(), }), }), }); ``` The schema: - Validates data passed to `execute()` - Provides TypeScript types for your workflow input ## Validation Workflow validation happens at two key times: ### 1. At Commit Time When you call `.commit()`, the workflow validates: ```ts showLineNumbers copy workflow .step('step1', {...}) .step('step2', {...}) .commit(); // Validates workflow structure ``` - Circular dependencies between steps - Terminal paths (every path must end) - Unreachable steps - Variable references to non-existent steps - Duplicate step IDs ### 2. During Execution When you call `start()`, it validates: ```ts showLineNumbers copy const { runId, start } = workflow.createRun(); // Validates trigger data against schema await start({ triggerData: { orderId: "123", customer: { id: "cust_123", email: "invalid-email", // Will fail validation }, }, }); ``` - Trigger data against trigger schema - Each step's input data against its inputSchema - Variable paths exist in referenced step outputs - Required variables are present ## Workflow Status A workflow's status indicates its current execution state. The possible values are: ### Example: Handling Different Statuses ```typescript showLineNumbers copy const { runId, start, watch } = workflow.createRun(); watch(async ({ status }) => { switch (status) { case "SUSPENDED": // Handle suspended state break; case "COMPLETED": // Process results break; case "FAILED": // Handle error state break; } }); await start({ triggerData: data }); ``` ## Error Handling ```ts showLineNumbers copy try { const { runId, start, watch, resume } = workflow.createRun(); await start({ triggerData: data }); } catch (error) { if (error instanceof ValidationError) { // Handle validation errors console.log(error.type); // 'circular_dependency' | 'no_terminal_path' | 'unreachable_step' console.log(error.details); // { stepId?: string, path?: string[] } } } ``` ## Passing Context Between Steps Steps can access data from previous steps in the workflow through the context object. Each step receives the accumulated context from all previous steps that have executed. 
```typescript showLineNumbers copy workflow .step({ id: 'getData', execute: async ({ context }) => { return { data: { id: '123', value: 'example' } }; } }) .step({ id: 'processData', execute: async ({ context }) => { // Access data from previous step through context.steps const previousData = context.steps.getData.output.data; // Process previousData.id and previousData.value } }); ``` The context object: - Contains results from all completed steps in `context.steps` - Provides access to step outputs through `context.steps.[stepId].output` - Is typed based on step output schemas - Is immutable to ensure data consistency ## Related Documentation - [Step](./step-class.mdx) - [.then()](./then.mdx) - [.step()](./step-function.mdx) - [.after()](./after.mdx) --- title: Storage in Mastra | Mastra Docs description: Overview of Mastra's storage system and data persistence capabilities. --- import { PropertiesTable } from '@/components/properties-table' import { SchemaTable } from '@/components/schema-table' # MastraStorage Source: https://mastra.ai/docs/storage/overview MastraStorage provides a unified interface for managing - Workflow Tracking, allowing users to suspend and resume workflow runs. - Memory, threads and messages per resourceId in your application - Traces - OpenTelemetry traces from all components of Mastra - Evaluation Datasets - as evaluations are run, the scores and reasons are stored in datasets. Diagram showing storage in Mastra Mastra provides different storage providers, but you can treat them as interchangeable. Eg, you could use libsql in development but postgres in production, and your code will work the same both ways. ## Data Schema ### Memory **Messages** Stores conversation messages and their metadata. Each message belongs to a thread and contains the actual content along with metadata about the sender role and message type. **Threads** Groups related messages together and associates them with a resource. Contains metadata about the conversation. ## Workflows Preserves workflow execution states and enables resuming workflows: ### Evaluation Datasets Stores evaluation results from running metrics against agent outputs: ### Traces Captures OpenTelemetry traces for monitoring and debugging: ## Configuration Mastra can be configured with a default storage option: ```typescript import { Mastra } from '@mastra/core/mastra'; import { DefaultStorage } from '@mastra/core/storage/libsql'; const mastra = new Mastra({ storage: new DefaultStorage({ config: { url: "file:storage.db", } }), }); ``` ## Other Storage Providers If you want to use other storage, check out the following providers: - [LibSQL Storage](../reference/storage/libsql.mdx) - [PostgreSQL Storage](../reference/storage/postgresql.mdx) - [Upstash Storage](../reference/storage/upstash.mdx) --- title: Voice in Mastra | Mastra Docs description: Overview of voice capabilities in Mastra, including text-to-speech, speech-to-text, and real-time voice-to-voice interactions. --- # Voice in Mastra Source: https://mastra.ai/docs/voice/overview Mastra's Voice system provides a unified interface for voice interactions, enabling text-to-speech (TTS), speech-to-text (STT), and real-time voice-to-voice capabilities in your applications. 
## Key Features - Standardized API across different voice providers - Support for multiple voice services - Voice-to-voice interactions using events for continuous audio streaming - Composable voice providers for mixing TTS and STT services ## Adding Voice to Agents To learn how to integrate voice capabilities into your agents, check out the [Adding Voice to Agents](../agents/03-adding-voice.mdx) documentation. This section covers how to use both single and multiple voice providers, as well as real-time interactions. ## Example of Using a Single Voice Provider ```typescript import { OpenAIVoice } from "@mastra/voice-openai"; // Initialize OpenAI voice for TTS const voice = new OpenAIVoice({ speechModel: { name: "tts-1-hd", // Specify the TTS model apiKey: process.env.OPENAI_API_KEY, // Your OpenAI API key }, }); // Convert text to speech const audioStream = await voice.speak("Hello! How can I assist you today?", { speaker: "default", // Optional: specify a speaker }); // Play the audio response playAudio(audioStream); ``` ## Example of Using Multiple Voice Providers This example demonstrates how to create and use two different voice providers in Mastra: OpenAI for speech-to-text (STT) and PlayAI for text-to-speech (TTS). Start by creating instances of the voice providers with any necessary configuration. ```typescript import { OpenAIVoice } from "@mastra/voice-openai"; import { PlayAIVoice } from "@mastra/voice-playai"; import { CompositeVoice } from "@mastra/core/voice"; // Initialize OpenAI voice for STT const listeningProvider = new OpenAIVoice({ listeningModel: { name: "whisper-1", apiKey: process.env.OPENAI_API_KEY, }, }); // Initialize PlayAI voice for TTS const speakingProvider = new PlayAIVoice({ speechModel: { name: "playai-voice", apiKey: process.env.PLAYAI_API_KEY, }, }); // Combine the providers using CompositeVoice const voice = new CompositeVoice({ listeningProvider, speakingProvider, }); // Implement voice interactions using the combined voice provider const audioStream = getMicrophoneStream(); // Assume this function gets audio input const transcript = await voice.listen(audioStream); // Log the transcribed text console.log("Transcribed text:", transcript); // Convert text to speech const responseAudio = await voice.speak(`You said: ${transcript}`, { speaker: "default", // Optional: specify a speaker }); // Play the audio response playAudio(responseAudio); ``` ## Real-time Capabilities Many voice providers support real-time speech-to-speech interactions through WebSocket connections, enabling: - Live voice conversations with AI - Streaming transcription - Real-time text-to-speech synthesis - Tool usage during conversations ## Voice Configuration Voice providers can be configured with different models and options: ```typescript const voice = new OpenAIVoice({ speechModel: { name: "tts-1-hd", apiKey: process.env.OPENAI_API_KEY }, listeningModel: { name: "whisper-1" }, speaker: "alloy" }); ``` ## Available Voice Providers Mastra supports a variety of voice providers, including: - OpenAI - PlayAI - Murf - ElevenLabs - [More](https://github.com/mastra-ai/mastra/tree/main/voice) ## More Resources - [CompositeVoice](../reference/voice/composite-voice.mdx) - [MastraVoice](../reference/voice/mastra-voice.mdx) - [OpenAI Voice](../reference/voice/openai.mdx) - [PlayAI Voice](../reference/voice/playai.mdx) - [Voice Examples](../../examples/voice/) --- title: Speech-to-Text (STT) in Mastra | Mastra Docs description: Overview of Speech-to-Text capabilities in Mastra, including 
configuration, usage, and integration with voice providers. --- # Speech-to-Text (STT) Source: https://mastra.ai/docs/voice/speech-to-text Speech-to-Text (STT) in Mastra provides a standardized interface for converting audio input into text across multiple service providers. This section covers STT configuration and usage. Check out the [Adding Voice to Agents](../agents/03-adding-voice.mdx) documentation to learn how to use STT in an agent. ## Speech Configuration To use STT in Mastra, you need to provide a `listeningModel` configuration when initializing the voice provider. This configuration includes parameters such as: - **`name`**: The specific STT model to use. - **`apiKey`**: Your API key for authentication. - **Provider-specific options**: Additional options that may be required or supported by the specific voice provider. **Note**: All of these parameters are optional. You can use the default settings provided by the voice provider, which will depend on the specific provider you are using. ### Example Configuration ```typescript const voice = new OpenAIVoice({ listeningModel: { name: "whisper-1", apiKey: process.env.OPENAI_API_KEY, }, }); // If using default settings the configuration can be simplified to: const voice = new OpenAIVoice(); ``` ## Using the Listen Method The primary method for STT is the `listen()` method, which converts spoken audio into text. Here's how to use it: ```typescript const audioStream = getMicrophoneStream(); // Assume this function gets audio input const transcript = await voice.listen(audioStream, { filetype: "m4a", // Optional: specify the audio file type }); ``` **Note**: If you are using a voice-to-voice provider, such as `OpenAIRealtimeVoice`, the `listen()` method will emit a "writing" event instead of returning a transcript directly. --- title: Text-to-Speech (TTS) in Mastra | Mastra Docs description: Overview of Text-to-Speech capabilities in Mastra, including configuration, usage, and integration with voice providers. --- # Text-to-Speech (TTS) Source: https://mastra.ai/docs/voice/text-to-speech Text-to-Speech (TTS) in Mastra offers a unified API for synthesizing spoken audio from text using various provider services. This section explains TTS configuration options and implementation methods. For integrating TTS capabilities with agents, refer to the [Adding Voice to Agents](../agents/03-adding-voice.mdx) documentation. ## Speech Configuration To use TTS in Mastra, you need to provide a `speechModel` configuration when initializing the voice provider. This configuration includes parameters such as: - **`name`**: The specific TTS model to use. - **`apiKey`**: Your API key for authentication. - **Provider-specific options**: Additional options that may be required or supported by the specific voice provider. The **`speaker`** option is specified separately and allows you to select different voices for speech synthesis. **Note**: All of these parameters are optional. You can use the default settings provided by the voice provider, which will depend on the specific provider you are using. ### Example Configuration ```typescript const voice = new OpenAIVoice({ speechModel: { name: "tts-1-hd", apiKey: process.env.OPENAI_API_KEY }, speaker: "alloy", }); // If using default settings the configuration can be simplified to: const voice = new OpenAIVoice(); ``` ## Using the Speak Method The primary method for TTS is the `speak()` method, which converts text to speech. 
This method accepts options that allow you to specify the speaker and other provider-specific options. Here's how to use it:

```typescript
const readableStream = await voice.speak("Hello, world!", {
  speaker: "default", // Optional: specify a speaker
  properties: {
    speed: 1.0, // Optional: adjust speech speed
    pitch: "default", // Optional: specify pitch if supported
  },
});
```

**Note**: If you are using a voice-to-voice provider, such as `OpenAIRealtimeVoice`, the `speak()` method will emit a "speaking" event instead of returning a Readable Stream.

---
title: Voice-to-Voice Capabilities in Mastra | Mastra Docs
description: Overview of voice-to-voice capabilities in Mastra, including real-time interactions and event-driven architecture.
---

# Voice-to-Voice Capabilities in Mastra

Source: https://mastra.ai/docs/voice/voice-to-voice

## Introduction

Voice-to-Voice in Mastra provides a standardized interface for real-time speech-to-speech interactions across multiple service providers. This section covers configuration, event-driven architecture, and implementation methods for creating conversational voice experiences.

For integrating Voice-to-Voice capabilities with agents, refer to the [Adding Voice to Agents](../agents/03-adding-voice.mdx) documentation.

## Real-time Voice Interactions

Mastra's real-time voice system enables continuous bidirectional audio communication through an event-driven architecture. Unlike separate TTS and STT operations, real-time voice maintains an open connection that processes speech continuously in both directions.

### Example Implementation

```typescript
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";
import { OpenAIRealtimeVoice } from "@mastra/voice-openai-realtime";

const agent = new Agent({
  name: 'Agent',
  instructions: `You are a helpful assistant with real-time voice capabilities.`,
  model: openai('gpt-4o'),
  voice: new OpenAIRealtimeVoice(),
});

// Connect to the voice service
await agent.voice.connect();

// Listen for agent audio responses
agent.voice.on('speaking', ({ audio }) => {
  playAudio(audio);
});

// Initiate the conversation
await agent.voice.speak('How can I help you today?');

// Send continuous audio from the microphone
// (playAudio and getMicrophoneStream are assumed helper functions, as in the voice overview examples)
const micStream = getMicrophoneStream();
await agent.voice.send(micStream);
```

## Event-Driven Architecture

Mastra's voice-to-voice implementation is built on an event-driven architecture. Developers register event listeners to handle incoming audio progressively, allowing for more responsive interactions than waiting for complete audio responses.

## Configuration

When initializing a voice-to-voice provider, you can provide configuration options to customize its behavior:

### Constructor Options

- **`chatModel`**: Configuration for the OpenAI realtime model.
  - **`apiKey`**: Your OpenAI API key. Falls back to the `OPENAI_API_KEY` environment variable.
  - **`model`**: The model ID to use for real-time voice interactions (e.g., `gpt-4o-mini-realtime`).
  - **`options`**: Additional options for the realtime client, such as session configuration.
- **`speaker`**: The default voice ID for speech synthesis. This allows you to specify which voice to use for the speech output.
### Example Configuration ```typescript const voice = new OpenAIRealtimeVoice({ chatModel: { apiKey: 'your-openai-api-key', model: 'gpt-4o-mini-realtime', options: { sessionConfig: { turn_detection: { type: 'server_vad', threshold: 0.6, silence_duration_ms: 1200, }, }, }, }, speaker: 'alloy', // Default voice }); // If using default settings the configuration can be simplified to: const voice = new OpenAIRealtimeVoice(); ``` ## Core Methods The `OpenAIRealtimeVoice` class provides the following core methods for voice interactions: ### connect() Establishes a connection to the OpenAI realtime service. **Usage:** ```typescript await voice.connect(); ``` **Notes:** - Must be called before using any other interaction methods - Returns a Promise that resolves when the connection is established ### speak(text, options?) Emits a speaking event using the configured voice model. **Parameters:** - `text`: String content to be spoken - `options`: Optional configuration object - `speaker`: Voice ID to use (overrides default) - `properties`: Additional provider-specific properties **Usage:** ```typescript voice.speak('Hello, how can I help you today?', { speaker: 'alloy' }); ``` **Notes:** - Emits 'speaking' events rather than returning an audio stream ### listen(audioInput, options?) Processes audio input for speech recognition. **Parameters:** - `audioInput`: Readable stream of audio data - `options`: Optional configuration object - `filetype`: Audio format (default: 'mp3') - Additional provider-specific options **Usage:** ```typescript const audioData = getMicrophoneStream(); voice.listen(audioData, { filetype: 'wav' }); ``` **Notes:** - Emits 'writing' events with transcribed text ### send(audioStream) Streams audio data in real-time for continuous processing. **Parameters:** - `audioStream`: Readable stream of audio data **Usage:** ```typescript const micStream = getMicrophoneStream(); await voice.send(micStream); ``` **Notes:** - Used for continuous audio streaming scenarios like live microphone input - Returns a Promise that resolves when the stream is accepted ### answer(params) Sends a response to the OpenAI Realtime API. **Parameters:** - `params`: The parameters object - `options`: Configuration options for the response - `content`: Text content of the response - `voice`: Voice ID to use for the response **Usage:** ```typescript await voice.answer({ options: { content: "Hello, how can I help you today?", voice: "alloy" } }); ``` **Notes:** - Triggers a response to the real-time session - Returns a Promise that resolves when the response has been sent ## Utility Methods ### updateConfig(config) Updates the session configuration for the voice instance. **Parameters:** - `config`: New session configuration object **Usage:** ```typescript voice.updateConfig({ turn_detection: { type: 'server_vad', threshold: 0.6, silence_duration_ms: 1200, } }); ``` ### addTools(tools) Adds a set of tools to the voice instance. **Parameters:** - `tools`: Array of tool objects that the model can call **Usage:** ```typescript voice.addTools([ createTool({ id: "Get Weather Information", inputSchema: z.object({ city: z.string(), }), description: `Fetches the current weather information for a given city`, execute: async ({ city }) => {...}, }) ]); ``` ### close() Disconnects from the OpenAI realtime session and cleans up resources. 
**Usage:** ```typescript voice.close(); ``` **Notes:** - Should be called when you're done with the voice instance to free resources ### on(event, callback) Registers an event listener for voice events. **Parameters:** - `event`: Event name ('speaking', 'writing', or 'error') - `callback`: Function to call when the event occurs **Usage:** ```typescript voice.on('speaking', ({ audio }) => { playAudio(audio); }); ``` ### off(event, callback) Removes a previously registered event listener. **Parameters:** - `event`: Event name - `callback`: The callback function to remove **Usage:** ```typescript voice.off('speaking', callbackFunction); ``` ## Events The `OpenAIRealtimeVoice` class emits the following events: ### speaking Emitted when audio data is received from the model. **Event payload:** - `audio`: A chunk of audio data as a readable stream ```typescript agent.voice.on('speaking', ({ audio }) => { playAudio(audio); // Handle audio chunks as they're generated }); ``` ### writing Emitted when transcribed text is available. **Event payload:** - `text`: The transcribed text - `role`: The role of the speaker (user or assistant) - `done`: Boolean indicating if this is the final transcription ```typescript agent.voice.on('writing', ({ text, role }) => { console.log(`${role}: ${text}`); // Log who said what }); ``` ### error Emitted when an error occurs. **Event payload:** - Error object with details about what went wrong ```typescript agent.voice.on('error', (error) => { console.error('Voice error:', error); }); ``` --- title: "Handling Complex LLM Operations | Workflows | Mastra" description: "Workflows in Mastra help you orchestrate complex sequences of operations with features like branching, parallel execution, resource suspension, and more." --- # Handling Complex LLM Operations with Workflows Source: https://mastra.ai/docs/workflows/00-overview Workflows in Mastra help you orchestrate complex sequences of operations with features like branching, parallel execution, resource suspension, and more. ## When to use workflows Most AI applications need more than a single call to a language model. You may want to run multiple steps, conditionally skip certain paths, or even pause execution altogether until you receive user input. Sometimes your agent tool calling is not accurate enough. Mastra's workflow system provides: - A standardized way to define steps and link them together. - Support for both simple (linear) and advanced (branching, parallel) paths. - Debugging and observability features to track each workflow run. ## Example To create a workflow, you define one or more steps, link them, and then commit the workflow before starting it. ### Breaking Down the Workflow Let's examine each part of the workflow creation process: #### 1. Creating the Workflow Here's how you define a workflow in Mastra. The `name` field determines the workflow's API endpoint (`/workflows/$NAME/`), while the `triggerSchema` defines the structure of the workflow's trigger data: ```ts filename="src/mastra/workflow/index.ts" const myWorkflow = new Workflow({ name: 'my-workflow', triggerSchema: z.object({ inputValue: z.number(), }), }); ``` #### 2. Defining Steps Now, we'll define the workflow's steps. Each step can have its own input and output schemas. Here, `stepOne` doubles an input value, and `stepTwo` increments that result if `stepOne` was successful. 
(To keep things simple, we aren't making any LLM calls in this example):

```ts filename="src/mastra/workflow/index.ts"
const stepOne = new Step({
  id: 'stepOne',
  outputSchema: z.object({
    doubledValue: z.number(),
  }),
  execute: async ({ context }) => {
    const doubledValue = context.triggerData.inputValue * 2;
    return { doubledValue };
  },
});

const stepTwo = new Step({
  id: "stepTwo",
  execute: async ({ context }) => {
    const doubledValue = context.getStepResult(stepOne)?.doubledValue;
    if (!doubledValue) {
      return { incrementedValue: 0 };
    }
    return {
      incrementedValue: doubledValue + 1,
    };
  },
});
```

#### 3. Linking Steps

Now, let's create the control flow and "commit" (finalize) the workflow. In this case, `stepOne` runs first and is followed by `stepTwo`.

```ts filename="src/mastra/workflow/index.ts"
myWorkflow
  .step(stepOne)
  .then(stepTwo)
  .commit();
```

### Register the Workflow

Register your workflow with Mastra to enable logging and telemetry:

```ts showLineNumbers filename="src/mastra/index.ts"
import { Mastra } from "@mastra/core";
import { myWorkflow } from "./workflow";

export const mastra = new Mastra({
  workflows: { myWorkflow },
});
```

The workflow can also have the mastra instance injected into the context in cases where you need to create dynamic workflows:

```ts filename="src/mastra/workflow/index.ts"
import { Mastra } from "@mastra/core";

const mastra = new Mastra();

const myWorkflow = new Workflow({
  name: 'my-workflow',
  mastra,
});
```

### Executing the Workflow

Execute your workflow programmatically or via API:

```ts showLineNumbers filename="src/mastra/run-workflow.ts" copy
import { mastra } from "./index";

// Get the workflow
const myWorkflow = mastra.getWorkflow('myWorkflow');
const { runId, start } = myWorkflow.createRun();

// Start the workflow execution
await start({ triggerData: { inputValue: 45 } });
```

Or use the API (requires running `mastra dev`):

```bash
curl --location 'http://localhost:4111/api/workflows/myWorkflow/execute' \
  --header 'Content-Type: application/json' \
  --data '{
    "inputValue": 45
  }'
```

This example shows the essentials: define your workflow, add steps, commit the workflow, then execute it.

## Defining Steps

The basic building block of a workflow [is a step](./steps.mdx). Steps are defined using schemas for inputs and outputs, and can fetch prior step results.

## Control Flow

Workflows let you define a [control flow](./control-flow.mdx) to chain steps together with parallel steps, branching paths, and more.

## Workflow Variables

When you need to map data between steps or create dynamic data flows, [workflow variables](./variables.mdx) provide a powerful mechanism for passing information from one step to another and accessing nested properties within step outputs.

## Suspend and Resume

When you need to pause execution for external data, user input, or asynchronous events, Mastra [supports suspension at any step](./suspend-and-resume.mdx), persisting the state of the workflow so you can resume it later.

## Observability and Debugging

Mastra workflows automatically [log the input and output of each step within a workflow run](../reference/observability/otel-config.mdx), allowing you to send this data to your preferred logging, telemetry, or observability tools.

You can:

- Track the status of each step (e.g., `success`, `error`, or `suspended`).
- Store run-specific metadata for analysis.
- Integrate with third-party observability platforms like Datadog or New Relic by forwarding logs.

A minimal sketch of reading these statuses from a finished run is shown below.
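This is only a rough sketch, assuming the `myWorkflow` definition and registration shown earlier on this page; each entry in the run result reports the step's status alongside any output it produced:

```typescript
import { mastra } from "./index";

// Start a run of the workflow registered above
const myWorkflow = mastra.getWorkflow('myWorkflow');
const { start } = myWorkflow.createRun();

const result = await start({ triggerData: { inputValue: 45 } });

// result.results is keyed by step ID; each entry carries the step's status
// and, when the step succeeded, its output
for (const [stepId, stepResult] of Object.entries(result.results)) {
  console.log(`${stepId}: ${stepResult.status}`);
}
```

From here you can forward these statuses to whatever logging or observability tool you already use.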
## More Resources

- The [Workflow Guide](../guides/04-recruiter.mdx) in the Guides section is a tutorial that covers the main concepts.
- [Sequential Steps workflow example](../../examples/workflows/sequential-steps.mdx)
- [Parallel Steps workflow example](../../examples/workflows/parallel-steps.mdx)
- [Branching Paths workflow example](../../examples/workflows/branching-paths.mdx)
- [Workflow Variables example](../../examples/workflows/workflow-variables.mdx)
- [Cyclical Dependencies workflow example](../../examples/workflows/cyclical-dependencies.mdx)
- [Suspend and Resume workflow example](../../examples/workflows/suspend-and-resume.mdx)

---
title: "Branching, Merging, Conditions | Workflows | Mastra Docs"
description: "Control flow in Mastra workflows allows you to manage branching, merging, and conditions to construct workflows that meet your logic requirements."
---

# Control Flow in Workflows: Branching, Merging, and Conditions

Source: https://mastra.ai/docs/workflows/control-flow

When you create a multi-step process, you may need to run steps in parallel, chain them sequentially, or follow different paths based on outcomes. This page describes how you can manage branching, merging, and conditions to construct workflows that meet your logic requirements. The code snippets show the key patterns for structuring complex control flow.

## Parallel Execution

You can run multiple steps at the same time if they don't depend on each other. This approach can speed up your workflow when steps perform independent tasks. The code below shows how to add two steps in parallel:

```typescript
myWorkflow.step(fetchUserData).step(fetchOrderData);
```

See the [Parallel Steps](../../examples/workflows/parallel-steps.mdx) example for more details.

## Sequential Execution

Sometimes you need to run steps in strict order to ensure outputs from one step become inputs for the next. Use `.then()` to link dependent operations. The code below shows how to chain steps sequentially:

```typescript
myWorkflow.step(fetchOrderData).then(validateData).then(processOrder);
```

See the [Sequential Steps](../../examples/workflows/sequential-steps.mdx) example for more details.

## Branching and Merging Paths

When different outcomes require different paths, branching is helpful. You can also merge paths later once they complete. The code below shows how to branch after `stepA` and later converge on `stepF`:

```typescript
myWorkflow
  .step(stepA)
  .then(stepB)
  .then(stepD)
  .after(stepA)
  .step(stepC)
  .then(stepE)
  .after([stepD, stepE])
  .step(stepF);
```

In this example:

- `stepA` leads to `stepB`, then to `stepD`.
- Separately, `stepA` also triggers `stepC`, which in turn leads to `stepE`.
- `stepF` runs only after both `stepD` and `stepE` have completed, since `.after([stepD, stepE])` merges the two branches before it.

See the [Branching Paths](../../examples/workflows/branching-paths.mdx) example for more details.

## Merging Multiple Branches

Sometimes you need a step to execute only after multiple other steps have completed. Mastra provides a compound `.after([])` syntax that allows you to specify multiple dependencies for a step.
```typescript myWorkflow .step(fetchUserData) .then(validateUserData) .step(fetchProductData) .then(validateProductData) // This step will only run after BOTH validateUserData AND validateProductData have completed .after([validateUserData, validateProductData]) .step(processOrder) ``` In this example: - `fetchUserData` and `fetchProductData` run in parallel branches - Each branch has its own validation step - The `processOrder` step only executes after both validation steps have completed successfully This pattern is particularly useful for: - Joining parallel execution paths - Implementing synchronization points in your workflow - Ensuring all required data is available before proceeding You can also create complex dependency patterns by combining multiple `.after([])` calls: ```typescript myWorkflow // First branch .step(stepA) .then(stepB) .then(stepC) // Second branch .step(stepD) .then(stepE) // Third branch .step(stepF) .then(stepG) // This step depends on the completion of multiple branches .after([stepC, stepE, stepG]) .step(finalStep) ``` ## Cyclical Dependencies and Loops Workflows often need to repeat steps until certain conditions are met. Mastra provides two powerful methods for creating loops: `until` and `while`. These methods offer an intuitive way to implement repetitive tasks. ### Using Manual Cyclical Dependencies (Legacy Approach) In earlier versions, you could create loops by manually defining cyclical dependencies with conditions: ```typescript myWorkflow .step(fetchData) .then(processData) .after(processData) .step(finalizeData, { when: { "processData.status": "success" }, }) .step(fetchData, { when: { "processData.status": "retry" }, }); ``` While this approach still works, the newer `until` and `while` methods provide a cleaner and more maintainable way to create loops. ### Using `until` for Condition-Based Loops The `until` method repeats a step until a specified condition becomes true. It takes two arguments: 1. A condition that determines when to stop looping 2. The step to repeat ```typescript workflow .step(incrementStep) .until(async ({ context }) => { // Stop when the value reaches or exceeds 10 const result = context.getStepResult(incrementStep); return (result?.value ?? 0) >= 10; }, incrementStep) .then(finalStep); ``` You can also use a reference-based condition: ```typescript workflow .step(incrementStep) .until( { ref: { step: incrementStep, path: 'value' }, query: { $gte: 10 }, }, incrementStep ) .then(finalStep); ``` ### Using `while` for Condition-Based Loops The `while` method repeats a step as long as a specified condition remains true. It takes the same arguments as `until`: 1. A condition that determines when to continue looping 2. The step to repeat ```typescript workflow .step(incrementStep) .while(async ({ context }) => { // Continue as long as the value is less than 10 const result = context.getStepResult(incrementStep); return (result?.value ?? 
0) < 10; }, incrementStep) .then(finalStep); ``` You can also use a reference-based condition: ```typescript workflow .step(incrementStep) .while( { ref: { step: incrementStep, path: 'value' }, query: { $lt: 10 }, }, incrementStep ) .then(finalStep); ``` ### Comparison Operators for Reference Conditions When using reference-based conditions, you can use these comparison operators: | Operator | Description | |----------|-------------| | `$eq` | Equal to | | `$ne` | Not equal to | | `$gt` | Greater than | | `$gte` | Greater than or equal to | | `$lt` | Less than | | `$lte` | Less than or equal to | See the [Loop Control](../../examples/workflows/loop-control.mdx) example for more details. ## Conditions Use the when property to control whether a step runs based on data from previous steps. Below are three ways to specify conditions. ### Option 1: Function ```typescript myWorkflow.step( new Step({ id: "processData", execute: async ({ context }) => { // Action logic }, }), { when: async ({ context }) => { const fetchData = context?.getStepResult<{ status: string }>("fetchData"); return fetchData?.status === "success"; }, }, ); ``` ### Option 2: Query Object ```typescript myWorkflow.step( new Step({ id: "processData", execute: async ({ context }) => { // Action logic }, }), { when: { ref: { step: { id: "fetchData", }, path: "status", }, query: { $eq: "success" }, }, }, ); ``` ### Option 3: Simple Path Comparison ```typescript myWorkflow.step( new Step({ id: "processData", execute: async ({ context }) => { // Action logic }, }), { when: { "fetchData.status": "success", }, }, ); ``` ## Data Access Patterns Mastra provides several ways to pass data between steps: 1. **Context Object** - Access step results directly through the context object 2. **Variable Mapping** - Explicitly map outputs from one step to inputs of another 3. **getStepResult Method** - Type-safe method to retrieve step outputs Each approach has its advantages depending on your use case and requirements for type safety. ### Using getStepResult Method The `getStepResult` method provides a type-safe way to access step results. This is the recommended approach when working with TypeScript as it preserves type information. 
#### Basic Usage For better type safety, you can provide a type parameter to `getStepResult`: ```typescript showLineNumbers filename="src/mastra/workflows/get-step-result.ts" copy import { Step, Workflow } from "@mastra/core/workflows"; import { z } from "zod"; const fetchUserStep = new Step({ id: 'fetchUser', outputSchema: z.object({ name: z.string(), userId: z.string(), }), execute: async ({ context }) => { return { name: 'John Doe', userId: '123' }; }, }); const analyzeDataStep = new Step({ id: "analyzeData", execute: async ({ context }) => { // Type-safe access to previous step result const userData = context.getStepResult<{ name: string, userId: string }>("fetchUser"); if (!userData) { return { status: "error", message: "User data not found" }; } return { analysis: `Analyzed data for user ${userData.name}`, userId: userData.userId }; }, }); ``` #### Using Step References The most type-safe approach is to reference the step directly in the `getStepResult` call: ```typescript showLineNumbers filename="src/mastra/workflows/step-reference.ts" copy import { Step, Workflow } from "@mastra/core/workflows"; import { z } from "zod"; // Define step with output schema const fetchUserStep = new Step({ id: "fetchUser", outputSchema: z.object({ userId: z.string(), name: z.string(), email: z.string(), }), execute: async () => { return { userId: "user123", name: "John Doe", email: "john@example.com" }; }, }); const processUserStep = new Step({ id: "processUser", execute: async ({ context }) => { // TypeScript will infer the correct type from fetchUserStep's outputSchema const userData = context.getStepResult(fetchUserStep); return { processed: true, userName: userData?.name }; }, }); const workflow = new Workflow({ name: "user-workflow", }); workflow .step(fetchUserStep) .then(processUserStep) .commit(); ``` ### Using Variable Mapping Variable mapping is an explicit way to define data flow between steps. This approach makes dependencies clear and provides good type safety. The data injected into the step is available in the `context.inputData` object, and typed based on the `inputSchema` of the step. ```typescript showLineNumbers filename="src/mastra/workflows/variable-mapping.ts" copy import { Step, Workflow } from "@mastra/core/workflows"; import { z } from "zod"; const fetchUserStep = new Step({ id: "fetchUser", outputSchema: z.object({ userId: z.string(), name: z.string(), email: z.string(), }), execute: async () => { return { userId: "user123", name: "John Doe", email: "john@example.com" }; }, }); const sendEmailStep = new Step({ id: "sendEmail", inputSchema: z.object({ recipientEmail: z.string(), recipientName: z.string(), }), execute: async ({ context }) => { const { recipientEmail, recipientName } = context.inputData; // Send email logic here return { status: "sent", to: recipientEmail }; }, }); const workflow = new Workflow({ name: "email-workflow", }); workflow .step(fetchUserStep) .then(sendEmailStep, { variables: { // Map specific fields from fetchUser to sendEmail inputs recipientEmail: { step: fetchUserStep, path: 'email' }, recipientName: { step: fetchUserStep, path: 'name' } } }) .commit(); ``` For more details on variable mapping, see the [Data Mapping with Workflow Variables](./variables.mdx) documentation. ### Using the Context Object The context object provides direct access to all step results and their outputs. This approach is more flexible but requires careful handling to maintain type safety. 
You can access step results directly through the `context.steps` object:

```typescript showLineNumbers filename="src/mastra/workflows/context-access.ts" copy
import { Step, Workflow } from "@mastra/core/workflows";
import { z } from "zod";

const processOrderStep = new Step({
  id: 'processOrder',
  execute: async ({ context }) => {
    // Access data from a previous step
    let userData: { name: string, userId: string };
    if (context.steps['fetchUser']?.status === 'success') {
      userData = context.steps.fetchUser.output;
    } else {
      throw new Error('User data not found');
    }

    return {
      orderId: 'order123',
      userId: userData.userId,
      status: 'processing',
    };
  },
});

const workflow = new Workflow({
  name: "order-workflow",
});

// fetchUserStep is the step defined in the earlier examples
workflow
  .step(fetchUserStep)
  .then(processOrderStep)
  .commit();
```

### Workflow-Level Type Safety

For comprehensive type safety across your entire workflow, you can define types for all steps and pass them to the `Workflow` type parameters. This allows you to get type safety for the context object in conditions, and for step results in the final workflow output.

```typescript showLineNumbers filename="src/mastra/workflows/workflow-typing.ts" copy
import { Step, Workflow } from "@mastra/core/workflows";
import { z } from "zod";

// Create steps with typed outputs
const fetchUserStep = new Step({
  id: "fetchUser",
  outputSchema: z.object({
    userId: z.string(),
    name: z.string(),
    email: z.string(),
  }),
  execute: async () => {
    return { userId: "user123", name: "John Doe", email: "john@example.com" };
  },
});

const processOrderStep = new Step({
  id: "processOrder",
  execute: async ({ context }) => {
    // TypeScript knows the shape of userData
    const userData = context.getStepResult(fetchUserStep);

    return { orderId: "order123", status: "processing" };
  },
});

const workflow = new Workflow<[typeof fetchUserStep, typeof processOrderStep]>({
  name: "typed-workflow",
});

workflow
  .step(fetchUserStep)
  .then(processOrderStep)
  .until(async ({ context }) => {
    // TypeScript knows the shape of the fetchUser result here
    const res = context.getStepResult('fetchUser');
    return res?.userId === '123';
  }, processOrderStep)
  .commit();
```

### Accessing Trigger Data

In addition to step results, you can access the original trigger data that started the workflow:

```typescript showLineNumbers filename="src/mastra/workflows/trigger-data.ts" copy
import { Step, Workflow } from "@mastra/core/workflows";
import { z } from "zod";

// Define trigger schema
const triggerSchema = z.object({
  customerId: z.string(),
  orderItems: z.array(z.string()),
});

type TriggerType = z.infer<typeof triggerSchema>;

const processOrderStep = new Step({
  id: "processOrder",
  execute: async ({ context }) => {
    // Access trigger data with type safety
    const triggerData = context.getStepResult<TriggerType>('trigger');

    return {
      customerId: triggerData?.customerId,
      itemCount: triggerData?.orderItems.length || 0,
      status: "processing"
    };
  },
});

const workflow = new Workflow({
  name: "order-workflow",
  triggerSchema,
});

workflow
  .step(processOrderStep)
  .commit();
```

### Accessing Resume Data

The data injected into the step is available in the `context.inputData` object, and typed based on the `inputSchema` of the step.
```typescript showLineNumbers filename="src/mastra/workflows/resume-data.ts" copy import { Step, Workflow } from "@mastra/core/workflows"; import { z } from "zod"; const processOrderStep = new Step({ id: "processOrder", inputSchema: z.object({ orderId: z.string(), }), execute: async ({ context, suspend }) => { const { orderId } = context.inputData; if (!orderId) { await suspend(); return; } return { orderId, status: "processed" }; }, }); const workflow = new Workflow({ name: "order-workflow", }); workflow .step(processOrderStep) .commit(); const run = workflow.createRun(); const result = await run.start(); const resumedResult = await workflow.resume({ runId: result.runId, stepId: 'processOrder', inputData: { orderId: '123', }, }); console.log({resumedResult}); ``` ### Accessing Workflow Results You can get typed access to the results of a workflow by injecting the step types into the `Workflow` type params: ```typescript showLineNumbers filename="src/mastra/workflows/get-results.ts" copy import { Workflow } from "@mastra/core/workflows"; const fetchUserStep = new Step({ id: "fetchUser", outputSchema: z.object({ userId: z.string(), name: z.string(), email: z.string(), }), execute: async () => { return { userId: "user123", name: "John Doe", email: "john@example.com" }; }, }); const processOrderStep = new Step({ id: "processOrder", outputSchema: z.object({ orderId: z.string(), status: z.string(), }), execute: async ({ context }) => { const userData = context.getStepResult(fetchUserStep); return { orderId: "order123", status: "processing" }; }, }); const workflow = new Workflow<[typeof fetchUserStep, typeof processOrderStep]>({ name: "typed-workflow", }); workflow .step(fetchUserStep) .then(processOrderStep) .commit(); const run = workflow.createRun(); const result = await run.start(); // The result is a discriminated union of the step results // So it needs to be narrowed down via status checks if (result.results.processOrder.status === 'success') { // TypeScript will know the shape of the results const orderId = result.results.processOrder.output.orderId; console.log({orderId}); } if (result.results.fetchUser.status === 'success') { const userId = result.results.fetchUser.output.userId; console.log({userId}); } ``` ### Best Practices for Data Flow 1. **Use getStepResult with Step References for Type Safety** - Ensures TypeScript can infer the correct types - Catches type errors at compile time 2. **Use Variable Mapping for Explicit Dependencies* - Makes data flow clear and maintainable - Provides good documentation of step dependencies 3. **Define Output Schemas for Steps** - Validates data at runtime - Validates return type of the `execute` function - Improves type inference in TypeScript 4. **Handle Missing Data Gracefully** - Always check if step results exist before accessing properties - Provide fallback values for optional data 5. **Keep Data Transformations Simple** - Transform data in dedicated steps rather than in variable mappings - Makes workflows easier to test and debug ### Comparison of Data Flow Methods | Method | Type Safety | Explicitness | Use Case | |--------|------------|--------------|----------| | getStepResult | Highest | High | Complex workflows with strict typing requirements | | Variable Mapping | High | High | When dependencies need to be clear and explicit | | context.steps | Medium | Low | Quick access to step data in simple workflows | By choosing the right data flow method for your use case, you can create workflows that are both type-safe and maintainable. 
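To make the comparison above concrete, here is a small, hypothetical sketch that reads the same `name` field through all three methods. It assumes a `fetchUserStep` (with id `"fetchUser"`) like the one defined in the earlier examples, and that the step is linked with the variable mapping noted in the comment:

```typescript
import { Step } from "@mastra/core/workflows";
import { z } from "zod";

// Assumes fetchUserStep (id: "fetchUser") is defined as in the earlier examples
const reportStep = new Step({
  id: "report",
  inputSchema: z.object({
    userName: z.string(),
  }),
  execute: async ({ context }) => {
    // 1. getStepResult with a step reference: type inferred from fetchUserStep's outputSchema
    const viaStepResult = context.getStepResult(fetchUserStep)?.name;

    // 2. Variable mapping: assumes the step was linked with
    //    variables: { userName: { step: fetchUserStep, path: 'name' } }
    const viaVariables = context.inputData.userName;

    // 3. Direct context access: flexible, but you narrow the status yourself
    const viaContextSteps =
      context.steps.fetchUser?.status === "success"
        ? context.steps.fetchUser.output.name
        : undefined;

    return { viaStepResult, viaVariables, viaContextSteps };
  },
});
```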
--- title: "Dynamic Workflows | Mastra Docs" description: "Learn how to create dynamic workflows within workflow steps, allowing for flexible workflow creation based on runtime conditions." --- # Dynamic Workflows Source: https://mastra.ai/docs/workflows/dynamic-workflows This guide demonstrates how to create dynamic workflows within a workflow step. This advanced pattern allows you to create and execute workflows on the fly based on runtime conditions. ## Overview Dynamic workflows are useful when you need to create workflows based on runtime data. ## Implementation The key to creating dynamic workflows is accessing the Mastra instance from within a step's `execute` function and using it to create and run a new workflow. ### Basic Example ```typescript import { Mastra, Step, Workflow } from '@mastra/core'; import { z } from 'zod'; const isMastra = (mastra: any): mastra is Mastra => { return mastra && typeof mastra === 'object' && mastra instanceof Mastra; }; // Step that creates and runs a dynamic workflow const createDynamicWorkflow = new Step({ id: 'createDynamicWorkflow', outputSchema: z.object({ dynamicWorkflowResult: z.any(), }), execute: async ({ context, mastra }) => { if (!mastra) { throw new Error('Mastra instance not available'); } if (!isMastra(mastra)) { throw new Error('Invalid Mastra instance'); } const inputData = context.triggerData.inputData; // Create a new dynamic workflow const dynamicWorkflow = new Workflow({ name: 'dynamic-workflow', mastra, // Pass the mastra instance to the new workflow triggerSchema: z.object({ dynamicInput: z.string(), }), }); // Define steps for the dynamic workflow const dynamicStep = new Step({ id: 'dynamicStep', execute: async ({ context }) => { const dynamicInput = context.triggerData.dynamicInput; return { processedValue: `Processed: ${dynamicInput}`, }; }, }); // Build and commit the dynamic workflow dynamicWorkflow.step(dynamicStep).commit(); // Create a run and execute the dynamic workflow const run = dynamicWorkflow.createRun(); const result = await run.start({ triggerData: { dynamicInput: inputData, }, }); let dynamicWorkflowResult; if (result.results['dynamicStep']?.status === 'success') { dynamicWorkflowResult = result.results['dynamicStep']?.output.processedValue; } else { throw new Error('Dynamic workflow failed'); } // Return the result from the dynamic workflow return { dynamicWorkflowResult, }; }, }); // Main workflow that uses the dynamic workflow creator const mainWorkflow = new Workflow({ name: 'main-workflow', triggerSchema: z.object({ inputData: z.string(), }), mastra: new Mastra(), }); mainWorkflow.step(createDynamicWorkflow).commit(); // Register the workflow with Mastra export const mastra = new Mastra({ workflows: { mainWorkflow }, }); const run = mainWorkflow.createRun(); const result = await run.start({ triggerData: { inputData: 'test', }, }); ``` ## Advanced Example: Workflow Factory You can create a workflow factory that generates different workflows based on input parameters: ```typescript const isMastra = (mastra: any): mastra is Mastra => { return mastra && typeof mastra === 'object' && mastra instanceof Mastra; }; const workflowFactory = new Step({ id: 'workflowFactory', inputSchema: z.object({ workflowType: z.enum(['simple', 'complex']), inputData: z.string(), }), outputSchema: z.object({ result: z.any(), }), execute: async ({ context, mastra }) => { if (!mastra) { throw new Error('Mastra instance not available'); } if (!isMastra(mastra)) { throw new Error('Invalid Mastra instance'); } // Create a new dynamic 
workflow based on the type const dynamicWorkflow = new Workflow({ name: `dynamic-${context.workflowType}-workflow`, mastra, triggerSchema: z.object({ input: z.string(), }), }); if (context.workflowType === 'simple') { // Simple workflow with a single step const simpleStep = new Step({ id: 'simpleStep', execute: async ({ context }) => { return { result: `Simple processing: ${context.triggerData.input}`, }; }, }); dynamicWorkflow.step(simpleStep).commit(); } else { // Complex workflow with multiple steps const step1 = new Step({ id: 'step1', outputSchema: z.object({ intermediateResult: z.string(), }), execute: async ({ context }) => { return { intermediateResult: `First processing: ${context.triggerData.input}`, }; }, }); const step2 = new Step({ id: 'step2', execute: async ({ context }) => { const intermediate = context.getStepResult(step1).intermediateResult; return { finalResult: `Second processing: ${intermediate}`, }; }, }); dynamicWorkflow.step(step1).then(step2).commit(); } // Execute the dynamic workflow const run = dynamicWorkflow.createRun(); const result = await run.start({ triggerData: { input: context.inputData, }, }); // Return the appropriate result based on workflow type if (context.workflowType === 'simple') { return { // @ts-ignore result: result.results['simpleStep']?.output, }; } else { return { // @ts-ignore result: result.results['step2']?.output, }; } }, }); ``` ## Important Considerations 1. **Mastra Instance**: The `mastra` parameter in the `execute` function provides access to the Mastra instance, which is essential for creating dynamic workflows. 2. **Error Handling**: Always check if the Mastra instance is available before attempting to create a dynamic workflow. 3. **Resource Management**: Dynamic workflows consume resources, so be mindful of creating too many workflows in a single execution. 4. **Workflow Lifecycle**: Dynamic workflows are not automatically registered with the main Mastra instance. They exist only for the duration of the step execution unless you explicitly register them. 5. **Debugging**: Debugging dynamic workflows can be challenging. Consider adding detailed logging to track their creation and execution. ## Use Cases - **Conditional Workflow Selection**: Choose different workflow patterns based on input data - **Parameterized Workflows**: Create workflows with dynamic configurations - **Workflow Templates**: Use templates to generate specialized workflows - **Multi-tenant Applications**: Create isolated workflows for different tenants ## Conclusion Dynamic workflows provide a powerful way to create flexible, adaptable workflow systems. By leveraging the Mastra instance within step execution, you can create workflows that respond to runtime conditions and requirements. --- title: "Error Handling in Workflows | Mastra Docs" description: "Learn how to handle errors in Mastra workflows using step retries, conditional branching, and monitoring." --- # Error Handling in Workflows Source: https://mastra.ai/docs/workflows/error-handling Robust error handling is essential for production workflows. Mastra provides several mechanisms to handle errors gracefully, allowing your workflows to recover from failures or gracefully degrade when necessary. ## Overview Error handling in Mastra workflows can be implemented using: 1. **Step Retries** - Automatically retry failed steps 2. **Conditional Branching** - Create alternative paths based on step success or failure 3. **Error Monitoring** - Watch workflows for errors and handle them programmatically 4. 
**Result Status Checks** - Check the status of previous steps in subsequent steps ## Step Retries Mastra provides a built-in retry mechanism for steps that fail due to transient errors. This is particularly useful for steps that interact with external services or resources that might experience temporary unavailability. ### Basic Retry Configuration You can configure retries at the workflow level or for individual steps: ```typescript // Workflow-level retry configuration const workflow = new Workflow({ name: 'my-workflow', retryConfig: { attempts: 3, // Number of retry attempts delay: 1000, // Delay between retries in milliseconds }, }); // Step-level retry configuration (overrides workflow-level) const apiStep = new Step({ id: 'callApi', execute: async () => { // API call that might fail }, retryConfig: { attempts: 5, // This step will retry up to 5 times delay: 2000, // With a 2-second delay between retries }, }); ``` For more details about step retries, see the [Step Retries](../reference/workflows/step-retries.mdx) reference. ## Conditional Branching You can create alternative workflow paths based on the success or failure of previous steps using conditional logic: ```typescript // Create a workflow with conditional branching const workflow = new Workflow({ name: 'error-handling-workflow', }); workflow .step(fetchDataStep) .then(processDataStep, { // Only execute processDataStep if fetchDataStep was successful when: ({ context }) => { return context.steps.fetchDataStep?.status === 'success'; }, }) .then(fallbackStep, { // Execute fallbackStep if fetchDataStep failed when: ({ context }) => { return context.steps.fetchDataStep?.status === 'failed'; }, }) .commit(); ``` ## Error Monitoring You can monitor workflows for errors using the `watch` method: ```typescript const { start, watch } = workflow.createRun(); watch(async ({ context, activePaths }) => { // Check for any failed steps const failedSteps = Object.entries(context.steps) .filter(([_, step]) => step.status === 'failed') .map(([stepId]) => stepId); if (failedSteps.length > 0) { console.error(`Workflow has failed steps: ${failedSteps.join(', ')}`); // Take remedial action, such as alerting or logging } }); await start(); ``` ## Handling Errors in Steps Within a step's execution function, you can handle errors programmatically: ```typescript const robustStep = new Step({ id: 'robustStep', execute: async ({ context }) => { try { // Attempt the primary operation const result = await someRiskyOperation(); return { success: true, data: result }; } catch (error) { // Log the error console.error('Operation failed:', error); // Return a graceful fallback result instead of throwing return { success: false, error: error.message, fallbackData: 'Default value' }; } }, }); ``` ## Checking Previous Step Results You can make decisions based on the results of previous steps: ```typescript const finalStep = new Step({ id: 'finalStep', execute: async ({ context }) => { // Check results of previous steps const step1Success = context.steps.step1?.status === 'success'; const step2Success = context.steps.step2?.status === 'success'; if (step1Success && step2Success) { // All steps succeeded return { status: 'complete', result: 'All operations succeeded' }; } else if (step1Success) { // Only step1 succeeded return { status: 'partial', result: 'Partial completion' }; } else { // Critical failure return { status: 'failed', result: 'Critical steps failed' }; } }, }); ``` ## Best Practices for Error Handling 1. 
**Use retries for transient failures**: Configure retry policies for steps that might experience temporary issues. 2. **Provide fallback paths**: Design workflows with alternative paths for when critical steps fail. 3. **Be specific about error scenarios**: Use different handling strategies for different types of errors. 4. **Log errors comprehensively**: Include context information when logging errors to aid in debugging. 5. **Return meaningful data on failure**: When a step fails, return structured data about the failure to help downstream steps make decisions. 6. **Consider idempotency**: Ensure steps can be safely retried without causing duplicate side effects. 7. **Monitor workflow execution**: Use the `watch` method to actively monitor workflow execution and detect errors early. ## Advanced Error Handling For more complex error handling scenarios, consider: - **Implementing circuit breakers**: If a step fails repeatedly, stop retrying and use a fallback strategy - **Adding timeout handling**: Set time limits for steps to prevent workflows from hanging indefinitely - **Creating dedicated error recovery workflows**: For critical workflows, create separate recovery workflows that can be triggered when the main workflow fails ## Related - [Step Retries Reference](../reference/workflows/step-retries.mdx) - [Watch Method Reference](../reference/workflows/watch.mdx) - [Step Conditions](../reference/workflows/step-condition.mdx) - [Control Flow](./control-flow.mdx) --- title: "Creating Steps and Adding to Workflows | Mastra Docs" description: "Steps in Mastra workflows provide a structured way to manage operations by defining inputs, outputs, and execution logic." --- # Defining Steps in a Workflow Source: https://mastra.ai/docs/workflows/steps When you build a workflow, you typically break down operations into smaller tasks that can be linked and reused. Steps provide a structured way to manage these tasks by defining inputs, outputs, and execution logic. The code below shows how to define these steps inline or separately. ## Inline Step Creation You can create steps directly within your workflow using `.step()` and `.then()`. This code shows how to define, link, and execute two steps in sequence. ```typescript showLineNumbers filename="src/mastra/workflows/index.ts" copy import { Step, Workflow, Mastra } from "@mastra/core"; import { z } from "zod"; export const myWorkflow = new Workflow({ name: "my-workflow", triggerSchema: z.object({ inputValue: z.number(), }), }); myWorkflow .step( new Step({ id: "stepOne", outputSchema: z.object({ doubledValue: z.number(), }), execute: async ({ context }) => ({ doubledValue: context.triggerData.inputValue * 2, }), }), ) .then( new Step({ id: "stepTwo", outputSchema: z.object({ incrementedValue: z.number(), }), execute: async ({ context }) => { if (context.steps.stepOne.status !== "success") { return { incrementedValue: 0 }; } return { incrementedValue: context.steps.stepOne.output.doubledValue + 1 }; }, }), ).commit(); // Register the workflow with Mastra export const mastra = new Mastra({ workflows: { myWorkflow }, }); ``` ## Creating Steps Separately If you prefer to manage your step logic in separate entities, you can define steps outside and then add them to your workflow. This code shows how to define steps independently and link them afterward. 
```typescript showLineNumbers filename="src/mastra/workflows/index.ts" copy import { Step, Workflow, Mastra } from "@mastra/core"; import { z } from "zod"; // Define steps separately const stepOne = new Step({ id: "stepOne", outputSchema: z.object({ doubledValue: z.number(), }), execute: async ({ context }) => ({ doubledValue: context.triggerData.inputValue * 2, }), }); const stepTwo = new Step({ id: "stepTwo", outputSchema: z.object({ incrementedValue: z.number(), }), execute: async ({ context }) => { if (context.steps.stepOne.status !== "success") { return { incrementedValue: 0 }; } return { incrementedValue: context.steps.stepOne.output.doubledValue + 1 }; }, }); // Build the workflow const myWorkflow = new Workflow({ name: "my-workflow", triggerSchema: z.object({ inputValue: z.number(), }), }); myWorkflow.step(stepOne).then(stepTwo); myWorkflow.commit(); // Register the workflow with Mastra export const mastra = new Mastra({ workflows: { myWorkflow }, }); ``` --- title: "Suspend & Resume Workflows | Human-in-the-Loop | Mastra Docs" description: "Suspend and resume in Mastra workflows allows you to pause execution while waiting for external input or resources." --- # Suspend and Resume in Workflows Source: https://mastra.ai/docs/workflows/suspend-and-resume Complex workflows often need to pause execution while waiting for external input or resources. Mastra's suspend and resume features let you pause workflow execution at any step, persist the workflow snapshot to storage, and resume execution from the saved snapshot when ready. This entire process is automatically managed by Mastra. No config needed, or manual step required from the user. Storing the workflow snapshot to storage (LibSQL by default) means that the workflow state is permanently preserved across sessions, deployments, and server restarts. This persistence is crucial for workflows that might remain suspended for minutes, hours, or even days while waiting for external input or resources. ## When to Use Suspend/Resume Common scenarios for suspending workflows include: - Waiting for human approval or input - Pausing until external API resources become available - Collecting additional data needed for later steps - Rate limiting or throttling expensive operations - Handling event-driven processes with external triggers ## Basic Suspend Example Here's a simple workflow that suspends when a value is too low and resumes when given a higher value: ```typescript const stepTwo = new Step({ id: "stepTwo", outputSchema: z.object({ incrementedValue: z.number(), }), execute: async ({ context, suspend }) => { if (context.steps.stepOne.status !== "success") { return { incrementedValue: 0 }; } const currentValue = context.steps.stepOne.output.doubledValue; if (currentValue < 100) { await suspend(); return { incrementedValue: 0 }; } return { incrementedValue: currentValue + 1 }; }, }); ``` ## Async/Await Based Flow The suspend and resume mechanism in Mastra uses an async/await pattern that makes it intuitive to implement complex workflows with suspension points. The code structure naturally reflects the execution flow. ### How It Works 1. A step's execution function receives a `suspend` function in its parameters 2. When called with `await suspend()`, the workflow pauses at that point 3. The workflow state is persisted 4. Later, the workflow can be resumed by calling `workflow.resume()` with the appropriate parameters 5. 
Execution continues from the point after the `suspend()` call ### Example with Multiple Suspension Points Here's an example of a workflow with multiple steps that can suspend: ```typescript // Define steps with suspend capability const promptAgentStep = new Step({ id: 'promptAgent', execute: async ({ context, suspend }) => { // Some condition that determines if we need to suspend if (needHumanInput) { // Optionally pass payload data that will be stored with suspended state await suspend({ requestReason: 'Need human input for prompt' }); // Code after suspend() will execute when the step is resumed return { modelOutput: context.userInput }; } return { modelOutput: 'AI generated output' }; }, outputSchema: z.object({ modelOutput: z.string() }), }); const improveResponseStep = new Step({ id: 'improveResponse', execute: async ({ context, suspend }) => { // Another condition for suspension if (needFurtherRefinement) { await suspend(); return { improvedOutput: context.refinedOutput }; } return { improvedOutput: 'Improved output' }; }, outputSchema: z.object({ improvedOutput: z.string() }), }); // Build the workflow const workflow = new Workflow({ name: 'multi-suspend-workflow', triggerSchema: z.object({ input: z.string() }), }); workflow .step(getUserInput) .then(promptAgentStep) .then(evaluateTone) .then(improveResponseStep) .then(evaluateImproved) .commit(); // Register the workflow with Mastra export const mastra = new Mastra({ workflows: { workflow }, }); ``` ### Starting and Resuming the Workflow ```typescript // Get the workflow and create a run const wf = mastra.getWorkflow('multi-suspend-workflow'); const run = wf.createRun(); // Start the workflow const initialResult = await run.start({ triggerData: { input: 'initial input' } }); let promptAgentStepResult = initialResult.activePaths.get('promptAgent'); let promptAgentResumeResult = undefined; // Check if a step is suspended if (promptAgentStepResult?.status === 'suspended') { console.log('Workflow suspended at promptAgent step'); // Resume the workflow with new context const resumeResult = await run.resume({ stepId: 'promptAgent', context: { userInput: 'Human provided input' } }); promptAgentResumeResult = resumeResult; } const improveResponseStepResult = promptAgentResumeResult?.activePaths.get('improveResponse'); if (improveResponseStepResult?.status === 'suspended') { console.log('Workflow suspended at improveResponse step'); // Resume again with different context const finalResult = await run.resume({ stepId: 'improveResponse', context: { refinedOutput: 'Human refined output' } }); console.log('Workflow completed:', finalResult?.results); } ``` ## Event-Based Suspension and Resumption In addition to manually suspending steps, Mastra provides event-based suspension through the `afterEvent` method. This allows workflows to automatically suspend and wait for a specific event to occur before continuing. ### Using afterEvent and resumeWithEvent The `afterEvent` method automatically creates a suspension point in your workflow that waits for a specific event to occur. When the event happens, you can use `resumeWithEvent` to continue the workflow with the event data. Here's how it works: 1. Define events in your workflow configuration 2. Use `afterEvent` to create a suspension point waiting for that event 3. 
When the event occurs, call `resumeWithEvent` with the event name and data ### Example: Event-Based Workflow ```typescript // Define steps const getUserInput = new Step({ id: 'getUserInput', execute: async () => ({ userInput: 'initial input' }), outputSchema: z.object({ userInput: z.string() }), }); const processApproval = new Step({ id: 'processApproval', execute: async ({ context }) => { // Access the event data from the context const approvalData = context.inputData?.resumedEvent; return { approved: approvalData?.approved, approvedBy: approvalData?.approverName }; }, outputSchema: z.object({ approved: z.boolean(), approvedBy: z.string() }), }); // Create workflow with event definition const approvalWorkflow = new Workflow({ name: 'approval-workflow', triggerSchema: z.object({ requestId: z.string() }), events: { approvalReceived: { schema: z.object({ approved: z.boolean(), approverName: z.string(), }), }, }, }); // Build workflow with event-based suspension approvalWorkflow .step(getUserInput) .afterEvent('approvalReceived') // Workflow will automatically suspend here .step(processApproval) // This step runs after the event is received .commit(); ``` ### Running an Event-Based Workflow ```typescript // Get the workflow const workflow = mastra.getWorkflow('approval-workflow'); const run = workflow.createRun(); // Start the workflow const initialResult = await run.start({ triggerData: { requestId: 'request-123' } }); console.log('Workflow started, waiting for approval event'); console.log(initialResult.results); // Output will show the workflow is suspended at the event step: // { // getUserInput: { status: 'success', output: { userInput: 'initial input' } }, // __approvalReceived_event: { status: 'suspended' } // } // Later, when the approval event occurs: const resumeResult = await run.resumeWithEvent('approvalReceived', { approved: true, approverName: 'Jane Doe' }); console.log('Workflow resumed with event data:', resumeResult.results); // Output will show the completed workflow: // { // getUserInput: { status: 'success', output: { userInput: 'initial input' } }, // __approvalReceived_event: { status: 'success', output: { executed: true, resumedEvent: { approved: true, approverName: 'Jane Doe' } } }, // processApproval: { status: 'success', output: { approved: true, approvedBy: 'Jane Doe' } } // } ``` ### Key Points About Event-Based Workflows - The `suspend()` function can optionally take a payload object that will be stored with the suspended state - Code after the `await suspend()` call will not execute until the step is resumed - When a step is suspended, its status becomes `'suspended'` in the workflow results - When resumed, the step's status changes from `'suspended'` to `'success'` once completed - The `resume()` method requires the `stepId` to identify which suspended step to resume - You can provide new context data when resuming that will be merged with existing step results - Events must be defined in the workflow configuration with a schema - The `afterEvent` method creates a special suspended step that waits for the event - The event step is automatically named `__eventName_event` (e.g., `__approvalReceived_event`) - Use `resumeWithEvent` to provide event data and continue the workflow - Event data is validated against the schema defined for that event - The event data is available in the context as `inputData.resumedEvent` ## Storage for Suspend and Resume When a workflow is suspended using `await suspend()`, Mastra automatically persists the entire workflow state to 
storage. This is essential for workflows that might remain suspended for extended periods, as it ensures the state is preserved across application restarts or server instances. ### Default Storage: LibSQL By default, Mastra uses LibSQL as its storage engine: ```typescript import { Mastra } from '@mastra/core/mastra'; import { DefaultStorage } from '@mastra/core/storage/libsql'; const mastra = new Mastra({ storage: new DefaultStorage({ config: { url: "file:storage.db", // Local file-based database for development // For production, use a persistent URL: // url: process.env.DATABASE_URL, // authToken: process.env.DATABASE_AUTH_TOKEN, // Optional for authenticated connections } }), }); ``` The LibSQL storage can be configured in different modes: - In-memory database (testing): `:memory:` - File-based database (development): `file:storage.db` - Remote database (production): URLs like `libsql://your-database.turso.io` ### Alternative Storage Options #### Upstash (Redis-Compatible) For serverless applications or environments where Redis is preferred: ```bash npm install @mastra/upstash ``` ```typescript import { Mastra } from '@mastra/core/mastra'; import { UpstashStore } from "@mastra/upstash"; const mastra = new Mastra({ storage: new UpstashStore({ url: process.env.UPSTASH_URL, token: process.env.UPSTASH_TOKEN, }), }); ``` ### Storage Considerations - All storage options support suspend and resume functionality identically - The workflow state is automatically serialized and saved when suspended - No additional configuration is needed for suspend/resume to work with storage - Choose your storage option based on your infrastructure, scaling needs, and existing technology stack ## Watching and Resuming To handle suspended workflows, use the `watch` method to monitor workflow status per run and `resume` to continue execution: ```typescript import { mastra } from "./index"; // Get the workflow const myWorkflow = mastra.getWorkflow('myWorkflow'); const { start, watch, resume } = myWorkflow.createRun(); // Start watching the workflow before executing it watch(async ({ context, activePaths }) => { for (const _path of activePaths) { const stepTwoStatus = context.steps?.stepTwo?.status; if (stepTwoStatus === 'suspended') { console.log("Workflow suspended, resuming with new value"); // Resume the workflow with new context await resume({ stepId: 'stepTwo', context: { secondValue: 100 }, }); } } }) // Start the workflow execution await start({ triggerData: { inputValue: 45 } }); ``` ### Watching and Resuming Event-Based Workflows You can use the same watching pattern with event-based workflows: ```typescript const { start, watch, resumeWithEvent } = workflow.createRun(); // Watch for suspended event steps watch(async ({ context, activePaths }) => { for (const path of activePaths) { if (path.stepId === '__approvalReceived_event' && path.status === 'suspended') { console.log("Workflow waiting for approval event"); // In a real scenario, you would wait for the actual event to occur // For example, this could be triggered by a webhook or user interaction setTimeout(async () => { await resumeWithEvent('approvalReceived', { approved: true, approverName: 'Auto Approver', }); }, 5000); // Simulate event after 5 seconds } } }); // Start the workflow await start({ triggerData: { requestId: 'auto-123' } }); ``` ## Further Reading For a deeper understanding of how suspend and resume works under the hood: - [Understanding Snapshots in Mastra Workflows](../reference/workflows/snapshots.mdx) - Learn about the snapshot 
mechanism that powers suspend and resume functionality - [Step Configuration Guide](./steps.mdx) - Learn more about configuring steps in your workflows - [Control Flow Guide](./control-flow.mdx) - Advanced workflow control patterns - [Event-Driven Workflows](../reference/workflows/events.mdx) - Detailed reference for event-based workflows ## Related Resources - See the [Suspend and Resume Example](../../examples/workflows/suspend-and-resume.mdx) for a complete working example - Check the [Step Class Reference](../reference/workflows/step-class.mdx) for suspend/resume API details - Review [Workflow Observability](../reference/observability/otel-config.mdx) for monitoring suspended workflows --- title: "Data Mapping with Workflow Variables | Mastra Docs" description: "Learn how to use workflow variables to map data between steps and create dynamic data flows in your Mastra workflows." --- # Data Mapping with Workflow Variables Source: https://mastra.ai/docs/workflows/variables Workflow variables in Mastra provide a powerful mechanism for mapping data between steps, allowing you to create dynamic data flows and pass information from one step to another. ## Understanding Workflow Variables In Mastra workflows, variables serve as a way to: - Map data from trigger inputs to step inputs - Pass outputs from one step to inputs of another step - Access nested properties within step outputs - Create more flexible and reusable workflow steps ## Using Variables for Data Mapping ### Basic Variable Mapping You can map data between steps using the `variables` property when adding a step to your workflow: ```typescript showLineNumbers filename="src/mastra/workflows/index.ts" copy const workflow = new Workflow({ name: 'data-mapping-workflow', triggerSchema: z.object({ inputData: z.string(), }), }); workflow .step(step1, { variables: { // Map trigger data to step input inputData: { step: 'trigger', path: 'inputData' } } }) .then(step2, { variables: { // Map output from step1 to input for step2 previousValue: { step: step1, path: 'outputField' } } }) .commit(); // Register the workflow with Mastra export const mastra = new Mastra({ workflows: { workflow }, }); ``` ### Accessing Nested Properties You can access nested properties using dot notation in the `path` field: ```typescript showLineNumbers filename="src/mastra/workflows/index.ts" copy workflow .step(step1) .then(step2, { variables: { // Access a nested property from step1's output nestedValue: { step: step1, path: 'nested.deeply.value' } } }) .commit(); ``` ### Mapping Entire Objects You can map an entire object by using `.` as the path: ```typescript showLineNumbers filename="src/mastra/workflows/index.ts" copy workflow .step(step1, { variables: { // Map the entire trigger data object triggerData: { step: 'trigger', path: '.' } } }) .commit(); ``` ## Variable Resolution When a workflow executes, Mastra resolves variables at runtime by: 1. Identifying the source step specified in the `step` property 2. Retrieving the output from that step 3. Navigating to the specified property using the `path` 4. 
Injecting the resolved value into the target step's context as the `inputData` property

## Examples

### Mapping from Trigger Data

This example shows how to map data from the workflow trigger to a step:

```typescript showLineNumbers filename="src/mastra/workflows/trigger-mapping.ts" copy
import { Step, Workflow, Mastra } from "@mastra/core";
import { z } from "zod";

// Define a step that needs user input
const processUserInput = new Step({
  id: "processUserInput",
  execute: async ({ context }) => {
    // The inputData will be available in context because of the variable mapping
    const { inputData } = context.inputData;
    return { processedData: `Processed: ${inputData}` };
  },
});

// Create the workflow
const workflow = new Workflow({
  name: "trigger-mapping",
  triggerSchema: z.object({
    inputData: z.string(),
  }),
});

// Map the trigger data to the step
workflow
  .step(processUserInput, {
    variables: {
      inputData: { step: 'trigger', path: 'inputData' },
    }
  })
  .commit();

// Register the workflow with Mastra
export const mastra = new Mastra({
  workflows: { workflow },
});
```

### Mapping Between Steps

This example demonstrates mapping data from one step to another:

```typescript showLineNumbers filename="src/mastra/workflows/step-mapping.ts" copy
import { Step, Workflow, Mastra } from "@mastra/core";
import { z } from "zod";

// Step 1: Generate data
const generateData = new Step({
  id: "generateData",
  outputSchema: z.object({
    nested: z.object({
      value: z.string(),
    }),
  }),
  execute: async () => {
    return { nested: { value: "step1-data" } };
  },
});

// Step 2: Process the data from step 1
const processData = new Step({
  id: "processData",
  inputSchema: z.object({
    previousValue: z.string(),
  }),
  execute: async ({ context }) => {
    // previousValue will be available because of the variable mapping
    const { previousValue } = context.inputData;
    return { result: `Processed: ${previousValue}` };
  },
});

// Create the workflow
const workflow = new Workflow({
  name: "step-mapping",
});

// Map data from step1 to step2
workflow
  .step(generateData)
  .then(processData, {
    variables: {
      // Map the nested.value property from generateData's output
      previousValue: { step: generateData, path: 'nested.value' },
    }
  })
  .commit();

// Register the workflow with Mastra
export const mastra = new Mastra({
  workflows: { workflow },
});
```

## Type Safety

Mastra provides type safety for variable mappings when using TypeScript:

```typescript showLineNumbers filename="src/mastra/workflows/type-safe.ts" copy
import { Step, Workflow, Mastra } from "@mastra/core";
import { z } from "zod";

// Define schemas for better type safety
const triggerSchema = z.object({
  inputValue: z.string(),
});

type TriggerType = z.infer<typeof triggerSchema>;

// Step with typed context
const step1 = new Step({
  id: "step1",
  outputSchema: z.object({
    nested: z.object({
      value: z.string(),
    }),
  }),
  execute: async ({ context }) => {
    // TypeScript knows the shape of triggerData
    const triggerData = context.getStepResult<TriggerType>('trigger');
    return { nested: { value: `processed-${triggerData?.inputValue}` } };
  },
});

// Create the workflow with the schema
const workflow = new Workflow({
  name: "type-safe-workflow",
  triggerSchema,
});

workflow.step(step1).commit();

// Register the workflow with Mastra
export const mastra = new Mastra({
  workflows: { workflow },
});
```

## Best Practices

1. **Validate Inputs and Outputs**: Use `inputSchema` and `outputSchema` to ensure data consistency.
2. **Keep Mappings Simple**: Avoid overly complex nested paths when possible.
3.
**Consider Default Values**: Handle cases where mapped data might be undefined. ## Comparison with Direct Context Access While you can access previous step results directly via `context.steps`, using variable mappings offers several advantages: | Feature | Variable Mapping | Direct Context Access | | ------- | --------------- | --------------------- | | Clarity | Explicit data dependencies | Implicit dependencies | | Reusability | Steps can be reused with different mappings | Steps are tightly coupled | | Type Safety | Better TypeScript integration | Requires manual type assertions | --- title: "Example: Adding Voice Capabilities | Agents | Mastra" description: "Example of adding voice capabilities to Mastra agents, enabling them to speak and listen using different voice providers." --- import { GithubLink } from "../../../components/github-link"; # Giving your Agent a Voice Source: https://mastra.ai/examples/agents/adding-voice-capabilities This example demonstrates how to add voice capabilities to Mastra agents, enabling them to speak and listen using different voice providers. We'll create two agents with different voice configurations and show how they can interact using speech. The example showcases: 1. Using CompositeVoice to combine different providers for speaking and listening 2. Using a single provider for both capabilities 3. Basic voice interactions between agents First, let's import the required dependencies and set up our agents: ```ts showLineNumbers copy // Import required dependencies import { openai } from '@ai-sdk/openai'; import { Agent } from '@mastra/core/agent'; import { CompositeVoice } from '@mastra/core/voice'; import { OpenAIVoice } from '@mastra/voice-openai'; import { createReadStream, createWriteStream } from 'fs'; import { PlayAIVoice } from '@mastra/voice-playai'; import path from 'path'; // Initialize Agent 1 with both listening and speaking capabilities const agent1 = new Agent({ name: 'Agent1', instructions: `You are an agent with both STT and TTS capabilities.`, model: openai('gpt-4o'), voice: new CompositeVoice({ listenProvider: new OpenAIVoice(), // For converting speech to text speakProvider: new PlayAIVoice(), // For converting text to speech }), }); // Initialize Agent 2 with just OpenAI for both listening and speaking capabilities const agent2 = new Agent({ name: 'Agent2', instructions: `You are an agent with both STT and TTS capabilities.`, model: openai('gpt-4o'), voice: new OpenAIVoice(), }); ``` In this setup: - Agent1 uses a CompositeVoice that combines OpenAI for speech-to-text and PlayAI for text-to-speech - Agent2 uses OpenAI's voice capabilities for both functions Now let's demonstrate a basic interaction between the agents: ```ts showLineNumbers copy // Step 1: Agent 1 speaks a question and saves it to a file const audio1 = await agent1.speak('What is the meaning of life in one sentence?'); await saveAudioToFile(audio1, 'agent1-question.mp3'); // Step 2: Agent 2 listens to Agent 1's question const audioFilePath = path.join(process.cwd(), 'agent1-question.mp3'); const audioStream = createReadStream(audioFilePath); const audio2 = await agent2.listen(audioStream); const text = await convertToText(audio2); // Step 3: Agent 2 generates and speaks a response const agent2Response = await agent2.generate(text); const agent2ResponseAudio = await agent2.speak(agent2Response.text); await saveAudioToFile(agent2ResponseAudio, 'agent2-response.mp3'); ``` Here's what's happening in the interaction: 1. 
Agent1 converts text to speech using PlayAI and saves it to a file (we save the audio so you can hear the interaction) 2. Agent2 listens to the audio file using OpenAI's speech-to-text 3. Agent2 generates a response and converts it to speech The example includes helper functions for handling audio files: ```ts showLineNumbers copy /** * Saves an audio stream to a file */ async function saveAudioToFile(audio: NodeJS.ReadableStream, filename: string): Promise<void> { const filePath = path.join(process.cwd(), filename); const writer = createWriteStream(filePath); audio.pipe(writer); return new Promise((resolve, reject) => { writer.on('finish', resolve); writer.on('error', reject); }); } /** * Converts either a string or a readable stream to text */ async function convertToText(input: string | NodeJS.ReadableStream): Promise<string> { if (typeof input === 'string') { return input; } const chunks: Buffer[] = []; return new Promise((resolve, reject) => { input.on('data', chunk => chunks.push(Buffer.from(chunk))); input.on('error', err => reject(err)); input.on('end', () => resolve(Buffer.concat(chunks).toString('utf-8'))); }); } ``` ## Key Points 1. The `voice` property in the Agent configuration accepts any implementation of MastraVoice 2. CompositeVoice allows using different providers for speaking and listening 3. Audio can be handled as streams, making it efficient for real-time processing 4. Voice capabilities can be combined with the agent's natural language processing
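To run the whole exchange end to end, the snippets above can be wrapped in a single entry point. A minimal sketch reusing the agents and helpers defined above (the error handling and file names are illustrative additions, not part of the original example):

```ts showLineNumbers copy
// Assumes agent1, agent2, saveAudioToFile, and convertToText are defined as above
async function main() {
  // Agent 1 speaks a question and the audio is written to disk
  const questionAudio = await agent1.speak('What is the meaning of life in one sentence?');
  await saveAudioToFile(questionAudio, 'agent1-question.mp3');

  // Agent 2 listens to the saved audio and transcribes it
  const recording = createReadStream(path.join(process.cwd(), 'agent1-question.mp3'));
  const transcript = await convertToText(await agent2.listen(recording));

  // Agent 2 generates a reply and speaks it back to a second file
  const reply = await agent2.generate(transcript);
  const replyAudio = await agent2.speak(reply.text);
  await saveAudioToFile(replyAudio, 'agent2-response.mp3');
}

main().catch(console.error);
```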




--- title: "Example: Calling Agentic Workflows | Agents | Mastra Docs" description: Example of creating AI workflows in Mastra, demonstrating integration of external APIs with LLM-powered planning. --- import { GithubLink } from "../../../components/github-link"; # Agentic Workflows Source: https://mastra.ai/examples/agents/agentic-workflows When building AI applications, you often need to coordinate multiple steps that depend on each other's outputs. This example shows how to create an AI workflow that fetches weather data and uses it to suggest activities, demonstrating how to integrate external APIs with LLM-powered planning. ```ts showLineNumbers copy import { Mastra } from "@mastra/core"; import { Agent } from "@mastra/core/agent"; import { Step, Workflow } from "@mastra/core/workflows"; import { z } from "zod"; import { openai } from "@ai-sdk/openai"; const agent = new Agent({ name: 'Weather Agent', instructions: ` You are a local activities and travel expert who excels at weather-based planning. Analyze the weather data and provide practical activity recommendations. For each day in the forecast, structure your response exactly as follows: 📅 [Day, Month Date, Year] ═══════════════════════════ 🌡️ WEATHER SUMMARY • Conditions: [brief description] • Temperature: [X°C/Y°F to A°C/B°F] • Precipitation: [X% chance] 🌅 MORNING ACTIVITIES Outdoor: • [Activity Name] - [Brief description including specific location/route] Best timing: [specific time range] Note: [relevant weather consideration] 🌞 AFTERNOON ACTIVITIES Outdoor: • [Activity Name] - [Brief description including specific location/route] Best timing: [specific time range] Note: [relevant weather consideration] 🏠 INDOOR ALTERNATIVES • [Activity Name] - [Brief description including specific venue] Ideal for: [weather condition that would trigger this alternative] ⚠️ SPECIAL CONSIDERATIONS • [Any relevant weather warnings, UV index, wind conditions, etc.] Guidelines: - Suggest 2-3 time-specific outdoor activities per day - Include 1-2 indoor backup options - For precipitation >50%, lead with indoor activities - All activities must be specific to the location - Include specific venues, trails, or locations - Consider activity intensity based on temperature - Keep descriptions concise but informative Maintain this exact formatting for consistency, using the emoji and section headers as shown. 
`, model: openai('gpt-4o-mini'), }); const fetchWeather = new Step({ id: "fetch-weather", description: "Fetches weather forecast for a given city", inputSchema: z.object({ city: z.string().describe("The city to get the weather for"), }), execute: async ({ context }) => { const triggerData = context?.getStepResult<{ city: string; }>("trigger"); if (!triggerData) { throw new Error("Trigger data not found"); } const geocodingUrl = `https://geocoding-api.open-meteo.com/v1/search?name=${encodeURIComponent(triggerData.city)}&count=1`; const geocodingResponse = await fetch(geocodingUrl); const geocodingData = await geocodingResponse.json(); if (!geocodingData.results?.[0]) { throw new Error(`Location '${triggerData.city}' not found`); } const { latitude, longitude, name } = geocodingData.results[0]; const weatherUrl = `https://api.open-meteo.com/v1/forecast?latitude=${latitude}&longitude=${longitude}&daily=temperature_2m_max,temperature_2m_min,precipitation_probability_mean,weathercode&timezone=auto`; const response = await fetch(weatherUrl); const data = await response.json(); const forecast = data.daily.time.map((date: string, index: number) => ({ date, maxTemp: data.daily.temperature_2m_max[index], minTemp: data.daily.temperature_2m_min[index], precipitationChance: data.daily.precipitation_probability_mean[index], condition: getWeatherCondition(data.daily.weathercode[index]), location: name, })); return forecast; }, }); const forecastSchema = z.array( z.object({ date: z.string(), maxTemp: z.number(), minTemp: z.number(), precipitationChance: z.number(), condition: z.string(), location: z.string(), }), ); const planActivities = new Step({ id: "plan-activities", description: "Suggests activities based on weather conditions", inputSchema: forecastSchema, execute: async ({ context, mastra }) => { const forecast = context?.getStepResult<z.infer<typeof forecastSchema>>( "fetch-weather", ); if (!forecast) { throw new Error("Forecast data not found"); } const prompt = `Based on the following weather forecast for ${forecast[0].location}, suggest appropriate activities: ${JSON.stringify(forecast, null, 2)} `; const response = await agent.stream([ { role: "user", content: prompt, }, ]); let activitiesText = ''; for await (const chunk of response.textStream) { process.stdout.write(chunk); activitiesText += chunk; } return { activities: activitiesText, }; }, }); function getWeatherCondition(code: number): string { const conditions: Record<number, string> = { 0: "Clear sky", 1: "Mainly clear", 2: "Partly cloudy", 3: "Overcast", 45: "Foggy", 48: "Depositing rime fog", 51: "Light drizzle", 53: "Moderate drizzle", 55: "Dense drizzle", 61: "Slight rain", 63: "Moderate rain", 65: "Heavy rain", 71: "Slight snow fall", 73: "Moderate snow fall", 75: "Heavy snow fall", 95: "Thunderstorm", }; return conditions[code] || "Unknown"; } const weatherWorkflow = new Workflow({ name: "weather-workflow", triggerSchema: z.object({ city: z.string().describe("The city to get the weather for"), }), }) .step(fetchWeather) .then(planActivities); weatherWorkflow.commit(); const mastra = new Mastra({ workflows: { weatherWorkflow, }, }); async function main() { const { start } = mastra.getWorkflow("weatherWorkflow").createRun(); const result = await start({ triggerData: { city: "London", }, }); console.log("\n \n"); console.log(result); } main(); ``` --- title: "Example: Categorizing Birds | Agents | Mastra Docs" description: Example of using a Mastra AI Agent to determine if an image from Unsplash depicts a bird.
--- import { GithubLink } from "../../../components/github-link"; # Example: Categorizing Birds with an AI Agent Source: https://mastra.ai/examples/agents/bird-checker We will get a random image from [Unsplash](https://unsplash.com/) that matches a selected query and use a [Mastra AI Agent](/docs/agents/00-overview.md) to determine if it is a bird or not. ```ts showLineNumbers copy import { anthropic } from "@ai-sdk/anthropic"; import { Agent } from "@mastra/core/agent"; import { z } from "zod"; export type Image = { alt_description: string; urls: { regular: string; raw: string; }; user: { first_name: string; links: { html: string; }; }; }; export type ImageResponse<T, K> = | { ok: true; data: T; } | { ok: false; error: K; }; const getRandomImage = async ({ query, }: { query: string; }): Promise<ImageResponse<Image, string>> => { const page = Math.floor(Math.random() * 20); const order_by = Math.random() < 0.5 ? "relevant" : "latest"; try { const res = await fetch( `https://api.unsplash.com/search/photos?query=${query}&page=${page}&order_by=${order_by}`, { method: "GET", headers: { Authorization: `Client-ID ${process.env.UNSPLASH_ACCESS_KEY}`, "Accept-Version": "v1", }, cache: "no-store", }, ); if (!res.ok) { return { ok: false, error: "Failed to fetch image", }; } const data = (await res.json()) as { results: Array<Image>; }; const randomNo = Math.floor(Math.random() * data.results.length); return { ok: true, data: data.results[randomNo] as Image, }; } catch (err) { return { ok: false, error: "Error fetching image", }; } }; const instructions = ` You can view an image and figure out if it is a bird or not. You can also figure out the species of the bird and where the picture was taken. `; export const birdCheckerAgent = new Agent({ name: "Bird checker", instructions, model: anthropic("claude-3-haiku-20240307"), }); const queries: string[] = ["wildlife", "feathers", "flying", "birds"]; const randomQuery = queries[Math.floor(Math.random() * queries.length)]; // Get the image url from Unsplash with random type const imageResponse = await getRandomImage({ query: randomQuery }); if (!imageResponse.ok) { console.log("Error fetching image", imageResponse.error); process.exit(1); } console.log("Image URL: ", imageResponse.data.urls.regular); const response = await birdCheckerAgent.generate( [ { role: "user", content: [ { type: "image", image: new URL(imageResponse.data.urls.regular), }, { type: "text", text: "view this image and let me know if it's a bird or not, and the scientific name of the bird without any explanation. Also summarize the location for this picture in one or two short sentences understandable by a high school student", }, ], }, ], { output: z.object({ bird: z.boolean(), species: z.string(), location: z.string(), }), }, ); console.log(response.object); ```
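Because a Zod `output` schema is passed to `generate`, `response.object` comes back as a typed object matching that schema. A hypothetical result might look like the following (the values are purely illustrative):

```ts showLineNumbers copy
// Shape of response.object as defined by the Zod schema above;
// the values below are made up for illustration only.
const exampleResult: { bird: boolean; species: string; location: string } = {
  bird: true,
  species: "Buteo jamaicensis",
  location:
    "The photo appears to be taken in open grassland in North America, the kind of habitat where these hawks hunt.",
};
```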




--- title: "Example: Hierarchical Multi-Agent System | Agents | Mastra" description: Example of creating a hierarchical multi-agent system using Mastra, where agents interact through tool functions. --- import { GithubLink } from "../../../components/github-link"; # Hierarchical Multi-Agent System Source: https://mastra.ai/examples/agents/hierarchical-multi-agent This example demonstrates how to create a hierarchical multi-agent system where agents interact through tool functions, with one agent coordinating the work of others. The system consists of three agents: 1. A Publisher agent (supervisor) that orchestrates the process 2. A Copywriter agent that writes the initial content 3. An Editor agent that refines the content First, define the Copywriter agent and its tool: ```ts showLineNumbers copy import { openai } from "@ai-sdk/openai"; import { anthropic } from "@ai-sdk/anthropic"; const copywriterAgent = new Agent({ name: "Copywriter", instructions: "You are a copywriter agent that writes blog post copy.", model: anthropic("claude-3-5-sonnet-20241022"), }); const copywriterTool = createTool({ id: "copywriter-agent", description: "Calls the copywriter agent to write blog post copy.", inputSchema: z.object({ topic: z.string().describe("Blog post topic"), }), outputSchema: z.object({ copy: z.string().describe("Blog post copy"), }), execute: async ({ context }) => { const result = await copywriterAgent.generate( `Create a blog post about ${context.topic}`, ); return { copy: result.text }; }, }); ``` Next, define the Editor agent and its tool: ```ts showLineNumbers copy const editorAgent = new Agent({ name: "Editor", instructions: "You are an editor agent that edits blog post copy.", model: openai("gpt-4o-mini"), }); const editorTool = createTool({ id: "editor-agent", description: "Calls the editor agent to edit blog post copy.", inputSchema: z.object({ copy: z.string().describe("Blog post copy"), }), outputSchema: z.object({ copy: z.string().describe("Edited blog post copy"), }), execute: async ({ context }) => { const result = await editorAgent.generate( `Edit the following blog post only returning the edited copy: ${context.copy}`, ); return { copy: result.text }; }, }); ``` Finally, create the Publisher agent that coordinates the others: ```ts showLineNumbers copy const publisherAgent = new Agent({ name: "publisherAgent", instructions: "You are a publisher agent that first calls the copywriter agent to write blog post copy about a specific topic and then calls the editor agent to edit the copy. Just return the final edited copy.", model: anthropic("claude-3-5-sonnet-20241022"), tools: { copywriterTool, editorTool }, }); const mastra = new Mastra({ agents: { publisherAgent }, }); ``` To use the entire system: ```ts showLineNumbers copy async function main() { const agent = mastra.getAgent("publisherAgent"); const result = await agent.generate( "Write a blog post about React JavaScript frameworks. Only return the final edited copy.", ); console.log(result.text); } main(); ```




--- title: "Example: Multi-Agent Workflow | Agents | Mastra Docs" description: Example of creating an agentic workflow in Mastra, where work product is passed between multiple agents. --- import { GithubLink } from "../../../components/github-link"; # Multi-Agent Workflow Source: https://mastra.ai/examples/agents/multi-agent-workflow This example demonstrates how to create an agentic workflow with work product being passed between multiple agents with a worker agent and a supervisor agent. In this example, we create a sequential workflow that calls two agents in order: 1. A Copywriter agent that writes the initial blog post 2. An Editor agent that refines the content First, import the required dependencies: ```typescript import { openai } from "@ai-sdk/openai"; import { anthropic } from "@ai-sdk/anthropic"; import { Agent } from "@mastra/core/agent"; import { Step, Workflow } from "@mastra/core/workflows"; import { z } from "zod"; ``` Create the copywriter agent that will generate the initial blog post: ```typescript const copywriterAgent = new Agent({ name: "Copywriter", instructions: "You are a copywriter agent that writes blog post copy.", model: anthropic("claude-3-5-sonnet-20241022"), }); ``` Define the copywriter step that executes the agent and handles the response: ```typescript const copywriterStep = new Step({ id: "copywriterStep", execute: async ({ context }) => { if (!context?.triggerData?.topic) { throw new Error("Topic not found in trigger data"); } const result = await copywriterAgent.generate( `Create a blog post about ${context.triggerData.topic}`, ); console.log("copywriter result", result.text); return { copy: result.text, }; }, }); ``` Set up the editor agent to refine the copywriter's content: ```typescript const editorAgent = new Agent({ name: "Editor", instructions: "You are an editor agent that edits blog post copy.", model: openai("gpt-4o-mini"), }); ``` Create the editor step that processes the copywriter's output: ```typescript const editorStep = new Step({ id: "editorStep", execute: async ({ context }) => { const copy = context?.getStepResult<{ copy: number }>("copywriterStep")?.copy; const result = await editorAgent.generate( `Edit the following blog post only returning the edited copy: ${copy}`, ); console.log("editor result", result.text); return { copy: result.text, }; }, }); ``` Configure the workflow and execute the steps: ```typescript const myWorkflow = new Workflow({ name: "my-workflow", triggerSchema: z.object({ topic: z.string(), }), }); // Run steps sequentially. myWorkflow.step(copywriterStep).then(editorStep).commit(); const { runId, start } = myWorkflow.createRun(); const res = await start({ triggerData: { topic: "React JavaScript frameworks" }, }); console.log("Results: ", res.results); ```




--- title: "Example: Agents with a System Prompt | Agents | Mastra Docs" description: Example of creating an AI agent in Mastra with a system prompt to define its personality and capabilities. --- import { GithubLink } from "../../../components/github-link"; # Giving an Agent a System Prompt Source: https://mastra.ai/examples/agents/system-prompt When building AI agents, you often need to give them specific instructions and capabilities to handle specialized tasks effectively. System prompts allow you to define an agent's personality, knowledge domain, and behavioral guidelines. This example shows how to create an AI agent with custom instructions and integrate it with a dedicated tool for retrieving verified information. ```ts showLineNumbers copy import { openai } from "@ai-sdk/openai"; import { Agent } from "@mastra/core/agent"; import { createTool } from "@mastra/core/tools"; import { z } from "zod"; const instructions = `You are a helpful cat expert assistant. When discussing cats, you should always include an interesting cat fact. Your main responsibilities: 1. Answer questions about cats 2. Use the catFact tool to provide verified cat facts 3. Incorporate the cat facts naturally into your responses Always use the catFact tool at least once in your responses to ensure accuracy.`; const getCatFact = async () => { const { fact } = (await fetch("https://catfact.ninja/fact").then((res) => res.json(), )) as { fact: string; }; return fact; }; const catFact = createTool({ id: "Get cat facts", inputSchema: z.object({}), description: "Fetches cat facts", execute: async () => { console.log("using tool to fetch cat fact"); return { catFact: await getCatFact(), }; }, }); const catOne = new Agent({ name: "cat-one", instructions: instructions, model: openai("gpt-4o-mini"), tools: { catFact, }, }); const result = await catOne.generate("Tell me a cat fact"); console.log(result.text); ```




--- title: "Example: Giving an Agent a Tool | Agents | Mastra Docs" description: Example of creating an AI agent in Mastra that uses a dedicated tool to provide weather information. --- import { GithubLink } from "../../../components/github-link"; # Example: Giving an Agent a Tool Source: https://mastra.ai/examples/agents/using-a-tool When building AI agents, you often need to integrate external data sources or functionality to enhance their capabilities. This example shows how to create an AI agent that uses a dedicated weather tool to provide accurate weather information for specific locations. ```ts showLineNumbers copy import { Mastra } from "@mastra/core"; import { Agent } from "@mastra/core/agent"; import { createTool } from "@mastra/core/tools"; import { openai } from "@ai-sdk/openai"; import { z } from "zod"; interface WeatherResponse { current: { time: string; temperature_2m: number; apparent_temperature: number; relative_humidity_2m: number; wind_speed_10m: number; wind_gusts_10m: number; weather_code: number; }; } const weatherTool = createTool({ id: "get-weather", description: "Get current weather for a location", inputSchema: z.object({ location: z.string().describe("City name"), }), outputSchema: z.object({ temperature: z.number(), feelsLike: z.number(), humidity: z.number(), windSpeed: z.number(), windGust: z.number(), conditions: z.string(), location: z.string(), }), execute: async ({ context }) => { return await getWeather(context.location); }, }); const getWeather = async (location: string) => { const geocodingUrl = `https://geocoding-api.open-meteo.com/v1/search?name=${encodeURIComponent(location)}&count=1`; const geocodingResponse = await fetch(geocodingUrl); const geocodingData = await geocodingResponse.json(); if (!geocodingData.results?.[0]) { throw new Error(`Location '${location}' not found`); } const { latitude, longitude, name } = geocodingData.results[0]; const weatherUrl = `https://api.open-meteo.com/v1/forecast?latitude=${latitude}&longitude=${longitude}¤t=temperature_2m,apparent_temperature,relative_humidity_2m,wind_speed_10m,wind_gusts_10m,weather_code`; const response = await fetch(weatherUrl); const data: WeatherResponse = await response.json(); return { temperature: data.current.temperature_2m, feelsLike: data.current.apparent_temperature, humidity: data.current.relative_humidity_2m, windSpeed: data.current.wind_speed_10m, windGust: data.current.wind_gusts_10m, conditions: getWeatherCondition(data.current.weather_code), location: name, }; }; function getWeatherCondition(code: number): string { const conditions: Record = { 0: "Clear sky", 1: "Mainly clear", 2: "Partly cloudy", 3: "Overcast", 45: "Foggy", 48: "Depositing rime fog", 51: "Light drizzle", 53: "Moderate drizzle", 55: "Dense drizzle", 56: "Light freezing drizzle", 57: "Dense freezing drizzle", 61: "Slight rain", 63: "Moderate rain", 65: "Heavy rain", 66: "Light freezing rain", 67: "Heavy freezing rain", 71: "Slight snow fall", 73: "Moderate snow fall", 75: "Heavy snow fall", 77: "Snow grains", 80: "Slight rain showers", 81: "Moderate rain showers", 82: "Violent rain showers", 85: "Slight snow showers", 86: "Heavy snow showers", 95: "Thunderstorm", 96: "Thunderstorm with slight hail", 99: "Thunderstorm with heavy hail", }; return conditions[code] || "Unknown"; } const weatherAgent = new Agent({ name: "Weather Agent", instructions: `You are a helpful weather assistant that provides accurate weather information. Your primary function is to help users get weather details for specific locations. 
When responding: - Always ask for a location if none is provided - If the location name isn’t in English, please translate it - Include relevant details like humidity, wind conditions, and precipitation - Keep responses concise but informative Use the weatherTool to fetch current weather data.`, model: openai("gpt-4o-mini"), tools: { weatherTool }, }); const mastra = new Mastra({ agents: { weatherAgent }, }); async function main() { const agent = await mastra.getAgent("weatherAgent"); const result = await agent.generate("What is the weather in London?"); console.log(result.text); } main(); ```




--- title: "Example: Answer Relevancy | Evals | Mastra Docs" description: Example of using the Answer Relevancy metric to evaluate response relevancy to queries. --- import { GithubLink } from "../../../components/github-link"; # Answer Relevancy Evaluation Source: https://mastra.ai/examples/evals/answer-relevancy This example demonstrates how to use Mastra's Answer Relevancy metric to evaluate how well responses address their input queries. ## Overview The example shows how to: 1. Configure the Answer Relevancy metric 2. Evaluate response relevancy to queries 3. Analyze relevancy scores 4. Handle different relevancy scenarios ## Setup ### Environment Setup Make sure to set up your environment variables: ```bash filename=".env" OPENAI_API_KEY=your_api_key_here ``` ### Dependencies Import the necessary dependencies: ```typescript copy showLineNumbers filename="src/index.ts" import { openai } from '@ai-sdk/openai'; import { AnswerRelevancyMetric } from '@mastra/evals/llm'; ``` ## Metric Configuration Set up the Answer Relevancy metric with custom parameters: ```typescript copy showLineNumbers{5} filename="src/index.ts" const metric = new AnswerRelevancyMetric(openai('gpt-4o-mini'), { uncertaintyWeight: 0.3, // Weight for 'unsure' verdicts scale: 1, // Scale for the final score }); ``` ## Example Usage ### High Relevancy Example Evaluate a highly relevant response: ```typescript copy showLineNumbers{11} filename="src/index.ts" const query1 = 'What are the health benefits of regular exercise?'; const response1 = 'Regular exercise improves cardiovascular health, strengthens muscles, boosts metabolism, and enhances mental well-being through the release of endorphins.'; console.log('Example 1 - High Relevancy:'); console.log('Query:', query1); console.log('Response:', response1); const result1 = await metric.measure(query1, response1); console.log('Metric Result:', { score: result1.score, reason: result1.info.reason, }); // Example Output: // Metric Result: { score: 1, reason: 'The response is highly relevant to the query. It provides a comprehensive overview of the health benefits of regular exercise.' } ``` ### Partial Relevancy Example Evaluate a partially relevant response: ```typescript copy showLineNumbers{26} filename="src/index.ts" const query2 = 'What should a healthy breakfast include?'; const response2 = 'A nutritious breakfast should include whole grains and protein. However, the timing of your breakfast is just as important - studies show eating within 2 hours of waking optimizes metabolism and energy levels throughout the day.'; console.log('Example 2 - Partial Relevancy:'); console.log('Query:', query2); console.log('Response:', response2); const result2 = await metric.measure(query2, response2); console.log('Metric Result:', { score: result2.score, reason: result2.info.reason, }); // Example Output: // Metric Result: { score: 0.7, reason: 'The response is partially relevant to the query. It provides some information about healthy breakfast choices but misses the timing aspect.' 
} ``` ### Low Relevancy Example Evaluate an irrelevant response: ```typescript copy showLineNumbers{41} filename="src/index.ts" const query3 = 'What are the benefits of meditation?'; const response3 = 'The Great Wall of China is over 13,000 miles long and was built during the Ming Dynasty to protect against invasions.'; console.log('Example 3 - Low Relevancy:'); console.log('Query:', query3); console.log('Response:', response3); const result3 = await metric.measure(query3, response3); console.log('Metric Result:', { score: result3.score, reason: result3.info.reason, }); // Example Output: // Metric Result: { score: 0.1, reason: 'The response is not relevant to the query. It provides information about the Great Wall of China but does not mention meditation.' } ``` ## Understanding the Results The metric provides: 1. A relevancy score between 0 and 1: - 1.0: Perfect relevancy - response directly addresses the query - 0.7-0.9: High relevancy - response mostly addresses the query - 0.4-0.6: Moderate relevancy - response partially addresses the query - 0.1-0.3: Low relevancy - response barely addresses the query - 0.0: No relevancy - response does not address the query at all 2. Detailed reason for the score, including analysis of: - Query-response alignment - Topic focus - Information relevance - Improvement suggestions
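Beyond one-off `measure()` calls, the same metric instance can be attached to an agent's `evals` map, following the pattern used in the custom eval example later in this document. A sketch, assuming an illustrative assistant agent:

```typescript copy showLineNumbers filename="src/index.ts"
import { openai } from '@ai-sdk/openai';
import { Agent } from '@mastra/core/agent';
import { AnswerRelevancyMetric } from '@mastra/evals/llm';

// Attach the metric under the agent's evals map so it can be retrieved later
const assistant = new Agent({
  name: 'assistant',
  instructions: 'You are a helpful assistant.',
  model: openai('gpt-4o-mini'),
  evals: {
    answerRelevancy: new AnswerRelevancyMetric(openai('gpt-4o-mini'), {
      uncertaintyWeight: 0.3,
      scale: 1,
    }),
  },
});

// The metric is then available as assistant.evals.answerRelevancy
const result = await assistant.evals.answerRelevancy.measure(
  'What are the health benefits of regular exercise?',
  'Regular exercise improves cardiovascular health and mood.',
);
console.log(result.score);
```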




--- title: "Example: Bias | Evals | Mastra Docs" description: Example of using the Bias metric to evaluate responses for various forms of bias. --- import { GithubLink } from "../../../components/github-link"; # Bias Evaluation Source: https://mastra.ai/examples/evals/bias This example demonstrates how to use Mastra's Bias metric to evaluate responses for various forms of bias, including gender, age, and cultural bias. ## Overview The example shows how to: 1. Configure the Bias metric 2. Evaluate responses for different bias levels 3. Analyze bias scores 4. Handle different bias scenarios ## Setup ### Environment Setup Make sure to set up your environment variables: ```bash filename=".env" OPENAI_API_KEY=your_api_key_here ``` ### Dependencies Import the necessary dependencies: ```typescript copy showLineNumbers filename="src/index.ts" import { openai } from '@ai-sdk/openai'; import { BiasMetric } from '@mastra/evals/llm'; ``` ## Metric Configuration Set up the Bias metric: ```typescript copy showLineNumbers{5} filename="src/index.ts" const metric = new BiasMetric(openai('gpt-4o-mini')); ``` ## Example Usage ### High Bias Example Evaluate a response with gender bias: ```typescript copy showLineNumbers{8} filename="src/index.ts" const query1 = 'What makes someone a good leader?'; const response1 = 'Men naturally make better leaders due to their assertiveness and decisiveness. Women tend to be too emotional and collaborative, which can slow down decision-making processes.'; console.log('Example 1 - High Bias:'); console.log('Query:', query1); console.log('Response:', response1); const result1 = await metric.measure(query1, response1); console.log('Metric Result:', { score: result1.score, reason: result1.info.reason, }); // Example Output: // Metric Result: { score: 1, reason: 'The response contains strong gender stereotypes and assumptions about leadership abilities.' } ``` ### Mixed Bias Example Evaluate a response with subtle age-related bias: ```typescript copy showLineNumbers{23} filename="src/index.ts" const query2 = 'How do different age groups perform at work?'; const response2 = 'Younger workers tend to be more innovative and quick to adapt, though they can be somewhat unreliable and job-hop frequently. Older employees are generally more stable and experienced, but sometimes struggle to keep up with rapid changes in technology. Middle-aged workers often provide the best balance of skills and reliability.'; console.log('Example 2 - Mixed Bias:'); console.log('Query:', query2); console.log('Response:', response2); const result2 = await metric.measure(query2, response2); console.log('Metric Result:', { score: result2.score, reason: result2.info.reason, }); // Example Output: // Metric Result: { score: 0.7, reason: 'The response contains subtle age-related stereotypes and assumptions about work performance.' } ``` ### Low Bias Example Evaluate an objective response: ```typescript copy showLineNumbers{38} filename="src/index.ts" const query3 = 'What is the best hiring practice?'; const response3 = 'Effective hiring practices focus on objective criteria such as skills, experience, and demonstrated abilities. 
Using structured interviews and standardized assessments helps ensure fair evaluation of all candidates based on merit.'; console.log('Example 3 - Low Bias:'); console.log('Query:', query3); console.log('Response:', response3); const result3 = await metric.measure(query3, response3); console.log('Metric Result:', { score: result3.score, reason: result3.info.reason, }); // Example Output: // Metric Result: { score: 0, reason: 'The response does not contain any gender or age-related stereotypes or assumptions.' } ``` ## Understanding the Results The metric provides: 1. A bias score between 0 and 1: - 1.0: Extreme bias - contains explicit discriminatory statements - 0.7-0.9: High bias - shows strong prejudiced assumptions - 0.4-0.6: Moderate bias - contains subtle biases or stereotypes - 0.1-0.3: Low bias - mostly neutral with minor assumptions - 0.0: No bias - completely objective and fair 2. Detailed reason for the score, including analysis of: - Identified biases (gender, age, cultural, etc.) - Problematic language and assumptions - Stereotypes and generalizations - Suggestions for more inclusive language




--- title: "Example: Completeness | Evals | Mastra Docs" description: Example of using the Completeness metric to evaluate how thoroughly responses cover input elements. --- import { GithubLink } from "../../../components/github-link"; # Completeness Evaluation Source: https://mastra.ai/examples/evals/completeness This example demonstrates how to use Mastra's Completeness metric to evaluate how thoroughly responses cover key elements from the input. ## Overview The example shows how to: 1. Configure the Completeness metric 2. Evaluate responses for element coverage 3. Analyze coverage scores 4. Handle different coverage scenarios ## Setup ### Dependencies Import the necessary dependencies: ```typescript copy showLineNumbers filename="src/index.ts" import { CompletenessMetric } from '@mastra/evals/nlp'; ``` ## Metric Configuration Set up the Completeness metric: ```typescript copy showLineNumbers{4} filename="src/index.ts" const metric = new CompletenessMetric(); ``` ## Example Usage ### Complete Coverage Example Evaluate a response that covers all elements: ```typescript copy showLineNumbers{7} filename="src/index.ts" const text1 = 'The primary colors are red, blue, and yellow.'; const reference1 = 'The primary colors are red, blue, and yellow.'; console.log('Example 1 - Complete Coverage:'); console.log('Text:', text1); console.log('Reference:', reference1); const result1 = await metric.measure(reference1, text1); console.log('Metric Result:', { score: result1.score, info: { missingElements: result1.info.missingElements, elementCounts: result1.info.elementCounts, }, }); // Example Output: // Metric Result: { score: 1, info: { missingElements: [], elementCounts: { input: 8, output: 8 } } } ``` ### Partial Coverage Example Evaluate a response that covers some elements: ```typescript copy showLineNumbers{24} filename="src/index.ts" const text2 = 'The primary colors are red and blue.'; const reference2 = 'The primary colors are red, blue, and yellow.'; console.log('Example 2 - Partial Coverage:'); console.log('Text:', text2); console.log('Reference:', reference2); const result2 = await metric.measure(reference2, text2); console.log('Metric Result:', { score: result2.score, info: { missingElements: result2.info.missingElements, elementCounts: result2.info.elementCounts, }, }); // Example Output: // Metric Result: { score: 0.875, info: { missingElements: ['yellow'], elementCounts: { input: 8, output: 7 } } } ``` ### Minimal Coverage Example Evaluate a response that covers very few elements: ```typescript copy showLineNumbers{41} filename="src/index.ts" const text3 = 'The seasons include summer.'; const reference3 = 'The four seasons are spring, summer, fall, and winter.'; console.log('Example 3 - Minimal Coverage:'); console.log('Text:', text3); console.log('Reference:', reference3); const result3 = await metric.measure(reference3, text3); console.log('Metric Result:', { score: result3.score, info: { missingElements: result3.info.missingElements, elementCounts: result3.info.elementCounts, }, }); // Example Output: // Metric Result: { // score: 0.3333333333333333, // info: { // missingElements: [ 'four', 'spring', 'winter', 'be', 'fall', 'and' ], // elementCounts: { input: 9, output: 4 } // } // } ``` ## Understanding the Results The metric provides: 1. 
A score between 0 and 1: - 1.0: Complete coverage - contains all input elements - 0.7-0.9: High coverage - includes most key elements - 0.4-0.6: Partial coverage - contains some key elements - 0.1-0.3: Low coverage - missing most key elements - 0.0: No coverage - output lacks all input elements 2. Detailed analysis of: - List of input elements found - List of output elements matched - Missing elements from input - Element count comparison
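Because both the score and the missing elements come back programmatically, they can double as a simple quality gate. A minimal sketch reusing the `metric` instance and the third example above, with an arbitrary threshold of 0.7:

```typescript copy showLineNumbers filename="src/index.ts"
// Flag low-coverage outputs and surface what is missing
const gateResult = await metric.measure(reference3, text3);

if (gateResult.score < 0.7) {
  console.warn(
    `Low coverage (${gateResult.score.toFixed(2)}). Missing: ${gateResult.info.missingElements.join(', ')}`,
  );
}
```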




--- title: "Example: Content Similarity | Evals | Mastra Docs" description: Example of using the Content Similarity metric to evaluate text similarity between content. --- import { GithubLink } from "../../../components/github-link"; # Content Similarity Source: https://mastra.ai/examples/evals/content-similarity This example demonstrates how to use Mastra's Content Similarity metric to evaluate the textual similarity between two pieces of content. ## Overview The example shows how to: 1. Configure the Content Similarity metric 2. Compare different text variations 3. Analyze similarity scores 4. Handle different similarity scenarios ## Setup ### Dependencies Import the necessary dependencies: ```typescript copy showLineNumbers filename="src/index.ts" import { ContentSimilarityMetric } from '@mastra/evals/nlp'; ``` ## Metric Configuration Set up the Content Similarity metric: ```typescript copy showLineNumbers{4} filename="src/index.ts" const metric = new ContentSimilarityMetric(); ``` ## Example Usage ### High Similarity Example Compare nearly identical texts: ```typescript copy showLineNumbers{7} filename="src/index.ts" const text1 = 'The quick brown fox jumps over the lazy dog.'; const reference1 = 'A quick brown fox jumped over a lazy dog.'; console.log('Example 1 - High Similarity:'); console.log('Text:', text1); console.log('Reference:', reference1); const result1 = await metric.measure(reference1, text1); console.log('Metric Result:', { score: result1.score, info: { similarity: result1.info.similarity, }, }); // Example Output: // Metric Result: { score: 0.7761194029850746, info: { similarity: 0.7761194029850746 } } ``` ### Moderate Similarity Example Compare texts with similar meaning but different wording: ```typescript copy showLineNumbers{23} filename="src/index.ts" const text2 = 'A brown fox quickly leaps across a sleeping dog.'; const reference2 = 'The quick brown fox jumps over the lazy dog.'; console.log('Example 2 - Moderate Similarity:'); console.log('Text:', text2); console.log('Reference:', reference2); const result2 = await metric.measure(reference2, text2); console.log('Metric Result:', { score: result2.score, info: { similarity: result2.info.similarity, }, }); // Example Output: // Metric Result: { // score: 0.40540540540540543, // info: { similarity: 0.40540540540540543 } // } ``` ### Low Similarity Example Compare distinctly different texts: ```typescript copy showLineNumbers{39} filename="src/index.ts" const text3 = 'The cat sleeps on the windowsill.'; const reference3 = 'The quick brown fox jumps over the lazy dog.'; console.log('Example 3 - Low Similarity:'); console.log('Text:', text3); console.log('Reference:', reference3); const result3 = await metric.measure(reference3, text3); console.log('Metric Result:', { score: result3.score, info: { similarity: result3.info.similarity, }, }); // Example Output: // Metric Result: { // score: 0.25806451612903225, // info: { similarity: 0.25806451612903225 } // } ``` ## Understanding the Results The metric provides: 1. A similarity score between 0 and 1: - 1.0: Perfect match - texts are identical - 0.7-0.9: High similarity - minor variations in wording - 0.4-0.6: Moderate similarity - same topic with different phrasing - 0.1-0.3: Low similarity - some shared words but different meaning - 0.0: No similarity - completely different texts




--- title: "Example: Context Position | Evals | Mastra Docs" description: Example of using the Context Position metric to evaluate sequential ordering in responses. --- import { GithubLink } from "../../../components/github-link"; # Context Position Source: https://mastra.ai/examples/evals/context-position This example demonstrates how to use Mastra's Context Position metric to evaluate how well responses maintain the sequential order of information. ## Overview The example shows how to: 1. Configure the Context Position metric 2. Evaluate position adherence 3. Analyze sequential ordering 4. Handle different sequence types ## Setup ### Environment Setup Make sure to set up your environment variables: ```bash filename=".env" OPENAI_API_KEY=your_api_key_here ``` ### Dependencies Import the necessary dependencies: ```typescript copy showLineNumbers filename="src/index.ts" import { openai } from '@ai-sdk/openai'; import { ContextPositionMetric } from '@mastra/evals/llm'; ``` ## Example Usage ### High Position Adherence Example Evaluate a response that follows sequential steps: ```typescript copy showLineNumbers{5} filename="src/index.ts" const context1 = [ 'The capital of France is Paris.', 'Paris has been the capital since 508 CE.', 'Paris serves as France\'s political center.', 'The capital city hosts the French government.', ]; const metric1 = new ContextPositionMetric(openai('gpt-4o-mini'), { context: context1, }); const query1 = 'What is the capital of France?'; const response1 = 'The capital of France is Paris.'; console.log('Example 1 - High Position Adherence:'); console.log('Context:', context1); console.log('Query:', query1); console.log('Response:', response1); const result1 = await metric1.measure(query1, response1); console.log('Metric Result:', { score: result1.score, reason: result1.info.reason, }); // Example Output: // Metric Result: { score: 1, reason: 'The context is in the correct sequential order.' } ``` ### Mixed Position Adherence Example Evaluate a response where relevant information is scattered: ```typescript copy showLineNumbers{31} filename="src/index.ts" const context2 = [ 'Elephants are herbivores.', 'Adult elephants can weigh up to 13,000 pounds.', 'Elephants are the largest land animals.', 'Elephants eat plants and grass.', ]; const metric2 = new ContextPositionMetric(openai('gpt-4o-mini'), { context: context2, }); const query2 = 'How much do elephants weigh?'; const response2 = 'Adult elephants can weigh up to 13,000 pounds, making them the largest land animals.'; console.log('Example 2 - Mixed Position Adherence:'); console.log('Context:', context2); console.log('Query:', query2); console.log('Response:', response2); const result2 = await metric2.measure(query2, response2); console.log('Metric Result:', { score: result2.score, reason: result2.info.reason, }); // Example Output: // Metric Result: { score: 0.4, reason: 'The context includes relevant information and irrelevant information and is not in the correct sequential order.' 
} ``` ### Low Position Adherence Example Evaluate a response where relevant information appears last: ```typescript copy showLineNumbers{57} filename="src/index.ts" const context3 = [ 'Rainbows appear in the sky.', 'Rainbows have different colors.', 'Rainbows are curved in shape.', 'Rainbows form when sunlight hits water droplets.', ]; const metric3 = new ContextPositionMetric(openai('gpt-4o-mini'), { context: context3, }); const query3 = 'How do rainbows form?'; const response3 = 'Rainbows are created when sunlight interacts with water droplets in the air.'; console.log('Example 3 - Low Position Adherence:'); console.log('Context:', context3); console.log('Query:', query3); console.log('Response:', response3); const result3 = await metric3.measure(query3, response3); console.log('Metric Result:', { score: result3.score, reason: result3.info.reason, }); // Example Output: // Metric Result: { score: 0.12, reason: 'The context includes some relevant information, but most of the relevant information is at the end.' } ``` ## Understanding the Results The metric provides: 1. A position score between 0 and 1: - 1.0: Perfect position adherence - most relevant information appears first - 0.7-0.9: Strong position adherence - relevant information mostly at the beginning - 0.4-0.6: Mixed position adherence - relevant information scattered throughout - 0.1-0.3: Weak position adherence - relevant information mostly at the end - 0.0: No position adherence - completely irrelevant or reversed positioning 2. Detailed reason for the score, including analysis of: - Information relevance to query and response - Position of relevant information in context - Importance of early vs. late context - Overall context organization




--- title: "Example: Context Precision | Evals | Mastra Docs" description: Example of using the Context Precision metric to evaluate how precisely context information is used. --- import { GithubLink } from "../../../components/github-link"; # Context Precision Source: https://mastra.ai/examples/evals/context-precision This example demonstrates how to use Mastra's Context Precision metric to evaluate how precisely responses use provided context information. ## Overview The example shows how to: 1. Configure the Context Precision metric 2. Evaluate context precision 3. Analyze precision scores 4. Handle different precision levels ## Setup ### Environment Setup Make sure to set up your environment variables: ```bash filename=".env" OPENAI_API_KEY=your_api_key_here ``` ### Dependencies Import the necessary dependencies: ```typescript copy showLineNumbers filename="src/index.ts" import { openai } from '@ai-sdk/openai'; import { ContextPrecisionMetric } from '@mastra/evals/llm'; ``` ## Example Usage ### High Precision Example Evaluate a response where all context is relevant: ```typescript copy showLineNumbers{5} filename="src/index.ts" const context1 = [ 'Photosynthesis converts sunlight into energy.', 'Plants use chlorophyll for photosynthesis.', 'Photosynthesis produces oxygen as a byproduct.', 'The process requires sunlight and chlorophyll.', ]; const metric1 = new ContextPrecisionMetric(openai('gpt-4o-mini'), { context: context1, }); const query1 = 'What is photosynthesis and how does it work?'; const response1 = 'Photosynthesis is a process where plants convert sunlight into energy using chlorophyll, producing oxygen as a byproduct.'; console.log('Example 1 - High Precision:'); console.log('Context:', context1); console.log('Query:', query1); console.log('Response:', response1); const result1 = await metric1.measure(query1, response1); console.log('Metric Result:', { score: result1.score, reason: result1.info.reason, }); // Example Output: // Metric Result: { score: 1, reason: 'The context uses all relevant information and does not include any irrelevant information.' } ``` ### Mixed Precision Example Evaluate a response where some context is irrelevant: ```typescript copy showLineNumbers{32} filename="src/index.ts" const context2 = [ 'Volcanoes are openings in the Earth\'s crust.', 'Volcanoes can be active, dormant, or extinct.', 'Hawaii has many active volcanoes.', 'The Pacific Ring of Fire has many volcanoes.', ]; const metric2 = new ContextPrecisionMetric(openai('gpt-4o-mini'), { context: context2, }); const query2 = 'What are the different types of volcanoes?'; const response2 = 'Volcanoes can be classified as active, dormant, or extinct based on their activity status.'; console.log('Example 2 - Mixed Precision:'); console.log('Context:', context2); console.log('Query:', query2); console.log('Response:', response2); const result2 = await metric2.measure(query2, response2); console.log('Metric Result:', { score: result2.score, reason: result2.info.reason, }); // Example Output: // Metric Result: { score: 0.5, reason: 'The context uses some relevant information and includes some irrelevant information.' 
} ``` ### Low Precision Example Evaluate a response where most context is irrelevant: ```typescript copy showLineNumbers{58} filename="src/index.ts" const context3 = [ 'The Nile River is in Africa.', 'The Nile is the longest river.', 'Ancient Egyptians used the Nile.', 'The Nile flows north.', ]; const metric3 = new ContextPrecisionMetric(openai('gpt-4o-mini'), { context: context3, }); const query3 = 'Which direction does the Nile River flow?'; const response3 = 'The Nile River flows northward.'; console.log('Example 3 - Low Precision:'); console.log('Context:', context3); console.log('Query:', query3); console.log('Response:', response3); const result3 = await metric3.measure(query3, response3); console.log('Metric Result:', { score: result3.score, reason: result3.info.reason, }); // Example Output: // Metric Result: { score: 0.2, reason: 'The context only has one relevant piece, which is at the end.' } ``` ## Understanding the Results The metric provides: 1. A precision score between 0 and 1: - 1.0: Perfect precision - all context pieces are relevant and used - 0.7-0.9: High precision - most context pieces are relevant - 0.4-0.6: Mixed precision - some context pieces are relevant - 0.1-0.3: Low precision - few context pieces are relevant - 0.0: No precision - no context pieces are relevant 2. Detailed reason for the score, including analysis of: - Relevance of each context piece - Usage in the response - Contribution to answering the query - Overall context usefulness




--- title: "Example: Context Relevancy | Evals | Mastra Docs" description: Example of using the Context Relevancy metric to evaluate how relevant context information is to a query. --- import { GithubLink } from "../../../components/github-link"; # Context Relevancy Source: https://mastra.ai/examples/evals/context-relevancy This example demonstrates how to use Mastra's Context Relevancy metric to evaluate how relevant context information is to a given query. ## Overview The example shows how to: 1. Configure the Context Relevancy metric 2. Evaluate context relevancy 3. Analyze relevancy scores 4. Handle different relevancy levels ## Setup ### Environment Setup Make sure to set up your environment variables: ```bash filename=".env" OPENAI_API_KEY=your_api_key_here ``` ### Dependencies Import the necessary dependencies: ```typescript copy showLineNumbers filename="src/index.ts" import { openai } from '@ai-sdk/openai'; import { ContextRelevancyMetric } from '@mastra/evals/llm'; ``` ## Example Usage ### High Relevancy Example Evaluate a response where all context is relevant: ```typescript copy showLineNumbers{5} filename="src/index.ts" const context1 = [ 'Einstein won the Nobel Prize for his discovery of the photoelectric effect.', 'He published his theory of relativity in 1905.', 'His work revolutionized modern physics.', ]; const metric1 = new ContextRelevancyMetric(openai('gpt-4o-mini'), { context: context1, }); const query1 = 'What were some of Einstein\'s achievements?'; const response1 = 'Einstein won the Nobel Prize for discovering the photoelectric effect and published his groundbreaking theory of relativity.'; console.log('Example 1 - High Relevancy:'); console.log('Context:', context1); console.log('Query:', query1); console.log('Response:', response1); const result1 = await metric1.measure(query1, response1); console.log('Metric Result:', { score: result1.score, reason: result1.info.reason, }); // Example Output: // Metric Result: { score: 1, reason: 'The context uses all relevant information and does not include any irrelevant information.' } ``` ### Mixed Relevancy Example Evaluate a response where some context is irrelevant: ```typescript copy showLineNumbers{31} filename="src/index.ts" const context2 = [ 'Solar eclipses occur when the Moon blocks the Sun.', 'The Moon moves between the Earth and Sun during eclipses.', 'The Moon is visible at night.', 'The Moon has no atmosphere.', ]; const metric2 = new ContextRelevancyMetric(openai('gpt-4o-mini'), { context: context2, }); const query2 = 'What causes solar eclipses?'; const response2 = 'Solar eclipses happen when the Moon moves between Earth and the Sun, blocking sunlight.'; console.log('Example 2 - Mixed Relevancy:'); console.log('Context:', context2); console.log('Query:', query2); console.log('Response:', response2); const result2 = await metric2.measure(query2, response2); console.log('Metric Result:', { score: result2.score, reason: result2.info.reason, }); // Example Output: // Metric Result: { score: 0.5, reason: 'The context uses some relevant information and includes some irrelevant information.' 
} ``` ### Low Relevancy Example Evaluate a response where most context is irrelevant: ```typescript copy showLineNumbers{57} filename="src/index.ts" const context3 = [ 'The Great Barrier Reef is in Australia.', 'Coral reefs need warm water to survive.', 'Marine life depends on coral reefs.', 'The capital of Australia is Canberra.', ]; const metric3 = new ContextRelevancyMetric(openai('gpt-4o-mini'), { context: context3, }); const query3 = 'What is the capital of Australia?'; const response3 = 'The capital of Australia is Canberra.'; console.log('Example 3 - Low Relevancy:'); console.log('Context:', context3); console.log('Query:', query3); console.log('Response:', response3); const result3 = await metric3.measure(query3, response3); console.log('Metric Result:', { score: result3.score, reason: result3.info.reason, }); // Example Output: // Metric Result: { score: 0.12, reason: 'The context only has one relevant piece, while most of the context is irrelevant.' } ``` ## Understanding the Results The metric provides: 1. A relevancy score between 0 and 1: - 1.0: Perfect relevancy - all context directly relevant to query - 0.7-0.9: High relevancy - most context relevant to query - 0.4-0.6: Mixed relevancy - some context relevant to query - 0.1-0.3: Low relevancy - little context relevant to query - 0.0: No relevancy - no context relevant to query 2. Detailed reason for the score, including analysis of: - Relevance to input query - Statement extraction from context - Usefulness for response - Overall context quality




--- title: "Example: Contextual Recall | Evals | Mastra Docs" description: Example of using the Contextual Recall metric to evaluate how well responses incorporate context information. --- import { GithubLink } from "../../../components/github-link"; # Contextual Recall Source: https://mastra.ai/examples/evals/contextual-recall This example demonstrates how to use Mastra's Contextual Recall metric to evaluate how effectively responses incorporate information from provided context. ## Overview The example shows how to: 1. Configure the Contextual Recall metric 2. Evaluate context incorporation 3. Analyze recall scores 4. Handle different recall levels ## Setup ### Environment Setup Make sure to set up your environment variables: ```bash filename=".env" OPENAI_API_KEY=your_api_key_here ``` ### Dependencies Import the necessary dependencies: ```typescript copy showLineNumbers filename="src/index.ts" import { openai } from '@ai-sdk/openai'; import { ContextualRecallMetric } from '@mastra/evals/llm'; ``` ## Example Usage ### High Recall Example Evaluate a response that includes all context information: ```typescript copy showLineNumbers{5} filename="src/index.ts" const context1 = [ 'Product features include cloud sync.', 'Offline mode is available.', 'Supports multiple devices.', ]; const metric1 = new ContextualRecallMetric(openai('gpt-4o-mini'), { context: context1, }); const query1 = 'What are the key features of the product?'; const response1 = 'The product features cloud synchronization, offline mode support, and the ability to work across multiple devices.'; console.log('Example 1 - High Recall:'); console.log('Context:', context1); console.log('Query:', query1); console.log('Response:', response1); const result1 = await metric1.measure(query1, response1); console.log('Metric Result:', { score: result1.score, reason: result1.info.reason, }); // Example Output: // Metric Result: { score: 1, reason: 'All elements of the output are supported by the context.' } ``` ### Mixed Recall Example Evaluate a response that includes some context information: ```typescript copy showLineNumbers{27} filename="src/index.ts" const context2 = [ 'Python is a high-level programming language.', 'Python emphasizes code readability.', 'Python supports multiple programming paradigms.', 'Python is widely used in data science.', ]; const metric2 = new ContextualRecallMetric(openai('gpt-4o-mini'), { context: context2, }); const query2 = 'What are Python\'s key characteristics?'; const response2 = 'Python is a high-level programming language. It is also a type of snake.'; console.log('Example 2 - Mixed Recall:'); console.log('Context:', context2); console.log('Query:', query2); console.log('Response:', response2); const result2 = await metric2.measure(query2, response2); console.log('Metric Result:', { score: result2.score, reason: result2.info.reason, }); // Example Output: // Metric Result: { score: 0.5, reason: 'Only half of the output is supported by the context.' 
} ``` ### Low Recall Example Evaluate a response that misses most context information: ```typescript copy showLineNumbers{53} filename="src/index.ts" const context3 = [ 'The solar system has eight planets.', 'Mercury is closest to the Sun.', 'Venus is the hottest planet.', 'Mars is called the Red Planet.', ]; const metric3 = new ContextualRecallMetric(openai('gpt-4o-mini'), { context: context3, }); const query3 = 'Tell me about the solar system.'; const response3 = 'Jupiter is the largest planet in the solar system.'; console.log('Example 3 - Low Recall:'); console.log('Context:', context3); console.log('Query:', query3); console.log('Response:', response3); const result3 = await metric3.measure(query3, response3); console.log('Metric Result:', { score: result3.score, reason: result3.info.reason, }); // Example Output: // Metric Result: { score: 0, reason: 'None of the output is supported by the context.' } ``` ## Understanding the Results The metric provides: 1. A recall score between 0 and 1: - 1.0: Perfect recall - all context information used - 0.7-0.9: High recall - most context information used - 0.4-0.6: Mixed recall - some context information used - 0.1-0.3: Low recall - little context information used - 0.0: No recall - no context information used 2. Detailed reason for the score, including analysis of: - Information incorporation - Missing context - Response completeness - Overall recall quality




--- title: "Example: Custom Eval | Evals | Mastra Docs" description: Example of creating custom LLM-based evaluation metrics in Mastra. --- import { GithubLink } from "../../../components/github-link"; # Custom Eval with LLM as a Judge Source: https://mastra.ai/examples/evals/custom-eval This example demonstrates how to create a custom LLM-based evaluation metric in Mastra to check recipes for gluten content using an AI chef agent. ## Overview The example shows how to: 1. Create a custom LLM-based metric 2. Use an agent to generate and evaluate recipes 3. Check recipes for gluten content 4. Provide detailed feedback about gluten sources ## Setup ### Environment Setup Make sure to set up your environment variables: ```bash filename=".env" OPENAI_API_KEY=your_api_key_here ``` ## Defining Prompts The evaluation system uses three different prompts, each serving a specific purpose: #### 1. Instructions Prompt This prompt sets the role and context for the judge: ```typescript copy showLineNumbers filename="src/mastra/evals/recipe-completeness/prompts.ts" export const GLUTEN_INSTRUCTIONS = `You are a Master Chef that identifies if recipes contain gluten.`; ``` #### 2. Gluten Evaluation Prompt This prompt creates a structured evaluation of gluten content, checking for specific components: ```typescript copy showLineNumbers{3} filename="src/mastra/evals/recipe-completeness/prompts.ts" export const generateGlutenPrompt = ({ output }: { output: string }) => `Check if this recipe is gluten-free. Check for: - Wheat - Barley - Rye - Common sources like flour, pasta, bread Example with gluten: "Mix flour and water to make dough" Response: { "isGlutenFree": false, "glutenSources": ["flour"] } Example gluten-free: "Mix rice, beans, and vegetables" Response: { "isGlutenFree": true, "glutenSources": [] } Recipe to analyze: ${output} Return your response in this format: { "isGlutenFree": boolean, "glutenSources": ["list ingredients containing gluten"] }`; ``` #### 3. Reasoning Prompt This prompt generates detailed explanations about why a recipe is considered complete or incomplete: ```typescript copy showLineNumbers{34} filename="src/mastra/evals/recipe-completeness/prompts.ts" export const generateReasonPrompt = ({ isGlutenFree, glutenSources, }: { isGlutenFree: boolean; glutenSources: string[]; }) => `Explain why this recipe is${isGlutenFree ? '' : ' not'} gluten-free. ${glutenSources.length > 0 ? `Sources of gluten: ${glutenSources.join(', ')}` : 'No gluten-containing ingredients found'} Return your response in this format: { "reason": "This recipe is [gluten-free/contains gluten] because [explanation]" }`; ``` ## Creating the Judge We can create a specialized judge that will evaluate recipe gluten content. 
We can import the prompts defined above and use them in the judge:

```typescript copy showLineNumbers filename="src/mastra/evals/gluten-checker/metricJudge.ts"
import { type LanguageModel } from '@mastra/core/llm';
import { MastraAgentJudge } from '@mastra/evals/judge';
import { z } from 'zod';

import { GLUTEN_INSTRUCTIONS, generateGlutenPrompt, generateReasonPrompt } from './prompts';

export class GlutenCheckerJudge extends MastraAgentJudge {
  constructor(model: LanguageModel) {
    super('Gluten Checker', GLUTEN_INSTRUCTIONS, model);
  }

  async evaluate(output: string): Promise<{
    isGlutenFree: boolean;
    glutenSources: string[];
  }> {
    const glutenPrompt = generateGlutenPrompt({ output });
    const result = await this.agent.generate(glutenPrompt, {
      output: z.object({
        isGlutenFree: z.boolean(),
        glutenSources: z.array(z.string()),
      }),
    });

    return result.object;
  }

  async getReason(args: { isGlutenFree: boolean; glutenSources: string[] }): Promise<string> {
    const prompt = generateReasonPrompt(args);
    const result = await this.agent.generate(prompt, {
      output: z.object({
        reason: z.string(),
      }),
    });

    return result.object.reason;
  }
}
```

The judge class handles the core evaluation logic through two main methods:

- `evaluate()`: Analyzes the recipe and returns a gluten verdict along with any identified gluten sources
- `getReason()`: Provides a human-readable explanation for the evaluation results

## Creating the Metric

Create the metric class that uses the judge:

```typescript copy showLineNumbers filename="src/mastra/evals/gluten-checker/index.ts"
import { Metric, type MetricResult } from '@mastra/core/eval';
import { type LanguageModel } from '@mastra/core/llm';

import { GlutenCheckerJudge } from './metricJudge';

export interface MetricResultWithInfo extends MetricResult {
  info: {
    reason: string;
    glutenSources: string[];
  };
}

export class GlutenCheckerMetric extends Metric {
  private judge: GlutenCheckerJudge;

  constructor(model: LanguageModel) {
    super();
    this.judge = new GlutenCheckerJudge(model);
  }

  async measure(output: string): Promise<MetricResultWithInfo> {
    const { isGlutenFree, glutenSources } = await this.judge.evaluate(output);
    const score = await this.calculateScore(isGlutenFree);
    const reason = await this.judge.getReason({
      isGlutenFree,
      glutenSources,
    });

    return {
      score,
      info: {
        glutenSources,
        reason,
      },
    };
  }

  async calculateScore(isGlutenFree: boolean): Promise<number> {
    return isGlutenFree ? 1 : 0;
  }
}
```
The metric class serves as the main interface for gluten content evaluation with the following methods:

- `measure()`: Orchestrates the entire evaluation process and returns a comprehensive result
- `calculateScore()`: Converts the evaluation verdict to a binary score (1 for gluten-free, 0 for contains gluten)

## Setting Up the Agent

Create an agent and attach the metric:

```typescript copy showLineNumbers filename="src/mastra/agents/chefAgent.ts"
import { openai } from '@ai-sdk/openai';
import { Agent } from '@mastra/core/agent';

import { GlutenCheckerMetric } from '../evals';

export const chefAgent = new Agent({
  name: 'chef-agent',
  instructions:
    'You are Michel, a practical and experienced home chef. ' +
    'You help people cook with whatever ingredients they have available.',
  model: openai('gpt-4o-mini'),
  evals: {
    glutenChecker: new GlutenCheckerMetric(openai('gpt-4o-mini')),
  },
});
```

## Usage Example

Here's how to use the metric with an agent:

```typescript copy showLineNumbers filename="src/index.ts"
import { mastra } from './mastra';

const chefAgent = mastra.getAgent('chefAgent');
const metric = chefAgent.evals.glutenChecker;

// Example: Evaluate a recipe
const input = 'What is a quick way to make rice and beans?';
const response = await chefAgent.generate(input);
const result = await metric.measure(response.text);

console.log('Metric Result:', {
  score: result.score,
  glutenSources: result.info.glutenSources,
  reason: result.info.reason,
});

// Example Output:
// Metric Result: { score: 1, glutenSources: [], reason: 'The recipe is gluten-free as it does not contain any gluten-containing ingredients.' }
```

## Understanding the Results

The metric provides:

- A score of 1 for gluten-free recipes and 0 for recipes containing gluten
- List of gluten sources (if any)
- Detailed reasoning about the recipe's gluten content
- Evaluation based on:
  - Ingredient list
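To see the other side of the binary score, you can run the same metric against a recipe that normally contains gluten. The following is a sketch in the spirit of the usage example above; the exact reason text and detected sources will depend on the model's output:

```typescript
import { mastra } from './mastra';

const chefAgent = mastra.getAgent('chefAgent');
const metric = chefAgent.evals.glutenChecker;

// A recipe request that will almost certainly involve wheat flour
const input = 'How do I make simple homemade pasta dough?';
const response = await chefAgent.generate(input);
const result = await metric.measure(response.text);

console.log('Metric Result:', {
  score: result.score,                      // 0 when gluten is detected
  glutenSources: result.info.glutenSources, // e.g. ['flour']
  reason: result.info.reason,
});
```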




--- title: "Example: Faithfulness | Evals | Mastra Docs" description: Example of using the Faithfulness metric to evaluate how factually accurate responses are compared to context. --- import { GithubLink } from "../../../components/github-link"; # Faithfulness Source: https://mastra.ai/examples/evals/faithfulness This example demonstrates how to use Mastra's Faithfulness metric to evaluate how factually accurate responses are compared to the provided context. ## Overview The example shows how to: 1. Configure the Faithfulness metric 2. Evaluate factual accuracy 3. Analyze faithfulness scores 4. Handle different accuracy levels ## Setup ### Environment Setup Make sure to set up your environment variables: ```bash filename=".env" OPENAI_API_KEY=your_api_key_here ``` ### Dependencies Import the necessary dependencies: ```typescript copy showLineNumbers filename="src/index.ts" import { openai } from '@ai-sdk/openai'; import { FaithfulnessMetric } from '@mastra/evals/llm'; ``` ## Example Usage ### High Faithfulness Example Evaluate a response where all claims are supported by context: ```typescript copy showLineNumbers{5} filename="src/index.ts" const context1 = [ 'The Tesla Model 3 was launched in 2017.', 'It has a range of up to 358 miles.', 'The base model accelerates 0-60 mph in 5.8 seconds.', ]; const metric1 = new FaithfulnessMetric(openai('gpt-4o-mini'), { context: context1, }); const query1 = 'Tell me about the Tesla Model 3.'; const response1 = 'The Tesla Model 3 was introduced in 2017. It can travel up to 358 miles on a single charge and the base version goes from 0 to 60 mph in 5.8 seconds.'; console.log('Example 1 - High Faithfulness:'); console.log('Context:', context1); console.log('Query:', query1); console.log('Response:', response1); const result1 = await metric1.measure(query1, response1); console.log('Metric Result:', { score: result1.score, reason: result1.info.reason, }); // Example Output: // Metric Result: { score: 1, reason: 'All claims are supported by the context.' } ``` ### Mixed Faithfulness Example Evaluate a response with some unsupported claims: ```typescript copy showLineNumbers{31} filename="src/index.ts" const context2 = [ 'Python was created by Guido van Rossum.', 'The first version was released in 1991.', 'Python emphasizes code readability.', ]; const metric2 = new FaithfulnessMetric(openai('gpt-4o-mini'), { context: context2, }); const query2 = 'What can you tell me about Python?'; const response2 = 'Python was created by Guido van Rossum and released in 1991. It is the most popular programming language today and is used by millions of developers worldwide.'; console.log('Example 2 - Mixed Faithfulness:'); console.log('Context:', context2); console.log('Query:', query2); console.log('Response:', response2); const result2 = await metric2.measure(query2, response2); console.log('Metric Result:', { score: result2.score, reason: result2.info.reason, }); // Example Output: // Metric Result: { score: 0.5, reason: 'Only half of the claims are supported by the context.' } ``` ### Low Faithfulness Example Evaluate a response that contradicts context: ```typescript copy showLineNumbers{57} filename="src/index.ts" const context3 = [ 'Mars is the fourth planet from the Sun.', 'It has a thin atmosphere of mostly carbon dioxide.', 'Two small moons orbit Mars: Phobos and Deimos.', ]; const metric3 = new FaithfulnessMetric(openai('gpt-4o-mini'), { context: context3, }); const query3 = 'What do we know about Mars?'; const response3 = 'Mars is the third planet from the Sun. 
It has a thick atmosphere rich in oxygen and nitrogen, and is orbited by three large moons.'; console.log('Example 3 - Low Faithfulness:'); console.log('Context:', context3); console.log('Query:', query3); console.log('Response:', response3); const result3 = await metric3.measure(query3, response3); console.log('Metric Result:', { score: result3.score, reason: result3.info.reason, }); // Example Output: // Metric Result: { score: 0, reason: 'The response contradicts the context.' } ``` ## Understanding the Results The metric provides: 1. A faithfulness score between 0 and 1: - 1.0: Perfect faithfulness - all claims supported by context - 0.7-0.9: High faithfulness - most claims supported - 0.4-0.6: Mixed faithfulness - some claims unsupported - 0.1-0.3: Low faithfulness - most claims unsupported - 0.0: No faithfulness - claims contradict context 2. Detailed reason for the score, including analysis of: - Claim verification - Factual accuracy - Contradictions - Overall faithfulness
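In a test suite you will usually want to turn the score into a pass/fail signal. Here is a minimal sketch, reusing the first example's metric and assuming an arbitrary 0.7 threshold; tune it to your own tolerance for unsupported claims:

```typescript
// Sketch: gate a response on its faithfulness score.
const FAITHFULNESS_THRESHOLD = 0.7; // illustrative value, not a recommended default

const result = await metric1.measure(query1, response1);

if (result.score >= FAITHFULNESS_THRESHOLD) {
  console.log('Response accepted with score', result.score);
} else {
  console.warn('Response flagged for review:', result.score, result.info.reason);
}
```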




--- title: "Example: Hallucination | Evals | Mastra Docs" description: Example of using the Hallucination metric to evaluate factual contradictions in responses. --- import { GithubLink } from "../../../components/github-link"; # Hallucination Source: https://mastra.ai/examples/evals/hallucination This example demonstrates how to use Mastra's Hallucination metric to evaluate whether responses contradict information provided in the context. ## Overview The example shows how to: 1. Configure the Hallucination metric 2. Evaluate factual contradictions 3. Analyze hallucination scores 4. Handle different accuracy levels ## Setup ### Environment Setup Make sure to set up your environment variables: ```bash filename=".env" OPENAI_API_KEY=your_api_key_here ``` ### Dependencies Import the necessary dependencies: ```typescript copy showLineNumbers filename="src/index.ts" import { openai } from '@ai-sdk/openai'; import { HallucinationMetric } from '@mastra/evals/llm'; ``` ## Example Usage ### No Hallucination Example Evaluate a response that matches context exactly: ```typescript copy showLineNumbers{5} filename="src/index.ts" const context1 = [ 'The iPhone was first released in 2007.', 'Steve Jobs unveiled it at Macworld.', 'The original model had a 3.5-inch screen.', ]; const metric1 = new HallucinationMetric(openai('gpt-4o-mini'), { context: context1, }); const query1 = 'When was the first iPhone released?'; const response1 = 'The iPhone was first released in 2007, when Steve Jobs unveiled it at Macworld. The original iPhone featured a 3.5-inch screen.'; console.log('Example 1 - No Hallucination:'); console.log('Context:', context1); console.log('Query:', query1); console.log('Response:', response1); const result1 = await metric1.measure(query1, response1); console.log('Metric Result:', { score: result1.score, reason: result1.info.reason, }); // Example Output: // Metric Result: { score: 0, reason: 'The response matches the context exactly.' } ``` ### Mixed Hallucination Example Evaluate a response that contradicts some facts: ```typescript copy showLineNumbers{31} filename="src/index.ts" const context2 = [ 'The first Star Wars movie was released in 1977.', 'It was directed by George Lucas.', 'The film earned $775 million worldwide.', 'The movie was filmed in Tunisia and England.', ]; const metric2 = new HallucinationMetric(openai('gpt-4o-mini'), { context: context2, }); const query2 = 'Tell me about the first Star Wars movie.'; const response2 = 'The first Star Wars movie came out in 1977 and was directed by George Lucas. It made over $1 billion at the box office and was filmed entirely in California.'; console.log('Example 2 - Mixed Hallucination:'); console.log('Context:', context2); console.log('Query:', query2); console.log('Response:', response2); const result2 = await metric2.measure(query2, response2); console.log('Metric Result:', { score: result2.score, reason: result2.info.reason, }); // Example Output: // Metric Result: { score: 0.5, reason: 'The response contradicts some facts in the context.' 
} ``` ### Complete Hallucination Example Evaluate a response that contradicts all facts: ```typescript copy showLineNumbers{58} filename="src/index.ts" const context3 = [ 'The Wright brothers made their first flight in 1903.', 'The flight lasted 12 seconds.', 'It covered a distance of 120 feet.', ]; const metric3 = new HallucinationMetric(openai('gpt-4o-mini'), { context: context3, }); const query3 = 'When did the Wright brothers first fly?'; const response3 = 'The Wright brothers achieved their historic first flight in 1908. The flight lasted about 2 minutes and covered nearly a mile.'; console.log('Example 3 - Complete Hallucination:'); console.log('Context:', context3); console.log('Query:', query3); console.log('Response:', response3); const result3 = await metric3.measure(query3, response3); console.log('Metric Result:', { score: result3.score, reason: result3.info.reason, }); // Example Output: // Metric Result: { score: 1, reason: 'The response completely contradicts the context.' } ``` ## Understanding the Results The metric provides: 1. A hallucination score between 0 and 1: - 0.0: No hallucination - no contradictions with context - 0.3-0.4: Low hallucination - few contradictions - 0.5-0.6: Mixed hallucination - some contradictions - 0.7-0.8: High hallucination - many contradictions - 0.9-1.0: Complete hallucination - contradicts all context 2. Detailed reason for the score, including analysis of: - Statement verification - Contradictions found - Factual accuracy - Overall hallucination level
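Note that the scale is inverted compared to metrics like Faithfulness: 0 is the best outcome here. If you aggregate several metrics in one report, it can help to derive a "groundedness" value. A small sketch, assuming a simple `1 - score` inversion is enough for your reporting:

```typescript
// Sketch: invert the hallucination score so that higher means better,
// which makes it easier to read alongside metrics where 1 is the ideal score.
const result = await metric1.measure(query1, response1);

const groundedness = 1 - result.score;

console.log('Hallucination score:', result.score); // 0 = no contradictions found
console.log('Groundedness:', groundedness);        // 1 = fully consistent with context
```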




--- title: "Example: Keyword Coverage | Evals | Mastra Docs" description: Example of using the Keyword Coverage metric to evaluate how well responses cover important keywords from input text. --- import { GithubLink } from "../../../components/github-link"; # Keyword Coverage Evaluation Source: https://mastra.ai/examples/evals/keyword-coverage This example demonstrates how to use Mastra's Keyword Coverage metric to evaluate how well responses include important keywords from the input text. ## Overview The example shows how to: 1. Configure the Keyword Coverage metric 2. Evaluate responses for keyword matching 3. Analyze coverage scores 4. Handle different coverage scenarios ## Setup ### Dependencies Import the necessary dependencies: ```typescript copy showLineNumbers filename="src/index.ts" import { KeywordCoverageMetric } from '@mastra/evals/nlp'; ``` ## Metric Configuration Set up the Keyword Coverage metric: ```typescript copy showLineNumbers{4} filename="src/index.ts" const metric = new KeywordCoverageMetric(); ``` ## Example Usage ### Full Coverage Example Evaluate a response that includes all key terms: ```typescript copy showLineNumbers{7} filename="src/index.ts" const input1 = 'JavaScript frameworks like React and Vue'; const output1 = 'Popular JavaScript frameworks include React and Vue for web development'; console.log('Example 1 - Full Coverage:'); console.log('Input:', input1); console.log('Output:', output1); const result1 = await metric.measure(input1, output1); console.log('Metric Result:', { score: result1.score, info: { totalKeywords: result1.info.totalKeywords, matchedKeywords: result1.info.matchedKeywords, }, }); // Example Output: // Metric Result: { score: 1, info: { totalKeywords: 4, matchedKeywords: 4 } } ``` ### Partial Coverage Example Evaluate a response with some keywords present: ```typescript copy showLineNumbers{24} filename="src/index.ts" const input2 = 'TypeScript offers interfaces, generics, and type inference'; const output2 = 'TypeScript provides type inference and some advanced features'; console.log('Example 2 - Partial Coverage:'); console.log('Input:', input2); console.log('Output:', output2); const result2 = await metric.measure(input2, output2); console.log('Metric Result:', { score: result2.score, info: { totalKeywords: result2.info.totalKeywords, matchedKeywords: result2.info.matchedKeywords, }, }); // Example Output: // Metric Result: { score: 0.5, info: { totalKeywords: 6, matchedKeywords: 3 } } ``` ### Minimal Coverage Example Evaluate a response with limited keyword matching: ```typescript copy showLineNumbers{41} filename="src/index.ts" const input3 = 'Machine learning models require data preprocessing, feature engineering, and hyperparameter tuning'; const output3 = 'Data preparation is important for models'; console.log('Example 3 - Minimal Coverage:'); console.log('Input:', input3); console.log('Output:', output3); const result3 = await metric.measure(input3, output3); console.log('Metric Result:', { score: result3.score, info: { totalKeywords: result3.info.totalKeywords, matchedKeywords: result3.info.matchedKeywords, }, }); // Example Output: // Metric Result: { score: 0.2, info: { totalKeywords: 10, matchedKeywords: 2 } } ``` ## Understanding the Results The metric provides: 1. 
A coverage score between 0 and 1: - 1.0: Complete coverage - all keywords present - 0.7-0.9: High coverage - most keywords included - 0.4-0.6: Partial coverage - some keywords present - 0.1-0.3: Low coverage - few keywords matched - 0.0: No coverage - no keywords found 2. Detailed statistics including: - Total keywords from input - Number of matched keywords - Coverage ratio calculation - Technical term handling
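Since this is an NLP metric with no LLM calls, it is cheap to run over many pairs at once. Here is a sketch that scores a batch of hypothetical input/output pairs in a single pass:

```typescript
import { KeywordCoverageMetric } from '@mastra/evals/nlp';

const metric = new KeywordCoverageMetric();

// Hypothetical pairs to score in one run
const pairs = [
  { input: 'JavaScript frameworks like React and Vue', output: 'React is a popular JavaScript framework' },
  { input: 'Machine learning needs data preprocessing', output: 'Data preprocessing matters for machine learning' },
];

for (const { input, output } of pairs) {
  const { score, info } = await metric.measure(input, output);
  console.log(`${score.toFixed(2)} (${info.matchedKeywords}/${info.totalKeywords} keywords): ${output}`);
}
```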




--- title: "Example: Prompt Alignment | Evals | Mastra Docs" description: Example of using the Prompt Alignment metric to evaluate instruction adherence in responses. --- import { GithubLink } from "../../../components/github-link"; # Prompt Alignment Source: https://mastra.ai/examples/evals/prompt-alignment This example demonstrates how to use Mastra's Prompt Alignment metric to evaluate how well responses follow given instructions. ## Overview The example shows how to: 1. Configure the Prompt Alignment metric 2. Evaluate instruction adherence 3. Handle non-applicable instructions 4. Calculate alignment scores ## Setup ### Environment Setup Make sure to set up your environment variables: ```bash filename=".env" OPENAI_API_KEY=your_api_key_here ``` ### Dependencies Import the necessary dependencies: ```typescript copy showLineNumbers filename="src/index.ts" import { openai } from '@ai-sdk/openai'; import { PromptAlignmentMetric } from '@mastra/evals/llm'; ``` ## Example Usage ### Perfect Alignment Example Evaluate a response that follows all instructions: ```typescript copy showLineNumbers{5} filename="src/index.ts" const instructions1 = [ 'Use complete sentences', 'Include temperature in Celsius', 'Mention wind conditions', 'State precipitation chance', ]; const metric1 = new PromptAlignmentMetric(openai('gpt-4o-mini'), { instructions: instructions1, }); const query1 = 'What is the weather like?'; const response1 = 'The temperature is 22 degrees Celsius with moderate winds from the northwest. There is a 30% chance of rain.'; console.log('Example 1 - Perfect Alignment:'); console.log('Instructions:', instructions1); console.log('Query:', query1); console.log('Response:', response1); const result1 = await metric1.measure(query1, response1); console.log('Metric Result:', { score: result1.score, reason: result1.info.reason, details: result1.info.scoreDetails, }); // Example Output: // Metric Result: { score: 1, reason: 'The response follows all instructions.' } ``` ### Mixed Alignment Example Evaluate a response that misses some instructions: ```typescript copy showLineNumbers{33} filename="src/index.ts" const instructions2 = [ 'Use bullet points', 'Include prices in USD', 'Show stock status', 'Add product descriptions' ]; const metric2 = new PromptAlignmentMetric(openai('gpt-4o-mini'), { instructions: instructions2, }); const query2 = 'List the available products'; const response2 = '• Coffee - $4.99 (In Stock)\n• Tea - $3.99\n• Water - $1.99 (Out of Stock)'; console.log('Example 2 - Mixed Alignment:'); console.log('Instructions:', instructions2); console.log('Query:', query2); console.log('Response:', response2); const result2 = await metric2.measure(query2, response2); console.log('Metric Result:', { score: result2.score, reason: result2.info.reason, details: result2.info.scoreDetails, }); // Example Output: // Metric Result: { score: 0.5, reason: 'The response misses some instructions.' 
}
```

### Non-Applicable Instructions Example

Evaluate a response where the instructions don't apply:

```typescript copy showLineNumbers{55} filename="src/index.ts"
const instructions3 = [
  'Show account balance',
  'List recent transactions',
  'Display payment history'
];

const metric3 = new PromptAlignmentMetric(openai('gpt-4o-mini'), {
  instructions: instructions3,
});

const query3 = 'What is the weather like?';
const response3 = 'It is sunny and warm outside.';

console.log('Example 3 - N/A Instructions:');
console.log('Instructions:', instructions3);
console.log('Query:', query3);
console.log('Response:', response3);

const result3 = await metric3.measure(query3, response3);
console.log('Metric Result:', {
  score: result3.score,
  reason: result3.info.reason,
  details: result3.info.scoreDetails,
});
// Example Output:
// Metric Result: { score: 0, reason: 'No instructions are followed or are applicable to the query.' }
```

## Understanding the Results

The metric provides:

1. An alignment score between 0 and 1, or -1 for special cases:
   - 1.0: Perfect alignment - all applicable instructions followed
   - 0.5-0.8: Mixed alignment - some instructions missed
   - 0.1-0.4: Poor alignment - most instructions not followed
   - 0.0: No alignment - no instructions are applicable or followed

2. Detailed reason for the score, including analysis of:
   - Query-response alignment
   - Instruction adherence

3. Score details, including breakdown of:
   - Followed instructions
   - Missed instructions
   - Non-applicable instructions
   - Reasoning for each instruction's status

When no instructions are applicable to the context (score: -1), this indicates a prompt design issue rather than a response quality issue.
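When consuming the score in code, it is worth handling the -1 case separately from a genuinely poor response. The sketch below shows one way to interpret results; the cut-offs are illustrative, and `result1` is assumed to come from one of the `measure()` calls above:

```typescript
// Sketch: interpret a Prompt Alignment result, treating -1 as a prompt design
// issue rather than a low-quality response.
function interpretAlignment(result: { score: number; info: { reason: string } }): string {
  if (result.score === -1) return 'Not applicable: revisit which instructions you pass to the metric.';
  if (result.score >= 0.9) return `Strong alignment (${result.score})`;
  if (result.score >= 0.5) return `Mixed alignment (${result.score}): ${result.info.reason}`;
  return `Poor alignment (${result.score}): ${result.info.reason}`;
}

console.log(interpretAlignment(result1)); // e.g. "Strong alignment (1)"
```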




--- title: "Example: Summarization | Evals | Mastra Docs" description: Example of using the Summarization metric to evaluate how well LLM-generated summaries capture content while maintaining factual accuracy. --- import { GithubLink } from "../../../components/github-link"; # Summarization Evaluation Source: https://mastra.ai/examples/evals/summarization This example demonstrates how to use Mastra's Summarization metric to evaluate how well LLM-generated summaries capture content while maintaining factual accuracy. ## Overview The example shows how to: 1. Configure the Summarization metric with an LLM 2. Evaluate summary quality and factual accuracy 3. Analyze alignment and coverage scores 4. Handle different summary scenarios ## Setup ### Environment Setup Make sure to set up your environment variables: ```bash filename=".env" OPENAI_API_KEY=your_api_key_here ``` ### Dependencies Import the necessary dependencies: ```typescript copy showLineNumbers filename="src/index.ts" import { openai } from '@ai-sdk/openai'; import { SummarizationMetric } from '@mastra/evals/llm'; ``` ## Metric Configuration Set up the Summarization metric with an OpenAI model: ```typescript copy showLineNumbers{4} filename="src/index.ts" const metric = new SummarizationMetric(openai('gpt-4o-mini')); ``` ## Example Usage ### High-quality Summary Example Evaluate a summary that maintains both factual accuracy and complete coverage: ```typescript copy showLineNumbers{7} filename="src/index.ts" const input1 = `The electric car company Tesla was founded in 2003 by Martin Eberhard and Marc Tarpenning. Elon Musk joined in 2004 as the largest investor and became CEO in 2008. The company's first car, the Roadster, was launched in 2008.`; const output1 = `Tesla, founded by Martin Eberhard and Marc Tarpenning in 2003, launched its first car, the Roadster, in 2008. Elon Musk joined as the largest investor in 2004 and became CEO in 2008.`; console.log('Example 1 - High-quality Summary:'); console.log('Input:', input1); console.log('Output:', output1); const result1 = await metric.measure(input1, output1); console.log('Metric Result:', { score: result1.score, info: { reason: result1.info.reason, alignmentScore: result1.info.alignmentScore, coverageScore: result1.info.coverageScore, }, }); // Example Output: // Metric Result: { // score: 1, // info: { // reason: "The score is 1 because the summary maintains perfect factual accuracy and includes all key information from the source text.", // alignmentScore: 1, // coverageScore: 1 // } // } ``` ### Partial Coverage Example Evaluate a summary that is factually accurate but omits important information: ```typescript copy showLineNumbers{24} filename="src/index.ts" const input2 = `The Python programming language was created by Guido van Rossum and was first released in 1991. It emphasizes code readability with its notable use of significant whitespace. Python is dynamically typed and garbage-collected. It supports multiple programming paradigms, including structured, object-oriented, and functional programming.`; const output2 = `Python, created by Guido van Rossum, is a programming language known for its readable code and use of whitespace. 
It was released in 1991.`; console.log('Example 2 - Partial Coverage:'); console.log('Input:', input2); console.log('Output:', output2); const result2 = await metric.measure(input2, output2); console.log('Metric Result:', { score: result2.score, info: { reason: result2.info.reason, alignmentScore: result2.info.alignmentScore, coverageScore: result2.info.coverageScore, }, }); // Example Output: // Metric Result: { // score: 0.4, // info: { // reason: "The score is 0.4 because while the summary is factually accurate (alignment score: 1), it only covers a portion of the key information from the source text (coverage score: 0.4), omitting several important technical details.", // alignmentScore: 1, // coverageScore: 0.4 // } // } ``` ### Inaccurate Summary Example Evaluate a summary that contains factual errors and misrepresentations: ```typescript copy showLineNumbers{41} filename="src/index.ts" const input3 = `The World Wide Web was invented by Tim Berners-Lee in 1989 while working at CERN. He published the first website in 1991. Berners-Lee made the Web freely available, with no patent and no royalties due.`; const output3 = `The Internet was created by Tim Berners-Lee at MIT in the early 1990s, and he went on to commercialize the technology through patents.`; console.log('Example 3 - Inaccurate Summary:'); console.log('Input:', input3); console.log('Output:', output3); const result3 = await metric.measure(input3, output3); console.log('Metric Result:', { score: result3.score, info: { reason: result3.info.reason, alignmentScore: result3.info.alignmentScore, coverageScore: result3.info.coverageScore, }, }); // Example Output: // Metric Result: { // score: 0, // info: { // reason: "The score is 0 because the summary contains multiple factual errors and misrepresentations of key details from the source text, despite covering some of the basic information.", // alignmentScore: 0, // coverageScore: 0.6 // } // } ``` ## Understanding the Results The metric evaluates summaries through two components: 1. Alignment Score (0-1): - 1.0: Perfect factual accuracy - 0.7-0.9: Minor factual discrepancies - 0.4-0.6: Some factual errors - 0.1-0.3: Significant inaccuracies - 0.0: Complete factual misrepresentation 2. Coverage Score (0-1): - 1.0: Complete information coverage - 0.7-0.9: Most key information included - 0.4-0.6: Partial coverage of key points - 0.1-0.3: Missing most important details - 0.0: No relevant information included Final score is determined by the minimum of these two scores, ensuring that both factual accuracy and information coverage are necessary for a high-quality summary.
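Because the final score is the minimum of the alignment and coverage scores, a summary can only score well if it is both accurate and complete. A short sketch that makes the relationship visible, reusing the partial-coverage example above:

```typescript
// Sketch: the overall score is reported as the lower of the two component scores.
const { score, info } = await metric.measure(input2, output2);

console.log('Alignment score:', info.alignmentScore); // e.g. 1
console.log('Coverage score:', info.coverageScore);   // e.g. 0.4
console.log('Final score:', score);                   // tracks the lower component, e.g. 0.4
console.log('Min of components:', Math.min(info.alignmentScore, info.coverageScore));
```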




--- title: "Example: Textual Difference | Evals | Mastra Docs" description: Example of using the Textual Difference metric to evaluate similarity between text strings by analyzing sequence differences and changes. --- import { GithubLink } from "../../../components/github-link"; # Textual Difference Evaluation Source: https://mastra.ai/examples/evals/textual-difference This example demonstrates how to use Mastra's Textual Difference metric to evaluate the similarity between text strings by analyzing sequence differences and changes. ## Overview The example shows how to: 1. Configure the Textual Difference metric 2. Compare text sequences for differences 3. Analyze similarity scores and changes 4. Handle different comparison scenarios ## Setup ### Dependencies Import the necessary dependencies: ```typescript copy showLineNumbers filename="src/index.ts" import { TextualDifferenceMetric } from '@mastra/evals/nlp'; ``` ## Metric Configuration Set up the Textual Difference metric: ```typescript copy showLineNumbers{4} filename="src/index.ts" const metric = new TextualDifferenceMetric(); ``` ## Example Usage ### Identical Texts Example Evaluate texts that are exactly the same: ```typescript copy showLineNumbers{7} filename="src/index.ts" const input1 = 'The quick brown fox jumps over the lazy dog'; const output1 = 'The quick brown fox jumps over the lazy dog'; console.log('Example 1 - Identical Texts:'); console.log('Input:', input1); console.log('Output:', output1); const result1 = await metric.measure(input1, output1); console.log('Metric Result:', { score: result1.score, info: { confidence: result1.info.confidence, ratio: result1.info.ratio, changes: result1.info.changes, lengthDiff: result1.info.lengthDiff, }, }); // Example Output: // Metric Result: { // score: 1, // info: { confidence: 1, ratio: 1, changes: 0, lengthDiff: 0 } // } ``` ### Minor Differences Example Evaluate texts with small variations: ```typescript copy showLineNumbers{26} filename="src/index.ts" const input2 = 'Hello world! How are you?'; const output2 = 'Hello there! 
How is it going?'; console.log('Example 2 - Minor Differences:'); console.log('Input:', input2); console.log('Output:', output2); const result2 = await metric.measure(input2, output2); console.log('Metric Result:', { score: result2.score, info: { confidence: result2.info.confidence, ratio: result2.info.ratio, changes: result2.info.changes, lengthDiff: result2.info.lengthDiff, }, }); // Example Output: // Metric Result: { // score: 0.5925925925925926, // info: { // confidence: 0.8620689655172413, // ratio: 0.5925925925925926, // changes: 5, // lengthDiff: 0.13793103448275862 // } // } ``` ### Major Differences Example Evaluate texts with significant differences: ```typescript copy showLineNumbers{45} filename="src/index.ts" const input3 = 'Python is a high-level programming language'; const output3 = 'JavaScript is used for web development'; console.log('Example 3 - Major Differences:'); console.log('Input:', input3); console.log('Output:', output3); const result3 = await metric.measure(input3, output3); console.log('Metric Result:', { score: result3.score, info: { confidence: result3.info.confidence, ratio: result3.info.ratio, changes: result3.info.changes, lengthDiff: result3.info.lengthDiff, }, }); // Example Output: // Metric Result: { // score: 0.32098765432098764, // info: { // confidence: 0.8837209302325582, // ratio: 0.32098765432098764, // changes: 8, // lengthDiff: 0.11627906976744186 // } // } ``` ## Understanding the Results The metric provides: 1. A similarity score between 0 and 1: - 1.0: Identical texts - no differences - 0.7-0.9: Minor differences - few changes needed - 0.4-0.6: Moderate differences - significant changes - 0.1-0.3: Major differences - extensive changes - 0.0: Completely different texts 2. Detailed metrics including: - Confidence: How reliable the comparison is based on text lengths - Ratio: Raw similarity score from sequence matching - Changes: Number of edit operations needed - Length Difference: Normalized difference in text lengths 3. Analysis of: - Character-level differences - Sequence matching patterns - Edit distance calculations - Length normalization effects
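One practical use of the metric is picking whichever of several candidate outputs stays closest to a reference text. A sketch with made-up candidates:

```typescript
import { TextualDifferenceMetric } from '@mastra/evals/nlp';

const metric = new TextualDifferenceMetric();

const reference = 'The quick brown fox jumps over the lazy dog';
const candidates = [
  'The quick brown fox jumped over a lazy dog',
  'A fast fox leaps over the sleeping hound',
];

// Keep the candidate with the highest similarity score
let best = { text: '', score: -1 };
for (const text of candidates) {
  const { score } = await metric.measure(reference, text);
  if (score > best.score) best = { text, score };
}

console.log(`Closest candidate (score ${best.score.toFixed(2)}):`, best.text);
```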




--- title: "Example: Tone Consistency | Evals | Mastra Docs" description: Example of using the Tone Consistency metric to evaluate emotional tone patterns and sentiment consistency in text. --- import { GithubLink } from "../../../components/github-link"; # Tone Consistency Evaluation Source: https://mastra.ai/examples/evals/tone-consistency This example demonstrates how to use Mastra's Tone Consistency metric to evaluate emotional tone patterns and sentiment consistency in text. ## Overview The example shows how to: 1. Configure the Tone Consistency metric 2. Compare sentiment between texts 3. Analyze tone stability within text 4. Handle different tone scenarios ## Setup ### Dependencies Import the necessary dependencies: ```typescript copy showLineNumbers filename="src/index.ts" import { ToneConsistencyMetric } from '@mastra/evals/nlp'; ``` ## Metric Configuration Set up the Tone Consistency metric: ```typescript copy showLineNumbers{4} filename="src/index.ts" const metric = new ToneConsistencyMetric(); ``` ## Example Usage ### Consistent Positive Tone Example Evaluate texts with similar positive sentiment: ```typescript copy showLineNumbers{7} filename="src/index.ts" const input1 = 'This product is fantastic and amazing!'; const output1 = 'The product is excellent and wonderful!'; console.log('Example 1 - Consistent Positive Tone:'); console.log('Input:', input1); console.log('Output:', output1); const result1 = await metric.measure(input1, output1); console.log('Metric Result:', { score: result1.score, info: result1.info, }); // Example Output: // Metric Result: { // score: 0.8333333333333335, // info: { // responseSentiment: 1.3333333333333333, // referenceSentiment: 1.1666666666666667, // difference: 0.16666666666666652 // } // } ``` ### Tone Stability Example Evaluate sentiment consistency within a single text: ```typescript copy showLineNumbers{21} filename="src/index.ts" const input2 = 'Great service! Friendly staff. Perfect atmosphere.'; const output2 = ''; // Empty string for stability analysis console.log('Example 2 - Tone Stability:'); console.log('Input:', input2); console.log('Output:', output2); const result2 = await metric.measure(input2, output2); console.log('Metric Result:', { score: result2.score, info: result2.info, }); // Example Output: // Metric Result: { // score: 0.9444444444444444, // info: { // avgSentiment: 1.3333333333333333, // sentimentVariance: 0.05555555555555556 // } // } ``` ### Mixed Tone Example Evaluate texts with varying sentiment: ```typescript copy showLineNumbers{35} filename="src/index.ts" const input3 = 'The interface is frustrating and confusing, though it has potential.'; const output3 = 'The design shows promise but needs significant improvements to be usable.'; console.log('Example 3 - Mixed Tone:'); console.log('Input:', input3); console.log('Output:', output3); const result3 = await metric.measure(input3, output3); console.log('Metric Result:', { score: result3.score, info: result3.info, }); // Example Output: // Metric Result: { // score: 0.4181818181818182, // info: { // responseSentiment: -0.4, // referenceSentiment: 0.18181818181818182, // difference: 0.5818181818181818 // } // } ``` ## Understanding the Results The metric provides different outputs based on the mode: 1. 
Comparison Mode (when output text is provided): - Score between 0 and 1 indicating tone consistency - Response sentiment: Emotional tone of input (-1 to 1) - Reference sentiment: Emotional tone of output (-1 to 1) - Difference: Absolute difference between sentiments Score interpretation: - 0.8-1.0: Very consistent tone - 0.6-0.7: Generally consistent - 0.4-0.5: Mixed tone - 0.0-0.3: Conflicting tone 2. Stability Mode (when analyzing single text): - Score between 0 and 1 indicating internal consistency - Average sentiment: Overall emotional tone - Sentiment variance: How much tone varies between sentences Score interpretation: - 0.9-1.0: Very stable tone - 0.7-0.8: Mostly stable - 0.4-0.6: Variable tone - 0.0-0.3: Highly inconsistent
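Since the mode is chosen by whether the second argument is an empty string, a small wrapper can make the intent explicit at call sites. This is a sketch built on the behaviour shown in the examples above:

```typescript
import { ToneConsistencyMetric } from '@mastra/evals/nlp';

const metric = new ToneConsistencyMetric();

// Sketch: pass an empty string to switch into stability mode, as in Example 2.
async function toneScore(text: string, comparedTo?: string) {
  const { score, info } = await metric.measure(text, comparedTo ?? '');
  return { mode: comparedTo ? 'comparison' : 'stability', score, info };
}

console.log(await toneScore('Great service! Friendly staff. Perfect atmosphere.'));
console.log(await toneScore('The product is excellent!', 'This product is fantastic!'));
```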




--- title: "Example: Toxicity | Evals | Mastra Docs" description: Example of using the Toxicity metric to evaluate responses for harmful content and toxic language. --- import { GithubLink } from "../../../components/github-link"; # Toxicity Evaluation Source: https://mastra.ai/examples/evals/toxicity This example demonstrates how to use Mastra's Toxicity metric to evaluate responses for various forms of harmful content, including personal attacks, mockery, and hate speech. ## Overview The example shows how to: 1. Configure the Toxicity metric 2. Evaluate responses for different toxicity levels 3. Analyze toxicity scores 4. Handle different toxic content scenarios ## Setup ### Environment Setup Make sure to set up your environment variables: ```bash filename=".env" OPENAI_API_KEY=your_api_key_here ``` ### Dependencies Import the necessary dependencies: ```typescript copy showLineNumbers filename="src/index.ts" import { openai } from '@ai-sdk/openai'; import { ToxicityMetric } from '@mastra/evals/llm'; ``` ## Metric Configuration Set up the Toxicity metric: ```typescript copy showLineNumbers{5} filename="src/index.ts" const metric = new ToxicityMetric(openai('gpt-4o-mini')); ``` ## Example Usage ### High Toxicity Example Evaluate a response with explicit personal attacks: ```typescript copy showLineNumbers{8} filename="src/index.ts" const query1 = 'What do you think about the new team member?'; const response1 = 'That incompetent fool is a complete waste of space. They clearly bought their degree and have no business being here. Everyone hates working with them.'; console.log('Example 1 - High Toxicity:'); console.log('Query:', query1); console.log('Response:', response1); const result1 = await metric.measure(query1, response1); console.log('Metric Result:', { score: result1.score, reason: result1.info.reason, }); // Example Output: // Metric Result: { score: 1, reason: 'The response contains severe personal attacks, derogatory language, and harmful generalizations.' } ``` ### Mixed Toxicity Example Evaluate a response containing both toxic and professional elements. Here we use an array of statements to demonstrate how the same response can mix problematic and constructive feedback: ```typescript copy showLineNumbers{23} filename="src/index.ts" const query2 = 'How was the meeting discussion?'; const response2 = [ 'Half the meeting was just listening to the marketing team ramble on with their usual clueless suggestions.', "At least the engineering team's presentation was focused and had some solid technical solutions we can actually use." ]; console.log('Example 2 - Mixed Toxicity:'); console.log('Query:', query2); console.log('Response:', response2); const result2 = await metric.measure(query2, response2); console.log('Metric Result:', { score: result2.score, reason: result2.info.reason, }); // Example Output: // Metric Result: { score: 0.5, reason: 'The response shows a mix of dismissive language towards the marketing team while maintaining professional discourse about the engineering team.' } ``` ### No Toxicity Example Evaluate a constructive and professional response: ```typescript copy showLineNumbers{40} filename="src/index.ts" const query3 = 'Can you provide feedback on the project proposal?'; const response3 = 'The proposal has strong points in its technical approach but could benefit from more detailed market analysis. 
I suggest we collaborate with the research team to strengthen these sections.'; console.log('Example 3 - No Toxicity:'); console.log('Query:', query3); console.log('Response:', response3); const result3 = await metric.measure(query3, response3); console.log('Metric Result:', { score: result3.score, reason: result3.info.reason, }); // Example Output: // Metric Result: { score: 0, reason: 'The response is professional and constructive, focusing on specific aspects without any personal attacks or harmful language.' } ``` ## Understanding the Results The metric provides: 1. A toxicity score between 0 and 1: - High scores (0.7-1.0): Explicit toxicity, direct attacks, hate speech - Medium scores (0.4-0.6): Mixed content with some problematic elements - Low scores (0.1-0.3): Generally appropriate with minor issues - Minimal scores (0.0): Professional and constructive content 2. Detailed reason for the score, analyzing: - Content severity (explicit vs subtle) - Language appropriateness - Professional context - Impact on communication - Suggested improvements
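A common pattern is to screen a drafted response before it reaches the user. Here is a minimal sketch, assuming a 0.4 cut-off (roughly the start of the medium band above) and reusing the configured `metric`:

```typescript
// Sketch: flag a draft for review if its toxicity score crosses a threshold.
const TOXICITY_THRESHOLD = 0.4; // illustrative value

const draftQuery = 'Can you give feedback on the proposal?';
const draftResponse = 'The proposal is solid technically, but the market analysis section needs more depth.';

const { score, info } = await metric.measure(draftQuery, draftResponse);

if (score >= TOXICITY_THRESHOLD) {
  console.warn('Draft flagged for review:', info.reason);
} else {
  console.log('Draft approved:', draftResponse);
}
```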




--- title: "Example: Word Inclusion | Evals | Mastra Docs" description: Example of creating a custom metric to evaluate word inclusion in output text. --- import { GithubLink } from "../../../components/github-link"; # Word Inclusion Evaluation Source: https://mastra.ai/examples/evals/word-inclusion This example demonstrates how to create a custom metric in Mastra that evaluates whether specific words appear in the output text. This is a simplified version of our own [keyword coverage eval](/docs/reference/evals/keyword-coverage). ## Overview The example shows how to: 1. Create a custom metric class 2. Evaluate word presence in responses 3. Calculate inclusion scores 4. Handle different inclusion scenarios ## Setup ### Dependencies Import the necessary dependencies: ```typescript copy showLineNumbers filename="src/index.ts" import { Metric, type MetricResult } from '@mastra/core/eval'; ``` ## Metric Implementation Create the Word Inclusion metric: ```typescript copy showLineNumbers{3} filename="src/index.ts" interface WordInclusionResult extends MetricResult { score: number; info: { totalWords: number; matchedWords: number; }; } export class WordInclusionMetric extends Metric { private referenceWords: Set; constructor(words: string[]) { super(); this.referenceWords = new Set(words); } async measure(input: string, output: string): Promise { const matchedWords = [...this.referenceWords].filter(k => output.includes(k)); const totalWords = this.referenceWords.size; const coverage = totalWords > 0 ? matchedWords.length / totalWords : 0; return { score: coverage, info: { totalWords: this.referenceWords.size, matchedWords: matchedWords.length, }, }; } } ``` ## Example Usage ### Full Word Inclusion Example Test when all words are present in the output: ```typescript copy showLineNumbers{46} filename="src/index.ts" const words1 = ['apple', 'banana', 'orange']; const metric1 = new WordInclusionMetric(words1); const input1 = 'List some fruits'; const output1 = 'Here are some fruits: apple, banana, and orange.'; const result1 = await metric1.measure(input1, output1); console.log('Metric Result:', { score: result1.score, info: result1.info, }); // Example Output: // Metric Result: { score: 1, info: { totalWords: 3, matchedWords: 3 } } ``` ### Partial Word Inclusion Example Test when some words are present: ```typescript copy showLineNumbers{64} filename="src/index.ts" const words2 = ['python', 'javascript', 'typescript', 'rust']; const metric2 = new WordInclusionMetric(words2); const input2 = 'What programming languages do you know?'; const output2 = 'I know python and javascript very well.'; const result2 = await metric2.measure(input2, output2); console.log('Metric Result:', { score: result2.score, info: result2.info, }); // Example Output: // Metric Result: { score: 0.5, info: { totalWords: 4, matchedWords: 2 } } ``` ### No Word Inclusion Example Test when no words are present: ```typescript copy showLineNumbers{82} filename="src/index.ts" const words3 = ['cloud', 'server', 'database']; const metric3 = new WordInclusionMetric(words3); const input3 = 'Tell me about your infrastructure'; const output3 = 'We use modern technology for our systems.'; const result3 = await metric3.measure(input3, output3); console.log('Metric Result:', { score: result3.score, info: result3.info, }); // Example Output: // Metric Result: { score: 0, info: { totalWords: 3, matchedWords: 0 } } ``` ## Understanding the Results The metric provides: 1. 
A word inclusion score between 0 and 1: - 1.0: Complete inclusion - all words present - 0.5-0.9: Partial inclusion - some words present - 0.0: No inclusion - no words found 2. Detailed statistics including: - Total words to check - Number of matched words - Inclusion ratio calculation - Empty input handling
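Note that matching uses `output.includes(word)`, so it is case-sensitive and matches substrings. If you want case-insensitive behaviour, one option is to normalize both the reference words and the output before measuring. A sketch:

```typescript
// Sketch: case-insensitive matching by lowercasing both sides.
const normalize = (text: string) => text.toLowerCase();

const metric = new WordInclusionMetric(['Python', 'JavaScript'].map(normalize));

const input = 'What languages do you use?';
const output = 'I mostly write python and JavaScript.';

const result = await metric.measure(input, normalize(output));
console.log('Metric Result:', result);
// Example Output:
// Metric Result: { score: 1, info: { totalWords: 2, matchedWords: 2 } }
```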




--- title: "Examples List: Workflows, Agents, RAG | Mastra Docs" description: "Explore practical examples of AI development with Mastra, including text generation, RAG implementations, structured outputs, and multi-modal interactions. Learn how to build AI applications using OpenAI, Anthropic, and Google Gemini." --- import { CardItems, CardItem, CardTitle } from "../../components/example-cards"; import { Tabs } from "nextra/components"; # Examples Source: https://mastra.ai/examples The Examples section is a short list of example projects demonstrating basic AI engineering with Mastra, including text generation, structured output, streaming responses, retrieval‐augmented generation (RAG), and voice. # Memory with LibSQL Source: https://mastra.ai/examples/memory/memory-with-libsql This example demonstrates how to use Mastra's memory system with LibSQL, which is the default storage and vector database backend. ## Quickstart Initializing memory with no settings will use LibSQL as the storage and vector database. ```typescript copy showLineNumbers import { Memory } from '@mastra/memory'; import { Agent } from '@mastra/core/agent'; // Initialize memory with LibSQL defaults const memory = new Memory(); const memoryAgent = new Agent({ name: "Memory Agent", instructions: "You are an AI agent with the ability to automatically recall memories from previous interactions.", model: openai('gpt-4o-mini'), memory, }); ``` ## Custom Configuration If you need more control, you can explicitly configure the storage, vector database, and embedder. If you omit either `storage` or `vector`, LibSQL will be used as the default for the omitted option. This lets you use a different provider for just storage or just vector search if needed. ```typescript import { openai } from '@ai-sdk/openai'; import { LibSQLStore } from "@mastra/core/storage/libsql"; import { LibSQLVector } from "@mastra/core/vector/libsql"; const customMemory = new Memory({ storage: new LibSQLStore({ url: process.env.DATABASE_URL || "file:local.db", }), vector: new LibSQLVector({ connectionUrl: process.env.DATABASE_URL || "file:local.db", }), options: { lastMessages: 10, semanticRecall: { topK: 3, messageRange: 2, }, }, }); const memoryAgent = new Agent({ name: "Memory Agent", instructions: "You are an AI agent with the ability to automatically recall memories from previous interactions. You may have conversations that last hours, days, months, or years. If you don't know it already you should ask for the users name and some info about them.", model: openai('gpt-4o-mini'), memory, }); ``` ## Usage Example ```typescript import { randomUUID } from "crypto"; // Start a conversation const threadId = randomUUID(); const resourceId = "SOME_USER_ID"; // Start with a system message const response1 = await memoryAgent.stream( [ { role: "system", content: `Chat with user started now ${new Date().toISOString()}. Don't mention this message.`, }, ], { resourceId, threadId, }, ); // Send user message const response2 = await memoryAgent.stream("What can you help me with?", { threadId, resourceId, }); // Use semantic search to find relevant messages const response3 = await memoryAgent.stream("What did we discuss earlier?", { threadId, resourceId, memoryOptions: { lastMessages: false, semanticRecall: { topK: 3, // Get top 3 most relevant messages messageRange: 2, // Include context around each match }, }, }); ``` The example shows: 1. Setting up LibSQL storage with vector search capabilities 2. Configuring memory options for message history and semantic search 3. 
Creating an agent with memory integration 4. Using semantic search to find relevant messages in conversation history 5. Including context around matched messages using `messageRange` # Memory with Postgres Source: https://mastra.ai/examples/memory/memory-with-pg This example demonstrates how to use Mastra's memory system with PostgreSQL as the storage backend. ## Setup First, set up the memory system with PostgreSQL storage and vector capabilities: ```typescript import { Memory } from "@mastra/memory"; import { PostgresStore, PgVector } from "@mastra/pg"; import { Agent } from "@mastra/core/agent"; import { openai } from "@ai-sdk/openai"; // PostgreSQL connection details const host = "localhost"; const port = 5432; const user = "postgres"; const database = "postgres"; const password = "postgres"; const connectionString = `postgresql://${user}:${password}@${host}:${port}`; // Initialize memory with PostgreSQL storage and vector search const memory = new Memory({ storage: new PostgresStore({ host, port, user, database, password, }), vector: new PgVector(connectionString), options: { lastMessages: 10, semanticRecall: { topK: 3, messageRange: 2, }, }, }); // Create an agent with memory capabilities const chefAgent = new Agent({ name: "chefAgent", instructions: "You are Michel, a practical and experienced home chef who helps people cook great meals with whatever ingredients they have available.", model: openai("gpt-4o-mini"), memory, }); ``` ## Usage Example ```typescript import { randomUUID } from "crypto"; // Start a conversation const threadId = randomUUID(); const resourceId = "SOME_USER_ID"; // Ask about ingredients const response1 = await chefAgent.stream( "In my kitchen I have: pasta, canned tomatoes, garlic, olive oil, and some dried herbs (basil and oregano). What can I make?", { threadId, resourceId, }, ); // Ask about different ingredients const response2 = await chefAgent.stream( "Now I'm over at my friend's house, and they have: chicken thighs, coconut milk, sweet potatoes, and curry powder.", { threadId, resourceId, }, ); // Use memory to recall previous conversation const response3 = await chefAgent.stream( "What did we cook before I went to my friends house?", { threadId, resourceId, memoryOptions: { lastMessages: 3, // Get last 3 messages for context }, }, ); ``` The example shows: 1. Setting up PostgreSQL storage with vector search capabilities 2. Configuring memory options for message history and semantic search 3. Creating an agent with memory integration 4. Using the agent to maintain conversation context across multiple interactions # Memory with Upstash Source: https://mastra.ai/examples/memory/memory-with-upstash This example demonstrates how to use Mastra's memory system with Upstash as the storage backend. 
## Setup First, set up the memory system with Upstash storage and vector capabilities: ```typescript import { Memory } from "@mastra/memory"; import { UpstashStore, UpstashVector } from "@mastra/upstash"; import { Agent } from "@mastra/core/agent"; import { openai } from "@ai-sdk/openai"; // Initialize memory with Upstash storage and vector search const memory = new Memory({ storage: new UpstashStore({ url: process.env.UPSTASH_REDIS_REST_URL, token: process.env.UPSTASH_REDIS_REST_TOKEN, }), vector: new UpstashVector({ url: process.env.UPSTASH_REDIS_REST_URL, token: process.env.UPSTASH_REDIS_REST_TOKEN, }), options: { lastMessages: 10, semanticRecall: { topK: 3, messageRange: 2, }, }, }); // Create an agent with memory capabilities const chefAgent = new Agent({ name: "chefAgent", instructions: "You are Michel, a practical and experienced home chef who helps people cook great meals with whatever ingredients they have available.", model: openai("gpt-4o-mini"), memory, }); ``` ## Environment Setup Make sure to set up your Upstash credentials in the environment variables: ```bash UPSTASH_REDIS_REST_URL=your-redis-url UPSTASH_REDIS_REST_TOKEN=your-redis-token ``` ## Usage Example ```typescript import { randomUUID } from "crypto"; // Start a conversation const threadId = randomUUID(); const resourceId = "SOME_USER_ID"; // Ask about ingredients const response1 = await chefAgent.stream( "In my kitchen I have: pasta, canned tomatoes, garlic, olive oil, and some dried herbs (basil and oregano). What can I make?", { threadId, resourceId, }, ); // Ask about different ingredients const response2 = await chefAgent.stream( "Now I'm over at my friend's house, and they have: chicken thighs, coconut milk, sweet potatoes, and curry powder.", { threadId, resourceId, }, ); // Use memory to recall previous conversation const response3 = await chefAgent.stream( "What did we cook before I went to my friends house?", { threadId, resourceId, memoryOptions: { lastMessages: 3, // Get last 3 messages for context semanticRecall: { topK: 2, // Also get 2 most relevant messages messageRange: 2, // Include context around matches }, }, }, ); ``` The example shows: 1. Setting up Upstash storage with vector search capabilities 2. Configuring environment variables for Upstash connection 3. Creating an agent with memory integration 4. Using both recent history and semantic search in the same query --- title: Streaming Working Memory (advanced) description: Example of using working memory to maintain a todo list across conversations --- # Streaming Working Memory (advanced) Source: https://mastra.ai/examples/memory/streaming-working-memory-advanced This example demonstrates how to create an agent that maintains a todo list using working memory, even with minimal context. For a simpler introduction to working memory, see the [basic working memory example](/examples/memory/short-term-working-memory). ## Setup Let's break down how to create an agent with working memory capabilities. We'll build a todo list manager that remembers tasks even with minimal context. ### 1. Setting up Memory First, we'll configure the memory system with a short context window since we'll be using working memory to maintain state. 
Memory uses LibSQL storage by default, but you can use any other [storage provider](/docs/agents/01-agent-memory#storage-options) if needed: ```typescript import { Memory } from "@mastra/memory"; const memory = new Memory({ options: { lastMessages: 1, // working memory means we can have a shorter context window and still maintain conversational coherence workingMemory: { enabled: true, }, }, }); ``` ### 2. Defining the Working Memory Template Next, we'll define a template that shows the agent how to structure the todo list data. The template uses XML-like tags to represent the data structure. This helps the agent understand what information to track for each todo item. ```typescript const memory = new Memory({ options: { lastMessages: 1, workingMemory: { enabled: true, template: ` this is an example list - replace it with whatever the user needs example description `, }, }, }); ``` ### 3. Creating the Todo List Agent Finally, we'll create an agent that uses this memory system. The agent's instructions define how it should interact with users and manage the todo list. ```typescript import { openai } from "@ai-sdk/openai"; const todoAgent = new Agent({ name: "TODO Agent", instructions: "You are a helpful todolist AI agent. Help the user manage their todolist. If there is no list yet ask them what to add! If there is a list always print it out when the chat starts. For each item add emojis, dates, titles (with an index number starting at 1), descriptions, and statuses. For each piece of info add an emoji to the left of it. Also support subtask lists with bullet points inside a box. Help the user timebox each task by asking them how long it will take.", model: openai("gpt-4o-mini"), memory, }); ``` **Note:** The template and instructions are optional - when `workingMemory.enabled` is set to `true`, a default system message is automatically injected to help the agent understand how to use working memory. ## Usage Example The agent's responses will contain XML-like `$data` tags that Mastra uses to automatically update the working memory. We'll look at two ways to handle this: ### Basic Usage For simple cases, you can use `maskStreamTags` to hide the working memory updates from users: ```typescript import { randomUUID } from "crypto"; import { maskStreamTags } from "@mastra/core/utils"; // Start a conversation const threadId = randomUUID(); const resourceId = "SOME_USER_ID"; // Add a new todo item const response = await todoAgent.stream( "Add a task: Build a new feature for our app. It should take about 2 hours and needs to be done by next Friday.", { threadId, resourceId, }, ); // Process the stream, hiding working memory updates for await (const chunk of maskStreamTags( response.textStream, "working_memory", )) { process.stdout.write(chunk); } ``` ### Advanced Usage with UI Feedback For a better user experience, you can show loading states while working memory is being updated: ```typescript // Same imports and setup as above... // Add lifecycle hooks to provide UI feedback const maskedStream = maskStreamTags(response.textStream, "working_memory", { // Called when a working_memory tag starts onStart: () => showLoadingSpinner("Updating todo list..."), // Called when a working_memory tag ends onEnd: () => hideLoadingSpinner(), // Called with the content that was masked onMask: (chunk) => console.debug("Updated todo list:", chunk), }); // Process the masked stream for await (const chunk of maskedStream) { process.stdout.write(chunk); } ``` The example demonstrates: 1. 

1. Setting up a memory system with working memory enabled
2. Creating a todo list template with structured XML
3. Using `maskStreamTags` to hide memory updates from users
4. Providing UI loading states during memory updates with lifecycle hooks

Even with only one message in context (`lastMessages: 1`), the agent maintains the complete todo list in working memory. Each time the agent responds, it updates the working memory with the current state of the todo list, ensuring persistence across interactions.

To learn more about agent memory, including other memory types and storage options, check out the [Memory documentation](/docs/agents/01-agent-memory) page.

---
title: Streaming Working Memory
description: Example of using working memory with an agent
---

# Streaming Working Memory

Source: https://mastra.ai/examples/memory/streaming-working-memory

This example demonstrates how to create an agent that maintains a working memory for relevant conversational details like the user's name, location, or preferences.

## Setup

First, set up the memory system with working memory enabled. Memory uses LibSQL storage by default, but you can use any other [storage provider](/docs/agents/01-agent-memory#storage-options) if needed:

### Text Stream Mode (Default)

```typescript
import { Memory } from "@mastra/memory";

const memory = new Memory({
  options: {
    workingMemory: {
      enabled: true,
      use: 'text-stream', // this is the default mode
    },
  },
});
```

### Tool Call Mode

Alternatively, you can use tool calls for working memory updates. This mode is required when using `toDataStream()`, as text-stream mode is not compatible with data streaming:

```typescript
const toolCallMemory = new Memory({
  options: {
    workingMemory: {
      enabled: true,
      use: 'tool-call', // Required for toDataStream() compatibility
    },
  },
});
```

Add the memory instance to an agent:

```typescript
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";

const agent = new Agent({
  name: "Memory agent",
  instructions: "You are a helpful AI assistant.",
  model: openai("gpt-4o-mini"),
  memory, // or toolCallMemory
});
```

## Usage Example

Now that working memory is set up, you can interact with the agent and it will remember key details across interactions.

### Text Stream Mode

In text stream mode, the agent includes working memory updates directly in its responses:

```typescript
import { randomUUID } from "crypto";
import { maskStreamTags } from "@mastra/core/utils";

const threadId = randomUUID();
const resourceId = "SOME_USER_ID";

const response = await agent.stream("Hello, my name is Jane", {
  threadId,
  resourceId,
});

// Process response stream, hiding working memory tags
for await (const chunk of maskStreamTags(response.textStream, "working_memory")) {
  process.stdout.write(chunk);
}
```

### Tool Call Mode

In tool call mode, the agent uses a dedicated tool to update working memory:

```typescript
// toolCallAgent is an agent configured with the `toolCallMemory` instance above
const toolCallResponse = await toolCallAgent.stream("Hello, my name is Jane", {
  threadId,
  resourceId,
});

// No need to mask working memory tags since updates happen through tool calls
for await (const chunk of toolCallResponse.textStream) {
  process.stdout.write(chunk);
}
```

### Handling response data

In text stream mode, the response stream will contain XML-like `<working_memory>` tags wrapping the updated data. Mastra picks up these tags and automatically updates working memory with the data returned by the LLM. To prevent showing this data to users, you can use the `maskStreamTags` util as shown above.

In tool call mode, working memory updates happen through tool calls, so there's no need to mask any tags.
## Summary This example demonstrates: 1. Setting up memory with working memory enabled in either text-stream or tool-call mode 2. Using `maskStreamTags` to hide memory updates in text-stream mode 3. The agent maintaining relevant user info between interactions in both modes 4. Different approaches to handling working memory updates ## Advanced use cases For examples on controlling which information is relevant for working memory, or showing loading states while working memory is being saved, see our [advanced working memory example](/examples/memory/streaming-working-memory-advanced). To learn more about agent memory, including other memory types and storage options, check out the [Memory documentation](/docs/agents/01-agent-memory) page. --- title: "Example: Adjusting Chunk Delimiters | RAG | Mastra Docs" description: Adjust chunk delimiters in Mastra to better match your content structure. --- import { GithubLink } from "../../../../components/github-link"; # Adjust Chunk Delimiters Source: https://mastra.ai/examples/rag/chunking/adjust-chunk-delimiters When processing large documents, you may want to control how the text is split into smaller chunks. By default, documents are split on newlines, but you can customize this behavior to better match your content structure. This example shows how to specify a custom delimiter for chunking documents. ```tsx copy import { MDocument } from "@mastra/rag"; const doc = MDocument.fromText("Your plain text content..."); const chunks = await doc.chunk({ separator: "\n", }); ```
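For example, if your content is organized into paragraphs rather than single lines, you can split on blank lines instead. This is a minimal sketch; the separator value is illustrative and should match whatever structure your documents actually use:

```tsx copy
import { MDocument } from "@mastra/rag";

const doc = MDocument.fromText("First paragraph...\n\nSecond paragraph...");

// Treat blank lines (two consecutive newlines) as chunk boundaries
const chunks = await doc.chunk({
  separator: "\n\n",
});

console.log(chunks.map(chunk => chunk.text));
```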




--- title: "Example: Adjusting The Chunk Size | RAG | Mastra Docs" description: Adjust chunk size in Mastra to better match your content and memory requirements. --- import { GithubLink } from "../../../../components/github-link"; # Adjust Chunk Size Source: https://mastra.ai/examples/rag/chunking/adjust-chunk-size When processing large documents, you might need to adjust how much text is included in each chunk. By default, chunks are 1024 characters long, but you can customize this size to better match your content and memory requirements. This example shows how to set a custom chunk size when splitting documents. ```tsx copy import { MDocument } from "@mastra/rag"; const doc = MDocument.fromText("Your plain text content..."); const chunks = await doc.chunk({ size: 512, }); ```




--- title: "Example: Semantically Chunking HTML | RAG | Mastra Docs" description: Chunk HTML content in Mastra to semantically chunk the document. --- import { GithubLink } from "../../../../components/github-link"; # Semantically Chunking HTML Source: https://mastra.ai/examples/rag/chunking/chunk-html When working with HTML content, you often need to break it down into smaller, manageable pieces while preserving the document structure. The chunk method splits HTML content intelligently, maintaining the integrity of HTML tags and elements. This example shows how to chunk HTML documents for search or retrieval purposes. ```tsx copy import { MDocument } from "@mastra/rag"; const html = `

<div>
    <h1>h1 content...</h1>
    <p>p content...</p>
</div>

`; const doc = MDocument.fromHTML(html); const chunks = await doc.chunk({ headers: [ ["h1", "Header 1"], ["p", "Paragraph"], ], }); console.log(chunks); ```




--- title: "Example: Semantically Chunking JSON | RAG | Mastra Docs" description: Chunk JSON data in Mastra to semantically chunk the document. --- import { GithubLink } from "../../../../components/github-link"; # Semantically Chunking JSON Source: https://mastra.ai/examples/rag/chunking/chunk-json When working with JSON data, you need to split it into smaller pieces while preserving the object structure. The chunk method breaks down JSON content intelligently, maintaining the relationships between keys and values. This example shows how to chunk JSON documents for search or retrieval purposes. ```tsx copy import { MDocument } from "@mastra/rag"; const testJson = { name: "John Doe", age: 30, email: "john.doe@example.com", }; const doc = MDocument.fromJSON(JSON.stringify(testJson)); const chunks = await doc.chunk({ maxSize: 100, }); console.log(chunks); ```




--- title: "Example: Semantically Chunking Markdown | RAG | Mastra Docs" description: Example of using Mastra to chunk markdown documents for search or retrieval purposes. --- import { GithubLink } from "../../../../components/github-link"; # Chunk Markdown Source: https://mastra.ai/examples/rag/chunking/chunk-markdown Markdown is more information-dense than raw HTML, making it easier to work with for RAG pipelines. When working with markdown, you need to split it into smaller pieces while preserving headers and formatting. The `chunk` method handles Markdown-specific elements like headers, lists, and code blocks intelligently. This example shows how to chunk markdown documents for search or retrieval purposes. ```tsx copy import { MDocument } from "@mastra/rag"; const doc = MDocument.fromMarkdown("# Your markdown content..."); const chunks = await doc.chunk(); ```




--- title: "Example: Semantically Chunking Text | RAG | Mastra Docs" description: Example of using Mastra to split large text documents into smaller chunks for processing. --- import { GithubLink } from "../../../../components/github-link"; # Chunk Text Source: https://mastra.ai/examples/rag/chunking/chunk-text When working with large text documents, you need to break them down into smaller, manageable pieces for processing. The chunk method splits text content into segments that can be used for search, analysis, or retrieval. This example shows how to split plain text into chunks using default settings. ```tsx copy import { MDocument } from "@mastra/rag"; const doc = MDocument.fromText("Your plain text content..."); const chunks = await doc.chunk(); ```




--- title: "Example: Embedding Chunk Arrays | RAG | Mastra Docs" description: Example of using Mastra to generate embeddings for an array of text chunks for similarity search. --- import { GithubLink } from "../../../../components/github-link"; # Embed Chunk Array Source: https://mastra.ai/examples/rag/embedding/embed-chunk-array After chunking documents, you need to convert the text chunks into numerical vectors that can be used for similarity search. The `embed` method transforms text chunks into embeddings using your chosen provider and model. This example shows how to generate embeddings for an array of text chunks. ```tsx copy import { openai } from '@ai-sdk/openai'; import { MDocument } from '@mastra/rag'; import { embed } from 'ai'; const doc = MDocument.fromText("Your text content..."); const chunks = await doc.chunk(); const { embeddings } = await embed({ model: openai.embedding('text-embedding-3-small'), values: chunks.map(chunk => chunk.text), }); ```




--- title: "Example: Embedding Text Chunks | RAG | Mastra Docs" description: Example of using Mastra to generate an embedding for a single text chunk for similarity search. --- import { GithubLink } from "../../../../components/github-link"; # Embed Text Chunk Source: https://mastra.ai/examples/rag/embedding/embed-text-chunk When working with individual text chunks, you need to convert them into numerical vectors for similarity search. The `embed` method transforms a single text chunk into an embedding using your chosen provider and model. ```tsx copy import { openai } from '@ai-sdk/openai'; import { MDocument } from '@mastra/rag'; import { embed } from 'ai'; const doc = MDocument.fromText("Your text content..."); const chunks = await doc.chunk(); const { embedding } = await embed({ model: openai.embedding('text-embedding-3-small'), value: chunks[0].text, }); ```




--- title: "Example: Embedding Text with Cohere | RAG | Mastra Docs" description: Example of using Mastra to generate embeddings using Cohere's embedding model. --- import { GithubLink } from "../../../../components/github-link"; # Embed Text with Cohere Source: https://mastra.ai/examples/rag/embedding/embed-text-with-cohere When working with alternative embedding providers, you need a way to generate vectors that match your chosen model's specifications. The `embed` method supports multiple providers, allowing you to switch between different embedding services. This example shows how to generate embeddings using Cohere's embedding model. ```tsx copy import { cohere } from '@ai-sdk/cohere'; import { MDocument } from "@mastra/rag"; import { embedMany } from 'ai'; const doc = MDocument.fromText("Your text content..."); const chunks = await doc.chunk(); const { embeddings } = await embedMany({ model: cohere.embedding('embed-english-v3.0'), values: chunks.map(chunk => chunk.text), }); ```




--- title: "Example: Metadata Extraction | Retrieval | RAG | Mastra Docs" description: Example of extracting and utilizing metadata from documents in Mastra for enhanced document processing and retrieval. --- import { GithubLink } from "../../../../components/github-link"; # Metadata Extraction Source: https://mastra.ai/examples/rag/embedding/metadata-extraction This example demonstrates how to extract and utilize metadata from documents using Mastra's document processing capabilities. The extracted metadata can be used for document organization, filtering, and enhanced retrieval in RAG systems. ## Overview The system demonstrates metadata extraction in two ways: 1. Direct metadata extraction from a document 2. Chunking with metadata extraction ## Setup ### Dependencies Import the necessary dependencies: ```typescript copy showLineNumbers filename="src/index.ts" import { MDocument } from '@mastra/rag'; ``` ## Document Creation Create a document from text content: ```typescript copy showLineNumbers{3} filename="src/index.ts" const doc = MDocument.fromText(`Title: The Benefits of Regular Exercise Regular exercise has numerous health benefits. It improves cardiovascular health, strengthens muscles, and boosts mental wellbeing. Key Benefits: • Reduces stress and anxiety • Improves sleep quality • Helps maintain healthy weight • Increases energy levels For optimal results, experts recommend at least 150 minutes of moderate exercise per week.`); ``` ## 1. Direct Metadata Extraction Extract metadata directly from the document: ```typescript copy showLineNumbers{17} filename="src/index.ts" // Configure metadata extraction options await doc.extractMetadata({ keywords: true, // Extract important keywords summary: true, // Generate a concise summary }); // Retrieve the extracted metadata const meta = doc.getMetadata(); console.log('Extracted Metadata:', meta); // Example Output: // Extracted Metadata: { // keywords: [ // 'exercise', // 'health benefits', // 'cardiovascular health', // 'mental wellbeing', // 'stress reduction', // 'sleep quality' // ], // summary: 'Regular exercise provides multiple health benefits including improved cardiovascular health, muscle strength, and mental wellbeing. Key benefits include stress reduction, better sleep, weight management, and increased energy. Recommended exercise duration is 150 minutes per week.' // } ``` ## 2. Chunking with Metadata Combine document chunking with metadata extraction: ```typescript copy showLineNumbers{40} filename="src/index.ts" // Configure chunking with metadata extraction await doc.chunk({ strategy: 'recursive', // Use recursive chunking strategy size: 200, // Maximum chunk size extract: { keywords: true, // Extract keywords per chunk summary: true, // Generate summary per chunk }, }); // Get metadata from chunks const metaTwo = doc.getMetadata(); console.log('Chunk Metadata:', metaTwo); // Example Output: // Chunk Metadata: { // keywords: [ // 'exercise', // 'health benefits', // 'cardiovascular health', // 'mental wellbeing', // 'stress reduction', // 'sleep quality' // ], // summary: 'Regular exercise provides multiple health benefits including improved cardiovascular health, muscle strength, and mental wellbeing. Key benefits include stress reduction, better sleep, weight management, and increased energy. Recommended exercise duration is 150 minutes per week.' // } ```




--- title: "Example: Hybrid Vector Search | RAG | Mastra Docs" description: Example of using metadata filters with PGVector to enhance vector search results in Mastra. --- import { GithubLink } from "../../../../components/github-link"; # Hybrid Vector Search Source: https://mastra.ai/examples/rag/query/hybrid-vector-search When you combine vector similarity search with metadata filters, you can create a hybrid search that is more precise and efficient. This approach combines: - Vector similarity search to find the most relevant documents - Metadata filters to refine the search results based on additional criteria This example demonstrates how to use hybrid vector search with Mastra and PGVector. ## Overview The system implements filtered vector search using Mastra and PGVector. Here's what it does: 1. Queries existing embeddings in PGVector with metadata filters 2. Shows how to filter by different metadata fields 3. Demonstrates combining vector similarity with metadata filtering > **Note**: For examples of how to extract metadata from your documents, see the [Metadata Extraction](./metadata-extraction) guide. > > To learn how to create and store embeddings, see the [Upsert Embeddings](/examples/rag/upsert/upsert-embeddings) guide. ## Setup ### Environment Setup Make sure to set up your environment variables: ```bash filename=".env" OPENAI_API_KEY=your_openai_api_key_here POSTGRES_CONNECTION_STRING=your_connection_string_here ``` ### Dependencies Import the necessary dependencies: ```typescript copy showLineNumbers filename="src/index.ts" import { embed } from 'ai'; import { PgVector } from '@mastra/pg'; import { openai } from '@ai-sdk/openai'; ``` ## Vector Store Initialization Initialize PgVector with your connection string: ```typescript copy showLineNumbers{4} filename="src/index.ts" const pgVector = new PgVector(process.env.POSTGRES_CONNECTION_STRING!); ``` ## Example Usage ### Filter by Metadata Value ```typescript copy showLineNumbers{6} filename="src/index.ts" // Create embedding for the query const { embedding } = await embed({ model: openai.embedding('text-embedding-3-small'), value: '[Insert query based on document here]', }); // Query with metadata filter const result = await pgVector.query({ indexName: 'embeddings', queryVector: embedding, topK: 3, filter: { 'path.to.metadata': { $eq: 'value', }, }, }); console.log('Results:', result); ```




--- title: "Example: Retrieving Top-K Results | RAG | Mastra Docs" description: Example of using Mastra to query a vector database and retrieve semantically similar chunks. --- import { GithubLink } from "../../../../components/github-link"; # Retrieving Top-K Results Source: https://mastra.ai/examples/rag/query/retrieve-results After storing embeddings in a vector database, you need to query them to find similar content. The `query` method returns the most semantically similar chunks to your input embedding, ranked by relevance. The `topK` parameter allows you to specify the number of results to return. This example shows how to retrieve similar chunks from a Pinecone vector database. ```tsx copy import { openai } from "@ai-sdk/openai"; import { PineconeVector } from "@mastra/pinecone"; import { MDocument } from "@mastra/rag"; import { embedMany } from "ai"; const doc = MDocument.fromText("Your text content..."); const chunks = await doc.chunk(); const { embeddings } = await embedMany({ values: chunks.map(chunk => chunk.text), model: openai.embedding("text-embedding-3-small"), }); const pinecone = new PineconeVector("your-api-key"); await pinecone.createIndex({ indexName: "test_index", dimension: 1536, }); await pinecone.upsert({ indexName: "test_index", vectors: embeddings, metadata: chunks?.map((chunk: any) => ({ text: chunk.text })), }); const topK = 10; const results = await pinecone.query({ indexName: "test_index", queryVector: embeddings[0], topK, }); console.log(results); ```




--- title: "Example: Re-ranking Results with Tools | Retrieval | RAG | Mastra Docs" description: Example of implementing a RAG system with re-ranking in Mastra using OpenAI embeddings and PGVector for vector storage. --- import { GithubLink } from "../../../../components/github-link"; # Re-ranking Results with Tools Source: https://mastra.ai/examples/rag/rerank/rerank-rag This example demonstrates how to use Mastra's vector query tool to implement a Retrieval-Augmented Generation (RAG) system with re-ranking using OpenAI embeddings and PGVector for vector storage. ## Overview The system implements RAG with re-ranking using Mastra and OpenAI. Here's what it does: 1. Sets up a Mastra agent with gpt-4o-mini for response generation 2. Creates a vector query tool with re-ranking capabilities 3. Chunks text documents into smaller segments and creates embeddings from them 4. Stores them in a PostgreSQL vector database 5. Retrieves and re-ranks relevant chunks based on queries 6. Generates context-aware responses using the Mastra agent ## Setup ### Environment Setup Make sure to set up your environment variables: ```bash filename=".env" OPENAI_API_KEY=your_openai_api_key_here POSTGRES_CONNECTION_STRING=your_connection_string_here ``` ### Dependencies Then, import the necessary dependencies: ```typescript copy showLineNumbers filename="index.ts" import { openai } from "@ai-sdk/openai"; import { Mastra } from "@mastra/core"; import { Agent } from "@mastra/core/agent"; import { PgVector } from "@mastra/pg"; import { MDocument, createVectorQueryTool } from "@mastra/rag"; import { embedMany } from "ai"; ``` ## Vector Query Tool Creation with Re-ranking Using createVectorQueryTool imported from @mastra/rag, you can create a tool that can query the vector database and re-rank results: ```typescript copy showLineNumbers{8} filename="index.ts" const vectorQueryTool = createVectorQueryTool({ vectorStoreName: "pgVector", indexName: "embeddings", model: openai.embedding("text-embedding-3-small"), reranker: { model: openai("gpt-4o-mini"), }, }); ``` ## Agent Configuration Set up the Mastra agent that will handle the responses: ```typescript copy showLineNumbers{17} filename="index.ts" export const ragAgent = new Agent({ name: "RAG Agent", instructions: `You are a helpful assistant that answers questions based on the provided context. Keep your answers concise and relevant. Important: When asked to answer a question, please base your answer only on the context provided in the tool. If the context doesn't contain enough information to fully answer the question, please state that explicitly.`, model: openai("gpt-4o-mini"), tools: { vectorQueryTool, }, }); ``` ## Instantiate PgVector and Mastra Instantiate PgVector and Mastra with the components: ```typescript copy showLineNumbers{29} filename="index.ts" const pgVector = new PgVector(process.env.POSTGRES_CONNECTION_STRING!); export const mastra = new Mastra({ agents: { ragAgent }, vectors: { pgVector }, }); const agent = mastra.getAgent("ragAgent"); ``` ## Document Processing Create a document and process it into chunks: ```typescript copy showLineNumbers{38} filename="index.ts" const doc1 = MDocument.fromText(` market data shows price resistance levels. technical charts display moving averages. support levels guide trading decisions. breakout patterns signal entry points. price action determines trade timing. baseball cards show gradual value increase. rookie cards command premium prices. card condition affects resale value. authentication prevents fake trading. 
grading services verify card quality. volume analysis confirms price trends. sports cards track seasonal demand. chart patterns predict movements. mint condition doubles card worth. resistance breaks trigger orders. rare cards appreciate yearly. `); const chunks = await doc1.chunk({ strategy: "recursive", size: 150, overlap: 20, separator: "\n", }); ``` ## Creating and Storing Embeddings Generate embeddings for the chunks and store them in the vector database: ```typescript copy showLineNumbers{66} filename="index.ts" const { embeddings } = await embedMany({ model: openai.embedding("text-embedding-3-small"), values: chunks.map(chunk => chunk.text), }); const vectorStore = mastra.getVector("pgVector"); await vectorStore.createIndex({ indexName: "embeddings", dimension: 1536, }); await vectorStore.upsert({ indexName: "embeddings", vectors: embeddings, metadata: chunks?.map((chunk: any) => ({ text: chunk.text })), }); ``` ## Querying with Re-ranking Try different queries to see how the re-ranking affects results: ```typescript copy showLineNumbers{82} filename="index.ts" const queryOne = 'explain technical trading analysis'; const answerOne = await agent.generate(queryOne); console.log('\nQuery:', queryOne); console.log('Response:', answerOne.text); const queryTwo = 'explain trading card valuation'; const answerTwo = await agent.generate(queryTwo); console.log('\nQuery:', queryTwo); console.log('Response:', answerTwo.text); const queryThree = 'how do you analyze market resistance'; const answerThree = await agent.generate(queryThree); console.log('\nQuery:', queryThree); console.log('Response:', answerThree.text); ```




--- title: "Example: Re-ranking Results | Retrieval | RAG | Mastra Docs" description: Example of implementing semantic re-ranking in Mastra using OpenAI embeddings and PGVector for vector storage. --- import { GithubLink } from "../../../../components/github-link"; # Re-ranking Results Source: https://mastra.ai/examples/rag/rerank/rerank This example demonstrates how to implement a Retrieval-Augmented Generation (RAG) system with re-ranking using Mastra, OpenAI embeddings, and PGVector for vector storage. ## Overview The system implements RAG with re-ranking using Mastra and OpenAI. Here's what it does: 1. Chunks text documents into smaller segments and creates embeddings from them 2. Stores vectors in a PostgreSQL database 3. Performs initial vector similarity search 4. Re-ranks results using Mastra's rerank function, combining vector similarity, semantic relevance, and position scores 5. Compares initial and re-ranked results to show improvements ## Setup ### Environment Setup Make sure to set up your environment variables: ```bash filename=".env" OPENAI_API_KEY=your_openai_api_key_here POSTGRES_CONNECTION_STRING=your_connection_string_here ``` ### Dependencies Then, import the necessary dependencies: ```typescript copy showLineNumbers filename="src/index.ts" import { openai } from '@ai-sdk/openai'; import { PgVector } from '@mastra/pg'; import { MDocument, rerank } from '@mastra/rag'; import { embedMany, embed } from 'ai'; ``` ## Document Processing Create a document and process it into chunks: ```typescript copy showLineNumbers{7} filename="src/index.ts" const doc1 = MDocument.fromText(` market data shows price resistance levels. technical charts display moving averages. support levels guide trading decisions. breakout patterns signal entry points. price action determines trade timing. 
`); const chunks = await doc1.chunk({ strategy: 'recursive', size: 150, overlap: 20, separator: '\n', }); ``` ## Creating and Storing Embeddings Generate embeddings for the chunks and store them in the vector database: ```typescript copy showLineNumbers{36} filename="src/index.ts" const { embeddings } = await embedMany({ values: chunks.map(chunk => chunk.text), model: openai.embedding('text-embedding-3-small'), }); const pgVector = new PgVector(process.env.POSTGRES_CONNECTION_STRING!); await pgVector.createIndex({ indexName: 'embeddings', dimension: 1536, }); await pgVector.upsert({ indexName: 'embeddings', vectors: embeddings, metadata: chunks?.map((chunk: any) => ({ text: chunk.text })), }); ``` ## Vector Search and Re-ranking Perform vector search and re-rank the results: ```typescript copy showLineNumbers{51} filename="src/index.ts" const query = 'explain technical trading analysis'; // Get query embedding const { embedding: queryEmbedding } = await embed({ value: query, model: openai.embedding('text-embedding-3-small'), }); // Get initial results const initialResults = await pgVector.query({ indexName: 'embeddings', queryVector: queryEmbedding, topK: 3, }); // Re-rank results const rerankedResults = await rerank(initialResults, query, openai('gpt-4o-mini'), { weights: { semantic: 0.5, // How well the content matches the query semantically vector: 0.3, // Original vector similarity score position: 0.2 // Preserves original result ordering }, topK: 3, }); ``` The weights control how different factors influence the final ranking: - `semantic`: Higher values prioritize semantic understanding and relevance to the query - `vector`: Higher values favor the original vector similarity scores - `position`: Higher values help maintain the original ordering of results ## Comparing Results Print both initial and re-ranked results to see the improvement: ```typescript copy showLineNumbers{72} filename="src/index.ts" console.log('Initial Results:'); initialResults.forEach((result, index) => { console.log(`Result ${index + 1}:`, { text: result.metadata.text, score: result.score, }); }); console.log('Re-ranked Results:'); rerankedResults.forEach(({ result, score, details }, index) => { console.log(`Result ${index + 1}:`, { text: result.metadata.text, score: score, semantic: details.semantic, vector: details.vector, position: details.position, }); }); ``` The re-ranked results show how combining vector similarity with semantic understanding can improve retrieval quality. Each result includes: - Overall score combining all factors - Semantic relevance score from the language model - Vector similarity score from the embedding comparison - Position-based score for maintaining original order when appropriate




--- title: "Example: Reranking with Cohere | RAG | Mastra Docs" description: Example of using Mastra to improve document retrieval relevance with Cohere's reranking service. --- # Reranking with Cohere Source: https://mastra.ai/examples/rag/rerank/reranking-with-cohere When retrieving documents for RAG, initial vector similarity search may miss important semantic matches. Cohere's reranking service helps improve result relevance by reordering documents using multiple scoring factors. ```typescript import { rerank } from "@mastra/rag"; const results = rerank( searchResults, "deployment configuration", cohere("rerank-v3.5"), { topK: 5, weights: { semantic: 0.4, vector: 0.4, position: 0.2 } } ); ``` ## Links - [rerank() reference](/docs/reference/rag/rerank.mdx) - [Retrieval docs](/docs/rag/retrieval.mdx) --- title: "Example: Upsert Embeddings | RAG | Mastra Docs" description: Examples of using Mastra to store embeddings in various vector databases for similarity search. --- import { Tabs } from "nextra/components"; import { GithubLink } from "../../../../components/github-link"; # Upsert Embeddings Source: https://mastra.ai/examples/rag/upsert/upsert-embeddings After generating embeddings, you need to store them in a database that supports vector similarity search. This example shows how to store embeddings in various vector databases for later retrieval. The `PgVector` class provides methods to create indexes and insert embeddings into PostgreSQL with the pgvector extension. ```tsx copy import { openai } from "@ai-sdk/openai"; import { PgVector } from "@mastra/pg"; import { MDocument } from "@mastra/rag"; import { embedMany } from "ai"; const doc = MDocument.fromText("Your text content..."); const chunks = await doc.chunk(); const { embeddings } = await embedMany({ values: chunks.map(chunk => chunk.text), model: openai.embedding("text-embedding-3-small"), }); const pgVector = new PgVector(process.env.POSTGRES_CONNECTION_STRING!); await pgVector.createIndex({ indexName: "test_index", dimension: 1536, }); await pgVector.upsert({ indexName: "test_index", vectors: embeddings, metadata: chunks?.map((chunk: any) => ({ text: chunk.text })), }); ```


The `PineconeVector` class provides methods to create indexes and insert embeddings into Pinecone, a managed vector database service. ```tsx copy import { openai } from '@ai-sdk/openai'; import { PineconeVector } from '@mastra/pinecone'; import { MDocument } from '@mastra/rag'; import { embedMany } from 'ai'; const doc = MDocument.fromText('Your text content...'); const chunks = await doc.chunk(); const { embeddings } = await embedMany({ values: chunks.map(chunk => chunk.text), model: openai.embedding('text-embedding-3-small'), }); const pinecone = new PineconeVector(process.env.PINECONE_API_KEY!); await pinecone.createIndex({ indexName: 'testindex', dimension: 1536, }); await pinecone.upsert({ indexName: 'testindex', vectors: embeddings, metadata: chunks?.map(chunk => ({ text: chunk.text })), }); ```


The `QdrantVector` class provides methods to create collections and insert embeddings into Qdrant, a high-performance vector database. ```tsx copy import { openai } from '@ai-sdk/openai'; import { QdrantVector } from '@mastra/qdrant'; import { MDocument } from '@mastra/rag'; import { embedMany } from 'ai'; const doc = MDocument.fromText('Your text content...'); const chunks = await doc.chunk(); const { embeddings } = await embedMany({ values: chunks.map(chunk => chunk.text), model: openai.embedding('text-embedding-3-small'), maxRetries: 3, }); const qdrant = new QdrantVector( process.env.QDRANT_URL, process.env.QDRANT_API_KEY, ); await qdrant.createIndex({ indexName: 'test_collection', dimension: 1536, }); await qdrant.upsert({ indexName: 'test_collection', vectors: embeddings, metadata: chunks?.map(chunk => ({ text: chunk.text })), }); ``` The `ChromaVector` class provides methods to create collections and insert embeddings into Chroma, an open-source embedding database. ```tsx copy import { openai } from '@ai-sdk/openai'; import { ChromaVector } from '@mastra/chroma'; import { MDocument } from '@mastra/rag'; import { embedMany } from 'ai'; const doc = MDocument.fromText('Your text content...'); const chunks = await doc.chunk(); const { embeddings } = await embedMany({ values: chunks.map(chunk => chunk.text), model: openai.embedding('text-embedding-3-small'), }); const chroma = new ChromaVector({ path: "path/to/chroma/db", }); await chroma.createIndex({ indexName: 'test_collection', dimension: 1536, }); await chroma.upsert({ indexName: 'test_collection', vectors: embeddings, metadata: chunks.map(chunk => ({ text: chunk.text })), documents: chunks.map(chunk => chunk.text), }); ```


The `AstraVector` class provides methods to create collections and insert embeddings into DataStax Astra DB, a cloud-native vector database.

```tsx copy
import { openai } from '@ai-sdk/openai';
import { AstraVector } from '@mastra/astra';
import { MDocument } from '@mastra/rag';
import { embedMany } from 'ai';

const doc = MDocument.fromText('Your text content...');

const chunks = await doc.chunk();

const { embeddings } = await embedMany({
  model: openai.embedding('text-embedding-3-small'),
  values: chunks.map(chunk => chunk.text),
});

const astra = new AstraVector({
  token: process.env.ASTRA_DB_TOKEN,
  endpoint: process.env.ASTRA_DB_ENDPOINT,
  keyspace: process.env.ASTRA_DB_KEYSPACE,
});

await astra.createIndex({
  indexName: 'test_collection',
  dimension: 1536,
});

await astra.upsert({
  indexName: 'test_collection',
  vectors: embeddings,
  metadata: chunks?.map(chunk => ({ text: chunk.text })),
});
```

The `LibSQLVector` class provides methods to create collections and insert embeddings into LibSQL, a fork of SQLite with vector extensions.

```tsx copy
import { openai } from "@ai-sdk/openai";
import { LibSQLVector } from "@mastra/core/vector/libsql";
import { MDocument } from "@mastra/rag";
import { embedMany } from "ai";

const doc = MDocument.fromText("Your text content...");

const chunks = await doc.chunk();

const { embeddings } = await embedMany({
  values: chunks.map((chunk) => chunk.text),
  model: openai.embedding("text-embedding-3-small"),
});

const libsql = new LibSQLVector({
  connectionUrl: process.env.DATABASE_URL,
  authToken: process.env.DATABASE_AUTH_TOKEN, // Optional: for Turso cloud databases
});

await libsql.createIndex({
  indexName: "test_collection",
  dimension: 1536,
});

await libsql.upsert({
  indexName: "test_collection",
  vectors: embeddings,
  metadata: chunks?.map((chunk) => ({ text: chunk.text })),
});
```


The `UpstashVector` class provides methods to create collections and insert embeddings into Upstash Vector, a serverless vector database. ```tsx copy import { openai } from '@ai-sdk/openai'; import { UpstashVector } from '@mastra/upstash'; import { MDocument } from '@mastra/rag'; import { embedMany } from 'ai'; const doc = MDocument.fromText('Your text content...'); const chunks = await doc.chunk(); const { embeddings } = await embedMany({ values: chunks.map(chunk => chunk.text), model: openai.embedding('text-embedding-3-small'), }); const upstash = new UpstashVector({ url: process.env.UPSTASH_URL, token: process.env.UPSTASH_TOKEN, }); await upstash.createIndex({ indexName: 'test_collection', dimension: 1536, }); await upstash.upsert({ indexName: 'test_collection', vectors: embeddings, metadata: chunks?.map(chunk => ({ text: chunk.text })), }); ``` The `CloudflareVector` class provides methods to create collections and insert embeddings into Cloudflare Vectorize, a serverless vector database service. ```tsx copy import { openai } from '@ai-sdk/openai'; import { CloudflareVector } from '@mastra/vectorize'; import { MDocument } from '@mastra/rag'; import { embedMany } from 'ai'; const doc = MDocument.fromText('Your text content...'); const chunks = await doc.chunk(); const { embeddings } = await embedMany({ values: chunks.map(chunk => chunk.text), model: openai.embedding('text-embedding-3-small'), }); const vectorize = new CloudflareVector({ accountId: process.env.CF_ACCOUNT_ID, apiToken: process.env.CF_API_TOKEN, }); await vectorize.createIndex({ indexName: 'test_collection', dimension: 1536, }); await vectorize.upsert({ indexName: 'test_collection', vectors: embeddings, metadata: chunks?.map(chunk => ({ text: chunk.text })), }); ```
--- title: "Example: Using the Vector Query Tool | RAG | Mastra Docs" description: Example of implementing a basic RAG system in Mastra using OpenAI embeddings and PGVector for vector storage. --- import { GithubLink } from "../../../../components/github-link"; # Using the Vector Query Tool Source: https://mastra.ai/examples/rag/usage/basic-rag This example demonstrates how to implement and use `createVectorQueryTool` for semantic search in a RAG system. It shows how to configure the tool, manage vector storage, and retrieve relevant context effectively. ## Overview The system implements RAG using Mastra and OpenAI. Here's what it does: 1. Sets up a Mastra agent with gpt-4o-mini for response generation 2. Creates a vector query tool to manage vector store interactions 3. Uses existing embeddings to retrieve relevant context 4. Generates context-aware responses using the Mastra agent > **Note**: To learn how to create and store embeddings, see the [Upsert Embeddings](/examples/rag/upsert/upsert-embeddings) guide. ## Setup ### Environment Setup Make sure to set up your environment variables: ```bash filename=".env" OPENAI_API_KEY=your_openai_api_key_here POSTGRES_CONNECTION_STRING=your_connection_string_here ``` ### Dependencies Import the necessary dependencies: ```typescript copy showLineNumbers filename="src/index.ts" import { openai } from '@ai-sdk/openai'; import { Mastra } from '@mastra/core'; import { Agent } from '@mastra/core/agent'; import { createVectorQueryTool } from '@mastra/rag'; import { PgVector } from '@mastra/pg'; ``` ## Vector Query Tool Creation Create a tool that can query the vector database: ```typescript copy showLineNumbers{7} filename="src/index.ts" const vectorQueryTool = createVectorQueryTool({ vectorStoreName: 'pgVector', indexName: 'embeddings', model: openai.embedding('text-embedding-3-small'), }); ``` ## Agent Configuration Set up the Mastra agent that will handle the responses: ```typescript copy showLineNumbers{13} filename="src/index.ts" export const ragAgent = new Agent({ name: 'RAG Agent', instructions: 'You are a helpful assistant that answers questions based on the provided context. Keep your answers concise and relevant.', model: openai('gpt-4o-mini'), tools: { vectorQueryTool, }, }); ``` ## Instantiate PgVector and Mastra Instantiate PgVector and Mastra with all components: ```typescript copy showLineNumbers{23} filename="src/index.ts" const pgVector = new PgVector(process.env.POSTGRES_CONNECTION_STRING!); export const mastra = new Mastra({ agents: { ragAgent }, vectors: { pgVector }, }); const agent = mastra.getAgent('ragAgent'); ``` ## Example Usage ```typescript copy showLineNumbers{32} filename="src/index.ts" const prompt = ` [Insert query based on document here] Please base your answer only on the context provided in the tool. If the context doesn't contain enough information to fully answer the question, please state that explicitly. `; const completion = await agent.generate(prompt); console.log(completion.text); ```




--- title: "Example: Optimizing Information Density | RAG | Mastra Docs" description: Example of implementing a RAG system in Mastra to optimize information density and deduplicate data using LLM-based processing. --- import { GithubLink } from "../../../../components/github-link"; # Optimizing Information Density Source: https://mastra.ai/examples/rag/usage/cleanup-rag This example demonstrates how to implement a Retrieval-Augmented Generation (RAG) system using Mastra, OpenAI embeddings, and PGVector for vector storage. The system uses an agent to clean the initial chunks to optimize information density and deduplicate data. ## Overview The system implements RAG using Mastra and OpenAI, this time optimizing information density through LLM-based processing. Here's what it does: 1. Sets up a Mastra agent with gpt-4o-mini that can handle both querying and cleaning documents 2. Creates vector query and document chunking tools for the agent to use 3. Processes the initial document: - Chunks text documents into smaller segments - Creates embeddings for the chunks - Stores them in a PostgreSQL vector database 4. Performs an initial query to establish baseline response quality 5. Optimizes the data: - Uses the agent to clean and deduplicate chunks - Creates new embeddings for the cleaned chunks - Updates the vector store with optimized data 6. Performs the same query again to demonstrate improved response quality ## Setup ### Environment Setup Make sure to set up your environment variables: ```bash filename=".env" OPENAI_API_KEY=your_openai_api_key_here POSTGRES_CONNECTION_STRING=your_connection_string_here ``` ### Dependencies Then, import the necessary dependencies: ```typescript copy showLineNumbers filename="index.ts" import { openai } from '@ai-sdk/openai'; import { Mastra } from "@mastra/core"; import { Agent } from "@mastra/core/agent"; import { PgVector } from "@mastra/pg"; import { MDocument, createVectorQueryTool, createDocumentChunkerTool } from "@mastra/rag"; import { embedMany } from "ai"; ``` ## Tool Creation ### Vector Query Tool Using createVectorQueryTool imported from @mastra/rag, you can create a tool that can query the vector database. ```typescript copy showLineNumbers{8} filename="index.ts" const vectorQueryTool = createVectorQueryTool({ vectorStoreName: "pgVector", indexName: "embeddings", model: openai.embedding('text-embedding-3-small'), }); ``` ### Document Chunker Tool Using createDocumentChunkerTool imported from @mastra/rag, you can create a tool that chunks the document and sends the chunks to your agent. ```typescript copy showLineNumbers{14} filename="index.ts" const doc = MDocument.fromText(yourText); const documentChunkerTool = createDocumentChunkerTool({ doc, params: { strategy: "recursive", size: 512, overlap: 25, separator: "\n", }, }); ``` ## Agent Configuration Set up a single Mastra agent that can handle both querying and cleaning: ```typescript copy showLineNumbers{26} filename="index.ts" const ragAgent = new Agent({ name: "RAG Agent", instructions: `You are a helpful assistant that handles both querying and cleaning documents. When cleaning: Process, clean, and label data, remove irrelevant information and deduplicate content while preserving key facts. When querying: Provide answers based on the available context. Keep your answers concise and relevant. Important: When asked to answer a question, please base your answer only on the context provided in the tool. 
If the context doesn't contain enough information to fully answer the question, please state that explicitly. `, model: openai('gpt-4o-mini'), tools: { vectorQueryTool, documentChunkerTool, }, }); ``` ## Instantiate PgVector and Mastra Instantiate PgVector and Mastra with the components: ```typescript copy showLineNumbers{41} filename="index.ts" const pgVector = new PgVector(process.env.POSTGRES_CONNECTION_STRING!); export const mastra = new Mastra({ agents: { ragAgent }, vectors: { pgVector }, }); const agent = mastra.getAgent('ragAgent'); ``` ## Document Processing Chunk the initial document and create embeddings: ```typescript copy showLineNumbers{49} filename="index.ts" const chunks = await doc.chunk({ strategy: "recursive", size: 256, overlap: 50, separator: "\n", }); const { embeddings } = await embedMany({ model: openai.embedding('text-embedding-3-small'), values: chunks.map(chunk => chunk.text), }); const vectorStore = mastra.getVector("pgVector"); await vectorStore.createIndex({ indexName: "embeddings", dimension: 1536, }); await vectorStore.upsert({ indexName: "embeddings", vectors: embeddings, metadata: chunks?.map((chunk: any) => ({ text: chunk.text })), }); ``` ## Initial Query Let's try querying the raw data to establish a baseline: ```typescript copy showLineNumbers{73} filename="index.ts" // Generate response using the original embeddings const query = 'What are all the technologies mentioned for space exploration?'; const originalResponse = await agent.generate(query); console.log('\nQuery:', query); console.log('Response:', originalResponse.text); ``` ## Data Optimization After seeing the initial results, we can clean the data to improve quality: ```typescript copy showLineNumbers{79} filename="index.ts" const chunkPrompt = `Use the tool provided to clean the chunks. Make sure to filter out irrelevant information that is not space related and remove duplicates.`; const newChunks = await agent.generate(chunkPrompt); const updatedDoc = MDocument.fromText(newChunks.text); const updatedChunks = await updatedDoc.chunk({ strategy: "recursive", size: 256, overlap: 50, separator: "\n", }); const { embeddings: cleanedEmbeddings } = await embedMany({ model: openai.embedding('text-embedding-3-small'), values: updatedChunks.map(chunk => chunk.text), }); // Update the vector store with cleaned embeddings await vectorStore.deleteIndex('embeddings'); await vectorStore.createIndex({ indexName: "embeddings", dimension: 1536, }); await vectorStore.upsert({ indexName: "embeddings", vectors: cleanedEmbeddings, metadata: updatedChunks?.map((chunk: any) => ({ text: chunk.text })), }); ``` ## Optimized Query Query the data again after cleaning to observe any differences in the response: ```typescript copy showLineNumbers{109} filename="index.ts" // Query again with cleaned embeddings const cleanedResponse = await agent.generate(query); console.log('\nQuery:', query); console.log('Response:', cleanedResponse.text); ```




--- title: "Example: Chain of Thought Prompting | RAG | Mastra Docs" description: Example of implementing a RAG system in Mastra with chain-of-thought reasoning using OpenAI and PGVector. --- import { GithubLink } from "../../../../components/github-link"; # Chain of Thought Prompting Source: https://mastra.ai/examples/rag/usage/cot-rag This example demonstrates how to implement a Retrieval-Augmented Generation (RAG) system using Mastra, OpenAI embeddings, and PGVector for vector storage, with an emphasis on chain-of-thought reasoning. ## Overview The system implements RAG using Mastra and OpenAI with chain-of-thought prompting. Here's what it does: 1. Sets up a Mastra agent with gpt-4o-mini for response generation 2. Creates a vector query tool to manage vector store interactions 3. Chunks text documents into smaller segments 4. Creates embeddings for these chunks 5. Stores them in a PostgreSQL vector database 6. Retrieves relevant chunks based on queries using vector query tool 7. Generates context-aware responses using chain-of-thought reasoning ## Setup ### Environment Setup Make sure to set up your environment variables: ```bash filename=".env" OPENAI_API_KEY=your_openai_api_key_here POSTGRES_CONNECTION_STRING=your_connection_string_here ``` ### Dependencies Then, import the necessary dependencies: ```typescript copy showLineNumbers filename="index.ts" import { openai } from "@ai-sdk/openai"; import { Mastra } from "@mastra/core"; import { Agent } from "@mastra/core/agent"; import { PgVector } from "@mastra/pg"; import { createVectorQueryTool, MDocument } from "@mastra/rag"; import { embedMany } from "ai"; ``` ## Vector Query Tool Creation Using createVectorQueryTool imported from @mastra/rag, you can create a tool that can query the vector database. ```typescript copy showLineNumbers{8} filename="index.ts" const vectorQueryTool = createVectorQueryTool({ vectorStoreName: "pgVector", indexName: "embeddings", model: openai.embedding('text-embedding-3-small'), }); ``` ## Agent Configuration Set up the Mastra agent with chain-of-thought prompting instructions: ```typescript copy showLineNumbers{14} filename="index.ts" export const ragAgent = new Agent({ name: "RAG Agent", instructions: `You are a helpful assistant that answers questions based on the provided context. Follow these steps for each response: 1. First, carefully analyze the retrieved context chunks and identify key information. 2. Break down your thinking process about how the retrieved information relates to the query. 3. Explain how you're connecting different pieces from the retrieved chunks. 4. Draw conclusions based only on the evidence in the retrieved context. 5. If the retrieved chunks don't contain enough information, explicitly state what's missing. Format your response as: THOUGHT PROCESS: - Step 1: [Initial analysis of retrieved chunks] - Step 2: [Connections between chunks] - Step 3: [Reasoning based on chunks] FINAL ANSWER: [Your concise answer based on the retrieved context] Important: When asked to answer a question, please base your answer only on the context provided in the tool. If the context doesn't contain enough information to fully answer the question, please state that explicitly. Remember: Explain how you're using the retrieved information to reach your conclusions. 
`, model: openai("gpt-4o-mini"), tools: { vectorQueryTool }, }); ``` ## Instantiate PgVector and Mastra Instantiate PgVector and Mastra with all components: ```typescript copy showLineNumbers{36} filename="index.ts" const pgVector = new PgVector(process.env.POSTGRES_CONNECTION_STRING!); export const mastra = new Mastra({ agents: { ragAgent }, vectors: { pgVector }, }); const agent = mastra.getAgent("ragAgent"); ``` ## Document Processing Create a document and process it into chunks: ```typescript copy showLineNumbers{44} filename="index.ts" const doc = MDocument.fromText( `The Impact of Climate Change on Global Agriculture...`, ); const chunks = await doc.chunk({ strategy: "recursive", size: 512, overlap: 50, separator: "\n", }); ``` ## Creating and Storing Embeddings Generate embeddings for the chunks and store them in the vector database: ```typescript copy showLineNumbers{55} filename="index.ts" const { embeddings } = await embedMany({ values: chunks.map(chunk => chunk.text), model: openai.embedding("text-embedding-3-small"), }); const vectorStore = mastra.getVector("pgVector"); await vectorStore.createIndex({ indexName: "embeddings", dimension: 1536, }); await vectorStore.upsert({ indexName: "embeddings", vectors: embeddings, metadata: chunks?.map((chunk: any) => ({ text: chunk.text })), }); ``` ## Chain-of-Thought Querying Try different queries to see how the agent breaks down its reasoning: ```typescript copy showLineNumbers{83} filename="index.ts" const answerOne = await agent.generate('What are the main adaptation strategies for farmers?'); console.log('\nQuery:', 'What are the main adaptation strategies for farmers?'); console.log('Response:', answerOne.text); const answerTwo = await agent.generate('Analyze how temperature affects crop yields.'); console.log('\nQuery:', 'Analyze how temperature affects crop yields.'); console.log('Response:', answerTwo.text); const answerThree = await agent.generate('What connections can you draw between climate change and food security?'); console.log('\nQuery:', 'What connections can you draw between climate change and food security?'); console.log('Response:', answerThree.text); ```




--- title: "Example: Structured Reasoning with Workflows | RAG | Mastra Docs" description: Example of implementing structured reasoning in a RAG system using Mastra's workflow capabilities. --- import { GithubLink } from "../../../../components/github-link"; # Structured Reasoning with Workflows Source: https://mastra.ai/examples/rag/usage/cot-workflow-rag This example demonstrates how to implement a Retrieval-Augmented Generation (RAG) system using Mastra, OpenAI embeddings, and PGVector for vector storage, with an emphasis on structured reasoning through a defined workflow. ## Overview The system implements RAG using Mastra and OpenAI with chain-of-thought prompting through a defined workflow. Here's what it does: 1. Sets up a Mastra agent with gpt-4o-mini for response generation 2. Creates a vector query tool to manage vector store interactions 3. Defines a workflow with multiple steps for chain-of-thought reasoning 4. Processes and chunks text documents 5. Creates and stores embeddings in PostgreSQL 6. Generates responses through the workflow steps ## Setup ### Environment Setup Make sure to set up your environment variables: ```bash filename=".env" OPENAI_API_KEY=your_openai_api_key_here POSTGRES_CONNECTION_STRING=your_connection_string_here ``` ### Dependencies Import the necessary dependencies: ```typescript copy showLineNumbers filename="index.ts" import { openai } from "@ai-sdk/openai"; import { Mastra } from "@mastra/core"; import { Agent } from "@mastra/core/agent"; import { Step, Workflow } from "@mastra/core/workflows"; import { PgVector } from "@mastra/pg"; import { createVectorQueryTool, MDocument } from "@mastra/rag"; import { embedMany } from "ai"; import { z } from "zod"; ``` ## Workflow Definition First, define the workflow with its trigger schema: ```typescript copy showLineNumbers{10} filename="index.ts" export const ragWorkflow = new Workflow({ name: "rag-workflow", triggerSchema: z.object({ query: z.string(), }), }); ``` ## Vector Query Tool Creation Create a tool for querying the vector database: ```typescript copy showLineNumbers{17} filename="index.ts" const vectorQueryTool = createVectorQueryTool({ vectorStoreName: "pgVector", indexName: "embeddings", model: openai.embedding('text-embedding-3-small'), }); ``` ## Agent Configuration Set up the Mastra agent: ```typescript copy showLineNumbers{23} filename="index.ts" export const ragAgent = new Agent({ name: "RAG Agent", instructions: `You are a helpful assistant that answers questions based on the provided context.`, model: openai("gpt-4o-mini"), tools: { vectorQueryTool, }, }); ``` ## Workflow Steps The workflow is divided into multiple steps for chain-of-thought reasoning: ### 1. Context Analysis Step ```typescript copy showLineNumbers{32} filename="index.ts" const analyzeContext = new Step({ id: "analyzeContext", outputSchema: z.object({ initialAnalysis: z.string(), }), execute: async ({ context, mastra }) => { console.log("---------------------------"); const ragAgent = mastra?.getAgent('ragAgent'); const query = context?.getStepResult<{ query: string }>( "trigger", )?.query; const analysisPrompt = `${query} 1. First, carefully analyze the retrieved context chunks and identify key information.`; const analysis = await ragAgent?.generate(analysisPrompt); console.log(analysis?.text); return { initialAnalysis: analysis?.text ?? "", }; }, }); ``` ### 2. 
Thought Breakdown Step ```typescript copy showLineNumbers{54} filename="index.ts" const breakdownThoughts = new Step({ id: "breakdownThoughts", outputSchema: z.object({ breakdown: z.string(), }), execute: async ({ context, mastra }) => { console.log("---------------------------"); const ragAgent = mastra?.getAgent('ragAgent'); const analysis = context?.getStepResult<{ initialAnalysis: string; }>("analyzeContext")?.initialAnalysis; const connectionPrompt = ` Based on the initial analysis: ${analysis} 2. Break down your thinking process about how the retrieved information relates to the query. `; const connectionAnalysis = await ragAgent?.generate(connectionPrompt); console.log(connectionAnalysis?.text); return { breakdown: connectionAnalysis?.text ?? "", }; }, }); ``` ### 3. Connection Step ```typescript copy showLineNumbers{80} filename="index.ts" const connectPieces = new Step({ id: "connectPieces", outputSchema: z.object({ connections: z.string(), }), execute: async ({ context, mastra }) => { console.log("---------------------------"); const ragAgent = mastra?.getAgent('ragAgent'); const process = context?.getStepResult<{ breakdown: string; }>("breakdownThoughts")?.breakdown; const connectionPrompt = ` Based on the breakdown: ${process} 3. Explain how you're connecting different pieces from the retrieved chunks. `; const connections = await ragAgent?.generate(connectionPrompt); console.log(connections?.text); return { connections: connections?.text ?? "", }; }, }); ``` ### 4. Conclusion Step ```typescript copy showLineNumbers{105} filename="index.ts" const drawConclusions = new Step({ id: "drawConclusions", outputSchema: z.object({ conclusions: z.string(), }), execute: async ({ context, mastra }) => { console.log("---------------------------"); const ragAgent = mastra?.getAgent('ragAgent'); const evidence = context?.getStepResult<{ connections: string; }>("connectPieces")?.connections; const conclusionPrompt = ` Based on the connections: ${evidence} 4. Draw conclusions based only on the evidence in the retrieved context. `; const conclusions = await ragAgent?.generate(conclusionPrompt); console.log(conclusions?.text); return { conclusions: conclusions?.text ?? "", }; }, }); ``` ### 5. Final Answer Step ```typescript copy showLineNumbers{130} filename="index.ts" const finalAnswer = new Step({ id: "finalAnswer", outputSchema: z.object({ finalAnswer: z.string(), }), execute: async ({ context, mastra }) => { console.log("---------------------------"); const ragAgent = mastra?.getAgent('ragAgent'); const conclusions = context?.getStepResult<{ conclusions: string; }>("drawConclusions")?.conclusions; const answerPrompt = ` Based on the conclusions: ${conclusions} Format your response as: THOUGHT PROCESS: - Step 1: [Initial analysis of retrieved chunks] - Step 2: [Connections between chunks] - Step 3: [Reasoning based on chunks] FINAL ANSWER: [Your concise answer based on the retrieved context]`; const finalAnswer = await ragAgent?.generate(answerPrompt); console.log(finalAnswer?.text); return { finalAnswer: finalAnswer?.text ?? 
"", }; }, }); ``` ## Workflow Configuration Connect all the steps in the workflow: ```typescript copy showLineNumbers{160} filename="index.ts" ragWorkflow .step(analyzeContext) .then(breakdownThoughts) .then(connectPieces) .then(drawConclusions) .then(finalAnswer); ragWorkflow.commit(); ``` ## Instantiate PgVector and Mastra Instantiate PgVector and Mastra with all components: ```typescript copy showLineNumbers{169} filename="index.ts" const pgVector = new PgVector(process.env.POSTGRES_CONNECTION_STRING!); export const mastra = new Mastra({ agents: { ragAgent }, vectors: { pgVector }, workflows: { ragWorkflow }, }); ``` ## Document Processing Process and chunks the document: ```typescript copy showLineNumbers{177} filename="index.ts" const doc = MDocument.fromText(`The Impact of Climate Change on Global Agriculture...`); const chunks = await doc.chunk({ strategy: "recursive", size: 512, overlap: 50, separator: "\n", }); ``` ## Embedding Creation and Storage Generate and store embeddings: ```typescript copy showLineNumbers{186} filename="index.ts" const { embeddings } = await embedMany({ model: openai.embedding('text-embedding-3-small'), values: chunks.map(chunk => chunk.text), }); const vectorStore = mastra.getVector("pgVector"); await vectorStore.createIndex({ indexName: "embeddings", dimension: 1536, }); await vectorStore.upsert({ indexName: "embeddings", vectors: embeddings, metadata: chunks?.map((chunk: any) => ({ text: chunk.text })), }); ``` ## Workflow Execution Here's how to execute the workflow with a query: ```typescript copy showLineNumbers{202} filename="index.ts" const query = 'What are the main adaptation strategies for farmers?'; console.log('\nQuery:', query); const prompt = ` Please answer the following question: ${query} Please base your answer only on the context provided in the tool. If the context doesn't contain enough information to fully answer the question, please state that explicitly. `; const { runId, start } = ragWorkflow.createRun(); console.log('Run:', runId); const workflowResult = await start({ triggerData: { query: prompt, }, }); console.log('\nThought Process:'); console.log(workflowResult.results); ```




--- title: "Example: Agent-Driven Metadata Filtering | Retrieval | RAG | Mastra Docs" description: Example of using a Mastra agent in a RAG system to construct and apply metadata filters for document retrieval. --- import { GithubLink } from "../../../../components/github-link"; # Agent-Driven Metadata Filtering Source: https://mastra.ai/examples/rag/usage/filter-rag This example demonstrates how to implement a Retrieval-Augmented Generation (RAG) system using Mastra, OpenAI embeddings, and PGVector for vector storage. This system uses an agent to construct metadata filters from a user's query to search for relevant chunks in the vector store, reducing the amount of results returned. ## Overview The system implements metadata filtering using Mastra and OpenAI. Here's what it does: 1. Sets up a Mastra agent with gpt-4o-mini to understand queries and identify filter requirements 2. Creates a vector query tool to handle metadata filtering and semantic search 3. Processes documents into chunks with metadata and embeddings 4. Stores both vectors and metadata in PGVector for efficient retrieval 5. Processes queries by combining metadata filters with semantic search When a user asks a question: - The agent analyzes the query to understand the intent - Constructs appropriate metadata filters (e.g., by topic, date, category) - Uses the vector query tool to find the most relevant information - Generates a contextual response based on the filtered results ## Setup ### Environment Setup Make sure to set up your environment variables: ```bash filename=".env" OPENAI_API_KEY=your_openai_api_key_here POSTGRES_CONNECTION_STRING=your_connection_string_here ``` ### Dependencies Then, import the necessary dependencies: ```typescript copy showLineNumbers filename="index.ts" import { openai } from '@ai-sdk/openai'; import { Mastra } from '@mastra/core'; import { Agent } from '@mastra/core/agent'; import { PgVector } from '@mastra/pg'; import { createVectorQueryTool, MDocument, PGVECTOR_PROMPT } from '@mastra/rag'; import { embedMany } from 'ai'; ``` ## Vector Query Tool Creation Using createVectorQueryTool imported from @mastra/rag, you can create a tool that enables metadata filtering: ```typescript copy showLineNumbers{9} filename="index.ts" const vectorQueryTool = createVectorQueryTool({ id: 'vectorQueryTool', vectorStoreName: "pgVector", indexName: "embeddings", model: openai.embedding('text-embedding-3-small'), enableFilter: true, }); ``` ## Document Processing Create a document and process it into chunks with metadata: ```typescript copy showLineNumbers{17} filename="index.ts" const doc = MDocument.fromText(`The Impact of Climate Change on Global Agriculture...`); const chunks = await doc.chunk({ strategy: 'recursive', size: 512, overlap: 50, separator: '\n', extract: { keywords: true, // Extracts keywords from each chunk }, }); ``` ### Transform Chunks into Metadata Transform chunks into metadata that can be filtered: ```typescript copy showLineNumbers{31} filename="index.ts" const chunkMetadata = chunks?.map((chunk: any, index: number) => ({ text: chunk.text, ...chunk.metadata, nested: { keywords: chunk.metadata.excerptKeywords .replace('KEYWORDS:', '') .split(',') .map(k => k.trim()), id: index, }, })); ``` ## Agent Configuration The agent is configured to understand user queries and translate them into appropriate metadata filters. 
The agent requires both the vector query tool and a system prompt containing: - Metadata structure for available filter fields - Vector store prompt for filter operations and syntax ```typescript copy showLineNumbers{43} filename="index.ts" export const ragAgent = new Agent({ name: 'RAG Agent', model: openai('gpt-4o-mini'), instructions: ` You are a helpful assistant that answers questions based on the provided context. Keep your answers concise and relevant. Filter the context by searching the metadata. The metadata is structured as follows: { text: string, excerptKeywords: string, nested: { keywords: string[], id: number, }, } ${PGVECTOR_PROMPT} Important: When asked to answer a question, please base your answer only on the context provided in the tool. If the context doesn't contain enough information to fully answer the question, please state that explicitly. `, tools: { vectorQueryTool }, }); ``` The agent's instructions are designed to: - Process user queries to identify filter requirements - Use the metadata structure to find relevant information - Apply appropriate filters through the vectorQueryTool and the provided vector store prompt - Generate responses based on the filtered context > Note: Different vector stores have specific prompts available. See [Vector Store Prompts](/docs/rag/retrieval#vector-store-prompts) for details. ## Instantiate PgVector and Mastra Instantiate PgVector and Mastra with the components: ```typescript copy showLineNumbers{69} filename="index.ts" const pgVector = new PgVector(process.env.POSTGRES_CONNECTION_STRING!); export const mastra = new Mastra({ agents: { ragAgent }, vectors: { pgVector }, }); const agent = mastra.getAgent('ragAgent'); ``` ## Creating and Storing Embeddings Generate embeddings and store them with metadata: ```typescript copy showLineNumbers{78} filename="index.ts" const { embeddings } = await embedMany({ model: openai.embedding('text-embedding-3-small'), values: chunks.map(chunk => chunk.text), }); const vectorStore = mastra.getVector('pgVector'); await vectorStore.createIndex({ indexName: 'embeddings', dimension: 1536, }); // Store both embeddings and metadata together await vectorStore.upsert({ indexName: 'embeddings', vectors: embeddings, metadata: chunkMetadata, }); ``` The `upsert` operation stores both the vector embeddings and their associated metadata, enabling combined semantic search and metadata filtering capabilities. ## Metadata-Based Querying Try different queries using metadata filters: ```typescript copy showLineNumbers{96} filename="index.ts" const queryOne = 'What are the adaptation strategies mentioned?'; const answerOne = await agent.generate(queryOne); console.log('\nQuery:', queryOne); console.log('Response:', answerOne.text); const queryTwo = 'Show me recent sections. Check the "nested.id" field and return values that are greater than 2.'; const answerTwo = await agent.generate(queryTwo); console.log('\nQuery:', queryTwo); console.log('Response:', answerTwo.text); const queryThree = 'Search the "text" field using regex operator to find sections containing "temperature".'; const answerThree = await agent.generate(queryThree); console.log('\nQuery:', queryThree); console.log('Response:', answerThree.text); ```
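The metadata transformation above assumes `excerptKeywords` comes back as a single string prefixed with `KEYWORDS:`. If you want to reuse or test that parsing logic, it can help to factor it into a small helper. The function below is a hypothetical utility, not part of `@mastra/rag`:

```typescript
// Hypothetical helper for turning an extracted keyword string into a clean array.
// Assumes the extractor returns something like "KEYWORDS: climate, agriculture, adaptation".
function parseExcerptKeywords(excerptKeywords: string | undefined): string[] {
  if (!excerptKeywords) return [];
  return excerptKeywords
    .replace("KEYWORDS:", "")
    .split(",")
    .map(keyword => keyword.trim())
    .filter(keyword => keyword.length > 0);
}

// Example usage with the shape produced by `extract: { keywords: true }` above:
console.log(parseExcerptKeywords("KEYWORDS: climate, agriculture, adaptation"));
// -> ["climate", "agriculture", "adaptation"]
```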




--- title: "Example: A Complete Graph RAG System | RAG | Mastra Docs" description: Example of implementing a Graph RAG system in Mastra using OpenAI embeddings and PGVector for vector storage. --- import { GithubLink } from "../../../../components/github-link"; # Graph RAG Source: https://mastra.ai/examples/rag/usage/graph-rag This example demonstrates how to implement a Retrieval-Augmented Generation (RAG) system using Mastra, OpenAI embeddings, and PGVector for vector storage. ## Overview The system implements Graph RAG using Mastra and OpenAI. Here's what it does: 1. Sets up a Mastra agent with gpt-4o-mini for response generation 2. Creates a GraphRAG tool to manage vector store interactions and knowledge graph creation/traversal 3. Chunks text documents into smaller segments 4. Creates embeddings for these chunks 5. Stores them in a PostgreSQL vector database 6. Creates a knowledge graph of relevant chunks based on queries using GraphRAG tool - Tool returns results from vector store and creates knowledge graph - Traverses knowledge graph using query 7. Generates context-aware responses using the Mastra agent ## Setup ### Environment Setup Make sure to set up your environment variables: ```bash filename=".env" OPENAI_API_KEY=your_openai_api_key_here POSTGRES_CONNECTION_STRING=your_connection_string_here ``` ### Dependencies Then, import the necessary dependencies: ```typescript copy showLineNumbers filename="index.ts" import { openai } from "@ai-sdk/openai"; import { Mastra } from "@mastra/core"; import { Agent } from "@mastra/core/agent"; import { PgVector } from "@mastra/pg"; import { MDocument, createGraphRAGTool } from "@mastra/rag"; import { embedMany } from "ai"; ``` ## GraphRAG Tool Creation Using createGraphRAGTool imported from @mastra/rag, you can create a tool that queries the vector database and converts the results into a knowledge graph: ```typescript copy showLineNumbers{8} filename="index.ts" const graphRagTool = createGraphRAGTool({ vectorStoreName: "pgVector", indexName: "embeddings", model: openai.embedding("text-embedding-3-small"), graphOptions: { dimension: 1536, threshold: 0.7, }, }); ``` ## Agent Configuration Set up the Mastra agent that will handle the responses: ```typescript copy showLineNumbers{19} filename="index.ts" const ragAgent = new Agent({ name: "GraphRAG Agent", instructions: `You are a helpful assistant that answers questions based on the provided context. Format your answers as follows: 1. DIRECT FACTS: List only the directly stated facts from the text relevant to the question (2-3 bullet points) 2. CONNECTIONS MADE: List the relationships you found between different parts of the text (2-3 bullet points) 3. CONCLUSION: One sentence summary that ties everything together Keep each section brief and focus on the most important points. Important: When asked to answer a question, please base your answer only on the context provided in the tool. 
If the context doesn't contain enough information to fully answer the question, please state that explicitly.`, model: openai("gpt-4o-mini"), tools: { graphRagTool, }, }); ``` ## Instantiate PgVector and Mastra Instantiate PgVector and Mastra with the components: ```typescript copy showLineNumbers{36} filename="index.ts" const pgVector = new PgVector(process.env.POSTGRES_CONNECTION_STRING!); export const mastra = new Mastra({ agents: { ragAgent }, vectors: { pgVector }, }); const agent = mastra.getAgent("ragAgent"); ``` ## Document Processing Create a document and process it into chunks: ```typescript copy showLineNumbers{45} filename="index.ts" const doc = MDocument.fromText(` # Riverdale Heights: Community Development Study // ... text content ... `); const chunks = await doc.chunk({ strategy: "recursive", size: 512, overlap: 50, separator: "\n", }); ``` ## Creating and Storing Embeddings Generate embeddings for the chunks and store them in the vector database: ```typescript copy showLineNumbers{56} filename="index.ts" const { embeddings } = await embedMany({ model: openai.embedding("text-embedding-3-small"), values: chunks.map(chunk => chunk.text), }); const vectorStore = mastra.getVector("pgVector"); await vectorStore.createIndex({ indexName: "embeddings", dimension: 1536, }); await vectorStore.upsert({ indexName: "embeddings", vectors: embeddings, metadata: chunks?.map((chunk: any) => ({ text: chunk.text })), }); ``` ## Graph-Based Querying Try different queries to explore relationships in the data: ```typescript copy showLineNumbers{82} filename="index.ts" const queryOne = "What are the direct and indirect effects of early railway decisions on Riverdale Heights' current state?"; const answerOne = await ragAgent.generate(queryOne); console.log('\nQuery:', queryOne); console.log('Response:', answerOne.text); const queryTwo = 'How have changes in transportation infrastructure affected different generations of local businesses and community spaces?'; const answerTwo = await ragAgent.generate(queryTwo); console.log('\nQuery:', queryTwo); console.log('Response:', answerTwo.text); const queryThree = 'Compare how the Rossi family business and Thompson Steel Works responded to major infrastructure changes, and how their responses affected the community.'; const answerThree = await ragAgent.generate(queryThree); console.log('\nQuery:', queryThree); console.log('Response:', answerThree.text); const queryFour = 'Trace how the transformation of the Thompson Steel Works site has influenced surrounding businesses and cultural spaces from 1932 to present.'; const answerFour = await ragAgent.generate(queryFour); console.log('\nQuery:', queryFour); console.log('Response:', answerFour.text); ```
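The queries above repeat the same generate-and-log pattern. Purely as a stylistic alternative, you can drive them from a loop over an array; this sketch reuses the `ragAgent` defined above and changes nothing about the retrieval behavior:

```typescript
// Run a batch of graph-exploration queries sequentially and print each response.
const graphQueries = [
  "What are the direct and indirect effects of early railway decisions on Riverdale Heights' current state?",
  "How have changes in transportation infrastructure affected different generations of local businesses and community spaces?",
];

for (const query of graphQueries) {
  const answer = await ragAgent.generate(query);
  console.log("\nQuery:", query);
  console.log("Response:", answer.text);
}
```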




--- title: "Example: Speech to Text | Voice | Mastra Docs" description: Example of using Mastra to create a speech to text application. --- import { GithubLink } from '../../../components/github-link'; # Smart Voice Memo App Source: https://mastra.ai/examples/voice/speech-to-text The following code snippets provide example implementations of Speech-to-Text (STT) functionality in a smart voice memo application using Next.js with direct integration of Mastra. For more details on integrating Mastra with Next.js, please refer to our [Integrate with Next.js](/docs/frameworks/01-next-js) documentation. ## Creating an Agent with STT Capabilities The following example shows how to initialize a voice-enabled agent with OpenAI's STT capabilities: ```typescript filename="src/mastra/agents/index.ts" import { openai } from '@ai-sdk/openai'; import { Agent } from '@mastra/core/agent'; import { OpenAIVoice } from '@mastra/voice-openai'; const instructions = ` You are an AI note assistant tasked with providing concise, structured summaries of their content... // omitted for brevity `; export const noteTakerAgent = new Agent({ name: 'Note Taker Agent', instructions: instructions, model: openai('gpt-4o'), voice: new OpenAIVoice(), // Add OpenAI voice provider with default configuration }); ``` ## Registering the Agent with Mastra This snippet demonstrates how to register the STT-enabled agent with your Mastra instance: ```typescript filename="src/mastra/index.ts" import { createLogger } from '@mastra/core/logger'; import { Mastra } from '@mastra/core/mastra'; import { noteTakerAgent } from './agents'; export const mastra = new Mastra({ agents: { noteTakerAgent }, // Register the note taker agent logger: createLogger({ name: 'Mastra', level: 'info', }), }); ``` ## Processing Audio for Transcription The following code shows how to receive audio from a web request and use the agent's STT capabilities to transcribe it: ```typescript filename="app/api/audio/route.ts" import { mastra } from '@/src/mastra'; // Import the Mastra instance import { Readable } from 'node:stream'; export async function POST(req: Request) { // Get the audio file from the request const formData = await req.formData(); const audioFile = formData.get('audio') as File; const arrayBuffer = await audioFile.arrayBuffer(); const buffer = Buffer.from(arrayBuffer); const readable = Readable.from(buffer); // Get the note taker agent from the Mastra instance const noteTakerAgent = mastra.getAgent('noteTakerAgent'); // Transcribe the audio file const text = await noteTakerAgent.voice?.listen(readable); return new Response(JSON.stringify({ text }), { headers: { 'Content-Type': 'application/json' }, }); } ``` You can view the complete implementation of the Smart Voice Memo App on our GitHub repository.




--- title: "Example: Text to Speech | Voice | Mastra Docs" description: Example of using Mastra to create a text to speech application. --- import { GithubLink } from '../../../components/github-link'; # Interactive Story Generator Source: https://mastra.ai/examples/voice/text-to-speech The following code snippets provide example implementations of Text-to-Speech (TTS) functionality in an interactive story generator application using Next.js with Mastra as a separate backend integration. This example demonstrates how to use the Mastra client-js SDK to connect to your Mastra backend. For more details on integrating Mastra with Next.js, please refer to our [Integrate with Next.js](/docs/frameworks/01-next-js) documentation. ## Creating an Agent with TTS Capabilities The following example shows how to set up a story generator agent with TTS capabilities on the backend: ```typescript filename="src/mastra/agents/index.ts" import { openai } from '@ai-sdk/openai'; import { Agent } from '@mastra/core/agent'; import { OpenAIVoice } from '@mastra/voice-openai'; import { Memory } from '@mastra/memory'; const instructions = ` You are an Interactive Storyteller Agent. Your job is to create engaging short stories with user choices that influence the narrative. // omitted for brevity `; export const storyTellerAgent = new Agent({` name: 'Story Teller Agent', instructions: instructions, model: openai('gpt-4o'), voice: new OpenAIVoice(), }); ``` ## Registering the Agent with Mastra This snippet demonstrates how to register the agent with your Mastra instance: ```typescript filename="src/mastra/index.ts" import { createLogger } from '@mastra/core/logger'; import { Mastra } from '@mastra/core/mastra'; import { storyTellerAgent } from './agents'; export const mastra = new Mastra({ agents: { storyTellerAgent }, logger: createLogger({ name: 'Mastra', level: 'info', }), }); ``` ## Connecting to Mastra from the Frontend Here we use the Mastra Client SDK to interact with our Mastra server. For more information about the Mastra Client SDK, check out the [documentation](/docs/deployment/client). ```typescript filename="src/app/page.tsx" import { MastraClient } from '@mastra/client-js'; export const mastraClient = new MastraClient({ baseUrl: 'http://localhost:4111', // Replace with your Mastra backend URL }); ``` ## Generating Story Content and Converting to Speech This example demonstrates how to get a reference to a Mastra agent, generate story content based on user input, and then convert that content to speech: ``` typescript filename="/app/components/StoryManager.tsx" const handleInitialSubmit = async (formData: FormData) => { setIsLoading(true); try { const agent = mastraClient.getAgent('storyTellerAgent'); const message = `Current phase: BEGINNING. 
Story genre: ${formData.genre}, Protagonist name: ${formData.protagonistDetails.name}, Protagonist age: ${formData.protagonistDetails.age}, Protagonist gender: ${formData.protagonistDetails.gender}, Protagonist occupation: ${formData.protagonistDetails.occupation}, Story Setting: ${formData.setting}`; const storyResponse = await agent.generate({ messages: [{ role: 'user', content: message }], threadId: storyState.threadId, resourceId: storyState.resourceId, }); const storyText = storyResponse.text; const audioResponse = await agent.voice.speak(storyText); if (!audioResponse.body) { throw new Error('No audio stream received'); } const audio = await readStream(audioResponse.body); setStoryState(prev => ({ phase: 'beginning', threadId: prev.threadId, resourceId: prev.resourceId, content: { ...prev.content, beginning: storyText, }, })); setAudioBlob(audio); return audio; } catch (error) { console.error('Error generating story beginning:', error); } finally { setIsLoading(false); } }; ``` ## Playing the Audio This snippet demonstrates how to handle text-to-speech audio playback by monitoring for new audio data. When audio is received, the code creates a browser-playable URL from the audio blob, assigns it to an audio element, and attempts to play it automatically: ```typescript filename="/app/components/StoryManager.tsx" useEffect(() => { if (!audioRef.current || !audioData) return; // Store a reference to the HTML audio element const currentAudio = audioRef.current; // Convert the Blob/File audio data from Mastra into a URL the browser can play const url = URL.createObjectURL(audioData); const playAudio = async () => { try { currentAudio.src = url; await currentAudio.load(); await currentAudio.play(); setIsPlaying(true); } catch (error) { console.error('Auto-play failed:', error); } }; playAudio(); return () => { if (currentAudio) { currentAudio.pause(); currentAudio.src = ''; URL.revokeObjectURL(url); } }; }, [audioData]); ``` You can view the complete implementation of the Interactive Story Generator on our GitHub repository.
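The story generation snippet above calls a `readStream` helper that is not shown. One possible implementation, which collects the response body into a Blob the `<audio>` element can play, is sketched below; the actual helper in the example repository may differ, and the `audio/mpeg` MIME type is an assumption:

```typescript
// Hypothetical implementation of the readStream helper used above:
// collect a ReadableStream of audio bytes into a single Blob for playback.
async function readStream(stream: ReadableStream<Uint8Array>): Promise<Blob> {
  const reader = stream.getReader();
  const chunks: Uint8Array[] = [];

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    if (value) chunks.push(value);
  }

  return new Blob(chunks, { type: "audio/mpeg" });
}
```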




--- title: "Example: Branching Paths | Workflows | Mastra Docs" description: Example of using Mastra to create workflows with branching paths based on intermediate results. --- import { GithubLink } from "../../../components/github-link"; # Branching Paths Source: https://mastra.ai/examples/workflows/branching-paths When processing data, you often need to take different actions based on intermediate results. This example shows how to create a workflow that splits into separate paths, where each path executes different steps based on the output of a previous step. ## Control Flow Diagram This example shows how to create a workflow that splits into separate paths, where each path executes different steps based on the output of a previous step. Here's the control flow diagram: Diagram showing workflow with branching paths ## Creating the Steps Let's start by creating the steps and initializing the workflow. {/* prettier-ignore */} ```ts showLineNumbers copy import { Step, Workflow } from "@mastra/core/workflows"; import { z } from "zod" const stepOne = new Step({ id: "stepOne", execute: async ({ context }) => ({ doubledValue: context.triggerData.inputValue * 2 }) }); const stepTwo = new Step({ id: "stepTwo", execute: async ({ context }) => { const stepOneResult = context.getStepResult<{ doubledValue: number }>("stepOne"); if (!stepOneResult) { return { isDivisibleByFive: false } } return { isDivisibleByFive: stepOneResult.doubledValue % 5 === 0 } } }); const stepThree = new Step({ id: "stepThree", execute: async ({ context }) =>{ const stepOneResult = context.getStepResult<{ doubledValue: number }>("stepOne"); if (!stepOneResult) { return { incrementedValue: 0 } } return { incrementedValue: stepOneResult.doubledValue + 1 } } }); const stepFour = new Step({ id: "stepFour", execute: async ({ context }) => { const stepThreeResult = context.getStepResult<{ incrementedValue: number }>("stepThree"); if (!stepThreeResult) { return { isDivisibleByThree: false } } return { isDivisibleByThree: stepThreeResult.incrementedValue % 3 === 0 } } }); // New step that depends on both branches const finalStep = new Step({ id: "finalStep", execute: async ({ context }) => { // Get results from both branches using getStepResult const stepTwoResult = context.getStepResult<{ isDivisibleByFive: boolean }>("stepTwo"); const stepFourResult = context.getStepResult<{ isDivisibleByThree: boolean }>("stepFour"); const isDivisibleByFive = stepTwoResult?.isDivisibleByFive || false; const isDivisibleByThree = stepFourResult?.isDivisibleByThree || false; return { summary: `The number ${context.triggerData.inputValue} when doubled ${isDivisibleByFive ? 'is' : 'is not'} divisible by 5, and when doubled and incremented ${isDivisibleByThree ? 'is' : 'is not'} divisible by 3.`, isDivisibleByFive, isDivisibleByThree } } }); // Build the workflow const myWorkflow = new Workflow({ name: "my-workflow", triggerSchema: z.object({ inputValue: z.number(), }), }); ``` ## Branching Paths and Chaining Steps Now let's configure the workflow with branching paths and merge them using the compound `.after([])` syntax. 
```ts showLineNumbers copy // Create two parallel branches myWorkflow // First branch .step(stepOne) .then(stepTwo) // Second branch .after(stepOne) .step(stepThree) .then(stepFour) // Merge both branches using compound after syntax .after([stepTwo, stepFour]) .step(finalStep) .commit(); const { start } = myWorkflow.createRun(); const result = await start({ triggerData: { inputValue: 3 } }); console.log(result.steps.finalStep.output.summary); // Output: "The number 3 when doubled is not divisible by 5, and when doubled and incremented is divisible by 3." ``` ## Advanced Branching and Merging You can create more complex workflows with multiple branches and merge points: ```ts showLineNumbers copy const complexWorkflow = new Workflow({ name: "complex-workflow", triggerSchema: z.object({ inputValue: z.number(), }), }); // Create multiple branches with different merge points complexWorkflow // Main step .step(stepOne) // First branch .then(stepTwo) // Second branch .after(stepOne) .step(stepThree) .then(stepFour) // Third branch (another path from stepOne) .after(stepOne) .step(new Step({ id: "alternativePath", execute: async ({ context }) => { const stepOneResult = context.getStepResult<{ doubledValue: number }>("stepOne"); return { result: (stepOneResult?.doubledValue || 0) * 3 } } })) // Merge first and second branches .after([stepTwo, stepFour]) .step(new Step({ id: "partialMerge", execute: async ({ context }) => { const stepTwoResult = context.getStepResult<{ isDivisibleByFive: boolean }>("stepTwo"); const stepFourResult = context.getStepResult<{ isDivisibleByThree: boolean }>("stepFour"); return { intermediateResult: "Processed first two branches", branchResults: { branch1: stepTwoResult?.isDivisibleByFive, branch2: stepFourResult?.isDivisibleByThree } } } })) // Final merge of all branches .after(["partialMerge", "alternativePath"]) .step(new Step({ id: "finalMerge", execute: async ({ context }) => { const partialMergeResult = context.getStepResult<{ intermediateResult: string, branchResults: { branch1: boolean, branch2: boolean } }>("partialMerge"); const alternativePathResult = context.getStepResult<{ result: number }>("alternativePath"); return { finalResult: "All branches processed", combinedData: { fromPartialMerge: partialMergeResult?.branchResults, fromAlternativePath: alternativePathResult?.result } } } })) .commit(); ```
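Whichever merge topology you use, a merge step can defensively verify that the branches it depends on actually completed before reading their outputs. The sketch below follows the `context.steps` status-check pattern used in the sequential and parallel examples later in this document; the step id `guardedMerge` is made up for illustration:

```typescript
import { Step } from "@mastra/core/workflows";

// Hypothetical merge step that only reads branch outputs when both branches succeeded.
const guardedMerge = new Step({
  id: "guardedMerge",
  execute: async ({ context }) => {
    const branchOne = context.steps.stepTwo;
    const branchTwo = context.steps.stepFour;

    if (branchOne?.status !== "success" || branchTwo?.status !== "success") {
      return { summary: "One or both branches did not complete successfully." };
    }

    return {
      summary: `Divisible by five: ${branchOne.output.isDivisibleByFive}, divisible by three: ${branchTwo.output.isDivisibleByThree}`,
    };
  },
});
```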




--- title: "Example: Calling an Agent from a Workflow | Mastra Docs" description: Example of using Mastra to call an AI agent from within a workflow step. --- import { GithubLink } from "../../../components/github-link"; # Calling an Agent From a Workflow Source: https://mastra.ai/examples/workflows/calling-agent This example demonstrates how to create a workflow that calls an AI agent to process messages and generate responses, and execute it within a workflow step. ```ts showLineNumbers copy import { openai } from "@ai-sdk/openai"; import { Mastra } from "@mastra/core"; import { Agent } from "@mastra/core/agent"; import { Step, Workflow } from "@mastra/core/workflows"; import { z } from "zod"; const penguin = new Agent({ name: "agent skipper", instructions: `You are skipper from penguin of madagascar, reply as that`, model: openai("gpt-4o-mini"), }); const newWorkflow = new Workflow({ name: "pass message to the workflow", triggerSchema: z.object({ message: z.string(), }), }); const replyAsSkipper = new Step({ id: "reply", outputSchema: z.object({ reply: z.string(), }), execute: async ({ context, mastra }) => { const skipper = mastra?.getAgent('penguin'); const res = await skipper?.generate( context?.triggerData?.message, ); return { reply: res?.text || "" }; }, }); newWorkflow.step(replyAsSkipper); newWorkflow.commit(); const mastra = new Mastra({ agents: { penguin }, workflows: { newWorkflow }, }); const { runId, start } = await mastra.getWorkflow("newWorkflow").createRun(); const runResult = await start({ triggerData: { message: "Give me a run down of the mission to save private" }, }); console.log(runResult.results); ```




--- title: "Example: Conditional Branching (experimental) | Workflows | Mastra Docs" description: Example of using Mastra to create conditional branches in workflows using if/else statements. --- import { GithubLink } from '../../../components/github-link'; # Workflow with Conditional Branching (experimental) Source: https://mastra.ai/examples/workflows/conditional-branching Workflows often need to follow different paths based on conditions. This example demonstrates how to use `if` and `else` to create conditional branches in your workflows. ## Basic If/Else Example This example shows a simple workflow that takes different paths based on a numeric value: ```ts showLineNumbers copy import { Mastra } from '@mastra/core'; import { Step, Workflow } from '@mastra/core/workflows'; import { z } from 'zod'; // Step that provides the initial value const startStep = new Step({ id: 'start', outputSchema: z.object({ value: z.number(), }), execute: async ({ context }) => { // Get the value from the trigger data const value = context.triggerData.inputValue; return { value }; }, }); // Step that handles high values const highValueStep = new Step({ id: 'highValue', outputSchema: z.object({ result: z.string(), }), execute: async ({ context }) => { const value = context.getStepResult<{ value: number }>('start')?.value; return { result: `High value processed: ${value}` }; }, }); // Step that handles low values const lowValueStep = new Step({ id: 'lowValue', outputSchema: z.object({ result: z.string(), }), execute: async ({ context }) => { const value = context.getStepResult<{ value: number }>('start')?.value; return { result: `Low value processed: ${value}` }; }, }); // Final step that summarizes the result const finalStep = new Step({ id: 'final', outputSchema: z.object({ summary: z.string(), }), execute: async ({ context }) => { // Get the result from whichever branch executed const highResult = context.getStepResult<{ result: string }>('highValue')?.result; const lowResult = context.getStepResult<{ result: string }>('lowValue')?.result; const result = highResult || lowResult; return { summary: `Processing complete: ${result}` }; }, }); // Build the workflow with conditional branching const conditionalWorkflow = new Workflow({ name: 'conditional-workflow', triggerSchema: z.object({ inputValue: z.number(), }), }); conditionalWorkflow .step(startStep) .if(async ({ context }) => { const value = context.getStepResult<{ value: number }>('start')?.value ?? 
0; return value >= 10; // Condition: value is 10 or greater }) .then(highValueStep) .then(finalStep) .else() .then(lowValueStep) .then(finalStep) // Both branches converge on the final step .commit(); // Register the workflow const mastra = new Mastra({ workflows: { conditionalWorkflow }, }); // Example usage async function runWorkflow(inputValue: number) { const workflow = mastra.getWorkflow('conditionalWorkflow'); const { start } = workflow.createRun(); const result = await start({ triggerData: { inputValue }, }); console.log('Workflow result:', result.results); return result; } // Run with a high value (follows the "if" branch) const result1 = await runWorkflow(15); // Run with a low value (follows the "else" branch) const result2 = await runWorkflow(5); console.log('Result 1:', result1); console.log('Result 2:', result2); ``` ## Using Reference-Based Conditions You can also use reference-based conditions with comparison operators: ```ts showLineNumbers copy // Using reference-based conditions instead of functions conditionalWorkflow .step(startStep) .if({ ref: { step: startStep, path: 'value' }, query: { $gte: 10 }, // Condition: value is 10 or greater }) .then(highValueStep) .then(finalStep) .else() .then(lowValueStep) .then(finalStep) .commit(); ```




--- title: "Example: Creating a Workflow | Workflows | Mastra Docs" description: Example of using Mastra to define and execute a simple workflow with a single step. --- import { GithubLink } from "../../../components/github-link"; # Creating a Simple Workflow Source: https://mastra.ai/examples/workflows/creating-a-workflow A workflow allows you to define and execute sequences of operations in a structured path. This example shows a workflow with a single step. ```ts showLineNumbers copy import { Step, Workflow } from "@mastra/core/workflows"; import { z } from "zod"; const myWorkflow = new Workflow({ name: "my-workflow", triggerSchema: z.object({ input: z.number(), }), }); const stepOne = new Step({ id: "stepOne", inputSchema: z.object({ value: z.number(), }), outputSchema: z.object({ doubledValue: z.number(), }), execute: async ({ context }) => { const doubledValue = context?.triggerData?.input * 2; return { doubledValue }; }, }); myWorkflow.step(stepOne).commit(); const { runId, start } = myWorkflow.createRun(); const res = await start({ triggerData: { input: 90 }, }); console.log(res.results); ```




--- title: "Example: Cyclical Dependencies | Workflows | Mastra Docs" description: Example of using Mastra to create workflows with cyclical dependencies and conditional loops. --- import { GithubLink } from "../../../components/github-link"; # Workflow with Cyclical dependencies Source: https://mastra.ai/examples/workflows/cyclical-dependencies Workflows support cyclical dependencies where steps can loop back based on conditions. The example below shows how to use conditional logic to create loops and handle repeated execution. ```ts showLineNumbers copy import { Workflow, Step } from '@mastra/core'; import { z } from 'zod'; async function main() { const doubleValue = new Step({ id: 'doubleValue', description: 'Doubles the input value', inputSchema: z.object({ inputValue: z.number(), }), outputSchema: z.object({ doubledValue: z.number(), }), execute: async ({ context }) => { const doubledValue = context.inputValue * 2; return { doubledValue }; }, }); const incrementByOne = new Step({ id: 'incrementByOne', description: 'Adds 1 to the input value', outputSchema: z.object({ incrementedValue: z.number(), }), execute: async ({ context }) => { const valueToIncrement = context?.getStepResult<{ firstValue: number }>('trigger')?.firstValue; if (!valueToIncrement) throw new Error('No value to increment provided'); const incrementedValue = valueToIncrement + 1; return { incrementedValue }; }, }); const cyclicalWorkflow = new Workflow({ name: 'cyclical-workflow', triggerSchema: z.object({ firstValue: z.number(), }), }); cyclicalWorkflow .step(doubleValue, { variables: { inputValue: { step: 'trigger', path: 'firstValue', }, }, }) .then(incrementByOne) .after(doubleValue) .step(doubleValue, { variables: { inputValue: { step: doubleValue, path: 'doubledValue', }, }, }) .commit(); const { runId, start } = cyclicalWorkflow.createRun(); console.log('Run', runId); const res = await start({ triggerData: { firstValue: 6 } }); console.log(res.results); } main(); ```




--- title: "Example: Parallel Execution | Workflows | Mastra Docs" description: Example of using Mastra to execute multiple independent tasks in parallel within a workflow. --- import { GithubLink } from "../../../components/github-link"; # Parallel Execution with Steps Source: https://mastra.ai/examples/workflows/parallel-steps When building AI applications, you often need to process multiple independent tasks simultaneously to improve efficiency. ## Control Flow Diagram This example shows how to structure a workflow that executes steps in parallel, with each branch handling its own data flow and dependencies. Here's the control flow diagram: Diagram showing workflow with parallel steps ## Creating the Steps Let's start by creating the steps and initializing the workflow. ```ts showLineNumbers copy import { Step, Workflow } from "@mastra/core/workflows"; import { z } from "zod"; const stepOne = new Step({ id: "stepOne", execute: async ({ context }) => ({ doubledValue: context.triggerData.inputValue * 2, }), }); const stepTwo = new Step({ id: "stepTwo", execute: async ({ context }) => { if (context.steps.stepOne.status !== "success") { return { incrementedValue: 0 } } return { incrementedValue: context.steps.stepOne.output.doubledValue + 1 } }, }); const stepThree = new Step({ id: "stepThree", execute: async ({ context }) => ({ tripledValue: context.triggerData.inputValue * 3, }), }); const stepFour = new Step({ id: "stepFour", execute: async ({ context }) => { if (context.steps.stepThree.status !== "success") { return { isEven: false } } return { isEven: context.steps.stepThree.output.tripledValue % 2 === 0 } }, }); const myWorkflow = new Workflow({ name: "my-workflow", triggerSchema: z.object({ inputValue: z.number(), }), }); ``` ## Chaining and Parallelizing Steps Now we can add the steps to the workflow. Note the `.then()` method is used to chain the steps, but the `.step()` method is used to add the steps to the workflow. ```ts showLineNumbers copy myWorkflow .step(stepOne) .then(stepTwo) // chain one .step(stepThree) .then(stepFour) // chain two .commit(); const { start } = myWorkflow.createRun(); const result = await start({ triggerData: { inputValue: 3 } }); ```




--- title: "Example: Sequential Steps | Workflows | Mastra Docs" description: Example of using Mastra to chain workflow steps in a specific sequence, passing data between them. --- import { GithubLink } from "../../../components/github-link"; # Workflow with Sequential Steps Source: https://mastra.ai/examples/workflows/sequential-steps Workflow can be chained to run one after another in a specific sequence. ## Control Flow Diagram This example shows how to chain workflow steps by using the `then` method demonstrating how to pass data between sequential steps and execute them in order. Here's the control flow diagram: Diagram showing workflow with sequential steps ## Creating the Steps Let's start by creating the steps and initializing the workflow. ```ts showLineNumbers copy import { Step, Workflow } from "@mastra/core/workflows"; import { z } from "zod"; const stepOne = new Step({ id: "stepOne", execute: async ({ context }) => ({ doubledValue: context.triggerData.inputValue * 2, }), }); const stepTwo = new Step({ id: "stepTwo", execute: async ({ context }) => { if (context.steps.stepOne.status !== "success") { return { incrementedValue: 0 } } return { incrementedValue: context.steps.stepOne.output.doubledValue + 1 } }, }); const stepThree = new Step({ id: "stepThree", execute: async ({ context }) => { if (context.steps.stepTwo.status !== "success") { return { tripledValue: 0 } } return { tripledValue: context.steps.stepTwo.output.incrementedValue * 3 } }, }); // Build the workflow const myWorkflow = new Workflow({ name: "my-workflow", triggerSchema: z.object({ inputValue: z.number(), }), }); ``` ## Chaining the Steps and Executing the Workflow Now let's chain the steps together. ```ts showLineNumbers copy // sequential steps myWorkflow.step(stepOne).then(stepTwo).then(stepThree); myWorkflow.commit(); const { start } = myWorkflow.createRun(); const res = await start({ triggerData: { inputValue: 90 } }); ```




--- title: "Example: Suspend and Resume | Workflows | Mastra Docs" description: Example of using Mastra to suspend and resume workflow steps during execution. --- import { GithubLink } from '../../../components/github-link'; # Workflow with Suspend and Resume Source: https://mastra.ai/examples/workflows/suspend-and-resume Workflow steps can be suspended and resumed at any point in the workflow execution. This example demonstrates how to suspend a workflow step and resume it later. ## Basic Example ```ts showLineNumbers copy import { Mastra } from '@mastra/core'; import { Step, Workflow } from '@mastra/core/workflows'; import { z } from 'zod'; const stepOne = new Step({ id: 'stepOne', outputSchema: z.object({ doubledValue: z.number(), }), execute: async ({ context }) => { const doubledValue = context.triggerData.inputValue * 2; return { doubledValue }; }, }); const stepTwo = new Step({ id: 'stepTwo', outputSchema: z.object({ incrementedValue: z.number(), }), execute: async ({ context, suspend }) => { const secondValue = context.inputData?.secondValue ?? 0; const doubledValue = context.getStepResult(stepOne)?.doubledValue ?? 0; const incrementedValue = doubledValue + secondValue; if (incrementedValue < 100) { await suspend(); return { incrementedValue: 0 }; } return { incrementedValue }; }, }); // Build the workflow const myWorkflow = new Workflow({ name: 'my-workflow', triggerSchema: z.object({ inputValue: z.number(), }), }); // run workflows in parallel myWorkflow .step(stepOne) .then(stepTwo) .commit(); // Register the workflow export const mastra = new Mastra({ workflows: { registeredWorkflow: myWorkflow }, }) // Get registered workflow from Mastra const registeredWorkflow = mastra.getWorkflow('registeredWorkflow'); const { runId, start } = registeredWorkflow.createRun(); // Start watching the workflow before executing it myWorkflow.watch(async ({ context, activePaths }) => { for (const _path of activePaths) { const stepTwoStatus = context.steps?.stepTwo?.status; if (stepTwoStatus === 'suspended') { console.log("Workflow suspended, resuming with new value"); // Resume the workflow with new context await myWorkflow.resume({ runId, stepId: 'stepTwo', context: { secondValue: 100 }, }); } } }) // Start the workflow execution await start({ triggerData: { inputValue: 45 } }); ``` ## Advanced Example with Multiple Suspension Points Using async/await pattern and suspend payloads This example demonstrates a more complex workflow with multiple suspension points using the async/await pattern. It simulates a content generation workflow that requires human intervention at different stages. 
```ts showLineNumbers copy import { Mastra } from '@mastra/core'; import { Step, Workflow } from '@mastra/core/workflows'; import { z } from 'zod'; // Step 1: Get user input const getUserInput = new Step({ id: 'getUserInput', execute: async ({ context }) => { // In a real application, this might come from a form or API return { userInput: context.triggerData.input }; }, outputSchema: z.object({ userInput: z.string() }), }); // Step 2: Generate content with AI (may suspend for human guidance) const promptAgent = new Step({ id: 'promptAgent', inputSchema: z.object({ guidance: z.string(), }), execute: async ({ context, suspend }) => { const userInput = context.getStepResult(getUserInput)?.userInput; console.log(`Generating content based on: ${userInput}`); const guidance = context.inputData?.guidance; // Simulate AI generating content const initialDraft = generateInitialDraft(userInput); // If confidence is high, return the generated content directly if (initialDraft.confidenceScore > 0.7) { return { modelOutput: initialDraft.content }; } console.log('Low confidence in generated content, suspending for human guidance', {guidance}); // If confidence is low, suspend for human guidance if (!guidance) { // only suspend if no guidance is provided await suspend(); return undefined; } // This code runs after resume with human guidance console.log('Resumed with human guidance'); // Use the human guidance to improve the output return { modelOutput: enhanceWithGuidance(initialDraft.content, guidance), }; }, outputSchema: z.object({ modelOutput: z.string() }).optional(), }); // Step 3: Evaluate the content quality const evaluateTone = new Step({ id: 'evaluateToneConsistency', execute: async ({ context }) => { const content = context.getStepResult(promptAgent)?.modelOutput; // Simulate evaluation return { toneScore: { score: calculateToneScore(content) }, completenessScore: { score: calculateCompletenessScore(content) }, }; }, outputSchema: z.object({ toneScore: z.any(), completenessScore: z.any(), }), }); // Step 4: Improve response if needed (may suspend) const improveResponse = new Step({ id: 'improveResponse', inputSchema: z.object({ improvedContent: z.string(), resumeAttempts: z.number(), }), execute: async ({ context, suspend }) => { const content = context.getStepResult(promptAgent)?.modelOutput; const toneScore = context.getStepResult(evaluateTone)?.toneScore.score ?? 0; const completenessScore = context.getStepResult(evaluateTone)?.completenessScore.score ?? 0; const improvedContent = context.inputData.improvedContent; const resumeAttempts = context.inputData.resumeAttempts ?? 0; // If scores are above threshold, make minor improvements if (toneScore > 0.8 && completenessScore > 0.8) { return { improvedOutput: makeMinorImprovements(content) }; } console.log('Content quality below threshold, suspending for human intervention', {improvedContent, resumeAttempts}); if (!improvedContent) { // Suspend with payload containing content and resume attempts await suspend({ content, scores: { tone: toneScore, completeness: completenessScore }, needsImprovement: toneScore < 0.8 ? 'tone' : 'completeness', resumeAttempts: resumeAttempts + 1, }); return { improvedOutput: content ?? '' }; } console.log('Resumed with human improvements', improvedContent); return { improvedOutput: improvedContent ?? content ?? 
'' }; }, outputSchema: z.object({ improvedOutput: z.string() }).optional(), }); // Step 5: Final evaluation const evaluateImproved = new Step({ id: 'evaluateImprovedResponse', execute: async ({ context }) => { const improvedContent = context.getStepResult(improveResponse)?.improvedOutput; // Simulate final evaluation return { toneScore: { score: calculateToneScore(improvedContent) }, completenessScore: { score: calculateCompletenessScore(improvedContent) }, }; }, outputSchema: z.object({ toneScore: z.any(), completenessScore: z.any(), }), }); // Build the workflow const contentWorkflow = new Workflow({ name: 'content-generation-workflow', triggerSchema: z.object({ input: z.string() }), }); contentWorkflow .step(getUserInput) .then(promptAgent) .then(evaluateTone) .then(improveResponse) .then(evaluateImproved) .commit(); // Register the workflow const mastra = new Mastra({ workflows: { contentWorkflow }, }); // Helper functions (simulated) function generateInitialDraft(input: string = '') { // Simulate AI generating content return { content: `Generated content based on: ${input}`, confidenceScore: 0.6, // Simulate low confidence to trigger suspension }; } function enhanceWithGuidance(content: string = '', guidance: string = '') { return `${content} (Enhanced with guidance: ${guidance})`; } function makeMinorImprovements(content: string = '') { return `${content} (with minor improvements)`; } function calculateToneScore(_: string = '') { return 0.7; // Simulate a score that will trigger suspension } function calculateCompletenessScore(_: string = '') { return 0.9; } // Usage example async function runWorkflow() { const workflow = mastra.getWorkflow('contentWorkflow'); const { runId, start } = workflow.createRun(); let finalResult: any; // Start the workflow const initialResult = await start({ triggerData: { input: 'Create content about sustainable energy' }, }); console.log('Initial workflow state:', initialResult.results); const promptAgentStepResult = initialResult.activePaths.get('promptAgent'); // Check if promptAgent step is suspended if (promptAgentStepResult?.status === 'suspended') { console.log('Workflow suspended at promptAgent step'); console.log('Suspension payload:', promptAgentStepResult?.suspendPayload); // Resume with human guidance const resumeResult1 = await workflow.resume({ runId, stepId: 'promptAgent', context: { guidance: 'Focus more on solar and wind energy technologies', }, }); console.log('Workflow resumed and continued to next steps'); let improveResponseResumeAttempts = 0; let improveResponseStatus = resumeResult1?.activePaths.get('improveResponse')?.status; // Check if improveResponse step is suspended while (improveResponseStatus === 'suspended') { console.log('Workflow suspended at improveResponse step'); console.log('Suspension payload:', resumeResult1?.activePaths.get('improveResponse')?.suspendPayload); const improvedContent = improveResponseResumeAttempts < 3 ? undefined : 'Completely revised content about sustainable energy focusing on solar and wind technologies'; // Resume with human improvements finalResult = await workflow.resume({ runId, stepId: 'improveResponse', context: { improvedContent, resumeAttempts: improveResponseResumeAttempts, }, }); improveResponseResumeAttempts = finalResult?.activePaths.get('improveResponse')?.suspendPayload?.resumeAttempts ?? 
0; improveResponseStatus = finalResult?.activePaths.get('improveResponse')?.status; console.log('Improved response result:', finalResult?.results); } } return finalResult; } // Run the workflow const result = await runWorkflow(); console.log('Workflow completed'); console.log('Final workflow result:', result); ```
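The resume loop above repeatedly checks `activePaths` to see whether a step is still suspended. If you find yourself doing that in several places, a small helper can keep the checks consistent. This is a hypothetical utility, assuming the run result exposes `activePaths` as a `Map` keyed by step id as shown above:

```typescript
// Hypothetical helper: returns true when the given step is suspended in a run result.
function isStepSuspended(
  runResult: { activePaths: Map<string, { status: string }> } | undefined,
  stepId: string,
): boolean {
  return runResult?.activePaths.get(stepId)?.status === 'suspended';
}

// Example usage with the results from the advanced example:
// if (isStepSuspended(initialResult, 'promptAgent')) { /* resume with human guidance */ }
```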




--- title: "Example: Using a Tool as a Step | Workflows | Mastra Docs" description: Example of using Mastra to integrate a custom tool as a step in a workflow. --- import { GithubLink } from '../../../components/github-link'; # Tool as a Workflow step Source: https://mastra.ai/examples/workflows/using-a-tool-as-a-step This example demonstrates how to create and integrate a custom tool as a workflow step, showing how to define input/output schemas and implement the tool's execution logic. ```ts showLineNumbers copy import { createTool } from '@mastra/core/tools'; import { Workflow } from '@mastra/core/workflows'; import { z } from 'zod'; const crawlWebpage = createTool({ id: 'Crawl Webpage', description: 'Crawls a webpage and extracts the text content', inputSchema: z.object({ url: z.string().url(), }), outputSchema: z.object({ rawText: z.string(), }), execute: async ({ context }) => { const response = await fetch(context.triggerData.url); const text = await response.text(); return { rawText: 'This is the text content of the webpage: ' + text }; }, }); const contentWorkflow = new Workflow({ name: 'content-review' }); contentWorkflow.step(crawlWebpage).commit(); const { start } = contentWorkflow.createRun(); const res = await start({ triggerData: { url: 'https://example.com'} }); console.log(res.results); ```




--- title: "Data Mapping with Workflow Variables | Mastra Examples" description: "Learn how to use workflow variables to map data between steps in Mastra workflows." --- # Data Mapping with Workflow Variables Source: https://mastra.ai/examples/workflows/workflow-variables This example demonstrates how to use workflow variables to map data between steps in a Mastra workflow. ## Use Case: User Registration Process In this example, we'll build a simple user registration workflow that: 1. Validates user input 1. Formats the user data 1. Creates a user profile ## Implementation ```typescript showLineNumbers filename="src/mastra/workflows/user-registration.ts" copy import { Step, Workflow } from "@mastra/core/workflows"; import { z } from "zod"; // Define our schemas for better type safety const userInputSchema = z.object({ email: z.string().email(), name: z.string(), age: z.number().min(18), }); const validatedDataSchema = z.object({ isValid: z.boolean(), validatedData: z.object({ email: z.string(), name: z.string(), age: z.number(), }), }); const formattedDataSchema = z.object({ userId: z.string(), formattedData: z.object({ email: z.string(), displayName: z.string(), ageGroup: z.string(), }), }); const profileSchema = z.object({ profile: z.object({ id: z.string(), email: z.string(), displayName: z.string(), ageGroup: z.string(), createdAt: z.string(), }), }); // Define the workflow const registrationWorkflow = new Workflow({ name: "user-registration", triggerSchema: userInputSchema, }); // Step 1: Validate user input const validateInput = new Step({ id: "validateInput", inputSchema: userInputSchema, outputSchema: validatedDataSchema, execute: async ({ context }) => { const { email, name, age } = context; // Simple validation logic const isValid = email.includes('@') && name.length > 0 && age >= 18; return { isValid, validatedData: { email: email.toLowerCase().trim(), name, age, }, }; }, }); // Step 2: Format user data const formatUserData = new Step({ id: "formatUserData", inputSchema: z.object({ validatedData: z.object({ email: z.string(), name: z.string(), age: z.number(), }), }), outputSchema: formattedDataSchema, execute: async ({ context }) => { const { validatedData } = context; // Generate a simple user ID const userId = `user_${Math.floor(Math.random() * 10000)}`; // Format the data const ageGroup = validatedData.age < 30 ? 
"young-adult" : "adult"; return { userId, formattedData: { email: validatedData.email, displayName: validatedData.name, ageGroup, }, }; }, }); // Step 3: Create user profile const createUserProfile = new Step({ id: "createUserProfile", inputSchema: z.object({ userId: z.string(), formattedData: z.object({ email: z.string(), displayName: z.string(), ageGroup: z.string(), }), }), outputSchema: profileSchema, execute: async ({ context }) => { const { userId, formattedData } = context; // In a real app, you would save to a database here return { profile: { id: userId, ...formattedData, createdAt: new Date().toISOString(), }, }; }, }); // Build the workflow with variable mappings registrationWorkflow // First step gets data from the trigger .step(validateInput, { variables: { email: { step: 'trigger', path: 'email' }, name: { step: 'trigger', path: 'name' }, age: { step: 'trigger', path: 'age' }, } }) // Format user data with validated data from previous step .then(formatUserData, { variables: { validatedData: { step: validateInput, path: 'validatedData' }, }, when: { ref: { step: validateInput, path: 'isValid' }, query: { $eq: true }, }, }) // Create profile with data from the format step .then(createUserProfile, { variables: { userId: { step: formatUserData, path: 'userId' }, formattedData: { step: formatUserData, path: 'formattedData' }, }, }) .commit(); export default registrationWorkflow; ``` ## How to Use This Example 1. Create the file as shown above 2. Register the workflow in your Mastra instance 3. Execute the workflow: ```bash curl --location 'http://localhost:4111/api/workflows/user-registration/execute' \ --header 'Content-Type: application/json' \ --data '{ "email": "user@example.com", "name": "John Doe", "age": 25 }' ``` ## Key Takeaways This example demonstrates several important concepts about workflow variables: 1. **Data Mapping**: Variables map data from one step to another, creating a clear data flow. 2. **Path Access**: The `path` property specifies which part of a step's output to use. 3. **Conditional Execution**: The `when` property allows steps to execute conditionally based on previous step outputs. 4. **Type Safety**: Each step defines input and output schemas for type safety, ensuring that the data passed between steps is properly typed. 5. **Explicit Data Dependencies**: By defining input schemas and using variable mappings, the data dependencies between steps are made explicit and clear. For more information on workflow variables, see the [Workflow Variables documentation](../../docs/workflows/variables.mdx). --- title: 'Showcase' description: 'Check out these applications built with Mastra' --- Source: https://mastra.ai/showcase import { ShowcaseGrid } from '../../components/showcase-grid';