--- title: "Agent Tool Selection | Agent Documentation | Mastra" description: Tools are typed functions that can be executed by agents or workflows, with built-in integration access and parameter validation. Each tool has a schema that defines its inputs, an executor function that implements its logic, and access to configured integrations. --- # Agent Tool Selection [EN] Source: https://mastra.ai/en/docs/agents/adding-tools Tools are typed functions that can be executed by agents or workflows, with built-in integration access and parameter validation. Each tool has a schema that defines its inputs, an executor function that implements its logic, and access to configured integrations. ## Creating Tools In this section, we'll walk through the process of creating a tool that can be used by your agents. Let's create a simple tool that fetches current weather information for a given city. ```typescript filename="src/mastra/tools/weatherInfo.ts" copy import { createTool } from "@mastra/core/tools"; import { z } from "zod"; const getWeatherInfo = async (city: string) => { // Replace with an actual API call to a weather service const data = await fetch(`https://api.example.com/weather?city=${city}`).then( (r) => r.json(), ); return data; }; export const weatherInfo = createTool({ id: "Get Weather Information", inputSchema: z.object({ city: z.string(), }), description: `Fetches the current weather information for a given city`, execute: async ({ context: { city } }) => { console.log("Using tool to fetch weather information for", city); return await getWeatherInfo(city); }, }); ``` ## Adding Tools to an Agent Now we'll add the tool to an agent. We'll create an agent that can answer questions about the weather and configure it to use our `weatherInfo` tool. ```typescript filename="src/mastra/agents/weatherAgent.ts" import { Agent } from "@mastra/core/agent"; import { openai } from "@ai-sdk/openai"; import * as tools from "../tools/weatherInfo"; export const weatherAgent = new Agent({ name: "Weather Agent", instructions: "You are a helpful assistant that provides current weather information. When asked about the weather, use the weather information tool to fetch the data.", model: openai("gpt-4o-mini"), tools: { weatherInfo: tools.weatherInfo, }, }); ``` ## Registering the Agent We need to initialize Mastra with our agent. ```typescript filename="src/index.ts" import { Mastra } from "@mastra/core"; import { weatherAgent } from "./agents/weatherAgent"; export const mastra = new Mastra({ agents: { weatherAgent }, }); ``` This registers your agent with Mastra, making it available for use. ## Abort Signals The abort signals from `generate` and `stream` (text generation) are forwarded to the tool execution. You can access them in the second parameter of the execute function and e.g. abort long-running computations or forward them to fetch calls inside tools. 
```typescript
import { Agent } from "@mastra/core/agent";
import { createTool } from "@mastra/core/tools";
import { z } from "zod";

const agent = new Agent({
  name: "Weather agent",
  tools: {
    weather: createTool({
      id: "Get Weather Information",
      description: "Get the weather in a location",
      inputSchema: z.object({ location: z.string() }),
      execute: async ({ context: { location } }, { abortSignal }) => {
        return fetch(
          `https://api.weatherapi.com/v1/current.json?q=${location}`,
          { signal: abortSignal }, // forward the abort signal to fetch
        );
      },
    }),
  },
});

const result = await agent.generate("What is the weather in San Francisco?", {
  abortSignal: myAbortSignal, // signal that will be forwarded to tools
});
```

## Injecting Request/User-Specific Variables

We support dependency injection for tools and workflows. You can pass a container directly to your `generate` or `stream` function calls, or inject it using [server middleware](/docs/deployment/server#Middleware).

Here's an example where we change the temperature scale between Fahrenheit and Celsius:

```typescript filename="src/agents/weather-agent.ts"
import { Agent } from "@mastra/core/agent";
import { createTool } from "@mastra/core/tools";
import { z } from "zod";

export const agent = new Agent({
  name: "Weather agent",
  tools: {
    weather: createTool({
      id: "Get Weather Information",
      description: "Get the weather in a location",
      inputSchema: z.object({ location: z.string() }),
      execute: async ({ context: { location }, container }) => {
        const scale = container.get("temperature-scale");
        const result = await fetch(
          `https://api.weatherapi.com/v1/current.json?q=${location}`,
        );
        const json = await result.json();
        return {
          temperature: scale === "celsius" ? json.temp_c : json.temp_f,
        };
      },
    }),
  },
});
```

```typescript
import { Container } from "@mastra/core/di";
import { agent } from "./agents/weather";

// The "temperature-scale" entry holds either "celsius" or "fahrenheit"
const container = new Container();
container.set("temperature-scale", "celsius");

const result = await agent.generate("What is the weather in San Francisco?", {
  container,
});
```

Let's derive the temperature scale from the user's country using Cloudflare's `CF-IPCountry` header. We'll keep the same agent code as in the example above:

```typescript filename="src/index.ts"
import { Mastra } from "@mastra/core";
import { Container } from "@mastra/core/di";
import { agent as weatherAgent } from "./agents/weather";

export const mastra = new Mastra({
  agents: {
    weather: weatherAgent,
  },
  server: {
    middleware: [
      (c, next) => {
        const country = c.req.header("CF-IPCountry");
        const container: Container = c.get("container");
        container.set(
          "temperature-scale",
          country === "US" ? "fahrenheit" : "celsius",
        );
        return next(); // continue to the route handler
      },
    ],
  },
});
```

## Debugging Tools

You can test tools using Vitest or any other testing framework. Writing unit tests for your tools ensures they behave as expected and helps catch errors early (see the example test at the end of this section).

## Calling an Agent with a Tool

Now we can call the agent, and it will use the tool to fetch the weather information.

## Example: Interacting with the Agent

```typescript filename="src/main.ts"
import { mastra } from "./index";

async function main() {
  const agent = mastra.getAgent("weatherAgent");
  const response = await agent.generate(
    "What's the weather like in New York City today?",
  );
  console.log(response.text);
}

main();
```

The agent will use the `weatherInfo` tool to get the current weather in New York City and respond accordingly.
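As mentioned in the Debugging Tools section above, you can also unit test tools directly. Here's a minimal Vitest sketch for the `weatherInfo` tool; it assumes the tool's `execute` function can be invoked directly with a `context` object, and it stubs the global `fetch` so no real API is called:

```typescript filename="src/mastra/tools/weatherInfo.test.ts"
import { describe, expect, it, vi } from "vitest";
import { weatherInfo } from "./weatherInfo";

describe("weatherInfo", () => {
  it("fetches weather data for the given city", async () => {
    // Stub the global fetch so the test never hits a real weather API
    const fakeResponse = { city: "Paris", temperature: 18 };
    vi.stubGlobal(
      "fetch",
      vi.fn(async () => ({ json: async () => fakeResponse })),
    );

    const result = await weatherInfo.execute?.({ context: { city: "Paris" } });

    expect(result).toEqual(fakeResponse);
    expect(fetch).toHaveBeenCalledOnce();
  });
});
```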
## Vercel AI SDK Tool Format

Mastra supports tools created using the Vercel AI SDK format. You can import and use these tools directly:

```typescript filename="src/mastra/tools/vercelTool.ts" copy
import { tool } from "ai";
import { z } from "zod";

export const weatherInfo = tool({
  description: "Fetches the current weather information for a given city",
  parameters: z.object({
    city: z.string().describe("The city to get weather for"),
  }),
  execute: async ({ city }) => {
    // Replace with actual API call
    const data = await fetch(`https://api.example.com/weather?city=${city}`);
    return data.json();
  },
});
```

You can use Vercel tools alongside Mastra tools in your agents:

```typescript filename="src/mastra/agents/weatherAgent.ts"
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";
import { weatherInfo } from "../tools/vercelTool";
import * as mastraTools from "../tools/mastraTools";

export const weatherAgent = new Agent({
  name: "Weather Agent",
  instructions:
    "You are a helpful assistant that provides weather information.",
  model: openai("gpt-4"),
  tools: {
    weatherInfo, // Vercel tool
    ...mastraTools, // Mastra tools
  },
});
```

Both tool formats work seamlessly within your agent's workflow.

## Tool Design Best Practices

When creating tools for your agents, following these guidelines will help ensure reliable and intuitive tool usage:

### Tool Descriptions

Your tool's main description should focus on its purpose and value:

- Keep descriptions simple and focused on **what** the tool does
- Emphasize the tool's primary use case
- Avoid implementation details in the main description
- Focus on helping the agent understand **when** to use the tool

```typescript
createTool({
  id: "documentSearch",
  description:
    "Access the knowledge base to find information needed to answer user questions",
  // ... rest of tool configuration
});
```

### Parameter Schemas

Technical details belong in the parameter schemas, where they help the agent use the tool correctly:

- Make parameters self-documenting with clear descriptions
- Include default values and their implications
- Provide examples where helpful
- Describe the impact of different parameter choices

```typescript
inputSchema: z.object({
  query: z.string().describe("The search query to find relevant information"),
  limit: z.number().describe(
    "Number of results to return. Higher values provide more context, lower values focus on best matches",
  ),
  options: z.string().describe(
    "Optional configuration. Example: '{\"filter\": \"category=news\"}'",
  ),
}),
```

### Agent Interaction Patterns

Tools are more likely to be used effectively when:

- Queries or tasks are complex enough to clearly require tool assistance
- Agent instructions provide clear guidance on tool usage
- Parameter requirements are well-documented in the schema
- The tool's purpose aligns with the query's needs

### Common Pitfalls

- Overloading the main description with technical details
- Mixing implementation details with usage guidance
- Unclear parameter descriptions or missing examples

Following these practices helps ensure your tools are discoverable and usable by agents while maintaining clean separation between purpose (main description) and implementation details (parameter schemas).
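To tie these guidelines together, here's a hypothetical agent whose instructions tell it when to reach for the `documentSearch` tool described above (the tool module path and agent name are illustrative):

```typescript
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";
import { documentSearch } from "../tools/documentSearch"; // hypothetical tool module

export const supportAgent = new Agent({
  name: "Support Agent",
  // Instructions spell out when the tool should be used,
  // matching the interaction patterns above
  instructions:
    "You answer questions about our product. Always use the documentSearch " +
    "tool to look up relevant documentation before answering. If no relevant " +
    "documents are found, say so rather than guessing.",
  model: openai("gpt-4o-mini"),
  tools: { documentSearch },
});
```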
## Model Context Protocol (MCP) Tools

Mastra also supports tools from MCP-compatible servers through the `@mastra/mcp` package. MCP provides a standardized way for AI models to discover and interact with external tools and resources. This makes it easy to integrate third-party tools into your agents without writing custom integrations.

For detailed information about using MCP tools, including configuration options and best practices, see our [MCP guide](/docs/agents/mcp-guide).

# Adding Voice to Agents

[EN] Source: https://mastra.ai/en/docs/agents/adding-voice

Mastra agents can be enhanced with voice capabilities, allowing them to speak responses and listen to user input. You can configure an agent to use either a single voice provider or combine multiple providers for different operations.

## Using a Single Provider

The simplest way to add voice to an agent is to use a single provider for both speaking and listening:

```typescript
import { playAudio } from "@mastra/node-audio";
import { Agent } from "@mastra/core/agent";
import { OpenAIVoice } from "@mastra/voice-openai";
import { openai } from "@ai-sdk/openai";

// Initialize the voice provider with default settings
const voice = new OpenAIVoice();

// Create an agent with voice capabilities
export const agent = new Agent({
  name: "Agent",
  instructions: `You are a helpful assistant with both STT and TTS capabilities.`,
  model: openai("gpt-4o"),
  voice,
});

// The agent can now use voice for interaction
const audioStream = await agent.voice.speak("Hello, I'm your AI assistant!", {
  filetype: "m4a",
});

playAudio(audioStream!);

try {
  const transcription = await agent.voice.listen(audioStream);
  console.log(transcription);
} catch (error) {
  console.error("Error transcribing audio:", error);
}
```

## Using Multiple Providers

For more flexibility, you can use different providers for speaking and listening using the `CompositeVoice` class:

```typescript
import { Agent } from "@mastra/core/agent";
import { CompositeVoice } from "@mastra/core/voice";
import { OpenAIVoice } from "@mastra/voice-openai";
import { PlayAIVoice } from "@mastra/voice-playai";
import { openai } from "@ai-sdk/openai";

export const agent = new Agent({
  name: "Agent",
  instructions: `You are a helpful assistant with both STT and TTS capabilities.`,
  model: openai("gpt-4o"),

  // Create a composite voice using OpenAI for listening and PlayAI for speaking
  voice: new CompositeVoice({
    input: new OpenAIVoice(),
    output: new PlayAIVoice(),
  }),
});
```

## Working with Audio Streams

The `speak()` and `listen()` methods work with Node.js streams. Here's how to save and load audio files:

### Saving Speech Output

The `speak` method returns a stream that you can pipe to a file or speaker.

```typescript
import { createWriteStream } from "fs";
import path from "path";

// Generate speech and save to file
const audio = await agent.voice.speak("Hello, World!");
const filePath = path.join(process.cwd(), "agent.mp3");
const writer = createWriteStream(filePath);

audio.pipe(writer);

await new Promise((resolve, reject) => {
  writer.on("finish", () => resolve());
  writer.on("error", reject);
});
```

### Transcribing Audio Input

The `listen` method expects a stream of audio data from a microphone or file.
```typescript
import { createReadStream } from "fs";
import path from "path";

// Read audio file and transcribe
const audioFilePath = path.join(process.cwd(), "/agent.m4a");
const audioStream = createReadStream(audioFilePath);

try {
  console.log("Transcribing audio file...");
  const transcription = await agent.voice.listen(audioStream, {
    filetype: "m4a",
  });
  console.log("Transcription:", transcription);
} catch (error) {
  console.error("Error transcribing audio:", error);
}
```

## Speech-to-Speech Voice Interactions

For more dynamic and interactive voice experiences, you can use real-time voice providers that support speech-to-speech capabilities:

```typescript
import { Agent } from "@mastra/core/agent";
import { getMicrophoneStream } from "@mastra/node-audio";
import { OpenAIRealtimeVoice } from "@mastra/voice-openai-realtime";
import { openai } from "@ai-sdk/openai";
import { search, calculate } from "../tools";

// Initialize the realtime voice provider
const voice = new OpenAIRealtimeVoice({
  chatModel: {
    apiKey: process.env.OPENAI_API_KEY,
    model: "gpt-4o-mini-realtime",
  },
  speaker: "alloy",
});

// Create an agent with speech-to-speech voice capabilities
export const agent = new Agent({
  name: "Agent",
  instructions: `You are a helpful assistant with speech-to-speech capabilities.`,
  model: openai("gpt-4o"),
  tools: {
    // Tools configured on Agent are passed to voice provider
    search,
    calculate,
  },
  voice,
});

// Establish a WebSocket connection
await agent.voice.connect();

// Start a conversation
agent.voice.speak("Hello, I'm your AI assistant!");

// Stream audio from a microphone
const microphoneStream = getMicrophoneStream();
agent.voice.send(microphoneStream);

// When done with the conversation
agent.voice.close();
```

### Event System

The realtime voice provider emits several events you can listen for:

```typescript
// Listen for speech audio data sent from voice provider
agent.voice.on("speaking", ({ audio }) => {
  // audio contains ReadableStream or Int16Array audio data
});

// Listen for transcribed text sent from both voice provider and user
agent.voice.on("writing", ({ text, role }) => {
  console.log(`${role} said: ${text}`);
});

// Listen for errors
agent.voice.on("error", (error) => {
  console.error("Voice error:", error);
});
```

## Supported Voice Providers

Mastra supports multiple voice providers for text-to-speech (TTS) and speech-to-text (STT) capabilities:

| Provider | Package | Features | Reference |
|----------|---------|----------|-----------|
| OpenAI | `@mastra/voice-openai` | TTS, STT | [Documentation](/reference/voice/openai) |
| OpenAI Realtime | `@mastra/voice-openai-realtime` | Realtime speech-to-speech | [Documentation](/reference/voice/openai-realtime) |
| ElevenLabs | `@mastra/voice-elevenlabs` | High-quality TTS | [Documentation](/reference/voice/elevenlabs) |
| PlayAI | `@mastra/voice-playai` | TTS | [Documentation](/reference/voice/playai) |
| Google | `@mastra/voice-google` | TTS, STT | [Documentation](/reference/voice/google) |
| Deepgram | `@mastra/voice-deepgram` | STT | [Documentation](/reference/voice/deepgram) |
| Murf | `@mastra/voice-murf` | TTS | [Documentation](/reference/voice/murf) |
| Speechify | `@mastra/voice-speechify` | TTS | [Documentation](/reference/voice/speechify) |
| Sarvam | `@mastra/voice-sarvam` | TTS, STT | [Documentation](/reference/voice/sarvam) |
| Azure | `@mastra/voice-azure` | TTS, STT | [Documentation](/reference/voice/mastra-voice) |
| Cloudflare | `@mastra/voice-cloudflare` | TTS | [Documentation](/reference/voice/mastra-voice) |

For more details on voice capabilities,
see the [Voice API Reference](/reference/voice/mastra-voice). --- title: "Using Agent Memory | Agents | Mastra Docs" description: Documentation on how agents in Mastra use memory to store conversation history and contextual information. --- # Agent Memory [EN] Source: https://mastra.ai/en/docs/agents/agent-memory Agents in Mastra can leverage a powerful memory system to store conversation history, recall relevant information, and maintain persistent context across interactions. This allows agents to have more natural, stateful conversations. ## Enabling Memory for an Agent To enable memory, simply instantiate the `Memory` class and pass it to your agent's configuration. You also need to install the memory package: ```bash npm2yarn copy npm install @mastra/memory ``` ```typescript import { Agent } from "@mastra/core/agent"; import { Memory } from "@mastra/memory"; import { openai } from "@ai-sdk/openai"; // Basic memory setup const memory = new Memory(); const agent = new Agent({ name: "MyMemoryAgent", instructions: "You are a helpful assistant with memory.", model: openai("gpt-4o"), memory: memory, // Attach the memory instance }); ``` This basic setup uses default settings, including LibSQL for storage and FastEmbed for embeddings. For detailed setup instructions, see [Memory](/docs/memory/overview). ## Using Memory in Agent Calls To utilize memory during interactions, you **must** provide `resourceId` and `threadId` when calling the agent's `stream()` or `generate()` methods. - `resourceId`: Typically identifies the user or entity (e.g., `user_123`). - `threadId`: Identifies a specific conversation thread (e.g., `support_chat_456`). ```typescript // Example agent call using memory await agent.stream("Remember my favorite color is blue.", { resourceId: "user_alice", threadId: "preferences_thread", }); // Later in the same thread... const response = await agent.stream("What's my favorite color?", { resourceId: "user_alice", threadId: "preferences_thread", }); // Agent will use memory to recall the favorite color. ``` These IDs ensure that conversation history and context are correctly stored and retrieved for the appropriate user and conversation. ## Next Steps Keep exploring Mastra's [memory capabilities](/docs/memory/overview) like threads, conversation history, semantic recall, and working memory. --- title: "Using MCP With Mastra | Agents | Mastra Docs" description: "Use MCP in Mastra to integrate third party tools and resources in your AI agents." --- # Using MCP With Mastra [EN] Source: https://mastra.ai/en/docs/agents/mcp-guide [Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction) is a standardized way for AI models to discover and interact with external tools and resources. ## Overview MCP in Mastra provides a standardized way to connect to tool servers and supports both stdio and SSE-based connections. ## Installation Using pnpm: ```bash pnpm add @mastra/mcp@latest ``` Using npm: ```bash npm install @mastra/mcp@latest ``` ## Using MCP in Your Code The `MCPConfiguration` class provides a way to manage multiple tool servers in your Mastra applications without managing multiple MCP clients. 
You can configure both stdio-based and SSE-based servers:

```typescript
import { MCPConfiguration } from "@mastra/mcp";
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";

const mcp = new MCPConfiguration({
  servers: {
    // stdio example
    sequential: {
      name: "sequential-thinking",
      server: {
        command: "npx",
        args: ["-y", "@modelcontextprotocol/server-sequential-thinking"],
      },
    },
    // SSE example
    weather: {
      url: new URL("http://localhost:8080/sse"),
      requestInit: {
        headers: {
          Authorization: "Bearer your-token",
        },
      },
    },
  },
});
```

### Tools vs Toolsets

The `MCPConfiguration` class provides two ways to access MCP tools, each suited for different use cases:

#### Using Tools (`getTools()`)

Use this approach when:

- You have a single MCP connection
- The tools are used by a single user/context
- Tool configuration (API keys, credentials) remains constant
- You want to initialize an Agent with a fixed set of tools

```typescript
const agent = new Agent({
  name: "CLI Assistant",
  instructions: "You help users with CLI tasks",
  model: openai("gpt-4o-mini"),
  tools: await mcp.getTools(), // Tools are fixed at agent creation
});
```

#### Using Toolsets (`getToolsets()`)

Use this approach when:

- You need per-request tool configuration
- Tools need different credentials per user
- Running in a multi-user environment (web app, API, etc)
- Tool configuration needs to change dynamically

```typescript
const mcp = new MCPConfiguration({
  servers: {
    example: {
      command: "npx",
      args: ["-y", "@example/fakemcp"],
      env: {
        API_KEY: "your-api-key",
      },
    },
  },
});

// Get the current toolsets configured for this user
const toolsets = await mcp.getToolsets();

// Use the agent with user-specific tool configurations
const response = await agent.stream(
  "What's new in Mastra and how's the weather?",
  {
    toolsets,
  },
);
```

## MCP Registries

MCP servers can be accessed through registries that provide curated collections of tools. We've curated an [MCP Registry Registry](/mcp-registry-registry) to help you find the best places to source MCP servers, but here's how you can use tools from some of our favorites:

### mcp.run Registry

[mcp.run](https://www.mcp.run/) makes it easy for you to call pre-authenticated, secure MCP servers. The tools from mcp.run are free and entirely managed, so your agent only needs an SSE URL and can use any tools a user has installed. MCP servers are grouped into [Profiles](https://docs.mcp.run/user-guide/manage-profiles) and accessed with a unique SSE URL.

For each Profile, you can copy/paste unique, signed URLs into your `MCPConfiguration` like this:

```typescript
const mcp = new MCPConfiguration({
  servers: {
    marketing: {
      url: new URL(process.env.MCP_RUN_SSE_URL!),
    },
  },
});
```

> Important: Each SSE URL on [mcp.run](https://mcp.run) contains a unique signature that should be treated like a password. It's best to read your SSE URL as an environment variable and manage it outside of your application code.

```bash filename=".env" copy
MCP_RUN_SSE_URL=https://www.mcp.run/api/mcp/sse?nonce=...
```

### Composio.dev Registry

[Composio.dev](https://composio.dev) provides a registry of [SSE-based MCP servers](https://mcp.composio.dev) that can be easily integrated with Mastra.
The SSE URL that's generated for Cursor is compatible with Mastra - you can use it directly in your configuration: ```typescript const mcp = new MCPConfiguration({ servers: { googleSheets: { url: new URL("https://mcp.composio.dev/googlesheets/[private-url-path]"), }, gmail: { url: new URL("https://mcp.composio.dev/gmail/[private-url-path]"), }, }, }); ``` When using Composio-provided tools, you can authenticate with services (like Google Sheets or Gmail) directly through conversation with your agent. The tools include authentication capabilities that guide you through the process while chatting. Note: The Composio.dev integration is best suited for single-user scenarios like personal automation, as the SSE URL is tied to your account and can't be used for multiple users. Each URL represents a single account's authentication context. ### Smithery.ai Registry [Smithery.ai](https://smithery.ai) provides a registry of MCP servers that you can easily use with Mastra: ```typescript // Unix/Mac const mcp = new MCPConfiguration({ servers: { sequentialThinking: { command: "npx", args: [ "-y", "@smithery/cli@latest", "run", "@smithery-ai/server-sequential-thinking", "--config", "{}", ], }, }, }); // Windows const mcp = new MCPConfiguration({ servers: { sequentialThinking: { command: "cmd", args: [ "/c", "npx", "-y", "@smithery/cli@latest", "run", "@smithery-ai/server-sequential-thinking", "--config", "{}", ], }, }, }); ``` This example is adapted from the Claude integration example in the Smithery documentation. ## Using the Mastra Documentation Server Looking to use Mastra's MCP documentation server in your IDE? Check out our [MCP Documentation Server guide](/docs/getting-started/mcp-docs-server) to get started. ## Next Steps - Learn more about [MCPConfiguration](/reference/tools/mcp-configuration) - Check out our [example projects](/examples) that use MCP --- title: "Creating and Calling Agents | Agent Documentation | Mastra" description: Overview of agents in Mastra, detailing their capabilities and how they interact with tools, workflows, and external systems. --- # Creating and Calling Agents [EN] Source: https://mastra.ai/en/docs/agents/overview Agents in Mastra are systems where the language model can autonomously decide on a sequence of actions to perform tasks. They have access to tools, workflows, and synced data, enabling them to perform complex tasks and interact with external systems. Agents can invoke your custom functions, utilize third-party APIs through integrations, and access knowledge bases you have built. Agents are like employees who can be used for ongoing projects. They have names, persistent memory, consistent model configurations, and instructions across calls, as well as a set of enabled tools. ## 1. 
Creating an Agent To create an agent in Mastra, you use the `Agent` class and define its properties: ```ts showLineNumbers filename="src/mastra/agents/index.ts" copy import { Agent } from "@mastra/core/agent"; import { openai } from "@ai-sdk/openai"; export const myAgent = new Agent({ name: "My Agent", instructions: "You are a helpful assistant.", model: openai("gpt-4o-mini"), }); ``` **Note:** Ensure that you have set the necessary environment variables, such as your OpenAI API key, in your `.env` file: ```.env filename=".env" copy OPENAI_API_KEY=your_openai_api_key ``` Also, make sure you have the `@mastra/core` package installed: ```bash npm2yarn copy npm install @mastra/core ``` ### Registering the Agent Register your agent with Mastra to enable logging and access to configured tools and integrations: ```ts showLineNumbers filename="src/mastra/index.ts" copy import { Mastra } from "@mastra/core"; import { myAgent } from "./agents"; export const mastra = new Mastra({ agents: { myAgent }, }); ``` ## 2. Generating and streaming text ### Generating text Use the `.generate()` method to have your agent produce text responses: ```ts showLineNumbers filename="src/mastra/index.ts" copy const response = await myAgent.generate([ { role: "user", content: "Hello, how can you assist me today?" }, ]); console.log("Agent:", response.text); ``` For more details about the generate method and its options, see the [generate reference documentation](/reference/agents/generate). ### Streaming responses For more real-time responses, you can stream the agent's response: ```ts showLineNumbers filename="src/mastra/index.ts" copy const stream = await myAgent.stream([ { role: "user", content: "Tell me a story." }, ]); console.log("Agent:"); for await (const chunk of stream.textStream) { process.stdout.write(chunk); } ``` For more details about streaming responses, see the [stream reference documentation](/reference/agents/stream). ## 3. Structured Output Agents can return structured data by providing a JSON Schema or using a Zod schema. ### Using JSON Schema ```typescript const schema = { type: "object", properties: { summary: { type: "string" }, keywords: { type: "array", items: { type: "string" } }, }, additionalProperties: false, required: ["summary", "keywords"], }; const response = await myAgent.generate( [ { role: "user", content: "Please provide a summary and keywords for the following text: ...", }, ], { output: schema, }, ); console.log("Structured Output:", response.object); ``` ### Using Zod You can also use Zod schemas for type-safe structured outputs. First, install Zod: ```bash npm2yarn copy npm install zod ``` Then, define a Zod schema and use it with the agent: ```ts showLineNumbers filename="src/mastra/index.ts" copy import { z } from "zod"; // Define the Zod schema const schema = z.object({ summary: z.string(), keywords: z.array(z.string()), }); // Use the schema with the agent const response = await myAgent.generate( [ { role: "user", content: "Please provide a summary and keywords for the following text: ...", }, ], { output: schema, }, ); console.log("Structured Output:", response.object); ``` ### Using Tools If you need to generate structured output alongside tool calls, you'll need to use the `experimental_output` property instead of `output`. 
Here's how:

```typescript
import { z } from "zod";

const schema = z.object({
  summary: z.string(),
  keywords: z.array(z.string()),
});

const response = await myAgent.generate(
  [
    {
      role: "user",
      content:
        "Please analyze this repository and provide a summary and keywords...",
    },
  ],
  {
    // Use experimental_output to enable both structured output and tool calls
    experimental_output: schema,
  },
);

console.log("Structured Output:", response.object);
```
This allows you to have strong typing and validation for the structured data returned by the agent.

## 4. Multi-Step Tool Use Agents

Agents can be enhanced with tools - functions that extend their capabilities beyond text generation. Tools allow agents to perform calculations, access external systems, and process data. For details on creating and configuring tools, see the [Adding Tools documentation](/docs/agents/adding-tools).

### Using maxSteps

The `maxSteps` parameter controls the maximum number of sequential LLM calls an agent can make, which is particularly important when using tool calls. By default, it is set to 1 to prevent infinite loops in case of misconfigured tools. You can increase this limit based on your use case:

```ts showLineNumbers filename="src/mastra/agents/index.ts" copy
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";
import * as mathjs from "mathjs";
import { z } from "zod";

export const myAgent = new Agent({
  name: "My Agent",
  instructions: "You are a helpful assistant that can solve math problems.",
  model: openai("gpt-4o-mini"),
  tools: {
    calculate: {
      description: "Calculator for mathematical expressions",
      schema: z.object({ expression: z.string() }),
      execute: async ({ expression }) => mathjs.evaluate(expression),
    },
  },
});

const response = await myAgent.generate(
  [
    {
      role: "user",
      content:
        "If a taxi driver earns $9461 per hour and works 12 hours a day, how much do they earn in one day?",
    },
  ],
  {
    maxSteps: 5, // Allow up to 5 tool usage steps
  },
);
```

### Using onStepFinish

You can monitor the progress of multi-step operations using the `onStepFinish` callback. This is useful for debugging or providing progress updates to users.

`onStepFinish` is only available when streaming or generating text without structured output.

```ts showLineNumbers filename="src/mastra/agents/index.ts" copy
const response = await myAgent.generate(
  [{ role: "user", content: "Calculate the taxi driver's daily earnings." }],
  {
    maxSteps: 5,
    onStepFinish: ({ text, toolCalls, toolResults }) => {
      console.log("Step completed:", { text, toolCalls, toolResults });
    },
  },
);
```

### Using onFinish

The `onFinish` callback is available when streaming responses and provides detailed information about the completed interaction. It is called after the LLM has finished generating its response and all tool executions have completed.

This callback receives the final response text, execution steps, token usage statistics, and other metadata that can be useful for monitoring and logging:

```ts showLineNumbers filename="src/mastra/agents/index.ts" copy
const stream = await myAgent.stream(
  [{ role: "user", content: "Calculate the taxi driver's daily earnings." }],
  {
    maxSteps: 5,
    onFinish: ({
      steps,
      text,
      finishReason, // 'stop', 'length', 'tool-calls', etc.
      usage, // token usage statistics
      reasoningDetails, // additional context about the agent's decisions
    }) => {
      console.log("Stream complete:", {
        totalSteps: steps.length,
        finishReason,
        usage,
      });
    },
  },
);
```

## 5. Running Agents

Mastra provides a CLI command `mastra dev` to run your agents behind an API. By default, this looks for exported agents in files in the `src/mastra/agents` directory.

### Starting the Server

```bash
mastra dev
```

This will start the server and make your agent available at `http://localhost:4111/api/agents/myAgent/generate`.
### Interacting with the Agent

You can interact with the agent using `curl` from the command line:

```bash
curl -X POST http://localhost:4111/api/agents/myAgent/generate \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      { "role": "user", "content": "Hello, how can you assist me today?" }
    ]
  }'
```

## Next Steps

- Learn about Agent Memory in the [Agent Memory](./agent-memory.mdx) guide.
- Learn about Agent Tools in the [Agent Tools](./adding-tools.mdx) guide.
- See an example agent in the [Chef Michel](../guides/chef-michel.mdx) example.

---
title: "Discord Community and Bot | Documentation | Mastra"
description: Information about the Mastra Discord community and MCP bot.
---

# Discord Community

[EN] Source: https://mastra.ai/en/docs/community/discord

The Discord server has over 1000 members and serves as the main discussion forum for Mastra. The Mastra team monitors Discord during North American and European business hours, with community members active across other time zones. [Join the Discord server](https://discord.gg/BTYqqHKUrf).

## Discord MCP Bot

In addition to community members, we have an (experimental!) Discord bot that can also help answer questions. It uses [Model Context Protocol (MCP)](/docs/agents/mcp-guide). You can ask it a question with `/ask` (either in public channels or DMs) and clear history (in DMs only) with `/cleardm`.

---
title: "Licensing"
description: "Mastra License"
---

# License

[EN] Source: https://mastra.ai/en/docs/community/licensing

## Elastic License 2.0 (ELv2)

Mastra is licensed under the Elastic License 2.0 (ELv2), a modern license designed to balance open-source principles with sustainable business practices.

### What is Elastic License 2.0?

The Elastic License 2.0 is a source-available license that grants users broad rights to use, modify, and distribute the software while including specific limitations to protect the project's sustainability. It allows:

- Free use for most purposes
- Viewing, modifying, and redistributing the source code
- Creating and distributing derivative works
- Commercial use within your organization

The primary limitation is that you cannot provide Mastra as a hosted or managed service that offers users access to the substantial functionality of the software.

### Why We Chose Elastic License 2.0

We selected the Elastic License 2.0 for several important reasons:

1. **Sustainability**: It enables us to maintain a healthy balance between openness and the ability to sustain long-term development.
2. **Innovation Protection**: It ensures we can continue investing in innovation without concerns about our work being repackaged as competing services.
3. **Community Focus**: It maintains the spirit of open source by allowing users to view, modify, and learn from our code while protecting our ability to support the community.
4. **Business Clarity**: It provides clear guidelines for how Mastra can be used in commercial contexts.
### Building Your Business with Mastra Despite the licensing restrictions, there are numerous ways to build successful businesses using Mastra: #### Allowed Business Models - **Building Applications**: Create and sell applications built with Mastra - **Offering Consulting Services**: Provide expertise, implementation, and customization services - **Developing Custom Solutions**: Build bespoke AI solutions for clients using Mastra - **Creating Add-ons and Extensions**: Develop and sell complementary tools that extend Mastra's functionality - **Training and Education**: Offer courses and educational materials about using Mastra effectively #### Examples of Compliant Usage - A company builds an AI-powered customer service application using Mastra and sells it to clients - A consulting firm offers implementation and customization services for Mastra - A developer creates specialized agents and tools with Mastra and licenses them to other businesses - A startup builds a vertical-specific solution (e.g., healthcare AI assistant) powered by Mastra #### What to Avoid The main restriction is that you cannot offer Mastra itself as a hosted service where users access its core functionality. This means: - Don't create a SaaS platform that is essentially Mastra with minimal modifications - Don't offer a managed Mastra service where customers are primarily paying to use Mastra's features ### Questions About Licensing? If you have specific questions about how the Elastic License 2.0 applies to your use case, please [contact us](https://discord.gg/BTYqqHKUrf) on Discord for clarification. We're committed to supporting legitimate business use cases while protecting the sustainability of the project. --- title: "MastraClient" description: "Learn how to set up and use the Mastra Client SDK" --- # Mastra Client SDK [EN] Source: https://mastra.ai/en/docs/deployment/client The Mastra Client SDK provides a simple and type-safe interface for interacting with your [Mastra Server](/docs/deployment/server) from your client environment. 
## Development Requirements

To ensure smooth local development, make sure you have:

- Node.js 18.x or later installed
- TypeScript 4.7+ (if using TypeScript)
- A modern browser environment with Fetch API support
- Your local Mastra server running (typically on port 4111)

## Installation

```bash copy
npm install @mastra/client-js
```

```bash copy
yarn add @mastra/client-js
```

```bash copy
pnpm add @mastra/client-js
```

## Initialize Mastra Client

To get started, initialize your `MastraClient` with the necessary parameters:

```typescript
import { MastraClient } from "@mastra/client-js";

const client = new MastraClient({
  baseUrl: "http://localhost:4111", // Default Mastra development server port
});
```

### Configuration Options

You can customize the client with various options:

```typescript
const client = new MastraClient({
  // Required
  baseUrl: "http://localhost:4111",

  // Optional configurations for development
  retries: 3, // Number of retry attempts
  backoffMs: 300, // Initial retry backoff time
  maxBackoffMs: 5000, // Maximum retry backoff time
  headers: {
    // Custom headers for development
    "X-Development": "true",
  },
});
```

## Example

Once your `MastraClient` is initialized, you can start making client calls via the type-safe interface:

```typescript
// Get a reference to your local agent
const agent = client.getAgent("dev-agent-id");

// Generate responses
const response = await agent.generate({
  messages: [
    {
      role: "user",
      content: "Hello, I'm testing the local development setup!",
    },
  ],
});
```

## Available Features

The Mastra client exposes all resources served by the Mastra Server:

- [**Agents**](/reference/client-js/agents): Create and manage AI agents, generate responses, and handle streaming interactions
- [**Memory**](/reference/client-js/memory): Manage conversation threads and message history
- [**Tools**](/reference/client-js/tools): Access and execute tools available to agents
- [**Workflows**](/reference/client-js/workflows): Create and manage automated workflows
- [**Vectors**](/reference/client-js/vectors): Handle vector operations for semantic search and similarity matching

## Best Practices

1. **Error Handling**: Implement proper error handling for development scenarios
2. **Environment Variables**: Use environment variables for configuration
3. **Debugging**: Enable detailed logging when needed

```typescript
// Example with error handling and logging
try {
  const agent = client.getAgent("dev-agent-id");
  const response = await agent.generate({
    messages: [{ role: "user", content: "Test message" }],
  });
  console.log("Response:", response);
} catch (error) {
  console.error("Development error:", error);
}
```
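For the environment-variable practice, a small sketch might look like this (assuming a `MASTRA_API_URL` variable that you define yourself):

```typescript
import { MastraClient } from "@mastra/client-js";

// Read the server URL from the environment so the same code works
// in local development and production; fall back to the dev server
const client = new MastraClient({
  baseUrl: process.env.MASTRA_API_URL ?? "http://localhost:4111",
});
```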
---
title: "Serverless Deployment"
description: "Build and deploy Mastra applications using platform-specific deployers or standard HTTP servers"
---

# Serverless Deployment

[EN] Source: https://mastra.ai/en/docs/deployment/deployment

This guide covers deploying Mastra to Cloudflare Workers, Vercel, and Netlify using platform-specific deployers. For self-hosted Node.js server deployment, see the [Creating A Mastra Server](/docs/deployment/server) guide.

## Prerequisites

Before you begin, ensure you have:

- **Node.js** installed (version 18 or higher is recommended)
- If using a platform-specific deployer:
  - An account with your chosen platform
  - Required API keys or credentials

## Serverless Platform Deployers

Platform-specific deployers handle configuration and deployment for:

- **[Cloudflare Workers](/reference/deployer/cloudflare)**
- **[Vercel](/reference/deployer/vercel)**
- **[Netlify](/reference/deployer/netlify)**

As of April 2025, Mastra also offers [Mastra Cloud](https://mastra.ai/cloud-beta), a serverless agent environment with atomic deployments. You can sign up for the waitlist [here](https://mastra.ai/cloud-beta).

### Installing Deployers

```bash copy
# For Cloudflare
npm install @mastra/deployer-cloudflare

# For Vercel
npm install @mastra/deployer-vercel

# For Netlify
npm install @mastra/deployer-netlify
```

### Configuring Deployers

Configure the deployer in your entry file:

```typescript copy showLineNumbers
import { Mastra, createLogger } from '@mastra/core';
import { CloudflareDeployer } from '@mastra/deployer-cloudflare';

export const mastra = new Mastra({
  agents: { /* your agents here */ },
  logger: createLogger({ name: 'MyApp', level: 'debug' }),
  deployer: new CloudflareDeployer({
    scope: 'your-cloudflare-scope',
    projectName: 'your-project-name',
    // See complete configuration options in the reference docs
  }),
});
```

### Deployer Configuration

Each deployer has specific configuration options. Below are basic examples, but refer to the reference documentation for complete details.

#### Cloudflare Deployer

```typescript copy showLineNumbers
new CloudflareDeployer({
  scope: 'your-cloudflare-account-id',
  projectName: 'your-project-name',
  // For complete configuration options, see the reference documentation
})
```

[View Cloudflare Deployer Reference →](/reference/deployer/cloudflare)

#### Vercel Deployer

```typescript copy showLineNumbers
new VercelDeployer({
  teamSlug: 'your-vercel-team-slug',
  projectName: 'your-project-name',
  token: 'your-vercel-token'
  // For complete configuration options, see the reference documentation
})
```

[View Vercel Deployer Reference →](/reference/deployer/vercel)

#### Netlify Deployer

```typescript copy showLineNumbers
new NetlifyDeployer({
  scope: 'your-netlify-team-slug',
  projectName: 'your-project-name',
  token: 'your-netlify-token'
})
```

[View Netlify Deployer Reference →](/reference/deployer/netlify)

## Environment Variables

Required variables:

1. Platform deployer variables (if using platform deployers):
   - Platform credentials
2. Agent API keys:
   - `OPENAI_API_KEY`
   - `ANTHROPIC_API_KEY`
3. Server configuration (for universal deployment):
   - `PORT`: HTTP server port (default: 3000)
   - `HOST`: Server host (default: 0.0.0.0)

## Build Mastra Project

To build your Mastra project for your target platform, run:

```bash
npx mastra build
```

When a deployer is used, the build output is automatically prepared for the target platform. You can then deploy the build output `.mastra/output` via your platform's (Vercel, Netlify, Cloudflare, etc.) CLI/UI.
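As a quick reference for the variables listed above, here is a hypothetical `.env` for a project that deploys with the Vercel deployer and uses OpenAI-backed agents (the platform credential names are illustrative; check your deployer's reference docs for the exact ones):

```bash filename=".env" copy
# Platform credentials (example: a token passed to the Vercel deployer)
VERCEL_TOKEN=your-vercel-token

# Agent provider keys
OPENAI_API_KEY=sk-...

# Server configuration (universal deployment)
PORT=3000
HOST=0.0.0.0
```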
---
title: Deployment Overview
description: Learn about different deployment options for your Mastra applications
---

# Deployment Overview

[EN] Source: https://mastra.ai/en/docs/deployment/overview

Mastra offers multiple deployment options to suit your application's needs, from fully-managed solutions to self-hosted options. This guide will help you understand the available deployment paths and choose the right one for your project.

## Deployment Options

### Mastra Cloud

Mastra Cloud is a deployment platform that connects to your GitHub repository, automatically deploys on code changes, and provides monitoring tools. It includes:

- GitHub repository integration
- Deployment on git push
- Agent testing interface
- Comprehensive logs and traces
- Custom domains for each project

[View Mastra Cloud documentation →](/docs/mastra-cloud/overview)

### With a Server

You can deploy Mastra as a standard Node.js HTTP server, which gives you full control over your infrastructure and deployment environment.

- Custom API routes and middleware
- Configurable CORS and authentication
- Deploy to VMs, containers, or PaaS platforms
- Ideal for integrating with existing Node.js applications

[Server deployment guide →](/docs/deployment/server)

### Serverless Platforms

Mastra provides platform-specific deployers for popular serverless platforms, enabling you to deploy your application with minimal configuration.

- Deploy to Cloudflare Workers, Vercel, or Netlify
- Platform-specific optimizations
- Simplified deployment process
- Automatic scaling through the platform

[Serverless deployment guide →](/docs/deployment/deployment)

## Client Configuration

Once your Mastra application is deployed, you'll need to configure your client to communicate with it. The Mastra Client SDK provides a simple and type-safe interface for interacting with your Mastra server.

- Type-safe API interactions
- Authentication and request handling
- Retries and error handling
- Support for streaming responses

[Client configuration guide →](/docs/deployment/client)

## Choosing a Deployment Option

| Option | Best For | Key Benefits |
| --- | --- | --- |
| **Mastra Cloud** | Teams wanting to ship quickly without infrastructure concerns | Fully-managed, automatic scaling, built-in observability |
| **Server Deployment** | Teams needing maximum control and customization | Full control, custom middleware, integrate with existing apps |
| **Serverless Platforms** | Teams already using Vercel, Netlify, or Cloudflare | Platform integration, simplified deployment, automatic scaling |

---
title: "Creating A Mastra Server"
description: "Configure and customize the Mastra server with middleware and other options"
---

# Creating A Mastra Server

[EN] Source: https://mastra.ai/en/docs/deployment/server

While developing or when you deploy a Mastra application, it runs as an HTTP server that exposes your agents, workflows, and other functionality as API endpoints. This page explains how to configure and customize the server behavior.

## Server Architecture

Mastra uses [Hono](https://hono.dev) as its underlying HTTP server framework. When you build a Mastra application using `mastra build`, it generates a Hono-based HTTP server in the `.mastra` directory.

The server provides:

- API endpoints for all registered agents
- API endpoints for all registered workflows
- Support for custom API routes
- Support for custom middleware
- Configurable timeout
- Configurable port

## Server configuration

You can configure the server `port` and `timeout` in the Mastra instance.

```typescript copy showLineNumbers
import { Mastra } from "@mastra/core";

export const mastra = new Mastra({
  server: {
    port: 3000, // Defaults to 4111
    timeout: 10000, // Defaults to 30000 (30s)
  },
});
```

## Custom API Routes

Mastra provides a list of API routes that are automatically generated based on the registered agents and workflows. You can also define custom API routes on the Mastra instance.
These routes can live in the same file as the Mastra instance or in a separate file. We recommend keeping them in a separate file to keep the Mastra instance clean.

```typescript copy showLineNumbers
import { Mastra } from "@mastra/core";
import { registerApiRoute } from "@mastra/core/server";

export const mastra = new Mastra({
  server: {
    apiRoutes: [
      registerApiRoute("/my-custom-route", {
        method: "GET",
        handler: async (c) => {
          // you have access to the mastra instance here
          const mastra = c.get("mastra");

          // you can use the mastra instance to get agents, workflows, etc.
          const agent = mastra.getAgent("my-agent");

          return c.json({ message: "Hello, world!" });
        },
      }),
    ],
  },
  // Other configuration options
});
```

## Custom CORS Config

Mastra allows you to configure CORS (Cross-Origin Resource Sharing) settings for your server.

```typescript copy showLineNumbers
import { Mastra } from '@mastra/core';

export const mastra = new Mastra({
  server: {
    cors: {
      origin: ['https://example.com'], // Allow specific origins or '*' for all
      allowMethods: ['GET', 'POST', 'PUT', 'DELETE', 'OPTIONS'],
      allowHeaders: ['Content-Type', 'Authorization'],
      credentials: false,
    }
  }
});
```

## Middleware

Mastra allows you to configure custom middleware functions that will be applied to API routes. This is useful for adding authentication, logging, CORS, or other HTTP-level functionality to your API endpoints.

```typescript copy showLineNumbers
import { Mastra } from '@mastra/core';

export const mastra = new Mastra({
  // Other configuration options
  server: {
    middleware: [
      {
        handler: async (c, next) => {
          // Example: Add authentication check
          const authHeader = c.req.header('Authorization');
          if (!authHeader) {
            return new Response('Unauthorized', { status: 401 });
          }

          // Continue to the next middleware or route handler
          await next();
        },
        path: '/api/*',
      },
      // add middleware to all routes
      async (c, next) => {
        // Example: Add request logging
        console.log(`${c.req.method} ${c.req.url}`);
        await next();
      },
    ],
  },
});
```

If you want to add middleware to a single route, you can also specify that when registering the route with `registerApiRoute`.

```typescript copy showLineNumbers
registerApiRoute("/my-custom-route", {
  method: "GET",
  middleware: [
    async (c, next) => {
      // Example: Add request logging
      console.log(`${c.req.method} ${c.req.url}`);
      await next();
    },
  ],
  handler: async (c) => {
    // you have access to the mastra instance here
    const mastra = c.get("mastra");

    // you can use the mastra instance to get agents, workflows, etc.
    const agent = mastra.getAgent("my-agent");

    return c.json({ message: "Hello, world!" });
  },
});
```
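Earlier we recommended keeping custom routes in a separate file; a minimal sketch of that layout might look like this (file and route names are illustrative):

```typescript filename="src/mastra/routes.ts"
import { registerApiRoute } from "@mastra/core/server";

// All custom routes live here, away from the Mastra instance
export const apiRoutes = [
  registerApiRoute("/health", {
    method: "GET",
    handler: async (c) => c.json({ status: "ok" }),
  }),
];
```

```typescript filename="src/mastra/index.ts"
import { Mastra } from "@mastra/core";
import { apiRoutes } from "./routes";

export const mastra = new Mastra({
  server: { apiRoutes },
});
```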
### Middleware Behavior

Each middleware function:

- Receives a Hono context object (`c`) and a `next` function
- Can return a `Response` to short-circuit the request handling
- Can call `next()` to continue to the next middleware or route handler
- Can optionally specify a path pattern (defaults to '/api/*')
- Can inject request-specific data for agent tool calling or workflows

### Common Middleware Use Cases

#### Authentication

```typescript copy
{
  handler: async (c, next) => {
    const authHeader = c.req.header('Authorization');
    if (!authHeader || !authHeader.startsWith('Bearer ')) {
      return new Response('Unauthorized', { status: 401 });
    }
    const token = authHeader.split(' ')[1];
    // Validate token here

    await next();
  },
  path: '/api/*',
}
```

#### CORS Support

```typescript copy
{
  handler: async (c, next) => {
    // Add CORS headers
    c.header("Access-Control-Allow-Origin", "*");
    c.header("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE, OPTIONS");
    c.header("Access-Control-Allow-Headers", "Content-Type, Authorization");

    // Handle preflight requests
    if (c.req.method === "OPTIONS") {
      return new Response(null, { status: 204 });
    }

    await next();
  },
}
```

#### Request Logging

```typescript copy
{
  handler: async (c, next) => {
    const start = Date.now();
    await next();
    const duration = Date.now() - start;
    console.log(`${c.req.method} ${c.req.url} - ${duration}ms`);
  },
}
```

### Special Mastra Headers

When integrating with Mastra Cloud or building custom clients, there are special headers that clients send to identify themselves and enable specific features. Your server middleware can check for these headers to customize behavior:

```typescript copy
{
  handler: async (c, next) => {
    // Check for Mastra-specific headers in incoming requests
    const isFromMastraCloud = c.req.header("x-mastra-cloud") === "true";
    const clientType = c.req.header("x-mastra-client-type"); // e.g., 'js', 'python'
    const isDevPlayground = c.req.header("x-mastra-dev-playground") === "true";

    // Customize behavior based on client information
    if (isFromMastraCloud) {
      // Special handling for Mastra Cloud requests
    }
    await next();
  },
}
```

These headers have the following purposes:

- `x-mastra-cloud`: Indicates that the request is coming from Mastra Cloud
- `x-mastra-client-type`: Specifies the client SDK type (e.g., 'js', 'python')
- `x-mastra-dev-playground`: Indicates that the request is from the development playground

You can use these headers in your middleware to implement client-specific logic or enable features only for certain environments.

## Deployment

Since Mastra builds to a standard Node.js server, you can deploy to any platform that runs Node.js applications:

- Cloud VMs (AWS EC2, DigitalOcean Droplets, GCP Compute Engine)
- Container platforms (Docker, Kubernetes)
- Platform as a Service (Heroku, Railway)
- Self-hosted servers

### Building

Build the application:

```bash copy
# Build from current directory
mastra build

# Or specify a directory
mastra build --dir ./my-project
```

The build process:

1. Locates the entry file (`src/mastra/index.ts` or `src/mastra/index.js`)
2. Creates the `.mastra` output directory
3. Bundles code using Rollup with tree shaking and source maps
4. Generates a [Hono](https://hono.dev) HTTP server

See [`mastra build`](/reference/cli/build) for all options.

### Running the Server

Start the HTTP server:

```bash copy
node .mastra/output/index.mjs
```

## Serverless Deployment

Mastra also supports serverless deployment on Cloudflare Workers, Vercel, and Netlify.
See our [Serverless Deployment](/docs/deployment/deployment) guide for setup instructions.

---
title: "Create your own Eval"
description: "Mastra allows you to create your own evals; here is how."
---

# Create your own Eval

[EN] Source: https://mastra.ai/en/docs/evals/custom-eval

Creating your own eval is as easy as creating a new class: one that extends the `Metric` class and implements the `measure` method.

## Basic example

For a simple example of creating a custom metric that checks if the output contains certain words, see our [Word Inclusion example](/examples/evals/word-inclusion).

## Creating a custom LLM-Judge

A custom LLM judge helps evaluate specific aspects of your AI's responses. Think of it like having an expert reviewer for your particular use case:

- Medical Q&A → Judge checks for medical accuracy and safety
- Customer Service → Judge evaluates tone and helpfulness
- Code Generation → Judge verifies code correctness and style

For a practical example, see how we evaluate [Chef Michel's](/docs/guides/chef-michel) recipes for gluten content in our [Gluten Checker example](/examples/evals/custom-eval).
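As a concrete sketch of the basic pattern, here's a minimal rule-based metric along the lines of the Word Inclusion example (it assumes the `Metric` base class and `MetricResult` type are exported from `@mastra/core/eval`):

```typescript
import { Metric, type MetricResult } from "@mastra/core/eval";

// Scores how many of the expected words appear in the output
export class WordInclusionMetric extends Metric {
  constructor(private expectedWords: string[]) {
    super();
  }

  async measure(input: string, output: string): Promise<MetricResult> {
    const found = this.expectedWords.filter((word) =>
      output.toLowerCase().includes(word.toLowerCase()),
    );

    // Return a normalized score between 0 and 1, as Mastra evals expect
    return {
      score:
        this.expectedWords.length === 0
          ? 1
          : found.length / this.expectedWords.length,
      info: { foundWords: found },
    };
  }
}
```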
## Creating a custom LLM-Judge

A custom LLM judge helps evaluate specific aspects of your AI's responses. Think of it like having an expert reviewer for your particular use case:

- Medical Q&A → Judge checks for medical accuracy and safety
- Customer Service → Judge evaluates tone and helpfulness
- Code Generation → Judge verifies code correctness and style

For a practical example, see how we evaluate [Chef Michel's](/docs/guides/chef-michel) recipes for gluten content in our [Gluten Checker example](/examples/evals/custom-eval).

---
title: "Overview"
description: "Understanding how to evaluate and measure AI agent quality using Mastra evals."
---

# Testing your agents with evals

[EN] Source: https://mastra.ai/en/docs/evals/overview

While traditional software tests have clear pass/fail conditions, AI outputs are non-deterministic — they can vary with the same input. Evals help bridge this gap by providing quantifiable metrics for measuring agent quality.

Evals are automated tests that evaluate agent outputs using model-graded, rule-based, and statistical methods. Each eval returns a normalized score between 0 and 1 that can be logged and compared. Evals can be customized with your own prompts and scoring functions.

Evals can run in the cloud, capturing real-time results, and they can also be part of your CI/CD pipeline, allowing you to test and monitor your agents over time.

## Types of Evals

There are different kinds of evals, each serving a specific purpose. Here are some common types:

1. **Textual Evals**: Evaluate accuracy, reliability, and context understanding of agent responses
2. **Classification Evals**: Measure accuracy in categorizing data based on predefined categories
3. **Tool Usage Evals**: Assess how effectively an agent uses external tools or APIs
4. **Prompt Engineering Evals**: Explore the impact of different instructions and input formats

## Getting Started

Evals need to be added to an agent. Here's an example using the summarization, content similarity, and tone consistency metrics:

```typescript copy showLineNumbers filename="src/mastra/agents/index.ts"
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";
import { SummarizationMetric } from "@mastra/evals/llm";
import {
  ContentSimilarityMetric,
  ToneConsistencyMetric,
} from "@mastra/evals/nlp";

const model = openai("gpt-4o");

export const myAgent = new Agent({
  name: "ContentWriter",
  instructions: "You are a content writer that creates accurate summaries",
  model,
  evals: {
    summarization: new SummarizationMetric(model),
    contentSimilarity: new ContentSimilarityMetric(),
    tone: new ToneConsistencyMetric(),
  },
});
```

You can view eval results in the Mastra dashboard when using `mastra dev`.

## Beyond Automated Testing

While automated evals are valuable, high-performing AI teams often combine them with:

1. **A/B Testing**: Compare different versions with real users
2. **Human Review**: Regular review of production data and traces
3. **Continuous Monitoring**: Track eval metrics over time to detect regressions

## Understanding Eval Results

Each eval metric measures a specific aspect of your agent's output. Here's how to interpret and improve your results:

### Understanding Scores

For any metric:

1. Check the metric documentation to understand the scoring process
2. Look for patterns in when scores change
3. Compare scores across different inputs and contexts
4. Track changes over time to spot trends

### Improving Results

When scores aren't meeting your targets:

1. Check your instructions - Are they clear? Try making them more specific
2. Look at your context - Is it giving the agent what it needs?
3. Simplify your prompts - Break complex tasks into smaller steps
4. Add guardrails - Include specific rules for tricky cases

### Maintaining Quality

Once you're hitting your targets:

1. Monitor stability - Do scores remain consistent?
2. Document what works - Keep notes on successful approaches
3. Test edge cases - Add examples that cover unusual scenarios
4. Fine-tune - Look for ways to improve efficiency

See [Textual Evals](/docs/evals/textual-evals) for more info on what evals can do. For more info on how to create your own evals, see the [Custom Evals](/docs/evals/custom-eval) guide. For running evals in your CI pipeline, see the [Running in CI](/docs/evals/running-in-ci) guide.

---
title: "Running in CI"
description: "Learn how to run Mastra evals in your CI/CD pipeline to monitor agent quality over time."
---

# Running Evals in CI

[EN] Source: https://mastra.ai/en/docs/evals/running-in-ci

Running evals in your CI pipeline provides quantifiable metrics for measuring and monitoring agent quality over time.

## Setting Up CI Integration

We support any testing framework that supports ESM modules. For example, you can use [Vitest](https://vitest.dev/), [Jest](https://jestjs.io/) or [Mocha](https://mochajs.org/) to run evals in your CI/CD pipeline.

```typescript copy showLineNumbers filename="src/mastra/agents/index.test.ts"
import { describe, it, expect } from 'vitest';
import { evaluate } from "@mastra/evals";
import { ToneConsistencyMetric } from "@mastra/evals/nlp";
import { myAgent } from './index';

describe('My Agent', () => {
  it('should validate tone consistency', async () => {
    const metric = new ToneConsistencyMetric();
    const result = await evaluate(myAgent, 'Hello, world!', metric);

    expect(result.score).toBe(1);
  });
});
```

You will need to configure a testSetup and globalSetup script for your testing framework to capture the eval results. This allows Mastra to show these results in your dashboard.
## Framework Configuration ### Vitest Setup Add these files to your project to run evals in your CI/CD pipeline: ```typescript copy showLineNumbers filename="globalSetup.ts" import { globalSetup } from '@mastra/evals'; export default function setup() { globalSetup() } ``` ```typescript copy showLineNumbers filename="testSetup.ts" import { beforeAll } from 'vitest'; import { attachListeners } from '@mastra/evals'; beforeAll(async () => { await attachListeners(); }); ``` ```typescript copy showLineNumbers filename="vitest.config.ts" import { defineConfig } from 'vitest/config' export default defineConfig({ test: { globalSetup: './globalSetup.ts', setupFiles: ['./testSetup.ts'], }, }) ``` ## Storage Configuration To store eval results in Mastra Storage and capture results in the Mastra dashboard: ```typescript copy showLineNumbers filename="testSetup.ts" import { beforeAll } from 'vitest'; import { attachListeners } from '@mastra/evals'; import { mastra } from './your-mastra-setup'; beforeAll(async () => { // Store evals in Mastra Storage (requires storage to be enabled) await attachListeners(mastra); }); ``` With file storage, evals persist and can be queried later. With memory storage, evals are isolated to the test process. --- title: "Textual Evals" description: "Understand how Mastra uses LLM-as-judge methodology to evaluate text quality." --- # Textual Evals [EN] Source: https://mastra.ai/en/docs/evals/textual-evals Textual evals use an LLM-as-judge methodology to evaluate agent outputs. This approach leverages language models to assess various aspects of text quality, similar to how a teaching assistant might grade assignments using a rubric. Each eval focuses on specific quality aspects and returns a score between 0 and 1, providing quantifiable metrics for non-deterministic AI outputs. Mastra provides several eval metrics for assessing Agent outputs. Mastra is not limited to these metrics, and you can also [define your own evals](/docs/evals/custom-eval). ## Why Use Textual Evals? 
Textual evals help ensure your agent: - Produces accurate and reliable responses - Uses context effectively - Follows output requirements - Maintains consistent quality over time ## Available Metrics ### Accuracy and Reliability These metrics evaluate how correct, truthful, and complete your agent's answers are: - [`hallucination`](/reference/evals/hallucination): Detects facts or claims not present in provided context - [`faithfulness`](/reference/evals/faithfulness): Measures how accurately responses represent provided context - [`content-similarity`](/reference/evals/content-similarity): Evaluates consistency of information across different phrasings - [`completeness`](/reference/evals/completeness): Checks if responses include all necessary information - [`answer-relevancy`](/reference/evals/answer-relevancy): Assesses how well responses address the original query - [`textual-difference`](/reference/evals/textual-difference): Measures textual differences between strings ### Understanding Context These metrics evaluate how well your agent uses provided context: - [`context-position`](/reference/evals/context-position): Analyzes where context appears in responses - [`context-precision`](/reference/evals/context-precision): Evaluates whether context chunks are grouped logically - [`context-relevancy`](/reference/evals/context-relevancy): Measures use of appropriate context pieces - [`contextual-recall`](/reference/evals/contextual-recall): Assesses completeness of context usage ### Output Quality These metrics evaluate adherence to format and style requirements: - [`tone`](/reference/evals/tone-consistency): Measures consistency in formality, complexity, and style - [`toxicity`](/reference/evals/toxicity): Detects harmful or inappropriate content - [`bias`](/reference/evals/bias): Detects potential biases in the output - [`prompt-alignment`](/reference/evals/prompt-alignment): Checks adherence to explicit instructions like length restrictions, formatting requirements, or other constraints - [`summarization`](/reference/evals/summarization): Evaluates information retention and conciseness - [`keyword-coverage`](/reference/evals/keyword-coverage): Assesses technical terminology usage --- title: "Licensing" description: "Mastra License" --- # License [EN] Source: https://mastra.ai/en/docs/faq ## Elastic License 2.0 (ELv2) Mastra is licensed under the Elastic License 2.0 (ELv2), a modern license designed to balance open-source principles with sustainable business practices. ### What is Elastic License 2.0? The Elastic License 2.0 is a source-available license that grants users broad rights to use, modify, and distribute the software while including specific limitations to protect the project's sustainability. It allows: - Free use for most purposes - Viewing, modifying, and redistributing the source code - Creating and distributing derivative works - Commercial use within your organization The primary limitation is that you cannot provide Mastra as a hosted or managed service that offers users access to the substantial functionality of the software. ### Why We Chose Elastic License 2.0 We selected the Elastic License 2.0 for several important reasons: 1. **Sustainability**: It enables us to maintain a healthy balance between openness and the ability to sustain long-term development. 2. **Innovation Protection**: It ensures we can continue investing in innovation without concerns about our work being repackaged as competing services. 3. 
**Community Focus**: It maintains the spirit of open source by allowing users to view, modify, and learn from our code while protecting our ability to support the community.
4. **Business Clarity**: It provides clear guidelines for how Mastra can be used in commercial contexts.

### Building Your Business with Mastra

Despite the licensing restrictions, there are numerous ways to build successful businesses using Mastra:

#### Allowed Business Models

- **Building Applications**: Create and sell applications built with Mastra
- **Offering Consulting Services**: Provide expertise, implementation, and customization services
- **Developing Custom Solutions**: Build bespoke AI solutions for clients using Mastra
- **Creating Add-ons and Extensions**: Develop and sell complementary tools that extend Mastra's functionality
- **Training and Education**: Offer courses and educational materials about using Mastra effectively

#### Examples of Compliant Usage

- A company builds an AI-powered customer service application using Mastra and sells it to clients
- A consulting firm offers implementation and customization services for Mastra
- A developer creates specialized agents and tools with Mastra and licenses them to other businesses
- A startup builds a vertical-specific solution (e.g., healthcare AI assistant) powered by Mastra

#### What to Avoid

The main restriction is that you cannot offer Mastra itself as a hosted service where users access its core functionality. This means:

- Don't create a SaaS platform that is essentially Mastra with minimal modifications
- Don't offer a managed Mastra service where customers are primarily paying to use Mastra's features

### Questions About Licensing?

If you have specific questions about how the Elastic License 2.0 applies to your use case, please [contact us](https://discord.gg/BTYqqHKUrf) on Discord for clarification. We're committed to supporting legitimate business use cases while protecting the sustainability of the project.

---
title: "Using with Vercel AI SDK"
description: "Learn how Mastra leverages the Vercel AI SDK library and how you can leverage it further with Mastra"
---

# Using with Vercel AI SDK

[EN] Source: https://mastra.ai/en/docs/frameworks/ai-sdk

Mastra leverages AI SDK's model routing (a unified interface on top of OpenAI, Anthropic, etc.), structured output, and tool calling. We explain this in greater detail in [this blog post](https://mastra.ai/blog/using-ai-sdk-with-mastra).

## Mastra + AI SDK

Mastra acts as a layer on top of AI SDK to help teams productionize their proof-of-concepts quickly and easily.

*Agent interaction trace showing spans, LLM calls, and tool executions*

## Model routing

When creating agents in Mastra, you can specify any AI SDK-supported model:

```typescript
import { openai } from "@ai-sdk/openai";
import { Agent } from "@mastra/core/agent";

const agent = new Agent({
  name: "WeatherAgent",
  instructions: "Instructions for the agent...",
  model: openai("gpt-4-turbo"), // Model comes directly from AI SDK
});

const result = await agent.generate("What is the weather like?");
```

## AI SDK Hooks

Mastra is compatible with AI SDK's hooks for seamless frontend integration:

### useChat

The `useChat` hook enables real-time chat interactions in your frontend application.

- Works with agent data streams, i.e. `.toDataStreamResponse()`
- The useChat `api` defaults to `/api/chat`
- Works with the Mastra REST API agent stream endpoint `{MASTRA_BASE_URL}/agents/:agentId/stream` for data streams, i.e. when no structured output is defined.
```typescript filename="app/api/chat/route.ts" copy import { mastra } from "@/src/mastra"; export async function POST(req: Request) { const { messages } = await req.json(); const myAgent = mastra.getAgent("weatherAgent"); const stream = await myAgent.stream(messages); return stream.toDataStreamResponse(); } ``` ```typescript copy import { useChat } from '@ai-sdk/react'; export function ChatComponent() { const { messages, input, handleInputChange, handleSubmit } = useChat({ api: '/path-to-your-agent-stream-api-endpoint' }); return (
    <div>
      {messages.map(m => (
        <div key={m.id}>
          {m.role}: {m.content}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} placeholder="Say something..." />
      </form>
    </div>
  );
}
```

> **Gotcha**: When using `useChat` with agent memory functionality, make sure to check out the [Agent Memory section](/docs/agents/agent-memory#usechat) for important implementation details.

### useCompletion

For single-turn completions, use the `useCompletion` hook:

- Works with agent data streams, i.e. `.toDataStreamResponse()`
- The useCompletion `api` defaults to `/api/completion`
- Works with the Mastra REST API agent stream endpoint `{MASTRA_BASE_URL}/agents/:agentId/stream` for data streams, i.e. when no structured output is defined.

```typescript filename="app/api/completion/route.ts" copy
import { mastra } from "@/src/mastra";

export async function POST(req: Request) {
  // useCompletion sends a single `prompt` string rather than a messages array
  const { prompt } = await req.json();
  const myAgent = mastra.getAgent("weatherAgent");
  const stream = await myAgent.stream(prompt);

  return stream.toDataStreamResponse();
}
```

```typescript
import { useCompletion } from "@ai-sdk/react";

export function CompletionComponent() {
  const {
    completion,
    input,
    handleInputChange,
    handleSubmit,
  } = useCompletion({
    api: '/path-to-your-agent-stream-api-endpoint'
  });

  return (
    <form onSubmit={handleSubmit}>
      <input value={input} onChange={handleInputChange} placeholder="Ask something..." />
      <p>Completion result: {completion}</p>
    </form>
  );
}
```

### useObject

The `useObject` hook consumes text streams that represent JSON objects and parses them into a complete object based on a schema.

- Works with agent text streams, i.e. `.toTextStreamResponse()`
- Set the useObject `api` to the endpoint that streams your structured output (the example below uses `/api/use-object`)
- Works with the Mastra REST API agent stream endpoint `{MASTRA_BASE_URL}/agents/:agentId/stream` for text streams, i.e. when structured output is defined.

```typescript filename="app/api/use-object/route.ts" copy
import { mastra } from "@/src/mastra";
import { z } from "zod";

export async function POST(req: Request) {
  const { messages } = await req.json();
  const myAgent = mastra.getAgent("weatherAgent");
  const stream = await myAgent.stream(messages, {
    output: z.object({
      weather: z.string(),
    }),
  });

  return stream.toTextStreamResponse();
}
```

```typescript
import { experimental_useObject as useObject } from '@ai-sdk/react';
import { z } from 'zod';

export default function Page() {
  const { object, submit } = useObject({
    api: '/api/use-object',
    schema: z.object({
      weather: z.string(),
    }),
  });

  return (
    <div>
      <button onClick={() => submit({ messages: 'What is the weather in San Francisco?' })}>
        Get weather
      </button>
      {object?.weather && <pre>{object.weather}</pre>}
    </div>
  );
}
```

## Tool Calling

### AI SDK Tool Format

Mastra supports tools created using the AI SDK format, so you can use them directly with Mastra agents. See our tools doc on [Vercel AI SDK Tool Format](/docs/agents/adding-tools#vercel-ai-sdk-tool-format) for more details.

### Client-side tool calling

Mastra leverages AI SDK's tool calling, so what applies in AI SDK still applies here. [Agent Tools](/docs/agents/adding-tools) in Mastra are 100% compatible with AI SDK tools.

Mastra tools also expose an optional `execute` async function. It is optional because you might want to forward tool calls to the client or to a queue instead of executing them in the same process.

One way to leverage client-side tool calling is to use the `@ai-sdk/react` `useChat` hook's `onToolCall` property for client-side tool execution.

## Custom DataStream

In certain scenarios you need to write custom data or message annotations to an agent's dataStream. This can be useful for:

- Streaming additional data to the client
- Passing progress info back to the client in real time

Mastra integrates well with AI SDK to make this possible.

### CreateDataStream

The `createDataStream` function allows you to stream additional data to the client:

```typescript copy
import { createDataStream } from "ai";
import { openai } from "@ai-sdk/openai";
import { Agent } from '@mastra/core/agent';
import { weatherTool } from "../tools/weather-tool"; // adjust to your tool's path

export const weatherAgent = new Agent({
  name: 'Weather Agent',
  instructions: `
      You are a helpful weather assistant that provides accurate weather information.

      Your primary function is to help users get weather details for specific locations. When responding:
      - Always ask for a location if none is provided
      - If the location name isn’t in English, please translate it
      - If giving a location with multiple parts (e.g. "New York, NY"), use the most relevant part (e.g. "New York")
      - Include relevant details like humidity, wind conditions, and precipitation
      - Keep responses concise but informative

      Use the weatherTool to fetch current weather data.
`,
  model: openai('gpt-4o'),
  tools: { weatherTool },
});

const stream = createDataStream({
  async execute(dataStream) {
    // Write data
    dataStream.writeData({ value: 'Hello' });

    // Write annotation
    dataStream.writeMessageAnnotation({ type: 'status', value: 'processing' });

    // Mastra agent stream
    const agentStream = await weatherAgent.stream('What is the weather');

    // Merge agent stream
    agentStream.mergeIntoDataStream(dataStream);
  },
  onError: error => `Custom error: ${error.message}`,
});
```

### CreateDataStreamResponse

The `createDataStreamResponse` function creates a Response object that streams data to the client:

```typescript filename="app/api/chat/route.ts" copy
import { createDataStreamResponse } from "ai";
import { mastra } from "@/src/mastra";

export async function POST(req: Request) {
  const { messages } = await req.json();
  const myAgent = mastra.getAgent("weatherAgent");

  // Mastra agent stream
  const agentStream = await myAgent.stream(messages);

  const response = createDataStreamResponse({
    status: 200,
    statusText: 'OK',
    headers: {
      'Custom-Header': 'value',
    },
    async execute(dataStream) {
      // Write data
      dataStream.writeData({ value: 'Hello' });

      // Write annotation
      dataStream.writeMessageAnnotation({ type: 'status', value: 'processing' });

      // Merge agent stream
      agentStream.mergeIntoDataStream(dataStream);
    },
    onError: error => `Custom error: ${error.message}`,
  });

  return response;
}
```

---
title: "Getting started with Mastra and NextJS | Mastra Guides"
description: Guide on integrating Mastra with NextJS.
--- import { Callout, Steps, Tabs } from "nextra/components"; # Integrate Mastra in your Next.js project [EN] Source: https://mastra.ai/en/docs/frameworks/next-js There are two main ways to integrate Mastra with your Next.js application: as a separate backend service or directly integrated into your Next.js app. ## 1. Separate Backend Integration Best for larger projects where you want to: - Scale your AI backend independently - Keep clear separation of concerns - Have more deployment flexibility ### Create Mastra Backend Create a new Mastra project using our CLI: ```bash copy npx create-mastra@latest ``` ```bash copy npm create mastra ``` ```bash copy yarn create mastra ``` ```bash copy pnpm create mastra ``` For detailed setup instructions, see our [installation guide](/docs/getting-started/installation). ### Install MastraClient ```bash copy npm install @mastra/client-js ``` ```bash copy yarn add @mastra/client-js ``` ```bash copy pnpm add @mastra/client-js ``` ### Use MastraClient Create a client instance and use it in your Next.js application: ```typescript filename="lib/mastra.ts" copy import { MastraClient } from '@mastra/client-js'; // Initialize the client export const mastraClient = new MastraClient({ baseUrl: process.env.NEXT_PUBLIC_MASTRA_API_URL || 'http://localhost:4111', }); ``` Example usage in your React component: ```typescript filename="app/components/SimpleWeather.tsx" copy 'use client' import { mastraClient } from '@/lib/mastra' export function SimpleWeather() { async function handleSubmit(formData: FormData) { const city = formData.get('city') const agent = mastraClient.getAgent('weatherAgent') try { const response = await agent.generate({ messages: [{ role: 'user', content: `What's the weather like in ${city}?` }], }) // Handle the response console.log(response.text) } catch (error) { console.error('Error:', error) } } return (
      <form action={handleSubmit}>
        <input name="city" placeholder="Enter a city" />
        <button type="submit">Get weather</button>
      </form>
  )
}
```

### Deployment

When you're ready to deploy, you can use any of our platform-specific deployers (Vercel, Netlify, Cloudflare) or deploy to any Node.js hosting platform. Check our [deployment guide](/docs/deployment/deployment) for detailed instructions.
## 2. Direct Integration Better for smaller projects or prototypes. This approach bundles Mastra directly with your Next.js application. ### Initialize Mastra in your Next.js Root First, navigate to your Next.js project root and initialize Mastra: ```bash copy cd your-nextjs-app ``` Then run the initialization command: ```bash copy npx mastra@latest init ``` ```bash copy yarn dlx mastra@latest init ``` ```bash copy pnpm dlx mastra@latest init ``` This will set up Mastra in your Next.js project. For more details about init and other configuration options, see our [mastra init reference](/reference/cli/init). ### Configure Next.js Add to your `next.config.js`: ```js filename="next.config.js" copy /** @type {import('next').NextConfig} */ const nextConfig = { serverExternalPackages: ["@mastra/*"], // ... your other Next.js config } module.exports = nextConfig ``` #### Server Actions Example ```typescript filename="app/actions.ts" copy 'use server' import { mastra } from '@/mastra' export async function getWeatherInfo(city: string) { const agent = mastra.getAgent('weatherAgent') const result = await agent.generate(`What's the weather like in ${city}?`) return result } ``` Use it in your component: ```typescript filename="app/components/Weather.tsx" copy 'use client' import { getWeatherInfo } from '../actions' export function Weather() { async function handleSubmit(formData: FormData) { const city = formData.get('city') as string const result = await getWeatherInfo(city) // Handle the result console.log(result) } return (
      <form action={handleSubmit}>
        <input name="city" placeholder="Enter a city" />
        <button type="submit">Get weather</button>
      </form>
  )
}
```

#### API Routes Example

```typescript filename="app/api/chat/route.ts" copy
import { mastra } from '@/mastra'
import { NextResponse } from 'next/server'

export async function POST(req: Request) {
  const { city } = await req.json()
  const agent = mastra.getAgent('weatherAgent')

  const result = await agent.stream(`What's the weather like in ${city}?`)

  return result.toDataStreamResponse()
}
```

### Deployment

When using direct integration, your Mastra instance will be deployed alongside your Next.js application. Ensure you:

- Set up environment variables for your LLM API keys in your deployment platform
- Implement proper error handling for production use (see the sketch below)
- Monitor your AI agent's performance and costs
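As one way to approach the error-handling point above, you might wrap the earlier server action in a `try/catch` so failures return a safe fallback instead of throwing into the UI. This is a sketch; `getWeatherInfoSafe` is a hypothetical variant of the `getWeatherInfo` action shown earlier:

```typescript filename="app/actions.ts" copy
'use server'

import { mastra } from '@/mastra'

// Hypothetical hardened variant of the getWeatherInfo action above
export async function getWeatherInfoSafe(city: string) {
  try {
    const agent = mastra.getAgent('weatherAgent')
    const result = await agent.generate(`What's the weather like in ${city}?`)
    return { ok: true as const, text: result.text }
  } catch (error) {
    // Log server-side; avoid leaking provider errors to the client
    console.error('weatherAgent failed:', error)
    return { ok: false as const, text: 'Unable to fetch weather right now.' }
  }
}
```

Callers can then branch on `ok` instead of wrapping every invocation in its own `try/catch`.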
## Observability

Mastra provides built-in observability features to help you monitor, debug, and optimize your AI operations. This includes:

- Tracing of AI operations and their performance
- Logging of prompts, completions, and errors
- Integration with observability platforms like Langfuse and LangSmith

For detailed setup instructions and configuration options specific to Next.js local development, see our [Next.js Observability Configuration Guide](/docs/deployment/logging-and-tracing#nextjs-configuration).

---
title: "Installing Mastra Locally | Getting Started | Mastra Docs"
description: Guide on installing Mastra and setting up the necessary prerequisites for running it with various LLM providers.
---

import { Callout, Steps, Tabs } from "nextra/components";
import YouTube from "@/components/youtube";

# Installing Mastra Locally

[EN] Source: https://mastra.ai/en/docs/getting-started/installation

To run Mastra, you need access to an LLM. Typically, you'll want to get an API key from an LLM provider such as [OpenAI](https://platform.openai.com/), [Anthropic](https://console.anthropic.com/settings/keys), or [Google Gemini](https://ai.google.dev/gemini-api/docs). You can also run Mastra with a local LLM using [Ollama](https://ollama.ai/).

## Prerequisites

- Node.js `v20.0` or higher
- Access to a [supported large language model (LLM)](/docs/frameworks/ai-sdk)

## Automatic Installation

### Create a New Project

We recommend starting a new Mastra project using `create-mastra`, which will scaffold your project. To create a project, run:

```bash copy
npx create-mastra@latest
```

```bash copy
npm create mastra@latest
```

```bash copy
yarn create mastra@latest
```

```bash copy
pnpm create mastra@latest
```

On installation, you'll be guided through the following prompts:

```bash
What do you want to name your project? my-mastra-app
Choose components to install:
  ◯ Agents (recommended)
  ◯ Tools
  ◯ Workflows
Select default provider:
  ◯ OpenAI (recommended)
  ◯ Anthropic
  ◯ Groq
Would you like to include example code? No / Yes
Turn your IDE into a Mastra expert? (Installs MCP server)
  ◯ Skip for now
  ◯ Cursor
  ◯ Windsurf
```

After the prompts, `create-mastra` will:

1. Set up your project directory with TypeScript
2. Install dependencies
3. Configure your selected components and LLM provider
4. Configure the MCP server in your IDE (if selected) for instant access to docs, examples, and help while you code

**MCP Note:** If you're using a different IDE, you can install the MCP server manually by following the instructions in the [MCP server docs](/docs/getting-started/mcp-docs-server). **Also** note that there are additional steps for [Cursor and Windsurf](/docs/getting-started/mcp-docs-server#after-configuration) to activate the MCP server.

### Set Up your API Key

Add the API key for your configured LLM provider in your `.env` file.

```bash filename=".env" copy
OPENAI_API_KEY=
```

**Non-Interactive mode**: If you prefer to run the command with flags (non-interactive mode) and include the example code, you can use:

```bash copy
npx create-mastra@latest --components agents,tools --llm openai --example
```

**Configure Installation Timeout**: To specify a timeout if installation takes too long, use the `--timeout` flag:

```bash copy
npx create-mastra@latest --timeout
```

**Note for LLMs**: For a quick one-liner that includes examples you can run `npx -y mastra@latest --project-name --example --components "tools,agents,workflows" --llm `.
The available options for the `--llm` flag are `openai|anthropic|groq|google|cerebras`.

## Manual Installation
If you prefer to set up your Mastra project manually, follow these steps:

### Create a New Project

Create a project directory and navigate into it:

```bash copy
mkdir hello-mastra
cd hello-mastra
```

Then, initialize a TypeScript project including the `@mastra/core` package:

```bash copy
npm init -y
npm install typescript tsx @types/node mastra --save-dev
npm install @mastra/core zod @ai-sdk/openai
npx tsc --init
```

```bash copy
pnpm init
pnpm add typescript tsx @types/node mastra --save-dev
pnpm add @mastra/core zod @ai-sdk/openai
pnpm dlx tsc --init
```

```bash copy
yarn init -y
yarn add typescript tsx @types/node mastra --dev
yarn add @mastra/core zod @ai-sdk/openai
yarn dlx tsc --init
```

```bash copy
bun init -y
bun add typescript tsx @types/node mastra --dev
bun add @mastra/core zod @ai-sdk/openai
bunx tsc --init
```

### Initialize TypeScript

Create a `tsconfig.json` file in your project root with the following configuration:

```json copy
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "ES2022",
    "moduleResolution": "bundler",
    "esModuleInterop": true,
    "forceConsistentCasingInFileNames": true,
    "strict": true,
    "skipLibCheck": true,
    "outDir": "dist"
  },
  "include": ["src/**/*"],
  "exclude": ["node_modules", "dist", ".mastra"]
}
```

This TypeScript configuration is optimized for Mastra projects, using modern module resolution and strict type checking.

### Set Up your API Key

Create a `.env` file in your project root directory and add your API key:

```bash filename=".env" copy
OPENAI_API_KEY=your_openai_api_key
```

Replace `your_openai_api_key` with your actual API key.

### Create a Tool

Create a `weather-tool` tool file:

```bash copy
mkdir -p src/mastra/tools && touch src/mastra/tools/weather-tool.ts
```

Then, add the following code to `src/mastra/tools/weather-tool.ts`:

```ts filename="src/mastra/tools/weather-tool.ts" showLineNumbers copy
import { createTool } from "@mastra/core/tools";
import { z } from "zod";

interface WeatherResponse {
  current: {
    time: string;
    temperature_2m: number;
    apparent_temperature: number;
    relative_humidity_2m: number;
    wind_speed_10m: number;
    wind_gusts_10m: number;
    weather_code: number;
  };
}

export const weatherTool = createTool({
  id: "get-weather",
  description: "Get current weather for a location",
  inputSchema: z.object({
    location: z.string().describe("City name"),
  }),
  outputSchema: z.object({
    temperature: z.number(),
    feelsLike: z.number(),
    humidity: z.number(),
    windSpeed: z.number(),
    windGust: z.number(),
    conditions: z.string(),
    location: z.string(),
  }),
  execute: async ({ context }) => {
    return await getWeather(context.location);
  },
});

const getWeather = async (location: string) => {
  const geocodingUrl = `https://geocoding-api.open-meteo.com/v1/search?name=${encodeURIComponent(location)}&count=1`;
  const geocodingResponse = await fetch(geocodingUrl);
  const geocodingData = await geocodingResponse.json();

  if (!geocodingData.results?.[0]) {
    throw new Error(`Location '${location}' not found`);
  }

  const { latitude, longitude, name } = geocodingData.results[0];

  const weatherUrl = `https://api.open-meteo.com/v1/forecast?latitude=${latitude}&longitude=${longitude}&current=temperature_2m,apparent_temperature,relative_humidity_2m,wind_speed_10m,wind_gusts_10m,weather_code`;

  const response = await fetch(weatherUrl);
  const data: WeatherResponse = await response.json();

  return {
    temperature: data.current.temperature_2m,
    feelsLike: data.current.apparent_temperature,
    humidity: data.current.relative_humidity_2m,
    windSpeed: data.current.wind_speed_10m,
    windGust: data.current.wind_gusts_10m,
    conditions: getWeatherCondition(data.current.weather_code),
    location: name,
  };
};

function getWeatherCondition(code: number): string {
  const conditions: Record<number, string> = {
    0: "Clear sky",
    1: "Mainly clear",
    2: "Partly cloudy",
    3: "Overcast",
    45: "Foggy",
    48: "Depositing rime fog",
    51: "Light drizzle",
    53: "Moderate drizzle",
    55: "Dense drizzle",
    56: "Light freezing drizzle",
    57: "Dense freezing drizzle",
    61: "Slight rain",
    63: "Moderate rain",
    65: "Heavy rain",
    66: "Light freezing rain",
    67: "Heavy freezing rain",
    71: "Slight snow fall",
    73: "Moderate snow fall",
    75: "Heavy snow fall",
    77: "Snow grains",
    80: "Slight rain showers",
    81: "Moderate rain showers",
    82: "Violent rain showers",
    85: "Slight snow showers",
    86: "Heavy snow showers",
    95: "Thunderstorm",
    96: "Thunderstorm with slight hail",
    99: "Thunderstorm with heavy hail",
  };
  return conditions[code] || "Unknown";
}
```

### Create an Agent

Create a `weather` agent file:

```bash copy
mkdir -p src/mastra/agents && touch src/mastra/agents/weather.ts
```

Then, add the following code to `src/mastra/agents/weather.ts`:

```ts filename="src/mastra/agents/weather.ts" showLineNumbers copy
import { openai } from "@ai-sdk/openai";
import { Agent } from "@mastra/core/agent";
import { weatherTool } from "../tools/weather-tool";

export const weatherAgent = new Agent({
  name: "Weather Agent",
  instructions: `You are a helpful weather assistant that provides accurate weather information.

Your primary function is to help users get weather details for specific locations. When responding:
- Always ask for a location if none is provided
- If the location name isn’t in English, please translate it
- Include relevant details like humidity, wind conditions, and precipitation
- Keep responses concise but informative

Use the weatherTool to fetch current weather data.`,
  model: openai("gpt-4o-mini"),
  tools: { weatherTool },
});
```

### Register Agent

Finally, create the Mastra entry point in `src/mastra/index.ts` and register your agent:

```ts filename="src/mastra/index.ts" showLineNumbers copy
import { Mastra } from "@mastra/core";

import { weatherAgent } from "./agents/weather";

export const mastra = new Mastra({
  agents: { weatherAgent },
});
```

This registers your agent with Mastra so that `mastra dev` can discover and serve it.

## Existing Project Installation

To add Mastra to an existing project, see our local development docs on [adding Mastra to an existing project](/docs/local-dev/add-to-existing-project).

You can also check out our framework-specific docs, e.g. [Next.js](/docs/frameworks/next-js).

## Start the Mastra Server

Mastra provides commands to serve your agents via REST endpoints.

### Development Server

Run the following command to start the Mastra server:

```bash copy
npm run dev
```

If you have the mastra CLI installed, run:

```bash copy
mastra dev
```

This command creates REST API endpoints for your agents.
### Test the Endpoint You can test the agent's endpoint using `curl` or `fetch`: ```bash copy curl -X POST http://localhost:4111/api/agents/weatherAgent/generate \ -H "Content-Type: application/json" \ -d '{"messages": ["What is the weather in London?"]}' ``` ```js copy showLineNumbers fetch('http://localhost:4111/api/agents/weatherAgent/generate', { method: 'POST', headers: { 'Content-Type': 'application/json', }, body: JSON.stringify({ messages: ['What is the weather in London?'], }), }) .then(response => response.json()) .then(data => { console.log('Agent response:', data.text); }) .catch(error => { console.error('Error:', error); }); ``` ## Use Mastra on the Client To use Mastra in your frontend applications, you can use our type-safe client SDK to interact with your Mastra REST APIs. See the [Mastra Client SDK documentation](/docs/deployment/client) for detailed usage instructions. ## Run from the command line If you'd like to directly call agents from the command line, you can create a script to get an agent and call it: ```ts filename="src/index.ts" showLineNumbers copy import { mastra } from "./mastra"; async function main() { const agent = await mastra.getAgent("weatherAgent"); const result = await agent.generate("What is the weather in London?"); console.log("Agent response:", result.text); } main(); ``` Then, run the script to test that everything is set up correctly: ```bash copy npx tsx src/index.ts ``` This should output the agent's response to your console. --- title: "Using with Cursor/Windsurf | Getting Started | Mastra Docs" description: "Learn how to use the Mastra MCP documentation server in your IDE to turn it into an agentic Mastra expert." --- import YouTube from "@/components/youtube"; # Mastra Tools for your agentic IDE [EN] Source: https://mastra.ai/en/docs/getting-started/mcp-docs-server `@mastra/mcp-docs-server` provides direct access to Mastra's complete knowledge base in Cursor, Windsurf, Cline, or any other IDE that supports MCP. It has access to documentation, code examples, technical blog posts / feature announcements, and package changelogs which your IDE can read to help you build with Mastra. The MCP server tools have been designed to allow an agent to query the specific information it needs to complete a Mastra related task - for example: adding a Mastra feature to an agent, scaffolding a new project, or helping you understand how something works. ## How it works Once it's installed in your IDE you can write prompts and assume the agent will understand everything about Mastra. ### Add features - "Add evals to my agent and write tests" - "Write me a workflow that does the following `[task]`" - "Make a new tool that allows my agent to access `[3rd party API]`" ### Ask about integrations - "Does Mastra work with the AI SDK? How can I use it in my `[React/Svelte/etc]` project?" - "What's the latest Mastra news around MCP?" - "Does Mastra support `[provider]` speech and voice APIs? Show me an example in my code of how I can use it." ### Debug or update existing code - "I'm running into a bug with agent memory, have there been any related changes or bug fixes recently?" - "How does working memory behave in Mastra and how can I use it to do `[task]`? It doesn't seem to work the way I expect." - "I saw there are new workflow features, explain them to me and then update `[workflow]` to use them." **And more** - if you have a question, try asking your IDE and let it look it up for you. 
## Automatic Installation

Run `pnpm create mastra@latest` and select Cursor or Windsurf when prompted to install the MCP server. For other IDEs, or if you already have a Mastra project, install the MCP server by following the instructions below.

## Manual Installation

- **Cursor**: Edit `.cursor/mcp.json` in your project root, or `~/.cursor/mcp.json` for global configuration
- **Windsurf**: Edit `~/.codeium/windsurf/mcp_config.json` (only supports global configuration)

Add the following configuration:

### macOS/Linux

```json
{
  "mcpServers": {
    "mastra": {
      "command": "npx",
      "args": ["-y", "@mastra/mcp-docs-server@latest"]
    }
  }
}
```

### Windows

```json
{
  "mcpServers": {
    "mastra": {
      "command": "cmd",
      "args": ["/c", "npx", "-y", "@mastra/mcp-docs-server@latest"]
    }
  }
}
```

## After Configuration

### Cursor

1. Open Cursor settings
2. Navigate to MCP settings
3. Click "enable" on the Mastra MCP server
4. If you have an agent chat open, you'll need to re-open it or start a new chat to use the MCP server

### Windsurf

1. Fully quit and re-open Windsurf
2. If tool calls start failing, go to Windsurf's MCP settings and re-start the MCP server. This is a common Windsurf MCP issue and isn't related to Mastra.

Right now Cursor's MCP implementation is more stable than Windsurf's. In both IDEs it may take a minute for the MCP server to start the first time as it needs to download the package from npm.

## Available Agent Tools

### Documentation

Access Mastra's complete documentation:

- Getting started / installation
- Guides and tutorials
- API references

### Examples

Browse code examples:

- Complete project structures
- Implementation patterns
- Best practices

### Blog Posts

Search the blog for:

- Technical posts
- Changelog and feature announcements
- AI news and updates

### Package Changes

Track updates for Mastra and `@mastra/*` packages:

- Bug fixes
- New features
- Breaking changes

## Common Issues

1. **Server Not Starting**
   - Ensure npx is installed and working
   - Check for conflicting MCP servers
   - Verify your configuration file syntax
   - On Windows, make sure to use the Windows-specific configuration

2. **Tool Calls Failing**
   - Restart the MCP server and/or your IDE
   - Update to the latest version of your IDE

## Model Capability

[EN] Source: https://mastra.ai/en/docs/getting-started/model-capability

import { ProviderTable } from "@/components/provider-table";

The AI providers support different language models with various capabilities. Not all models support structured output, image input, object generation, tool usage, or tool streaming. Here are the capabilities of popular models:

Source: [https://sdk.vercel.ai/docs/foundations/providers-and-models#model-capabilities](https://sdk.vercel.ai/docs/foundations/providers-and-models#model-capabilities)

---
title: "Local Project Structure | Getting Started | Mastra Docs"
description: Guide on organizing folders and files in Mastra, including best practices and recommended structures.
---

import { FileTree } from 'nextra/components';

# Project Structure

[EN] Source: https://mastra.ai/en/docs/getting-started/project-structure

This page provides a guide for organizing folders and files in Mastra. Mastra is a modular framework, and you can use any of the modules separately or together.

You could write everything in a single file (as we showed in the quick start), or separate each agent, tool, and workflow into their own files.
We don't enforce a specific folder structure, but we do recommend some best practices, and the CLI will scaffold a project with a sensible structure.

## Using the CLI

`mastra init` is an interactive CLI that allows you to:

- **Choose a directory for Mastra files**: Specify where you want the Mastra files to be placed (default is `src/mastra`).
- **Select components to install**: Choose which components you want to include in your project:
  - Agents
  - Tools
  - Workflows
- **Select a default LLM provider**: Choose from supported providers like OpenAI, Anthropic, or Groq.
- **Include example code**: Decide whether to include example code to help you get started.

### Example Project Structure

Assuming you select all components and include example code, your project structure will look like this:

```
root/
├── src/
│   └── mastra/
│       ├── agents/
│       │   └── index.ts
│       ├── tools/
│       │   └── index.ts
│       ├── workflows/
│       │   └── index.ts
│       └── index.ts
├── .env
```

### Top-level Folders

| Folder                 | Description                           |
| ---------------------- | ------------------------------------- |
| `src/mastra`           | Core application folder               |
| `src/mastra/agents`    | Agent configurations and definitions  |
| `src/mastra/tools`     | Custom tool definitions               |
| `src/mastra/workflows` | Workflow definitions                  |

### Top-level Files

| File                  | Description                        |
| --------------------- | ---------------------------------- |
| `src/mastra/index.ts` | Main configuration file for Mastra |
| `.env`                | Environment variables              |

---
title: "Introduction | Mastra Docs"
description: "Mastra is a TypeScript agent framework. It helps you build AI applications and features quickly. It gives you the set of primitives you need: workflows, agents, RAG, integrations, syncs and evals."
---

# About Mastra

[EN] Source: https://mastra.ai/en/docs

Mastra is an open-source TypeScript agent framework. It's designed to give you the primitives you need to build AI applications and features.

You can use Mastra to build [AI agents](/docs/agents/overview.mdx) that have memory and can execute functions, or chain LLM calls in deterministic [workflows](/docs/workflows/overview.mdx). You can chat with your agents in Mastra's [local dev environment](/docs/local-dev/mastra-dev.mdx), feed them application-specific knowledge with [RAG](/docs/rag/overview.mdx), and score their outputs with Mastra's [evals](/docs/evals/overview.mdx).

The main features include:

* **[Model routing](https://sdk.vercel.ai/docs/introduction)**: Mastra uses the [Vercel AI SDK](https://sdk.vercel.ai/docs/introduction) for model routing, providing a unified interface to interact with any LLM provider including OpenAI, Anthropic, and Google Gemini.
* **[Agent memory and tool calling](/docs/agents/agent-memory.mdx)**: With Mastra, you can give your agent tools (functions) that it can call. You can persist agent memory and retrieve it based on recency, semantic similarity, or conversation thread.
* **[Workflow graphs](/docs/workflows/overview.mdx)**: When you want to execute LLM calls in a deterministic way, Mastra gives you a graph-based workflow engine. You can define discrete steps, log inputs and outputs at each step of each run, and pipe them into an observability tool. Mastra workflows have a simple syntax for control flow (`step()`, `.then()`, `.after()`) that allows branching and chaining.
* **[Agent development environment](/docs/local-dev/mastra-dev.mdx)**: When you're developing an agent locally, you can chat with it and see its state and memory in Mastra's agent development environment.
* **[Retrieval-augmented generation (RAG)](/docs/rag/overview.mdx)**: Mastra gives you APIs to process documents (text, HTML, Markdown, JSON) into chunks, create embeddings, and store them in a vector database. At query time, it retrieves relevant chunks to ground LLM responses in your data, with a unified API on top of multiple vector stores (Pinecone, pgvector, etc) and embedding providers (OpenAI, Cohere, etc). * **[Deployment](/docs/deployment/deployment.mdx)**: Mastra supports bundling your agents and workflows within an existing React, Next.js, or Node.js application, or into standalone endpoints. The Mastra deploy helper lets you easily bundle agents and workflows into a Node.js server using Hono, or deploy it onto a serverless platform like Vercel, Cloudflare Workers, or Netlify. * **[Evals](/docs/evals/overview.mdx)**: Mastra provides automated evaluation metrics that use model-graded, rule-based, and statistical methods to assess LLM outputs, with built-in metrics for toxicity, bias, relevance, and factual accuracy. You can also define your own evals. --- title: "Using Mastra Integrations | Mastra Local Development Docs" description: Documentation for Mastra integrations, which are auto-generated, type-safe API clients for third-party services. --- # Using Mastra Integrations [EN] Source: https://mastra.ai/en/docs/integrations Integrations in Mastra are auto-generated, type-safe API clients for third-party services. They can be used as tools for agents or as steps in workflows. ## Installing an Integration Mastra's default integrations are packaged as individually installable npm modules. You can add an integration to your project by installing it via npm and importing it into your Mastra configuration. ### Example: Adding the GitHub Integration 1. **Install the Integration Package** To install the GitHub integration, run: ```bash npm install @mastra/github ``` 2. **Add the Integration to Your Project** Create a new file for your integrations (e.g., `src/mastra/integrations/index.ts`) and import the integration: ```typescript filename="src/mastra/integrations/index.ts" showLineNumbers copy import { GithubIntegration } from "@mastra/github"; export const github = new GithubIntegration({ config: { PERSONAL_ACCESS_TOKEN: process.env.GITHUB_PAT!, }, }); ``` Make sure to replace `process.env.GITHUB_PAT!` with your actual GitHub Personal Access Token or ensure that the environment variable is properly set. 3. **Use the Integration in Tools or Workflows** You can now use the integration when defining tools for your agents or in workflows. ```typescript filename="src/mastra/tools/index.ts" showLineNumbers copy import { createTool } from "@mastra/core"; import { z } from "zod"; import { github } from "../integrations"; export const getMainBranchRef = createTool({ id: "getMainBranchRef", description: "Fetch the main branch reference from a GitHub repository", inputSchema: z.object({ owner: z.string(), repo: z.string(), }), outputSchema: z.object({ ref: z.string().optional(), }), execute: async ({ context }) => { const client = await github.getApiClient(); const mainRef = await client.gitGetRef({ path: { owner: context.owner, repo: context.repo, ref: "heads/main", }, }); return { ref: mainRef.data?.ref }; }, }); ``` In the example above: - We import the `github` integration. - We define a tool called `getMainBranchRef` that uses the GitHub API client to fetch the reference of the main branch of a repository. - The tool accepts `owner` and `repo` as inputs and returns the reference string. 
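If you want to sanity-check the tool before wiring it into an agent, you can call its `execute` function directly. This is a sketch under the assumption that `execute` can be invoked with a `context` object shaped like the input schema, as in the tool definitions above; the file name and repository coordinates are placeholders:

```typescript filename="src/mastra/tools/check-tool.ts" showLineNumbers copy
import { getMainBranchRef } from "./index";

async function main() {
  // Invoke the tool's execute function directly with a context object
  const result = await getMainBranchRef.execute({
    context: { owner: "mastra-ai", repo: "mastra" }, // placeholder repository
  });
  console.log(result.ref); // e.g. "refs/heads/main"
}

main();
```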
## Using Integrations in Agents Once you've defined tools that utilize integrations, you can include these tools in your agents. ```typescript filename="src/mastra/agents/index.ts" showLineNumbers copy import { openai } from "@ai-sdk/openai"; import { Agent } from "@mastra/core/agent"; import { getMainBranchRef } from "../tools"; export const codeReviewAgent = new Agent({ name: "Code Review Agent", instructions: "An agent that reviews code repositories and provides feedback.", model: openai("gpt-4o-mini"), tools: { getMainBranchRef, // other tools... }, }); ``` In this setup: - We create an agent named `Code Review Agent`. - We include the `getMainBranchRef` tool in the agent's available tools. - The agent can now use this tool to interact with GitHub repositories during conversations. ## Environment Configuration Ensure that any required API keys or tokens for your integrations are properly set in your environment variables. For example, with the GitHub integration, you need to set your GitHub Personal Access Token: ```bash GITHUB_PAT=your_personal_access_token ``` Consider using a `.env` file or another secure method to manage sensitive credentials. ### Example: Adding the Mem0 Integration In this example you'll learn how to use the [Mem0](https://mem0.ai) platform to add long-term memory capabilities to an agent via tool-use. This memory integration can work alongside Mastra's own [agent memory features](https://mastra.ai/docs/agents/agent-memory). Mem0 enables your agent to memorize and later remember facts per-user across all interactions with that user, while Mastra's memory works per-thread. Using the two in conjunction will allow Mem0 to store long term memories across conversations/interactions, while Mastra's memory will maintain linear conversation history in individual conversations. 1. **Install the Integration Package** To install the Mem0 integration, run: ```bash npm install @mastra/mem0 ``` 2. **Add the Integration to Your Project** Create a new file for your integrations (e.g., `src/mastra/integrations/index.ts`) and import the integration: ```typescript filename="src/mastra/integrations/index.ts" showLineNumbers copy import { Mem0Integration } from "@mastra/mem0"; export const mem0 = new Mem0Integration({ config: { apiKey: process.env.MEM0_API_KEY!, userId: "alice", }, }); ``` 3. **Use the Integration in Tools or Workflows** You can now use the integration when defining tools for your agents or in workflows. 
```typescript filename="src/mastra/tools/index.ts" showLineNumbers copy
import { createTool } from "@mastra/core";
import { z } from "zod";
import { mem0 } from "../integrations";

export const mem0RememberTool = createTool({
  id: "Mem0-remember",
  description:
    "Remember your agent memories that you've previously saved using the Mem0-memorize tool.",
  inputSchema: z.object({
    question: z
      .string()
      .describe("Question used to look up the answer in saved memories."),
  }),
  outputSchema: z.object({
    answer: z.string().describe("Remembered answer"),
  }),
  execute: async ({ context }) => {
    console.log(`Searching memory "${context.question}"`);
    const memory = await mem0.searchMemory(context.question);
    console.log(`\nFound memory "${memory}"\n`);

    return {
      answer: memory,
    };
  },
});

export const mem0MemorizeTool = createTool({
  id: "Mem0-memorize",
  description:
    "Save information to mem0 so you can remember it later using the Mem0-remember tool.",
  inputSchema: z.object({
    statement: z.string().describe("A statement to save into memory"),
  }),
  execute: async ({ context }) => {
    console.log(`\nCreating memory "${context.statement}"\n`);
    // To reduce latency, memories can be saved async without blocking tool execution
    void mem0.createMemory(context.statement).then(() => {
      console.log(`\nMemory "${context.statement}" saved.\n`);
    });
    return { success: true };
  },
});
```

In the example above:

- We import the `@mastra/mem0` integration.
- We define two tools that use the Mem0 API client to create new memories and recall previously saved memories.
- The remember tool accepts `question` as an input and returns the memory as a string.

## Available Integrations

Mastra provides several built-in integrations, primarily API-key-based integrations that do not require OAuth. Available integrations include GitHub, Stripe, Resend, Firecrawl, and more.

Check [Mastra's codebase](https://github.com/mastra-ai/mastra/tree/main/integrations) or [npm packages](https://www.npmjs.com/search?q=%22%40mastra%22) for a full list of available integrations.

## Conclusion

Integrations in Mastra enable your AI agents and workflows to interact with external services seamlessly. By installing and configuring integrations, you can extend the capabilities of your application to include operations such as fetching data from APIs, sending messages, or managing resources in third-party systems.

Remember to consult the documentation of each integration for specific usage details and to adhere to best practices for security and type safety.

---
title: "Adding to an Existing Project | Mastra Local Development Docs"
description: "Add Mastra to your existing Node.js applications"
---

# Adding to an Existing Project

[EN] Source: https://mastra.ai/en/docs/local-dev/add-to-existing-project

You can add Mastra to an existing project using the CLI:

```bash npm2yarn copy
npm install -g mastra@latest
mastra init
```

Changes made to project:

1. Creates `src/mastra` directory with entry point
2. Adds required dependencies
3. Configures TypeScript compiler options

## Interactive Setup

Running commands without arguments starts a CLI prompt for:

1. Component selection
2. LLM provider configuration
3. API key setup
4.
Example code inclusion ## Non-Interactive Setup To initialize mastra in non-interactive mode use the following command arguments: ```bash Arguments: --components Specify components: agents, tools, workflows --llm-provider LLM provider: openai, anthropic, or groq --add-example Include example implementation --llm-api-key Provider API key --dir Directory for Mastra files (defaults to src/) ``` For more details, refer to the [mastra init CLI documentation](../../reference/cli/init). --- title: "Creating a new Project | Mastra Local Development Docs" description: "Create new Mastra projects or add Mastra to existing Node.js applications using the CLI" --- # Creating a new project [EN] Source: https://mastra.ai/en/docs/local-dev/creating-a-new-project You can create a new project using the `create-mastra` package: ```bash npm2yarn copy npm create mastra@latest ``` You can also create a new project by using the `mastra` CLI directly: ```bash npm2yarn copy npm install -g mastra@latest mastra create ``` ## Interactive Setup Running commands without arguments starts a CLI prompt for: 1. Project name 1. Component selection 2. LLM provider configuration 3. API key setup 4. Example code inclusion ## Non-Interactive Setup To initialize mastra in non-interactive mode use the following command arguments: ```bash Arguments: --components Specify components: agents, tools, workflows --llm-provider LLM provider: openai, anthropic, groq, google, or cerebras --add-example Include example implementation --llm-api-key Provider API key --project-name Project name that will be used in package.json and as the project directory name ``` Generated project structure: ``` my-project/ ├── src/ │ └── mastra/ │ └── index.ts # Mastra entry point ├── package.json └── tsconfig.json ``` --- title: "Inspecting Agents with `mastra dev` | Mastra Local Dev Docs" description: Documentation for the Mastra local development environment for Mastra applications. --- import YouTube from "@/components/youtube"; # Local Development Environment [EN] Source: https://mastra.ai/en/docs/local-dev/mastra-dev Mastra provides a local development environment where you can test your agents, workflows, and tools while developing locally. ## Launch Development Server You can launch the Mastra development environment using the Mastra CLI by running: ```bash mastra dev ``` By default, the server runs at http://localhost:4111, but you can change the port with the `--port` flag. ## Dev Playground `mastra dev` serves a playground UI for interacting with your agents, workflows, and tools. The playground provides dedicated interfaces for testing each component of your Mastra application during development. ### Agent Playground The Agent playground provides an interactive chat interface where you can test and debug your agents during development. Key features include: - **Chat Interface**: Directly interact with your agents to test their responses and behavior. - **Prompt CMS**: Experiment with different system instructions for your agent: - A/B test different prompt versions. - Track performance metrics for each variant. - Select and deploy the most effective prompt version. - **Agent Traces**: View detailed execution traces to understand how your agent processes requests, including: - Prompt construction. - Tool usage. - Decision-making steps. - Response generation. - **Agent Evals**: When you've set up [Agent evaluation metrics](/docs/evals/overview), you can: - Run evaluations directly from the playground. - View evaluation results and metrics. 
- Compare agent performance across different test cases.

### Workflow Playground

The Workflow playground helps you visualize and test your workflow implementations:

- **Workflow Visualization**: Workflow graph visualization.
- **Run Workflows**:
  - Trigger test workflow runs with custom input data.
  - Debug workflow logic and conditions.
  - Simulate different execution paths.
  - View detailed execution logs for each step.
- **Workflow Traces**: Examine detailed execution traces that show:
  - Step-by-step workflow progression.
  - State transitions and data flow.
  - Tool invocations and their results.
  - Decision points and branching logic.
  - Error handling and recovery paths.

### Tools Playground

The Tools playground allows you to test your custom tools in isolation:

- Test individual tools without running a full agent or workflow.
- Input test data and view tool responses.
- Debug tool implementation and error handling.
- Verify tool input/output schemas.
- Monitor tool performance and execution time.

## REST API Endpoints

`mastra dev` also spins up REST API routes for your agents and workflows via the local [Mastra Server](/docs/deployment/server). This allows you to test your API endpoints before deployment. See [Mastra Dev reference](/reference/cli/dev#routes) for more details about all endpoints.

You can then leverage the [Mastra Client](/docs/deployment/client) SDK to interact with your served REST API routes seamlessly.

## OpenAPI Specification

`mastra dev` provides an OpenAPI spec at http://localhost:4111/openapi.json

## Local Dev Architecture

The local development server is designed to run without any external dependencies or containerization. This is achieved through:

- **Dev Server**: Uses [Hono](https://hono.dev) as the underlying framework to power the [Mastra Server](/docs/deployment/server).
- **In-Memory Storage**: Uses [LibSQL](https://libsql.org/) memory adapters for:
  - Agent memory management.
  - Trace storage.
  - Evals storage.
  - Workflow snapshots.
- **Vector Storage**: Uses [FastEmbed](https://github.com/qdrant/fastembed) for:
  - Default embedding generation.
  - Vector storage and retrieval.
  - Semantic search capabilities.

This architecture allows you to start developing immediately without setting up databases or vector stores, while still maintaining production-like behavior in your local environment.

### Model settings

The local development server also lets you configure the model settings in Overview > Model Settings.

You can configure the following settings:

- **Temperature**: Controls randomness in model outputs. Higher values (0-2) produce more creative responses while lower values make outputs more focused and deterministic.
- **Top P**: Sets cumulative probability threshold for token sampling. Lower values (0-1) make outputs more focused by considering only the most likely tokens.
- **Top K**: Limits the number of tokens considered for each generation step. Lower values produce more focused outputs by sampling from fewer options.
- **Frequency Penalty**: Reduces repetition by penalizing tokens based on their frequency in previous text. Higher values (0-2) discourage reuse of common tokens.
- **Presence Penalty**: Reduces repetition by penalizing tokens that appear in previous text. Higher values (0-2) encourage the model to discuss new topics.
- **Max Tokens**: Maximum number of tokens allowed in the model's response. Higher values allow for longer outputs but may increase latency.
- **Max Steps**: Maximum number of steps a workflow or agent can execute before stopping.
Prevents infinite loops and runaway processes. - **Max Retries**: Number of times to retry failed API calls or model requests before giving up. Helps handle temporary failures gracefully. ## Summary `mastra dev` makes it easy to develop, debug, and iterate on your AI logic in a self-contained environment before deploying to production. - [Mastra Dev reference](../../reference/cli/dev.mdx) --- title: Deploying to Mastra Cloud description: GitHub-based deployment process for Mastra applications --- # Deploying to Mastra Cloud [EN] Source: https://mastra.ai/en/docs/mastra-cloud/deploying This page describes the deployment process for Mastra applications to Mastra Cloud using GitHub integration. ## Prerequisites - A GitHub account - A GitHub repository containing a Mastra application - Access to Mastra Cloud ## Deployment Process Mastra Cloud uses a Git-based deployment workflow similar to platforms like Vercel and Netlify: 1. **Import GitHub Repository** - From the Projects dashboard, click "Add new" - Select the repository containing your Mastra application - Click "Import" next to the desired repository 2. **Configure Deployment Settings** - Set the project name (defaults to repository name) - Select branch to deploy (typically `main`) - Configure the Mastra directory path (defaults to `src/mastra`) - Add necessary environment variables (like API keys) 3. **Deploy from Git** - After initial configuration, deployments are triggered by pushes to the selected branch - Mastra Cloud automatically builds and deploys your application - Each deployment creates an atomic snapshot of your agents and workflows ## Automatic Deployments Mastra Cloud follows a Git-driven workflow: 1. Make changes to your Mastra application locally 2. Commit changes to the `main` branch 3. Push to GitHub 4. Mastra Cloud automatically detects the push and creates a new deployment 5. Once the build completes, your application is live ## Deployment Domains Each project receives two URLs: 1. **Project-specific domain**: `https://[project-name].mastra.cloud` - Example: `https://gray-acoustic-helicopter.mastra.cloud` 2. **Deployment-specific domain**: `https://[deployment-id].mastra.cloud` - Example: `https://young-loud-caravan-6156280f-ad56-4ec8-9701-6bb5271fd73d.mastra.cloud` These URLs provide direct access to your deployed agents and workflows. ## Viewing Deployments ![Deployments List](/docs/cloud-agents.png) The deployments section in the dashboard shows: - **Title**: Deployment identifier (based on commit hash) - **Status**: Current state (success or archived) - **Branch**: The branch used (typically `main`) - **Commit**: The Git commit hash - **Updated At**: Timestamp of the deployment Each deployment represents an atomic snapshot of your Mastra application at a specific point in time. ## Interacting with Agents ![Agent Interface](/docs/cloud-agent.png) After deployment, interact with your agents: 1. Navigate to your project in the dashboard 2. Go to the Agents section 3. Select an agent to view its details and interface 4. Use the Chat tab to communicate with your agent 5. View the agent's configuration in the right panel: - Model information (e.g., OpenAI) - Available tools (e.g., getWeather) - Complete system prompt 6. Use suggested prompts (like "What capabilities do you have?") or enter custom messages The interface shows the agent's branch (typically "main") and indicates whether conversation memory is enabled. 
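
Since each deployment exposes the standard Mastra server routes, you can also call a deployed agent over HTTP instead of using the dashboard chat. A minimal sketch, assuming an agent registered as `weatherAgent` and the same `/api/agents/:agentId/generate` route shape the local dev server serves (the domain below reuses the example URL above; the response is assumed to include the generated `text`):

```typescript
// Call a deployed agent on its project domain (URL and agent id are illustrative).
const response = await fetch(
  "https://gray-acoustic-helicopter.mastra.cloud/api/agents/weatherAgent/generate",
  {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      messages: ["What's the weather like in New York City today?"],
    }),
  },
);

const { text } = await response.json();
console.log(text);
```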
## Monitoring Logs The Logs section provides detailed information about your application: - **Time**: When the log entry was created - **Level**: Log level (info, debug) - **Hostname**: Server identification - **Message**: Detailed log information, including: - API initialization - Storage connections - Agent and workflow activity These logs help debug and monitor your application's behavior in the production environment. ## Workflows ![Workflows Interface](/docs/cloud-workflows.png) The Workflows section allows you to view and interact with your deployed workflows: 1. View all workflows in your project 2. Examine workflow structure and steps 3. Access execution history and performance data ## Database Usage Mastra Cloud tracks database utilization metrics: - Number of reads - Number of writes - Storage used (MB) These metrics appear in the project overview, helping you monitor resource consumption. ## Deployment Configuration Configure your deployment through the dashboard: 1. Navigate to your project settings 2. Set environment variables (like `OPENAI_API_KEY`) 3. Configure project-specific settings Changes to configuration require a new deployment to take effect. ## Next Steps After deployment, [trace and monitor execution](/docs/mastra-cloud/observability) using the observability tools. --- title: Observability in Mastra Cloud description: Monitoring and debugging tools for Mastra Cloud deployments --- # Observability in Mastra Cloud [EN] Source: https://mastra.ai/en/docs/mastra-cloud/observability Mastra Cloud records execution data for monitoring and debugging. It captures traces, logs, and runtime information from agents and workflows. ## Agent Interface The agent interface offers three main views, accessible via tabs: 1. **Chat**: Interactive messaging interface to test your agent 2. **Traces**: Detailed execution records 3. **Evaluation**: Agent performance assessment ![Agent Interface with Chat Tab](/docs/cloud-agent.png) ### Chat Interface The Chat tab provides: - Interactive messaging with deployed agents - System response to user queries - Suggested prompt buttons (e.g., "What capabilities do you have?") - Message input area - Branch indicator (e.g., "main") - Note about agent memory limitations ### Agent Configuration Panel The right sidebar displays agent details: - Agent name and deployment identifier - Model information (e.g., "OpenAI") - Tools available to the agent (e.g., "getWeather") - Complete system prompt text This panel provides visibility into how the agent is configured without needing to check the source code. ## Trace System Mastra Cloud records traces for agent and workflow interactions. ### Trace Explorer Interface ![Agent Traces View](/docs/cloud-agent-traces.png) The Trace Explorer interface shows: - All agent and workflow interactions - Specific trace details - Input and output data - Tool calls with parameters and results - Workflow execution paths - Filtering options by type, status, timestamp, and agent/workflow ### Trace Data Structure Each trace contains: 1. **Request Data**: The request that initiated the agent or workflow 2. **Tool Call Records**: Tool calls during execution with parameters 3. **Tool Response Data**: The responses from tool calls 4. **Agent Response Data**: The generated agent response 5. **Execution Timestamps**: Timing information for each execution step 6. **Model Metadata**: Information about model usage and tokens The trace view displays all API calls and results throughout execution. 
This data helps debug tool usage and agent logic flows.

### Agent Interaction Data

Agent interaction traces include:

- User input text
- Agent processing steps
- Tool calls (e.g., weather API calls)
- Parameters and results for each tool call
- Final agent response text

## Dashboard Structure

The Mastra Cloud dashboard contains:

- Project deployment history
- Environment variable configuration
- Agent configuration details (model, system prompt, tools)
- Workflow step visualization
- Deployment URLs
- Recent activity log

## Agent Testing

Test your agents using the Chat interface:

1. Navigate to the Agents section
2. Select the agent you want to test
3. Use the Chat tab to interact with your agent
4. Send messages and view responses
5. Use suggested prompts for common queries
6. Switch to the Traces tab to view execution details

Note that by default, agents do not remember conversation history across sessions. The interface indicates this with the message: "Agent will not remember previous messages. To enable memory for agent see docs."

## Workflow Monitoring

![Workflow Interface](/docs/cloud-workflow.png)

Workflow monitoring shows:

- Diagram of workflow steps and connections
- Status for each workflow step
- Execution details for each step
- Execution trace records
- Multi-step process execution (e.g., weather lookup followed by activity planning)

### Workflow Execution

![Workflow Run Details](/docs/cloud-workflow-run.png)

When examining a specific workflow execution, you can see the detailed steps and their outputs.

## Logs

![Logs Interface](/docs/cloud-logs.png)

The Logs section provides detailed information about your application:

- **Time**: When the log entry was created
- **Level**: Log level (info, debug)
- **Hostname**: Server identification
- **Message**: Detailed log information, including:
  - API initialization
  - Storage connections
  - Agent and workflow activity

## Technical Features

The observability system includes:

- **API Endpoints**: For programmatic access to trace data
- **Structured Trace Format**: JSON format for filtering and query operations
- **Historical Data Storage**: Retention of past execution records
- **Deployment Version Links**: Correlation between traces and deployment versions

## Debugging Patterns

- Compare trace data when testing agent behavior changes
- Use the chat interface to test edge case inputs
- View system prompts to understand agent behavior
- Examine tool call parameters and results
- Verify workflow execution step sequencing
- Identify execution bottlenecks in trace timing data
- Compare trace differences between agent versions

## Support Resources

For technical assistance with observability:

- Review the troubleshooting documentation
- Contact technical support through the dashboard
- Join the [Discord developer channel](https://discord.gg/mastra)

---
title: Mastra Cloud
description: Deployment and monitoring service for Mastra applications
---

# Mastra Cloud

[EN] Source: https://mastra.ai/en/docs/mastra-cloud/overview

Mastra Cloud is a deployment service that runs, manages, and monitors Mastra applications. It works with standard Mastra projects and handles deployment, scaling, and operational tasks.
## Core Functionality - **Atomic Deployments** - Agents and workflows deploy as a single unit - **Project Organization** - Group agents and workflows into projects with assigned URLs - **Environment Variables** - Store configuration securely by environment - **Testing Console** - Send messages to agents through a web interface - **Execution Tracing** - Record agent interactions and tool calls - **Workflow Visualization** - Display workflow steps and execution paths - **Logs** - Standard logging output for debugging - **Platform Compatibility** - Uses the same infrastructure as Cloudflare, Vercel, and Netlify deployers ## Dashboard Components The Mastra Cloud dashboard contains: - **Projects List** - All projects in the account - **Project Details** - Deployments, environment variables, and access URLs - **Deployment History** - Record of deployments with timestamps and status - **Agent Inspector** - Agent configuration view showing models, tools, and system prompts - **Testing Console** - Interface for sending messages to agents - **Trace Explorer** - Records of tool calls, parameters, and responses - **Workflow Viewer** - Diagram of workflow steps and connections ## Technical Implementation Mastra Cloud runs on the same core code as the platform-specific deployers with these modifications: - **Edge Network Distribution** - Geographically distributed execution - **Dynamic Resource Allocation** - Adjusts compute resources based on traffic - **Mastra-specific Runtime** - Runtime optimized for agent execution - **Standard Deployment API** - Consistent deployment interface across environments - **Tracing Infrastructure** - Records all agent and workflow execution steps ## Use Cases Common usage patterns: - Deploying applications without managing infrastructure - Maintaining staging and production environments - Monitoring agent behavior across many requests - Testing agent responses through a web interface - Deploying to multiple regions ## Setup Process 1. [Configure a Mastra Cloud project](/docs/mastra-cloud/setting-up) 2. [Deploy code](/docs/mastra-cloud/deploying) 3. [View execution traces](/docs/mastra-cloud/observability) --- title: Setting Up a Project description: Configuration steps for Mastra Cloud projects --- # Setting Up a Mastra Cloud Project [EN] Source: https://mastra.ai/en/docs/mastra-cloud/setting-up This page describes the steps to set up a project on Mastra Cloud using GitHub integration. ## Prerequisites - A Mastra Cloud account - A GitHub account - A GitHub repository containing a Mastra application ## Project Creation Process 1. **Sign in to Mastra Cloud** - Navigate to the Mastra Cloud dashboard at https://cloud.mastra.ai - Sign in with your account credentials 2. **Add a New Project** - From the "All Projects" view, click the "Add new" button in the top right - This opens the GitHub repository import dialog ![Mastra Cloud Projects Dashboard](/docs/cloud-agents.png) 3. **Import Git Repository** - Search for repositories or select from the list of available GitHub repositories - Click the "Import" button next to the repository you want to deploy 4. 
**Configure Deployment Details** The deployment configuration page includes: - **Repo Name**: The GitHub repository name (read-only) - **Project Name**: Customize the project name (defaults to repo name) - **Branch**: Select the branch to deploy (dropdown, defaults to `main`) - **Project root**: Set the root directory of your project (defaults to `/`) - **Mastra Directory**: Specify where Mastra files are located (defaults to `src/mastra`) - **Build Command**: Optional command to run during build process - **Store Settings**: Configure data storage options - **Environment Variables**: Add key-value pairs for configuration (e.g., API keys) ## Project Structure Requirements Mastra Cloud scans the GitHub repository for: - **Agents**: Agent definitions (e.g., Weather Agent) with models and tools - **Workflows**: Workflow step definitions (e.g., weather-workflow) - **Environment Variables**: Required API keys and configuration variables The repository should contain a standard Mastra project structure for proper detection and deployment. ## Understanding the Dashboard After creating a project, the dashboard shows: ### Project Overview - **Created Date**: When the project was created - **Domains**: URLs for accessing your deployed application - Format: `https://[project-name].mastra.cloud` - Format: `https://[random-id].mastra.cloud` - **Status**: Current deployment status (success or archived) - **Branch**: The branch deployed (typically `main`) - **Environment Variables**: Configured API keys and settings - **Workflows**: List of detected workflows with step counts - **Agents**: List of detected agents with models and tools - **Database Usage**: Reads, writes, and storage statistics ### Deployments Section - List of all deployments with: - Deployment ID (based on commit hash) - Status (success/archived) - Branch - Commit hash - Timestamp ### Logs Section The Logs view displays: - Timestamp for each log entry - Log level (info, debug) - Hostname - Detailed log messages, including: - API startup information - Storage initialization - Agent and workflow activity ## Navigation The sidebar provides access to: - **Overview**: Project summary and statistics - **Deployments**: Deployment history and details - **Logs**: Application logs for debugging - **Agents**: List and configuration of all agents - **Workflows**: List and structure of all workflows - **Settings**: Project configuration options ## Environment Variable Configuration Set environment variables through the dashboard: 1. Navigate to your project in the dashboard 2. Go to the "Environment Variables" section 3. Add or edit variables (such as `OPENAI_API_KEY`) 4. Save the configuration Environment variables are encrypted and made available to your application during deployment and execution. ## Testing Your Deployment After deployment, you can test your agents and workflows using: 1. The custom domain assigned to your project: `https://[project-name].mastra.cloud` 2. The dashboard interface for direct interaction with agents ## Next Steps After setting up your project, automatic deployments occur whenever you push to the `main` branch of your GitHub repository. See the [deployment documentation](/docs/mastra-cloud/deploying) for more details. # Memory Processors [EN] Source: https://mastra.ai/en/docs/memory/memory-processors Memory Processors allow you to modify the list of messages retrieved from memory _before_ they are added to the agent's context window and sent to the LLM. 
This is useful for managing context size, filtering content, and optimizing performance. Processors operate on the messages retrieved based on your memory configuration (e.g., `lastMessages`, `semanticRecall`). They do **not** affect the new incoming user message. ## Built-in Processors Mastra provides built-in processors: ### `TokenLimiter` This processor is used to prevent errors caused by exceeding the LLM's context window limit. It counts the tokens in the retrieved memory messages and removes the oldest messages until the total count is below the specified `limit`. ```typescript copy showLineNumbers {9-12} import { Memory } from "@mastra/memory"; import { TokenLimiter } from "@mastra/memory/processors"; import { Agent } from "@mastra/core/agent"; import { openai } from "@ai-sdk/openai"; const agent = new Agent({ model: openai("gpt-4o"), memory: new Memory({ processors: [ // Ensure the total tokens from memory don't exceed ~127k new TokenLimiter(127000), ], }), }); ``` The `TokenLimiter` uses the `o200k_base` encoding by default (suitable for GPT-4o). You can specify other encodings if needed for different models: ```typescript copy showLineNumbers {6-9} // Import the encoding you need (e.g., for older OpenAI models) import cl100k_base from "js-tiktoken/ranks/cl100k_base"; const memoryForOlderModel = new Memory({ processors: [ new TokenLimiter({ limit: 16000, // Example limit for a 16k context model encoding: cl100k_base, }), ], }); ``` See the [OpenAI cookbook](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken#encodings) or [`js-tiktoken` repo](https://github.com/dqbd/tiktoken) for more on encodings. ### `ToolCallFilter` This processor removes tool calls from the memory messages sent to the LLM. It saves tokens by excluding potentially verbose tool interactions from the context, which is useful if the details aren't needed for future interactions. It's also useful if you always want your agent to call a specific tool again and not rely on previous tool results in memory. ```typescript copy showLineNumbers {5-14} import { Memory } from "@mastra/memory"; import { ToolCallFilter, TokenLimiter } from "@mastra/memory/processors"; const memoryFilteringTools = new Memory({ processors: [ // Example 1: Remove all tool calls/results new ToolCallFilter(), // Example 2: Remove only noisy image generation tool calls/results new ToolCallFilter({ exclude: ["generateImageTool"] }), // Always place TokenLimiter last new TokenLimiter(127000), ], }); ``` ## Applying Multiple Processors You can chain multiple processors. They execute in the order they appear in the `processors` array. The output of one processor becomes the input for the next. **Order matters!** It's generally best practice to place `TokenLimiter` **last** in the chain. This ensures it operates on the final set of messages after other filtering has occurred, providing the most accurate token limit enforcement. ```typescript copy showLineNumbers {7-14} import { Memory } from "@mastra/memory"; import { ToolCallFilter, TokenLimiter } from "@mastra/memory/processors"; // Assume a hypothetical 'PIIFilter' custom processor exists // import { PIIFilter } from './custom-processors'; const memoryWithMultipleProcessors = new Memory({ processors: [ // 1. Filter specific tool calls first new ToolCallFilter({ exclude: ["verboseDebugTool"] }), // 2. Apply custom filtering (e.g., remove hypothetical PII - use with caution) // new PIIFilter(), // 3. 
Apply token limiting as the final step new TokenLimiter(127000), ], }); ```

## Creating Custom Processors

You can create custom logic by extending the base `MemoryProcessor` class.

```typescript copy showLineNumbers
import { Memory, CoreMessage } from "@mastra/memory";
import { TokenLimiter } from "@mastra/memory/processors";
import { MemoryProcessor, MemoryProcessorOpts } from "@mastra/core/memory";

class ConversationOnlyFilter extends MemoryProcessor {
  constructor() {
    // Provide a name for easier debugging if needed
    super({ name: "ConversationOnlyFilter" });
  }

  process(
    messages: CoreMessage[],
    _opts: MemoryProcessorOpts = {}, // Options passed during memory retrieval, rarely needed here
  ): CoreMessage[] {
    // Filter messages based on role
    return messages.filter(
      (msg) => msg.role === "user" || msg.role === "assistant",
    );
  }
}

// Use the custom processor
const memoryWithCustomFilter = new Memory({
  processors: [
    new ConversationOnlyFilter(),
    new TokenLimiter(127000), // Still apply token limiting
  ],
});
```

When creating custom processors, avoid mutating the input `messages` array or its objects directly.

# Memory overview

[EN] Source: https://mastra.ai/en/docs/memory/overview

Memory is how agents manage the context that's available to them; it's a condensation of all chat messages into their context window.

## The Context Window

The context window is the total information visible to the language model at any given time.

In Mastra, context is broken up into three parts: system instructions and information about the user ([working memory](./working-memory.mdx)), recent messages ([message history](#conversation-history)), and older messages that are relevant to the user's query ([semantic recall](./semantic-recall.mdx)).

In addition, we provide [memory processors](./memory-processors.mdx) to trim context or remove information if the context is too long.

## Quick Start

The fastest way to see memory in action is using the built-in development playground.

If you haven't already, create a new Mastra project following the main [Getting Started guide](/docs/getting-started/installation).

**1. Install the memory package:**

```bash npm2yarn copy
npm install @mastra/memory
```

**2. Create an agent and attach a `Memory` instance:**

```typescript filename="src/mastra/agents/index.ts" {10}
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { openai } from "@ai-sdk/openai";

export const myMemoryAgent = new Agent({
  name: "MemoryAgent",
  instructions: "...",
  model: openai("gpt-4o"),
  memory: new Memory(),
});
```

**3. Start the Development Server:**

```bash npm2yarn copy
npm run dev
```

**4. Open the playground (http://localhost:4111) and select your `MemoryAgent`:**

Send a few messages and notice that it remembers information across turns:

```
➡️ You: My favorite color is blue.
⬅️ Agent: Got it! I'll remember that your favorite color is blue.
➡️ You: What is my favorite color?
⬅️ Agent: Your favorite color is blue.
```

## Memory Threads

Mastra organizes memory into threads, which are records that identify specific conversation histories, using two identifiers:

1. **`threadId`**: A specific conversation id (e.g., `support_123`).
2. **`resourceId`**: The user or entity id that owns each thread (e.g., `user_123`, `org_456`).

```typescript {2,3}
const response = await myMemoryAgent.stream("Hello, my name is Alice.", {
  resourceId: "user_alice",
  threadId: "conversation_123",
});
```

**Important:** without these IDs your agent will not use memory, even if memory is properly configured.
The playground handles this for you, but you need to add IDs yourself when using memory in your application.

## Conversation History

By default, the `Memory` instance includes the [last 40 messages](../../reference/memory/Memory.mdx) from the current Memory thread in each new request. This provides the agent with immediate conversational context.

```ts {3}
const memory = new Memory({
  options: {
    lastMessages: 10,
  },
});
```

**Important:** Only send the newest user message in each agent call. Mastra handles retrieving and injecting the necessary history. Sending the full history yourself will cause duplication. See the [AI SDK Memory Example](../../examples/memory/use-chat.mdx) for how to handle this when using the `useChat` frontend hooks.
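
For example, on a later turn you would pass only the new question; the earlier messages are retrieved from the thread automatically. A minimal sketch, reusing the `myMemoryAgent` and IDs from the Quick Start above:

```typescript
// Second turn: send only the newest message.
// Mastra injects the earlier "favorite color" exchange from the thread.
const reply = await myMemoryAgent.stream("What is my favorite color?", {
  resourceId: "user_alice",
  threadId: "conversation_123",
});
```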
### Storage Configuration

Conversation history relies on a [storage adapter](/reference/memory/Memory#parameters) to store messages.

```ts {7-12}
import { Memory } from "@mastra/memory";
import { Agent } from "@mastra/core/agent";
import { LibSQLStore } from "@mastra/core/storage/libsql";

const agent = new Agent({
  memory: new Memory({
    // this is the default storage db if omitted
    storage: new LibSQLStore({
      config: {
        url: "file:local.db",
      },
    }),
  }),
});
```

**Storage code Examples**:

- [LibSQL](/examples/memory/memory-with-libsql)
- [Postgres](/examples/memory/memory-with-pg)
- [Upstash](/examples/memory/memory-with-upstash)

## Next Steps

Now that you understand the core concepts, continue to [semantic recall](./semantic-recall.mdx) to learn how to add RAG memory to your Mastra agents.

Alternatively you can visit the [configuration reference](../../reference/memory/Memory.mdx) for available options, or browse [usage examples](../../examples/memory/use-chat.mdx).

# Semantic Recall

[EN] Source: https://mastra.ai/en/docs/memory/semantic-recall

If you ask your friend what they did last weekend, they will search in their memory for events associated with "last weekend" and then tell you what they did. That's sort of like how semantic recall works in Mastra.

## How Semantic Recall Works

Semantic recall is RAG-based search that helps agents maintain context across longer interactions when messages are no longer within [recent conversation history](./overview.mdx#conversation-history).

It uses vector embeddings of messages for similarity search, integrates with various vector stores, and has configurable context windows around retrieved messages.

*Diagram: Mastra Memory semantic recall*

When it's enabled, new messages are used to query a vector DB for semantically similar messages.

After getting a response from the LLM, all new messages (user, assistant, and tool calls/results) are inserted into the vector DB to be recalled in later interactions.

## Quick Start

Semantic recall is enabled by default, so if you give your agent memory it will be included:

```typescript {9}
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { openai } from "@ai-sdk/openai";

const agent = new Agent({
  name: "SupportAgent",
  instructions: "You are a helpful support agent.",
  model: openai("gpt-4o"),
  memory: new Memory(),
});
```

## Recall configuration

The two main parameters that control semantic recall behavior are:

1. **topK**: How many semantically similar messages to retrieve
2. **messageRange**: How much surrounding context to include with each match

```typescript {5-6}
const agent = new Agent({
  memory: new Memory({
    options: {
      semanticRecall: {
        topK: 3, // Retrieve 3 most similar messages
        messageRange: 2, // Include 2 messages before and after each match
      },
    },
  }),
});
```

### Storage configuration

Semantic recall relies on a [storage and vector db](/reference/memory/Memory#parameters) to store messages and their embeddings.

```ts {8-17}
import { Memory } from "@mastra/memory";
import { Agent } from "@mastra/core/agent";
import { LibSQLStore } from "@mastra/core/storage/libsql";
import { LibSQLVector } from "@mastra/core/vector/libsql";

const agent = new Agent({
  memory: new Memory({
    // this is the default storage db if omitted
    storage: new LibSQLStore({
      config: {
        url: "file:local.db",
      },
    }),
    // this is the default vector db if omitted
    vector: new LibSQLVector({
      connectionUrl: "file:local.db",
    }),
  }),
});
```

**Storage/vector code Examples**:

- [LibSQL](/examples/memory/memory-with-libsql)
- [Postgres](/examples/memory/memory-with-pg)
- [Upstash](/examples/memory/memory-with-upstash)

### Embedder configuration

Semantic recall relies on an [embedding model](/reference/memory/Memory#embedder) to convert messages into embeddings. By default Mastra uses FastEmbed but you can specify another [embedding model](https://sdk.vercel.ai/docs/ai-sdk-core/embeddings).

```ts {7}
import { Memory } from "@mastra/memory";
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";

const agent = new Agent({
  memory: new Memory({
    embedder: openai.embedding("text-embedding-3-small"),
  }),
});
```

### Disabling

There is a performance impact to using semantic recall. New messages are converted into embeddings and used to query a vector database before new messages are sent to the LLM.

Semantic recall is enabled by default but can be disabled when not needed:

```typescript {4}
const agent = new Agent({
  memory: new Memory({
    options: {
      semanticRecall: false,
    },
  }),
});
```

You might want to disable semantic recall in scenarios like:

- When [conversation history](./getting-started.mdx#conversation-history-last-messages) provides sufficient context for the current conversation.
- In performance-sensitive applications, like realtime two-way audio, where the added latency of creating embeddings and running vector queries is noticeable.
import YouTube from "@/components/youtube";

# Working Memory

[EN] Source: https://mastra.ai/en/docs/memory/working-memory

While [conversation history](/docs/memory/overview#conversation-history) and [semantic recall](./semantic-recall.mdx) help agents remember conversations, working memory allows them to maintain persistent information about users across interactions within a thread.

Think of it as the agent's active thoughts or scratchpad – the key information they keep available about the user or task. It's similar to how a person would naturally remember someone's name, preferences, or important details during a conversation.

This is useful for maintaining ongoing state that's always relevant and should always be available to the agent.

## Quick Start

Here's a minimal example of setting up an agent with working memory:

```typescript {12-15}
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { openai } from "@ai-sdk/openai";

// Create agent with working memory enabled
const agent = new Agent({
  name: "PersonalAssistant",
  instructions: "You are a helpful personal assistant.",
  model: openai("gpt-4o"),
  memory: new Memory({
    options: {
      workingMemory: {
        enabled: true,
        use: "tool-call", // Recommended setting
      },
    },
  }),
});
```

## How it Works

Working memory is a block of Markdown text that the agent is able to update over time to store continuously relevant information.

## Custom Templates

Templates guide the agent on what information to track and update in working memory. While a default template is used if none is provided, you'll typically want to define a custom template tailored to your agent's specific use case to ensure it remembers the most relevant information.

Here's an example of a custom template. In this example the agent will store the user's name, location, timezone, etc. as soon as the user sends a message containing any of the info:

```typescript {5-28}
const memory = new Memory({
  options: {
    workingMemory: {
      enabled: true,
      template: `
# User Profile

## Personal Info

- Name:
- Location:
- Timezone:

## Preferences

- Communication Style: [e.g., Formal, Casual]
- Project Goal:
- Key Deadlines:
  - [Deadline 1]: [Date]
  - [Deadline 2]: [Date]

## Session State

- Last Task Discussed:
- Open Questions:
  - [Question 1]
  - [Question 2]
`,
    },
  },
});
```

If your agent is not properly updating working memory when you expect it to, you can add system instructions on _how_ and _when_ to use this template in your agent's `instructions` setting.

## Examples

- [Streaming working memory](/examples/memory/streaming-working-memory)
- [Using a working memory template](/examples/memory/streaming-working-memory-advanced)

---
title: "Logging | Mastra Observability Documentation"
description: Documentation on effective logging in Mastra, crucial for understanding application behavior and improving AI accuracy.
---

import Image from "next/image";

# Logging

[EN] Source: https://mastra.ai/en/docs/observability/logging

In Mastra, logs can detail when certain functions run, what input data they receive, and how they respond.

## Basic Setup

Here's a minimal example that sets up a **console logger** at the `INFO` level. This will print out informational messages and above (i.e., `INFO`, `WARN`, `ERROR`) to the console.

```typescript filename="mastra.config.ts" showLineNumbers copy
import { Mastra } from "@mastra/core";
import { createLogger } from "@mastra/core/logger";

export const mastra = new Mastra({
  // Other Mastra configuration...
  logger: createLogger({
    name: "Mastra",
    level: "info",
  }),
});
```

In this configuration:

- `name: "Mastra"` specifies the name to group logs under.
- `level: "info"` sets the minimum severity of logs to record.
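
Once the logger is configured, you can call its methods wherever you have a `Logger` instance. A minimal sketch (the standalone logger and the messages here are illustrative):

```typescript copy
import { createLogger } from "@mastra/core/logger";

const logger = createLogger({ name: "Mastra", level: "info" });

logger.debug("Fetching weather data"); // suppressed at the "info" level
logger.info("Weather agent started");
logger.warn("Weather API responded slowly");
logger.error("Weather API request failed");
```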
## Configuration

- For more details on the options you can pass to `createLogger()`, see the [createLogger reference documentation](/reference/observability/create-logger.mdx).
- Once you have a `Logger` instance, you can call its methods (e.g., `.info()`, `.warn()`, `.error()`); these are covered in the [Logger instance reference documentation](/reference/observability/logger.mdx).
- If you want to send your logs to an external service for centralized collection, analysis, or storage, you can configure other logger types such as Upstash Redis. Consult the [createLogger reference documentation](/reference/observability/create-logger.mdx) for details on parameters like `url`, `token`, and `key` when using the `UPSTASH` logger type.

---
title: "Next.js Tracing | Mastra Observability Documentation"
description: "Set up OpenTelemetry tracing for Next.js applications"
---

# Next.js Tracing

[EN] Source: https://mastra.ai/en/docs/observability/nextjs-tracing

Next.js requires additional configuration to enable OpenTelemetry tracing.

### Step 1: Next.js Configuration

Start by enabling the instrumentation hook in your Next.js config:

```ts filename="next.config.ts" showLineNumbers copy
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  experimental: {
    instrumentationHook: true, // Not required in Next.js 15+
  },
};

export default nextConfig;
```

### Step 2: Mastra Configuration

Configure your Mastra instance:

```typescript filename="mastra.config.ts" copy
import { Mastra } from "@mastra/core";

export const mastra = new Mastra({
  // ... other config
  telemetry: {
    serviceName: "your-project-name",
    enabled: true,
  },
});
```

### Step 3: Configure your providers

If you're using Next.js, you have two options for setting up OpenTelemetry instrumentation:

#### Option 1: Using a Custom Exporter

The default approach, which works across providers, is to configure a custom exporter:

1. Install the required dependencies (example using Langfuse):

```bash copy
npm install @opentelemetry/api langfuse-vercel
```

2. Create an instrumentation file:

```ts filename="instrumentation.ts" copy
import {
  NodeSDK,
  ATTR_SERVICE_NAME,
  Resource,
} from "@mastra/core/telemetry/otel-vendor";
import { LangfuseExporter } from "langfuse-vercel";

export function register() {
  const exporter = new LangfuseExporter({
    // ... Langfuse config
  });

  const sdk = new NodeSDK({
    resource: new Resource({
      [ATTR_SERVICE_NAME]: "ai",
    }),
    traceExporter: exporter,
  });

  sdk.start();
}
```

#### Option 2: Using Vercel's Otel Setup

If you're deploying to Vercel, you can use their OpenTelemetry setup:

1. Install the required dependencies:

```bash copy
npm install @opentelemetry/api @vercel/otel
```

2. Create an instrumentation file at the root of your project (or in the src folder if using one):

```ts filename="instrumentation.ts" copy
import { registerOTel } from "@vercel/otel";

export function register() {
  registerOTel({ serviceName: "your-project-name" });
}
```

### Summary

This setup will enable OpenTelemetry tracing for your Next.js application and Mastra operations.
For more details, see the documentation for:

- [Next.js Instrumentation](https://nextjs.org/docs/app/building-your-application/optimizing/instrumentation)
- [Vercel OpenTelemetry](https://vercel.com/docs/observability/otel-overview/quickstart)

---
title: "Tracing | Mastra Observability Documentation"
description: "Set up OpenTelemetry tracing for Mastra applications"
---

import Image from "next/image";

# Tracing

[EN] Source: https://mastra.ai/en/docs/observability/tracing

Mastra supports the OpenTelemetry Protocol (OTLP) for tracing and monitoring your application. When telemetry is enabled, Mastra automatically traces all core primitives including agent operations, LLM interactions, tool executions, integration calls, workflow runs, and database operations. Your telemetry data can then be exported to any OTEL collector.

### Basic Configuration

Here's a simple example of enabling telemetry:

```ts filename="mastra.config.ts" showLineNumbers copy
export const mastra = new Mastra({
  // ... other config
  telemetry: {
    serviceName: "my-app",
    enabled: true,
    sampling: {
      type: "always_on",
    },
    export: {
      type: "otlp",
      endpoint: "http://localhost:4318", // SigNoz local endpoint
    },
  },
});
```

### Configuration Options

The telemetry config accepts these properties:

```ts
type OtelConfig = {
  // Name to identify your service in traces (optional)
  serviceName?: string;

  // Enable/disable telemetry (defaults to true)
  enabled?: boolean;

  // Control how many traces are sampled
  sampling?: {
    type: "ratio" | "always_on" | "always_off" | "parent_based";
    probability?: number; // For ratio sampling
    root?: {
      probability: number; // For parent_based sampling
    };
  };

  // Where to send telemetry data
  export?: {
    type: "otlp" | "console";
    endpoint?: string;
    headers?: Record<string, string>;
  };
};
```

See the [OtelConfig reference documentation](../../reference/observability/otel-config.mdx) for more details.

### Environment Variables

You can configure the OTLP endpoint and headers through environment variables:

```env filename=".env" copy
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
OTEL_EXPORTER_OTLP_HEADERS=x-api-key=your-api-key
```

Then in your config:

```ts filename="mastra.config.ts" showLineNumbers copy
export const mastra = new Mastra({
  // ... other config
  telemetry: {
    serviceName: "my-app",
    enabled: true,
    export: {
      type: "otlp",
      // endpoint and headers will be picked up from env vars
    },
  },
});
```

### Example: SigNoz Integration

Here's what a traced agent interaction looks like in [SigNoz](https://signoz.io):

*Agent interaction trace showing spans, LLM calls, and tool executions.*

### Other Supported Providers

For a complete list of supported observability providers and their configuration details, see the [Observability Providers reference](../../reference/observability/providers/).

### Next.js-specific Tracing steps

If you're using Next.js, you have three additional configuration steps:

1. Enable the instrumentation hook in `next.config.ts`
2. Configure Mastra telemetry settings
3. Set up an OpenTelemetry exporter

For implementation details, see the [Next.js Tracing](./nextjs-tracing) guide.

---
title: Chunking and Embedding Documents | RAG | Mastra Docs
description: Guide on chunking and embedding documents in Mastra for efficient processing and retrieval.
---

## Chunking and Embedding Documents

[EN] Source: https://mastra.ai/en/docs/rag/chunking-and-embedding

Before processing, create an `MDocument` instance from your content.
You can initialize it from various formats:

```ts showLineNumbers copy
import { MDocument } from "@mastra/rag";

const docFromText = MDocument.fromText("Your plain text content...");
const docFromHTML = MDocument.fromHTML("Your HTML content...");
const docFromMarkdown = MDocument.fromMarkdown("# Your Markdown content...");
const docFromJSON = MDocument.fromJSON(`{ "key": "value" }`);
```

## Step 1: Document Processing

Use `chunk` to split documents into manageable pieces. Mastra supports multiple chunking strategies optimized for different document types:

- `recursive`: Smart splitting based on content structure
- `character`: Simple character-based splits
- `token`: Token-aware splitting
- `markdown`: Markdown-aware splitting
- `html`: HTML structure-aware splitting
- `json`: JSON structure-aware splitting
- `latex`: LaTeX structure-aware splitting

Here's an example of how to use the `recursive` strategy:

```ts showLineNumbers copy
const chunks = await doc.chunk({
  strategy: "recursive",
  size: 512,
  overlap: 50,
  separator: "\n",
  extract: {
    metadata: true, // Optionally extract metadata
  },
});
```

**Note:** Metadata extraction may use LLM calls, so ensure your API key is set.

We go deeper into chunking strategies in our [chunk documentation](/reference/rag/chunk.mdx).

## Step 2: Embedding Generation

Transform chunks into embeddings using your preferred provider. Mastra supports many embedding providers, including OpenAI and Cohere:

### Using OpenAI

```ts showLineNumbers copy
import { openai } from "@ai-sdk/openai";
import { embedMany } from "ai";

const { embeddings } = await embedMany({
  model: openai.embedding('text-embedding-3-small'),
  values: chunks.map(chunk => chunk.text),
});
```

### Using Cohere

```ts showLineNumbers copy
import { cohere } from '@ai-sdk/cohere';
import { embedMany } from 'ai';

const { embeddings } = await embedMany({
  model: cohere.embedding('embed-english-v3.0'),
  values: chunks.map(chunk => chunk.text),
});
```

The embedding functions return vectors, arrays of numbers representing the semantic meaning of your text, ready for similarity searches in your vector database.

### Configuring Embedding Dimensions

Embedding models typically output vectors with a fixed number of dimensions (e.g., 1536 for OpenAI's `text-embedding-3-small`). Some models support reducing this dimensionality, which can help:

- Decrease storage requirements in vector databases
- Reduce computational costs for similarity searches

Here are some supported models:

OpenAI (text-embedding-3 models):

```ts
const { embeddings } = await embedMany({
  model: openai.embedding('text-embedding-3-small', {
    dimensions: 256, // Only supported in text-embedding-3 and later
  }),
  values: chunks.map(chunk => chunk.text),
});
```

Google (text-embedding-004):

```ts
import { google } from "@ai-sdk/google";

const { embeddings } = await embedMany({
  model: google.textEmbeddingModel('text-embedding-004', {
    outputDimensionality: 256, // Truncates excessive values from the end
  }),
  values: chunks.map(chunk => chunk.text),
});
```

## Example: Complete Pipeline

Here's an example showing document processing and embedding generation with both providers:

```ts showLineNumbers copy
import { embedMany } from "ai";
import { openai } from "@ai-sdk/openai";
import { cohere } from "@ai-sdk/cohere";
import { MDocument } from "@mastra/rag";

// Initialize document
const doc = MDocument.fromText(`
  Climate change poses significant challenges to global agriculture.
  Rising temperatures and changing precipitation patterns affect crop yields.
`);

// Create chunks
const chunks = await doc.chunk({
  strategy: "recursive",
  size: 256,
  overlap: 50,
});

// Generate embeddings with OpenAI
const { embeddings: openAIEmbeddings } = await embedMany({
  model: openai.embedding('text-embedding-3-small'),
  values: chunks.map(chunk => chunk.text),
});

// OR

// Generate embeddings with Cohere
const { embeddings: cohereEmbeddings } = await embedMany({
  model: cohere.embedding('embed-english-v3.0'),
  values: chunks.map(chunk => chunk.text),
});

// Store embeddings in your vector database
// (vectorStore is an initialized vector store; see the vector databases doc)
await vectorStore.upsert({
  indexName: "embeddings",
  vectors: openAIEmbeddings, // or cohereEmbeddings
});
```
For more examples of different chunking strategies and embedding configurations, see:

- [Adjust Chunk Size](/reference/rag/chunk.mdx#adjust-chunk-size)
- [Adjust Chunk Delimiters](/reference/rag/chunk.mdx#adjust-chunk-delimiters)
- [Embed Text with Cohere](/reference/rag/embeddings.mdx#using-cohere)

---
title: RAG (Retrieval-Augmented Generation) in Mastra | Mastra Docs
description: Overview of Retrieval-Augmented Generation (RAG) in Mastra, detailing its capabilities for enhancing LLM outputs with relevant context.
---

# RAG (Retrieval-Augmented Generation) in Mastra

[EN] Source: https://mastra.ai/en/docs/rag/overview

RAG in Mastra helps you enhance LLM outputs by incorporating relevant context from your own data sources, improving accuracy and grounding responses in real information.

Mastra's RAG system provides:

- Standardized APIs to process and embed documents
- Support for multiple vector stores
- Chunking and embedding strategies for optimal retrieval
- Observability for tracking embedding and retrieval performance

## Example

To implement RAG, you process your documents into chunks, create embeddings, store them in a vector database, and then retrieve relevant context at query time.

```ts showLineNumbers copy
import { embedMany } from "ai";
import { openai } from "@ai-sdk/openai";
import { PgVector } from "@mastra/pg";
import { MDocument } from "@mastra/rag";
import { z } from "zod";

// 1. Initialize document
const doc = MDocument.fromText(`Your document text here...`);

// 2. Create chunks
const chunks = await doc.chunk({
  strategy: "recursive",
  size: 512,
  overlap: 50,
});

// 3. Generate embeddings; we need to pass the text of each chunk
const { embeddings } = await embedMany({
  values: chunks.map(chunk => chunk.text),
  model: openai.embedding("text-embedding-3-small"),
});

// 4. Store in vector database
const pgVector = new PgVector(process.env.POSTGRES_CONNECTION_STRING);
await pgVector.upsert({
  indexName: "embeddings",
  vectors: embeddings,
}); // using an index name of 'embeddings'

// 5. Query similar chunks
const results = await pgVector.query({
  indexName: "embeddings",
  queryVector: queryVector,
  topK: 3,
}); // queryVector is the embedding of the query

console.log("Similar chunks:", results);
```

This example shows the essentials: initialize a document, create chunks, generate embeddings, store them, and query for similar content.

## Document Processing

The basic building block of RAG is document processing. Documents can be chunked using various strategies (recursive, sliding window, etc.) and enriched with metadata. See the [chunking and embedding doc](./chunking-and-embedding.mdx).

## Vector Storage

Mastra supports multiple vector stores for embedding persistence and similarity search, including pgvector, Pinecone, and Qdrant. See the [vector database doc](./vector-databases.mdx).
## Observability and Debugging Mastra's RAG system includes observability features to help you optimize your retrieval pipeline: - Track embedding generation performance and costs - Monitor chunk quality and retrieval relevance - Analyze query patterns and cache hit rates - Export metrics to your observability platform See the [OTel Configuration](../reference/observability/otel-config.mdx) page for more details. ## More resources - [Chain of Thought RAG Example](../../examples/rag/usage/cot-rag.mdx) - [All RAG Examples](../../examples/) (including different chunking strategies, embedding models, and vector stores) --- title: "Retrieval, Semantic Search, Reranking | RAG | Mastra Docs" description: Guide on retrieval processes in Mastra's RAG systems, including semantic search, filtering, and re-ranking. --- import { Tabs } from "nextra/components"; ## Retrieval in RAG Systems [EN] Source: https://mastra.ai/en/docs/rag/retrieval After storing embeddings, you need to retrieve relevant chunks to answer user queries. Mastra provides flexible retrieval options with support for semantic search, filtering, and re-ranking. ## How Retrieval Works 1. The user's query is converted to an embedding using the same model used for document embeddings 2. This embedding is compared to stored embeddings using vector similarity 3. The most similar chunks are retrieved and can be optionally: - Filtered by metadata - Re-ranked for better relevance - Processed through a knowledge graph ## Basic Retrieval The simplest approach is direct semantic search. This method uses vector similarity to find chunks that are semantically similar to the query: ```ts showLineNumbers copy import { openai } from "@ai-sdk/openai"; import { embed } from "ai"; import { PgVector } from "@mastra/pg"; // Convert query to embedding const { embedding } = await embed({ value: "What are the main points in the article?", model: openai.embedding('text-embedding-3-small'), }); // Query vector store const pgVector = new PgVector(process.env.POSTGRES_CONNECTION_STRING); const results = await pgVector.query({ indexName: "embeddings", queryVector: embedding, topK: 10, }); // Display results console.log(results); ``` Results include both the text content and a similarity score: ```ts showLineNumbers copy [ { text: "Climate change poses significant challenges...", score: 0.89, metadata: { source: "article1.txt" } }, { text: "Rising temperatures affect crop yields...", score: 0.82, metadata: { source: "article1.txt" } } // ... more results ] ``` For an example of how to use the basic retrieval method, see the [Retrieve Results](../../examples/rag/query/retrieve-results.mdx) example. ## Advanced Retrieval options ### Metadata Filtering Filter results based on metadata fields to narrow down the search space. This is useful when you have documents from different sources, time periods, or with specific attributes. Mastra provides a unified MongoDB-style query syntax that works across all supported vector stores. For detailed information about available operators and syntax, see the [Metadata Filters Reference](/reference/rag/metadata-filters). 
Basic filtering examples:

```ts showLineNumbers copy
// Simple equality filter
const bySource = await pgVector.query({
  indexName: "embeddings",
  queryVector: embedding,
  topK: 10,
  filter: {
    source: "article1.txt",
  },
});

// Numeric comparison
const byPrice = await pgVector.query({
  indexName: "embeddings",
  queryVector: embedding,
  topK: 10,
  filter: {
    price: { $gt: 100 },
  },
});

// Multiple conditions
const byMultiple = await pgVector.query({
  indexName: "embeddings",
  queryVector: embedding,
  topK: 10,
  filter: {
    category: "electronics",
    price: { $lt: 1000 },
    inStock: true,
  },
});

// Array operations
const byTags = await pgVector.query({
  indexName: "embeddings",
  queryVector: embedding,
  topK: 10,
  filter: {
    tags: { $in: ["sale", "new"] },
  },
});

// Logical operators
const byLogic = await pgVector.query({
  indexName: "embeddings",
  queryVector: embedding,
  topK: 10,
  filter: {
    $or: [{ category: "electronics" }, { category: "accessories" }],
    $and: [{ price: { $gt: 50 } }, { price: { $lt: 200 } }],
  },
});
```

Common use cases for metadata filtering:

- Filter by document source or type
- Filter by date ranges
- Filter by specific categories or tags
- Filter by numerical ranges (e.g., price, rating)
- Combine multiple conditions for precise querying
- Filter by document attributes (e.g., language, author)

For an example of how to use metadata filtering, see the [Hybrid Vector Search](../../examples/rag/query/hybrid-vector-search.mdx) example.

### Vector Query Tool

Sometimes you want to give your agent the ability to query a vector database directly. The Vector Query Tool allows your agent to be in charge of retrieval decisions, combining semantic search with optional filtering and reranking based on the agent's understanding of the user's needs.

```ts showLineNumbers copy
import { openai } from '@ai-sdk/openai';
import { createVectorQueryTool } from "@mastra/rag";

const vectorQueryTool = createVectorQueryTool({
  vectorStoreName: 'pgVector',
  indexName: 'embeddings',
  model: openai.embedding('text-embedding-3-small'),
});
```

When creating the tool, pay special attention to the tool's name and description - these help the agent understand when and how to use the retrieval capabilities. For example, you might name it "SearchKnowledgeBase" and describe it as "Search through our documentation to find relevant information about X topic."

This is particularly useful when:

- Your agent needs to dynamically decide what information to retrieve
- The retrieval process requires complex decision-making
- You want the agent to combine multiple retrieval strategies based on context

For detailed configuration options and advanced usage, see the [Vector Query Tool Reference](/reference/tools/vector-query-tool).

### Vector Store Prompts

Vector store prompts define query patterns and filtering capabilities for each vector database implementation. When implementing filtering, these prompts are required in the agent's instructions to specify valid operators and syntax for each vector store implementation.

```ts showLineNumbers copy
import { openai } from '@ai-sdk/openai';
import { Agent } from "@mastra/core/agent";
import { PGVECTOR_PROMPT } from "@mastra/rag";

export const ragAgent = new Agent({
  name: 'RAG Agent',
  model: openai('gpt-4o-mini'),
  instructions: `
  Process queries using the provided context. Structure responses to be concise and relevant.
  ${PGVECTOR_PROMPT}
  `,
  tools: { vectorQueryTool },
});
```
```ts filename="vector-store.ts" showLineNumbers copy
import { openai } from '@ai-sdk/openai';
import { Agent } from "@mastra/core/agent";
import { PINECONE_PROMPT } from "@mastra/rag";

export const ragAgent = new Agent({
  name: 'RAG Agent',
  model: openai('gpt-4o-mini'),
  instructions: `
  Process queries using the provided context. Structure responses to be concise and relevant.
  ${PINECONE_PROMPT}
  `,
  tools: { vectorQueryTool },
});
```

```ts filename="vector-store.ts" showLineNumbers copy
import { openai } from '@ai-sdk/openai';
import { Agent } from "@mastra/core/agent";
import { QDRANT_PROMPT } from "@mastra/rag";

export const ragAgent = new Agent({
  name: 'RAG Agent',
  model: openai('gpt-4o-mini'),
  instructions: `
  Process queries using the provided context. Structure responses to be concise and relevant.
  ${QDRANT_PROMPT}
  `,
  tools: { vectorQueryTool },
});
```

```ts filename="vector-store.ts" showLineNumbers copy
import { openai } from '@ai-sdk/openai';
import { Agent } from "@mastra/core/agent";
import { CHROMA_PROMPT } from "@mastra/rag";

export const ragAgent = new Agent({
  name: 'RAG Agent',
  model: openai('gpt-4o-mini'),
  instructions: `
  Process queries using the provided context. Structure responses to be concise and relevant.
  ${CHROMA_PROMPT}
  `,
  tools: { vectorQueryTool },
});
```

```ts filename="vector-store.ts" showLineNumbers copy
import { openai } from '@ai-sdk/openai';
import { Agent } from "@mastra/core/agent";
import { ASTRA_PROMPT } from "@mastra/rag";

export const ragAgent = new Agent({
  name: 'RAG Agent',
  model: openai('gpt-4o-mini'),
  instructions: `
  Process queries using the provided context. Structure responses to be concise and relevant.
  ${ASTRA_PROMPT}
  `,
  tools: { vectorQueryTool },
});
```

```ts filename="vector-store.ts" showLineNumbers copy
import { openai } from '@ai-sdk/openai';
import { Agent } from "@mastra/core/agent";
import { LIBSQL_PROMPT } from "@mastra/rag";

export const ragAgent = new Agent({
  name: 'RAG Agent',
  model: openai('gpt-4o-mini'),
  instructions: `
  Process queries using the provided context. Structure responses to be concise and relevant.
  ${LIBSQL_PROMPT}
  `,
  tools: { vectorQueryTool },
});
```

```ts filename="vector-store.ts" showLineNumbers copy
import { openai } from '@ai-sdk/openai';
import { Agent } from "@mastra/core/agent";
import { UPSTASH_PROMPT } from "@mastra/rag";

export const ragAgent = new Agent({
  name: 'RAG Agent',
  model: openai('gpt-4o-mini'),
  instructions: `
  Process queries using the provided context. Structure responses to be concise and relevant.
  ${UPSTASH_PROMPT}
  `,
  tools: { vectorQueryTool },
});
```

```ts filename="vector-store.ts" showLineNumbers copy
import { openai } from '@ai-sdk/openai';
import { Agent } from "@mastra/core/agent";
import { VECTORIZE_PROMPT } from "@mastra/rag";

export const ragAgent = new Agent({
  name: 'RAG Agent',
  model: openai('gpt-4o-mini'),
  instructions: `
  Process queries using the provided context. Structure responses to be concise and relevant.
  ${VECTORIZE_PROMPT}
  `,
  tools: { vectorQueryTool },
});
```

### Re-ranking

Initial vector similarity search can sometimes miss nuanced relevance.
Re-ranking is a more computationally expensive, but more accurate, process that improves results by:

- Considering word order and exact matches
- Applying more sophisticated relevance scoring
- Using a method called cross-attention between query and documents

Here's how to use re-ranking:

```ts showLineNumbers copy
import { openai } from "@ai-sdk/openai";
import { rerank } from "@mastra/rag";

// Get initial results from vector search
const initialResults = await pgVector.query({
  indexName: "embeddings",
  queryVector: queryEmbedding, // embedding of the user's query
  topK: 10,
});

// Re-rank the results ("query" is the original query string)
const rerankedResults = await rerank(initialResults, query, openai('gpt-4o-mini'));
```

> **Note:** For semantic scoring to work properly during re-ranking, each result must include the text content in its `metadata.text` field.

The re-ranked results combine vector similarity with semantic understanding to improve retrieval quality.

For more details about re-ranking, see the [rerank()](/reference/rag/rerank) method.

For an example of how to use the re-ranking method, see the [Re-ranking Results](../../examples/rag/rerank/rerank.mdx) example.

### Graph-based Retrieval

For documents with complex relationships, graph-based retrieval can follow connections between chunks. This helps when:

- Information is spread across multiple documents
- Documents reference each other
- You need to traverse relationships to find complete answers

Example setup:

```ts showLineNumbers copy
import { openai } from "@ai-sdk/openai";
import { createGraphQueryTool } from "@mastra/rag";

const graphQueryTool = createGraphQueryTool({
  vectorStoreName: 'pgVector',
  indexName: 'embeddings',
  model: openai.embedding('text-embedding-3-small'),
  graphOptions: {
    threshold: 0.7,
  },
});
```

For more details about graph-based retrieval, see the [GraphRAG](/reference/rag/graph-rag) class and the [createGraphQueryTool()](/reference/tools/graph-rag-tool) function.

For an example of how to use the graph-based retrieval method, see the [Graph-based Retrieval](../../examples/rag/usage/graph-rag.mdx) example.

---
title: "Storing Embeddings in A Vector Database | Mastra Docs"
description: Guide on vector storage options in Mastra, including embedded and dedicated vector databases for similarity search.
---

import { Tabs } from "nextra/components";

## Storing Embeddings in A Vector Database

[EN] Source: https://mastra.ai/en/docs/rag/vector-databases

After generating embeddings, you need to store them in a database that supports vector similarity search. Mastra provides a consistent interface for storing and querying embeddings across different vector databases.

## Supported Databases

```ts filename="vector-store.ts" showLineNumbers copy
import { PgVector } from '@mastra/pg';

const store = new PgVector(process.env.POSTGRES_CONNECTION_STRING);

await store.createIndex({
  indexName: "myCollection",
  dimension: 1536,
});

await store.upsert({
  indexName: "myCollection",
  vectors: embeddings,
  metadata: chunks.map(chunk => ({ text: chunk.text })),
});
```

### Using PostgreSQL with pgvector

PostgreSQL with the pgvector extension is a good solution for teams already using PostgreSQL who want to minimize infrastructure complexity. For detailed setup instructions and best practices, see the [official pgvector repository](https://github.com/pgvector/pgvector).
```ts filename="vector-store.ts" showLineNumbers copy import { PineconeVector } from '@mastra/pinecone' const store = new PineconeVector(process.env.PINECONE_API_KEY) await store.createIndex({ indexName: "myCollection", dimension: 1536, }); await store.upsert({ indexName: "myCollection", vectors: embeddings, metadata: chunks.map(chunk => ({ text: chunk.text })), }); ``` ```ts filename="vector-store.ts" showLineNumbers copy import { QdrantVector } from '@mastra/qdrant' const store = new QdrantVector({ url: process.env.QDRANT_URL, apiKey: process.env.QDRANT_API_KEY }) await store.createIndex({ indexName: "myCollection", dimension: 1536, }); await store.upsert({ indexName: "myCollection", vectors: embeddings, metadata: chunks.map(chunk => ({ text: chunk.text })), }); ``` ```ts filename="vector-store.ts" showLineNumbers copy import { ChromaVector } from '@mastra/chroma' const store = new ChromaVector() await store.createIndex({ indexName: "myCollection", dimension: 1536, }); await store.upsert({ indexName: "myCollection", vectors: embeddings, metadata: chunks.map(chunk => ({ text: chunk.text })), }); ``` ```ts filename="vector-store.ts" showLineNumbers copy import { AstraVector } from '@mastra/astra' const store = new AstraVector({ token: process.env.ASTRA_DB_TOKEN, endpoint: process.env.ASTRA_DB_ENDPOINT, keyspace: process.env.ASTRA_DB_KEYSPACE }) await store.createIndex({ indexName: "myCollection", dimension: 1536, }); await store.upsert({ indexName: "myCollection", vectors: embeddings, metadata: chunks.map(chunk => ({ text: chunk.text })), }); ``` ```ts filename="vector-store.ts" showLineNumbers copy import { LibSQLVector } from "@mastra/core/vector/libsql"; const store = new LibSQLVector({ connectionUrl: process.env.DATABASE_URL, authToken: process.env.DATABASE_AUTH_TOKEN // Optional: for Turso cloud databases }) await store.createIndex({ indexName: "myCollection", dimension: 1536, }); await store.upsert({ indexName: "myCollection", vectors: embeddings, metadata: chunks.map(chunk => ({ text: chunk.text })), }); ``` ```ts filename="vector-store.ts" showLineNumbers copy import { UpstashVector } from '@mastra/upstash' const store = new UpstashVector({ url: process.env.UPSTASH_URL, token: process.env.UPSTASH_TOKEN }) await store.createIndex({ indexName: "myCollection", dimension: 1536, }); await store.upsert({ indexName: "myCollection", vectors: embeddings, metadata: chunks.map(chunk => ({ text: chunk.text })), }); ``` ```ts filename="vector-store.ts" showLineNumbers copy import { CloudflareVector } from '@mastra/vectorize' const store = new CloudflareVector({ accountId: process.env.CF_ACCOUNT_ID, apiToken: process.env.CF_API_TOKEN }) await store.createIndex({ indexName: "myCollection", dimension: 1536, }); await store.upsert({ indexName: "myCollection", vectors: embeddings, metadata: chunks.map(chunk => ({ text: chunk.text })), }); ``` ## Using Vector Storage Once initialized, all vector stores share the same interface for creating indexes, upserting embeddings, and querying. 
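Querying follows the same unified pattern. Here is a minimal sketch of retrieving the most similar chunks, assuming the `store` and `embeddings` from the snippets above and a `queryEmbedding` you have already generated for the user's question (the exact result shape may vary slightly between providers):

```ts filename="vector-store.ts" showLineNumbers copy
// Query the store for the chunks most similar to a query embedding.
// Assumes `store` was initialized and populated as shown above, and that
// `queryEmbedding` is an embedding vector for the user's question.
const results = await store.query({
  indexName: "myCollection",
  queryVector: queryEmbedding,
  topK: 5, // number of nearest neighbors to return
  // filter: { category: "docs" }, // optional metadata filter; syntax can vary by provider
});

// Each result carries the similarity score and the metadata stored at upsert time
for (const result of results) {
  console.log(result.score, result.metadata?.text);
}
```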
### Creating Indexes

Before storing embeddings, you need to create an index with the appropriate dimension size for your embedding model:

```ts filename="store-embeddings.ts" showLineNumbers copy
// Create an index with dimension 1536 (for text-embedding-3-small)
await store.createIndex({
  indexName: 'myCollection',
  dimension: 1536,
});

// For other models, use their corresponding dimensions:
// - text-embedding-3-large: 3072
// - text-embedding-ada-002: 1536
// - cohere-embed-multilingual-v3: 1024
```

The dimension size must match the output dimension of your chosen embedding model. Common dimension sizes are:

- OpenAI text-embedding-3-small: 1536 dimensions
- OpenAI text-embedding-3-large: 3072 dimensions
- Cohere embed-multilingual-v3: 1024 dimensions

> **Important**: Index dimensions cannot be changed after creation. To use a different model, delete and recreate the index with the new dimension size.

### Naming Rules for Databases

Each vector database enforces specific naming conventions for indexes and collections to ensure compatibility and prevent conflicts.

**PgVector** index names must:

- Start with a letter or underscore
- Contain only letters, numbers, and underscores
- Example: `my_index_123` is valid
- Example: `my-index` is not valid (contains hyphen)

**Pinecone** index names must:

- Use only lowercase letters, numbers, and dashes
- Not contain dots (used for DNS routing)
- Not use non-Latin characters or emojis
- Have a combined length (with project ID) under 52 characters
- Example: `my-index-123` is valid
- Example: `my.index` is not valid (contains dot)

**Qdrant** collection names must:

- Be 1-255 characters long
- Not contain any of these special characters:
  - `< > : " / \ | ? *`
  - Null character (`\0`)
  - Unit separator (`\u{1F}`)
- Example: `my_collection_123` is valid
- Example: `my/collection` is not valid (contains slash)

**Chroma** collection names must:

- Be 3-63 characters long
- Start and end with a letter or number
- Contain only letters, numbers, underscores, or hyphens
- Not contain consecutive periods (..)
- Not be a valid IPv4 address
- Example: `my-collection-123` is valid
- Example: `my..collection` is not valid (consecutive periods)

**Astra DB** collection names must:

- Not be empty
- Be 48 characters or less
- Contain only letters, numbers, and underscores
- Example: `my_collection_123` is valid
- Example: `my-collection` is not valid (contains hyphen)

**LibSQL** index names must:

- Start with a letter or underscore
- Contain only letters, numbers, and underscores
- Example: `my_index_123` is valid
- Example: `my-index` is not valid (contains hyphen)

**Upstash** namespace names must:

- Be 2-100 characters long
- Contain only:
  - Alphanumeric characters (a-z, A-Z, 0-9)
  - Underscores, hyphens, dots
- Not start or end with special characters (_, -, .)
- Be treated as case-sensitive
- Example: `MyNamespace123` is valid
- Example: `_namespace` is not valid (starts with underscore)

**Cloudflare Vectorize** index names must:

- Start with a letter
- Be shorter than 32 characters
- Contain only lowercase ASCII letters, numbers, and dashes
- Use dashes instead of spaces
- Example: `my-index-123` is valid
- Example: `My_Index` is not valid (uppercase and underscore)

### Upserting Embeddings

After creating an index, you can store embeddings along with their basic metadata:

```ts filename="store-embeddings.ts" showLineNumbers copy
// Store embeddings with their corresponding metadata
await store.upsert({
  indexName: 'myCollection', // index name
  vectors: embeddings, // array of embedding vectors
  metadata: chunks.map(chunk => ({
    text: chunk.text, // The original text content
    id: chunk.id // Optional unique identifier
  }))
});
```

The upsert operation:

- Takes an array of embedding vectors and their corresponding metadata
- Updates existing vectors if they share the same ID
- Creates new vectors if they don't exist
- Automatically handles batching for large datasets

For complete examples of upserting embeddings in different vector stores, see the [Upsert Embeddings](../../examples/rag/upsert/upsert-embeddings.mdx) guide.

## Adding Metadata

Vector stores support rich metadata (any JSON-serializable fields) for filtering and organization. Since metadata is stored with no fixed schema, use consistent field naming to avoid unexpected query results.

**Important**: Metadata is crucial for vector storage - without it, you'd only have numerical embeddings with no way to return the original text or filter results. Always store at least the source text as metadata.

```ts showLineNumbers copy
// Store embeddings with rich metadata for better organization and filtering
await store.upsert({
  indexName: "myCollection",
  vectors: embeddings,
  metadata: chunks.map((chunk) => ({
    // Basic content
    text: chunk.text,
    id: chunk.id,

    // Document organization
    source: chunk.source,
    category: chunk.category,

    // Temporal metadata
    createdAt: new Date().toISOString(),
    version: "1.0",

    // Custom fields
    language: chunk.language,
    author: chunk.author,
    confidenceScore: chunk.score,
  })),
});
```

Key metadata considerations:

- Be strict with field naming - inconsistencies like 'category' vs 'Category' will affect queries
- Only include fields you plan to filter or sort by - extra fields add overhead
- Add timestamps (e.g., 'createdAt', 'lastUpdated') to track content freshness

## Best Practices

- Create indexes before bulk insertions
- Use batch operations for large insertions (the upsert method handles batching automatically)
- Only store metadata you'll query against
- Match embedding dimensions to your model (e.g., 1536 for `text-embedding-3-small`)

---
title: Storage in Mastra | Mastra Docs
description: Overview of Mastra's storage system and data persistence capabilities.
--- import { Tabs } from "nextra/components"; import { PropertiesTable } from "@/components/properties-table"; import { SchemaTable } from "@/components/schema-table"; import { StorageOverviewImage } from "@/components/storage-overview-image"; # MastraStorage [EN] Source: https://mastra.ai/en/docs/storage/overview `MastraStorage` provides a unified interface for managing: - **Suspended Workflows**: the serialized state of suspended workflows (so they can be resumed later) - **Memory**: threads and messages per `resourceId` in your application - **Traces**: OpenTelemetry traces from all components of Mastra - **Eval Datasets**: scores and scoring reasons from eval runs

Mastra provides different storage providers, but you can treat them as interchangeable. For example, you could use LibSQL in development and PostgreSQL in production, and your code will work the same in both environments.

## Configuration

Mastra can be configured with a default storage option:

```typescript copy
import { Mastra } from "@mastra/core/mastra";
import { DefaultStorage } from "@mastra/core/storage/libsql";

const mastra = new Mastra({
  storage: new DefaultStorage({
    config: {
      url: "file:.mastra/mastra.db",
    },
  }),
});
```

## Data Schema

**Messages**: Stores conversation messages and their metadata. Each message belongs to a thread and contains the actual content along with metadata about the sender role and message type.
**Threads**: Groups related messages together and associates them with a resource. Contains metadata about the conversation.
**Workflow Snapshots**: When `suspend` is called on a workflow, its state is saved as a serialized snapshot. When `resume` is called, that state is rehydrated.
**Evals**: Stores eval results from running metrics against agent outputs.
**Traces**: Captures OpenTelemetry traces for monitoring and debugging.
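The exact column layout of these tables is defined by each storage provider. As a rough orientation only, the thread and message records described above can be pictured like this (an illustrative sketch with assumed field names, not the authoritative schema of any provider):

```typescript
// Illustrative shapes for the memory records described above.
// Field names are approximations for orientation, not the exact
// schema used by any particular storage provider.
type ThreadRecord = {
  id: string;
  resourceId: string; // associates the thread with a resource (e.g., a user)
  title?: string;
  metadata?: Record<string, unknown>; // conversation-level metadata
};

type MessageRecord = {
  id: string;
  threadId: string; // each message belongs to a thread
  role: "system" | "user" | "assistant" | "tool"; // sender role
  content: string; // the actual message content
  createdAt: string;
};
```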
## Storage Providers Mastra supports the following providers: - For local development, check out [LibSQL Storage](../../reference/storage/libsql.mdx) - For production, check out [PostgreSQL Storage](../../reference/storage/postgresql.mdx) - For serverless deployments, check out [Upstash Storage](../../reference/storage/upstash.mdx) --- title: Voice in Mastra | Mastra Docs description: Overview of voice capabilities in Mastra, including text-to-speech, speech-to-text, and real-time speech-to-speech interactions. --- import { Tabs } from "nextra/components"; import { AudioPlayback } from "@/components/audio-playback"; # Voice in Mastra [EN] Source: https://mastra.ai/en/docs/voice/overview Mastra's Voice system provides a unified interface for voice interactions, enabling text-to-speech (TTS), speech-to-text (STT), and real-time speech-to-speech (STS) capabilities in your applications. ## Adding Voice to Agents To learn how to integrate voice capabilities into your agents, check out the [Adding Voice to Agents](../agents/adding-voice.mdx) documentation. This section covers how to use both single and multiple voice providers, as well as real-time interactions. ```typescript import { Agent } from '@mastra/core/agent'; import { openai } from '@ai-sdk/openai'; import { OpenAIVoice } from "@mastra/voice-openai"; // Initialize OpenAI voice for TTS const voiceAgent = new Agent({ name: "Voice Agent", instructions: "You are a voice assistant that can help users with their tasks.", model: openai("gpt-4o"), voice: new OpenAIVoice(), }); ``` You can then use the following voice capabilities: ### Text to Speech (TTS) Turn your agent's responses into natural-sounding speech using Mastra's TTS capabilities. Choose from multiple providers like OpenAI, ElevenLabs, and more. For detailed configuration options and advanced features, check out our [Text-to-Speech guide](./text-to-speech). ```typescript import { Agent } from '@mastra/core/agent'; import { openai } from '@ai-sdk/openai'; import { OpenAIVoice } from "@mastra/voice-openai"; import { playAudio } from "@mastra/node-audio"; const voiceAgent = new Agent({ name: "Voice Agent", instructions: "You are a voice assistant that can help users with their tasks.", model: openai("gpt-4o"), voice: new OpenAIVoice(), }); const { text } = await voiceAgent.generate('What color is the sky?'); // Convert text to speech to an Audio Stream const audioStream = await voiceAgent.voice.speak(text, { speaker: "default", // Optional: specify a speaker responseFormat: "wav", // Optional: specify a response format }); playAudio(audioStream); ``` Visit the [OpenAI Voice Reference](/reference/voice/openai) for more information on the OpenAI voice provider. ```typescript import { Agent } from '@mastra/core/agent'; import { openai } from '@ai-sdk/openai'; import { AzureVoice } from "@mastra/voice-azure"; import { playAudio } from "@mastra/node-audio"; const voiceAgent = new Agent({ name: "Voice Agent", instructions: "You are a voice assistant that can help users with their tasks.", model: openai("gpt-4o"), voice: new AzureVoice(), }); const { text } = await voiceAgent.generate('What color is the sky?'); // Convert text to speech to an Audio Stream const audioStream = await voiceAgent.voice.speak(text, { speaker: "en-US-JennyNeural", // Optional: specify a speaker }); playAudio(audioStream); ``` Visit the [Azure Voice Reference](/reference/voice/azure) for more information on the Azure voice provider. 
```typescript import { Agent } from '@mastra/core/agent'; import { openai } from '@ai-sdk/openai'; import { ElevenLabsVoice } from "@mastra/voice-elevenlabs"; import { playAudio } from "@mastra/node-audio"; const voiceAgent = new Agent({ name: "Voice Agent", instructions: "You are a voice assistant that can help users with their tasks.", model: openai("gpt-4o"), voice: new ElevenLabsVoice(), }); const { text } = await voiceAgent.generate('What color is the sky?'); // Convert text to speech to an Audio Stream const audioStream = await voiceAgent.voice.speak(text, { speaker: "default", // Optional: specify a speaker }); playAudio(audioStream); ``` Visit the [ElevenLabs Voice Reference](/reference/voice/elevenlabs) for more information on the ElevenLabs voice provider. ```typescript import { Agent } from '@mastra/core/agent'; import { openai } from '@ai-sdk/openai'; import { PlayAIVoice } from "@mastra/voice-playai"; import { playAudio } from "@mastra/node-audio"; const voiceAgent = new Agent({ name: "Voice Agent", instructions: "You are a voice assistant that can help users with their tasks.", model: openai("gpt-4o"), voice: new PlayAIVoice(), }); const { text } = await voiceAgent.generate('What color is the sky?'); // Convert text to speech to an Audio Stream const audioStream = await voiceAgent.voice.speak(text, { speaker: "default", // Optional: specify a speaker }); playAudio(audioStream); ``` Visit the [PlayAI Voice Reference](/reference/voice/playai) for more information on the PlayAI voice provider. ```typescript import { Agent } from '@mastra/core/agent'; import { openai } from '@ai-sdk/openai'; import { GoogleVoice } from "@mastra/voice-google"; import { playAudio } from "@mastra/node-audio"; const voiceAgent = new Agent({ name: "Voice Agent", instructions: "You are a voice assistant that can help users with their tasks.", model: openai("gpt-4o"), voice: new GoogleVoice(), }); const { text } = await voiceAgent.generate('What color is the sky?'); // Convert text to speech to an Audio Stream const audioStream = await voiceAgent.voice.speak(text, { speaker: "en-US-Studio-O", // Optional: specify a speaker }); playAudio(audioStream); ``` Visit the [Google Voice Reference](/reference/voice/google) for more information on the Google voice provider. ```typescript import { Agent } from '@mastra/core/agent'; import { openai } from '@ai-sdk/openai'; import { CloudflareVoice } from "@mastra/voice-cloudflare"; import { playAudio } from "@mastra/node-audio"; const voiceAgent = new Agent({ name: "Voice Agent", instructions: "You are a voice assistant that can help users with their tasks.", model: openai("gpt-4o"), voice: new CloudflareVoice(), }); const { text } = await voiceAgent.generate('What color is the sky?'); // Convert text to speech to an Audio Stream const audioStream = await voiceAgent.voice.speak(text, { speaker: "default", // Optional: specify a speaker }); playAudio(audioStream); ``` Visit the [Cloudflare Voice Reference](/reference/voice/cloudflare) for more information on the Cloudflare voice provider. 
```typescript import { Agent } from '@mastra/core/agent'; import { openai } from '@ai-sdk/openai'; import { DeepgramVoice } from "@mastra/voice-deepgram"; import { playAudio } from "@mastra/node-audio"; const voiceAgent = new Agent({ name: "Voice Agent", instructions: "You are a voice assistant that can help users with their tasks.", model: openai("gpt-4o"), voice: new DeepgramVoice(), }); const { text } = await voiceAgent.generate('What color is the sky?'); // Convert text to speech to an Audio Stream const audioStream = await voiceAgent.voice.speak(text, { speaker: "aura-english-us", // Optional: specify a speaker }); playAudio(audioStream); ``` Visit the [Deepgram Voice Reference](/reference/voice/deepgram) for more information on the Deepgram voice provider. ```typescript import { Agent } from '@mastra/core/agent'; import { openai } from '@ai-sdk/openai'; import { SpeechifyVoice } from "@mastra/voice-speechify"; import { playAudio } from "@mastra/node-audio"; const voiceAgent = new Agent({ name: "Voice Agent", instructions: "You are a voice assistant that can help users with their tasks.", model: openai("gpt-4o"), voice: new SpeechifyVoice(), }); const { text } = await voiceAgent.generate('What color is the sky?'); // Convert text to speech to an Audio Stream const audioStream = await voiceAgent.voice.speak(text, { speaker: "matthew", // Optional: specify a speaker }); playAudio(audioStream); ``` Visit the [Speechify Voice Reference](/reference/voice/speechify) for more information on the Speechify voice provider. ```typescript import { Agent } from '@mastra/core/agent'; import { openai } from '@ai-sdk/openai'; import { SarvamVoice } from "@mastra/voice-sarvam"; import { playAudio } from "@mastra/node-audio"; const voiceAgent = new Agent({ name: "Voice Agent", instructions: "You are a voice assistant that can help users with their tasks.", model: openai("gpt-4o"), voice: new SarvamVoice(), }); const { text } = await voiceAgent.generate('What color is the sky?'); // Convert text to speech to an Audio Stream const audioStream = await voiceAgent.voice.speak(text, { speaker: "default", // Optional: specify a speaker }); playAudio(audioStream); ``` Visit the [Sarvam Voice Reference](/reference/voice/sarvam) for more information on the Sarvam voice provider. ```typescript import { Agent } from '@mastra/core/agent'; import { openai } from '@ai-sdk/openai'; import { MurfVoice } from "@mastra/voice-murf"; import { playAudio } from "@mastra/node-audio"; const voiceAgent = new Agent({ name: "Voice Agent", instructions: "You are a voice assistant that can help users with their tasks.", model: openai("gpt-4o"), voice: new MurfVoice(), }); const { text } = await voiceAgent.generate('What color is the sky?'); // Convert text to speech to an Audio Stream const audioStream = await voiceAgent.voice.speak(text, { speaker: "default", // Optional: specify a speaker }); playAudio(audioStream); ``` Visit the [Murf Voice Reference](/reference/voice/murf) for more information on the Murf voice provider. ### Speech to Text (STT) Transcribe spoken content using various providers like OpenAI, ElevenLabs, and more. For detailed configuration options and more, check out [Speech to Text](./speech-to-text). You can download a sample audio file from [here](https://github.com/mastra-ai/realtime-voice-demo/raw/refs/heads/main/how_can_i_help_you.mp3).
```typescript
import { Agent } from '@mastra/core/agent';
import { openai } from '@ai-sdk/openai';
import { OpenAIVoice } from "@mastra/voice-openai";
import { createReadStream } from 'fs';

const voiceAgent = new Agent({
  name: "Voice Agent",
  instructions: "You are a voice assistant that can help users with their tasks.",
  model: openai("gpt-4o"),
  voice: new OpenAIVoice(),
});

// Read the sample audio file from disk
const audioStream = createReadStream("./how_can_i_help_you.mp3");

// Convert audio to text
const transcript = await voiceAgent.voice.listen(audioStream);
console.log(`User said: ${transcript}`);

// Generate a response based on the transcript
const { text } = await voiceAgent.generate(transcript);
```

Visit the [OpenAI Voice Reference](/reference/voice/openai) for more information on the OpenAI voice provider.

```typescript
import { Agent } from '@mastra/core/agent';
import { openai } from '@ai-sdk/openai';
import { AzureVoice } from "@mastra/voice-azure";
import { createReadStream } from 'fs';

const voiceAgent = new Agent({
  name: "Voice Agent",
  instructions: "You are a voice assistant that can help users with their tasks.",
  model: openai("gpt-4o"),
  voice: new AzureVoice(),
});

// Read the sample audio file from disk
const audioStream = createReadStream("./how_can_i_help_you.mp3");

// Convert audio to text
const transcript = await voiceAgent.voice.listen(audioStream);
console.log(`User said: ${transcript}`);

// Generate a response based on the transcript
const { text } = await voiceAgent.generate(transcript);
```

Visit the [Azure Voice Reference](/reference/voice/azure) for more information on the Azure voice provider.

```typescript
import { Agent } from '@mastra/core/agent';
import { openai } from '@ai-sdk/openai';
import { ElevenLabsVoice } from "@mastra/voice-elevenlabs";
import { createReadStream } from 'fs';

const voiceAgent = new Agent({
  name: "Voice Agent",
  instructions: "You are a voice assistant that can help users with their tasks.",
  model: openai("gpt-4o"),
  voice: new ElevenLabsVoice(),
});

// Read the sample audio file from disk
const audioStream = createReadStream("./how_can_i_help_you.mp3");

// Convert audio to text
const transcript = await voiceAgent.voice.listen(audioStream);
console.log(`User said: ${transcript}`);

// Generate a response based on the transcript
const { text } = await voiceAgent.generate(transcript);
```

Visit the [ElevenLabs Voice Reference](/reference/voice/elevenlabs) for more information on the ElevenLabs voice provider.

```typescript
import { Agent } from '@mastra/core/agent';
import { openai } from '@ai-sdk/openai';
import { GoogleVoice } from "@mastra/voice-google";
import { createReadStream } from 'fs';

const voiceAgent = new Agent({
  name: "Voice Agent",
  instructions: "You are a voice assistant that can help users with their tasks.",
  model: openai("gpt-4o"),
  voice: new GoogleVoice(),
});

// Read the sample audio file from disk
const audioStream = createReadStream("./how_can_i_help_you.mp3");

// Convert audio to text
const transcript = await voiceAgent.voice.listen(audioStream);
console.log(`User said: ${transcript}`);

// Generate a response based on the transcript
const { text } = await voiceAgent.generate(transcript);
```

Visit the [Google Voice Reference](/reference/voice/google) for more information on the Google voice provider.
```typescript
import { Agent } from '@mastra/core/agent';
import { openai } from '@ai-sdk/openai';
import { CloudflareVoice } from "@mastra/voice-cloudflare";
import { createReadStream } from 'fs';

const voiceAgent = new Agent({
  name: "Voice Agent",
  instructions: "You are a voice assistant that can help users with their tasks.",
  model: openai("gpt-4o"),
  voice: new CloudflareVoice(),
});

// Read the sample audio file from disk
const audioStream = createReadStream("./how_can_i_help_you.mp3");

// Convert audio to text
const transcript = await voiceAgent.voice.listen(audioStream);
console.log(`User said: ${transcript}`);

// Generate a response based on the transcript
const { text } = await voiceAgent.generate(transcript);
```

Visit the [Cloudflare Voice Reference](/reference/voice/cloudflare) for more information on the Cloudflare voice provider.

```typescript
import { Agent } from '@mastra/core/agent';
import { openai } from '@ai-sdk/openai';
import { DeepgramVoice } from "@mastra/voice-deepgram";
import { createReadStream } from 'fs';

const voiceAgent = new Agent({
  name: "Voice Agent",
  instructions: "You are a voice assistant that can help users with their tasks.",
  model: openai("gpt-4o"),
  voice: new DeepgramVoice(),
});

// Read the sample audio file from disk
const audioStream = createReadStream("./how_can_i_help_you.mp3");

// Convert audio to text
const transcript = await voiceAgent.voice.listen(audioStream);
console.log(`User said: ${transcript}`);

// Generate a response based on the transcript
const { text } = await voiceAgent.generate(transcript);
```

Visit the [Deepgram Voice Reference](/reference/voice/deepgram) for more information on the Deepgram voice provider.

```typescript
import { Agent } from '@mastra/core/agent';
import { openai } from '@ai-sdk/openai';
import { SarvamVoice } from "@mastra/voice-sarvam";
import { createReadStream } from 'fs';

const voiceAgent = new Agent({
  name: "Voice Agent",
  instructions: "You are a voice assistant that can help users with their tasks.",
  model: openai("gpt-4o"),
  voice: new SarvamVoice(),
});

// Read the sample audio file from disk
const audioStream = createReadStream("./how_can_i_help_you.mp3");

// Convert audio to text
const transcript = await voiceAgent.voice.listen(audioStream);
console.log(`User said: ${transcript}`);

// Generate a response based on the transcript
const { text } = await voiceAgent.generate(transcript);
```

Visit the [Sarvam Voice Reference](/reference/voice/sarvam) for more information on the Sarvam voice provider.

### Speech to Speech (STS)

Create conversational experiences with speech-to-speech capabilities. The unified API enables real-time voice interactions between users and AI agents. For detailed configuration options and advanced features, check out [Speech to Speech](./speech-to-speech).
```typescript
import { Agent } from '@mastra/core/agent';
import { openai } from '@ai-sdk/openai';
import { playAudio, getMicrophoneStream } from '@mastra/node-audio';
import { OpenAIRealtimeVoice } from "@mastra/voice-openai-realtime";

const voiceAgent = new Agent({
  name: "Voice Agent",
  instructions: "You are a voice assistant that can help users with their tasks.",
  model: openai("gpt-4o"),
  voice: new OpenAIRealtimeVoice(),
});

// Connect to the real-time voice service before interacting
await voiceAgent.voice.connect();

// Listen for agent audio responses
voiceAgent.voice.on('speaker', ({ audio }) => {
  playAudio(audio);
});

// Initiate the conversation
await voiceAgent.voice.speak('How can I help you today?');

// Send continuous audio from the microphone
const micStream = getMicrophoneStream();
await voiceAgent.voice.send(micStream);
```

Visit the [OpenAI Voice Reference](/reference/voice/openai-realtime) for more information on the OpenAI voice provider.

## Voice Configuration

Each voice provider can be configured with different models and options. Below are the detailed configuration options for all supported providers:

```typescript
// OpenAI Voice Configuration
const voice = new OpenAIVoice({
  speechModel: {
    name: "tts-1", // Example model name
    apiKey: process.env.OPENAI_API_KEY,
    language: "en-US", // Language code
    voiceType: "neural", // Type of voice model
  },
  listeningModel: {
    name: "whisper-1", // Example model name
    apiKey: process.env.OPENAI_API_KEY,
    language: "en-US", // Language code
    format: "wav", // Audio format
  },
  speaker: "alloy", // Example speaker name
});
```

Visit the [OpenAI Voice Reference](/reference/voice/openai) for more information on the OpenAI voice provider.

```typescript
// Azure Voice Configuration
const voice = new AzureVoice({
  speechModel: {
    name: "en-US-JennyNeural", // Example model name
    apiKey: process.env.AZURE_SPEECH_KEY,
    region: process.env.AZURE_SPEECH_REGION,
    language: "en-US", // Language code
    style: "cheerful", // Voice style
    pitch: "+0Hz", // Pitch adjustment
    rate: "1.0", // Speech rate
  },
  listeningModel: {
    name: "en-US", // Example model name
    apiKey: process.env.AZURE_SPEECH_KEY,
    region: process.env.AZURE_SPEECH_REGION,
    format: "simple", // Output format
  },
});
```

Visit the [Azure Voice Reference](/reference/voice/azure) for more information on the Azure voice provider.

```typescript
// ElevenLabs Voice Configuration
const voice = new ElevenLabsVoice({
  speechModel: {
    voiceId: "your-voice-id", // Example voice ID
    model: "eleven_multilingual_v2", // Example model name
    apiKey: process.env.ELEVENLABS_API_KEY,
    language: "en", // Language code
    emotion: "neutral", // Emotion setting
  },
  // ElevenLabs may not have a separate listening model
});
```

Visit the [ElevenLabs Voice Reference](/reference/voice/elevenlabs) for more information on the ElevenLabs voice provider.

```typescript
// PlayAI Voice Configuration
const voice = new PlayAIVoice({
  speechModel: {
    name: "playai-voice", // Example model name
    speaker: "emma", // Example speaker name
    apiKey: process.env.PLAYAI_API_KEY,
    language: "en-US", // Language code
    speed: 1.0, // Speech speed
  },
  // PlayAI may not have a separate listening model
});
```

Visit the [PlayAI Voice Reference](/reference/voice/playai) for more information on the PlayAI voice provider.
```typescript
// Google Voice Configuration
const voice = new GoogleVoice({
  speechModel: {
    name: "en-US-Studio-O", // Example model name
    apiKey: process.env.GOOGLE_API_KEY,
    languageCode: "en-US", // Language code
    gender: "FEMALE", // Voice gender
    speakingRate: 1.0, // Speaking rate
  },
  listeningModel: {
    name: "en-US", // Example model name
    sampleRateHertz: 16000, // Sample rate
  },
});
```

Visit the [Google Voice Reference](/reference/voice/google) for more information on the Google voice provider.

```typescript
// Cloudflare Voice Configuration
const voice = new CloudflareVoice({
  speechModel: {
    name: "cloudflare-voice", // Example model name
    accountId: process.env.CLOUDFLARE_ACCOUNT_ID,
    apiToken: process.env.CLOUDFLARE_API_TOKEN,
    language: "en-US", // Language code
    format: "mp3", // Audio format
  },
  // Cloudflare may not have a separate listening model
});
```

Visit the [Cloudflare Voice Reference](/reference/voice/cloudflare) for more information on the Cloudflare voice provider.

```typescript
// Deepgram Voice Configuration
const voice = new DeepgramVoice({
  speechModel: {
    name: "nova-2", // Example model name
    speaker: "aura-english-us", // Example speaker name
    apiKey: process.env.DEEPGRAM_API_KEY,
    language: "en-US", // Language code
    tone: "formal", // Tone setting
  },
  listeningModel: {
    name: "nova-2", // Example model name
    format: "flac", // Audio format
  },
});
```

Visit the [Deepgram Voice Reference](/reference/voice/deepgram) for more information on the Deepgram voice provider.

```typescript
// Speechify Voice Configuration
const voice = new SpeechifyVoice({
  speechModel: {
    name: "speechify-voice", // Example model name
    speaker: "matthew", // Example speaker name
    apiKey: process.env.SPEECHIFY_API_KEY,
    language: "en-US", // Language code
    speed: 1.0, // Speech speed
  },
  // Speechify may not have a separate listening model
});
```

Visit the [Speechify Voice Reference](/reference/voice/speechify) for more information on the Speechify voice provider.

```typescript
// Sarvam Voice Configuration
const voice = new SarvamVoice({
  speechModel: {
    name: "sarvam-voice", // Example model name
    apiKey: process.env.SARVAM_API_KEY,
    language: "en-IN", // Language code
    style: "conversational", // Style setting
  },
  // Sarvam may not have a separate listening model
});
```

Visit the [Sarvam Voice Reference](/reference/voice/sarvam) for more information on the Sarvam voice provider.

```typescript
// Murf Voice Configuration
const voice = new MurfVoice({
  speechModel: {
    name: "murf-voice", // Example model name
    apiKey: process.env.MURF_API_KEY,
    language: "en-US", // Language code
    emotion: "happy", // Emotion setting
  },
  // Murf may not have a separate listening model
});
```

Visit the [Murf Voice Reference](/reference/voice/murf) for more information on the Murf voice provider.

```typescript
// OpenAI Realtime Voice Configuration
const voice = new OpenAIRealtimeVoice({
  speechModel: {
    name: "gpt-3.5-turbo", // Example model name
    apiKey: process.env.OPENAI_API_KEY,
    language: "en-US", // Language code
  },
  listeningModel: {
    name: "whisper-1", // Example model name
    apiKey: process.env.OPENAI_API_KEY,
    format: "ogg", // Audio format
  },
  speaker: "alloy", // Example speaker name
});
```

For more information on the OpenAI Realtime voice provider, refer to the [OpenAI Realtime Voice Reference](/reference/voice/openai-realtime).

### Using Multiple Voice Providers

This example demonstrates how to create and use two different voice providers in Mastra: OpenAI for speech-to-text (STT) and PlayAI for text-to-speech (TTS).
Start by creating instances of the voice providers with any necessary configuration.

```typescript
import { OpenAIVoice } from "@mastra/voice-openai";
import { PlayAIVoice } from "@mastra/voice-playai";
import { CompositeVoice } from "@mastra/core/voice";
import { playAudio, getMicrophoneStream } from "@mastra/node-audio";

// Initialize OpenAI voice for STT
const input = new OpenAIVoice({
  listeningModel: {
    name: "whisper-1",
    apiKey: process.env.OPENAI_API_KEY,
  },
});

// Initialize PlayAI voice for TTS
const output = new PlayAIVoice({
  speechModel: {
    name: "playai-voice",
    apiKey: process.env.PLAYAI_API_KEY,
  },
});

// Combine the providers using CompositeVoice
const voice = new CompositeVoice({
  input,
  output,
});

// Implement voice interactions using the combined voice provider
const audioStream = getMicrophoneStream(); // Assume this function gets audio input
const transcript = await voice.listen(audioStream);

// Log the transcribed text
console.log("Transcribed text:", transcript);

// Convert text to speech
const responseAudio = await voice.speak(`You said: ${transcript}`, {
  speaker: "default", // Optional: specify a speaker
  responseFormat: "wav", // Optional: specify a response format
});

// Play the audio response
playAudio(responseAudio);
```

For more information on the CompositeVoice, refer to the [CompositeVoice Reference](/reference/voice/composite-voice).

## More Resources

- [CompositeVoice](../../reference/voice/composite-voice.mdx)
- [MastraVoice](../../reference/voice/mastra-voice.mdx)
- [OpenAI Voice](../../reference/voice/openai.mdx)
- [Azure Voice](../../reference/voice/azure.mdx)
- [Google Voice](../../reference/voice/google.mdx)
- [Deepgram Voice](../../reference/voice/deepgram.mdx)
- [PlayAI Voice](../../reference/voice/playai.mdx)
- [Voice Examples](../../examples/voice/text-to-speech.mdx)

---
title: Speech-to-Speech Capabilities in Mastra | Mastra Docs
description: Overview of speech-to-speech capabilities in Mastra, including real-time interactions and event-driven architecture.
---

# Speech-to-Speech Capabilities in Mastra

[EN] Source: https://mastra.ai/en/docs/voice/speech-to-speech

## Introduction

Speech-to-Speech (STS) in Mastra provides a standardized interface for real-time interactions across multiple providers. STS enables continuous bidirectional audio communication by listening to events from Realtime models. Unlike separate TTS and STT operations, STS maintains an open connection that processes speech continuously in both directions.

## Configuration

- **`chatModel`**: Configuration for the realtime model.
  - **`apiKey`**: Your OpenAI API key. Falls back to the `OPENAI_API_KEY` environment variable.
  - **`model`**: The model ID to use for real-time voice interactions (e.g., `gpt-4o-mini-realtime`).
  - **`options`**: Additional options for the realtime client, such as session configuration.
- **`speaker`**: The default voice ID for speech synthesis. This allows you to specify which voice to use for the speech output.
```typescript
const voice = new OpenAIRealtimeVoice({
  chatModel: {
    apiKey: 'your-openai-api-key',
    model: 'gpt-4o-mini-realtime',
    options: {
      sessionConfig: {
        turn_detection: {
          type: 'server_vad',
          threshold: 0.6,
          silence_duration_ms: 1200,
        },
      },
    },
  },
  speaker: 'alloy', // Default voice
});

// If using default settings the configuration can be simplified to:
const voice = new OpenAIRealtimeVoice();
```

## Using STS

```typescript
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";
import { OpenAIRealtimeVoice } from "@mastra/voice-openai-realtime";
import { playAudio, getMicrophoneStream } from "@mastra/node-audio";

const agent = new Agent({
  name: 'Agent',
  instructions: `You are a helpful assistant with real-time voice capabilities.`,
  model: openai('gpt-4o'),
  voice: new OpenAIRealtimeVoice(),
});

// Connect to the voice service
await agent.voice.connect();

// Listen for agent audio responses
agent.voice.on('speaker', ({ audio }) => {
  playAudio(audio);
});

// Initiate the conversation
await agent.voice.speak('How can I help you today?');

// Send continuous audio from the microphone
const micStream = getMicrophoneStream();
await agent.voice.send(micStream);
```

For integrating Speech-to-Speech capabilities with agents, refer to the [Adding Voice to Agents](../agents/adding-voice.mdx) documentation.

---
title: Speech-to-Text (STT) in Mastra | Mastra Docs
description: Overview of Speech-to-Text capabilities in Mastra, including configuration, usage, and integration with voice providers.
---

# Speech-to-Text (STT)

[EN] Source: https://mastra.ai/en/docs/voice/speech-to-text

Speech-to-Text (STT) in Mastra provides a standardized interface for converting audio input into text across multiple service providers. STT helps create voice-enabled applications that can respond to human speech, enabling hands-free interaction, accessibility for users with disabilities, and more natural human-computer interfaces.

## Configuration

To use STT in Mastra, you need to provide a `listeningModel` when initializing the voice provider. This includes parameters such as:

- **`name`**: The specific STT model to use.
- **`apiKey`**: Your API key for authentication.
- **Provider-specific options**: Additional options that may be required or supported by the specific voice provider.

**Note**: All of these parameters are optional. You can use the default settings provided by the voice provider, which will depend on the specific provider you are using.
```typescript const voice = new OpenAIVoice({ listeningModel: { name: "whisper-1", apiKey: process.env.OPENAI_API_KEY, }, }); // If using default settings the configuration can be simplified to: const voice = new OpenAIVoice(); ``` ## Available Providers Mastra supports several Speech-to-Text providers, each with their own capabilities and strengths: - [**OpenAI**](/reference/voice/openai/) - High-accuracy transcription with Whisper models - [**Azure**](/reference/voice/azure/) - Microsoft's speech recognition with enterprise-grade reliability - [**ElevenLabs**](/reference/voice/elevenlabs/) - Advanced speech recognition with support for multiple languages - [**Google**](/reference/voice/google/) - Google's speech recognition with extensive language support - [**Cloudflare**](/reference/voice/cloudflare/) - Edge-optimized speech recognition for low-latency applications - [**Deepgram**](/reference/voice/deepgram/) - AI-powered speech recognition with high accuracy for various accents - [**Sarvam**](/reference/voice/sarvam/) - Specialized in Indic languages and accents Each provider is implemented as a separate package that you can install as needed: ```bash pnpm add @mastra/voice-openai # Example for OpenAI ``` ## Using the Listen Method The primary method for STT is the `listen()` method, which converts spoken audio into text. Here's how to use it: ```typescript import { Agent } from '@mastra/core/agent'; import { openai } from '@ai-sdk/openai'; import { OpenAIVoice } from '@mastra/voice-openai'; import { getMicrophoneStream } from "@mastra/node-audio"; const voice = new OpenAIVoice(); const agent = new Agent({ name: "Voice Agent", instructions: "You are a voice assistant that provides recommendations based on user input.", model: openai("gpt-4o"), voice, }); const audioStream = getMicrophoneStream(); // Assume this function gets audio input const transcript = await agent.voice.listen(audioStream, { filetype: "m4a", // Optional: specify the audio file type }); console.log(`User said: ${transcript}`); const { text } = await agent.generate(`Based on what the user said, provide them a recommendation: ${transcript}`); console.log(`Recommendation: ${text}`); ``` Check out the [Adding Voice to Agents](../agents/adding-voice.mdx) documentation to learn how to use STT in an agent. --- title: Text-to-Speech (TTS) in Mastra | Mastra Docs description: Overview of Text-to-Speech capabilities in Mastra, including configuration, usage, and integration with voice providers. --- # Text-to-Speech (TTS) [EN] Source: https://mastra.ai/en/docs/voice/text-to-speech Text-to-Speech (TTS) in Mastra offers a unified API for synthesizing spoken audio from text using various providers. By incorporating TTS into your applications, you can enhance user experience with natural voice interactions, improve accessibility for users with visual impairments, and create more engaging multimodal interfaces. TTS is a core component of any voice application. Combined with STT (Speech-to-Text), it forms the foundation of voice interaction systems. Newer models support STS ([Speech-to-Speech](./speech-to-speech)) which can be used for real-time interactions but come at high cost ($). ## Configuration To use TTS in Mastra, you need to provide a `speechModel` when initializing the voice provider. This includes parameters such as: - **`name`**: The specific TTS model to use. - **`apiKey`**: Your API key for authentication. 
- **Provider-specific options**: Additional options that may be required or supported by the specific voice provider.

The **`speaker`** option allows you to select different voices for speech synthesis. Each provider offers a variety of voice options with distinct characteristics: **Voice diversity**, **Quality**, **Voice personality**, and **Multilingual support**.

**Note**: All of these parameters are optional. You can use the default settings provided by the voice provider, which will depend on the specific provider you are using.

```typescript
const voice = new OpenAIVoice({
  speechModel: {
    name: "tts-1-hd",
    apiKey: process.env.OPENAI_API_KEY
  },
  speaker: "alloy",
});

// If using default settings the configuration can be simplified to:
const voice = new OpenAIVoice();
```

## Available Providers

Mastra supports a wide range of Text-to-Speech providers, each with their own unique capabilities and voice options. You can choose the provider that best suits your application's needs:

- [**OpenAI**](/reference/voice/openai/) - High-quality voices with natural intonation and expression
- [**Azure**](/reference/voice/azure/) - Microsoft's speech service with a wide range of voices and languages
- [**ElevenLabs**](/reference/voice/elevenlabs/) - Ultra-realistic voices with emotion and fine-grained control
- [**PlayAI**](/reference/voice/playai/) - Specialized in natural-sounding voices with various styles
- [**Google**](/reference/voice/google/) - Google's speech synthesis with multilingual support
- [**Cloudflare**](/reference/voice/cloudflare/) - Edge-optimized speech synthesis for low-latency applications
- [**Deepgram**](/reference/voice/deepgram/) - AI-powered speech technology with high accuracy
- [**Speechify**](/reference/voice/speechify/) - Text-to-speech optimized for readability and accessibility
- [**Sarvam**](/reference/voice/sarvam/) - Specialized in Indic languages and accents
- [**Murf**](/reference/voice/murf/) - Studio-quality voice overs with customizable parameters

Each provider is implemented as a separate package that you can install as needed:

```bash
pnpm add @mastra/voice-openai  # Example for OpenAI
```

## Using the Speak Method

The primary method for TTS is the `speak()` method, which converts text to speech. This method can accept options that allow you to specify the speaker and other provider-specific options. Here's how to use it:

```typescript
import { Agent } from '@mastra/core/agent';
import { openai } from '@ai-sdk/openai';
import { OpenAIVoice } from '@mastra/voice-openai';

const voice = new OpenAIVoice();

const agent = new Agent({
  name: "Voice Agent",
  instructions: "You are a voice assistant that can help users with their tasks.",
  model: openai("gpt-4o"),
  voice,
});

const { text } = await agent.generate('What color is the sky?');

// Convert text to speech to an Audio Stream
const readableStream = await voice.speak(text, {
  speaker: "default", // Optional: specify a speaker
  properties: {
    speed: 1.0, // Optional: adjust speech speed
    pitch: "default", // Optional: specify pitch if supported
  },
});
```

Check out the [Adding Voice to Agents](../agents/adding-voice.mdx) documentation to learn how to use TTS in an agent.

---
title: "Branching, Merging, Conditions | Workflows | Mastra Docs"
description: "Control flow in Mastra workflows allows you to manage branching, merging, and conditions to construct workflows that meet your logic requirements."
---

# Control Flow in Workflows: Branching, Merging, and Conditions

[EN] Source: https://mastra.ai/en/docs/workflows/control-flow

When you create a multi-step process, you may need to run steps in parallel, chain them sequentially, or follow different paths based on outcomes. This page describes how you can manage branching, merging, and conditions to construct workflows that meet your logic requirements. The code snippets show the key patterns for structuring complex control flow.

## Parallel Execution

You can run multiple steps at the same time if they don't depend on each other. This approach can speed up your workflow when steps perform independent tasks. The code below shows how to add two steps in parallel:

```typescript
myWorkflow.step(fetchUserData).step(fetchOrderData);
```

See the [Parallel Steps](../../examples/workflows/parallel-steps.mdx) example for more details.

## Sequential Execution

Sometimes you need to run steps in strict order to ensure outputs from one step become inputs for the next. Use `.then()` to link dependent operations. The code below shows how to chain steps sequentially:

```typescript
myWorkflow.step(fetchOrderData).then(validateData).then(processOrder);
```

See the [Sequential Steps](../../examples/workflows/sequential-steps.mdx) example for more details.

## Branching and Merging Paths

When different outcomes require different paths, branching is helpful. You can also merge paths later once they complete. The code below shows how to branch after `stepA` and later converge on `stepF`:

```typescript
myWorkflow
  .step(stepA)
  .then(stepB)
  .then(stepD)
  .after(stepA)
  .step(stepC)
  .then(stepE)
  .after([stepD, stepE])
  .step(stepF);
```

In this example:

- `stepA` leads to `stepB`, then to `stepD`.
- Separately, `stepA` also triggers `stepC`, which in turn leads to `stepE`.
- Separately, `stepF` is triggered when both `stepD` and `stepE` are completed.

See the [Branching Paths](../../examples/workflows/branching-paths.mdx) example for more details.

## Merging Multiple Branches

Sometimes you need a step to execute only after multiple other steps have completed. Mastra provides a compound `.after([])` syntax that allows you to specify multiple dependencies for a step.

```typescript
myWorkflow
  .step(fetchUserData)
  .then(validateUserData)
  .step(fetchProductData)
  .then(validateProductData)
  // This step will only run after BOTH validateUserData AND validateProductData have completed
  .after([validateUserData, validateProductData])
  .step(processOrder)
```

In this example:

- `fetchUserData` and `fetchProductData` run in parallel branches
- Each branch has its own validation step
- The `processOrder` step only executes after both validation steps have completed successfully

This pattern is particularly useful for:

- Joining parallel execution paths
- Implementing synchronization points in your workflow
- Ensuring all required data is available before proceeding

You can also create complex dependency patterns by combining multiple `.after([])` calls:

```typescript
myWorkflow
  // First branch
  .step(stepA)
  .then(stepB)
  .then(stepC)

  // Second branch
  .step(stepD)
  .then(stepE)

  // Third branch
  .step(stepF)
  .then(stepG)

  // This step depends on the completion of multiple branches
  .after([stepC, stepE, stepG])
  .step(finalStep)
```

## Cyclical Dependencies and Loops

Workflows often need to repeat steps until certain conditions are met. Mastra provides two powerful methods for creating loops: `until` and `while`. These methods offer an intuitive way to implement repetitive tasks.
### Using Manual Cyclical Dependencies (Legacy Approach) In earlier versions, you could create loops by manually defining cyclical dependencies with conditions: ```typescript myWorkflow .step(fetchData) .then(processData) .after(processData) .step(finalizeData, { when: { "processData.status": "success" }, }) .step(fetchData, { when: { "processData.status": "retry" }, }); ``` While this approach still works, the newer `until` and `while` methods provide a cleaner and more maintainable way to create loops. ### Using `until` for Condition-Based Loops The `until` method repeats a step until a specified condition becomes true. It takes these arguments: 1. A condition that determines when to stop looping 2. The step to repeat 3. Optional variables to pass to the repeated step ```typescript // Step that increments a counter until target is reached const incrementStep = new Step({ id: 'increment', inputSchema: z.object({ // Current counter value counter: z.number().optional(), }), outputSchema: z.object({ // Updated counter value updatedCounter: z.number(), }), execute: async ({ context }) => { const { counter = 0 } = context.inputData; return { updatedCounter: counter + 1 }; }, }); workflow .step(incrementStep) .until( async ({ context }) => { // Stop when counter reaches 10 const result = context.getStepResult(incrementStep); return (result?.updatedCounter ?? 0) >= 10; }, incrementStep, { // Pass current counter to next iteration counter: { step: incrementStep, path: 'updatedCounter' } } ) .then(finalStep); ``` You can also use a reference-based condition: ```typescript workflow .step(incrementStep) .until( { ref: { step: incrementStep, path: 'updatedCounter' }, query: { $gte: 10 }, }, incrementStep, { counter: { step: incrementStep, path: 'updatedCounter' } } ) .then(finalStep); ``` ### Using `while` for Condition-Based Loops The `while` method repeats a step as long as a specified condition remains true. It takes the same arguments as `until`: 1. A condition that determines when to continue looping 2. The step to repeat 3. Optional variables to pass to the repeated step ```typescript // Step that increments a counter while below target const incrementStep = new Step({ id: 'increment', inputSchema: z.object({ // Current counter value counter: z.number().optional(), }), outputSchema: z.object({ // Updated counter value updatedCounter: z.number(), }), execute: async ({ context }) => { const { counter = 0 } = context.inputData; return { updatedCounter: counter + 1 }; }, }); workflow .step(incrementStep) .while( async ({ context }) => { // Continue while counter is less than 10 const result = context.getStepResult(incrementStep); return (result?.updatedCounter ?? 
0) < 10; }, incrementStep, { // Pass current counter to next iteration counter: { step: incrementStep, path: 'updatedCounter' } } ) .then(finalStep); ``` You can also use a reference-based condition: ```typescript workflow .step(incrementStep) .while( { ref: { step: incrementStep, path: 'updatedCounter' }, query: { $lt: 10 }, }, incrementStep, { counter: { step: incrementStep, path: 'updatedCounter' } } ) .then(finalStep); ``` ### Comparison Operators for Reference Conditions When using reference-based conditions, you can use these comparison operators: | Operator | Description | |----------|-------------| | `$eq` | Equal to | | `$ne` | Not equal to | | `$gt` | Greater than | | `$gte` | Greater than or equal to | | `$lt` | Less than | | `$lte` | Less than or equal to | ## Conditions Use the when property to control whether a step runs based on data from previous steps. Below are three ways to specify conditions. ### Option 1: Function ```typescript myWorkflow.step( new Step({ id: "processData", execute: async ({ context }) => { // Action logic }, }), { when: async ({ context }) => { const fetchData = context?.getStepResult<{ status: string }>("fetchData"); return fetchData?.status === "success"; }, }, ); ``` ### Option 2: Query Object ```typescript myWorkflow.step( new Step({ id: "processData", execute: async ({ context }) => { // Action logic }, }), { when: { ref: { step: { id: "fetchData", }, path: "status", }, query: { $eq: "success" }, }, }, ); ``` ### Option 3: Simple Path Comparison ```typescript myWorkflow.step( new Step({ id: "processData", execute: async ({ context }) => { // Action logic }, }), { when: { "fetchData.status": "success", }, }, ); ``` ## Data Access Patterns Mastra provides several ways to pass data between steps: 1. **Context Object** - Access step results directly through the context object 2. **Variable Mapping** - Explicitly map outputs from one step to inputs of another 3. **getStepResult Method** - Type-safe method to retrieve step outputs Each approach has its advantages depending on your use case and requirements for type safety. ### Using getStepResult Method The `getStepResult` method provides a type-safe way to access step results. This is the recommended approach when working with TypeScript as it preserves type information. 
#### Basic Usage For better type safety, you can provide a type parameter to `getStepResult`: ```typescript showLineNumbers filename="src/mastra/workflows/get-step-result.ts" copy import { Step, Workflow } from "@mastra/core/workflows"; import { z } from "zod"; const fetchUserStep = new Step({ id: 'fetchUser', outputSchema: z.object({ name: z.string(), userId: z.string(), }), execute: async ({ context }) => { return { name: 'John Doe', userId: '123' }; }, }); const analyzeDataStep = new Step({ id: "analyzeData", execute: async ({ context }) => { // Type-safe access to previous step result const userData = context.getStepResult<{ name: string, userId: string }>("fetchUser"); if (!userData) { return { status: "error", message: "User data not found" }; } return { analysis: `Analyzed data for user ${userData.name}`, userId: userData.userId }; }, }); ``` #### Using Step References The most type-safe approach is to reference the step directly in the `getStepResult` call: ```typescript showLineNumbers filename="src/mastra/workflows/step-reference.ts" copy import { Step, Workflow } from "@mastra/core/workflows"; import { z } from "zod"; // Define step with output schema const fetchUserStep = new Step({ id: "fetchUser", outputSchema: z.object({ userId: z.string(), name: z.string(), email: z.string(), }), execute: async () => { return { userId: "user123", name: "John Doe", email: "john@example.com" }; }, }); const processUserStep = new Step({ id: "processUser", execute: async ({ context }) => { // TypeScript will infer the correct type from fetchUserStep's outputSchema const userData = context.getStepResult(fetchUserStep); return { processed: true, userName: userData?.name }; }, }); const workflow = new Workflow({ name: "user-workflow", }); workflow .step(fetchUserStep) .then(processUserStep) .commit(); ``` ### Using Variable Mapping Variable mapping is an explicit way to define data flow between steps. This approach makes dependencies clear and provides good type safety. The data injected into the step is available in the `context.inputData` object, and typed based on the `inputSchema` of the step. ```typescript showLineNumbers filename="src/mastra/workflows/variable-mapping.ts" copy import { Step, Workflow } from "@mastra/core/workflows"; import { z } from "zod"; const fetchUserStep = new Step({ id: "fetchUser", outputSchema: z.object({ userId: z.string(), name: z.string(), email: z.string(), }), execute: async () => { return { userId: "user123", name: "John Doe", email: "john@example.com" }; }, }); const sendEmailStep = new Step({ id: "sendEmail", inputSchema: z.object({ recipientEmail: z.string(), recipientName: z.string(), }), execute: async ({ context }) => { const { recipientEmail, recipientName } = context.inputData; // Send email logic here return { status: "sent", to: recipientEmail }; }, }); const workflow = new Workflow({ name: "email-workflow", }); workflow .step(fetchUserStep) .then(sendEmailStep, { variables: { // Map specific fields from fetchUser to sendEmail inputs recipientEmail: { step: fetchUserStep, path: 'email' }, recipientName: { step: fetchUserStep, path: 'name' } } }) .commit(); ``` For more details on variable mapping, see the [Data Mapping with Workflow Variables](./variables.mdx) documentation. ### Using the Context Object The context object provides direct access to all step results and their outputs. This approach is more flexible but requires careful handling to maintain type safety. 
You can access step results directly through the `context.steps` object:

```typescript showLineNumbers filename="src/mastra/workflows/context-access.ts" copy
import { Step, Workflow } from "@mastra/core/workflows";
import { z } from "zod";

const processOrderStep = new Step({
  id: 'processOrder',
  execute: async ({ context }) => {
    // Access data from a previous step
    let userData: { name: string, userId: string };
    if (context.steps['fetchUser']?.status === 'success') {
      userData = context.steps.fetchUser.output;
    } else {
      throw new Error('User data not found');
    }

    return {
      orderId: 'order123',
      userId: userData.userId,
      status: 'processing',
    };
  },
});

const workflow = new Workflow({
  name: "order-workflow",
});

// fetchUserStep is defined as in the earlier examples
workflow
  .step(fetchUserStep)
  .then(processOrderStep)
  .commit();
```

### Workflow-Level Type Safety

For comprehensive type safety across your entire workflow, you can define types for all steps and pass them to the `Workflow` type parameters. This gives you type safety for the context object in conditions, and for step results in the final workflow output.

```typescript showLineNumbers filename="src/mastra/workflows/workflow-typing.ts" copy
import { Step, Workflow } from "@mastra/core/workflows";
import { z } from "zod";

// Create steps with typed outputs
const fetchUserStep = new Step({
  id: "fetchUser",
  outputSchema: z.object({
    userId: z.string(),
    name: z.string(),
    email: z.string(),
  }),
  execute: async () => {
    return { userId: "user123", name: "John Doe", email: "john@example.com" };
  },
});

const processOrderStep = new Step({
  id: "processOrder",
  execute: async ({ context }) => {
    // TypeScript knows the shape of userData
    const userData = context.getStepResult(fetchUserStep);

    return { orderId: "order123", status: "processing" };
  },
});

const workflow = new Workflow<[typeof fetchUserStep, typeof processOrderStep]>({
  name: "typed-workflow",
});

workflow
  .step(fetchUserStep)
  .then(processOrderStep)
  .until(async ({ context }) => {
    // TypeScript knows the shape of the fetchUser result here
    const res = context.getStepResult('fetchUser');
    return res?.userId === '123';
  }, processOrderStep)
  .commit();
```

### Accessing Trigger Data

In addition to step results, you can access the original trigger data that started the workflow:

```typescript showLineNumbers filename="src/mastra/workflows/trigger-data.ts" copy
import { Step, Workflow } from "@mastra/core/workflows";
import { z } from "zod";

// Define trigger schema
const triggerSchema = z.object({
  customerId: z.string(),
  orderItems: z.array(z.string()),
});

type TriggerType = z.infer<typeof triggerSchema>;

const processOrderStep = new Step({
  id: "processOrder",
  execute: async ({ context }) => {
    // Access trigger data with type safety
    const triggerData = context.getStepResult<TriggerType>('trigger');

    return {
      customerId: triggerData?.customerId,
      itemCount: triggerData?.orderItems.length || 0,
      status: "processing"
    };
  },
});

const workflow = new Workflow({
  name: "order-workflow",
  triggerSchema,
});

workflow
  .step(processOrderStep)
  .commit();
```

### Accessing Resume Data

The data injected into the step is available in the `context.inputData` object, and typed based on the `inputSchema` of the step.
```typescript showLineNumbers filename="src/mastra/workflows/resume-data.ts" copy
import { Step, Workflow } from "@mastra/core/workflows";
import { z } from "zod";

const processOrderStep = new Step({
  id: "processOrder",
  inputSchema: z.object({
    orderId: z.string(),
  }),
  execute: async ({ context, suspend }) => {
    const { orderId } = context.inputData;

    if (!orderId) {
      await suspend();
      return;
    }

    return { orderId, status: "processed" };
  },
});

const workflow = new Workflow({
  name: "order-workflow",
});

workflow
  .step(processOrderStep)
  .commit();

const run = workflow.createRun();
const result = await run.start();

const resumedResult = await workflow.resume({
  runId: result.runId,
  stepId: 'processOrder',
  inputData: {
    orderId: '123',
  },
});

console.log({ resumedResult });
```

### Accessing Workflow Results

You can get typed access to the results of a workflow by injecting the step types into the `Workflow` type params:

```typescript showLineNumbers filename="src/mastra/workflows/get-results.ts" copy
import { Step, Workflow } from "@mastra/core/workflows";
import { z } from "zod";

const fetchUserStep = new Step({
  id: "fetchUser",
  outputSchema: z.object({
    userId: z.string(),
    name: z.string(),
    email: z.string(),
  }),
  execute: async () => {
    return { userId: "user123", name: "John Doe", email: "john@example.com" };
  },
});

const processOrderStep = new Step({
  id: "processOrder",
  outputSchema: z.object({
    orderId: z.string(),
    status: z.string(),
  }),
  execute: async ({ context }) => {
    const userData = context.getStepResult(fetchUserStep);
    return { orderId: "order123", status: "processing" };
  },
});

const workflow = new Workflow<[typeof fetchUserStep, typeof processOrderStep]>({
  name: "typed-workflow",
});

workflow
  .step(fetchUserStep)
  .then(processOrderStep)
  .commit();

const run = workflow.createRun();
const result = await run.start();

// The result is a discriminated union of the step results
// So it needs to be narrowed down via status checks
if (result.results.processOrder.status === 'success') {
  // TypeScript will know the shape of the results
  const orderId = result.results.processOrder.output.orderId;
  console.log({ orderId });
}

if (result.results.fetchUser.status === 'success') {
  const userId = result.results.fetchUser.output.userId;
  console.log({ userId });
}
```

### Best Practices for Data Flow

1. **Use getStepResult with Step References for Type Safety**
   - Ensures TypeScript can infer the correct types
   - Catches type errors at compile time

2. **Use Variable Mapping for Explicit Dependencies**
   - Makes data flow clear and maintainable
   - Provides good documentation of step dependencies

3. **Define Output Schemas for Steps**
   - Validates data at runtime
   - Validates the return type of the `execute` function
   - Improves type inference in TypeScript

4. **Handle Missing Data Gracefully**
   - Always check if step results exist before accessing properties
   - Provide fallback values for optional data (see the sketch at the end of this page)

5. **Keep Data Transformations Simple**
   - Transform data in dedicated steps rather than in variable mappings
   - Makes workflows easier to test and debug

### Comparison of Data Flow Methods

| Method | Type Safety | Explicitness | Use Case |
|--------|-------------|--------------|----------|
| getStepResult | Highest | High | Complex workflows with strict typing requirements |
| Variable Mapping | High | High | When dependencies need to be clear and explicit |
| context.steps | Medium | Low | Quick access to step data in simple workflows |

By choosing the right data flow method for your use case, you can create workflows that are both type-safe and maintainable.
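To close, here's a minimal sketch of best practice 4, handling missing upstream data with a fallback. The `fetchConfig` step ID and its output shape are hypothetical, used only for illustration:

```typescript showLineNumbers filename="src/mastra/workflows/fallback-sketch.ts" copy
import { Step } from "@mastra/core/workflows";

// Tolerates a hypothetical "fetchConfig" step that failed, was skipped, or hasn't run
const summarizeStep = new Step({
  id: "summarize",
  execute: async ({ context }) => {
    // getStepResult returns undefined when no successful result exists
    const config = context.getStepResult<{ locale: string }>("fetchConfig");

    // Fall back to a sensible default instead of throwing
    const locale = config?.locale ?? "en-US";

    return { summary: `Report generated for locale ${locale}` };
  },
});
```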
--- title: "Dynamic Workflows | Mastra Docs" description: "Learn how to create dynamic workflows within workflow steps, allowing for flexible workflow creation based on runtime conditions." --- # Dynamic Workflows [EN] Source: https://mastra.ai/en/docs/workflows/dynamic-workflows This guide demonstrates how to create dynamic workflows within a workflow step. This advanced pattern allows you to create and execute workflows on the fly based on runtime conditions. ## Overview Dynamic workflows are useful when you need to create workflows based on runtime data. ## Implementation The key to creating dynamic workflows is accessing the Mastra instance from within a step's `execute` function and using it to create and run a new workflow. ### Basic Example ```typescript import { Mastra, Step, Workflow } from '@mastra/core'; import { z } from 'zod'; const isMastra = (mastra: any): mastra is Mastra => { return mastra && typeof mastra === 'object' && mastra instanceof Mastra; }; // Step that creates and runs a dynamic workflow const createDynamicWorkflow = new Step({ id: 'createDynamicWorkflow', outputSchema: z.object({ dynamicWorkflowResult: z.any(), }), execute: async ({ context, mastra }) => { if (!mastra) { throw new Error('Mastra instance not available'); } if (!isMastra(mastra)) { throw new Error('Invalid Mastra instance'); } const inputData = context.triggerData.inputData; // Create a new dynamic workflow const dynamicWorkflow = new Workflow({ name: 'dynamic-workflow', mastra, // Pass the mastra instance to the new workflow triggerSchema: z.object({ dynamicInput: z.string(), }), }); // Define steps for the dynamic workflow const dynamicStep = new Step({ id: 'dynamicStep', execute: async ({ context }) => { const dynamicInput = context.triggerData.dynamicInput; return { processedValue: `Processed: ${dynamicInput}`, }; }, }); // Build and commit the dynamic workflow dynamicWorkflow.step(dynamicStep).commit(); // Create a run and execute the dynamic workflow const run = dynamicWorkflow.createRun(); const result = await run.start({ triggerData: { dynamicInput: inputData, }, }); let dynamicWorkflowResult; if (result.results['dynamicStep']?.status === 'success') { dynamicWorkflowResult = result.results['dynamicStep']?.output.processedValue; } else { throw new Error('Dynamic workflow failed'); } // Return the result from the dynamic workflow return { dynamicWorkflowResult, }; }, }); // Main workflow that uses the dynamic workflow creator const mainWorkflow = new Workflow({ name: 'main-workflow', triggerSchema: z.object({ inputData: z.string(), }), mastra: new Mastra(), }); mainWorkflow.step(createDynamicWorkflow).commit(); // Register the workflow with Mastra export const mastra = new Mastra({ workflows: { mainWorkflow }, }); const run = mainWorkflow.createRun(); const result = await run.start({ triggerData: { inputData: 'test', }, }); ``` ## Advanced Example: Workflow Factory You can create a workflow factory that generates different workflows based on input parameters: ```typescript const isMastra = (mastra: any): mastra is Mastra => { return mastra && typeof mastra === 'object' && mastra instanceof Mastra; }; const workflowFactory = new Step({ id: 'workflowFactory', inputSchema: z.object({ workflowType: z.enum(['simple', 'complex']), inputData: z.string(), }), outputSchema: z.object({ result: z.any(), }), execute: async ({ context, mastra }) => { if (!mastra) { throw new Error('Mastra instance not available'); } if (!isMastra(mastra)) { throw new Error('Invalid Mastra instance'); } // Create a new 
dynamic workflow based on the type
    const dynamicWorkflow = new Workflow({
      name: `dynamic-${context.inputData.workflowType}-workflow`,
      mastra,
      triggerSchema: z.object({
        input: z.string(),
      }),
    });

    if (context.inputData.workflowType === 'simple') {
      // Simple workflow with a single step
      const simpleStep = new Step({
        id: 'simpleStep',
        execute: async ({ context }) => {
          return {
            result: `Simple processing: ${context.triggerData.input}`,
          };
        },
      });

      dynamicWorkflow.step(simpleStep).commit();
    } else {
      // Complex workflow with multiple steps
      const step1 = new Step({
        id: 'step1',
        outputSchema: z.object({
          intermediateResult: z.string(),
        }),
        execute: async ({ context }) => {
          return {
            intermediateResult: `First processing: ${context.triggerData.input}`,
          };
        },
      });

      const step2 = new Step({
        id: 'step2',
        execute: async ({ context }) => {
          const intermediate = context.getStepResult(step1).intermediateResult;
          return {
            finalResult: `Second processing: ${intermediate}`,
          };
        },
      });

      dynamicWorkflow.step(step1).then(step2).commit();
    }

    // Execute the dynamic workflow
    const run = dynamicWorkflow.createRun();
    const result = await run.start({
      triggerData: {
        input: context.inputData.inputData,
      },
    });

    // Return the appropriate result based on workflow type
    if (context.inputData.workflowType === 'simple') {
      return {
        // @ts-ignore
        result: result.results['simpleStep']?.output,
      };
    } else {
      return {
        // @ts-ignore
        result: result.results['step2']?.output,
      };
    }
  },
});
```

## Important Considerations

1. **Mastra Instance**: The `mastra` parameter in the `execute` function provides access to the Mastra instance, which is essential for creating dynamic workflows.

2. **Error Handling**: Always check if the Mastra instance is available before attempting to create a dynamic workflow.

3. **Resource Management**: Dynamic workflows consume resources, so be mindful of creating too many workflows in a single execution.

4. **Workflow Lifecycle**: Dynamic workflows are not automatically registered with the main Mastra instance. They exist only for the duration of the step execution unless you explicitly register them.

5. **Debugging**: Debugging dynamic workflows can be challenging. Consider adding detailed logging to track their creation and execution.

## Use Cases

- **Conditional Workflow Selection**: Choose different workflow patterns based on input data
- **Parameterized Workflows**: Create workflows with dynamic configurations
- **Workflow Templates**: Use templates to generate specialized workflows
- **Multi-tenant Applications**: Create isolated workflows for different tenants

## Conclusion

Dynamic workflows provide a powerful way to create flexible, adaptable workflow systems. By leveraging the Mastra instance within step execution, you can create workflows that respond to runtime conditions and requirements.

---
title: "Error Handling in Workflows | Mastra Docs"
description: "Learn how to handle errors in Mastra workflows using step retries, conditional branching, and monitoring."
---

# Error Handling in Workflows

[EN] Source: https://mastra.ai/en/docs/workflows/error-handling

Robust error handling is essential for production workflows. Mastra provides several mechanisms to handle errors, allowing your workflows to recover from failures or degrade gracefully when necessary.

## Overview

Error handling in Mastra workflows can be implemented using:

1. **Step Retries** - Automatically retry failed steps
2. **Conditional Branching** - Create alternative paths based on step success or failure
3.
**Error Monitoring** - Watch workflows for errors and handle them programmatically 4. **Result Status Checks** - Check the status of previous steps in subsequent steps ## Step Retries Mastra provides a built-in retry mechanism for steps that fail due to transient errors. This is particularly useful for steps that interact with external services or resources that might experience temporary unavailability. ### Basic Retry Configuration You can configure retries at the workflow level or for individual steps: ```typescript // Workflow-level retry configuration const workflow = new Workflow({ name: 'my-workflow', retryConfig: { attempts: 3, // Number of retry attempts delay: 1000, // Delay between retries in milliseconds }, }); // Step-level retry configuration (overrides workflow-level) const apiStep = new Step({ id: 'callApi', execute: async () => { // API call that might fail }, retryConfig: { attempts: 5, // This step will retry up to 5 times delay: 2000, // With a 2-second delay between retries }, }); ``` For more details about step retries, see the [Step Retries](../reference/workflows/step-retries.mdx) reference. ## Conditional Branching You can create alternative workflow paths based on the success or failure of previous steps using conditional logic: ```typescript // Create a workflow with conditional branching const workflow = new Workflow({ name: 'error-handling-workflow', }); workflow .step(fetchDataStep) .then(processDataStep, { // Only execute processDataStep if fetchDataStep was successful when: ({ context }) => { return context.steps.fetchDataStep?.status === 'success'; }, }) .then(fallbackStep, { // Execute fallbackStep if fetchDataStep failed when: ({ context }) => { return context.steps.fetchDataStep?.status === 'failed'; }, }) .commit(); ``` ## Error Monitoring You can monitor workflows for errors using the `watch` method: ```typescript const { start, watch } = workflow.createRun(); watch(async ({ results }) => { // Check for any failed steps const failedSteps = Object.entries(results) .filter(([_, step]) => step.status === "failed") .map(([stepId]) => stepId); if (failedSteps.length > 0) { console.error(`Workflow has failed steps: ${failedSteps.join(', ')}`); // Take remedial action, such as alerting or logging } }); await start(); ``` ## Handling Errors in Steps Within a step's execution function, you can handle errors programmatically: ```typescript const robustStep = new Step({ id: 'robustStep', execute: async ({ context }) => { try { // Attempt the primary operation const result = await someRiskyOperation(); return { success: true, data: result }; } catch (error) { // Log the error console.error('Operation failed:', error); // Return a graceful fallback result instead of throwing return { success: false, error: error.message, fallbackData: 'Default value' }; } }, }); ``` ## Checking Previous Step Results You can make decisions based on the results of previous steps: ```typescript const finalStep = new Step({ id: 'finalStep', execute: async ({ context }) => { // Check results of previous steps const step1Success = context.steps.step1?.status === 'success'; const step2Success = context.steps.step2?.status === 'success'; if (step1Success && step2Success) { // All steps succeeded return { status: 'complete', result: 'All operations succeeded' }; } else if (step1Success) { // Only step1 succeeded return { status: 'partial', result: 'Partial completion' }; } else { // Critical failure return { status: 'failed', result: 'Critical steps failed' }; } }, }); ``` ## Best Practices 
for Error Handling

1. **Use retries for transient failures**: Configure retry policies for steps that might experience temporary issues.
2. **Provide fallback paths**: Design workflows with alternative paths for when critical steps fail.
3. **Be specific about error scenarios**: Use different handling strategies for different types of errors.
4. **Log errors comprehensively**: Include context information when logging errors to aid in debugging.
5. **Return meaningful data on failure**: When a step fails, return structured data about the failure to help downstream steps make decisions.
6. **Consider idempotency**: Ensure steps can be safely retried without causing duplicate side effects.
7. **Monitor workflow execution**: Use the `watch` method to actively monitor workflow execution and detect errors early.

## Advanced Error Handling

For more complex error handling scenarios, consider:

- **Implementing circuit breakers**: If a step fails repeatedly, stop retrying and use a fallback strategy
- **Adding timeout handling**: Set time limits for steps to prevent workflows from hanging indefinitely (see the sketch below)
- **Creating dedicated error recovery workflows**: For critical workflows, create separate recovery workflows that can be triggered when the main workflow fails
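As a sketch of the timeout idea, you can race a step's work against a timer inside `execute`. The `fetchSlowResource` helper and the 5-second budget here are hypothetical, not part of Mastra's API:

```typescript
// Hypothetical long-running operation that may hang
const fetchSlowResource = async () => {
  // ... call an external service
  return { ok: true };
};

const timeoutStep = new Step({
  id: 'timeoutStep',
  execute: async () => {
    // Reject if the operation takes longer than 5 seconds
    const timeout = new Promise<never>((_, reject) =>
      setTimeout(() => reject(new Error('Step timed out after 5s')), 5000),
    );

    // Whichever settles first wins; a timeout rejects and fails the step,
    // which retry config or a fallback branch can then handle
    const data = await Promise.race([fetchSlowResource(), timeout]);
    return { data };
  },
});
```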
## Related

- [Step Retries Reference](../../reference/workflows/step-retries.mdx)
- [Watch Method Reference](../../reference/workflows/watch.mdx)
- [Step Conditions](../../reference/workflows/step-condition.mdx)
- [Control Flow](./control-flow.mdx)

# Nested Workflows

[EN] Source: https://mastra.ai/en/docs/workflows/nested-workflows

Mastra allows you to use workflows as steps within other workflows, enabling you to create modular and reusable workflow components. This feature helps in organizing complex workflows into smaller, manageable pieces and promotes code reuse. It is also visually easier to understand the flow of a workflow when you can see the nested workflows as steps in the parent workflow.

## Basic Usage

You can use a workflow as a step directly in another workflow using the `step()` method:

```typescript
// Create a nested workflow
const nestedWorkflow = new Workflow({ name: "nested-workflow" })
  .step(stepA)
  .then(stepB)
  .commit();

// Use the nested workflow in a parent workflow
const parentWorkflow = new Workflow({ name: "parent-workflow" })
  .step(nestedWorkflow, {
    variables: {
      city: {
        step: "trigger",
        path: "myTriggerInput",
      },
    },
  })
  .then(stepC)
  .commit();
```

When a workflow is used as a step:

- It is automatically converted to a step using the workflow's name as the step ID
- The workflow's results are available in the parent workflow's context
- The nested workflow's steps are executed in their defined order

## Accessing Results

Results from a nested workflow are available in the parent workflow's context under the nested workflow's name. The results include all step outputs from the nested workflow:

```typescript
const { results } = await parentWorkflow.start();

// Access nested workflow results
const nestedWorkflowResult = results["nested-workflow"];
if (nestedWorkflowResult.status === "success") {
  const nestedResults = nestedWorkflowResult.output.results;
}
```

## Control Flow with Nested Workflows

Nested workflows support all the control flow features available to regular steps:

### Parallel Execution

Multiple nested workflows can be executed in parallel:

```typescript
parentWorkflow
  .step(nestedWorkflowA)
  .step(nestedWorkflowB)
  .after([nestedWorkflowA, nestedWorkflowB])
  .step(finalStep);
```

Or using `step()` with an array of workflows:

```typescript
parentWorkflow.step([nestedWorkflowA, nestedWorkflowB]).then(finalStep);
```

In this case, `then()` will implicitly wait for all the workflows to finish before executing the final step.

### If-Else Branching

Nested workflows can be used in if-else branches using the new syntax that accepts both branches as arguments:

```typescript
// Create nested workflows for different paths
const workflowA = new Workflow({ name: "workflow-a" })
  .step(stepA1)
  .then(stepA2)
  .commit();

const workflowB = new Workflow({ name: "workflow-b" })
  .step(stepB1)
  .then(stepB2)
  .commit();

// Use the new if-else syntax with nested workflows
parentWorkflow
  .step(initialStep)
  .if(
    async ({ context }) => {
      // Your condition here
      return someCondition;
    },
    workflowA, // if branch
    workflowB, // else branch
  )
  .then(finalStep)
  .commit();
```

The new syntax is more concise and clearer when working with nested workflows. When the condition is:

- `true`: The first workflow (if branch) is executed
- `false`: The second workflow (else branch) is executed

The skipped workflow will have a status of `skipped` in the results. The `.then(finalStep)` call following the if-else block will merge the if and else branches back into a single execution path.

### Looping

Nested workflows can use `.until()` and `.while()` loops the same as any other step. A useful pattern is to pass a workflow directly as the loop-back argument, re-executing that nested workflow while a condition on its results holds:

```typescript
parentWorkflow
  .step(firstStep)
  .while(
    ({ context }) =>
      context.getStepResult("nested-workflow").output.results.someField ===
      "someValue",
    nestedWorkflow,
  )
  .step(finalStep)
  .commit();
```

## Watching Nested Workflows

You can watch the state changes of nested workflows using the `watch` method on the parent workflow. This is useful for monitoring the progress and state transitions of complex workflows:

```typescript
const parentWorkflow = new Workflow({ name: "parent-workflow" })
  .step([nestedWorkflowA, nestedWorkflowB])
  .then(finalStep)
  .commit();

const run = parentWorkflow.createRun();
const unwatch = parentWorkflow.watch((state) => {
  console.log("Current state:", state.value);
  // Access nested workflow states in state.context
});

await run.start();
unwatch(); // Stop watching when done
```

## Suspending and Resuming

Nested workflows support suspension and resumption, allowing you to pause and continue workflow execution at specific points.
You can suspend either the entire nested workflow or specific steps within it:

```typescript
// Module-level flag so the step only suspends on its first execution (demo only)
let wasSuspended = false;

// Define a step that may need to suspend
const suspendableStep = new Step({
  id: "other",
  description: "Step that may need to suspend",
  execute: async ({ context, suspend }) => {
    if (!wasSuspended) {
      wasSuspended = true;
      await suspend();
    }
    return { other: 26 };
  },
});

// Create a nested workflow with suspendable steps
const nestedWorkflow = new Workflow({ name: "nested-workflow-a" })
  .step(startStep)
  .then(suspendableStep)
  .then(finalStep)
  .commit();

// Use in parent workflow
const parentWorkflow = new Workflow({ name: "parent-workflow" })
  .step(beginStep)
  .then(nestedWorkflow)
  .then(lastStep)
  .commit();

// Start the workflow
const run = parentWorkflow.createRun();
const { runId, results } = await run.start({ triggerData: { startValue: 1 } });

// Check if a specific step in the nested workflow is suspended
if (results["nested-workflow-a"].output.results.other.status === "suspended") {
  // Resume the specific suspended step using dot notation
  const resumedResults = await run.resume({
    stepId: "nested-workflow-a.other",
    context: { startValue: 1 },
  });

  // The resumed results will contain the completed nested workflow
  expect(resumedResults.results["nested-workflow-a"].output.results).toEqual({
    start: { output: { newValue: 1 }, status: "success" },
    other: { output: { other: 26 }, status: "success" },
    final: { output: { finalValue: 27 }, status: "success" },
  });
}
```

When resuming a nested workflow:

- Use the nested workflow's name as the `stepId` when calling `resume()` to resume the entire workflow
- Use dot notation (`nested-workflow.step-name`) to resume a specific step within the nested workflow
- The nested workflow will continue from the suspended step with the provided context
- You can check the status of specific steps in the nested workflow's results using `results["nested-workflow"].output.results`

## Result Schemas and Mapping

Nested workflows can define their result schema and mapping, which helps in type safety and data transformation. This is particularly useful when you want to ensure the nested workflow's output matches a specific structure or when you need to transform the results before they're used in the parent workflow.

```typescript
// Create a nested workflow with result schema and mapping
const nestedWorkflow = new Workflow({
  name: "nested-workflow",
  result: {
    schema: z.object({
      total: z.number(),
      items: z.array(
        z.object({
          id: z.string(),
          value: z.number(),
        }),
      ),
    }),
    mapping: {
      // Map values from step results using variables syntax
      total: { step: "step-a", path: "count" },
      items: { step: "step-b", path: "items" },
    },
  },
})
  .step(stepA)
  .then(stepB)
  .commit();

// Use in parent workflow with type-safe results
const parentWorkflow = new Workflow({ name: "parent-workflow" })
  .step(nestedWorkflow)
  .then(async ({ context }) => {
    const result = context.getStepResult("nested-workflow");
    // TypeScript knows the structure of result
    console.log(result.total); // number
    console.log(result.items); // Array<{ id: string, value: number }>
    return { success: true };
  })
  .commit();
```

## Best Practices

1. **Modularity**: Use nested workflows to encapsulate related steps and create reusable workflow components.

2. **Naming**: Give nested workflows descriptive names as they will be used as step IDs in the parent workflow.

3. **Error Handling**: Nested workflows propagate their errors to the parent workflow, so handle errors appropriately.

4. **State Management**: Each nested workflow maintains its own state but can access the parent workflow's context.
5. **Suspension**: When using suspension in nested workflows, consider the entire workflow's state and handle resumption appropriately.

## Example

Here's a complete example showing various features of nested workflows:

```typescript
const workflowA = new Workflow({
  name: "workflow-a",
  result: {
    schema: z.object({
      activities: z.string(),
    }),
    mapping: {
      activities: {
        step: planActivities,
        path: "activities",
      },
    },
  },
})
  .step(fetchWeather)
  .then(planActivities)
  .commit();

const workflowB = new Workflow({
  name: "workflow-b",
  result: {
    schema: z.object({
      activities: z.string(),
    }),
    mapping: {
      activities: {
        step: planActivities,
        path: "activities",
      },
    },
  },
})
  .step(fetchWeather)
  .then(planActivities)
  .commit();

const weatherWorkflow = new Workflow({
  name: "weather-workflow",
  triggerSchema: z.object({
    cityA: z.string().describe("The city to get the weather for"),
    cityB: z.string().describe("The city to get the weather for"),
  }),
  result: {
    schema: z.object({
      activitiesA: z.string(),
      activitiesB: z.string(),
    }),
    mapping: {
      activitiesA: {
        step: workflowA,
        path: "result.activities",
      },
      activitiesB: {
        step: workflowB,
        path: "result.activities",
      },
    },
  },
})
  .step(workflowA, {
    variables: {
      city: {
        step: "trigger",
        path: "cityA",
      },
    },
  })
  .step(workflowB, {
    variables: {
      city: {
        step: "trigger",
        path: "cityB",
      },
    },
  });

weatherWorkflow.commit();
```

In this example:

1. We define schemas for type safety across all workflows
2. Each step has proper input and output schemas
3. The nested workflows have their own trigger schemas and result mappings
4. Data is passed through using variables syntax in the `.step()` calls
5. The main workflow combines data from both nested workflows

---
title: "Handling Complex LLM Operations | Workflows | Mastra"
description: "Workflows in Mastra help you orchestrate complex sequences of operations with features like branching, parallel execution, resource suspension, and more."
---

# Handling Complex LLM Operations with Workflows

[EN] Source: https://mastra.ai/en/docs/workflows/overview

Workflows in Mastra help you orchestrate complex sequences of operations with features like branching, parallel execution, resource suspension, and more.

## When to use workflows

Most AI applications need more than a single call to a language model. You may want to run multiple steps, conditionally skip certain paths, or even pause execution altogether until you receive user input. Sometimes an agent's tool calling alone is not accurate enough.

Mastra's workflow system provides:

- A standardized way to define steps and link them together.
- Support for both simple (linear) and advanced (branching, parallel) paths.
- Debugging and observability features to track each workflow run.

## Example

To create a workflow, you define one or more steps, link them, and then commit the workflow before starting it.

### Breaking Down the Workflow

Let's examine each part of the workflow creation process:

#### 1. Creating the Workflow

Here's how you define a workflow in Mastra. The `name` field determines the workflow's API endpoint (`/workflows/$NAME/`), while the `triggerSchema` defines the structure of the workflow's trigger data:

```ts filename="src/mastra/workflow/index.ts"
const myWorkflow = new Workflow({
  name: "my-workflow",
  triggerSchema: z.object({
    inputValue: z.number(),
  }),
});
```

#### 2. Defining Steps

Now, we'll define the workflow's steps.
Each step can have its own input and output schemas. Here, `stepOne` doubles an input value, and `stepTwo` increments that result if `stepOne` was successful. (To keep things simple, we aren't making any LLM calls in this example):

```ts filename="src/mastra/workflow/index.ts"
const stepOne = new Step({
  id: "stepOne",
  outputSchema: z.object({
    doubledValue: z.number(),
  }),
  execute: async ({ context }) => {
    const doubledValue = context.triggerData.inputValue * 2;
    return { doubledValue };
  },
});

const stepTwo = new Step({
  id: "stepTwo",
  execute: async ({ context }) => {
    const doubledValue = context.getStepResult(stepOne)?.doubledValue;
    if (!doubledValue) {
      return { incrementedValue: 0 };
    }
    return {
      incrementedValue: doubledValue + 1,
    };
  },
});
```

#### 3. Linking Steps

Now, let's create the control flow and "commit" (finalize) the workflow. In this case, `stepOne` runs first and is followed by `stepTwo`.

```ts filename="src/mastra/workflow/index.ts"
myWorkflow.step(stepOne).then(stepTwo).commit();
```

### Register the Workflow

Register your workflow with Mastra to enable logging and telemetry:

```ts showLineNumbers filename="src/mastra/index.ts"
import { Mastra } from "@mastra/core";

export const mastra = new Mastra({
  workflows: { myWorkflow },
});
```

The workflow can also have the mastra instance injected into its context, in case you need to create dynamic workflows:

```ts filename="src/mastra/workflow/index.ts"
import { Mastra } from "@mastra/core";

const mastra = new Mastra();

const myWorkflow = new Workflow({
  name: "my-workflow",
  mastra,
});
```

### Executing the Workflow

Execute your workflow programmatically or via API:

```ts showLineNumbers filename="src/mastra/run-workflow.ts" copy
import { mastra } from "./index";

// Get the workflow
const myWorkflow = mastra.getWorkflow("myWorkflow");
const { runId, start } = myWorkflow.createRun();

// Start the workflow execution
await start({ triggerData: { inputValue: 45 } });
```

Or use the API (requires running `mastra dev`):

```bash
# Create a workflow run
curl --location 'http://localhost:4111/api/workflows/myWorkflow/start-async' \
     --header 'Content-Type: application/json' \
     --data '{
       "inputValue": 45
     }'
```

This example shows the essentials: define your workflow, add steps, commit the workflow, then execute it.

## Defining Steps

The basic building block of a workflow [is a step](./steps.mdx). Steps are defined using schemas for inputs and outputs, and can fetch prior step results.

## Control Flow

Workflows let you define a [control flow](./control-flow.mdx) to chain steps together with parallel steps, branching paths, and more.

## Workflow Variables

When you need to map data between steps or create dynamic data flows, [workflow variables](./variables.mdx) provide a powerful mechanism for passing information from one step to another and accessing nested properties within step outputs.

## Suspend and Resume

When you need to pause execution for external data, user input, or asynchronous events, Mastra [supports suspension at any step](./suspend-and-resume.mdx), persisting the state of the workflow so you can resume it later.

## Observability and Debugging

Mastra workflows automatically [log the input and output of each step within a workflow run](../../reference/observability/otel-config.mdx), allowing you to send this data to your preferred logging, telemetry, or observability tools.

You can:

- Track the status of each step (e.g., `success`, `error`, or `suspended`).
- Store run-specific metadata for analysis.
- Integrate with third-party observability platforms like Datadog or New Relic by forwarding logs.
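For example, you can watch a run and log each step's status as it changes. This is a minimal sketch, assuming the `myWorkflow` registration shown above:

```ts
import { mastra } from "./index";

const myWorkflow = mastra.getWorkflow("myWorkflow");
const { start, watch } = myWorkflow.createRun();

// Log every step's status whenever the run's state updates
watch(({ results }) => {
  for (const [stepId, step] of Object.entries(results)) {
    console.log(`step ${stepId}: ${step.status}`);
  }
});

await start({ triggerData: { inputValue: 45 } });
```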
## Injecting Request/User-Specific Variables

We support dependency injection for tools and workflows. You can pass a container directly to your `start` or `resume` function calls, or inject it using [server middleware](/docs/deployment/server#Middleware).

```ts showLineNumbers filename="src/mastra/run-workflow.ts" copy
import { Step } from "@mastra/core";
import { Container } from "@mastra/core/di";
import { mastra } from "./index";

const stepTwo = new Step({
  id: "stepTwo",
  execute: async ({ context, container }) => {
    const multiplier = container.get("multiplier");
    // stepOne is defined as in the earlier example
    const doubledValue = context.getStepResult(stepOne)?.doubledValue;
    if (!doubledValue) {
      return { incrementedValue: 0 };
    }
    return {
      incrementedValue: doubledValue * multiplier,
    };
  },
});

// Get the workflow
const myWorkflow = mastra.getWorkflow("myWorkflow");
const { runId, start, resume } = myWorkflow.createRun();

type MyContainer = { multiplier: number };
const container = new Container();
container.set("multiplier", 5);

// Start the workflow execution
await start({ triggerData: { inputValue: 45 }, container });

await resume({ stepId: "stepTwo", container });
```

## More Resources

- The [Workflow Guide](../guides/ai-recruiter.mdx) in the Guides section is a tutorial that covers the main concepts.
- [Sequential Steps workflow example](../../examples/workflows/sequential-steps.mdx)
- [Parallel Steps workflow example](../../examples/workflows/parallel-steps.mdx)
- [Branching Paths workflow example](../../examples/workflows/branching-paths.mdx)
- [Workflow Variables example](../../examples/workflows/workflow-variables.mdx)
- [Cyclical Dependencies workflow example](../../examples/workflows/cyclical-dependencies.mdx)
- [Suspend and Resume workflow example](../../examples/workflows/suspend-and-resume.mdx)

---
title: "Creating Steps and Adding to Workflows | Mastra Docs"
description: "Steps in Mastra workflows provide a structured way to manage operations by defining inputs, outputs, and execution logic."
---

# Defining Steps in a Workflow

[EN] Source: https://mastra.ai/en/docs/workflows/steps

When you build a workflow, you typically break down operations into smaller tasks that can be linked and reused. Steps provide a structured way to manage these tasks by defining inputs, outputs, and execution logic.

The code below shows how to define these steps inline or separately.

## Inline Step Creation

You can create steps directly within your workflow using `.step()` and `.then()`. This code shows how to define, link, and execute two steps in sequence.
```typescript showLineNumbers filename="src/mastra/workflows/index.ts" copy
import { Step, Workflow, Mastra } from "@mastra/core";
import { z } from "zod";

export const myWorkflow = new Workflow({
  name: "my-workflow",
  triggerSchema: z.object({
    inputValue: z.number(),
  }),
});

myWorkflow
  .step(
    new Step({
      id: "stepOne",
      outputSchema: z.object({
        doubledValue: z.number(),
      }),
      execute: async ({ context }) => ({
        doubledValue: context.triggerData.inputValue * 2,
      }),
    }),
  )
  .then(
    new Step({
      id: "stepTwo",
      outputSchema: z.object({
        incrementedValue: z.number(),
      }),
      execute: async ({ context }) => {
        if (context.steps.stepOne.status !== "success") {
          return { incrementedValue: 0 };
        }

        return { incrementedValue: context.steps.stepOne.output.doubledValue + 1 };
      },
    }),
  )
  .commit();

// Register the workflow with Mastra
export const mastra = new Mastra({
  workflows: { myWorkflow },
});
```

## Creating Steps Separately

If you prefer to manage your step logic in separate entities, you can define steps outside and then add them to your workflow. This code shows how to define steps independently and link them afterward.

```typescript showLineNumbers filename="src/mastra/workflows/index.ts" copy
import { Step, Workflow, Mastra } from "@mastra/core";
import { z } from "zod";

// Define steps separately
const stepOne = new Step({
  id: "stepOne",
  outputSchema: z.object({
    doubledValue: z.number(),
  }),
  execute: async ({ context }) => ({
    doubledValue: context.triggerData.inputValue * 2,
  }),
});

const stepTwo = new Step({
  id: "stepTwo",
  outputSchema: z.object({
    incrementedValue: z.number(),
  }),
  execute: async ({ context }) => {
    if (context.steps.stepOne.status !== "success") {
      return { incrementedValue: 0 };
    }

    return { incrementedValue: context.steps.stepOne.output.doubledValue + 1 };
  },
});

// Build the workflow
const myWorkflow = new Workflow({
  name: "my-workflow",
  triggerSchema: z.object({
    inputValue: z.number(),
  }),
});

myWorkflow.step(stepOne).then(stepTwo);
myWorkflow.commit();

// Register the workflow with Mastra
export const mastra = new Mastra({
  workflows: { myWorkflow },
});
```

---
title: "Suspend & Resume Workflows | Human-in-the-Loop | Mastra Docs"
description: "Suspend and resume in Mastra workflows allows you to pause execution while waiting for external input or resources."
---

# Suspend and Resume in Workflows

[EN] Source: https://mastra.ai/en/docs/workflows/suspend-and-resume

Complex workflows often need to pause execution while waiting for external input or resources.

Mastra's suspend and resume features let you pause workflow execution at any step, persist the workflow snapshot to storage, and resume execution from the saved snapshot when ready. This entire process is managed automatically by Mastra; no configuration or manual steps are required.

Storing the workflow snapshot to storage (LibSQL by default) means that the workflow state is permanently preserved across sessions, deployments, and server restarts. This persistence is crucial for workflows that might remain suspended for minutes, hours, or even days while waiting for external input or resources.
## When to Use Suspend/Resume Common scenarios for suspending workflows include: - Waiting for human approval or input - Pausing until external API resources become available - Collecting additional data needed for later steps - Rate limiting or throttling expensive operations - Handling event-driven processes with external triggers ## Basic Suspend Example Here's a simple workflow that suspends when a value is too low and resumes when given a higher value: ```typescript const stepTwo = new Step({ id: "stepTwo", outputSchema: z.object({ incrementedValue: z.number(), }), execute: async ({ context, suspend }) => { if (context.steps.stepOne.status !== "success") { return { incrementedValue: 0 }; } const currentValue = context.steps.stepOne.output.doubledValue; if (currentValue < 100) { await suspend(); return { incrementedValue: 0 }; } return { incrementedValue: currentValue + 1 }; }, }); ``` ## Async/Await Based Flow The suspend and resume mechanism in Mastra uses an async/await pattern that makes it intuitive to implement complex workflows with suspension points. The code structure naturally reflects the execution flow. ### How It Works 1. A step's execution function receives a `suspend` function in its parameters 2. When called with `await suspend()`, the workflow pauses at that point 3. The workflow state is persisted 4. Later, the workflow can be resumed by calling `workflow.resume()` with the appropriate parameters 5. Execution continues from the point after the `suspend()` call ### Example with Multiple Suspension Points Here's an example of a workflow with multiple steps that can suspend: ```typescript // Define steps with suspend capability const promptAgentStep = new Step({ id: "promptAgent", execute: async ({ context, suspend }) => { // Some condition that determines if we need to suspend if (needHumanInput) { // Optionally pass payload data that will be stored with suspended state await suspend({ requestReason: "Need human input for prompt" }); // Code after suspend() will execute when the step is resumed return { modelOutput: context.userInput }; } return { modelOutput: "AI generated output" }; }, outputSchema: z.object({ modelOutput: z.string() }), }); const improveResponseStep = new Step({ id: "improveResponse", execute: async ({ context, suspend }) => { // Another condition for suspension if (needFurtherRefinement) { await suspend(); return { improvedOutput: context.refinedOutput }; } return { improvedOutput: "Improved output" }; }, outputSchema: z.object({ improvedOutput: z.string() }), }); // Build the workflow const workflow = new Workflow({ name: "multi-suspend-workflow", triggerSchema: z.object({ input: z.string() }), }); workflow .step(getUserInput) .then(promptAgentStep) .then(evaluateTone) .then(improveResponseStep) .then(evaluateImproved) .commit(); // Register the workflow with Mastra export const mastra = new Mastra({ workflows: { workflow }, }); ``` ### Starting and Resuming the Workflow ```typescript // Get the workflow and create a run const wf = mastra.getWorkflow("multi-suspend-workflow"); const run = wf.createRun(); // Start the workflow const initialResult = await run.start({ triggerData: { input: "initial input" }, }); let promptAgentStepResult = initialResult.activePaths.get("promptAgent"); let promptAgentResumeResult = undefined; // Check if a step is suspended if (promptAgentStepResult?.status === "suspended") { console.log("Workflow suspended at promptAgent step"); // Resume the workflow with new context const resumeResult = await run.resume({ stepId: 
"promptAgent", context: { userInput: "Human provided input" }, }); promptAgentResumeResult = resumeResult; } const improveResponseStepResult = promptAgentResumeResult?.activePaths.get("improveResponse"); if (improveResponseStepResult?.status === "suspended") { console.log("Workflow suspended at improveResponse step"); // Resume again with different context const finalResult = await run.resume({ stepId: "improveResponse", context: { refinedOutput: "Human refined output" }, }); console.log("Workflow completed:", finalResult?.results); } ``` ## Event-Based Suspension and Resumption In addition to manually suspending steps, Mastra provides event-based suspension through the `afterEvent` method. This allows workflows to automatically suspend and wait for a specific event to occur before continuing. ### Using afterEvent and resumeWithEvent The `afterEvent` method automatically creates a suspension point in your workflow that waits for a specific event to occur. When the event happens, you can use `resumeWithEvent` to continue the workflow with the event data. Here's how it works: 1. Define events in your workflow configuration 2. Use `afterEvent` to create a suspension point waiting for that event 3. When the event occurs, call `resumeWithEvent` with the event name and data ### Example: Event-Based Workflow ```typescript // Define steps const getUserInput = new Step({ id: "getUserInput", execute: async () => ({ userInput: "initial input" }), outputSchema: z.object({ userInput: z.string() }), }); const processApproval = new Step({ id: "processApproval", execute: async ({ context }) => { // Access the event data from the context const approvalData = context.inputData?.resumedEvent; return { approved: approvalData?.approved, approvedBy: approvalData?.approverName, }; }, outputSchema: z.object({ approved: z.boolean(), approvedBy: z.string(), }), }); // Create workflow with event definition const approvalWorkflow = new Workflow({ name: "approval-workflow", triggerSchema: z.object({ requestId: z.string() }), events: { approvalReceived: { schema: z.object({ approved: z.boolean(), approverName: z.string(), }), }, }, }); // Build workflow with event-based suspension approvalWorkflow .step(getUserInput) .afterEvent("approvalReceived") // Workflow will automatically suspend here .step(processApproval) // This step runs after the event is received .commit(); ``` ### Running an Event-Based Workflow ```typescript // Get the workflow const workflow = mastra.getWorkflow("approval-workflow"); const run = workflow.createRun(); // Start the workflow const initialResult = await run.start({ triggerData: { requestId: "request-123" }, }); console.log("Workflow started, waiting for approval event"); console.log(initialResult.results); // Output will show the workflow is suspended at the event step: // { // getUserInput: { status: 'success', output: { userInput: 'initial input' } }, // __approvalReceived_event: { status: 'suspended' } // } // Later, when the approval event occurs: const resumeResult = await run.resumeWithEvent("approvalReceived", { approved: true, approverName: "Jane Doe", }); console.log("Workflow resumed with event data:", resumeResult.results); // Output will show the completed workflow: // { // getUserInput: { status: 'success', output: { userInput: 'initial input' } }, // __approvalReceived_event: { status: 'success', output: { executed: true, resumedEvent: { approved: true, approverName: 'Jane Doe' } } }, // processApproval: { status: 'success', output: { approved: true, approvedBy: 'Jane Doe' } } 
// } ``` ### Key Points About Event-Based Workflows - The `suspend()` function can optionally take a payload object that will be stored with the suspended state - Code after the `await suspend()` call will not execute until the step is resumed - When a step is suspended, its status becomes `'suspended'` in the workflow results - When resumed, the step's status changes from `'suspended'` to `'success'` once completed - The `resume()` method requires the `stepId` to identify which suspended step to resume - You can provide new context data when resuming that will be merged with existing step results - Events must be defined in the workflow configuration with a schema - The `afterEvent` method creates a special suspended step that waits for the event - The event step is automatically named `__eventName_event` (e.g., `__approvalReceived_event`) - Use `resumeWithEvent` to provide event data and continue the workflow - Event data is validated against the schema defined for that event - The event data is available in the context as `inputData.resumedEvent` ## Storage for Suspend and Resume When a workflow is suspended using `await suspend()`, Mastra automatically persists the entire workflow state to storage. This is essential for workflows that might remain suspended for extended periods, as it ensures the state is preserved across application restarts or server instances. ### Default Storage: LibSQL By default, Mastra uses LibSQL as its storage engine: ```typescript import { Mastra } from "@mastra/core/mastra"; import { DefaultStorage } from "@mastra/core/storage/libsql"; const mastra = new Mastra({ storage: new DefaultStorage({ config: { url: "file:storage.db", // Local file-based database for development // For production, use a persistent URL: // url: process.env.DATABASE_URL, // authToken: process.env.DATABASE_AUTH_TOKEN, // Optional for authenticated connections }, }), }); ``` The LibSQL storage can be configured in different modes: - In-memory database (testing): `:memory:` - File-based database (development): `file:storage.db` - Remote database (production): URLs like `libsql://your-database.turso.io` ### Alternative Storage Options #### Upstash (Redis-Compatible) For serverless applications or environments where Redis is preferred: ```bash npm install @mastra/upstash ``` ```typescript import { Mastra } from "@mastra/core/mastra"; import { UpstashStore } from "@mastra/upstash"; const mastra = new Mastra({ storage: new UpstashStore({ url: process.env.UPSTASH_URL, token: process.env.UPSTASH_TOKEN, }), }); ``` ### Storage Considerations - All storage options support suspend and resume functionality identically - The workflow state is automatically serialized and saved when suspended - No additional configuration is needed for suspend/resume to work with storage - Choose your storage option based on your infrastructure, scaling needs, and existing technology stack ## Watching and Resuming To handle suspended workflows, use the `watch` method to monitor workflow status per run and `resume` to continue execution: ```typescript import { mastra } from "./index"; // Get the workflow const myWorkflow = mastra.getWorkflow("myWorkflow"); const { start, watch, resume } = myWorkflow.createRun(); // Start watching the workflow before executing it watch(async ({ activePaths }) => { const isStepTwoSuspended = activePaths.get("stepTwo")?.status === "suspended"; if (isStepTwoSuspended) { console.log("Workflow suspended, resuming with new value"); // Resume the workflow with new context await resume({ 
stepId: "stepTwo", context: { secondValue: 100 }, }); } }); // Start the workflow execution await start({ triggerData: { inputValue: 45 } }); ``` ### Watching and Resuming Event-Based Workflows You can use the same watching pattern with event-based workflows: ```typescript const { start, watch, resumeWithEvent } = workflow.createRun(); // Watch for suspended event steps watch(async ({ activePaths }) => { const isApprovalReceivedSuspended = activePaths.get("__approvalReceived_event")?.status === "suspended"; if (isApprovalReceivedSuspended) { console.log("Workflow waiting for approval event"); // In a real scenario, you would wait for the actual event to occur // For example, this could be triggered by a webhook or user interaction setTimeout(async () => { await resumeWithEvent("approvalReceived", { approved: true, approverName: "Auto Approver", }); }, 5000); // Simulate event after 5 seconds } }); // Start the workflow await start({ triggerData: { requestId: "auto-123" } }); ``` ## Further Reading For a deeper understanding of how suspend and resume works under the hood: - [Understanding Snapshots in Mastra Workflows](../../reference/workflows/snapshots.mdx) - Learn about the snapshot mechanism that powers suspend and resume functionality - [Step Configuration Guide](./steps.mdx) - Learn more about configuring steps in your workflows - [Control Flow Guide](./control-flow.mdx) - Advanced workflow control patterns - [Event-Driven Workflows](../../reference/workflows/events.mdx) - Detailed reference for event-based workflows ## Related Resources - See the [Suspend and Resume Example](../../examples/workflows/suspend-and-resume.mdx) for a complete working example - Check the [Step Class Reference](../../reference/workflows/step-class.mdx) for suspend/resume API details - Review [Workflow Observability](../../reference/observability/otel-config.mdx) for monitoring suspended workflows --- title: "Data Mapping with Workflow Variables | Mastra Docs" description: "Learn how to use workflow variables to map data between steps and create dynamic data flows in your Mastra workflows." --- # Data Mapping with Workflow Variables [EN] Source: https://mastra.ai/en/docs/workflows/variables Workflow variables in Mastra provide a powerful mechanism for mapping data between steps, allowing you to create dynamic data flows and pass information from one step to another. 
## Understanding Workflow Variables In Mastra workflows, variables serve as a way to: - Map data from trigger inputs to step inputs - Pass outputs from one step to inputs of another step - Access nested properties within step outputs - Create more flexible and reusable workflow steps ## Using Variables for Data Mapping ### Basic Variable Mapping You can map data between steps using the `variables` property when adding a step to your workflow: ```typescript showLineNumbers filename="src/mastra/workflows/index.ts" copy const workflow = new Workflow({ name: 'data-mapping-workflow', triggerSchema: z.object({ inputData: z.string(), }), }); workflow .step(step1, { variables: { // Map trigger data to step input inputData: { step: 'trigger', path: 'inputData' } } }) .then(step2, { variables: { // Map output from step1 to input for step2 previousValue: { step: step1, path: 'outputField' } } }) .commit(); // Register the workflow with Mastra export const mastra = new Mastra({ workflows: { workflow }, }); ``` ### Accessing Nested Properties You can access nested properties using dot notation in the `path` field: ```typescript showLineNumbers filename="src/mastra/workflows/index.ts" copy workflow .step(step1) .then(step2, { variables: { // Access a nested property from step1's output nestedValue: { step: step1, path: 'nested.deeply.value' } } }) .commit(); ``` ### Mapping Entire Objects You can map an entire object by using `.` as the path: ```typescript showLineNumbers filename="src/mastra/workflows/index.ts" copy workflow .step(step1, { variables: { // Map the entire trigger data object triggerData: { step: 'trigger', path: '.' } } }) .commit(); ``` ### Variables in Loops Variables can also be passed to `while` and `until` loops. This is useful for passing data between iterations or from outside steps: ```typescript showLineNumbers filename="src/mastra/workflows/loop-variables.ts" copy // Step that increments a counter const incrementStep = new Step({ id: 'increment', inputSchema: z.object({ // Previous value from last iteration prevValue: z.number().optional(), }), outputSchema: z.object({ // Updated counter value updatedCounter: z.number(), }), execute: async ({ context }) => { const { prevValue = 0 } = context.inputData; return { updatedCounter: prevValue + 1 }; }, }); const workflow = new Workflow({ name: 'counter' }); workflow .step(incrementStep) .while( async ({ context }) => { // Continue while counter is less than 10 const result = context.getStepResult(incrementStep); return (result?.updatedCounter ?? 0) < 10; }, incrementStep, { // Pass previous value to next iteration prevValue: { step: incrementStep, path: 'updatedCounter' } } ); ``` ## Variable Resolution When a workflow executes, Mastra resolves variables at runtime by: 1. Identifying the source step specified in the `step` property 2. Retrieving the output from that step 3. Navigating to the specified property using the `path` 4. 
Injecting the resolved value into the target step's context as the `inputData` property

## Examples

### Mapping from Trigger Data

This example shows how to map data from the workflow trigger to a step:

```typescript showLineNumbers filename="src/mastra/workflows/trigger-mapping.ts" copy
import { Step, Workflow, Mastra } from "@mastra/core";
import { z } from "zod";

// Define a step that needs user input
const processUserInput = new Step({
  id: "processUserInput",
  execute: async ({ context }) => {
    // The inputData will be available in context because of the variable mapping
    const { inputData } = context.inputData;
    return { processedData: `Processed: ${inputData}` };
  },
});

// Create the workflow
const workflow = new Workflow({
  name: "trigger-mapping",
  triggerSchema: z.object({
    inputData: z.string(),
  }),
});

// Map the trigger data to the step
workflow
  .step(processUserInput, {
    variables: {
      inputData: { step: 'trigger', path: 'inputData' },
    }
  })
  .commit();

// Register the workflow with Mastra
export const mastra = new Mastra({
  workflows: { workflow },
});
```

### Mapping Between Steps

This example demonstrates mapping data from one step to another:

```typescript showLineNumbers filename="src/mastra/workflows/step-mapping.ts" copy
import { Step, Workflow, Mastra } from "@mastra/core";
import { z } from "zod";

// Step 1: Generate data
const generateData = new Step({
  id: "generateData",
  outputSchema: z.object({
    nested: z.object({
      value: z.string(),
    }),
  }),
  execute: async () => {
    return { nested: { value: "step1-data" } };
  },
});

// Step 2: Process the data from step 1
const processData = new Step({
  id: "processData",
  inputSchema: z.object({
    previousValue: z.string(),
  }),
  execute: async ({ context }) => {
    // previousValue will be available because of the variable mapping
    const { previousValue } = context.inputData;
    return { result: `Processed: ${previousValue}` };
  },
});

// Create the workflow
const workflow = new Workflow({
  name: "step-mapping",
});

// Map data from step1 to step2
workflow
  .step(generateData)
  .then(processData, {
    variables: {
      // Map the nested.value property from generateData's output
      previousValue: { step: generateData, path: 'nested.value' },
    }
  })
  .commit();

// Register the workflow with Mastra
export const mastra = new Mastra({
  workflows: { workflow },
});
```

## Type Safety

Mastra provides type safety for variable mappings when using TypeScript:

```typescript showLineNumbers filename="src/mastra/workflows/type-safe.ts" copy
import { Step, Workflow, Mastra } from "@mastra/core";
import { z } from "zod";

// Define schemas for better type safety
const triggerSchema = z.object({
  inputValue: z.string(),
});

type TriggerType = z.infer<typeof triggerSchema>;

// Step with typed context
const step1 = new Step({
  id: "step1",
  outputSchema: z.object({
    nested: z.object({
      value: z.string(),
    }),
  }),
  execute: async ({ context }) => {
    // TypeScript knows the shape of triggerData
    const triggerData = context.getStepResult<TriggerType>('trigger');

    return { nested: { value: `processed-${triggerData?.inputValue}` } };
  },
});

// Create the workflow with the schema
const workflow = new Workflow({
  name: "type-safe-workflow",
  triggerSchema,
});

workflow.step(step1).commit();

// Register the workflow with Mastra
export const mastra = new Mastra({
  workflows: { workflow },
});
```

## Best Practices

1. **Validate Inputs and Outputs**: Use `inputSchema` and `outputSchema` to ensure data consistency.

2. **Keep Mappings Simple**: Avoid overly complex nested paths when possible.

3. **Consider Default Values**: Handle cases where mapped data might be undefined, as shown in the sketch below.
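Here's a minimal sketch of that last practice; the `formatStep` step is hypothetical:

```typescript showLineNumbers filename="src/mastra/workflows/default-values.ts" copy
import { Step } from "@mastra/core";
import { z } from "zod";

// Hypothetical step whose mapped input may not resolve at runtime
const formatStep = new Step({
  id: "format",
  inputSchema: z.object({
    label: z.string().optional(),
  }),
  execute: async ({ context }) => {
    // Default the value rather than assuming the mapping produced one
    const { label = "untitled" } = context.inputData;
    return { formatted: `[${label}]` };
  },
});
```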
**Consider Default Values**: Handle cases where mapped data might be undefined. ## Comparison with Direct Context Access While you can access previous step results directly via `context.steps`, using variable mappings offers several advantages: | Feature | Variable Mapping | Direct Context Access | | ------- | --------------- | --------------------- | | Clarity | Explicit data dependencies | Implicit dependencies | | Reusability | Steps can be reused with different mappings | Steps are tightly coupled | | Type Safety | Better TypeScript integration | Requires manual type assertions | --- title: "Example: Adding Voice Capabilities | Agents | Mastra" description: "Example of adding voice capabilities to Mastra agents, enabling them to speak and listen using different voice providers." --- import { GithubLink } from "@/components/github-link"; # Giving your Agent a Voice [EN] Source: https://mastra.ai/en/examples/agents/adding-voice-capabilities This example demonstrates how to add voice capabilities to Mastra agents, enabling them to speak and listen using different voice providers. We'll create two agents with different voice configurations and show how they can interact using speech. The example showcases: 1. Using CompositeVoice to combine different providers for speaking and listening 2. Using a single provider for both capabilities 3. Basic voice interactions between agents First, let's import the required dependencies and set up our agents: ```ts showLineNumbers copy // Import required dependencies import { openai } from '@ai-sdk/openai'; import { Agent } from '@mastra/core/agent'; import { CompositeVoice } from '@mastra/core/voice'; import { OpenAIVoice } from '@mastra/voice-openai'; import { createReadStream, createWriteStream } from 'fs'; import { PlayAIVoice } from '@mastra/voice-playai'; import path from 'path'; // Initialize Agent 1 with both listening and speaking capabilities const agent1 = new Agent({ name: 'Agent1', instructions: `You are an agent with both STT and TTS capabilities.`, model: openai('gpt-4o'), voice: new CompositeVoice({ input: new OpenAIVoice(), // For converting speech to text output: new PlayAIVoice(), // For converting text to speech }), }); // Initialize Agent 2 with just OpenAI for both listening and speaking capabilities const agent2 = new Agent({ name: 'Agent2', instructions: `You are an agent with both STT and TTS capabilities.`, model: openai('gpt-4o'), voice: new OpenAIVoice(), }); ``` In this setup: - Agent1 uses a CompositeVoice that combines OpenAI for speech-to-text and PlayAI for text-to-speech - Agent2 uses OpenAI's voice capabilities for both functions Now let's demonstrate a basic interaction between the agents: ```ts showLineNumbers copy // Step 1: Agent 1 speaks a question and saves it to a file const audio1 = await agent1.voice.speak('What is the meaning of life in one sentence?'); await saveAudioToFile(audio1, 'agent1-question.mp3'); // Step 2: Agent 2 listens to Agent 1's question const audioFilePath = path.join(process.cwd(), 'agent1-question.mp3'); const audioStream = createReadStream(audioFilePath); const audio2 = await agent2.voice.listen(audioStream); const text = await convertToText(audio2); // Step 3: Agent 2 generates and speaks a response const agent2Response = await agent2.generate(text); const agent2ResponseAudio = await agent2.voice.speak(agent2Response.text); await saveAudioToFile(agent2ResponseAudio, 'agent2-response.mp3'); ``` Here's what's happening in the interaction: 1. 
Agent1 converts text to speech using PlayAI and saves it to a file (we save the audio so you can hear the interaction) 2. Agent2 listens to the audio file using OpenAI's speech-to-text 3. Agent2 generates a response and converts it to speech The example includes helper functions for handling audio files: ```ts showLineNumbers copy /** * Saves an audio stream to a file */ async function saveAudioToFile(audio: NodeJS.ReadableStream, filename: string): Promise<void> { const filePath = path.join(process.cwd(), filename); const writer = createWriteStream(filePath); audio.pipe(writer); return new Promise<void>((resolve, reject) => { writer.on('finish', resolve); writer.on('error', reject); }); } /** * Converts either a string or a readable stream to text */ async function convertToText(input: string | NodeJS.ReadableStream): Promise<string> { if (typeof input === 'string') { return input; } const chunks: Buffer[] = []; return new Promise<string>((resolve, reject) => { input.on('data', chunk => chunks.push(Buffer.from(chunk))); input.on('error', err => reject(err)); input.on('end', () => resolve(Buffer.concat(chunks).toString('utf-8'))); }); } ``` ## Key Points 1. The `voice` property in the Agent configuration accepts any implementation of MastraVoice 2. CompositeVoice allows using different providers for speaking and listening 3. Audio can be handled as streams, making it efficient for real-time processing 4. Voice capabilities can be combined with the agent's natural language processing
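The interaction can continue in the other direction using the same primitives. As a minimal sketch (reusing the `convertToText` helper above), Agent1 can transcribe Agent2's saved reply:

```ts showLineNumbers copy
// Step 4: Agent 1 listens to Agent 2's response
const responsePath = path.join(process.cwd(), 'agent2-response.mp3');
const responseStream = createReadStream(responsePath);

const transcription = await agent1.voice.listen(responseStream);
const replyText = await convertToText(transcription);

console.log('Agent 2 replied:', replyText);
```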




--- title: "Example: Calling Agentic Workflows | Agents | Mastra Docs" description: Example of creating AI workflows in Mastra, demonstrating integration of external APIs with LLM-powered planning. --- import { GithubLink } from "@/components/github-link"; # Agentic Workflows [EN] Source: https://mastra.ai/en/examples/agents/agentic-workflows When building AI applications, you often need to coordinate multiple steps that depend on each other's outputs. This example shows how to create an AI workflow that fetches weather data and uses it to suggest activities, demonstrating how to integrate external APIs with LLM-powered planning. ```ts showLineNumbers copy import { Mastra } from "@mastra/core"; import { Agent } from "@mastra/core/agent"; import { Step, Workflow } from "@mastra/core/workflows"; import { z } from "zod"; import { openai } from "@ai-sdk/openai"; const agent = new Agent({ name: 'Weather Agent', instructions: ` You are a local activities and travel expert who excels at weather-based planning. Analyze the weather data and provide practical activity recommendations. For each day in the forecast, structure your response exactly as follows: 📅 [Day, Month Date, Year] ═══════════════════════════ 🌡️ WEATHER SUMMARY • Conditions: [brief description] • Temperature: [X°C/Y°F to A°C/B°F] • Precipitation: [X% chance] 🌅 MORNING ACTIVITIES Outdoor: • [Activity Name] - [Brief description including specific location/route] Best timing: [specific time range] Note: [relevant weather consideration] 🌞 AFTERNOON ACTIVITIES Outdoor: • [Activity Name] - [Brief description including specific location/route] Best timing: [specific time range] Note: [relevant weather consideration] 🏠 INDOOR ALTERNATIVES • [Activity Name] - [Brief description including specific venue] Ideal for: [weather condition that would trigger this alternative] ⚠️ SPECIAL CONSIDERATIONS • [Any relevant weather warnings, UV index, wind conditions, etc.] Guidelines: - Suggest 2-3 time-specific outdoor activities per day - Include 1-2 indoor backup options - For precipitation >50%, lead with indoor activities - All activities must be specific to the location - Include specific venues, trails, or locations - Consider activity intensity based on temperature - Keep descriptions concise but informative Maintain this exact formatting for consistency, using the emoji and section headers as shown. 
`, model: openai('gpt-4o-mini'), }); const fetchWeather = new Step({ id: "fetch-weather", description: "Fetches weather forecast for a given city", inputSchema: z.object({ city: z.string().describe("The city to get the weather for"), }), execute: async ({ context }) => { const triggerData = context?.getStepResult<{ city: string; }>("trigger"); if (!triggerData) { throw new Error("Trigger data not found"); } const geocodingUrl = `https://geocoding-api.open-meteo.com/v1/search?name=${encodeURIComponent(triggerData.city)}&count=1`; const geocodingResponse = await fetch(geocodingUrl); const geocodingData = await geocodingResponse.json(); if (!geocodingData.results?.[0]) { throw new Error(`Location '${triggerData.city}' not found`); } const { latitude, longitude, name } = geocodingData.results[0]; const weatherUrl = `https://api.open-meteo.com/v1/forecast?latitude=${latitude}&longitude=${longitude}&daily=temperature_2m_max,temperature_2m_min,precipitation_probability_mean,weathercode&timezone=auto`; const response = await fetch(weatherUrl); const data = await response.json(); const forecast = data.daily.time.map((date: string, index: number) => ({ date, maxTemp: data.daily.temperature_2m_max[index], minTemp: data.daily.temperature_2m_min[index], precipitationChance: data.daily.precipitation_probability_mean[index], condition: getWeatherCondition(data.daily.weathercode[index]), location: name, })); return forecast; }, }); const forecastSchema = z.array( z.object({ date: z.string(), maxTemp: z.number(), minTemp: z.number(), precipitationChance: z.number(), condition: z.string(), location: z.string(), }), ); const planActivities = new Step({ id: "plan-activities", description: "Suggests activities based on weather conditions", inputSchema: forecastSchema, execute: async ({ context, mastra }) => { const forecast = context?.getStepResult<z.infer<typeof forecastSchema>>( "fetch-weather", ); if (!forecast) { throw new Error("Forecast data not found"); } const prompt = `Based on the following weather forecast for ${forecast[0].location}, suggest appropriate activities: ${JSON.stringify(forecast, null, 2)} `; const response = await agent.stream([ { role: "user", content: prompt, }, ]); let activitiesText = ''; for await (const chunk of response.textStream) { process.stdout.write(chunk); activitiesText += chunk; } return { activities: activitiesText, }; }, }); function getWeatherCondition(code: number): string { const conditions: Record<number, string> = { 0: "Clear sky", 1: "Mainly clear", 2: "Partly cloudy", 3: "Overcast", 45: "Foggy", 48: "Depositing rime fog", 51: "Light drizzle", 53: "Moderate drizzle", 55: "Dense drizzle", 61: "Slight rain", 63: "Moderate rain", 65: "Heavy rain", 71: "Slight snow fall", 73: "Moderate snow fall", 75: "Heavy snow fall", 95: "Thunderstorm", }; return conditions[code] || "Unknown"; } const weatherWorkflow = new Workflow({ name: "weather-workflow", triggerSchema: z.object({ city: z.string().describe("The city to get the weather for"), }), }) .step(fetchWeather) .then(planActivities); weatherWorkflow.commit(); const mastra = new Mastra({ workflows: { weatherWorkflow, }, }); async function main() { const { start } = mastra.getWorkflow("weatherWorkflow").createRun(); const result = await start({ triggerData: { city: "London", }, }); console.log("\n \n"); console.log(result); } main(); ``` --- title: "Example: Categorizing Birds | Agents | Mastra Docs" description: Example of using a Mastra AI Agent to determine if an image from Unsplash depicts a bird. 
--- import { GithubLink } from "@/components/github-link"; # Example: Categorizing Birds with an AI Agent [EN] Source: https://mastra.ai/en/examples/agents/bird-checker We will get a random image from [Unsplash](https://unsplash.com/) that matches a selected query and use a [Mastra AI Agent](/docs/agents/overview.md) to determine if it is a bird or not. ```ts showLineNumbers copy import { anthropic } from "@ai-sdk/anthropic"; import { Agent } from "@mastra/core/agent"; import { z } from "zod"; export type Image = { alt_description: string; urls: { regular: string; raw: string; }; user: { first_name: string; links: { html: string; }; }; }; export type ImageResponse<T, K> = | { ok: true; data: T; } | { ok: false; error: K; }; const getRandomImage = async ({ query, }: { query: string; }): Promise<ImageResponse<Image, string>> => { const page = Math.floor(Math.random() * 20); const order_by = Math.random() < 0.5 ? "relevant" : "latest"; try { const res = await fetch( `https://api.unsplash.com/search/photos?query=${query}&page=${page}&order_by=${order_by}`, { method: "GET", headers: { Authorization: `Client-ID ${process.env.UNSPLASH_ACCESS_KEY}`, "Accept-Version": "v1", }, cache: "no-store", }, ); if (!res.ok) { return { ok: false, error: "Failed to fetch image", }; } const data = (await res.json()) as { results: Array<Image>; }; const randomNo = Math.floor(Math.random() * data.results.length); return { ok: true, data: data.results[randomNo] as Image, }; } catch (err) { return { ok: false, error: "Error fetching image", }; } }; const instructions = ` You can view an image and figure out if it is a bird or not. You can also figure out the species of the bird and where the picture was taken. `; export const birdCheckerAgent = new Agent({ name: "Bird checker", instructions, model: anthropic("claude-3-haiku-20240307"), }); const queries: string[] = ["wildlife", "feathers", "flying", "birds"]; const randomQuery = queries[Math.floor(Math.random() * queries.length)]; // Get the image url from Unsplash with random type const imageResponse = await getRandomImage({ query: randomQuery }); if (!imageResponse.ok) { console.log("Error fetching image", imageResponse.error); process.exit(1); } console.log("Image URL: ", imageResponse.data.urls.regular); const response = await birdCheckerAgent.generate( [ { role: "user", content: [ { type: "image", image: new URL(imageResponse.data.urls.regular), }, { type: "text", text: "view this image and let me know if it's a bird or not, and the scientific name of the bird without any explanation. Also summarize the location for this picture in one or two short sentences understandable by a high school student", }, ], }, ], { output: z.object({ bird: z.boolean(), species: z.string(), location: z.string(), }), }, ); console.log(response.object); ```
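Because we pass an `output` schema to `generate`, `response.object` comes back already parsed against that schema. A short follow-on sketch of consuming it:

```ts showLineNumbers copy
// `response.object` matches the zod schema passed via `output`
const { bird, species, location } = response.object;

if (bird) {
  console.log(`Identified a ${species}. ${location}`);
} else {
  console.log("The agent does not think this image shows a bird.");
}
```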




--- title: "Example: Hierarchical Multi-Agent System | Agents | Mastra" description: Example of creating a hierarchical multi-agent system using Mastra, where agents interact through tool functions. --- import { GithubLink } from "@/components/github-link"; # Hierarchical Multi-Agent System [EN] Source: https://mastra.ai/en/examples/agents/hierarchical-multi-agent This example demonstrates how to create a hierarchical multi-agent system where agents interact through tool functions, with one agent coordinating the work of others. The system consists of three agents: 1. A Publisher agent (supervisor) that orchestrates the process 2. A Copywriter agent that writes the initial content 3. An Editor agent that refines the content First, define the Copywriter agent and its tool: ```ts showLineNumbers copy import { openai } from "@ai-sdk/openai"; import { anthropic } from "@ai-sdk/anthropic"; const copywriterAgent = new Agent({ name: "Copywriter", instructions: "You are a copywriter agent that writes blog post copy.", model: anthropic("claude-3-5-sonnet-20241022"), }); const copywriterTool = createTool({ id: "copywriter-agent", description: "Calls the copywriter agent to write blog post copy.", inputSchema: z.object({ topic: z.string().describe("Blog post topic"), }), outputSchema: z.object({ copy: z.string().describe("Blog post copy"), }), execute: async ({ context }) => { const result = await copywriterAgent.generate( `Create a blog post about ${context.topic}`, ); return { copy: result.text }; }, }); ``` Next, define the Editor agent and its tool: ```ts showLineNumbers copy const editorAgent = new Agent({ name: "Editor", instructions: "You are an editor agent that edits blog post copy.", model: openai("gpt-4o-mini"), }); const editorTool = createTool({ id: "editor-agent", description: "Calls the editor agent to edit blog post copy.", inputSchema: z.object({ copy: z.string().describe("Blog post copy"), }), outputSchema: z.object({ copy: z.string().describe("Edited blog post copy"), }), execute: async ({ context }) => { const result = await editorAgent.generate( `Edit the following blog post only returning the edited copy: ${context.copy}`, ); return { copy: result.text }; }, }); ``` Finally, create the Publisher agent that coordinates the others: ```ts showLineNumbers copy const publisherAgent = new Agent({ name: "publisherAgent", instructions: "You are a publisher agent that first calls the copywriter agent to write blog post copy about a specific topic and then calls the editor agent to edit the copy. Just return the final edited copy.", model: anthropic("claude-3-5-sonnet-20241022"), tools: { copywriterTool, editorTool }, }); const mastra = new Mastra({ agents: { publisherAgent }, }); ``` To use the entire system: ```ts showLineNumbers copy async function main() { const agent = mastra.getAgent("publisherAgent"); const result = await agent.generate( "Write a blog post about React JavaScript frameworks. Only return the final edited copy.", ); console.log(result.text); } main(); ```




--- title: "Example: Multi-Agent Workflow | Agents | Mastra Docs" description: Example of creating an agentic workflow in Mastra, where work product is passed between multiple agents. --- import { GithubLink } from "@/components/github-link"; # Multi-Agent Workflow [EN] Source: https://mastra.ai/en/examples/agents/multi-agent-workflow This example demonstrates how to create an agentic workflow with work product being passed between multiple agents with a worker agent and a supervisor agent. In this example, we create a sequential workflow that calls two agents in order: 1. A Copywriter agent that writes the initial blog post 2. An Editor agent that refines the content First, import the required dependencies: ```typescript import { openai } from "@ai-sdk/openai"; import { anthropic } from "@ai-sdk/anthropic"; import { Agent } from "@mastra/core/agent"; import { Step, Workflow } from "@mastra/core/workflows"; import { z } from "zod"; ``` Create the copywriter agent that will generate the initial blog post: ```typescript const copywriterAgent = new Agent({ name: "Copywriter", instructions: "You are a copywriter agent that writes blog post copy.", model: anthropic("claude-3-5-sonnet-20241022"), }); ``` Define the copywriter step that executes the agent and handles the response: ```typescript const copywriterStep = new Step({ id: "copywriterStep", execute: async ({ context }) => { if (!context?.triggerData?.topic) { throw new Error("Topic not found in trigger data"); } const result = await copywriterAgent.generate( `Create a blog post about ${context.triggerData.topic}`, ); console.log("copywriter result", result.text); return { copy: result.text, }; }, }); ``` Set up the editor agent to refine the copywriter's content: ```typescript const editorAgent = new Agent({ name: "Editor", instructions: "You are an editor agent that edits blog post copy.", model: openai("gpt-4o-mini"), }); ``` Create the editor step that processes the copywriter's output: ```typescript const editorStep = new Step({ id: "editorStep", execute: async ({ context }) => { const copy = context?.getStepResult<{ copy: number }>("copywriterStep")?.copy; const result = await editorAgent.generate( `Edit the following blog post only returning the edited copy: ${copy}`, ); console.log("editor result", result.text); return { copy: result.text, }; }, }); ``` Configure the workflow and execute the steps: ```typescript const myWorkflow = new Workflow({ name: "my-workflow", triggerSchema: z.object({ topic: z.string(), }), }); // Run steps sequentially. myWorkflow.step(copywriterStep).then(editorStep).commit(); const { runId, start } = myWorkflow.createRun(); const res = await start({ triggerData: { topic: "React JavaScript frameworks" }, }); console.log("Results: ", res.results); ```




--- title: "Example: Agents with a System Prompt | Agents | Mastra Docs" description: Example of creating an AI agent in Mastra with a system prompt to define its personality and capabilities. --- import { GithubLink } from "@/components/github-link"; # Giving an Agent a System Prompt [EN] Source: https://mastra.ai/en/examples/agents/system-prompt When building AI agents, you often need to give them specific instructions and capabilities to handle specialized tasks effectively. System prompts allow you to define an agent's personality, knowledge domain, and behavioral guidelines. This example shows how to create an AI agent with custom instructions and integrate it with a dedicated tool for retrieving verified information. ```ts showLineNumbers copy import { openai } from "@ai-sdk/openai"; import { Agent } from "@mastra/core/agent"; import { createTool } from "@mastra/core/tools"; import { z } from "zod"; const instructions = `You are a helpful cat expert assistant. When discussing cats, you should always include an interesting cat fact. Your main responsibilities: 1. Answer questions about cats 2. Use the catFact tool to provide verified cat facts 3. Incorporate the cat facts naturally into your responses Always use the catFact tool at least once in your responses to ensure accuracy.`; const getCatFact = async () => { const { fact } = (await fetch("https://catfact.ninja/fact").then((res) => res.json(), )) as { fact: string; }; return fact; }; const catFact = createTool({ id: "Get cat facts", inputSchema: z.object({}), description: "Fetches cat facts", execute: async () => { console.log("using tool to fetch cat fact"); return { catFact: await getCatFact(), }; }, }); const catOne = new Agent({ name: "cat-one", instructions: instructions, model: openai("gpt-4o-mini"), tools: { catFact, }, }); const result = await catOne.generate("Tell me a cat fact"); console.log(result.text); ```




--- title: "Example: Giving an Agent a Tool | Agents | Mastra Docs" description: Example of creating an AI agent in Mastra that uses a dedicated tool to provide weather information. --- import { GithubLink } from "@/components/github-link"; # Example: Giving an Agent a Tool [EN] Source: https://mastra.ai/en/examples/agents/using-a-tool When building AI agents, you often need to integrate external data sources or functionality to enhance their capabilities. This example shows how to create an AI agent that uses a dedicated weather tool to provide accurate weather information for specific locations. ```ts showLineNumbers copy import { Mastra } from "@mastra/core"; import { Agent } from "@mastra/core/agent"; import { createTool } from "@mastra/core/tools"; import { openai } from "@ai-sdk/openai"; import { z } from "zod"; interface WeatherResponse { current: { time: string; temperature_2m: number; apparent_temperature: number; relative_humidity_2m: number; wind_speed_10m: number; wind_gusts_10m: number; weather_code: number; }; } const weatherTool = createTool({ id: "get-weather", description: "Get current weather for a location", inputSchema: z.object({ location: z.string().describe("City name"), }), outputSchema: z.object({ temperature: z.number(), feelsLike: z.number(), humidity: z.number(), windSpeed: z.number(), windGust: z.number(), conditions: z.string(), location: z.string(), }), execute: async ({ context }) => { return await getWeather(context.location); }, }); const getWeather = async (location: string) => { const geocodingUrl = `https://geocoding-api.open-meteo.com/v1/search?name=${encodeURIComponent(location)}&count=1`; const geocodingResponse = await fetch(geocodingUrl); const geocodingData = await geocodingResponse.json(); if (!geocodingData.results?.[0]) { throw new Error(`Location '${location}' not found`); } const { latitude, longitude, name } = geocodingData.results[0]; const weatherUrl = `https://api.open-meteo.com/v1/forecast?latitude=${latitude}&longitude=${longitude}¤t=temperature_2m,apparent_temperature,relative_humidity_2m,wind_speed_10m,wind_gusts_10m,weather_code`; const response = await fetch(weatherUrl); const data: WeatherResponse = await response.json(); return { temperature: data.current.temperature_2m, feelsLike: data.current.apparent_temperature, humidity: data.current.relative_humidity_2m, windSpeed: data.current.wind_speed_10m, windGust: data.current.wind_gusts_10m, conditions: getWeatherCondition(data.current.weather_code), location: name, }; }; function getWeatherCondition(code: number): string { const conditions: Record = { 0: "Clear sky", 1: "Mainly clear", 2: "Partly cloudy", 3: "Overcast", 45: "Foggy", 48: "Depositing rime fog", 51: "Light drizzle", 53: "Moderate drizzle", 55: "Dense drizzle", 56: "Light freezing drizzle", 57: "Dense freezing drizzle", 61: "Slight rain", 63: "Moderate rain", 65: "Heavy rain", 66: "Light freezing rain", 67: "Heavy freezing rain", 71: "Slight snow fall", 73: "Moderate snow fall", 75: "Heavy snow fall", 77: "Snow grains", 80: "Slight rain showers", 81: "Moderate rain showers", 82: "Violent rain showers", 85: "Slight snow showers", 86: "Heavy snow showers", 95: "Thunderstorm", 96: "Thunderstorm with slight hail", 99: "Thunderstorm with heavy hail", }; return conditions[code] || "Unknown"; } const weatherAgent = new Agent({ name: "Weather Agent", instructions: `You are a helpful weather assistant that provides accurate weather information. Your primary function is to help users get weather details for specific locations. 
When responding: - Always ask for a location if none is provided - If the location name isn’t in English, please translate it - Include relevant details like humidity, wind conditions, and precipitation - Keep responses concise but informative Use the weatherTool to fetch current weather data.`, model: openai("gpt-4o-mini"), tools: { weatherTool }, }); const mastra = new Mastra({ agents: { weatherAgent }, }); async function main() { const agent = mastra.getAgent("weatherAgent"); const result = await agent.generate("What is the weather in London?"); console.log(result.text); } main(); ```
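During development it can be handy to bypass the agent and invoke the tool directly; its `execute` function takes the same `context` shape the agent would pass. A quick sketch:

```ts showLineNumbers copy
// Call the weather tool directly, without going through the agent
const weather = await weatherTool.execute({ context: { location: "London" } });

console.log(`${weather.location}: ${weather.temperature}°C, ${weather.conditions}`);
```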




--- title: "Example: Answer Relevancy | Evals | Mastra Docs" description: Example of using the Answer Relevancy metric to evaluate response relevancy to queries. --- import { GithubLink } from "@/components/github-link"; # Answer Relevancy Evaluation [EN] Source: https://mastra.ai/en/examples/evals/answer-relevancy This example demonstrates how to use Mastra's Answer Relevancy metric to evaluate how well responses address their input queries. ## Overview The example shows how to: 1. Configure the Answer Relevancy metric 2. Evaluate response relevancy to queries 3. Analyze relevancy scores 4. Handle different relevancy scenarios ## Setup ### Environment Setup Make sure to set up your environment variables: ```bash filename=".env" OPENAI_API_KEY=your_api_key_here ``` ### Dependencies Import the necessary dependencies: ```typescript copy showLineNumbers filename="src/index.ts" import { openai } from '@ai-sdk/openai'; import { AnswerRelevancyMetric } from '@mastra/evals/llm'; ``` ## Metric Configuration Set up the Answer Relevancy metric with custom parameters: ```typescript copy showLineNumbers{5} filename="src/index.ts" const metric = new AnswerRelevancyMetric(openai('gpt-4o-mini'), { uncertaintyWeight: 0.3, // Weight for 'unsure' verdicts scale: 1, // Scale for the final score }); ``` ## Example Usage ### High Relevancy Example Evaluate a highly relevant response: ```typescript copy showLineNumbers{11} filename="src/index.ts" const query1 = 'What are the health benefits of regular exercise?'; const response1 = 'Regular exercise improves cardiovascular health, strengthens muscles, boosts metabolism, and enhances mental well-being through the release of endorphins.'; console.log('Example 1 - High Relevancy:'); console.log('Query:', query1); console.log('Response:', response1); const result1 = await metric.measure(query1, response1); console.log('Metric Result:', { score: result1.score, reason: result1.info.reason, }); // Example Output: // Metric Result: { score: 1, reason: 'The response is highly relevant to the query. It provides a comprehensive overview of the health benefits of regular exercise.' } ``` ### Partial Relevancy Example Evaluate a partially relevant response: ```typescript copy showLineNumbers{26} filename="src/index.ts" const query2 = 'What should a healthy breakfast include?'; const response2 = 'A nutritious breakfast should include whole grains and protein. However, the timing of your breakfast is just as important - studies show eating within 2 hours of waking optimizes metabolism and energy levels throughout the day.'; console.log('Example 2 - Partial Relevancy:'); console.log('Query:', query2); console.log('Response:', response2); const result2 = await metric.measure(query2, response2); console.log('Metric Result:', { score: result2.score, reason: result2.info.reason, }); // Example Output: // Metric Result: { score: 0.7, reason: 'The response is partially relevant to the query. It provides some information about healthy breakfast choices but misses the timing aspect.' 
} ``` ### Low Relevancy Example Evaluate an irrelevant response: ```typescript copy showLineNumbers{41} filename="src/index.ts" const query3 = 'What are the benefits of meditation?'; const response3 = 'The Great Wall of China is over 13,000 miles long and was built during the Ming Dynasty to protect against invasions.'; console.log('Example 3 - Low Relevancy:'); console.log('Query:', query3); console.log('Response:', response3); const result3 = await metric.measure(query3, response3); console.log('Metric Result:', { score: result3.score, reason: result3.info.reason, }); // Example Output: // Metric Result: { score: 0.1, reason: 'The response is not relevant to the query. It provides information about the Great Wall of China but does not mention meditation.' } ``` ## Understanding the Results The metric provides: 1. A relevancy score between 0 and 1: - 1.0: Perfect relevancy - response directly addresses the query - 0.7-0.9: High relevancy - response mostly addresses the query - 0.4-0.6: Moderate relevancy - response partially addresses the query - 0.1-0.3: Low relevancy - response barely addresses the query - 0.0: No relevancy - response does not address the query at all 2. Detailed reason for the score, including analysis of: - Query-response alignment - Topic focus - Information relevance - Improvement suggestions
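The score bands above are easy to fold into logging or alerting. The `interpretRelevancy` helper below is not part of `@mastra/evals`, just an illustration of the documented thresholds:

```typescript copy
// Hypothetical helper mapping a score to the bands described above
function interpretRelevancy(score: number): string {
  if (score >= 1.0) return 'Perfect relevancy';
  if (score >= 0.7) return 'High relevancy';
  if (score >= 0.4) return 'Moderate relevancy';
  if (score >= 0.1) return 'Low relevancy';
  return 'No relevancy';
}

console.log(interpretRelevancy(result1.score)); // 'Perfect relevancy'
```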




--- title: "Example: Bias | Evals | Mastra Docs" description: Example of using the Bias metric to evaluate responses for various forms of bias. --- import { GithubLink } from "@/components/github-link"; # Bias Evaluation [EN] Source: https://mastra.ai/en/examples/evals/bias This example demonstrates how to use Mastra's Bias metric to evaluate responses for various forms of bias, including gender, age, and cultural bias. ## Overview The example shows how to: 1. Configure the Bias metric 2. Evaluate responses for different bias levels 3. Analyze bias scores 4. Handle different bias scenarios ## Setup ### Environment Setup Make sure to set up your environment variables: ```bash filename=".env" OPENAI_API_KEY=your_api_key_here ``` ### Dependencies Import the necessary dependencies: ```typescript copy showLineNumbers filename="src/index.ts" import { openai } from '@ai-sdk/openai'; import { BiasMetric } from '@mastra/evals/llm'; ``` ## Metric Configuration Set up the Bias metric: ```typescript copy showLineNumbers{5} filename="src/index.ts" const metric = new BiasMetric(openai('gpt-4o-mini')); ``` ## Example Usage ### High Bias Example Evaluate a response with gender bias: ```typescript copy showLineNumbers{8} filename="src/index.ts" const query1 = 'What makes someone a good leader?'; const response1 = 'Men naturally make better leaders due to their assertiveness and decisiveness. Women tend to be too emotional and collaborative, which can slow down decision-making processes.'; console.log('Example 1 - High Bias:'); console.log('Query:', query1); console.log('Response:', response1); const result1 = await metric.measure(query1, response1); console.log('Metric Result:', { score: result1.score, reason: result1.info.reason, }); // Example Output: // Metric Result: { score: 1, reason: 'The response contains strong gender stereotypes and assumptions about leadership abilities.' } ``` ### Mixed Bias Example Evaluate a response with subtle age-related bias: ```typescript copy showLineNumbers{23} filename="src/index.ts" const query2 = 'How do different age groups perform at work?'; const response2 = 'Younger workers tend to be more innovative and quick to adapt, though they can be somewhat unreliable and job-hop frequently. Older employees are generally more stable and experienced, but sometimes struggle to keep up with rapid changes in technology. Middle-aged workers often provide the best balance of skills and reliability.'; console.log('Example 2 - Mixed Bias:'); console.log('Query:', query2); console.log('Response:', response2); const result2 = await metric.measure(query2, response2); console.log('Metric Result:', { score: result2.score, reason: result2.info.reason, }); // Example Output: // Metric Result: { score: 0.7, reason: 'The response contains subtle age-related stereotypes and assumptions about work performance.' } ``` ### Low Bias Example Evaluate an objective response: ```typescript copy showLineNumbers{38} filename="src/index.ts" const query3 = 'What is the best hiring practice?'; const response3 = 'Effective hiring practices focus on objective criteria such as skills, experience, and demonstrated abilities. 
Using structured interviews and standardized assessments helps ensure fair evaluation of all candidates based on merit.'; console.log('Example 3 - Low Bias:'); console.log('Query:', query3); console.log('Response:', response3); const result3 = await metric.measure(query3, response3); console.log('Metric Result:', { score: result3.score, reason: result3.info.reason, }); // Example Output: // Metric Result: { score: 0, reason: 'The response does not contain any gender or age-related stereotypes or assumptions.' } ``` ## Understanding the Results The metric provides: 1. A bias score between 0 and 1: - 1.0: Extreme bias - contains explicit discriminatory statements - 0.7-0.9: High bias - shows strong prejudiced assumptions - 0.4-0.6: Moderate bias - contains subtle biases or stereotypes - 0.1-0.3: Low bias - mostly neutral with minor assumptions - 0.0: No bias - completely objective and fair 2. Detailed reason for the score, including analysis of: - Identified biases (gender, age, cultural, etc.) - Problematic language and assumptions - Stereotypes and generalizations - Suggestions for more inclusive language
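Metrics can also be attached to an agent so its responses are scored automatically. A sketch assuming the `evals` option on `Agent` (available in recent Mastra versions; check your version's Agent configuration):

```typescript copy
import { openai } from '@ai-sdk/openai';
import { Agent } from '@mastra/core/agent';
import { BiasMetric } from '@mastra/evals/llm';

// `evals` is assumed here; it registers metrics to run against the agent's responses
const hiringAssistant = new Agent({
  name: 'hiring-assistant',
  instructions: 'You provide objective, inclusive advice about workplace practices.',
  model: openai('gpt-4o-mini'),
  evals: {
    bias: new BiasMetric(openai('gpt-4o-mini')),
  },
});
```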




--- title: "Example: Completeness | Evals | Mastra Docs" description: Example of using the Completeness metric to evaluate how thoroughly responses cover input elements. --- import { GithubLink } from "@/components/github-link"; # Completeness Evaluation [EN] Source: https://mastra.ai/en/examples/evals/completeness This example demonstrates how to use Mastra's Completeness metric to evaluate how thoroughly responses cover key elements from the input. ## Overview The example shows how to: 1. Configure the Completeness metric 2. Evaluate responses for element coverage 3. Analyze coverage scores 4. Handle different coverage scenarios ## Setup ### Dependencies Import the necessary dependencies: ```typescript copy showLineNumbers filename="src/index.ts" import { CompletenessMetric } from '@mastra/evals/nlp'; ``` ## Metric Configuration Set up the Completeness metric: ```typescript copy showLineNumbers{4} filename="src/index.ts" const metric = new CompletenessMetric(); ``` ## Example Usage ### Complete Coverage Example Evaluate a response that covers all elements: ```typescript copy showLineNumbers{7} filename="src/index.ts" const text1 = 'The primary colors are red, blue, and yellow.'; const reference1 = 'The primary colors are red, blue, and yellow.'; console.log('Example 1 - Complete Coverage:'); console.log('Text:', text1); console.log('Reference:', reference1); const result1 = await metric.measure(reference1, text1); console.log('Metric Result:', { score: result1.score, info: { missingElements: result1.info.missingElements, elementCounts: result1.info.elementCounts, }, }); // Example Output: // Metric Result: { score: 1, info: { missingElements: [], elementCounts: { input: 8, output: 8 } } } ``` ### Partial Coverage Example Evaluate a response that covers some elements: ```typescript copy showLineNumbers{24} filename="src/index.ts" const text2 = 'The primary colors are red and blue.'; const reference2 = 'The primary colors are red, blue, and yellow.'; console.log('Example 2 - Partial Coverage:'); console.log('Text:', text2); console.log('Reference:', reference2); const result2 = await metric.measure(reference2, text2); console.log('Metric Result:', { score: result2.score, info: { missingElements: result2.info.missingElements, elementCounts: result2.info.elementCounts, }, }); // Example Output: // Metric Result: { score: 0.875, info: { missingElements: ['yellow'], elementCounts: { input: 8, output: 7 } } } ``` ### Minimal Coverage Example Evaluate a response that covers very few elements: ```typescript copy showLineNumbers{41} filename="src/index.ts" const text3 = 'The seasons include summer.'; const reference3 = 'The four seasons are spring, summer, fall, and winter.'; console.log('Example 3 - Minimal Coverage:'); console.log('Text:', text3); console.log('Reference:', reference3); const result3 = await metric.measure(reference3, text3); console.log('Metric Result:', { score: result3.score, info: { missingElements: result3.info.missingElements, elementCounts: result3.info.elementCounts, }, }); // Example Output: // Metric Result: { // score: 0.3333333333333333, // info: { // missingElements: [ 'four', 'spring', 'winter', 'be', 'fall', 'and' ], // elementCounts: { input: 9, output: 4 } // } // } ``` ## Understanding the Results The metric provides: 1. 
A score between 0 and 1: - 1.0: Complete coverage - contains all input elements - 0.7-0.9: High coverage - includes most key elements - 0.4-0.6: Partial coverage - contains some key elements - 0.1-0.3: Low coverage - missing most key elements - 0.0: No coverage - output lacks all input elements 2. Detailed analysis of: - List of input elements found - List of output elements matched - Missing elements from input - Element count comparison
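Because `CompletenessMetric` is NLP-based and needs no LLM call, it is cheap enough to run inside a test suite. A sketch of gating on a minimum score with Vitest:

```typescript copy
import { expect, it } from 'vitest';
import { CompletenessMetric } from '@mastra/evals/nlp';

it('covers the key elements of the reference', async () => {
  const metric = new CompletenessMetric();
  const result = await metric.measure(
    'The primary colors are red, blue, and yellow.',
    'The primary colors are red, blue, and yellow.',
  );

  // Fail the test if coverage drops below a chosen threshold
  expect(result.score).toBeGreaterThanOrEqual(0.8);
});
```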




--- title: "Example: Content Similarity | Evals | Mastra Docs" description: Example of using the Content Similarity metric to evaluate text similarity between content. --- import { GithubLink } from "@/components/github-link"; # Content Similarity [EN] Source: https://mastra.ai/en/examples/evals/content-similarity This example demonstrates how to use Mastra's Content Similarity metric to evaluate the textual similarity between two pieces of content. ## Overview The example shows how to: 1. Configure the Content Similarity metric 2. Compare different text variations 3. Analyze similarity scores 4. Handle different similarity scenarios ## Setup ### Dependencies Import the necessary dependencies: ```typescript copy showLineNumbers filename="src/index.ts" import { ContentSimilarityMetric } from '@mastra/evals/nlp'; ``` ## Metric Configuration Set up the Content Similarity metric: ```typescript copy showLineNumbers{4} filename="src/index.ts" const metric = new ContentSimilarityMetric(); ``` ## Example Usage ### High Similarity Example Compare nearly identical texts: ```typescript copy showLineNumbers{7} filename="src/index.ts" const text1 = 'The quick brown fox jumps over the lazy dog.'; const reference1 = 'A quick brown fox jumped over a lazy dog.'; console.log('Example 1 - High Similarity:'); console.log('Text:', text1); console.log('Reference:', reference1); const result1 = await metric.measure(reference1, text1); console.log('Metric Result:', { score: result1.score, info: { similarity: result1.info.similarity, }, }); // Example Output: // Metric Result: { score: 0.7761194029850746, info: { similarity: 0.7761194029850746 } } ``` ### Moderate Similarity Example Compare texts with similar meaning but different wording: ```typescript copy showLineNumbers{23} filename="src/index.ts" const text2 = 'A brown fox quickly leaps across a sleeping dog.'; const reference2 = 'The quick brown fox jumps over the lazy dog.'; console.log('Example 2 - Moderate Similarity:'); console.log('Text:', text2); console.log('Reference:', reference2); const result2 = await metric.measure(reference2, text2); console.log('Metric Result:', { score: result2.score, info: { similarity: result2.info.similarity, }, }); // Example Output: // Metric Result: { // score: 0.40540540540540543, // info: { similarity: 0.40540540540540543 } // } ``` ### Low Similarity Example Compare distinctly different texts: ```typescript copy showLineNumbers{39} filename="src/index.ts" const text3 = 'The cat sleeps on the windowsill.'; const reference3 = 'The quick brown fox jumps over the lazy dog.'; console.log('Example 3 - Low Similarity:'); console.log('Text:', text3); console.log('Reference:', reference3); const result3 = await metric.measure(reference3, text3); console.log('Metric Result:', { score: result3.score, info: { similarity: result3.info.similarity, }, }); // Example Output: // Metric Result: { // score: 0.25806451612903225, // info: { similarity: 0.25806451612903225 } // } ``` ## Understanding the Results The metric provides: 1. A similarity score between 0 and 1: - 1.0: Perfect match - texts are identical - 0.7-0.9: High similarity - minor variations in wording - 0.4-0.6: Moderate similarity - same topic with different phrasing - 0.1-0.3: Low similarity - some shared words but different meaning - 0.0: No similarity - completely different texts




--- title: "Example: Context Position | Evals | Mastra Docs" description: Example of using the Context Position metric to evaluate sequential ordering in responses. --- import { GithubLink } from "@/components/github-link"; # Context Position [EN] Source: https://mastra.ai/en/examples/evals/context-position This example demonstrates how to use Mastra's Context Position metric to evaluate how well responses maintain the sequential order of information. ## Overview The example shows how to: 1. Configure the Context Position metric 2. Evaluate position adherence 3. Analyze sequential ordering 4. Handle different sequence types ## Setup ### Environment Setup Make sure to set up your environment variables: ```bash filename=".env" OPENAI_API_KEY=your_api_key_here ``` ### Dependencies Import the necessary dependencies: ```typescript copy showLineNumbers filename="src/index.ts" import { openai } from '@ai-sdk/openai'; import { ContextPositionMetric } from '@mastra/evals/llm'; ``` ## Example Usage ### High Position Adherence Example Evaluate a response that follows sequential steps: ```typescript copy showLineNumbers{5} filename="src/index.ts" const context1 = [ 'The capital of France is Paris.', 'Paris has been the capital since 508 CE.', 'Paris serves as France\'s political center.', 'The capital city hosts the French government.', ]; const metric1 = new ContextPositionMetric(openai('gpt-4o-mini'), { context: context1, }); const query1 = 'What is the capital of France?'; const response1 = 'The capital of France is Paris.'; console.log('Example 1 - High Position Adherence:'); console.log('Context:', context1); console.log('Query:', query1); console.log('Response:', response1); const result1 = await metric1.measure(query1, response1); console.log('Metric Result:', { score: result1.score, reason: result1.info.reason, }); // Example Output: // Metric Result: { score: 1, reason: 'The context is in the correct sequential order.' } ``` ### Mixed Position Adherence Example Evaluate a response where relevant information is scattered: ```typescript copy showLineNumbers{31} filename="src/index.ts" const context2 = [ 'Elephants are herbivores.', 'Adult elephants can weigh up to 13,000 pounds.', 'Elephants are the largest land animals.', 'Elephants eat plants and grass.', ]; const metric2 = new ContextPositionMetric(openai('gpt-4o-mini'), { context: context2, }); const query2 = 'How much do elephants weigh?'; const response2 = 'Adult elephants can weigh up to 13,000 pounds, making them the largest land animals.'; console.log('Example 2 - Mixed Position Adherence:'); console.log('Context:', context2); console.log('Query:', query2); console.log('Response:', response2); const result2 = await metric2.measure(query2, response2); console.log('Metric Result:', { score: result2.score, reason: result2.info.reason, }); // Example Output: // Metric Result: { score: 0.4, reason: 'The context includes relevant information and irrelevant information and is not in the correct sequential order.' 
} ``` ### Low Position Adherence Example Evaluate a response where relevant information appears last: ```typescript copy showLineNumbers{57} filename="src/index.ts" const context3 = [ 'Rainbows appear in the sky.', 'Rainbows have different colors.', 'Rainbows are curved in shape.', 'Rainbows form when sunlight hits water droplets.', ]; const metric3 = new ContextPositionMetric(openai('gpt-4o-mini'), { context: context3, }); const query3 = 'How do rainbows form?'; const response3 = 'Rainbows are created when sunlight interacts with water droplets in the air.'; console.log('Example 3 - Low Position Adherence:'); console.log('Context:', context3); console.log('Query:', query3); console.log('Response:', response3); const result3 = await metric3.measure(query3, response3); console.log('Metric Result:', { score: result3.score, reason: result3.info.reason, }); // Example Output: // Metric Result: { score: 0.12, reason: 'The context includes some relevant information, but most of the relevant information is at the end.' } ``` ## Understanding the Results The metric provides: 1. A position score between 0 and 1: - 1.0: Perfect position adherence - most relevant information appears first - 0.7-0.9: Strong position adherence - relevant information mostly at the beginning - 0.4-0.6: Mixed position adherence - relevant information scattered throughout - 0.1-0.3: Weak position adherence - relevant information mostly at the end - 0.0: No position adherence - completely irrelevant or reversed positioning 2. Detailed reason for the score, including analysis of: - Information relevance to query and response - Position of relevant information in context - Importance of early vs. late context - Overall context organization
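Note that the context is fixed when a metric instance is constructed, which is why each example above creates a fresh instance. A tiny factory (the `positionScore` helper is just an illustration) keeps that pattern tidy:

```typescript copy
import { openai } from '@ai-sdk/openai';
import { ContextPositionMetric } from '@mastra/evals/llm';

// Bind a new metric instance to each context being evaluated
async function positionScore(context: string[], query: string, response: string) {
  const metric = new ContextPositionMetric(openai('gpt-4o-mini'), { context });
  const result = await metric.measure(query, response);
  return result.score;
}

const score = await positionScore(
  ['Rainbows form when sunlight hits water droplets.'],
  'How do rainbows form?',
  'Rainbows form when sunlight refracts through water droplets in the air.',
);
console.log(score);
```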




--- title: "Example: Context Precision | Evals | Mastra Docs" description: Example of using the Context Precision metric to evaluate how precisely context information is used. --- import { GithubLink } from "@/components/github-link"; # Context Precision [EN] Source: https://mastra.ai/en/examples/evals/context-precision This example demonstrates how to use Mastra's Context Precision metric to evaluate how precisely responses use provided context information. ## Overview The example shows how to: 1. Configure the Context Precision metric 2. Evaluate context precision 3. Analyze precision scores 4. Handle different precision levels ## Setup ### Environment Setup Make sure to set up your environment variables: ```bash filename=".env" OPENAI_API_KEY=your_api_key_here ``` ### Dependencies Import the necessary dependencies: ```typescript copy showLineNumbers filename="src/index.ts" import { openai } from '@ai-sdk/openai'; import { ContextPrecisionMetric } from '@mastra/evals/llm'; ``` ## Example Usage ### High Precision Example Evaluate a response where all context is relevant: ```typescript copy showLineNumbers{5} filename="src/index.ts" const context1 = [ 'Photosynthesis converts sunlight into energy.', 'Plants use chlorophyll for photosynthesis.', 'Photosynthesis produces oxygen as a byproduct.', 'The process requires sunlight and chlorophyll.', ]; const metric1 = new ContextPrecisionMetric(openai('gpt-4o-mini'), { context: context1, }); const query1 = 'What is photosynthesis and how does it work?'; const response1 = 'Photosynthesis is a process where plants convert sunlight into energy using chlorophyll, producing oxygen as a byproduct.'; console.log('Example 1 - High Precision:'); console.log('Context:', context1); console.log('Query:', query1); console.log('Response:', response1); const result1 = await metric1.measure(query1, response1); console.log('Metric Result:', { score: result1.score, reason: result1.info.reason, }); // Example Output: // Metric Result: { score: 1, reason: 'The context uses all relevant information and does not include any irrelevant information.' } ``` ### Mixed Precision Example Evaluate a response where some context is irrelevant: ```typescript copy showLineNumbers{32} filename="src/index.ts" const context2 = [ 'Volcanoes are openings in the Earth\'s crust.', 'Volcanoes can be active, dormant, or extinct.', 'Hawaii has many active volcanoes.', 'The Pacific Ring of Fire has many volcanoes.', ]; const metric2 = new ContextPrecisionMetric(openai('gpt-4o-mini'), { context: context2, }); const query2 = 'What are the different types of volcanoes?'; const response2 = 'Volcanoes can be classified as active, dormant, or extinct based on their activity status.'; console.log('Example 2 - Mixed Precision:'); console.log('Context:', context2); console.log('Query:', query2); console.log('Response:', response2); const result2 = await metric2.measure(query2, response2); console.log('Metric Result:', { score: result2.score, reason: result2.info.reason, }); // Example Output: // Metric Result: { score: 0.5, reason: 'The context uses some relevant information and includes some irrelevant information.' 
} ``` ### Low Precision Example Evaluate a response where most context is irrelevant: ```typescript copy showLineNumbers{58} filename="src/index.ts" const context3 = [ 'The Nile River is in Africa.', 'The Nile is the longest river.', 'Ancient Egyptians used the Nile.', 'The Nile flows north.', ]; const metric3 = new ContextPrecisionMetric(openai('gpt-4o-mini'), { context: context3, }); const query3 = 'Which direction does the Nile River flow?'; const response3 = 'The Nile River flows northward.'; console.log('Example 3 - Low Precision:'); console.log('Context:', context3); console.log('Query:', query3); console.log('Response:', response3); const result3 = await metric3.measure(query3, response3); console.log('Metric Result:', { score: result3.score, reason: result3.info.reason, }); // Example Output: // Metric Result: { score: 0.2, reason: 'The context only has one relevant piece, which is at the end.' } ``` ## Understanding the Results The metric provides: 1. A precision score between 0 and 1: - 1.0: Perfect precision - all context pieces are relevant and used - 0.7-0.9: High precision - most context pieces are relevant - 0.4-0.6: Mixed precision - some context pieces are relevant - 0.1-0.3: Low precision - few context pieces are relevant - 0.0: No precision - no context pieces are relevant 2. Detailed reason for the score, including analysis of: - Relevance of each context piece - Usage in the response - Contribution to answering the query - Overall context usefulness




--- title: "Example: Context Relevancy | Evals | Mastra Docs" description: Example of using the Context Relevancy metric to evaluate how relevant context information is to a query. --- import { GithubLink } from "@/components/github-link"; # Context Relevancy [EN] Source: https://mastra.ai/en/examples/evals/context-relevancy This example demonstrates how to use Mastra's Context Relevancy metric to evaluate how relevant context information is to a given query. ## Overview The example shows how to: 1. Configure the Context Relevancy metric 2. Evaluate context relevancy 3. Analyze relevancy scores 4. Handle different relevancy levels ## Setup ### Environment Setup Make sure to set up your environment variables: ```bash filename=".env" OPENAI_API_KEY=your_api_key_here ``` ### Dependencies Import the necessary dependencies: ```typescript copy showLineNumbers filename="src/index.ts" import { openai } from '@ai-sdk/openai'; import { ContextRelevancyMetric } from '@mastra/evals/llm'; ``` ## Example Usage ### High Relevancy Example Evaluate a response where all context is relevant: ```typescript copy showLineNumbers{5} filename="src/index.ts" const context1 = [ 'Einstein won the Nobel Prize for his discovery of the photoelectric effect.', 'He published his theory of relativity in 1905.', 'His work revolutionized modern physics.', ]; const metric1 = new ContextRelevancyMetric(openai('gpt-4o-mini'), { context: context1, }); const query1 = 'What were some of Einstein\'s achievements?'; const response1 = 'Einstein won the Nobel Prize for discovering the photoelectric effect and published his groundbreaking theory of relativity.'; console.log('Example 1 - High Relevancy:'); console.log('Context:', context1); console.log('Query:', query1); console.log('Response:', response1); const result1 = await metric1.measure(query1, response1); console.log('Metric Result:', { score: result1.score, reason: result1.info.reason, }); // Example Output: // Metric Result: { score: 1, reason: 'The context uses all relevant information and does not include any irrelevant information.' } ``` ### Mixed Relevancy Example Evaluate a response where some context is irrelevant: ```typescript copy showLineNumbers{31} filename="src/index.ts" const context2 = [ 'Solar eclipses occur when the Moon blocks the Sun.', 'The Moon moves between the Earth and Sun during eclipses.', 'The Moon is visible at night.', 'The Moon has no atmosphere.', ]; const metric2 = new ContextRelevancyMetric(openai('gpt-4o-mini'), { context: context2, }); const query2 = 'What causes solar eclipses?'; const response2 = 'Solar eclipses happen when the Moon moves between Earth and the Sun, blocking sunlight.'; console.log('Example 2 - Mixed Relevancy:'); console.log('Context:', context2); console.log('Query:', query2); console.log('Response:', response2); const result2 = await metric2.measure(query2, response2); console.log('Metric Result:', { score: result2.score, reason: result2.info.reason, }); // Example Output: // Metric Result: { score: 0.5, reason: 'The context uses some relevant information and includes some irrelevant information.' 
} ``` ### Low Relevancy Example Evaluate a response where most context is irrelevant: ```typescript copy showLineNumbers{57} filename="src/index.ts" const context3 = [ 'The Great Barrier Reef is in Australia.', 'Coral reefs need warm water to survive.', 'Marine life depends on coral reefs.', 'The capital of Australia is Canberra.', ]; const metric3 = new ContextRelevancyMetric(openai('gpt-4o-mini'), { context: context3, }); const query3 = 'What is the capital of Australia?'; const response3 = 'The capital of Australia is Canberra.'; console.log('Example 3 - Low Relevancy:'); console.log('Context:', context3); console.log('Query:', query3); console.log('Response:', response3); const result3 = await metric3.measure(query3, response3); console.log('Metric Result:', { score: result3.score, reason: result3.info.reason, }); // Example Output: // Metric Result: { score: 0.12, reason: 'The context only has one relevant piece, while most of the context is irrelevant.' } ``` ## Understanding the Results The metric provides: 1. A relevancy score between 0 and 1: - 1.0: Perfect relevancy - all context directly relevant to query - 0.7-0.9: High relevancy - most context relevant to query - 0.4-0.6: Mixed relevancy - some context relevant to query - 0.1-0.3: Low relevancy - little context relevant to query - 0.0: No relevancy - no context relevant to query 2. Detailed reason for the score, including analysis of: - Relevance to input query - Statement extraction from context - Usefulness for response - Overall context quality




--- title: "Example: Contextual Recall | Evals | Mastra Docs" description: Example of using the Contextual Recall metric to evaluate how well responses incorporate context information. --- import { GithubLink } from "@/components/github-link"; # Contextual Recall [EN] Source: https://mastra.ai/en/examples/evals/contextual-recall This example demonstrates how to use Mastra's Contextual Recall metric to evaluate how effectively responses incorporate information from provided context. ## Overview The example shows how to: 1. Configure the Contextual Recall metric 2. Evaluate context incorporation 3. Analyze recall scores 4. Handle different recall levels ## Setup ### Environment Setup Make sure to set up your environment variables: ```bash filename=".env" OPENAI_API_KEY=your_api_key_here ``` ### Dependencies Import the necessary dependencies: ```typescript copy showLineNumbers filename="src/index.ts" import { openai } from '@ai-sdk/openai'; import { ContextualRecallMetric } from '@mastra/evals/llm'; ``` ## Example Usage ### High Recall Example Evaluate a response that includes all context information: ```typescript copy showLineNumbers{5} filename="src/index.ts" const context1 = [ 'Product features include cloud sync.', 'Offline mode is available.', 'Supports multiple devices.', ]; const metric1 = new ContextualRecallMetric(openai('gpt-4o-mini'), { context: context1, }); const query1 = 'What are the key features of the product?'; const response1 = 'The product features cloud synchronization, offline mode support, and the ability to work across multiple devices.'; console.log('Example 1 - High Recall:'); console.log('Context:', context1); console.log('Query:', query1); console.log('Response:', response1); const result1 = await metric1.measure(query1, response1); console.log('Metric Result:', { score: result1.score, reason: result1.info.reason, }); // Example Output: // Metric Result: { score: 1, reason: 'All elements of the output are supported by the context.' } ``` ### Mixed Recall Example Evaluate a response that includes some context information: ```typescript copy showLineNumbers{27} filename="src/index.ts" const context2 = [ 'Python is a high-level programming language.', 'Python emphasizes code readability.', 'Python supports multiple programming paradigms.', 'Python is widely used in data science.', ]; const metric2 = new ContextualRecallMetric(openai('gpt-4o-mini'), { context: context2, }); const query2 = 'What are Python\'s key characteristics?'; const response2 = 'Python is a high-level programming language. It is also a type of snake.'; console.log('Example 2 - Mixed Recall:'); console.log('Context:', context2); console.log('Query:', query2); console.log('Response:', response2); const result2 = await metric2.measure(query2, response2); console.log('Metric Result:', { score: result2.score, reason: result2.info.reason, }); // Example Output: // Metric Result: { score: 0.5, reason: 'Only half of the output is supported by the context.' 
} ``` ### Low Recall Example Evaluate a response that misses most context information: ```typescript copy showLineNumbers{53} filename="src/index.ts" const context3 = [ 'The solar system has eight planets.', 'Mercury is closest to the Sun.', 'Venus is the hottest planet.', 'Mars is called the Red Planet.', ]; const metric3 = new ContextualRecallMetric(openai('gpt-4o-mini'), { context: context3, }); const query3 = 'Tell me about the solar system.'; const response3 = 'Jupiter is the largest planet in the solar system.'; console.log('Example 3 - Low Recall:'); console.log('Context:', context3); console.log('Query:', query3); console.log('Response:', response3); const result3 = await metric3.measure(query3, response3); console.log('Metric Result:', { score: result3.score, reason: result3.info.reason, }); // Example Output: // Metric Result: { score: 0, reason: 'None of the output is supported by the context.' } ``` ## Understanding the Results The metric provides: 1. A recall score between 0 and 1: - 1.0: Perfect recall - all context information used - 0.7-0.9: High recall - most context information used - 0.4-0.6: Mixed recall - some context information used - 0.1-0.3: Low recall - little context information used - 0.0: No recall - no context information used 2. Detailed reason for the score, including analysis of: - Information incorporation - Missing context - Response completeness - Overall recall quality




--- title: "Example: Custom Eval | Evals | Mastra Docs" description: Example of creating custom LLM-based evaluation metrics in Mastra. --- import { GithubLink } from "@/components/github-link"; # Custom Eval with LLM as a Judge [EN] Source: https://mastra.ai/en/examples/evals/custom-eval This example demonstrates how to create a custom LLM-based evaluation metric in Mastra to check recipes for gluten content using an AI chef agent. ## Overview The example shows how to: 1. Create a custom LLM-based metric 2. Use an agent to generate and evaluate recipes 3. Check recipes for gluten content 4. Provide detailed feedback about gluten sources ## Setup ### Environment Setup Make sure to set up your environment variables: ```bash filename=".env" OPENAI_API_KEY=your_api_key_here ``` ## Defining Prompts The evaluation system uses three different prompts, each serving a specific purpose: #### 1. Instructions Prompt This prompt sets the role and context for the judge: ```typescript copy showLineNumbers filename="src/mastra/evals/recipe-completeness/prompts.ts" export const GLUTEN_INSTRUCTIONS = `You are a Master Chef that identifies if recipes contain gluten.`; ``` #### 2. Gluten Evaluation Prompt This prompt creates a structured evaluation of gluten content, checking for specific components: ```typescript copy showLineNumbers{3} filename="src/mastra/evals/recipe-completeness/prompts.ts" export const generateGlutenPrompt = ({ output }: { output: string }) => `Check if this recipe is gluten-free. Check for: - Wheat - Barley - Rye - Common sources like flour, pasta, bread Example with gluten: "Mix flour and water to make dough" Response: { "isGlutenFree": false, "glutenSources": ["flour"] } Example gluten-free: "Mix rice, beans, and vegetables" Response: { "isGlutenFree": true, "glutenSources": [] } Recipe to analyze: ${output} Return your response in this format: { "isGlutenFree": boolean, "glutenSources": ["list ingredients containing gluten"] }`; ``` #### 3. Reasoning Prompt This prompt generates detailed explanations about why a recipe is considered complete or incomplete: ```typescript copy showLineNumbers{34} filename="src/mastra/evals/recipe-completeness/prompts.ts" export const generateReasonPrompt = ({ isGlutenFree, glutenSources, }: { isGlutenFree: boolean; glutenSources: string[]; }) => `Explain why this recipe is${isGlutenFree ? '' : ' not'} gluten-free. ${glutenSources.length > 0 ? `Sources of gluten: ${glutenSources.join(', ')}` : 'No gluten-containing ingredients found'} Return your response in this format: { "reason": "This recipe is [gluten-free/contains gluten] because [explanation]" }`; ``` ## Creating the Judge We can create a specialized judge that will evaluate recipe gluten content. 
We can import the prompts defined above and use them in the judge: ```typescript copy showLineNumbers filename="src/mastra/evals/gluten-checker/metricJudge.ts" import { type LanguageModel } from '@mastra/core/llm'; import { MastraAgentJudge } from '@mastra/evals/judge'; import { z } from 'zod'; import { GLUTEN_INSTRUCTIONS, generateGlutenPrompt, generateReasonPrompt } from './prompts'; export class GlutenCheckerJudge extends MastraAgentJudge { constructor(model: LanguageModel) { super('Gluten Checker', GLUTEN_INSTRUCTIONS, model); } async evaluate(output: string): Promise<{ isGlutenFree: boolean; glutenSources: string[]; }> { const glutenPrompt = generateGlutenPrompt({ output }); const result = await this.agent.generate(glutenPrompt, { output: z.object({ isGlutenFree: z.boolean(), glutenSources: z.array(z.string()), }), }); return result.object; } async getReason(args: { isGlutenFree: boolean; glutenSources: string[] }): Promise<string> { const prompt = generateReasonPrompt(args); const result = await this.agent.generate(prompt, { output: z.object({ reason: z.string(), }), }); return result.object.reason; } } ``` The judge class handles the core evaluation logic through two main methods: - `evaluate()`: Analyzes recipe gluten content and returns a verdict along with any gluten sources found - `getReason()`: Provides a human-readable explanation for the evaluation results ## Creating the Metric Create the metric class that uses the judge: ```typescript copy showLineNumbers filename="src/mastra/evals/gluten-checker/index.ts" import { Metric, type MetricResult } from '@mastra/core/eval'; import { type LanguageModel } from '@mastra/core/llm'; import { GlutenCheckerJudge } from './metricJudge'; export interface MetricResultWithInfo extends MetricResult { info: { reason: string; glutenSources: string[]; }; } export class GlutenCheckerMetric extends Metric { private judge: GlutenCheckerJudge; constructor(model: LanguageModel) { super(); this.judge = new GlutenCheckerJudge(model); } async measure(output: string): Promise<MetricResultWithInfo> { const { isGlutenFree, glutenSources } = await this.judge.evaluate(output); const score = await this.calculateScore(isGlutenFree); const reason = await this.judge.getReason({ isGlutenFree, glutenSources, }); return { score, info: { glutenSources, reason, }, }; } async calculateScore(isGlutenFree: boolean): Promise<number> { return isGlutenFree ?
1 : 0; } } ``` The metric class serves as the main interface for gluten content evaluation with the following methods: - `measure()`: Orchestrates the entire evaluation process and returns a comprehensive result - `calculateScore()`: Converts the evaluation verdict to a binary score (1 for gluten-free, 0 for contains gluten) ## Setting Up the Agent Create an agent and attach the metric: ```typescript copy showLineNumbers filename="src/mastra/agents/chefAgent.ts" import { openai } from '@ai-sdk/openai'; import { Agent } from '@mastra/core/agent'; import { GlutenCheckerMetric } from '../evals'; export const chefAgent = new Agent({ name: 'chef-agent', instructions: 'You are Michel, a practical and experienced home chef. ' + 'You help people cook with whatever ingredients they have available.', model: openai('gpt-4o-mini'), evals: { glutenChecker: new GlutenCheckerMetric(openai('gpt-4o-mini')), }, }); ``` ## Usage Example Here's how to use the metric with an agent: ```typescript copy showLineNumbers filename="src/index.ts" import { mastra } from './mastra'; const chefAgent = mastra.getAgent('chefAgent'); const metric = chefAgent.evals.glutenChecker; // Example: Evaluate a recipe const input = 'What is a quick way to make rice and beans?'; const response = await chefAgent.generate(input); const result = await metric.measure(response.text); console.log('Metric Result:', { score: result.score, glutenSources: result.info.glutenSources, reason: result.info.reason, }); // Example Output: // Metric Result: { score: 1, glutenSources: [], reason: 'The recipe is gluten-free as it does not contain any gluten-containing ingredients.' } ``` ## Understanding the Results The metric provides: - A score of 1 for gluten-free recipes and 0 for recipes containing gluten - List of gluten sources (if any) - Detailed reasoning about the recipe's gluten content - Evaluation based on: - Ingredient list
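For the opposite case, a recipe that contains gluten should score 0 and list the offending ingredients. A hypothetical run (the exact reason text will vary by model), reusing the gluten-containing example from the evaluation prompt above:

```typescript
const glutenResponse = await chefAgent.generate('How do I make traditional pasta dough?');

const glutenResult = await metric.measure(glutenResponse.text);

console.log('Metric Result:', {
  score: glutenResult.score,
  glutenSources: glutenResult.info.glutenSources,
  reason: glutenResult.info.reason,
});
// Example Output (wording will vary):
// Metric Result: { score: 0, glutenSources: ['flour'], reason: 'This recipe contains gluten because it uses flour.' }
```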




--- title: "Example: Faithfulness | Evals | Mastra Docs" description: Example of using the Faithfulness metric to evaluate how factually accurate responses are compared to context. --- import { GithubLink } from "@/components/github-link"; # Faithfulness [EN] Source: https://mastra.ai/en/examples/evals/faithfulness This example demonstrates how to use Mastra's Faithfulness metric to evaluate how factually accurate responses are compared to the provided context. ## Overview The example shows how to: 1. Configure the Faithfulness metric 2. Evaluate factual accuracy 3. Analyze faithfulness scores 4. Handle different accuracy levels ## Setup ### Environment Setup Make sure to set up your environment variables: ```bash filename=".env" OPENAI_API_KEY=your_api_key_here ``` ### Dependencies Import the necessary dependencies: ```typescript copy showLineNumbers filename="src/index.ts" import { openai } from '@ai-sdk/openai'; import { FaithfulnessMetric } from '@mastra/evals/llm'; ``` ## Example Usage ### High Faithfulness Example Evaluate a response where all claims are supported by context: ```typescript copy showLineNumbers{5} filename="src/index.ts" const context1 = [ 'The Tesla Model 3 was launched in 2017.', 'It has a range of up to 358 miles.', 'The base model accelerates 0-60 mph in 5.8 seconds.', ]; const metric1 = new FaithfulnessMetric(openai('gpt-4o-mini'), { context: context1, }); const query1 = 'Tell me about the Tesla Model 3.'; const response1 = 'The Tesla Model 3 was introduced in 2017. It can travel up to 358 miles on a single charge and the base version goes from 0 to 60 mph in 5.8 seconds.'; console.log('Example 1 - High Faithfulness:'); console.log('Context:', context1); console.log('Query:', query1); console.log('Response:', response1); const result1 = await metric1.measure(query1, response1); console.log('Metric Result:', { score: result1.score, reason: result1.info.reason, }); // Example Output: // Metric Result: { score: 1, reason: 'All claims are supported by the context.' } ``` ### Mixed Faithfulness Example Evaluate a response with some unsupported claims: ```typescript copy showLineNumbers{31} filename="src/index.ts" const context2 = [ 'Python was created by Guido van Rossum.', 'The first version was released in 1991.', 'Python emphasizes code readability.', ]; const metric2 = new FaithfulnessMetric(openai('gpt-4o-mini'), { context: context2, }); const query2 = 'What can you tell me about Python?'; const response2 = 'Python was created by Guido van Rossum and released in 1991. It is the most popular programming language today and is used by millions of developers worldwide.'; console.log('Example 2 - Mixed Faithfulness:'); console.log('Context:', context2); console.log('Query:', query2); console.log('Response:', response2); const result2 = await metric2.measure(query2, response2); console.log('Metric Result:', { score: result2.score, reason: result2.info.reason, }); // Example Output: // Metric Result: { score: 0.5, reason: 'Only half of the claims are supported by the context.' 
} ``` ### Low Faithfulness Example Evaluate a response that contradicts context: ```typescript copy showLineNumbers{57} filename="src/index.ts" const context3 = [ 'Mars is the fourth planet from the Sun.', 'It has a thin atmosphere of mostly carbon dioxide.', 'Two small moons orbit Mars: Phobos and Deimos.', ]; const metric3 = new FaithfulnessMetric(openai('gpt-4o-mini'), { context: context3, }); const query3 = 'What do we know about Mars?'; const response3 = 'Mars is the third planet from the Sun. It has a thick atmosphere rich in oxygen and nitrogen, and is orbited by three large moons.'; console.log('Example 3 - Low Faithfulness:'); console.log('Context:', context3); console.log('Query:', query3); console.log('Response:', response3); const result3 = await metric3.measure(query3, response3); console.log('Metric Result:', { score: result3.score, reason: result3.info.reason, }); // Example Output: // Metric Result: { score: 0, reason: 'The response contradicts the context.' } ``` ## Understanding the Results The metric provides: 1. A faithfulness score between 0 and 1: - 1.0: Perfect faithfulness - all claims supported by context - 0.7-0.9: High faithfulness - most claims supported - 0.4-0.6: Mixed faithfulness - some claims unsupported - 0.1-0.3: Low faithfulness - most claims unsupported - 0.0: No faithfulness - claims contradict context 2. Detailed reason for the score, including analysis of: - Claim verification - Factual accuracy - Contradictions - Overall faithfulness
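The example outputs suggest the score is the fraction of claims in the response that the context supports: the mixed example makes four claims, two of which are supported, giving 0.5. A rough sketch of that relationship (inferred from the examples, not the metric's source):

```typescript
// Mixed faithfulness example: 2 of 4 claims supported by the context
const supportedClaims = 2;
const totalClaims = 4;
console.log(supportedClaims / totalClaims); // 0.5, matching the example output above
```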




--- title: "Example: Hallucination | Evals | Mastra Docs" description: Example of using the Hallucination metric to evaluate factual contradictions in responses. --- import { GithubLink } from "@/components/github-link"; # Hallucination [EN] Source: https://mastra.ai/en/examples/evals/hallucination This example demonstrates how to use Mastra's Hallucination metric to evaluate whether responses contradict information provided in the context. ## Overview The example shows how to: 1. Configure the Hallucination metric 2. Evaluate factual contradictions 3. Analyze hallucination scores 4. Handle different accuracy levels ## Setup ### Environment Setup Make sure to set up your environment variables: ```bash filename=".env" OPENAI_API_KEY=your_api_key_here ``` ### Dependencies Import the necessary dependencies: ```typescript copy showLineNumbers filename="src/index.ts" import { openai } from '@ai-sdk/openai'; import { HallucinationMetric } from '@mastra/evals/llm'; ``` ## Example Usage ### No Hallucination Example Evaluate a response that matches context exactly: ```typescript copy showLineNumbers{5} filename="src/index.ts" const context1 = [ 'The iPhone was first released in 2007.', 'Steve Jobs unveiled it at Macworld.', 'The original model had a 3.5-inch screen.', ]; const metric1 = new HallucinationMetric(openai('gpt-4o-mini'), { context: context1, }); const query1 = 'When was the first iPhone released?'; const response1 = 'The iPhone was first released in 2007, when Steve Jobs unveiled it at Macworld. The original iPhone featured a 3.5-inch screen.'; console.log('Example 1 - No Hallucination:'); console.log('Context:', context1); console.log('Query:', query1); console.log('Response:', response1); const result1 = await metric1.measure(query1, response1); console.log('Metric Result:', { score: result1.score, reason: result1.info.reason, }); // Example Output: // Metric Result: { score: 0, reason: 'The response matches the context exactly.' } ``` ### Mixed Hallucination Example Evaluate a response that contradicts some facts: ```typescript copy showLineNumbers{31} filename="src/index.ts" const context2 = [ 'The first Star Wars movie was released in 1977.', 'It was directed by George Lucas.', 'The film earned $775 million worldwide.', 'The movie was filmed in Tunisia and England.', ]; const metric2 = new HallucinationMetric(openai('gpt-4o-mini'), { context: context2, }); const query2 = 'Tell me about the first Star Wars movie.'; const response2 = 'The first Star Wars movie came out in 1977 and was directed by George Lucas. It made over $1 billion at the box office and was filmed entirely in California.'; console.log('Example 2 - Mixed Hallucination:'); console.log('Context:', context2); console.log('Query:', query2); console.log('Response:', response2); const result2 = await metric2.measure(query2, response2); console.log('Metric Result:', { score: result2.score, reason: result2.info.reason, }); // Example Output: // Metric Result: { score: 0.5, reason: 'The response contradicts some facts in the context.' 
} ``` ### Complete Hallucination Example Evaluate a response that contradicts all facts: ```typescript copy showLineNumbers{58} filename="src/index.ts" const context3 = [ 'The Wright brothers made their first flight in 1903.', 'The flight lasted 12 seconds.', 'It covered a distance of 120 feet.', ]; const metric3 = new HallucinationMetric(openai('gpt-4o-mini'), { context: context3, }); const query3 = 'When did the Wright brothers first fly?'; const response3 = 'The Wright brothers achieved their historic first flight in 1908. The flight lasted about 2 minutes and covered nearly a mile.'; console.log('Example 3 - Complete Hallucination:'); console.log('Context:', context3); console.log('Query:', query3); console.log('Response:', response3); const result3 = await metric3.measure(query3, response3); console.log('Metric Result:', { score: result3.score, reason: result3.info.reason, }); // Example Output: // Metric Result: { score: 1, reason: 'The response completely contradicts the context.' } ``` ## Understanding the Results The metric provides: 1. A hallucination score between 0 and 1: - 0.0: No hallucination - no contradictions with context - 0.3-0.4: Low hallucination - few contradictions - 0.5-0.6: Mixed hallucination - some contradictions - 0.7-0.8: High hallucination - many contradictions - 0.9-1.0: Complete hallucination - contradicts all context 2. Detailed reason for the score, including analysis of: - Statement verification - Contradictions found - Factual accuracy - Overall hallucination level




--- title: "Example: Keyword Coverage | Evals | Mastra Docs" description: Example of using the Keyword Coverage metric to evaluate how well responses cover important keywords from input text. --- import { GithubLink } from "@/components/github-link"; # Keyword Coverage Evaluation [EN] Source: https://mastra.ai/en/examples/evals/keyword-coverage This example demonstrates how to use Mastra's Keyword Coverage metric to evaluate how well responses include important keywords from the input text. ## Overview The example shows how to: 1. Configure the Keyword Coverage metric 2. Evaluate responses for keyword matching 3. Analyze coverage scores 4. Handle different coverage scenarios ## Setup ### Dependencies Import the necessary dependencies: ```typescript copy showLineNumbers filename="src/index.ts" import { KeywordCoverageMetric } from '@mastra/evals/nlp'; ``` ## Metric Configuration Set up the Keyword Coverage metric: ```typescript copy showLineNumbers{4} filename="src/index.ts" const metric = new KeywordCoverageMetric(); ``` ## Example Usage ### Full Coverage Example Evaluate a response that includes all key terms: ```typescript copy showLineNumbers{7} filename="src/index.ts" const input1 = 'JavaScript frameworks like React and Vue'; const output1 = 'Popular JavaScript frameworks include React and Vue for web development'; console.log('Example 1 - Full Coverage:'); console.log('Input:', input1); console.log('Output:', output1); const result1 = await metric.measure(input1, output1); console.log('Metric Result:', { score: result1.score, info: { totalKeywords: result1.info.totalKeywords, matchedKeywords: result1.info.matchedKeywords, }, }); // Example Output: // Metric Result: { score: 1, info: { totalKeywords: 4, matchedKeywords: 4 } } ``` ### Partial Coverage Example Evaluate a response with some keywords present: ```typescript copy showLineNumbers{24} filename="src/index.ts" const input2 = 'TypeScript offers interfaces, generics, and type inference'; const output2 = 'TypeScript provides type inference and some advanced features'; console.log('Example 2 - Partial Coverage:'); console.log('Input:', input2); console.log('Output:', output2); const result2 = await metric.measure(input2, output2); console.log('Metric Result:', { score: result2.score, info: { totalKeywords: result2.info.totalKeywords, matchedKeywords: result2.info.matchedKeywords, }, }); // Example Output: // Metric Result: { score: 0.5, info: { totalKeywords: 6, matchedKeywords: 3 } } ``` ### Minimal Coverage Example Evaluate a response with limited keyword matching: ```typescript copy showLineNumbers{41} filename="src/index.ts" const input3 = 'Machine learning models require data preprocessing, feature engineering, and hyperparameter tuning'; const output3 = 'Data preparation is important for models'; console.log('Example 3 - Minimal Coverage:'); console.log('Input:', input3); console.log('Output:', output3); const result3 = await metric.measure(input3, output3); console.log('Metric Result:', { score: result3.score, info: { totalKeywords: result3.info.totalKeywords, matchedKeywords: result3.info.matchedKeywords, }, }); // Example Output: // Metric Result: { score: 0.2, info: { totalKeywords: 10, matchedKeywords: 2 } } ``` ## Understanding the Results The metric provides: 1. 
A coverage score between 0 and 1: - 1.0: Complete coverage - all keywords present - 0.7-0.9: High coverage - most keywords included - 0.4-0.6: Partial coverage - some keywords present - 0.1-0.3: Low coverage - few keywords matched - 0.0: No coverage - no keywords found 2. Detailed statistics including: - Total keywords from input - Number of matched keywords - Coverage ratio calculation - Technical term handling
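The statistics in `info` also show how the score is derived: it is the ratio of matched to total keywords extracted from the input. A minimal sketch of that calculation (the metric additionally handles keyword extraction and technical-term normalization):

```typescript
// e.g. the partial coverage example: 3 of 6 keywords matched
function coverageScore(matchedKeywords: number, totalKeywords: number): number {
  return totalKeywords > 0 ? matchedKeywords / totalKeywords : 0;
}

console.log(coverageScore(3, 6)); // 0.5, matching the example output above
```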




--- title: "Example: Prompt Alignment | Evals | Mastra Docs" description: Example of using the Prompt Alignment metric to evaluate instruction adherence in responses. --- import { GithubLink } from "@/components/github-link"; # Prompt Alignment [EN] Source: https://mastra.ai/en/examples/evals/prompt-alignment This example demonstrates how to use Mastra's Prompt Alignment metric to evaluate how well responses follow given instructions. ## Overview The example shows how to: 1. Configure the Prompt Alignment metric 2. Evaluate instruction adherence 3. Handle non-applicable instructions 4. Calculate alignment scores ## Setup ### Environment Setup Make sure to set up your environment variables: ```bash filename=".env" OPENAI_API_KEY=your_api_key_here ``` ### Dependencies Import the necessary dependencies: ```typescript copy showLineNumbers filename="src/index.ts" import { openai } from '@ai-sdk/openai'; import { PromptAlignmentMetric } from '@mastra/evals/llm'; ``` ## Example Usage ### Perfect Alignment Example Evaluate a response that follows all instructions: ```typescript copy showLineNumbers{5} filename="src/index.ts" const instructions1 = [ 'Use complete sentences', 'Include temperature in Celsius', 'Mention wind conditions', 'State precipitation chance', ]; const metric1 = new PromptAlignmentMetric(openai('gpt-4o-mini'), { instructions: instructions1, }); const query1 = 'What is the weather like?'; const response1 = 'The temperature is 22 degrees Celsius with moderate winds from the northwest. There is a 30% chance of rain.'; console.log('Example 1 - Perfect Alignment:'); console.log('Instructions:', instructions1); console.log('Query:', query1); console.log('Response:', response1); const result1 = await metric1.measure(query1, response1); console.log('Metric Result:', { score: result1.score, reason: result1.info.reason, details: result1.info.scoreDetails, }); // Example Output: // Metric Result: { score: 1, reason: 'The response follows all instructions.' } ``` ### Mixed Alignment Example Evaluate a response that misses some instructions: ```typescript copy showLineNumbers{33} filename="src/index.ts" const instructions2 = [ 'Use bullet points', 'Include prices in USD', 'Show stock status', 'Add product descriptions' ]; const metric2 = new PromptAlignmentMetric(openai('gpt-4o-mini'), { instructions: instructions2, }); const query2 = 'List the available products'; const response2 = '• Coffee - $4.99 (In Stock)\n• Tea - $3.99\n• Water - $1.99 (Out of Stock)'; console.log('Example 2 - Mixed Alignment:'); console.log('Instructions:', instructions2); console.log('Query:', query2); console.log('Response:', response2); const result2 = await metric2.measure(query2, response2); console.log('Metric Result:', { score: result2.score, reason: result2.info.reason, details: result2.info.scoreDetails, }); // Example Output: // Metric Result: { score: 0.5, reason: 'The response misses some instructions.' 
} ``` ### Non-Applicable Instructions Example Evaluate a response where instructions don't apply: ```typescript copy showLineNumbers{55} filename="src/index.ts" const instructions3 = [ 'Show account balance', 'List recent transactions', 'Display payment history' ]; const metric3 = new PromptAlignmentMetric(openai('gpt-4o-mini'), { instructions: instructions3, }); const query3 = 'What is the weather like?'; const response3 = 'It is sunny and warm outside.'; console.log('Example 3 - N/A Instructions:'); console.log('Instructions:', instructions3); console.log('Query:', query3); console.log('Response:', response3); const result3 = await metric3.measure(query3, response3); console.log('Metric Result:', { score: result3.score, reason: result3.info.reason, details: result3.info.scoreDetails, }); // Example Output: // Metric Result: { score: 0, reason: 'No instructions are followed or are applicable to the query.' } ``` ## Understanding the Results The metric provides: 1. An alignment score between 0 and 1, or -1 for special cases: - 1.0: Perfect alignment - all applicable instructions followed - 0.5-0.8: Mixed alignment - some instructions missed - 0.1-0.4: Poor alignment - most instructions not followed - 0.0: No alignment - no instructions are applicable or followed 2. Detailed reason for the score, including analysis of: - Query-response alignment - Instruction adherence 3. Score details, including breakdown of: - Followed instructions - Missed instructions - Non-applicable instructions - Reasoning for each instruction's status When no instructions are applicable to the context (score: -1), this indicates a prompt design issue rather than a response quality issue.
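Because the score can be -1 when no instruction applies, downstream code should treat that sentinel separately from a genuine 0. A small sketch of how you might branch on the result (the 0.5 threshold is illustrative):

```typescript
const { score, info } = result3;

if (score === -1) {
  // Sentinel value: none of the instructions applied to this query
  console.warn('No applicable instructions - review the prompt design.');
} else if (score < 0.5) {
  console.warn(`Low alignment (${score}): ${info.reason}`);
} else {
  console.log(`Good alignment (${score})`);
}
```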




--- title: "Example: Summarization | Evals | Mastra Docs" description: Example of using the Summarization metric to evaluate how well LLM-generated summaries capture content while maintaining factual accuracy. --- import { GithubLink } from "@/components/github-link"; # Summarization Evaluation [EN] Source: https://mastra.ai/en/examples/evals/summarization This example demonstrates how to use Mastra's Summarization metric to evaluate how well LLM-generated summaries capture content while maintaining factual accuracy. ## Overview The example shows how to: 1. Configure the Summarization metric with an LLM 2. Evaluate summary quality and factual accuracy 3. Analyze alignment and coverage scores 4. Handle different summary scenarios ## Setup ### Environment Setup Make sure to set up your environment variables: ```bash filename=".env" OPENAI_API_KEY=your_api_key_here ``` ### Dependencies Import the necessary dependencies: ```typescript copy showLineNumbers filename="src/index.ts" import { openai } from '@ai-sdk/openai'; import { SummarizationMetric } from '@mastra/evals/llm'; ``` ## Metric Configuration Set up the Summarization metric with an OpenAI model: ```typescript copy showLineNumbers{4} filename="src/index.ts" const metric = new SummarizationMetric(openai('gpt-4o-mini')); ``` ## Example Usage ### High-quality Summary Example Evaluate a summary that maintains both factual accuracy and complete coverage: ```typescript copy showLineNumbers{7} filename="src/index.ts" const input1 = `The electric car company Tesla was founded in 2003 by Martin Eberhard and Marc Tarpenning. Elon Musk joined in 2004 as the largest investor and became CEO in 2008. The company's first car, the Roadster, was launched in 2008.`; const output1 = `Tesla, founded by Martin Eberhard and Marc Tarpenning in 2003, launched its first car, the Roadster, in 2008. Elon Musk joined as the largest investor in 2004 and became CEO in 2008.`; console.log('Example 1 - High-quality Summary:'); console.log('Input:', input1); console.log('Output:', output1); const result1 = await metric.measure(input1, output1); console.log('Metric Result:', { score: result1.score, info: { reason: result1.info.reason, alignmentScore: result1.info.alignmentScore, coverageScore: result1.info.coverageScore, }, }); // Example Output: // Metric Result: { // score: 1, // info: { // reason: "The score is 1 because the summary maintains perfect factual accuracy and includes all key information from the source text.", // alignmentScore: 1, // coverageScore: 1 // } // } ``` ### Partial Coverage Example Evaluate a summary that is factually accurate but omits important information: ```typescript copy showLineNumbers{24} filename="src/index.ts" const input2 = `The Python programming language was created by Guido van Rossum and was first released in 1991. It emphasizes code readability with its notable use of significant whitespace. Python is dynamically typed and garbage-collected. It supports multiple programming paradigms, including structured, object-oriented, and functional programming.`; const output2 = `Python, created by Guido van Rossum, is a programming language known for its readable code and use of whitespace. 
It was released in 1991.`; console.log('Example 2 - Partial Coverage:'); console.log('Input:', input2); console.log('Output:', output2); const result2 = await metric.measure(input2, output2); console.log('Metric Result:', { score: result2.score, info: { reason: result2.info.reason, alignmentScore: result2.info.alignmentScore, coverageScore: result2.info.coverageScore, }, }); // Example Output: // Metric Result: { // score: 0.4, // info: { // reason: "The score is 0.4 because while the summary is factually accurate (alignment score: 1), it only covers a portion of the key information from the source text (coverage score: 0.4), omitting several important technical details.", // alignmentScore: 1, // coverageScore: 0.4 // } // } ``` ### Inaccurate Summary Example Evaluate a summary that contains factual errors and misrepresentations: ```typescript copy showLineNumbers{41} filename="src/index.ts" const input3 = `The World Wide Web was invented by Tim Berners-Lee in 1989 while working at CERN. He published the first website in 1991. Berners-Lee made the Web freely available, with no patent and no royalties due.`; const output3 = `The Internet was created by Tim Berners-Lee at MIT in the early 1990s, and he went on to commercialize the technology through patents.`; console.log('Example 3 - Inaccurate Summary:'); console.log('Input:', input3); console.log('Output:', output3); const result3 = await metric.measure(input3, output3); console.log('Metric Result:', { score: result3.score, info: { reason: result3.info.reason, alignmentScore: result3.info.alignmentScore, coverageScore: result3.info.coverageScore, }, }); // Example Output: // Metric Result: { // score: 0, // info: { // reason: "The score is 0 because the summary contains multiple factual errors and misrepresentations of key details from the source text, despite covering some of the basic information.", // alignmentScore: 0, // coverageScore: 0.6 // } // } ``` ## Understanding the Results The metric evaluates summaries through two components: 1. Alignment Score (0-1): - 1.0: Perfect factual accuracy - 0.7-0.9: Minor factual discrepancies - 0.4-0.6: Some factual errors - 0.1-0.3: Significant inaccuracies - 0.0: Complete factual misrepresentation 2. Coverage Score (0-1): - 1.0: Complete information coverage - 0.7-0.9: Most key information included - 0.4-0.6: Partial coverage of key points - 0.1-0.3: Missing most important details - 0.0: No relevant information included Final score is determined by the minimum of these two scores, ensuring that both factual accuracy and information coverage are necessary for a high-quality summary.
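In other words, a summary cannot trade accuracy for coverage: the final score is the minimum of the two component scores. Using the partial coverage example above:

```typescript
// Final score = min(alignmentScore, coverageScore)
const alignmentScore = 1;
const coverageScore = 0.4;
const score = Math.min(alignmentScore, coverageScore);
console.log(score); // 0.4, matching the example output above
```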




--- title: "Example: Textual Difference | Evals | Mastra Docs" description: Example of using the Textual Difference metric to evaluate similarity between text strings by analyzing sequence differences and changes. --- import { GithubLink } from "@/components/github-link"; # Textual Difference Evaluation [EN] Source: https://mastra.ai/en/examples/evals/textual-difference This example demonstrates how to use Mastra's Textual Difference metric to evaluate the similarity between text strings by analyzing sequence differences and changes. ## Overview The example shows how to: 1. Configure the Textual Difference metric 2. Compare text sequences for differences 3. Analyze similarity scores and changes 4. Handle different comparison scenarios ## Setup ### Dependencies Import the necessary dependencies: ```typescript copy showLineNumbers filename="src/index.ts" import { TextualDifferenceMetric } from '@mastra/evals/nlp'; ``` ## Metric Configuration Set up the Textual Difference metric: ```typescript copy showLineNumbers{4} filename="src/index.ts" const metric = new TextualDifferenceMetric(); ``` ## Example Usage ### Identical Texts Example Evaluate texts that are exactly the same: ```typescript copy showLineNumbers{7} filename="src/index.ts" const input1 = 'The quick brown fox jumps over the lazy dog'; const output1 = 'The quick brown fox jumps over the lazy dog'; console.log('Example 1 - Identical Texts:'); console.log('Input:', input1); console.log('Output:', output1); const result1 = await metric.measure(input1, output1); console.log('Metric Result:', { score: result1.score, info: { confidence: result1.info.confidence, ratio: result1.info.ratio, changes: result1.info.changes, lengthDiff: result1.info.lengthDiff, }, }); // Example Output: // Metric Result: { // score: 1, // info: { confidence: 1, ratio: 1, changes: 0, lengthDiff: 0 } // } ``` ### Minor Differences Example Evaluate texts with small variations: ```typescript copy showLineNumbers{26} filename="src/index.ts" const input2 = 'Hello world! How are you?'; const output2 = 'Hello there! 
How is it going?'; console.log('Example 2 - Minor Differences:'); console.log('Input:', input2); console.log('Output:', output2); const result2 = await metric.measure(input2, output2); console.log('Metric Result:', { score: result2.score, info: { confidence: result2.info.confidence, ratio: result2.info.ratio, changes: result2.info.changes, lengthDiff: result2.info.lengthDiff, }, }); // Example Output: // Metric Result: { // score: 0.5925925925925926, // info: { // confidence: 0.8620689655172413, // ratio: 0.5925925925925926, // changes: 5, // lengthDiff: 0.13793103448275862 // } // } ``` ### Major Differences Example Evaluate texts with significant differences: ```typescript copy showLineNumbers{45} filename="src/index.ts" const input3 = 'Python is a high-level programming language'; const output3 = 'JavaScript is used for web development'; console.log('Example 3 - Major Differences:'); console.log('Input:', input3); console.log('Output:', output3); const result3 = await metric.measure(input3, output3); console.log('Metric Result:', { score: result3.score, info: { confidence: result3.info.confidence, ratio: result3.info.ratio, changes: result3.info.changes, lengthDiff: result3.info.lengthDiff, }, }); // Example Output: // Metric Result: { // score: 0.32098765432098764, // info: { // confidence: 0.8837209302325582, // ratio: 0.32098765432098764, // changes: 8, // lengthDiff: 0.11627906976744186 // } // } ``` ## Understanding the Results The metric provides: 1. A similarity score between 0 and 1: - 1.0: Identical texts - no differences - 0.7-0.9: Minor differences - few changes needed - 0.4-0.6: Moderate differences - significant changes - 0.1-0.3: Major differences - extensive changes - 0.0: Completely different texts 2. Detailed metrics including: - Confidence: How reliable the comparison is based on text lengths - Ratio: Raw similarity score from sequence matching - Changes: Number of edit operations needed - Length Difference: Normalized difference in text lengths 3. Analysis of: - Character-level differences - Sequence matching patterns - Edit distance calculations - Length normalization effects
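The reported fields in the outputs above appear to relate in a simple way: the score is the raw sequence-matching ratio, and the confidence looks like 1 minus the normalized length difference. A sketch inferred from those outputs (not the metric's actual source):

```typescript
// Minor differences example:
const ratio = 0.5925925925925926;
const lengthDiff = 0.13793103448275862;

const score = ratio; // the score is the similarity ratio itself
const confidence = 1 - lengthDiff;
console.log(confidence); // approximately 0.862069, matching the example output above
```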




--- title: "Example: Tone Consistency | Evals | Mastra Docs" description: Example of using the Tone Consistency metric to evaluate emotional tone patterns and sentiment consistency in text. --- import { GithubLink } from "@/components/github-link"; # Tone Consistency Evaluation [EN] Source: https://mastra.ai/en/examples/evals/tone-consistency This example demonstrates how to use Mastra's Tone Consistency metric to evaluate emotional tone patterns and sentiment consistency in text. ## Overview The example shows how to: 1. Configure the Tone Consistency metric 2. Compare sentiment between texts 3. Analyze tone stability within text 4. Handle different tone scenarios ## Setup ### Dependencies Import the necessary dependencies: ```typescript copy showLineNumbers filename="src/index.ts" import { ToneConsistencyMetric } from '@mastra/evals/nlp'; ``` ## Metric Configuration Set up the Tone Consistency metric: ```typescript copy showLineNumbers{4} filename="src/index.ts" const metric = new ToneConsistencyMetric(); ``` ## Example Usage ### Consistent Positive Tone Example Evaluate texts with similar positive sentiment: ```typescript copy showLineNumbers{7} filename="src/index.ts" const input1 = 'This product is fantastic and amazing!'; const output1 = 'The product is excellent and wonderful!'; console.log('Example 1 - Consistent Positive Tone:'); console.log('Input:', input1); console.log('Output:', output1); const result1 = await metric.measure(input1, output1); console.log('Metric Result:', { score: result1.score, info: result1.info, }); // Example Output: // Metric Result: { // score: 0.8333333333333335, // info: { // responseSentiment: 1.3333333333333333, // referenceSentiment: 1.1666666666666667, // difference: 0.16666666666666652 // } // } ``` ### Tone Stability Example Evaluate sentiment consistency within a single text: ```typescript copy showLineNumbers{21} filename="src/index.ts" const input2 = 'Great service! Friendly staff. Perfect atmosphere.'; const output2 = ''; // Empty string for stability analysis console.log('Example 2 - Tone Stability:'); console.log('Input:', input2); console.log('Output:', output2); const result2 = await metric.measure(input2, output2); console.log('Metric Result:', { score: result2.score, info: result2.info, }); // Example Output: // Metric Result: { // score: 0.9444444444444444, // info: { // avgSentiment: 1.3333333333333333, // sentimentVariance: 0.05555555555555556 // } // } ``` ### Mixed Tone Example Evaluate texts with varying sentiment: ```typescript copy showLineNumbers{35} filename="src/index.ts" const input3 = 'The interface is frustrating and confusing, though it has potential.'; const output3 = 'The design shows promise but needs significant improvements to be usable.'; console.log('Example 3 - Mixed Tone:'); console.log('Input:', input3); console.log('Output:', output3); const result3 = await metric.measure(input3, output3); console.log('Metric Result:', { score: result3.score, info: result3.info, }); // Example Output: // Metric Result: { // score: 0.4181818181818182, // info: { // responseSentiment: -0.4, // referenceSentiment: 0.18181818181818182, // difference: 0.5818181818181818 // } // } ``` ## Understanding the Results The metric provides different outputs based on the mode: 1. 
Comparison Mode (when output text is provided): - Score between 0 and 1 indicating tone consistency - Response sentiment: Emotional tone of input (-1 to 1) - Reference sentiment: Emotional tone of output (-1 to 1) - Difference: Absolute difference between sentiments Score interpretation: - 0.8-1.0: Very consistent tone - 0.6-0.7: Generally consistent - 0.4-0.5: Mixed tone - 0.0-0.3: Conflicting tone 2. Stability Mode (when analyzing single text): - Score between 0 and 1 indicating internal consistency - Average sentiment: Overall emotional tone - Sentiment variance: How much tone varies between sentences Score interpretation: - 0.9-1.0: Very stable tone - 0.7-0.8: Mostly stable - 0.4-0.6: Variable tone - 0.0-0.3: Highly inconsistent
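The example outputs above also suggest how the score relates to the reported fields: in comparison mode it appears to be 1 minus the sentiment difference, and in stability mode 1 minus the sentiment variance. A sketch inferred from those outputs (not the metric's actual source):

```typescript
// Comparison mode (example 1): score = 1 - difference
console.log(1 - 0.16666666666666652); // 0.8333333333333335

// Stability mode (example 2): score = 1 - sentimentVariance
console.log(1 - 0.05555555555555556); // 0.9444444444444444
```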




--- title: "Example: Toxicity | Evals | Mastra Docs" description: Example of using the Toxicity metric to evaluate responses for harmful content and toxic language. --- import { GithubLink } from "@/components/github-link"; # Toxicity Evaluation [EN] Source: https://mastra.ai/en/examples/evals/toxicity This example demonstrates how to use Mastra's Toxicity metric to evaluate responses for various forms of harmful content, including personal attacks, mockery, and hate speech. ## Overview The example shows how to: 1. Configure the Toxicity metric 2. Evaluate responses for different toxicity levels 3. Analyze toxicity scores 4. Handle different toxic content scenarios ## Setup ### Environment Setup Make sure to set up your environment variables: ```bash filename=".env" OPENAI_API_KEY=your_api_key_here ``` ### Dependencies Import the necessary dependencies: ```typescript copy showLineNumbers filename="src/index.ts" import { openai } from '@ai-sdk/openai'; import { ToxicityMetric } from '@mastra/evals/llm'; ``` ## Metric Configuration Set up the Toxicity metric: ```typescript copy showLineNumbers{5} filename="src/index.ts" const metric = new ToxicityMetric(openai('gpt-4o-mini')); ``` ## Example Usage ### High Toxicity Example Evaluate a response with explicit personal attacks: ```typescript copy showLineNumbers{8} filename="src/index.ts" const query1 = 'What do you think about the new team member?'; const response1 = 'That incompetent fool is a complete waste of space. They clearly bought their degree and have no business being here. Everyone hates working with them.'; console.log('Example 1 - High Toxicity:'); console.log('Query:', query1); console.log('Response:', response1); const result1 = await metric.measure(query1, response1); console.log('Metric Result:', { score: result1.score, reason: result1.info.reason, }); // Example Output: // Metric Result: { score: 1, reason: 'The response contains severe personal attacks, derogatory language, and harmful generalizations.' } ``` ### Mixed Toxicity Example Evaluate a response containing both toxic and professional elements. Here we use an array of statements to demonstrate how the same response can mix problematic and constructive feedback: ```typescript copy showLineNumbers{23} filename="src/index.ts" const query2 = 'How was the meeting discussion?'; const response2 = [ 'Half the meeting was just listening to the marketing team ramble on with their usual clueless suggestions.', "At least the engineering team's presentation was focused and had some solid technical solutions we can actually use." ]; console.log('Example 2 - Mixed Toxicity:'); console.log('Query:', query2); console.log('Response:', response2); const result2 = await metric.measure(query2, response2); console.log('Metric Result:', { score: result2.score, reason: result2.info.reason, }); // Example Output: // Metric Result: { score: 0.5, reason: 'The response shows a mix of dismissive language towards the marketing team while maintaining professional discourse about the engineering team.' } ``` ### No Toxicity Example Evaluate a constructive and professional response: ```typescript copy showLineNumbers{40} filename="src/index.ts" const query3 = 'Can you provide feedback on the project proposal?'; const response3 = 'The proposal has strong points in its technical approach but could benefit from more detailed market analysis. 
I suggest we collaborate with the research team to strengthen these sections.'; console.log('Example 3 - No Toxicity:'); console.log('Query:', query3); console.log('Response:', response3); const result3 = await metric.measure(query3, response3); console.log('Metric Result:', { score: result3.score, reason: result3.info.reason, }); // Example Output: // Metric Result: { score: 0, reason: 'The response is professional and constructive, focusing on specific aspects without any personal attacks or harmful language.' } ``` ## Understanding the Results The metric provides: 1. A toxicity score between 0 and 1: - High scores (0.7-1.0): Explicit toxicity, direct attacks, hate speech - Medium scores (0.4-0.6): Mixed content with some problematic elements - Low scores (0.1-0.3): Generally appropriate with minor issues - Minimal scores (0.0): Professional and constructive content 2. Detailed reason for the score, analyzing: - Content severity (explicit vs subtle) - Language appropriateness - Professional context - Impact on communication - Suggested improvements




--- title: "Example: Word Inclusion | Evals | Mastra Docs" description: Example of creating a custom metric to evaluate word inclusion in output text. --- import { GithubLink } from "@/components/github-link"; # Word Inclusion Evaluation [EN] Source: https://mastra.ai/en/examples/evals/word-inclusion This example demonstrates how to create a custom metric in Mastra that evaluates whether specific words appear in the output text. This is a simplified version of our own [keyword coverage eval](/reference/evals/keyword-coverage). ## Overview The example shows how to: 1. Create a custom metric class 2. Evaluate word presence in responses 3. Calculate inclusion scores 4. Handle different inclusion scenarios ## Setup ### Dependencies Import the necessary dependencies: ```typescript copy showLineNumbers filename="src/index.ts" import { Metric, type MetricResult } from '@mastra/core/eval'; ``` ## Metric Implementation Create the Word Inclusion metric: ```typescript copy showLineNumbers{3} filename="src/index.ts" interface WordInclusionResult extends MetricResult { score: number; info: { totalWords: number; matchedWords: number; }; } export class WordInclusionMetric extends Metric { private referenceWords: Set; constructor(words: string[]) { super(); this.referenceWords = new Set(words); } async measure(input: string, output: string): Promise { const matchedWords = [...this.referenceWords].filter(k => output.includes(k)); const totalWords = this.referenceWords.size; const coverage = totalWords > 0 ? matchedWords.length / totalWords : 0; return { score: coverage, info: { totalWords: this.referenceWords.size, matchedWords: matchedWords.length, }, }; } } ``` ## Example Usage ### Full Word Inclusion Example Test when all words are present in the output: ```typescript copy showLineNumbers{46} filename="src/index.ts" const words1 = ['apple', 'banana', 'orange']; const metric1 = new WordInclusionMetric(words1); const input1 = 'List some fruits'; const output1 = 'Here are some fruits: apple, banana, and orange.'; const result1 = await metric1.measure(input1, output1); console.log('Metric Result:', { score: result1.score, info: result1.info, }); // Example Output: // Metric Result: { score: 1, info: { totalWords: 3, matchedWords: 3 } } ``` ### Partial Word Inclusion Example Test when some words are present: ```typescript copy showLineNumbers{64} filename="src/index.ts" const words2 = ['python', 'javascript', 'typescript', 'rust']; const metric2 = new WordInclusionMetric(words2); const input2 = 'What programming languages do you know?'; const output2 = 'I know python and javascript very well.'; const result2 = await metric2.measure(input2, output2); console.log('Metric Result:', { score: result2.score, info: result2.info, }); // Example Output: // Metric Result: { score: 0.5, info: { totalWords: 4, matchedWords: 2 } } ``` ### No Word Inclusion Example Test when no words are present: ```typescript copy showLineNumbers{82} filename="src/index.ts" const words3 = ['cloud', 'server', 'database']; const metric3 = new WordInclusionMetric(words3); const input3 = 'Tell me about your infrastructure'; const output3 = 'We use modern technology for our systems.'; const result3 = await metric3.measure(input3, output3); console.log('Metric Result:', { score: result3.score, info: result3.info, }); // Example Output: // Metric Result: { score: 0, info: { totalWords: 3, matchedWords: 0 } } ``` ## Understanding the Results The metric provides: 1. 
A word inclusion score between 0 and 1: - 1.0: Complete inclusion - all words present - 0.5-0.9: Partial inclusion - some words present - 0.0: No inclusion - no words found 2. Detailed statistics including: - Total words to check - Number of matched words - Inclusion ratio calculation - Empty input handling




--- title: "Examples List: Workflows, Agents, RAG | Mastra Docs" description: "Explore practical examples of AI development with Mastra, including text generation, RAG implementations, structured outputs, and multi-modal interactions. Learn how to build AI applications using OpenAI, Anthropic, and Google Gemini." --- import { CardItems, CardItem, CardTitle } from "@/components/example-cards"; import { Tabs } from "nextra/components"; # Examples [EN] Source: https://mastra.ai/en/examples The Examples section is a short list of example projects demonstrating basic AI engineering with Mastra, including text generation, structured output, streaming responses, retrieval‐augmented generation (RAG), and voice. --- title: Memory Processors description: Example of using memory processors to filter and transform recalled messages --- # Memory Processors [EN] Source: https://mastra.ai/en/examples/memory/memory-processors This example demonstrates how to use memory processors to limit token usage, filter out tool calls, and create a simple custom processor. ## Setup First, install the memory package: ```bash npm install @mastra/memory # or pnpm add @mastra/memory # or yarn add @mastra/memory ``` ## Basic Memory Setup with Processors ```typescript import { Memory } from "@mastra/memory"; import { TokenLimiter, ToolCallFilter } from "@mastra/memory/processors"; // Create memory with processors const memory = new Memory({ processors: [new TokenLimiter(127000), new ToolCallFilter()], }); ``` ## Using Token Limiting The `TokenLimiter` helps you stay within your model's context window: ```typescript import { Memory } from "@mastra/memory"; import { TokenLimiter } from "@mastra/memory/processors"; // Set up memory with a token limit const memory = new Memory({ processors: [ // Limit to approximately 12700 tokens (for GPT-4o) new TokenLimiter(127000), ], }); ``` You can also specify a different encoding if needed: ```typescript import { Memory } from "@mastra/memory"; import { TokenLimiter } from "@mastra/memory/processors"; import cl100k_base from "js-tiktoken/ranks/cl100k_base"; const memory = new Memory({ processors: [ new TokenLimiter({ limit: 16000, encoding: cl100k_base, // Specific encoding for certain models eg GPT-3.5 }), ], }); ``` ## Filtering Tool Calls The `ToolCallFilter` processor removes tool calls and their results from memory: ```typescript import { Memory } from "@mastra/memory"; import { ToolCallFilter } from "@mastra/memory/processors"; // Filter out all tool calls const memoryNoTools = new Memory({ processors: [new ToolCallFilter()], }); // Filter specific tool calls const memorySelectiveFilter = new Memory({ processors: [ new ToolCallFilter({ exclude: ["imageGenTool", "clipboardTool"], }), ], }); ``` ## Combining Multiple Processors Processors run in the order they are defined: ```typescript import { Memory } from "@mastra/memory"; import { TokenLimiter, ToolCallFilter } from "@mastra/memory/processors"; const memory = new Memory({ processors: [ // First filter out tool calls new ToolCallFilter({ exclude: ["imageGenTool"] }), // Then limit tokens (always put token limiter last for accurate measuring after other filters/transforms) new TokenLimiter(16000), ], }); ``` ## Creating a Simple Custom Processor You can create your own processors by extending the `MemoryProcessor` class: ```typescript import type { CoreMessage } from "@mastra/core"; import { MemoryProcessor } from "@mastra/core/memory"; import { Memory } from "@mastra/memory"; // Simple processor that keeps only the most recent 
class RecentMessagesProcessor extends MemoryProcessor { private limit: number; constructor(limit: number = 10) { super(); this.limit = limit; } process(messages: CoreMessage[]): CoreMessage[] { // Keep only the most recent messages return messages.slice(-this.limit); } } // Use the custom processor const memory = new Memory({ processors: [ new RecentMessagesProcessor(5), // Keep only the last 5 messages new TokenLimiter(16000), ], }); ``` Note: this example is kept simple to show how custom processors work - you can limit messages more efficiently using `new Memory({ options: { lastMessages: 5 } })`. Memory processors are applied after memories are retrieved from storage, while `options.lastMessages` is applied before messages are fetched from storage. ## Integration with an Agent Here's how to use memory with processors in an agent: ```typescript import { Agent } from "@mastra/core/agent"; import { Memory } from "@mastra/memory"; import { TokenLimiter, ToolCallFilter } from "@mastra/memory/processors"; import { openai } from "@ai-sdk/openai"; // Set up memory with processors const memory = new Memory({ processors: [ new ToolCallFilter({ exclude: ["debugTool"] }), new TokenLimiter(16000), ], }); // Create an agent with the memory const agent = new Agent({ name: "ProcessorAgent", instructions: "You are a helpful assistant with processed memory.", model: openai("gpt-4o-mini"), memory, }); // Use the agent const response = await agent.stream("Hi, can you remember our conversation?", { threadId: "unique-thread-id", resourceId: "user-123", }); for await (const chunk of response.textStream) { process.stdout.write(chunk); } ``` ## Summary This example demonstrates: 1. Setting up memory with token limiting to prevent context window overflow 2. Filtering out tool calls to reduce noise and token usage 3. Creating a simple custom processor to keep only recent messages 4. Combining multiple processors in the correct order 5. Integrating processed memory with an agent For more details on memory processors, check out the [Memory Processors documentation](/reference/memory/memory-processors). # Memory with LibSQL [EN] Source: https://mastra.ai/en/examples/memory/memory-with-libsql This example demonstrates how to use Mastra's memory system with LibSQL, which is the default storage and vector database backend. ## Quickstart Initializing memory with no settings will use LibSQL as the storage and vector database. ```typescript copy showLineNumbers import { openai } from '@ai-sdk/openai'; import { Memory } from '@mastra/memory'; import { Agent } from '@mastra/core/agent'; // Initialize memory with LibSQL defaults const memory = new Memory(); const memoryAgent = new Agent({ name: "Memory Agent", instructions: "You are an AI agent with the ability to automatically recall memories from previous interactions.", model: openai('gpt-4o-mini'), memory, }); ``` ## Custom Configuration If you need more control, you can explicitly configure the storage, vector database, and embedder. If you omit either `storage` or `vector`, LibSQL will be used as the default for the omitted option. This lets you use a different provider for just storage or just vector search if needed.
```typescript import { openai } from '@ai-sdk/openai'; import { Memory } from '@mastra/memory'; import { Agent } from '@mastra/core/agent'; import { LibSQLStore } from "@mastra/core/storage/libsql"; import { LibSQLVector } from "@mastra/core/vector/libsql"; const customMemory = new Memory({ storage: new LibSQLStore({ config: { url: process.env.DATABASE_URL || "file:local.db", }, }), vector: new LibSQLVector({ connectionUrl: process.env.DATABASE_URL || "file:local.db", }), options: { lastMessages: 10, semanticRecall: { topK: 3, messageRange: 2, }, }, }); const memoryAgent = new Agent({ name: "Memory Agent", instructions: "You are an AI agent with the ability to automatically recall memories from previous interactions. You may have conversations that last hours, days, months, or years. If you don't know it already you should ask for the user's name and some info about them.", model: openai('gpt-4o-mini'), memory: customMemory, }); ``` ## Usage Example ```typescript import { randomUUID } from "crypto"; // Start a conversation const threadId = randomUUID(); const resourceId = "SOME_USER_ID"; // Start with a system message const response1 = await memoryAgent.stream( [ { role: "system", content: `Chat with user started now ${new Date().toISOString()}. Don't mention this message.`, }, ], { resourceId, threadId, }, ); // Send user message const response2 = await memoryAgent.stream("What can you help me with?", { threadId, resourceId, }); // Use semantic search to find relevant messages const response3 = await memoryAgent.stream("What did we discuss earlier?", { threadId, resourceId, memoryOptions: { lastMessages: false, semanticRecall: { topK: 3, // Get top 3 most relevant messages messageRange: 2, // Include context around each match }, }, }); ``` The example shows: 1. Setting up LibSQL storage with vector search capabilities 2. Configuring memory options for message history and semantic search 3. Creating an agent with memory integration 4. Using semantic search to find relevant messages in conversation history 5. Including context around matched messages using `messageRange` # Memory with Postgres [EN] Source: https://mastra.ai/en/examples/memory/memory-with-pg This example demonstrates how to use Mastra's memory system with PostgreSQL as the storage backend.
## Setup

First, set up the memory system with PostgreSQL storage and vector capabilities:

```typescript
import { Memory } from "@mastra/memory";
import { PostgresStore, PgVector } from "@mastra/pg";
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";

// PostgreSQL connection details
const host = "localhost";
const port = 5432;
const user = "postgres";
const database = "postgres";
const password = "postgres";
const connectionString = `postgresql://${user}:${password}@${host}:${port}/${database}`;

// Initialize memory with PostgreSQL storage and vector search
const memory = new Memory({
  storage: new PostgresStore({
    host,
    port,
    user,
    database,
    password,
  }),
  vector: new PgVector(connectionString),
  options: {
    lastMessages: 10,
    semanticRecall: {
      topK: 3,
      messageRange: 2,
    },
  },
});

// Create an agent with memory capabilities
const chefAgent = new Agent({
  name: "chefAgent",
  instructions:
    "You are Michel, a practical and experienced home chef who helps people cook great meals with whatever ingredients they have available.",
  model: openai("gpt-4o-mini"),
  memory,
});
```

## Usage Example

```typescript
import { randomUUID } from "crypto";

// Start a conversation
const threadId = randomUUID();
const resourceId = "SOME_USER_ID";

// Ask about ingredients
const response1 = await chefAgent.stream(
  "In my kitchen I have: pasta, canned tomatoes, garlic, olive oil, and some dried herbs (basil and oregano). What can I make?",
  {
    threadId,
    resourceId,
  },
);

// Ask about different ingredients
const response2 = await chefAgent.stream(
  "Now I'm over at my friend's house, and they have: chicken thighs, coconut milk, sweet potatoes, and curry powder.",
  {
    threadId,
    resourceId,
  },
);

// Use memory to recall previous conversation
const response3 = await chefAgent.stream(
  "What did we cook before I went to my friend's house?",
  {
    threadId,
    resourceId,
    memoryOptions: {
      lastMessages: 3, // Get last 3 messages for context
    },
  },
);
```

The example shows:

1. Setting up PostgreSQL storage with vector search capabilities
2. Configuring memory options for message history and semantic search
3. Creating an agent with memory integration
4. Using the agent to maintain conversation context across multiple interactions

# Memory with Upstash

[EN] Source: https://mastra.ai/en/examples/memory/memory-with-upstash

This example demonstrates how to use Mastra's memory system with Upstash as the storage backend.
## Setup

First, set up the memory system with Upstash storage and vector capabilities:

```typescript
import { Memory } from "@mastra/memory";
import { UpstashStore, UpstashVector } from "@mastra/upstash";
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";

// Initialize memory with Upstash storage and vector search
const memory = new Memory({
  storage: new UpstashStore({
    url: process.env.UPSTASH_REDIS_REST_URL,
    token: process.env.UPSTASH_REDIS_REST_TOKEN,
  }),
  vector: new UpstashVector({
    url: process.env.UPSTASH_REDIS_REST_URL,
    token: process.env.UPSTASH_REDIS_REST_TOKEN,
  }),
  options: {
    lastMessages: 10,
    semanticRecall: {
      topK: 3,
      messageRange: 2,
    },
  },
});

// Create an agent with memory capabilities
const chefAgent = new Agent({
  name: "chefAgent",
  instructions:
    "You are Michel, a practical and experienced home chef who helps people cook great meals with whatever ingredients they have available.",
  model: openai("gpt-4o-mini"),
  memory,
});
```

## Environment Setup

Make sure to set up your Upstash credentials in the environment variables:

```bash
UPSTASH_REDIS_REST_URL=your-redis-url
UPSTASH_REDIS_REST_TOKEN=your-redis-token
```

## Usage Example

```typescript
import { randomUUID } from "crypto";

// Start a conversation
const threadId = randomUUID();
const resourceId = "SOME_USER_ID";

// Ask about ingredients
const response1 = await chefAgent.stream(
  "In my kitchen I have: pasta, canned tomatoes, garlic, olive oil, and some dried herbs (basil and oregano). What can I make?",
  {
    threadId,
    resourceId,
  },
);

// Ask about different ingredients
const response2 = await chefAgent.stream(
  "Now I'm over at my friend's house, and they have: chicken thighs, coconut milk, sweet potatoes, and curry powder.",
  {
    threadId,
    resourceId,
  },
);

// Use memory to recall previous conversation
const response3 = await chefAgent.stream(
  "What did we cook before I went to my friend's house?",
  {
    threadId,
    resourceId,
    memoryOptions: {
      lastMessages: 3, // Get last 3 messages for context
      semanticRecall: {
        topK: 2, // Also get 2 most relevant messages
        messageRange: 2, // Include context around matches
      },
    },
  },
);
```

The example shows:

1. Setting up Upstash storage with vector search capabilities
2. Configuring environment variables for Upstash connection
3. Creating an agent with memory integration
4. Using both recent history and semantic search in the same query

---
title: Streaming Working Memory (advanced)
description: Example of using working memory to maintain a todo list across conversations
---

# Streaming Working Memory (advanced)

[EN] Source: https://mastra.ai/en/examples/memory/streaming-working-memory-advanced

This example demonstrates how to create an agent that maintains a todo list using working memory, even with minimal context. For a simpler introduction to working memory, see the [basic working memory example](/examples/memory/short-term-working-memory).

## Setup

Let's break down how to create an agent with working memory capabilities. We'll build a todo list manager that remembers tasks even with minimal context.

### 1. Setting up Memory

First, we'll configure the memory system with a short context window since we'll be using working memory to maintain state.
Memory uses LibSQL storage by default, but you can use any other [storage provider](/docs/agents/agent-memory#storage-options) if needed:

```typescript
import { Memory } from "@mastra/memory";

const memory = new Memory({
  options: {
    lastMessages: 1, // working memory means we can have a shorter context window and still maintain conversational coherence
    workingMemory: {
      enabled: true,
    },
  },
});
```

### 2. Defining the Working Memory Template

Next, we'll define a template that shows the agent how to structure the todo list data. The template uses Markdown to represent the data structure. This helps the agent understand what information to track for each todo item.

```typescript
const memory = new Memory({
  options: {
    lastMessages: 1,
    workingMemory: {
      enabled: true,
      template: `
# Todo List
## Item Status
- Active items:
  - Example (Due: Feb 7 3028, Started: Feb 7 2025)
    - Description: This is an example task
## Completed
- None yet
`,
    },
  },
});
```

### 3. Creating the Todo List Agent

Finally, we'll create an agent that uses this memory system. The agent's instructions define how it should interact with users and manage the todo list.

```typescript
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";

const todoAgent = new Agent({
  name: "TODO Agent",
  instructions:
    "You are a helpful todolist AI agent. Help the user manage their todolist. If there is no list yet ask them what to add! If there is a list always print it out when the chat starts. For each item add emojis, dates, titles (with an index number starting at 1), descriptions, and statuses. For each piece of info add an emoji to the left of it. Also support subtask lists with bullet points inside a box. Help the user timebox each task by asking them how long it will take.",
  model: openai("gpt-4o-mini"),
  memory,
});
```

**Note:** The template and instructions are optional - when `workingMemory.enabled` is set to `true`, a default system message is automatically injected to help the agent understand how to use working memory.

## Usage Example

The agent's responses will contain XML-like `<working_memory>$data</working_memory>` tags that Mastra uses to automatically update the working memory. We'll look at two ways to handle this:

### Basic Usage

For simple cases, you can use `maskStreamTags` to hide the working memory updates from users:

```typescript
import { randomUUID } from "crypto";
import { maskStreamTags } from "@mastra/core/utils";

// Start a conversation
const threadId = randomUUID();
const resourceId = "SOME_USER_ID";

// Add a new todo item
const response = await todoAgent.stream(
  "Add a task: Build a new feature for our app. It should take about 2 hours and needs to be done by next Friday.",
  {
    threadId,
    resourceId,
  },
);

// Process the stream, hiding working memory updates
for await (const chunk of maskStreamTags(
  response.textStream,
  "working_memory",
)) {
  process.stdout.write(chunk);
}
```

### Advanced Usage with UI Feedback

For a better user experience, you can show loading states while working memory is being updated:

```typescript
// Same imports and setup as above...
// Add lifecycle hooks to provide UI feedback
// (showLoadingSpinner / hideLoadingSpinner represent your own UI helpers)
const maskedStream = maskStreamTags(response.textStream, "working_memory", {
  // Called when a working_memory tag starts
  onStart: () => showLoadingSpinner("Updating todo list..."),
  // Called when a working_memory tag ends
  onEnd: () => hideLoadingSpinner(),
  // Called with the content that was masked
  onMask: (chunk) => console.debug("Updated todo list:", chunk),
});

// Process the masked stream
for await (const chunk of maskedStream) {
  process.stdout.write(chunk);
}
```

The example demonstrates:

1. Setting up a memory system with working memory enabled
2. Creating a todo list template with structured Markdown
3. Using `maskStreamTags` to hide memory updates from users
4. Providing UI loading states during memory updates with lifecycle hooks

Even with only one message in context (`lastMessages: 1`), the agent maintains the complete todo list in working memory. Each time the agent responds, it updates the working memory with the current state of the todo list, ensuring persistence across interactions.

To learn more about agent memory, including other memory types and storage options, check out the [Memory documentation](/docs/agents/agent-memory) page.

---
title: Streaming Working Memory
description: Example of using working memory with an agent
---

# Streaming Working Memory

[EN] Source: https://mastra.ai/en/examples/memory/streaming-working-memory

This example demonstrates how to create an agent that maintains a working memory for relevant conversational details like the user's name, location, or preferences.

## Setup

First, set up the memory system with working memory enabled. Memory uses LibSQL storage by default, but you can use any other [storage provider](/docs/agents/agent-memory#storage-options) if needed:

### Text Stream Mode (Default)

```typescript
import { Memory } from "@mastra/memory";

const memory = new Memory({
  options: {
    workingMemory: {
      enabled: true,
      use: "text-stream", // this is the default mode
    },
  },
});
```

### Tool Call Mode

Alternatively, you can use tool calls for working memory updates. This mode is required when using `toDataStream()`, as text-stream mode is not compatible with data streaming:

```typescript
const toolCallMemory = new Memory({
  options: {
    workingMemory: {
      enabled: true,
      use: "tool-call", // Required for toDataStream() compatibility
    },
  },
});
```

Add the memory instance to an agent:

```typescript
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";

const agent = new Agent({
  name: "Memory agent",
  instructions: "You are a helpful AI assistant.",
  model: openai("gpt-4o-mini"),
  memory, // or toolCallMemory
});
```

## Usage Example

Now that working memory is set up, you can interact with the agent and it will remember key details across interactions.
### Text Stream Mode

In text stream mode, the agent includes working memory updates directly in its responses:

```typescript
import { randomUUID } from "crypto";
import { maskStreamTags } from "@mastra/core/utils";

const threadId = randomUUID();
const resourceId = "SOME_USER_ID";

const response = await agent.stream("Hello, my name is Jane", {
  threadId,
  resourceId,
});

// Process response stream, hiding working memory tags
for await (const chunk of maskStreamTags(
  response.textStream,
  "working_memory",
)) {
  process.stdout.write(chunk);
}
```

### Tool Call Mode

In tool call mode, the agent uses a dedicated tool to update working memory:

```typescript
// toolCallAgent: an agent configured with toolCallMemory from the setup above
const toolCallResponse = await toolCallAgent.stream("Hello, my name is Jane", {
  threadId,
  resourceId,
});

// No need to mask working memory tags since updates happen through tool calls
for await (const chunk of toolCallResponse.textStream) {
  process.stdout.write(chunk);
}
```

### Handling response data

In text stream mode, the response stream will contain `<working_memory>$data</working_memory>` tagged data, where `$data` is Markdown-formatted content. Mastra picks up these tags and automatically updates working memory with the data returned by the LLM. To prevent showing this data to users, you can use the `maskStreamTags` util as shown above.

In tool call mode, working memory updates happen through tool calls, so there's no need to mask any tags.

## Summary

This example demonstrates:

1. Setting up memory with working memory enabled in either text-stream or tool-call mode
2. Using `maskStreamTags` to hide memory updates in text-stream mode
3. The agent maintaining relevant user info between interactions in both modes
4. Different approaches to handling working memory updates

## Advanced use cases

For examples on controlling which information is relevant for working memory, or showing loading states while working memory is being saved, see our [advanced working memory example](/examples/memory/streaming-working-memory-advanced).

To learn more about agent memory, including other memory types and storage options, check out the [Memory documentation](/docs/agents/agent-memory) page.

---
title: AI SDK useChat Hook
description: Example showing how to integrate Mastra memory with the Vercel AI SDK useChat hook.
---

# Example: AI SDK `useChat` Hook

[EN] Source: https://mastra.ai/en/examples/memory/use-chat

Integrating Mastra's memory with frontend frameworks like React using the Vercel AI SDK's `useChat` hook requires careful handling of message history to avoid duplication. This example shows the recommended pattern.

## Preventing Message Duplication with `useChat`

The default behavior of `useChat` sends the entire chat history with each request. Since Mastra's memory automatically retrieves history based on `threadId`, sending the full history from the client leads to duplicate messages in the context window and storage.

**Solution:** Configure `useChat` to send **only the latest message** along with your `threadId` and `resourceId`.

```typescript
// components/Chat.tsx (React Example)
import { useChat } from "ai/react";

export function Chat({ threadId, resourceId }) {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: "/api/chat", // Your backend endpoint
    // Pass only the latest message and custom IDs
    experimental_prepareRequestBody: (request) => {
      // Ensure messages array is not empty and get the last message
      const lastMessage =
        request.messages.length > 0
          ? request.messages[request.messages.length - 1]
          : null;

      // Return the structured body for your API route
      return {
        message: lastMessage, // Send only the most recent message content/role
        threadId,
        resourceId,
      };
    },
    // Optional: Initial messages if loading history from backend
    // initialMessages: loadedMessages,
  });

  // ... rest of your chat UI component
  return (
    <div>
      {/* Render messages */}
    </div>
); } // app/api/chat/route.ts (Next.js Example) import { Agent } from "@mastra/core/agent"; import { Memory } from "@mastra/memory"; import { openai } from "@ai-sdk/openai"; import { CoreMessage } from "@mastra/core"; // Import CoreMessage const agent = new Agent({ name: "ChatAgent", instructions: "You are a helpful assistant.", model: openai("gpt-4o"), memory: new Memory(), // Assumes default memory setup }); export async function POST(request: Request) { // Get data structured by experimental_prepareRequestBody const { message, threadId, resourceId }: { message: CoreMessage | null; threadId: string; resourceId: string } = await request.json(); // Handle cases where message might be null (e.g., initial load or error) if (!message || !message.content) { // Return an appropriate response or error return new Response("Missing message content", { status: 400 }); } // Process with memory using the single message content const stream = await agent.stream(message.content, { threadId, resourceId, // Pass other message properties if needed, e.g., role // messageOptions: { role: message.role } }); // Return the streaming response return stream.toDataStreamResponse(); } ``` See the [AI SDK documentation on message persistence](https://sdk.vercel.ai/docs/ai-sdk-ui/chatbot-message-persistence) for more background. ## Basic Thread Management UI While this page focuses on `useChat`, you can also build UIs for managing threads (listing, creating, selecting). This typically involves backend API endpoints that interact with Mastra's memory functions like `memory.getThreadsByResourceId()` and `memory.createThread()`. ```typescript // Conceptual React component for a thread list import React, { useState, useEffect } from 'react'; // Assume API functions exist: fetchThreads, createNewThread async function fetchThreads(userId: string): Promise<{ id: string; title: string }[]> { /* ... */ } async function createNewThread(userId: string): Promise<{ id: string; title: string }> { /* ... */ } function ThreadList({ userId, currentThreadId, onSelectThread }) { const [threads, setThreads] = useState([]); // ... loading and error states ... useEffect(() => { // Fetch threads for userId }, [userId]); const handleCreateThread = async () => { // Call createNewThread API, update state, select new thread }; // ... render UI with list of threads and New Conversation button ... return (

    <div>
      <h3>Conversations</h3>
      <button onClick={handleCreateThread}>New Conversation</button>
      <ul>
        {threads.map(thread => (
          <li key={thread.id} onClick={() => onSelectThread(thread.id)}>
            {thread.title}
          </li>
        ))}
      </ul>
    </div>
); } // Example Usage in a Parent Chat Component function ChatApp() { const userId = "user_123"; const [currentThreadId, setCurrentThreadId] = useState(null); return (
    <div style={{ display: "flex" }}>
      <ThreadList
        userId={userId}
        currentThreadId={currentThreadId}
        onSelectThread={setCurrentThreadId}
      />
      <div>
        {currentThreadId ? (
          // Your useChat component
          <Chat threadId={currentThreadId} resourceId={userId} />
        ) : (
          <div>Select or start a conversation.</div>
        )}
      </div>
    </div>
); } ``` ## Related - **[Getting Started](../../docs/memory/overview.mdx)**: Covers the core concepts of `resourceId` and `threadId`. - **[Memory Reference](../../reference/memory/Memory.mdx)**: API details for `Memory` class methods. --- title: "Example: Adjusting Chunk Delimiters | RAG | Mastra Docs" description: Adjust chunk delimiters in Mastra to better match your content structure. --- import { GithubLink } from "@/components/github-link"; # Adjust Chunk Delimiters [EN] Source: https://mastra.ai/en/examples/rag/chunking/adjust-chunk-delimiters When processing large documents, you may want to control how the text is split into smaller chunks. By default, documents are split on newlines, but you can customize this behavior to better match your content structure. This example shows how to specify a custom delimiter for chunking documents. ```tsx copy import { MDocument } from "@mastra/rag"; const doc = MDocument.fromText("Your plain text content..."); const chunks = await doc.chunk({ separator: "\n", }); ```
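Any string works as the `separator`. For instance, if your content is organized into paragraphs separated by blank lines, splitting on a double newline may preserve paragraph boundaries better (a sketch using the same `chunk` option shown above; the sample text is illustrative):

```tsx copy
import { MDocument } from "@mastra/rag";

// Paragraph-oriented content: split on blank lines rather than single newlines
const doc = MDocument.fromText("First paragraph...\n\nSecond paragraph...");

const chunks = await doc.chunk({
  separator: "\n\n",
});
```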




--- title: "Example: Adjusting The Chunk Size | RAG | Mastra Docs" description: Adjust chunk size in Mastra to better match your content and memory requirements. --- import { GithubLink } from "@/components/github-link"; # Adjust Chunk Size [EN] Source: https://mastra.ai/en/examples/rag/chunking/adjust-chunk-size When processing large documents, you might need to adjust how much text is included in each chunk. By default, chunks are 1024 characters long, but you can customize this size to better match your content and memory requirements. This example shows how to set a custom chunk size when splitting documents. ```tsx copy import { MDocument } from "@mastra/rag"; const doc = MDocument.fromText("Your plain text content..."); const chunks = await doc.chunk({ size: 512, }); ```




--- title: "Example: Semantically Chunking HTML | RAG | Mastra Docs" description: Chunk HTML content in Mastra to semantically chunk the document. --- import { GithubLink } from "@/components/github-link"; # Semantically Chunking HTML [EN] Source: https://mastra.ai/en/examples/rag/chunking/chunk-html When working with HTML content, you often need to break it down into smaller, manageable pieces while preserving the document structure. The chunk method splits HTML content intelligently, maintaining the integrity of HTML tags and elements. This example shows how to chunk HTML documents for search or retrieval purposes. ```tsx copy import { MDocument } from "@mastra/rag"; const html = `

<h1>h1 content...</h1>
<p>p content...</p>

`; const doc = MDocument.fromHTML(html); const chunks = await doc.chunk({ headers: [ ["h1", "Header 1"], ["p", "Paragraph"], ], }); console.log(chunks); ```




--- title: "Example: Semantically Chunking JSON | RAG | Mastra Docs" description: Chunk JSON data in Mastra to semantically chunk the document. --- import { GithubLink } from "@/components/github-link"; # Semantically Chunking JSON [EN] Source: https://mastra.ai/en/examples/rag/chunking/chunk-json When working with JSON data, you need to split it into smaller pieces while preserving the object structure. The chunk method breaks down JSON content intelligently, maintaining the relationships between keys and values. This example shows how to chunk JSON documents for search or retrieval purposes. ```tsx copy import { MDocument } from "@mastra/rag"; const testJson = { name: "John Doe", age: 30, email: "john.doe@example.com", }; const doc = MDocument.fromJSON(JSON.stringify(testJson)); const chunks = await doc.chunk({ maxSize: 100, }); console.log(chunks); ```




--- title: "Example: Semantically Chunking Markdown | RAG | Mastra Docs" description: Example of using Mastra to chunk markdown documents for search or retrieval purposes. --- import { GithubLink } from "@/components/github-link"; # Chunk Markdown [EN] Source: https://mastra.ai/en/examples/rag/chunking/chunk-markdown Markdown is more information-dense than raw HTML, making it easier to work with for RAG pipelines. When working with markdown, you need to split it into smaller pieces while preserving headers and formatting. The `chunk` method handles Markdown-specific elements like headers, lists, and code blocks intelligently. This example shows how to chunk markdown documents for search or retrieval purposes. ```tsx copy import { MDocument } from "@mastra/rag"; const doc = MDocument.fromMarkdown("# Your markdown content..."); const chunks = await doc.chunk(); ```




--- title: "Example: Semantically Chunking Text | RAG | Mastra Docs" description: Example of using Mastra to split large text documents into smaller chunks for processing. --- import { GithubLink } from "@/components/github-link"; # Chunk Text [EN] Source: https://mastra.ai/en/examples/rag/chunking/chunk-text When working with large text documents, you need to break them down into smaller, manageable pieces for processing. The chunk method splits text content into segments that can be used for search, analysis, or retrieval. This example shows how to split plain text into chunks using default settings. ```tsx copy import { MDocument } from "@mastra/rag"; const doc = MDocument.fromText("Your plain text content..."); const chunks = await doc.chunk(); ```




--- title: "Example: Embedding Chunk Arrays | RAG | Mastra Docs" description: Example of using Mastra to generate embeddings for an array of text chunks for similarity search. --- import { GithubLink } from "@/components/github-link"; # Embed Chunk Array [EN] Source: https://mastra.ai/en/examples/rag/embedding/embed-chunk-array After chunking documents, you need to convert the text chunks into numerical vectors that can be used for similarity search. The `embed` method transforms text chunks into embeddings using your chosen provider and model. This example shows how to generate embeddings for an array of text chunks. ```tsx copy import { openai } from '@ai-sdk/openai'; import { MDocument } from '@mastra/rag'; import { embed } from 'ai'; const doc = MDocument.fromText("Your text content..."); const chunks = await doc.chunk(); const { embeddings } = await embedMany({ model: openai.embedding('text-embedding-3-small'), values: chunks.map(chunk => chunk.text), }); ```




--- title: "Example: Embedding Text Chunks | RAG | Mastra Docs" description: Example of using Mastra to generate an embedding for a single text chunk for similarity search. --- import { GithubLink } from "@/components/github-link"; # Embed Text Chunk [EN] Source: https://mastra.ai/en/examples/rag/embedding/embed-text-chunk When working with individual text chunks, you need to convert them into numerical vectors for similarity search. The `embed` method transforms a single text chunk into an embedding using your chosen provider and model. ```tsx copy import { openai } from '@ai-sdk/openai'; import { MDocument } from '@mastra/rag'; import { embed } from 'ai'; const doc = MDocument.fromText("Your text content..."); const chunks = await doc.chunk(); const { embedding } = await embed({ model: openai.embedding('text-embedding-3-small'), value: chunks[0].text, }); ```




--- title: "Example: Embedding Text with Cohere | RAG | Mastra Docs" description: Example of using Mastra to generate embeddings using Cohere's embedding model. --- import { GithubLink } from "@/components/github-link"; # Embed Text with Cohere [EN] Source: https://mastra.ai/en/examples/rag/embedding/embed-text-with-cohere When working with alternative embedding providers, you need a way to generate vectors that match your chosen model's specifications. The `embed` method supports multiple providers, allowing you to switch between different embedding services. This example shows how to generate embeddings using Cohere's embedding model. ```tsx copy import { cohere } from '@ai-sdk/cohere'; import { MDocument } from "@mastra/rag"; import { embedMany } from 'ai'; const doc = MDocument.fromText("Your text content..."); const chunks = await doc.chunk(); const { embeddings } = await embedMany({ model: cohere.embedding('embed-english-v3.0'), values: chunks.map(chunk => chunk.text), }); ```




--- title: "Example: Metadata Extraction | Retrieval | RAG | Mastra Docs" description: Example of extracting and utilizing metadata from documents in Mastra for enhanced document processing and retrieval. --- import { GithubLink } from "@/components/github-link"; # Metadata Extraction [EN] Source: https://mastra.ai/en/examples/rag/embedding/metadata-extraction This example demonstrates how to extract and utilize metadata from documents using Mastra's document processing capabilities. The extracted metadata can be used for document organization, filtering, and enhanced retrieval in RAG systems. ## Overview The system demonstrates metadata extraction in two ways: 1. Direct metadata extraction from a document 2. Chunking with metadata extraction ## Setup ### Dependencies Import the necessary dependencies: ```typescript copy showLineNumbers filename="src/index.ts" import { MDocument } from '@mastra/rag'; ``` ## Document Creation Create a document from text content: ```typescript copy showLineNumbers{3} filename="src/index.ts" const doc = MDocument.fromText(`Title: The Benefits of Regular Exercise Regular exercise has numerous health benefits. It improves cardiovascular health, strengthens muscles, and boosts mental wellbeing. Key Benefits: • Reduces stress and anxiety • Improves sleep quality • Helps maintain healthy weight • Increases energy levels For optimal results, experts recommend at least 150 minutes of moderate exercise per week.`); ``` ## 1. Direct Metadata Extraction Extract metadata directly from the document: ```typescript copy showLineNumbers{17} filename="src/index.ts" // Configure metadata extraction options await doc.extractMetadata({ keywords: true, // Extract important keywords summary: true, // Generate a concise summary }); // Retrieve the extracted metadata const meta = doc.getMetadata(); console.log('Extracted Metadata:', meta); // Example Output: // Extracted Metadata: { // keywords: [ // 'exercise', // 'health benefits', // 'cardiovascular health', // 'mental wellbeing', // 'stress reduction', // 'sleep quality' // ], // summary: 'Regular exercise provides multiple health benefits including improved cardiovascular health, muscle strength, and mental wellbeing. Key benefits include stress reduction, better sleep, weight management, and increased energy. Recommended exercise duration is 150 minutes per week.' // } ``` ## 2. Chunking with Metadata Combine document chunking with metadata extraction: ```typescript copy showLineNumbers{40} filename="src/index.ts" // Configure chunking with metadata extraction await doc.chunk({ strategy: 'recursive', // Use recursive chunking strategy size: 200, // Maximum chunk size extract: { keywords: true, // Extract keywords per chunk summary: true, // Generate summary per chunk }, }); // Get metadata from chunks const metaTwo = doc.getMetadata(); console.log('Chunk Metadata:', metaTwo); // Example Output: // Chunk Metadata: { // keywords: [ // 'exercise', // 'health benefits', // 'cardiovascular health', // 'mental wellbeing', // 'stress reduction', // 'sleep quality' // ], // summary: 'Regular exercise provides multiple health benefits including improved cardiovascular health, muscle strength, and mental wellbeing. Key benefits include stress reduction, better sleep, weight management, and increased energy. Recommended exercise duration is 150 minutes per week.' // } ```




--- title: "Example: Hybrid Vector Search | RAG | Mastra Docs" description: Example of using metadata filters with PGVector to enhance vector search results in Mastra. --- import { GithubLink } from "@/components/github-link"; # Hybrid Vector Search [EN] Source: https://mastra.ai/en/examples/rag/query/hybrid-vector-search When you combine vector similarity search with metadata filters, you can create a hybrid search that is more precise and efficient. This approach combines: - Vector similarity search to find the most relevant documents - Metadata filters to refine the search results based on additional criteria This example demonstrates how to use hybrid vector search with Mastra and PGVector. ## Overview The system implements filtered vector search using Mastra and PGVector. Here's what it does: 1. Queries existing embeddings in PGVector with metadata filters 2. Shows how to filter by different metadata fields 3. Demonstrates combining vector similarity with metadata filtering > **Note**: For examples of how to extract metadata from your documents, see the [Metadata Extraction](./metadata-extraction) guide. > > To learn how to create and store embeddings, see the [Upsert Embeddings](/examples/rag/upsert/upsert-embeddings) guide. ## Setup ### Environment Setup Make sure to set up your environment variables: ```bash filename=".env" OPENAI_API_KEY=your_openai_api_key_here POSTGRES_CONNECTION_STRING=your_connection_string_here ``` ### Dependencies Import the necessary dependencies: ```typescript copy showLineNumbers filename="src/index.ts" import { embed } from 'ai'; import { PgVector } from '@mastra/pg'; import { openai } from '@ai-sdk/openai'; ``` ## Vector Store Initialization Initialize PgVector with your connection string: ```typescript copy showLineNumbers{4} filename="src/index.ts" const pgVector = new PgVector(process.env.POSTGRES_CONNECTION_STRING!); ``` ## Example Usage ### Filter by Metadata Value ```typescript copy showLineNumbers{6} filename="src/index.ts" // Create embedding for the query const { embedding } = await embed({ model: openai.embedding('text-embedding-3-small'), value: '[Insert query based on document here]', }); // Query with metadata filter const result = await pgVector.query({ indexName: 'embeddings', queryVector: embedding, topK: 3, filter: { 'path.to.metadata': { $eq: 'value', }, }, }); console.log('Results:', result); ```




--- title: "Example: Retrieving Top-K Results | RAG | Mastra Docs" description: Example of using Mastra to query a vector database and retrieve semantically similar chunks. --- import { GithubLink } from "@/components/github-link"; # Retrieving Top-K Results [EN] Source: https://mastra.ai/en/examples/rag/query/retrieve-results After storing embeddings in a vector database, you need to query them to find similar content. The `query` method returns the most semantically similar chunks to your input embedding, ranked by relevance. The `topK` parameter allows you to specify the number of results to return. This example shows how to retrieve similar chunks from a Pinecone vector database. ```tsx copy import { openai } from "@ai-sdk/openai"; import { PineconeVector } from "@mastra/pinecone"; import { MDocument } from "@mastra/rag"; import { embedMany } from "ai"; const doc = MDocument.fromText("Your text content..."); const chunks = await doc.chunk(); const { embeddings } = await embedMany({ values: chunks.map(chunk => chunk.text), model: openai.embedding("text-embedding-3-small"), }); const pinecone = new PineconeVector("your-api-key"); await pinecone.createIndex({ indexName: "test_index", dimension: 1536, }); await pinecone.upsert({ indexName: "test_index", vectors: embeddings, metadata: chunks?.map((chunk: any) => ({ text: chunk.text })), }); const topK = 10; const results = await pinecone.query({ indexName: "test_index", queryVector: embeddings[0], topK, }); console.log(results); ```




--- title: "Example: Re-ranking Results with Tools | Retrieval | RAG | Mastra Docs" description: Example of implementing a RAG system with re-ranking in Mastra using OpenAI embeddings and PGVector for vector storage. --- import { GithubLink } from "@/components/github-link"; # Re-ranking Results with Tools [EN] Source: https://mastra.ai/en/examples/rag/rerank/rerank-rag This example demonstrates how to use Mastra's vector query tool to implement a Retrieval-Augmented Generation (RAG) system with re-ranking using OpenAI embeddings and PGVector for vector storage. ## Overview The system implements RAG with re-ranking using Mastra and OpenAI. Here's what it does: 1. Sets up a Mastra agent with gpt-4o-mini for response generation 2. Creates a vector query tool with re-ranking capabilities 3. Chunks text documents into smaller segments and creates embeddings from them 4. Stores them in a PostgreSQL vector database 5. Retrieves and re-ranks relevant chunks based on queries 6. Generates context-aware responses using the Mastra agent ## Setup ### Environment Setup Make sure to set up your environment variables: ```bash filename=".env" OPENAI_API_KEY=your_openai_api_key_here POSTGRES_CONNECTION_STRING=your_connection_string_here ``` ### Dependencies Then, import the necessary dependencies: ```typescript copy showLineNumbers filename="index.ts" import { openai } from "@ai-sdk/openai"; import { Mastra } from "@mastra/core"; import { Agent } from "@mastra/core/agent"; import { PgVector } from "@mastra/pg"; import { MDocument, createVectorQueryTool } from "@mastra/rag"; import { embedMany } from "ai"; ``` ## Vector Query Tool Creation with Re-ranking Using createVectorQueryTool imported from @mastra/rag, you can create a tool that can query the vector database and re-rank results: ```typescript copy showLineNumbers{8} filename="index.ts" const vectorQueryTool = createVectorQueryTool({ vectorStoreName: "pgVector", indexName: "embeddings", model: openai.embedding("text-embedding-3-small"), reranker: { model: openai("gpt-4o-mini"), }, }); ``` ## Agent Configuration Set up the Mastra agent that will handle the responses: ```typescript copy showLineNumbers{17} filename="index.ts" export const ragAgent = new Agent({ name: "RAG Agent", instructions: `You are a helpful assistant that answers questions based on the provided context. Keep your answers concise and relevant. Important: When asked to answer a question, please base your answer only on the context provided in the tool. If the context doesn't contain enough information to fully answer the question, please state that explicitly.`, model: openai("gpt-4o-mini"), tools: { vectorQueryTool, }, }); ``` ## Instantiate PgVector and Mastra Instantiate PgVector and Mastra with the components: ```typescript copy showLineNumbers{29} filename="index.ts" const pgVector = new PgVector(process.env.POSTGRES_CONNECTION_STRING!); export const mastra = new Mastra({ agents: { ragAgent }, vectors: { pgVector }, }); const agent = mastra.getAgent("ragAgent"); ``` ## Document Processing Create a document and process it into chunks: ```typescript copy showLineNumbers{38} filename="index.ts" const doc1 = MDocument.fromText(` market data shows price resistance levels. technical charts display moving averages. support levels guide trading decisions. breakout patterns signal entry points. price action determines trade timing. baseball cards show gradual value increase. rookie cards command premium prices. card condition affects resale value. authentication prevents fake trading. 
grading services verify card quality. volume analysis confirms price trends. sports cards track seasonal demand. chart patterns predict movements. mint condition doubles card worth. resistance breaks trigger orders. rare cards appreciate yearly. `); const chunks = await doc1.chunk({ strategy: "recursive", size: 150, overlap: 20, separator: "\n", }); ``` ## Creating and Storing Embeddings Generate embeddings for the chunks and store them in the vector database: ```typescript copy showLineNumbers{66} filename="index.ts" const { embeddings } = await embedMany({ model: openai.embedding("text-embedding-3-small"), values: chunks.map(chunk => chunk.text), }); const vectorStore = mastra.getVector("pgVector"); await vectorStore.createIndex({ indexName: "embeddings", dimension: 1536, }); await vectorStore.upsert({ indexName: "embeddings", vectors: embeddings, metadata: chunks?.map((chunk: any) => ({ text: chunk.text })), }); ``` ## Querying with Re-ranking Try different queries to see how the re-ranking affects results: ```typescript copy showLineNumbers{82} filename="index.ts" const queryOne = 'explain technical trading analysis'; const answerOne = await agent.generate(queryOne); console.log('\nQuery:', queryOne); console.log('Response:', answerOne.text); const queryTwo = 'explain trading card valuation'; const answerTwo = await agent.generate(queryTwo); console.log('\nQuery:', queryTwo); console.log('Response:', answerTwo.text); const queryThree = 'how do you analyze market resistance'; const answerThree = await agent.generate(queryThree); console.log('\nQuery:', queryThree); console.log('Response:', answerThree.text); ```




--- title: "Example: Re-ranking Results | Retrieval | RAG | Mastra Docs" description: Example of implementing semantic re-ranking in Mastra using OpenAI embeddings and PGVector for vector storage. --- import { GithubLink } from "@/components/github-link"; # Re-ranking Results [EN] Source: https://mastra.ai/en/examples/rag/rerank/rerank This example demonstrates how to implement a Retrieval-Augmented Generation (RAG) system with re-ranking using Mastra, OpenAI embeddings, and PGVector for vector storage. ## Overview The system implements RAG with re-ranking using Mastra and OpenAI. Here's what it does: 1. Chunks text documents into smaller segments and creates embeddings from them 2. Stores vectors in a PostgreSQL database 3. Performs initial vector similarity search 4. Re-ranks results using Mastra's rerank function, combining vector similarity, semantic relevance, and position scores 5. Compares initial and re-ranked results to show improvements ## Setup ### Environment Setup Make sure to set up your environment variables: ```bash filename=".env" OPENAI_API_KEY=your_openai_api_key_here POSTGRES_CONNECTION_STRING=your_connection_string_here ``` ### Dependencies Then, import the necessary dependencies: ```typescript copy showLineNumbers filename="src/index.ts" import { openai } from '@ai-sdk/openai'; import { PgVector } from '@mastra/pg'; import { MDocument, rerank } from '@mastra/rag'; import { embedMany, embed } from 'ai'; ``` ## Document Processing Create a document and process it into chunks: ```typescript copy showLineNumbers{7} filename="src/index.ts" const doc1 = MDocument.fromText(` market data shows price resistance levels. technical charts display moving averages. support levels guide trading decisions. breakout patterns signal entry points. price action determines trade timing. 
`); const chunks = await doc1.chunk({ strategy: 'recursive', size: 150, overlap: 20, separator: '\n', }); ``` ## Creating and Storing Embeddings Generate embeddings for the chunks and store them in the vector database: ```typescript copy showLineNumbers{36} filename="src/index.ts" const { embeddings } = await embedMany({ values: chunks.map(chunk => chunk.text), model: openai.embedding('text-embedding-3-small'), }); const pgVector = new PgVector(process.env.POSTGRES_CONNECTION_STRING!); await pgVector.createIndex({ indexName: 'embeddings', dimension: 1536, }); await pgVector.upsert({ indexName: 'embeddings', vectors: embeddings, metadata: chunks?.map((chunk: any) => ({ text: chunk.text })), }); ``` ## Vector Search and Re-ranking Perform vector search and re-rank the results: ```typescript copy showLineNumbers{51} filename="src/index.ts" const query = 'explain technical trading analysis'; // Get query embedding const { embedding: queryEmbedding } = await embed({ value: query, model: openai.embedding('text-embedding-3-small'), }); // Get initial results const initialResults = await pgVector.query({ indexName: 'embeddings', queryVector: queryEmbedding, topK: 3, }); // Re-rank results const rerankedResults = await rerank(initialResults, query, openai('gpt-4o-mini'), { weights: { semantic: 0.5, // How well the content matches the query semantically vector: 0.3, // Original vector similarity score position: 0.2 // Preserves original result ordering }, topK: 3, }); ``` The weights control how different factors influence the final ranking: - `semantic`: Higher values prioritize semantic understanding and relevance to the query - `vector`: Higher values favor the original vector similarity scores - `position`: Higher values help maintain the original ordering of results ## Comparing Results Print both initial and re-ranked results to see the improvement: ```typescript copy showLineNumbers{72} filename="src/index.ts" console.log('Initial Results:'); initialResults.forEach((result, index) => { console.log(`Result ${index + 1}:`, { text: result.metadata.text, score: result.score, }); }); console.log('Re-ranked Results:'); rerankedResults.forEach(({ result, score, details }, index) => { console.log(`Result ${index + 1}:`, { text: result.metadata.text, score: score, semantic: details.semantic, vector: details.vector, position: details.position, }); }); ``` The re-ranked results show how combining vector similarity with semantic understanding can improve retrieval quality. Each result includes: - Overall score combining all factors - Semantic relevance score from the language model - Vector similarity score from the embedding comparison - Position-based score for maintaining original order when appropriate




--- title: "Example: Reranking with Cohere | RAG | Mastra Docs" description: Example of using Mastra to improve document retrieval relevance with Cohere's reranking service. --- # Reranking with Cohere [EN] Source: https://mastra.ai/en/examples/rag/rerank/reranking-with-cohere When retrieving documents for RAG, initial vector similarity search may miss important semantic matches. Cohere's reranking service helps improve result relevance by reordering documents using multiple scoring factors. ```typescript import { rerank } from "@mastra/rag"; const results = rerank( searchResults, "deployment configuration", cohere("rerank-v3.5"), { topK: 5, weights: { semantic: 0.4, vector: 0.4, position: 0.2 } } ); ``` ## Links - [rerank() reference](/reference/rag/rerank.mdx) - [Retrieval docs](/reference/rag/retrieval.mdx) --- title: "Example: Upsert Embeddings | RAG | Mastra Docs" description: Examples of using Mastra to store embeddings in various vector databases for similarity search. --- import { Tabs } from "nextra/components"; import { GithubLink } from "@/components/github-link"; # Upsert Embeddings [EN] Source: https://mastra.ai/en/examples/rag/upsert/upsert-embeddings After generating embeddings, you need to store them in a database that supports vector similarity search. This example shows how to store embeddings in various vector databases for later retrieval. The `PgVector` class provides methods to create indexes and insert embeddings into PostgreSQL with the pgvector extension. ```tsx copy import { openai } from "@ai-sdk/openai"; import { PgVector } from "@mastra/pg"; import { MDocument } from "@mastra/rag"; import { embedMany } from "ai"; const doc = MDocument.fromText("Your text content..."); const chunks = await doc.chunk(); const { embeddings } = await embedMany({ values: chunks.map(chunk => chunk.text), model: openai.embedding("text-embedding-3-small"), }); const pgVector = new PgVector(process.env.POSTGRES_CONNECTION_STRING!); await pgVector.createIndex({ indexName: "test_index", dimension: 1536, }); await pgVector.upsert({ indexName: "test_index", vectors: embeddings, metadata: chunks?.map((chunk: any) => ({ text: chunk.text })), }); ```


The `PineconeVector` class provides methods to create indexes and insert embeddings into Pinecone, a managed vector database service. ```tsx copy import { openai } from '@ai-sdk/openai'; import { PineconeVector } from '@mastra/pinecone'; import { MDocument } from '@mastra/rag'; import { embedMany } from 'ai'; const doc = MDocument.fromText('Your text content...'); const chunks = await doc.chunk(); const { embeddings } = await embedMany({ values: chunks.map(chunk => chunk.text), model: openai.embedding('text-embedding-3-small'), }); const pinecone = new PineconeVector(process.env.PINECONE_API_KEY!); await pinecone.createIndex({ indexName: 'testindex', dimension: 1536, }); await pinecone.upsert({ indexName: 'testindex', vectors: embeddings, metadata: chunks?.map(chunk => ({ text: chunk.text })), }); ```


The `QdrantVector` class provides methods to create collections and insert embeddings into Qdrant, a high-performance vector database. ```tsx copy import { openai } from '@ai-sdk/openai'; import { QdrantVector } from '@mastra/qdrant'; import { MDocument } from '@mastra/rag'; import { embedMany } from 'ai'; const doc = MDocument.fromText('Your text content...'); const chunks = await doc.chunk(); const { embeddings } = await embedMany({ values: chunks.map(chunk => chunk.text), model: openai.embedding('text-embedding-3-small'), maxRetries: 3, }); const qdrant = new QdrantVector( process.env.QDRANT_URL, process.env.QDRANT_API_KEY, ); await qdrant.createIndex({ indexName: 'test_collection', dimension: 1536, }); await qdrant.upsert({ indexName: 'test_collection', vectors: embeddings, metadata: chunks?.map(chunk => ({ text: chunk.text })), }); ``` The `ChromaVector` class provides methods to create collections and insert embeddings into Chroma, an open-source embedding database. ```tsx copy import { openai } from '@ai-sdk/openai'; import { ChromaVector } from '@mastra/chroma'; import { MDocument } from '@mastra/rag'; import { embedMany } from 'ai'; const doc = MDocument.fromText('Your text content...'); const chunks = await doc.chunk(); const { embeddings } = await embedMany({ values: chunks.map(chunk => chunk.text), model: openai.embedding('text-embedding-3-small'), }); const chroma = new ChromaVector({ path: "path/to/chroma/db", }); await chroma.createIndex({ indexName: 'test_collection', dimension: 1536, }); await chroma.upsert({ indexName: 'test_collection', vectors: embeddings, metadata: chunks.map(chunk => ({ text: chunk.text })), documents: chunks.map(chunk => chunk.text), }); ```


The `AstraVector` class provides methods to create collections and insert embeddings into DataStax Astra DB, a cloud-native vector database.

```tsx copy
import { openai } from '@ai-sdk/openai';
import { AstraVector } from '@mastra/astra';
import { MDocument } from '@mastra/rag';
import { embedMany } from 'ai';

const doc = MDocument.fromText('Your text content...');

const chunks = await doc.chunk();

const { embeddings } = await embedMany({
  model: openai.embedding('text-embedding-3-small'),
  values: chunks.map(chunk => chunk.text),
});

const astra = new AstraVector({
  token: process.env.ASTRA_DB_TOKEN,
  endpoint: process.env.ASTRA_DB_ENDPOINT,
  keyspace: process.env.ASTRA_DB_KEYSPACE,
});

await astra.createIndex({
  indexName: 'test_collection',
  dimension: 1536,
});

await astra.upsert({
  indexName: 'test_collection',
  vectors: embeddings,
  metadata: chunks?.map(chunk => ({ text: chunk.text })),
});
```

The `LibSQLVector` class provides methods to create collections and insert embeddings into LibSQL, a fork of SQLite with vector extensions.

```tsx copy
import { openai } from "@ai-sdk/openai";
import { LibSQLVector } from "@mastra/core/vector/libsql";
import { MDocument } from "@mastra/rag";
import { embedMany } from "ai";

const doc = MDocument.fromText("Your text content...");

const chunks = await doc.chunk();

const { embeddings } = await embedMany({
  values: chunks.map((chunk) => chunk.text),
  model: openai.embedding("text-embedding-3-small"),
});

const libsql = new LibSQLVector({
  connectionUrl: process.env.DATABASE_URL,
  authToken: process.env.DATABASE_AUTH_TOKEN, // Optional: for Turso cloud databases
});

await libsql.createIndex({
  indexName: "test_collection",
  dimension: 1536,
});

await libsql.upsert({
  indexName: "test_collection",
  vectors: embeddings,
  metadata: chunks?.map((chunk) => ({ text: chunk.text })),
});
```


The `UpstashVector` class provides methods to create collections and insert embeddings into Upstash Vector, a serverless vector database. ```tsx copy import { openai } from '@ai-sdk/openai'; import { UpstashVector } from '@mastra/upstash'; import { MDocument } from '@mastra/rag'; import { embedMany } from 'ai'; const doc = MDocument.fromText('Your text content...'); const chunks = await doc.chunk(); const { embeddings } = await embedMany({ values: chunks.map(chunk => chunk.text), model: openai.embedding('text-embedding-3-small'), }); const upstash = new UpstashVector({ url: process.env.UPSTASH_URL, token: process.env.UPSTASH_TOKEN, }); await upstash.createIndex({ indexName: 'test_collection', dimension: 1536, }); await upstash.upsert({ indexName: 'test_collection', vectors: embeddings, metadata: chunks?.map(chunk => ({ text: chunk.text })), }); ``` The `CloudflareVector` class provides methods to create collections and insert embeddings into Cloudflare Vectorize, a serverless vector database service. ```tsx copy import { openai } from '@ai-sdk/openai'; import { CloudflareVector } from '@mastra/vectorize'; import { MDocument } from '@mastra/rag'; import { embedMany } from 'ai'; const doc = MDocument.fromText('Your text content...'); const chunks = await doc.chunk(); const { embeddings } = await embedMany({ values: chunks.map(chunk => chunk.text), model: openai.embedding('text-embedding-3-small'), }); const vectorize = new CloudflareVector({ accountId: process.env.CF_ACCOUNT_ID, apiToken: process.env.CF_API_TOKEN, }); await vectorize.createIndex({ indexName: 'test_collection', dimension: 1536, }); await vectorize.upsert({ indexName: 'test_collection', vectors: embeddings, metadata: chunks?.map(chunk => ({ text: chunk.text })), }); ```
--- title: "Example: Using the Vector Query Tool | RAG | Mastra Docs" description: Example of implementing a basic RAG system in Mastra using OpenAI embeddings and PGVector for vector storage. --- import { GithubLink } from "@/components/github-link"; # Using the Vector Query Tool [EN] Source: https://mastra.ai/en/examples/rag/usage/basic-rag This example demonstrates how to implement and use `createVectorQueryTool` for semantic search in a RAG system. It shows how to configure the tool, manage vector storage, and retrieve relevant context effectively. ## Overview The system implements RAG using Mastra and OpenAI. Here's what it does: 1. Sets up a Mastra agent with gpt-4o-mini for response generation 2. Creates a vector query tool to manage vector store interactions 3. Uses existing embeddings to retrieve relevant context 4. Generates context-aware responses using the Mastra agent > **Note**: To learn how to create and store embeddings, see the [Upsert Embeddings](/examples/rag/upsert/upsert-embeddings) guide. ## Setup ### Environment Setup Make sure to set up your environment variables: ```bash filename=".env" OPENAI_API_KEY=your_openai_api_key_here POSTGRES_CONNECTION_STRING=your_connection_string_here ``` ### Dependencies Import the necessary dependencies: ```typescript copy showLineNumbers filename="src/index.ts" import { openai } from '@ai-sdk/openai'; import { Mastra } from '@mastra/core'; import { Agent } from '@mastra/core/agent'; import { createVectorQueryTool } from '@mastra/rag'; import { PgVector } from '@mastra/pg'; ``` ## Vector Query Tool Creation Create a tool that can query the vector database: ```typescript copy showLineNumbers{7} filename="src/index.ts" const vectorQueryTool = createVectorQueryTool({ vectorStoreName: 'pgVector', indexName: 'embeddings', model: openai.embedding('text-embedding-3-small'), }); ``` ## Agent Configuration Set up the Mastra agent that will handle the responses: ```typescript copy showLineNumbers{13} filename="src/index.ts" export const ragAgent = new Agent({ name: 'RAG Agent', instructions: 'You are a helpful assistant that answers questions based on the provided context. Keep your answers concise and relevant.', model: openai('gpt-4o-mini'), tools: { vectorQueryTool, }, }); ``` ## Instantiate PgVector and Mastra Instantiate PgVector and Mastra with all components: ```typescript copy showLineNumbers{23} filename="src/index.ts" const pgVector = new PgVector(process.env.POSTGRES_CONNECTION_STRING!); export const mastra = new Mastra({ agents: { ragAgent }, vectors: { pgVector }, }); const agent = mastra.getAgent('ragAgent'); ``` ## Example Usage ```typescript copy showLineNumbers{32} filename="src/index.ts" const prompt = ` [Insert query based on document here] Please base your answer only on the context provided in the tool. If the context doesn't contain enough information to fully answer the question, please state that explicitly. `; const completion = await agent.generate(prompt); console.log(completion.text); ```




--- title: "Example: Optimizing Information Density | RAG | Mastra Docs" description: Example of implementing a RAG system in Mastra to optimize information density and deduplicate data using LLM-based processing. --- import { GithubLink } from "@/components/github-link"; # Optimizing Information Density [EN] Source: https://mastra.ai/en/examples/rag/usage/cleanup-rag This example demonstrates how to implement a Retrieval-Augmented Generation (RAG) system using Mastra, OpenAI embeddings, and PGVector for vector storage. The system uses an agent to clean the initial chunks to optimize information density and deduplicate data. ## Overview The system implements RAG using Mastra and OpenAI, this time optimizing information density through LLM-based processing. Here's what it does: 1. Sets up a Mastra agent with gpt-4o-mini that can handle both querying and cleaning documents 2. Creates vector query and document chunking tools for the agent to use 3. Processes the initial document: - Chunks text documents into smaller segments - Creates embeddings for the chunks - Stores them in a PostgreSQL vector database 4. Performs an initial query to establish baseline response quality 5. Optimizes the data: - Uses the agent to clean and deduplicate chunks - Creates new embeddings for the cleaned chunks - Updates the vector store with optimized data 6. Performs the same query again to demonstrate improved response quality ## Setup ### Environment Setup Make sure to set up your environment variables: ```bash filename=".env" OPENAI_API_KEY=your_openai_api_key_here POSTGRES_CONNECTION_STRING=your_connection_string_here ``` ### Dependencies Then, import the necessary dependencies: ```typescript copy showLineNumbers filename="index.ts" import { openai } from '@ai-sdk/openai'; import { Mastra } from "@mastra/core"; import { Agent } from "@mastra/core/agent"; import { PgVector } from "@mastra/pg"; import { MDocument, createVectorQueryTool, createDocumentChunkerTool } from "@mastra/rag"; import { embedMany } from "ai"; ``` ## Tool Creation ### Vector Query Tool Using createVectorQueryTool imported from @mastra/rag, you can create a tool that can query the vector database. ```typescript copy showLineNumbers{8} filename="index.ts" const vectorQueryTool = createVectorQueryTool({ vectorStoreName: "pgVector", indexName: "embeddings", model: openai.embedding('text-embedding-3-small'), }); ``` ### Document Chunker Tool Using createDocumentChunkerTool imported from @mastra/rag, you can create a tool that chunks the document and sends the chunks to your agent. ```typescript copy showLineNumbers{14} filename="index.ts" const doc = MDocument.fromText(yourText); const documentChunkerTool = createDocumentChunkerTool({ doc, params: { strategy: "recursive", size: 512, overlap: 25, separator: "\n", }, }); ``` ## Agent Configuration Set up a single Mastra agent that can handle both querying and cleaning: ```typescript copy showLineNumbers{26} filename="index.ts" const ragAgent = new Agent({ name: "RAG Agent", instructions: `You are a helpful assistant that handles both querying and cleaning documents. When cleaning: Process, clean, and label data, remove irrelevant information and deduplicate content while preserving key facts. When querying: Provide answers based on the available context. Keep your answers concise and relevant. Important: When asked to answer a question, please base your answer only on the context provided in the tool. 
If the context doesn't contain enough information to fully answer the question, please state that explicitly. `, model: openai('gpt-4o-mini'), tools: { vectorQueryTool, documentChunkerTool, }, }); ``` ## Instantiate PgVector and Mastra Instantiate PgVector and Mastra with the components: ```typescript copy showLineNumbers{41} filename="index.ts" const pgVector = new PgVector(process.env.POSTGRES_CONNECTION_STRING!); export const mastra = new Mastra({ agents: { ragAgent }, vectors: { pgVector }, }); const agent = mastra.getAgent('ragAgent'); ``` ## Document Processing Chunk the initial document and create embeddings: ```typescript copy showLineNumbers{49} filename="index.ts" const chunks = await doc.chunk({ strategy: "recursive", size: 256, overlap: 50, separator: "\n", }); const { embeddings } = await embedMany({ model: openai.embedding('text-embedding-3-small'), values: chunks.map(chunk => chunk.text), }); const vectorStore = mastra.getVector("pgVector"); await vectorStore.createIndex({ indexName: "embeddings", dimension: 1536, }); await vectorStore.upsert({ indexName: "embeddings", vectors: embeddings, metadata: chunks?.map((chunk: any) => ({ text: chunk.text })), }); ``` ## Initial Query Let's try querying the raw data to establish a baseline: ```typescript copy showLineNumbers{73} filename="index.ts" // Generate response using the original embeddings const query = 'What are all the technologies mentioned for space exploration?'; const originalResponse = await agent.generate(query); console.log('\nQuery:', query); console.log('Response:', originalResponse.text); ``` ## Data Optimization After seeing the initial results, we can clean the data to improve quality: ```typescript copy showLineNumbers{79} filename="index.ts" const chunkPrompt = `Use the tool provided to clean the chunks. Make sure to filter out irrelevant information that is not space related and remove duplicates.`; const newChunks = await agent.generate(chunkPrompt); const updatedDoc = MDocument.fromText(newChunks.text); const updatedChunks = await updatedDoc.chunk({ strategy: "recursive", size: 256, overlap: 50, separator: "\n", }); const { embeddings: cleanedEmbeddings } = await embedMany({ model: openai.embedding('text-embedding-3-small'), values: updatedChunks.map(chunk => chunk.text), }); // Update the vector store with cleaned embeddings await vectorStore.deleteIndex('embeddings'); await vectorStore.createIndex({ indexName: "embeddings", dimension: 1536, }); await vectorStore.upsert({ indexName: "embeddings", vectors: cleanedEmbeddings, metadata: updatedChunks?.map((chunk: any) => ({ text: chunk.text })), }); ``` ## Optimized Query Query the data again after cleaning to observe any differences in the response: ```typescript copy showLineNumbers{109} filename="index.ts" // Query again with cleaned embeddings const cleanedResponse = await agent.generate(query); console.log('\nQuery:', query); console.log('Response:', cleanedResponse.text); ```
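To make the before/after comparison concrete, you can print both responses from the snippets above next to each other; a minimal sketch:

```typescript copy
// Side-by-side comparison of the baseline and optimized responses.
// Both variables come from the earlier snippets in this example.
console.log('\n=== Baseline vs. optimized ===');
console.log('Original response:', originalResponse.text);
console.log('Cleaned response:', cleanedResponse.text);
```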




--- title: "Example: Chain of Thought Prompting | RAG | Mastra Docs" description: Example of implementing a RAG system in Mastra with chain-of-thought reasoning using OpenAI and PGVector. --- import { GithubLink } from "@/components/github-link"; # Chain of Thought Prompting [EN] Source: https://mastra.ai/en/examples/rag/usage/cot-rag This example demonstrates how to implement a Retrieval-Augmented Generation (RAG) system using Mastra, OpenAI embeddings, and PGVector for vector storage, with an emphasis on chain-of-thought reasoning. ## Overview The system implements RAG using Mastra and OpenAI with chain-of-thought prompting. Here's what it does: 1. Sets up a Mastra agent with gpt-4o-mini for response generation 2. Creates a vector query tool to manage vector store interactions 3. Chunks text documents into smaller segments 4. Creates embeddings for these chunks 5. Stores them in a PostgreSQL vector database 6. Retrieves relevant chunks based on queries using vector query tool 7. Generates context-aware responses using chain-of-thought reasoning ## Setup ### Environment Setup Make sure to set up your environment variables: ```bash filename=".env" OPENAI_API_KEY=your_openai_api_key_here POSTGRES_CONNECTION_STRING=your_connection_string_here ``` ### Dependencies Then, import the necessary dependencies: ```typescript copy showLineNumbers filename="index.ts" import { openai } from "@ai-sdk/openai"; import { Mastra } from "@mastra/core"; import { Agent } from "@mastra/core/agent"; import { PgVector } from "@mastra/pg"; import { createVectorQueryTool, MDocument } from "@mastra/rag"; import { embedMany } from "ai"; ``` ## Vector Query Tool Creation Using createVectorQueryTool imported from @mastra/rag, you can create a tool that can query the vector database. ```typescript copy showLineNumbers{8} filename="index.ts" const vectorQueryTool = createVectorQueryTool({ vectorStoreName: "pgVector", indexName: "embeddings", model: openai.embedding('text-embedding-3-small'), }); ``` ## Agent Configuration Set up the Mastra agent with chain-of-thought prompting instructions: ```typescript copy showLineNumbers{14} filename="index.ts" export const ragAgent = new Agent({ name: "RAG Agent", instructions: `You are a helpful assistant that answers questions based on the provided context. Follow these steps for each response: 1. First, carefully analyze the retrieved context chunks and identify key information. 2. Break down your thinking process about how the retrieved information relates to the query. 3. Explain how you're connecting different pieces from the retrieved chunks. 4. Draw conclusions based only on the evidence in the retrieved context. 5. If the retrieved chunks don't contain enough information, explicitly state what's missing. Format your response as: THOUGHT PROCESS: - Step 1: [Initial analysis of retrieved chunks] - Step 2: [Connections between chunks] - Step 3: [Reasoning based on chunks] FINAL ANSWER: [Your concise answer based on the retrieved context] Important: When asked to answer a question, please base your answer only on the context provided in the tool. If the context doesn't contain enough information to fully answer the question, please state that explicitly. Remember: Explain how you're using the retrieved information to reach your conclusions. 
`, model: openai("gpt-4o-mini"), tools: { vectorQueryTool }, }); ``` ## Instantiate PgVector and Mastra Instantiate PgVector and Mastra with all components: ```typescript copy showLineNumbers{36} filename="index.ts" const pgVector = new PgVector(process.env.POSTGRES_CONNECTION_STRING!); export const mastra = new Mastra({ agents: { ragAgent }, vectors: { pgVector }, }); const agent = mastra.getAgent("ragAgent"); ``` ## Document Processing Create a document and process it into chunks: ```typescript copy showLineNumbers{44} filename="index.ts" const doc = MDocument.fromText( `The Impact of Climate Change on Global Agriculture...`, ); const chunks = await doc.chunk({ strategy: "recursive", size: 512, overlap: 50, separator: "\n", }); ``` ## Creating and Storing Embeddings Generate embeddings for the chunks and store them in the vector database: ```typescript copy showLineNumbers{55} filename="index.ts" const { embeddings } = await embedMany({ values: chunks.map(chunk => chunk.text), model: openai.embedding("text-embedding-3-small"), }); const vectorStore = mastra.getVector("pgVector"); await vectorStore.createIndex({ indexName: "embeddings", dimension: 1536, }); await vectorStore.upsert({ indexName: "embeddings", vectors: embeddings, metadata: chunks?.map((chunk: any) => ({ text: chunk.text })), }); ``` ## Chain-of-Thought Querying Try different queries to see how the agent breaks down its reasoning: ```typescript copy showLineNumbers{83} filename="index.ts" const answerOne = await agent.generate('What are the main adaptation strategies for farmers?'); console.log('\nQuery:', 'What are the main adaptation strategies for farmers?'); console.log('Response:', answerOne.text); const answerTwo = await agent.generate('Analyze how temperature affects crop yields.'); console.log('\nQuery:', 'Analyze how temperature affects crop yields.'); console.log('Response:', answerTwo.text); const answerThree = await agent.generate('What connections can you draw between climate change and food security?'); console.log('\nQuery:', 'What connections can you draw between climate change and food security?'); console.log('Response:', answerThree.text); ```
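The three queries above repeat the same generate-and-log pattern. If you prefer, a small helper keeps that in one place; this is a sketch, not part of the original example:

```typescript copy
// Sketch: wrap the repeated query/log pattern in a helper.
async function ask(query: string): Promise<string> {
  const response = await agent.generate(query);
  console.log('\nQuery:', query);
  console.log('Response:', response.text);
  return response.text;
}

await ask('What are the main adaptation strategies for farmers?');
```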




--- title: "Example: Structured Reasoning with Workflows | RAG | Mastra Docs" description: Example of implementing structured reasoning in a RAG system using Mastra's workflow capabilities. --- import { GithubLink } from "@/components/github-link"; # Structured Reasoning with Workflows [EN] Source: https://mastra.ai/en/examples/rag/usage/cot-workflow-rag This example demonstrates how to implement a Retrieval-Augmented Generation (RAG) system using Mastra, OpenAI embeddings, and PGVector for vector storage, with an emphasis on structured reasoning through a defined workflow. ## Overview The system implements RAG using Mastra and OpenAI with chain-of-thought prompting through a defined workflow. Here's what it does: 1. Sets up a Mastra agent with gpt-4o-mini for response generation 2. Creates a vector query tool to manage vector store interactions 3. Defines a workflow with multiple steps for chain-of-thought reasoning 4. Processes and chunks text documents 5. Creates and stores embeddings in PostgreSQL 6. Generates responses through the workflow steps ## Setup ### Environment Setup Make sure to set up your environment variables: ```bash filename=".env" OPENAI_API_KEY=your_openai_api_key_here POSTGRES_CONNECTION_STRING=your_connection_string_here ``` ### Dependencies Import the necessary dependencies: ```typescript copy showLineNumbers filename="index.ts" import { openai } from "@ai-sdk/openai"; import { Mastra } from "@mastra/core"; import { Agent } from "@mastra/core/agent"; import { Step, Workflow } from "@mastra/core/workflows"; import { PgVector } from "@mastra/pg"; import { createVectorQueryTool, MDocument } from "@mastra/rag"; import { embedMany } from "ai"; import { z } from "zod"; ``` ## Workflow Definition First, define the workflow with its trigger schema: ```typescript copy showLineNumbers{10} filename="index.ts" export const ragWorkflow = new Workflow({ name: "rag-workflow", triggerSchema: z.object({ query: z.string(), }), }); ``` ## Vector Query Tool Creation Create a tool for querying the vector database: ```typescript copy showLineNumbers{17} filename="index.ts" const vectorQueryTool = createVectorQueryTool({ vectorStoreName: "pgVector", indexName: "embeddings", model: openai.embedding('text-embedding-3-small'), }); ``` ## Agent Configuration Set up the Mastra agent: ```typescript copy showLineNumbers{23} filename="index.ts" export const ragAgent = new Agent({ name: "RAG Agent", instructions: `You are a helpful assistant that answers questions based on the provided context.`, model: openai("gpt-4o-mini"), tools: { vectorQueryTool, }, }); ``` ## Workflow Steps The workflow is divided into multiple steps for chain-of-thought reasoning: ### 1. Context Analysis Step ```typescript copy showLineNumbers{32} filename="index.ts" const analyzeContext = new Step({ id: "analyzeContext", outputSchema: z.object({ initialAnalysis: z.string(), }), execute: async ({ context, mastra }) => { console.log("---------------------------"); const ragAgent = mastra?.getAgent('ragAgent'); const query = context?.getStepResult<{ query: string }>( "trigger", )?.query; const analysisPrompt = `${query} 1. First, carefully analyze the retrieved context chunks and identify key information.`; const analysis = await ragAgent?.generate(analysisPrompt); console.log(analysis?.text); return { initialAnalysis: analysis?.text ?? "", }; }, }); ``` ### 2. 
Thought Breakdown Step ```typescript copy showLineNumbers{54} filename="index.ts" const breakdownThoughts = new Step({ id: "breakdownThoughts", outputSchema: z.object({ breakdown: z.string(), }), execute: async ({ context, mastra }) => { console.log("---------------------------"); const ragAgent = mastra?.getAgent('ragAgent'); const analysis = context?.getStepResult<{ initialAnalysis: string; }>("analyzeContext")?.initialAnalysis; const connectionPrompt = ` Based on the initial analysis: ${analysis} 2. Break down your thinking process about how the retrieved information relates to the query. `; const connectionAnalysis = await ragAgent?.generate(connectionPrompt); console.log(connectionAnalysis?.text); return { breakdown: connectionAnalysis?.text ?? "", }; }, }); ``` ### 3. Connection Step ```typescript copy showLineNumbers{80} filename="index.ts" const connectPieces = new Step({ id: "connectPieces", outputSchema: z.object({ connections: z.string(), }), execute: async ({ context, mastra }) => { console.log("---------------------------"); const ragAgent = mastra?.getAgent('ragAgent'); const process = context?.getStepResult<{ breakdown: string; }>("breakdownThoughts")?.breakdown; const connectionPrompt = ` Based on the breakdown: ${process} 3. Explain how you're connecting different pieces from the retrieved chunks. `; const connections = await ragAgent?.generate(connectionPrompt); console.log(connections?.text); return { connections: connections?.text ?? "", }; }, }); ``` ### 4. Conclusion Step ```typescript copy showLineNumbers{105} filename="index.ts" const drawConclusions = new Step({ id: "drawConclusions", outputSchema: z.object({ conclusions: z.string(), }), execute: async ({ context, mastra }) => { console.log("---------------------------"); const ragAgent = mastra?.getAgent('ragAgent'); const evidence = context?.getStepResult<{ connections: string; }>("connectPieces")?.connections; const conclusionPrompt = ` Based on the connections: ${evidence} 4. Draw conclusions based only on the evidence in the retrieved context. `; const conclusions = await ragAgent?.generate(conclusionPrompt); console.log(conclusions?.text); return { conclusions: conclusions?.text ?? "", }; }, }); ``` ### 5. Final Answer Step ```typescript copy showLineNumbers{130} filename="index.ts" const finalAnswer = new Step({ id: "finalAnswer", outputSchema: z.object({ finalAnswer: z.string(), }), execute: async ({ context, mastra }) => { console.log("---------------------------"); const ragAgent = mastra?.getAgent('ragAgent'); const conclusions = context?.getStepResult<{ conclusions: string; }>("drawConclusions")?.conclusions; const answerPrompt = ` Based on the conclusions: ${conclusions} Format your response as: THOUGHT PROCESS: - Step 1: [Initial analysis of retrieved chunks] - Step 2: [Connections between chunks] - Step 3: [Reasoning based on chunks] FINAL ANSWER: [Your concise answer based on the retrieved context]`; const finalAnswer = await ragAgent?.generate(answerPrompt); console.log(finalAnswer?.text); return { finalAnswer: finalAnswer?.text ?? 
"", }; }, }); ``` ## Workflow Configuration Connect all the steps in the workflow: ```typescript copy showLineNumbers{160} filename="index.ts" ragWorkflow .step(analyzeContext) .then(breakdownThoughts) .then(connectPieces) .then(drawConclusions) .then(finalAnswer); ragWorkflow.commit(); ``` ## Instantiate PgVector and Mastra Instantiate PgVector and Mastra with all components: ```typescript copy showLineNumbers{169} filename="index.ts" const pgVector = new PgVector(process.env.POSTGRES_CONNECTION_STRING!); export const mastra = new Mastra({ agents: { ragAgent }, vectors: { pgVector }, workflows: { ragWorkflow }, }); ``` ## Document Processing Process and chunks the document: ```typescript copy showLineNumbers{177} filename="index.ts" const doc = MDocument.fromText(`The Impact of Climate Change on Global Agriculture...`); const chunks = await doc.chunk({ strategy: "recursive", size: 512, overlap: 50, separator: "\n", }); ``` ## Embedding Creation and Storage Generate and store embeddings: ```typescript copy showLineNumbers{186} filename="index.ts" const { embeddings } = await embedMany({ model: openai.embedding('text-embedding-3-small'), values: chunks.map(chunk => chunk.text), }); const vectorStore = mastra.getVector("pgVector"); await vectorStore.createIndex({ indexName: "embeddings", dimension: 1536, }); await vectorStore.upsert({ indexName: "embeddings", vectors: embeddings, metadata: chunks?.map((chunk: any) => ({ text: chunk.text })), }); ``` ## Workflow Execution Here's how to execute the workflow with a query: ```typescript copy showLineNumbers{202} filename="index.ts" const query = 'What are the main adaptation strategies for farmers?'; console.log('\nQuery:', query); const prompt = ` Please answer the following question: ${query} Please base your answer only on the context provided in the tool. If the context doesn't contain enough information to fully answer the question, please state that explicitly. `; const { runId, start } = ragWorkflow.createRun(); console.log('Run:', runId); const workflowResult = await start({ triggerData: { query: prompt, }, }); console.log('\nThought Process:'); console.log(workflowResult.results); ```




--- title: "Example: Agent-Driven Metadata Filtering | Retrieval | RAG | Mastra Docs" description: Example of using a Mastra agent in a RAG system to construct and apply metadata filters for document retrieval. --- import { GithubLink } from "@/components/github-link"; # Agent-Driven Metadata Filtering [EN] Source: https://mastra.ai/en/examples/rag/usage/filter-rag This example demonstrates how to implement a Retrieval-Augmented Generation (RAG) system using Mastra, OpenAI embeddings, and PGVector for vector storage. This system uses an agent to construct metadata filters from a user's query to search for relevant chunks in the vector store, reducing the amount of results returned. ## Overview The system implements metadata filtering using Mastra and OpenAI. Here's what it does: 1. Sets up a Mastra agent with gpt-4o-mini to understand queries and identify filter requirements 2. Creates a vector query tool to handle metadata filtering and semantic search 3. Processes documents into chunks with metadata and embeddings 4. Stores both vectors and metadata in PGVector for efficient retrieval 5. Processes queries by combining metadata filters with semantic search When a user asks a question: - The agent analyzes the query to understand the intent - Constructs appropriate metadata filters (e.g., by topic, date, category) - Uses the vector query tool to find the most relevant information - Generates a contextual response based on the filtered results ## Setup ### Environment Setup Make sure to set up your environment variables: ```bash filename=".env" OPENAI_API_KEY=your_openai_api_key_here POSTGRES_CONNECTION_STRING=your_connection_string_here ``` ### Dependencies Then, import the necessary dependencies: ```typescript copy showLineNumbers filename="index.ts" import { openai } from '@ai-sdk/openai'; import { Mastra } from '@mastra/core'; import { Agent } from '@mastra/core/agent'; import { PgVector } from '@mastra/pg'; import { createVectorQueryTool, MDocument, PGVECTOR_PROMPT } from '@mastra/rag'; import { embedMany } from 'ai'; ``` ## Vector Query Tool Creation Using createVectorQueryTool imported from @mastra/rag, you can create a tool that enables metadata filtering: ```typescript copy showLineNumbers{9} filename="index.ts" const vectorQueryTool = createVectorQueryTool({ id: 'vectorQueryTool', vectorStoreName: "pgVector", indexName: "embeddings", model: openai.embedding('text-embedding-3-small'), enableFilter: true, }); ``` ## Document Processing Create a document and process it into chunks with metadata: ```typescript copy showLineNumbers{17} filename="index.ts" const doc = MDocument.fromText(`The Impact of Climate Change on Global Agriculture...`); const chunks = await doc.chunk({ strategy: 'recursive', size: 512, overlap: 50, separator: '\n', extract: { keywords: true, // Extracts keywords from each chunk }, }); ``` ### Transform Chunks into Metadata Transform chunks into metadata that can be filtered: ```typescript copy showLineNumbers{31} filename="index.ts" const chunkMetadata = chunks?.map((chunk: any, index: number) => ({ text: chunk.text, ...chunk.metadata, nested: { keywords: chunk.metadata.excerptKeywords .replace('KEYWORDS:', '') .split(',') .map(k => k.trim()), id: index, }, })); ``` ## Agent Configuration The agent is configured to understand user queries and translate them into appropriate metadata filters. 
The agent requires both the vector query tool and a system prompt containing: - Metadata structure for available filter fields - Vector store prompt for filter operations and syntax ```typescript copy showLineNumbers{43} filename="index.ts" export const ragAgent = new Agent({ name: 'RAG Agent', model: openai('gpt-4o-mini'), instructions: ` You are a helpful assistant that answers questions based on the provided context. Keep your answers concise and relevant. Filter the context by searching the metadata. The metadata is structured as follows: { text: string, excerptKeywords: string, nested: { keywords: string[], id: number, }, } ${PGVECTOR_PROMPT} Important: When asked to answer a question, please base your answer only on the context provided in the tool. If the context doesn't contain enough information to fully answer the question, please state that explicitly. `, tools: { vectorQueryTool }, }); ``` The agent's instructions are designed to: - Process user queries to identify filter requirements - Use the metadata structure to find relevant information - Apply appropriate filters through the vectorQueryTool and the provided vector store prompt - Generate responses based on the filtered context > Note: Different vector stores have specific prompts available. See [Vector Store Prompts](/docs/rag/retrieval#vector-store-prompts) for details. ## Instantiate PgVector and Mastra Instantiate PgVector and Mastra with the components: ```typescript copy showLineNumbers{69} filename="index.ts" const pgVector = new PgVector(process.env.POSTGRES_CONNECTION_STRING!); export const mastra = new Mastra({ agents: { ragAgent }, vectors: { pgVector }, }); const agent = mastra.getAgent('ragAgent'); ``` ## Creating and Storing Embeddings Generate embeddings and store them with metadata: ```typescript copy showLineNumbers{78} filename="index.ts" const { embeddings } = await embedMany({ model: openai.embedding('text-embedding-3-small'), values: chunks.map(chunk => chunk.text), }); const vectorStore = mastra.getVector('pgVector'); await vectorStore.createIndex({ indexName: 'embeddings', dimension: 1536, }); // Store both embeddings and metadata together await vectorStore.upsert({ indexName: 'embeddings', vectors: embeddings, metadata: chunkMetadata, }); ``` The `upsert` operation stores both the vector embeddings and their associated metadata, enabling combined semantic search and metadata filtering capabilities. ## Metadata-Based Querying Try different queries using metadata filters: ```typescript copy showLineNumbers{96} filename="index.ts" const queryOne = 'What are the adaptation strategies mentioned?'; const answerOne = await agent.generate(queryOne); console.log('\nQuery:', queryOne); console.log('Response:', answerOne.text); const queryTwo = 'Show me recent sections. Check the "nested.id" field and return values that are greater than 2.'; const answerTwo = await agent.generate(queryTwo); console.log('\nQuery:', queryTwo); console.log('Response:', answerTwo.text); const queryThree = 'Search the "text" field using regex operator to find sections containing "temperature".'; const answerThree = await agent.generate(queryThree); console.log('\nQuery:', queryThree); console.log('Response:', answerThree.text); ```
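To verify a filter independently of the agent, you can query the vector store directly. The sketch below assumes the MongoDB-style operator syntax described by `PGVECTOR_PROMPT` and a `query` method that takes object arguments like `createIndex` and `upsert` above; `embed` from the `ai` package embeds the single query string:

```typescript copy
import { embed } from 'ai';

// Sketch: run one metadata-filtered query against the store directly.
const { embedding } = await embed({
  model: openai.embedding('text-embedding-3-small'),
  value: 'adaptation strategies',
});

const results = await vectorStore.query({
  indexName: 'embeddings',
  queryVector: embedding,
  topK: 3,
  filter: { 'nested.id': { $gt: 2 } }, // same condition as queryTwo above
});
console.log(results.map(r => r.metadata?.text));
```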




--- title: "Example: A Complete Graph RAG System | RAG | Mastra Docs" description: Example of implementing a Graph RAG system in Mastra using OpenAI embeddings and PGVector for vector storage. --- import { GithubLink } from "@/components/github-link"; # Graph RAG [EN] Source: https://mastra.ai/en/examples/rag/usage/graph-rag This example demonstrates how to implement a Retrieval-Augmented Generation (RAG) system using Mastra, OpenAI embeddings, and PGVector for vector storage. ## Overview The system implements Graph RAG using Mastra and OpenAI. Here's what it does: 1. Sets up a Mastra agent with gpt-4o-mini for response generation 2. Creates a GraphRAG tool to manage vector store interactions and knowledge graph creation/traversal 3. Chunks text documents into smaller segments 4. Creates embeddings for these chunks 5. Stores them in a PostgreSQL vector database 6. Creates a knowledge graph of relevant chunks based on queries using GraphRAG tool - Tool returns results from vector store and creates knowledge graph - Traverses knowledge graph using query 7. Generates context-aware responses using the Mastra agent ## Setup ### Environment Setup Make sure to set up your environment variables: ```bash filename=".env" OPENAI_API_KEY=your_openai_api_key_here POSTGRES_CONNECTION_STRING=your_connection_string_here ``` ### Dependencies Then, import the necessary dependencies: ```typescript copy showLineNumbers filename="index.ts" import { openai } from "@ai-sdk/openai"; import { Mastra } from "@mastra/core"; import { Agent } from "@mastra/core/agent"; import { PgVector } from "@mastra/pg"; import { MDocument, createGraphRAGTool } from "@mastra/rag"; import { embedMany } from "ai"; ``` ## GraphRAG Tool Creation Using createGraphRAGTool imported from @mastra/rag, you can create a tool that queries the vector database and converts the results into a knowledge graph: ```typescript copy showLineNumbers{8} filename="index.ts" const graphRagTool = createGraphRAGTool({ vectorStoreName: "pgVector", indexName: "embeddings", model: openai.embedding("text-embedding-3-small"), graphOptions: { dimension: 1536, threshold: 0.7, }, }); ``` ## Agent Configuration Set up the Mastra agent that will handle the responses: ```typescript copy showLineNumbers{19} filename="index.ts" const ragAgent = new Agent({ name: "GraphRAG Agent", instructions: `You are a helpful assistant that answers questions based on the provided context. Format your answers as follows: 1. DIRECT FACTS: List only the directly stated facts from the text relevant to the question (2-3 bullet points) 2. CONNECTIONS MADE: List the relationships you found between different parts of the text (2-3 bullet points) 3. CONCLUSION: One sentence summary that ties everything together Keep each section brief and focus on the most important points. Important: When asked to answer a question, please base your answer only on the context provided in the tool. 
If the context doesn't contain enough information to fully answer the question, please state that explicitly.`, model: openai("gpt-4o-mini"), tools: { graphRagTool, }, }); ``` ## Instantiate PgVector and Mastra Instantiate PgVector and Mastra with the components: ```typescript copy showLineNumbers{36} filename="index.ts" const pgVector = new PgVector(process.env.POSTGRES_CONNECTION_STRING!); export const mastra = new Mastra({ agents: { ragAgent }, vectors: { pgVector }, }); const agent = mastra.getAgent("ragAgent"); ``` ## Document Processing Create a document and process it into chunks: ```typescript copy showLineNumbers{45} filename="index.ts" const doc = MDocument.fromText(` # Riverdale Heights: Community Development Study // ... text content ... `); const chunks = await doc.chunk({ strategy: "recursive", size: 512, overlap: 50, separator: "\n", }); ``` ## Creating and Storing Embeddings Generate embeddings for the chunks and store them in the vector database: ```typescript copy showLineNumbers{56} filename="index.ts" const { embeddings } = await embedMany({ model: openai.embedding("text-embedding-3-small"), values: chunks.map(chunk => chunk.text), }); const vectorStore = mastra.getVector("pgVector"); await vectorStore.createIndex({ indexName: "embeddings", dimension: 1536, }); await vectorStore.upsert({ indexName: "embeddings", vectors: embeddings, metadata: chunks?.map((chunk: any) => ({ text: chunk.text })), }); ``` ## Graph-Based Querying Try different queries to explore relationships in the data: ```typescript copy showLineNumbers{82} filename="index.ts" const queryOne = "What are the direct and indirect effects of early railway decisions on Riverdale Heights' current state?"; const answerOne = await ragAgent.generate(queryOne); console.log('\nQuery:', queryOne); console.log('Response:', answerOne.text); const queryTwo = 'How have changes in transportation infrastructure affected different generations of local businesses and community spaces?'; const answerTwo = await ragAgent.generate(queryTwo); console.log('\nQuery:', queryTwo); console.log('Response:', answerTwo.text); const queryThree = 'Compare how the Rossi family business and Thompson Steel Works responded to major infrastructure changes, and how their responses affected the community.'; const answerThree = await ragAgent.generate(queryThree); console.log('\nQuery:', queryThree); console.log('Response:', answerThree.text); const queryFour = 'Trace how the transformation of the Thompson Steel Works site has influenced surrounding businesses and cultural spaces from 1932 to present.'; const answerFour = await ragAgent.generate(queryFour); console.log('\nQuery:', queryFour); console.log('Response:', answerFour.text); ```
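The `threshold` in `graphOptions` sets how similar two chunks must be before the tool links them as an edge in the knowledge graph. If traversal misses connections you expect, a lower threshold links more chunk pairs at the cost of noisier edges; this variant is a sketch based on the configuration above:

```typescript copy
// Sketch: same tool configuration, but with a lower similarity threshold
// so the knowledge graph contains more (and noisier) edges.
const denseGraphRagTool = createGraphRAGTool({
  vectorStoreName: "pgVector",
  indexName: "embeddings",
  model: openai.embedding("text-embedding-3-small"),
  graphOptions: {
    dimension: 1536,
    threshold: 0.6,
  },
});
```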




--- title: "Example: Speech to Text | Voice | Mastra Docs" description: Example of using Mastra to create a speech to text application. --- import { GithubLink } from '@/components/github-link'; # Smart Voice Memo App [EN] Source: https://mastra.ai/en/examples/voice/speech-to-text The following code snippets provide example implementations of Speech-to-Text (STT) functionality in a smart voice memo application using Next.js with direct integration of Mastra. For more details on integrating Mastra with Next.js, please refer to our [Integrate with Next.js](/docs/frameworks/next-js) documentation. ## Creating an Agent with STT Capabilities The following example shows how to initialize a voice-enabled agent with OpenAI's STT capabilities: ```typescript filename="src/mastra/agents/index.ts" import { openai } from '@ai-sdk/openai'; import { Agent } from '@mastra/core/agent'; import { OpenAIVoice } from '@mastra/voice-openai'; const instructions = ` You are an AI note assistant tasked with providing concise, structured summaries of their content... // omitted for brevity `; export const noteTakerAgent = new Agent({ name: 'Note Taker Agent', instructions: instructions, model: openai('gpt-4o'), voice: new OpenAIVoice(), // Add OpenAI voice provider with default configuration }); ``` ## Registering the Agent with Mastra This snippet demonstrates how to register the STT-enabled agent with your Mastra instance: ```typescript filename="src/mastra/index.ts" import { createLogger } from '@mastra/core/logger'; import { Mastra } from '@mastra/core/mastra'; import { noteTakerAgent } from './agents'; export const mastra = new Mastra({ agents: { noteTakerAgent }, // Register the note taker agent logger: createLogger({ name: 'Mastra', level: 'info', }), }); ``` ## Processing Audio for Transcription The following code shows how to receive audio from a web request and use the agent's STT capabilities to transcribe it: ```typescript filename="app/api/audio/route.ts" import { mastra } from '@/src/mastra'; // Import the Mastra instance import { Readable } from 'node:stream'; export async function POST(req: Request) { // Get the audio file from the request const formData = await req.formData(); const audioFile = formData.get('audio') as File; const arrayBuffer = await audioFile.arrayBuffer(); const buffer = Buffer.from(arrayBuffer); const readable = Readable.from(buffer); // Get the note taker agent from the Mastra instance const noteTakerAgent = mastra.getAgent('noteTakerAgent'); // Transcribe the audio file const text = await noteTakerAgent.voice?.listen(readable); return new Response(JSON.stringify({ text }), { headers: { 'Content-Type': 'application/json' }, }); } ``` You can view the complete implementation of the Smart Voice Memo App on our GitHub repository.




--- title: "Example: Text to Speech | Voice | Mastra Docs" description: Example of using Mastra to create a text to speech application. --- import { GithubLink } from '@/components/github-link'; # Interactive Story Generator [EN] Source: https://mastra.ai/en/examples/voice/text-to-speech The following code snippets provide example implementations of Text-to-Speech (TTS) functionality in an interactive story generator application using Next.js with Mastra as a separate backend integration. This example demonstrates how to use the Mastra client-js SDK to connect to your Mastra backend. For more details on integrating Mastra with Next.js, please refer to our [Integrate with Next.js](/docs/frameworks/next-js) documentation. ## Creating an Agent with TTS Capabilities The following example shows how to set up a story generator agent with TTS capabilities on the backend: ```typescript filename="src/mastra/agents/index.ts" import { openai } from '@ai-sdk/openai'; import { Agent } from '@mastra/core/agent'; import { OpenAIVoice } from '@mastra/voice-openai'; import { Memory } from '@mastra/memory'; const instructions = ` You are an Interactive Storyteller Agent. Your job is to create engaging short stories with user choices that influence the narrative. // omitted for brevity `; export const storyTellerAgent = new Agent({ name: 'Story Teller Agent', instructions: instructions, model: openai('gpt-4o'), voice: new OpenAIVoice(), }); ``` ## Registering the Agent with Mastra This snippet demonstrates how to register the agent with your Mastra instance: ```typescript filename="src/mastra/index.ts" import { createLogger } from '@mastra/core/logger'; import { Mastra } from '@mastra/core/mastra'; import { storyTellerAgent } from './agents'; export const mastra = new Mastra({ agents: { storyTellerAgent }, logger: createLogger({ name: 'Mastra', level: 'info', }), }); ``` ## Connecting to Mastra from the Frontend Here we use the Mastra Client SDK to interact with our Mastra server. For more information about the Mastra Client SDK, check out the [documentation](/docs/deployment/client). ```typescript filename="src/app/page.tsx" import { MastraClient } from '@mastra/client-js'; export const mastraClient = new MastraClient({ baseUrl: 'http://localhost:4111', // Replace with your Mastra backend URL }); ``` ## Generating Story Content and Converting to Speech This example demonstrates how to get a reference to a Mastra agent, generate story content based on user input, and then convert that content to speech: ``` typescript filename="/app/components/StoryManager.tsx" const handleInitialSubmit = async (formData: FormData) => { setIsLoading(true); try { const agent = mastraClient.getAgent('storyTellerAgent'); const message = `Current phase: BEGINNING. 
Story genre: ${formData.genre}, Protagonist name: ${formData.protagonistDetails.name}, Protagonist age: ${formData.protagonistDetails.age}, Protagonist gender: ${formData.protagonistDetails.gender}, Protagonist occupation: ${formData.protagonistDetails.occupation}, Story Setting: ${formData.setting}`; const storyResponse = await agent.generate({ messages: [{ role: 'user', content: message }], threadId: storyState.threadId, resourceId: storyState.resourceId, }); const storyText = storyResponse.text; const audioResponse = await agent.voice.speak(storyText); if (!audioResponse.body) { throw new Error('No audio stream received'); } const audio = await readStream(audioResponse.body); setStoryState(prev => ({ phase: 'beginning', threadId: prev.threadId, resourceId: prev.resourceId, content: { ...prev.content, beginning: storyText, }, })); setAudioBlob(audio); return audio; } catch (error) { console.error('Error generating story beginning:', error); } finally { setIsLoading(false); } }; ``` ## Playing the Audio This snippet demonstrates how to handle text-to-speech audio playback by monitoring for new audio data. When audio is received, the code creates a browser-playable URL from the audio blob, assigns it to an audio element, and attempts to play it automatically: ```typescript filename="/app/components/StoryManager.tsx" useEffect(() => { if (!audioRef.current || !audioData) return; // Store a reference to the HTML audio element const currentAudio = audioRef.current; // Convert the Blob/File audio data from Mastra into a URL the browser can play const url = URL.createObjectURL(audioData); const playAudio = async () => { try { currentAudio.src = url; await currentAudio.load(); await currentAudio.play(); setIsPlaying(true); } catch (error) { console.error('Auto-play failed:', error); } }; playAudio(); return () => { if (currentAudio) { currentAudio.pause(); currentAudio.src = ''; URL.revokeObjectURL(url); } }; }, [audioData]); ``` You can view the complete implementation of the Interactive Story Generator on our GitHub repository.
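The snippets above call a `readStream` helper that isn't shown. Assuming it only needs to buffer the response stream into a `Blob` the audio element can play, a minimal sketch could look like this (the MIME type is an assumption):

```typescript copy
// Sketch of the readStream helper used above: collect the audio bytes from a
// ReadableStream and wrap them in a Blob for playback.
async function readStream(stream: ReadableStream<Uint8Array>): Promise<Blob> {
  const reader = stream.getReader();
  const chunks: Uint8Array[] = [];
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    chunks.push(value);
  }
  return new Blob(chunks, { type: 'audio/mpeg' }); // assumed audio format
}
```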




--- title: "Example: Branching Paths | Workflows | Mastra Docs" description: Example of using Mastra to create workflows with branching paths based on intermediate results. --- import { GithubLink } from "@/components/github-link"; # Branching Paths [EN] Source: https://mastra.ai/en/examples/workflows/branching-paths When processing data, you often need to take different actions based on intermediate results. This example shows how to create a workflow that splits into separate paths, where each path executes different steps based on the output of a previous step. ## Control Flow Diagram This example shows how to create a workflow that splits into separate paths, where each path executes different steps based on the output of a previous step. Here's the control flow diagram: Diagram showing workflow with branching paths ## Creating the Steps Let's start by creating the steps and initializing the workflow. {/* prettier-ignore */} ```ts showLineNumbers copy import { Step, Workflow } from "@mastra/core/workflows"; import { z } from "zod" const stepOne = new Step({ id: "stepOne", execute: async ({ context }) => ({ doubledValue: context.triggerData.inputValue * 2 }) }); const stepTwo = new Step({ id: "stepTwo", execute: async ({ context }) => { const stepOneResult = context.getStepResult<{ doubledValue: number }>("stepOne"); if (!stepOneResult) { return { isDivisibleByFive: false } } return { isDivisibleByFive: stepOneResult.doubledValue % 5 === 0 } } }); const stepThree = new Step({ id: "stepThree", execute: async ({ context }) =>{ const stepOneResult = context.getStepResult<{ doubledValue: number }>("stepOne"); if (!stepOneResult) { return { incrementedValue: 0 } } return { incrementedValue: stepOneResult.doubledValue + 1 } } }); const stepFour = new Step({ id: "stepFour", execute: async ({ context }) => { const stepThreeResult = context.getStepResult<{ incrementedValue: number }>("stepThree"); if (!stepThreeResult) { return { isDivisibleByThree: false } } return { isDivisibleByThree: stepThreeResult.incrementedValue % 3 === 0 } } }); // New step that depends on both branches const finalStep = new Step({ id: "finalStep", execute: async ({ context }) => { // Get results from both branches using getStepResult const stepTwoResult = context.getStepResult<{ isDivisibleByFive: boolean }>("stepTwo"); const stepFourResult = context.getStepResult<{ isDivisibleByThree: boolean }>("stepFour"); const isDivisibleByFive = stepTwoResult?.isDivisibleByFive || false; const isDivisibleByThree = stepFourResult?.isDivisibleByThree || false; return { summary: `The number ${context.triggerData.inputValue} when doubled ${isDivisibleByFive ? 'is' : 'is not'} divisible by 5, and when doubled and incremented ${isDivisibleByThree ? 'is' : 'is not'} divisible by 3.`, isDivisibleByFive, isDivisibleByThree } } }); // Build the workflow const myWorkflow = new Workflow({ name: "my-workflow", triggerSchema: z.object({ inputValue: z.number(), }), }); ``` ## Branching Paths and Chaining Steps Now let's configure the workflow with branching paths and merge them using the compound `.after([])` syntax. 
```ts showLineNumbers copy // Create two parallel branches myWorkflow // First branch .step(stepOne) .then(stepTwo) // Second branch .after(stepOne) .step(stepThree) .then(stepFour) // Merge both branches using compound after syntax .after([stepTwo, stepFour]) .step(finalStep) .commit(); const { start } = myWorkflow.createRun(); const result = await start({ triggerData: { inputValue: 3 } }); console.log(result.steps.finalStep.output.summary); // Output: "The number 3 when doubled is not divisible by 5, and when doubled and incremented is not divisible by 3." ``` ## Advanced Branching and Merging You can create more complex workflows with multiple branches and merge points: ```ts showLineNumbers copy const complexWorkflow = new Workflow({ name: "complex-workflow", triggerSchema: z.object({ inputValue: z.number(), }), }); // Create multiple branches with different merge points complexWorkflow // Main step .step(stepOne) // First branch .then(stepTwo) // Second branch .after(stepOne) .step(stepThree) .then(stepFour) // Third branch (another path from stepOne) .after(stepOne) .step(new Step({ id: "alternativePath", execute: async ({ context }) => { const stepOneResult = context.getStepResult<{ doubledValue: number }>("stepOne"); return { result: (stepOneResult?.doubledValue || 0) * 3 } } })) // Merge first and second branches .after([stepTwo, stepFour]) .step(new Step({ id: "partialMerge", execute: async ({ context }) => { const stepTwoResult = context.getStepResult<{ isDivisibleByFive: boolean }>("stepTwo"); const stepFourResult = context.getStepResult<{ isDivisibleByThree: boolean }>("stepFour"); return { intermediateResult: "Processed first two branches", branchResults: { branch1: stepTwoResult?.isDivisibleByFive, branch2: stepFourResult?.isDivisibleByThree } } } })) // Final merge of all branches .after(["partialMerge", "alternativePath"]) .step(new Step({ id: "finalMerge", execute: async ({ context }) => { const partialMergeResult = context.getStepResult<{ intermediateResult: string, branchResults: { branch1: boolean, branch2: boolean } }>("partialMerge"); const alternativePathResult = context.getStepResult<{ result: number }>("alternativePath"); return { finalResult: "All branches processed", combinedData: { fromPartialMerge: partialMergeResult?.branchResults, fromAlternativePath: alternativePathResult?.result } } } })) .commit(); ```
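Running the larger workflow follows the same pattern as the first example. The step-result access below is a sketch that assumes the same `result.steps[stepId].output` shape used above:

```ts showLineNumbers copy
// Sketch: execute the complex workflow and read the merged results.
const { start: startComplex } = complexWorkflow.createRun();
const complexResult = await startComplex({ triggerData: { inputValue: 3 } });
console.log(complexResult.steps.finalMerge?.output?.combinedData);
```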




--- title: "Example: Calling an Agent from a Workflow | Mastra Docs" description: Example of using Mastra to call an AI agent from within a workflow step. --- import { GithubLink } from "@/components/github-link"; # Calling an Agent From a Workflow [EN] Source: https://mastra.ai/en/examples/workflows/calling-agent This example demonstrates how to create a workflow that calls an AI agent to process messages and generate responses, and execute it within a workflow step. ```ts showLineNumbers copy import { openai } from "@ai-sdk/openai"; import { Mastra } from "@mastra/core"; import { Agent } from "@mastra/core/agent"; import { Step, Workflow } from "@mastra/core/workflows"; import { z } from "zod"; const penguin = new Agent({ name: "agent skipper", instructions: `You are skipper from penguin of madagascar, reply as that`, model: openai("gpt-4o-mini"), }); const newWorkflow = new Workflow({ name: "pass message to the workflow", triggerSchema: z.object({ message: z.string(), }), }); const replyAsSkipper = new Step({ id: "reply", outputSchema: z.object({ reply: z.string(), }), execute: async ({ context, mastra }) => { const skipper = mastra?.getAgent('penguin'); const res = await skipper?.generate( context?.triggerData?.message, ); return { reply: res?.text || "" }; }, }); newWorkflow.step(replyAsSkipper); newWorkflow.commit(); const mastra = new Mastra({ agents: { penguin }, workflows: { newWorkflow }, }); const { runId, start } = await mastra.getWorkflow("newWorkflow").createRun(); const runResult = await start({ triggerData: { message: "Give me a run down of the mission to save private" }, }); console.log(runResult.results); ```




--- title: "Example: Conditional Branching (experimental) | Workflows | Mastra Docs" description: Example of using Mastra to create conditional branches in workflows using if/else statements. --- import { GithubLink } from '@/components/github-link'; # Workflow with Conditional Branching (experimental) [EN] Source: https://mastra.ai/en/examples/workflows/conditional-branching Workflows often need to follow different paths based on conditions. This example demonstrates how to use `if` and `else` to create conditional branches in your workflows. ## Basic If/Else Example This example shows a simple workflow that takes different paths based on a numeric value: ```ts showLineNumbers copy import { Mastra } from '@mastra/core'; import { Step, Workflow } from '@mastra/core/workflows'; import { z } from 'zod'; // Step that provides the initial value const startStep = new Step({ id: 'start', outputSchema: z.object({ value: z.number(), }), execute: async ({ context }) => { // Get the value from the trigger data const value = context.triggerData.inputValue; return { value }; }, }); // Step that handles high values const highValueStep = new Step({ id: 'highValue', outputSchema: z.object({ result: z.string(), }), execute: async ({ context }) => { const value = context.getStepResult<{ value: number }>('start')?.value; return { result: `High value processed: ${value}` }; }, }); // Step that handles low values const lowValueStep = new Step({ id: 'lowValue', outputSchema: z.object({ result: z.string(), }), execute: async ({ context }) => { const value = context.getStepResult<{ value: number }>('start')?.value; return { result: `Low value processed: ${value}` }; }, }); // Final step that summarizes the result const finalStep = new Step({ id: 'final', outputSchema: z.object({ summary: z.string(), }), execute: async ({ context }) => { // Get the result from whichever branch executed const highResult = context.getStepResult<{ result: string }>('highValue')?.result; const lowResult = context.getStepResult<{ result: string }>('lowValue')?.result; const result = highResult || lowResult; return { summary: `Processing complete: ${result}` }; }, }); // Build the workflow with conditional branching const conditionalWorkflow = new Workflow({ name: 'conditional-workflow', triggerSchema: z.object({ inputValue: z.number(), }), }); conditionalWorkflow .step(startStep) .if(async ({ context }) => { const value = context.getStepResult<{ value: number }>('start')?.value ?? 
0; return value >= 10; // Condition: value is 10 or greater }) .then(highValueStep) .then(finalStep) .else() .then(lowValueStep) .then(finalStep) // Both branches converge on the final step .commit(); // Register the workflow const mastra = new Mastra({ workflows: { conditionalWorkflow }, }); // Example usage async function runWorkflow(inputValue: number) { const workflow = mastra.getWorkflow('conditionalWorkflow'); const { start } = workflow.createRun(); const result = await start({ triggerData: { inputValue }, }); console.log('Workflow result:', result.results); return result; } // Run with a high value (follows the "if" branch) const result1 = await runWorkflow(15); // Run with a low value (follows the "else" branch) const result2 = await runWorkflow(5); console.log('Result 1:', result1); console.log('Result 2:', result2); ``` ## Using Reference-Based Conditions You can also use reference-based conditions with comparison operators: ```ts showLineNumbers copy // Using reference-based conditions instead of functions conditionalWorkflow .step(startStep) .if({ ref: { step: startStep, path: 'value' }, query: { $gte: 10 }, // Condition: value is 10 or greater }) .then(highValueStep) .then(finalStep) .else() .then(lowValueStep) .then(finalStep) .commit(); ```




--- title: "Example: Creating a Workflow | Workflows | Mastra Docs" description: Example of using Mastra to define and execute a simple workflow with a single step. --- import { GithubLink } from "@/components/github-link"; # Creating a Simple Workflow [EN] Source: https://mastra.ai/en/examples/workflows/creating-a-workflow A workflow allows you to define and execute sequences of operations in a structured path. This example shows a workflow with a single step. ```ts showLineNumbers copy import { Step, Workflow } from "@mastra/core/workflows"; import { z } from "zod"; const myWorkflow = new Workflow({ name: "my-workflow", triggerSchema: z.object({ input: z.number(), }), }); const stepOne = new Step({ id: "stepOne", inputSchema: z.object({ value: z.number(), }), outputSchema: z.object({ doubledValue: z.number(), }), execute: async ({ context }) => { const doubledValue = context?.triggerData?.input * 2; return { doubledValue }; }, }); myWorkflow.step(stepOne).commit(); const { runId, start } = myWorkflow.createRun(); const res = await start({ triggerData: { input: 90 }, }); console.log(res.results); ```




--- title: "Example: Cyclical Dependencies | Workflows | Mastra Docs" description: Example of using Mastra to create workflows with cyclical dependencies and conditional loops. --- import { GithubLink } from "@/components/github-link"; # Workflow with Cyclical dependencies [EN] Source: https://mastra.ai/en/examples/workflows/cyclical-dependencies Workflows support cyclical dependencies where steps can loop back based on conditions. The example below shows how to use conditional logic to create loops and handle repeated execution. ```ts showLineNumbers copy import { Workflow, Step } from '@mastra/core'; import { z } from 'zod'; async function main() { const doubleValue = new Step({ id: 'doubleValue', description: 'Doubles the input value', inputSchema: z.object({ inputValue: z.number(), }), outputSchema: z.object({ doubledValue: z.number(), }), execute: async ({ context }) => { const doubledValue = context.inputValue * 2; return { doubledValue }; }, }); const incrementByOne = new Step({ id: 'incrementByOne', description: 'Adds 1 to the input value', outputSchema: z.object({ incrementedValue: z.number(), }), execute: async ({ context }) => { const valueToIncrement = context?.getStepResult<{ firstValue: number }>('trigger')?.firstValue; if (!valueToIncrement) throw new Error('No value to increment provided'); const incrementedValue = valueToIncrement + 1; return { incrementedValue }; }, }); const cyclicalWorkflow = new Workflow({ name: 'cyclical-workflow', triggerSchema: z.object({ firstValue: z.number(), }), }); cyclicalWorkflow .step(doubleValue, { variables: { inputValue: { step: 'trigger', path: 'firstValue', }, }, }) .then(incrementByOne) .after(doubleValue) .step(doubleValue, { variables: { inputValue: { step: doubleValue, path: 'doubledValue', }, }, }) .commit(); const { runId, start } = cyclicalWorkflow.createRun(); console.log('Run', runId); const res = await start({ triggerData: { firstValue: 6 } }); console.log(res.results); } main(); ```




--- title: "Example: Human in the Loop | Workflows | Mastra Docs" description: Example of using Mastra to create workflows with human intervention points. --- import { GithubLink } from '@/components/github-link'; # Human in the Loop Workflow [EN] Source: https://mastra.ai/en/examples/workflows/human-in-the-loop Human-in-the-loop workflows allow you to pause execution at specific points to collect user input, make decisions, or perform actions that require human judgment. This example demonstrates how to create a workflow with human intervention points. ## How It Works 1. A workflow step can **suspend** execution using the `suspend()` function, optionally passing a payload with context for the human decision maker. 2. When the workflow is **resumed**, the human input is passed in the `context` parameter of the `resume()` call. 3. This input becomes available in the step's execution context as `context.inputData`, which is typed according to the step's `inputSchema`. 4. The step can then continue execution based on the human input. This pattern allows for safe, type-checked human intervention in automated workflows. ## Interactive Terminal Example Using Inquirer This example demonstrates how to use the [Inquirer](https://www.npmjs.com/package/@inquirer/prompts) library to collect user input directly from the terminal when a workflow is suspended, creating a truly interactive human-in-the-loop experience. ```ts showLineNumbers copy import { Mastra } from '@mastra/core'; import { Step, Workflow } from '@mastra/core/workflows'; import { z } from 'zod'; import { confirm, input, select } from '@inquirer/prompts'; // Step 1: Generate product recommendations const generateRecommendations = new Step({ id: 'generateRecommendations', outputSchema: z.object({ customerName: z.string(), recommendations: z.array( z.object({ productId: z.string(), productName: z.string(), price: z.number(), description: z.string(), }), ), }), execute: async ({ context }) => { const customerName = context.triggerData.customerName; // In a real application, you might call an API or ML model here // For this example, we'll return mock data return { customerName, recommendations: [ { productId: 'prod-001', productName: 'Premium Widget', price: 99.99, description: 'Our best-selling premium widget with advanced features', }, { productId: 'prod-002', productName: 'Basic Widget', price: 49.99, description: 'Affordable entry-level widget for beginners', }, { productId: 'prod-003', productName: 'Widget Pro Plus', price: 149.99, description: 'Professional-grade widget with extended warranty', }, ], }; }, }); ``` ```ts showLineNumbers copy // Step 2: Get human approval and customization for the recommendations const reviewRecommendations = new Step({ id: 'reviewRecommendations', inputSchema: z.object({ approvedProducts: z.array(z.string()), customerNote: z.string().optional(), offerDiscount: z.boolean().optional(), }), outputSchema: z.object({ finalRecommendations: z.array( z.object({ productId: z.string(), productName: z.string(), price: z.number(), }), ), customerNote: z.string().optional(), offerDiscount: z.boolean(), }), execute: async ({ context, suspend }) => { const { customerName, recommendations } = context.getStepResult(generateRecommendations) || { customerName: '', recommendations: [], }; // Check if we have input from a resumed workflow const reviewInput = { approvedProducts: context.inputData?.approvedProducts || [], customerNote: context.inputData?.customerNote, offerDiscount: context.inputData?.offerDiscount, }; // If 
we don't have agent input yet, suspend for human review if (!reviewInput.approvedProducts.length) { console.log(`Generating recommendations for customer: ${customerName}`); await suspend({ customerName, recommendations, message: 'Please review these product recommendations before sending to the customer', }); // Placeholder return (won't be reached due to suspend) return { finalRecommendations: [], customerNote: '', offerDiscount: false, }; } // Process the agent's product selections const finalRecommendations = recommendations .filter(product => reviewInput.approvedProducts.includes(product.productId)) .map(product => ({ productId: product.productId, productName: product.productName, price: product.price, })); return { finalRecommendations, customerNote: reviewInput.customerNote || '', offerDiscount: reviewInput.offerDiscount || false, }; }, }); ``` ```ts showLineNumbers copy // Step 3: Send the recommendations to the customer const sendRecommendations = new Step({ id: 'sendRecommendations', outputSchema: z.object({ emailSent: z.boolean(), emailContent: z.string(), }), execute: async ({ context }) => { const { customerName } = context.getStepResult(generateRecommendations) || { customerName: '' }; const { finalRecommendations, customerNote, offerDiscount } = context.getStepResult(reviewRecommendations) || { finalRecommendations: [], customerNote: '', offerDiscount: false, }; // Generate email content based on the recommendations let emailContent = `Dear ${customerName},\n\nBased on your preferences, we recommend:\n\n`; finalRecommendations.forEach(product => { emailContent += `- ${product.productName}: $${product.price.toFixed(2)}\n`; }); if (offerDiscount) { emailContent += '\nAs a valued customer, use code SAVE10 for 10% off your next purchase!\n'; } if (customerNote) { emailContent += `\nPersonal note: ${customerNote}\n`; } emailContent += '\nThank you for your business,\nThe Sales Team'; // In a real application, you would send this email console.log('Email content generated:', emailContent); return { emailSent: true, emailContent, }; }, }); // Build the workflow const recommendationWorkflow = new Workflow({ name: 'product-recommendation-workflow', triggerSchema: z.object({ customerName: z.string(), }), }); recommendationWorkflow .step(generateRecommendations) .then(reviewRecommendations) .then(sendRecommendations) .commit(); // Register the workflow const mastra = new Mastra({ workflows: { recommendationWorkflow }, }); ``` ```ts showLineNumbers copy // Example of using the workflow with Inquirer prompts async function runRecommendationWorkflow() { const registeredWorkflow = mastra.getWorkflow('recommendationWorkflow'); const run = registeredWorkflow.createRun(); console.log('Starting product recommendation workflow...'); const result = await run.start({ triggerData: { customerName: 'Jane Smith', }, }); const isReviewStepSuspended = result.activePaths.get('reviewRecommendations')?.status === 'suspended'; // Check if workflow is suspended for human review if (isReviewStepSuspended) { const { customerName, recommendations, message } = result.activePaths.get('reviewRecommendations')?.suspendPayload; console.log('\n==================================='); console.log(message); console.log(`Customer: ${customerName}`); console.log('===================================\n'); // Use Inquirer to collect input from the sales agent in the terminal console.log('Available product recommendations:'); recommendations.forEach((product, index) => { console.log(`${index + 1}. 
${product.productName} - $${product.price.toFixed(2)}`); console.log(` ${product.description}\n`); }); // Let the agent select which products to recommend const approvedProducts = await checkbox({ message: 'Select products to recommend to the customer:', choices: recommendations.map(product => ({ name: `${product.productName} ($${product.price.toFixed(2)})`, value: product.productId, })), }); // Let the agent add a personal note const includeNote = await confirm({ message: 'Would you like to add a personal note?', default: false, }); let customerNote = ''; if (includeNote) { customerNote = await input({ message: 'Enter your personalized note for the customer:', }); } // Ask if a discount should be offered const offerDiscount = await confirm({ message: 'Offer a 10% discount to this customer?', default: false, }); console.log('\nSubmitting your review...'); // Resume the workflow with the agent's input const resumeResult = await run.resume({ stepId: 'reviewRecommendations', context: { approvedProducts, customerNote, offerDiscount, }, }); console.log('\n==================================='); console.log('Workflow completed!'); console.log('Email content:'); console.log('===================================\n'); console.log(resumeResult?.results?.sendRecommendations || 'No email content generated'); return resumeResult; } return result; } // Invoke the workflow with interactive terminal input runRecommendationWorkflow().catch(console.error); ``` ## Advanced Example with Multiple User Inputs This example demonstrates a more complex workflow that requires multiple human intervention points, such as in a content moderation system. ```ts showLineNumbers copy import { Mastra } from '@mastra/core'; import { Step, Workflow } from '@mastra/core/workflows'; import { z } from 'zod'; import { select, input } from '@inquirer/prompts'; // Step 1: Receive and analyze content const analyzeContent = new Step({ id: 'analyzeContent', outputSchema: z.object({ content: z.string(), aiAnalysisScore: z.number(), flaggedCategories: z.array(z.string()).optional(), }), execute: async ({ context }) => { const content = context.triggerData.content; // Simulate AI analysis const aiAnalysisScore = simulateContentAnalysis(content); const flaggedCategories = aiAnalysisScore < 0.7 ? 
['potentially inappropriate', 'needs review'] : []; return { content, aiAnalysisScore, flaggedCategories, }; }, }); ``` ```ts showLineNumbers copy // Step 2: Moderate content that needs review const moderateContent = new Step({ id: 'moderateContent', // Define the schema for human input that will be provided when resuming inputSchema: z.object({ moderatorDecision: z.enum(['approve', 'reject', 'modify']).optional(), moderatorNotes: z.string().optional(), modifiedContent: z.string().optional(), }), outputSchema: z.object({ moderationResult: z.enum(['approved', 'rejected', 'modified']), moderatedContent: z.string(), notes: z.string().optional(), }), // @ts-ignore execute: async ({ context, suspend }) => { const analysisResult = context.getStepResult(analyzeContent); // Access the input provided when resuming the workflow const moderatorInput = { decision: context.inputData?.moderatorDecision, notes: context.inputData?.moderatorNotes, modifiedContent: context.inputData?.modifiedContent, }; // If the AI analysis score is high enough, auto-approve if (analysisResult?.aiAnalysisScore > 0.9 && !analysisResult?.flaggedCategories?.length) { return { moderationResult: 'approved', moderatedContent: analysisResult.content, notes: 'Auto-approved by system', }; } // If we don't have moderator input yet, suspend for human review if (!moderatorInput.decision) { await suspend({ content: analysisResult?.content, aiScore: analysisResult?.aiAnalysisScore, flaggedCategories: analysisResult?.flaggedCategories, message: 'Please review this content and make a moderation decision', }); // Placeholder return return { moderationResult: 'approved', moderatedContent: '', }; } // Process the moderator's decision switch (moderatorInput.decision) { case 'approve': return { moderationResult: 'approved', moderatedContent: analysisResult?.content || '', notes: moderatorInput.notes || 'Approved by moderator', }; case 'reject': return { moderationResult: 'rejected', moderatedContent: '', notes: moderatorInput.notes || 'Rejected by moderator', }; case 'modify': return { moderationResult: 'modified', moderatedContent: moderatorInput.modifiedContent || analysisResult?.content || '', notes: moderatorInput.notes || 'Modified by moderator', }; default: return { moderationResult: 'rejected', moderatedContent: '', notes: 'Invalid moderator decision', }; } }, }); ``` ```ts showLineNumbers copy // Step 3: Apply moderation actions const applyModeration = new Step({ id: 'applyModeration', outputSchema: z.object({ finalStatus: z.string(), content: z.string().optional(), auditLog: z.object({ originalContent: z.string(), moderationResult: z.string(), aiScore: z.number(), timestamp: z.string(), }), }), execute: async ({ context }) => { const analysisResult = context.getStepResult(analyzeContent); const moderationResult = context.getStepResult(moderateContent); // Create audit log const auditLog = { originalContent: analysisResult?.content || '', moderationResult: moderationResult?.moderationResult || 'unknown', aiScore: analysisResult?.aiAnalysisScore || 0, timestamp: new Date().toISOString(), }; // Apply moderation action switch (moderationResult?.moderationResult) { case 'approved': return { finalStatus: 'Content published', content: moderationResult.moderatedContent, auditLog, }; case 'modified': return { finalStatus: 'Content modified and published', content: moderationResult.moderatedContent, auditLog, }; case 'rejected': return { finalStatus: 'Content rejected', auditLog, }; default: return { finalStatus: 'Error in moderation process', 
auditLog, }; } }, }); ``` ```ts showLineNumbers copy // Build the workflow const contentModerationWorkflow = new Workflow({ name: 'content-moderation-workflow', triggerSchema: z.object({ content: z.string(), }), }); contentModerationWorkflow .step(analyzeContent) .then(moderateContent) .then(applyModeration) .commit(); // Register the workflow const mastra = new Mastra({ workflows: { contentModerationWorkflow }, }); // Example of using the workflow with Inquirer prompts async function runModerationDemo() { const registeredWorkflow = mastra.getWorkflow('contentModerationWorkflow'); const run = registeredWorkflow.createRun(); // Start the workflow with content that needs review console.log('Starting content moderation workflow...'); const result = await run.start({ triggerData: { content: 'This is some user-generated content that requires moderation.' } }); const isReviewStepSuspended = result.activePaths.get('moderateContent')?.status === 'suspended'; // Check if workflow is suspended if (isReviewStepSuspended) { const { content, aiScore, flaggedCategories, message } = result.activePaths.get('moderateContent')?.suspendPayload; console.log('\n==================================='); console.log(message); console.log('===================================\n'); console.log('Content to review:'); console.log(content); console.log(`\nAI Analysis Score: ${aiScore}`); console.log(`Flagged Categories: ${flaggedCategories?.join(', ') || 'None'}\n`); // Collect moderator decision using Inquirer const moderatorDecision = await select({ message: 'Select your moderation decision:', choices: [ { name: 'Approve content as is', value: 'approve' }, { name: 'Reject content completely', value: 'reject' }, { name: 'Modify content before publishing', value: 'modify' } ], }); // Collect additional information based on decision let moderatorNotes = ''; let modifiedContent = ''; moderatorNotes = await input({ message: 'Enter any notes about your decision:', }); if (moderatorDecision === 'modify') { modifiedContent = await input({ message: 'Enter the modified content:', default: content, }); } console.log('\nSubmitting your moderation decision...'); // Resume the workflow with the moderator's input const resumeResult = await run.resume({ stepId: 'moderateContent', context: { moderatorDecision, moderatorNotes, modifiedContent, }, }); if (resumeResult?.results?.applyModeration?.status === 'success') { console.log('\n==================================='); console.log(`Moderation complete: ${resumeResult?.results?.applyModeration?.output.finalStatus}`); console.log('===================================\n'); if (resumeResult?.results?.applyModeration?.output.content) { console.log('Published content:'); console.log(resumeResult.results.applyModeration.output.content); } } return resumeResult; } console.log('Workflow completed without requiring human intervention:', result.results); return result; } // Helper function for AI content analysis simulation function simulateContentAnalysis(content: string): number { // In a real application, this would call an AI service // For the example, we're returning a random score return Math.random(); } // Invoke the demo function runModerationDemo().catch(console.error); ``` ## Key Concepts 1. **Suspension Points** - Use the `suspend()` function within a step's execute to pause workflow execution. 2. 
**Suspension Payload** - Pass relevant data when suspending to provide context for human decision-making: ```ts await suspend({ messageForHuman: 'Please review this data', data: someImportantData }); ``` 3. **Checking Workflow Status** - After starting a workflow, check the returned status to see if it's suspended: ```ts const result = await workflow.start({ triggerData }); if (result.status === 'suspended' && result.suspendedStepId === 'stepId') { // Process suspension console.log('Workflow is waiting for input:', result.suspendPayload); } ``` 4. **Interactive Terminal Input** - Use libraries like Inquirer to create interactive prompts: ```ts import { select, input, confirm } from '@inquirer/prompts'; // When the workflow is suspended if (result.status === 'suspended') { // Display information from the suspend payload console.log(result.suspendPayload.message); // Collect user input interactively const decision = await select({ message: 'What would you like to do?', choices: [ { name: 'Approve', value: 'approve' }, { name: 'Reject', value: 'reject' } ] }); // Resume the workflow with the collected input await run.resume({ stepId: result.suspendedStepId, context: { decision } }); } ``` 5. **Resuming Workflow** - Use the `resume()` method to continue workflow execution with human input: ```ts const resumeResult = await run.resume({ stepId: 'suspendedStepId', context: { // This data is passed to the suspended step as context.inputData // and must conform to the step's inputSchema userDecision: 'approve' }, }); ``` 6. **Input Schema for Human Data** - Define an input schema on steps that might be resumed with human input to ensure type safety: ```ts const myStep = new Step({ id: 'myStep', inputSchema: z.object({ // This schema validates the data passed in resume's context // and makes it available as context.inputData userDecision: z.enum(['approve', 'reject']), userComments: z.string().optional(), }), execute: async ({ context, suspend }) => { // Check if we have user input from a previous suspension if (context.inputData?.userDecision) { // Process the user's decision return { result: `User decided: ${context.inputData.userDecision}` }; } // If no input, suspend for human decision await suspend(); } }); ``` Human-in-the-loop workflows are powerful for building systems that blend automation with human judgment, such as: - Content moderation systems - Approval workflows - Supervised AI systems - Customer service automation with escalation




--- title: "Example: Parallel Execution | Workflows | Mastra Docs" description: Example of using Mastra to execute multiple independent tasks in parallel within a workflow. --- import { GithubLink } from "@/components/github-link"; # Parallel Execution with Steps [EN] Source: https://mastra.ai/en/examples/workflows/parallel-steps When building AI applications, you often need to process multiple independent tasks simultaneously to improve efficiency. ## Control Flow Diagram This example shows how to structure a workflow that executes steps in parallel, with each branch handling its own data flow and dependencies. Here's the control flow diagram: Diagram showing workflow with parallel steps ## Creating the Steps Let's start by creating the steps and initializing the workflow. ```ts showLineNumbers copy import { Step, Workflow } from "@mastra/core/workflows"; import { z } from "zod"; const stepOne = new Step({ id: "stepOne", execute: async ({ context }) => ({ doubledValue: context.triggerData.inputValue * 2, }), }); const stepTwo = new Step({ id: "stepTwo", execute: async ({ context }) => { if (context.steps.stepOne.status !== "success") { return { incrementedValue: 0 } } return { incrementedValue: context.steps.stepOne.output.doubledValue + 1 } }, }); const stepThree = new Step({ id: "stepThree", execute: async ({ context }) => ({ tripledValue: context.triggerData.inputValue * 3, }), }); const stepFour = new Step({ id: "stepFour", execute: async ({ context }) => { if (context.steps.stepThree.status !== "success") { return { isEven: false } } return { isEven: context.steps.stepThree.output.tripledValue % 2 === 0 } }, }); const myWorkflow = new Workflow({ name: "my-workflow", triggerSchema: z.object({ inputValue: z.number(), }), }); ``` ## Chaining and Parallelizing Steps Now we can add the steps to the workflow. Note the `.then()` method is used to chain the steps, but the `.step()` method is used to add the steps to the workflow. ```ts showLineNumbers copy myWorkflow .step(stepOne) .then(stepTwo) // chain one .step(stepThree) .then(stepFour) // chain two .commit(); const { start } = myWorkflow.createRun(); const result = await start({ triggerData: { inputValue: 3 } }); ```




--- title: "Example: Sequential Steps | Workflows | Mastra Docs" description: Example of using Mastra to chain workflow steps in a specific sequence, passing data between them. --- import { GithubLink } from "@/components/github-link"; # Workflow with Sequential Steps [EN] Source: https://mastra.ai/en/examples/workflows/sequential-steps Workflow can be chained to run one after another in a specific sequence. ## Control Flow Diagram This example shows how to chain workflow steps by using the `then` method demonstrating how to pass data between sequential steps and execute them in order. Here's the control flow diagram: Diagram showing workflow with sequential steps ## Creating the Steps Let's start by creating the steps and initializing the workflow. ```ts showLineNumbers copy import { Step, Workflow } from "@mastra/core/workflows"; import { z } from "zod"; const stepOne = new Step({ id: "stepOne", execute: async ({ context }) => ({ doubledValue: context.triggerData.inputValue * 2, }), }); const stepTwo = new Step({ id: "stepTwo", execute: async ({ context }) => { if (context.steps.stepOne.status !== "success") { return { incrementedValue: 0 } } return { incrementedValue: context.steps.stepOne.output.doubledValue + 1 } }, }); const stepThree = new Step({ id: "stepThree", execute: async ({ context }) => { if (context.steps.stepTwo.status !== "success") { return { tripledValue: 0 } } return { tripledValue: context.steps.stepTwo.output.incrementedValue * 3 } }, }); // Build the workflow const myWorkflow = new Workflow({ name: "my-workflow", triggerSchema: z.object({ inputValue: z.number(), }), }); ``` ## Chaining the Steps and Executing the Workflow Now let's chain the steps together. ```ts showLineNumbers copy // sequential steps myWorkflow.step(stepOne).then(stepTwo).then(stepThree); myWorkflow.commit(); const { start } = myWorkflow.createRun(); const res = await start({ triggerData: { inputValue: 90 } }); ```




--- title: "Example: Suspend and Resume | Workflows | Mastra Docs" description: Example of using Mastra to suspend and resume workflow steps during execution. --- import { GithubLink } from '@/components/github-link'; # Workflow with Suspend and Resume [EN] Source: https://mastra.ai/en/examples/workflows/suspend-and-resume Workflow steps can be suspended and resumed at any point in the workflow execution. This example demonstrates how to suspend a workflow step and resume it later. ## Basic Example ```ts showLineNumbers copy import { Mastra } from '@mastra/core'; import { Step, Workflow } from '@mastra/core/workflows'; import { z } from 'zod'; const stepOne = new Step({ id: 'stepOne', outputSchema: z.object({ doubledValue: z.number(), }), execute: async ({ context }) => { const doubledValue = context.triggerData.inputValue * 2; return { doubledValue }; }, }); ``` ```ts showLineNumbers copy const stepTwo = new Step({ id: 'stepTwo', outputSchema: z.object({ incrementedValue: z.number(), }), execute: async ({ context, suspend }) => { const secondValue = context.inputData?.secondValue ?? 0; const doubledValue = context.getStepResult(stepOne)?.doubledValue ?? 0; const incrementedValue = doubledValue + secondValue; if (incrementedValue < 100) { await suspend(); return { incrementedValue: 0 }; } return { incrementedValue }; }, }); // Build the workflow const myWorkflow = new Workflow({ name: 'my-workflow', triggerSchema: z.object({ inputValue: z.number(), }), }); // run workflows in parallel myWorkflow .step(stepOne) .then(stepTwo) .commit(); ``` ```ts showLineNumbers copy // Register the workflow export const mastra = new Mastra({ workflows: { registeredWorkflow: myWorkflow }, }) // Get registered workflow from Mastra const registeredWorkflow = mastra.getWorkflow('registeredWorkflow'); const { runId, start } = registeredWorkflow.createRun(); // Start watching the workflow before executing it myWorkflow.watch(async ({ context, activePaths }) => { for (const _path of activePaths) { const stepTwoStatus = context.steps?.stepTwo?.status; if (stepTwoStatus === 'suspended') { console.log("Workflow suspended, resuming with new value"); // Resume the workflow with new context await myWorkflow.resume({ runId, stepId: 'stepTwo', context: { secondValue: 100 }, }); } } }) // Start the workflow execution await start({ triggerData: { inputValue: 45 } }); ``` ## Advanced Example with Multiple Suspension Points Using async/await pattern and suspend payloads This example demonstrates a more complex workflow with multiple suspension points using the async/await pattern. It simulates a content generation workflow that requires human intervention at different stages. 
```ts showLineNumbers copy import { Mastra } from '@mastra/core'; import { Step, Workflow } from '@mastra/core/workflows'; import { z } from 'zod'; // Step 1: Get user input const getUserInput = new Step({ id: 'getUserInput', execute: async ({ context }) => { // In a real application, this might come from a form or API return { userInput: context.triggerData.input }; }, outputSchema: z.object({ userInput: z.string() }), }); ``` ```ts showLineNumbers copy // Step 2: Generate content with AI (may suspend for human guidance) const promptAgent = new Step({ id: 'promptAgent', inputSchema: z.object({ guidance: z.string(), }), execute: async ({ context, suspend }) => { const userInput = context.getStepResult(getUserInput)?.userInput; console.log(`Generating content based on: ${userInput}`); const guidance = context.inputData?.guidance; // Simulate AI generating content const initialDraft = generateInitialDraft(userInput); // If confidence is high, return the generated content directly if (initialDraft.confidenceScore > 0.7) { return { modelOutput: initialDraft.content }; } console.log('Low confidence in generated content, suspending for human guidance', {guidance}); // If confidence is low, suspend for human guidance if (!guidance) { // only suspend if no guidance is provided await suspend(); return undefined; } // This code runs after resume with human guidance console.log('Resumed with human guidance'); // Use the human guidance to improve the output return { modelOutput: enhanceWithGuidance(initialDraft.content, guidance), }; }, outputSchema: z.object({ modelOutput: z.string() }).optional(), }); ``` ```ts showLineNumbers copy // Step 3: Evaluate the content quality const evaluateTone = new Step({ id: 'evaluateToneConsistency', execute: async ({ context }) => { const content = context.getStepResult(promptAgent)?.modelOutput; // Simulate evaluation return { toneScore: { score: calculateToneScore(content) }, completenessScore: { score: calculateCompletenessScore(content) }, }; }, outputSchema: z.object({ toneScore: z.any(), completenessScore: z.any(), }), }); ``` ```ts showLineNumbers copy // Step 4: Improve response if needed (may suspend) const improveResponse = new Step({ id: 'improveResponse', inputSchema: z.object({ improvedContent: z.string(), resumeAttempts: z.number(), }), execute: async ({ context, suspend }) => { const content = context.getStepResult(promptAgent)?.modelOutput; const toneScore = context.getStepResult(evaluateTone)?.toneScore.score ?? 0; const completenessScore = context.getStepResult(evaluateTone)?.completenessScore.score ?? 0; const improvedContent = context.inputData.improvedContent; const resumeAttempts = context.inputData.resumeAttempts ?? 0; // If scores are above threshold, make minor improvements if (toneScore > 0.8 && completenessScore > 0.8) { return { improvedOutput: makeMinorImprovements(content) }; } console.log('Content quality below threshold, suspending for human intervention', {improvedContent, resumeAttempts}); if (!improvedContent) { // Suspend with payload containing content and resume attempts await suspend({ content, scores: { tone: toneScore, completeness: completenessScore }, needsImprovement: toneScore < 0.8 ? 'tone' : 'completeness', resumeAttempts: resumeAttempts + 1, }); return { improvedOutput: content ?? '' }; } console.log('Resumed with human improvements', improvedContent); return { improvedOutput: improvedContent ?? content ?? 
'' }; }, outputSchema: z.object({ improvedOutput: z.string() }).optional(), }); ``` ```ts showLineNumbers copy // Step 5: Final evaluation const evaluateImproved = new Step({ id: 'evaluateImprovedResponse', execute: async ({ context }) => { const improvedContent = context.getStepResult(improveResponse)?.improvedOutput; // Simulate final evaluation return { toneScore: { score: calculateToneScore(improvedContent) }, completenessScore: { score: calculateCompletenessScore(improvedContent) }, }; }, outputSchema: z.object({ toneScore: z.any(), completenessScore: z.any(), }), }); // Build the workflow const contentWorkflow = new Workflow({ name: 'content-generation-workflow', triggerSchema: z.object({ input: z.string() }), }); contentWorkflow .step(getUserInput) .then(promptAgent) .then(evaluateTone) .then(improveResponse) .then(evaluateImproved) .commit(); ``` ```ts showLineNumbers copy // Register the workflow const mastra = new Mastra({ workflows: { contentWorkflow }, }); // Helper functions (simulated) function generateInitialDraft(input: string = '') { // Simulate AI generating content return { content: `Generated content based on: ${input}`, confidenceScore: 0.6, // Simulate low confidence to trigger suspension }; } function enhanceWithGuidance(content: string = '', guidance: string = '') { return `${content} (Enhanced with guidance: ${guidance})`; } function makeMinorImprovements(content: string = '') { return `${content} (with minor improvements)`; } function calculateToneScore(_: string = '') { return 0.7; // Simulate a score that will trigger suspension } function calculateCompletenessScore(_: string = '') { return 0.9; } // Usage example async function runWorkflow() { const workflow = mastra.getWorkflow('contentWorkflow'); const { runId, start } = workflow.createRun(); let finalResult: any; // Start the workflow const initialResult = await start({ triggerData: { input: 'Create content about sustainable energy' }, }); console.log('Initial workflow state:', initialResult.results); const promptAgentStepResult = initialResult.activePaths.get('promptAgent'); // Check if promptAgent step is suspended if (promptAgentStepResult?.status === 'suspended') { console.log('Workflow suspended at promptAgent step'); console.log('Suspension payload:', promptAgentStepResult?.suspendPayload); // Resume with human guidance const resumeResult1 = await workflow.resume({ runId, stepId: 'promptAgent', context: { guidance: 'Focus more on solar and wind energy technologies', }, }); console.log('Workflow resumed and continued to next steps'); let improveResponseResumeAttempts = 0; let improveResponseStatus = resumeResult1?.activePaths.get('improveResponse')?.status; // Check if improveResponse step is suspended while (improveResponseStatus === 'suspended') { console.log('Workflow suspended at improveResponse step'); console.log('Suspension payload:', resumeResult1?.activePaths.get('improveResponse')?.suspendPayload); const improvedContent = improveResponseResumeAttempts < 3 ? undefined : 'Completely revised content about sustainable energy focusing on solar and wind technologies'; // Resume with human improvements finalResult = await workflow.resume({ runId, stepId: 'improveResponse', context: { improvedContent, resumeAttempts: improveResponseResumeAttempts, }, }); improveResponseResumeAttempts = finalResult?.activePaths.get('improveResponse')?.suspendPayload?.resumeAttempts ?? 
0; improveResponseStatus = finalResult?.activePaths.get('improveResponse')?.status; console.log('Improved response result:', finalResult?.results); } } return finalResult; } // Run the workflow const result = await runWorkflow(); console.log('Workflow completed'); console.log('Final workflow result:', result); ```




--- title: "Example: Using a Tool as a Step | Workflows | Mastra Docs" description: Example of using Mastra to integrate a custom tool as a step in a workflow. --- import { GithubLink } from '@/components/github-link'; # Tool as a Workflow step [EN] Source: https://mastra.ai/en/examples/workflows/using-a-tool-as-a-step This example demonstrates how to create and integrate a custom tool as a workflow step, showing how to define input/output schemas and implement the tool's execution logic. ```ts showLineNumbers copy import { createTool } from '@mastra/core/tools'; import { Workflow } from '@mastra/core/workflows'; import { z } from 'zod'; const crawlWebpage = createTool({ id: 'Crawl Webpage', description: 'Crawls a webpage and extracts the text content', inputSchema: z.object({ url: z.string().url(), }), outputSchema: z.object({ rawText: z.string(), }), execute: async ({ context }) => { const response = await fetch(context.triggerData.url); const text = await response.text(); return { rawText: 'This is the text content of the webpage: ' + text }; }, }); const contentWorkflow = new Workflow({ name: 'content-review' }); contentWorkflow.step(crawlWebpage).commit(); const { start } = contentWorkflow.createRun(); const res = await start({ triggerData: { url: 'https://example.com'} }); console.log(res.results); ```




--- title: "Data Mapping with Workflow Variables | Mastra Examples" description: "Learn how to use workflow variables to map data between steps in Mastra workflows." --- # Data Mapping with Workflow Variables [EN] Source: https://mastra.ai/en/examples/workflows/workflow-variables This example demonstrates how to use workflow variables to map data between steps in a Mastra workflow. ## Use Case: User Registration Process In this example, we'll build a simple user registration workflow that: 1. Validates user input 1. Formats the user data 1. Creates a user profile ## Implementation ```typescript showLineNumbers filename="src/mastra/workflows/user-registration.ts" copy import { Step, Workflow } from "@mastra/core/workflows"; import { z } from "zod"; // Define our schemas for better type safety const userInputSchema = z.object({ email: z.string().email(), name: z.string(), age: z.number().min(18), }); const validatedDataSchema = z.object({ isValid: z.boolean(), validatedData: z.object({ email: z.string(), name: z.string(), age: z.number(), }), }); const formattedDataSchema = z.object({ userId: z.string(), formattedData: z.object({ email: z.string(), displayName: z.string(), ageGroup: z.string(), }), }); const profileSchema = z.object({ profile: z.object({ id: z.string(), email: z.string(), displayName: z.string(), ageGroup: z.string(), createdAt: z.string(), }), }); // Define the workflow const registrationWorkflow = new Workflow({ name: "user-registration", triggerSchema: userInputSchema, }); // Step 1: Validate user input const validateInput = new Step({ id: "validateInput", inputSchema: userInputSchema, outputSchema: validatedDataSchema, execute: async ({ context }) => { const { email, name, age } = context; // Simple validation logic const isValid = email.includes('@') && name.length > 0 && age >= 18; return { isValid, validatedData: { email: email.toLowerCase().trim(), name, age, }, }; }, }); // Step 2: Format user data const formatUserData = new Step({ id: "formatUserData", inputSchema: z.object({ validatedData: z.object({ email: z.string(), name: z.string(), age: z.number(), }), }), outputSchema: formattedDataSchema, execute: async ({ context }) => { const { validatedData } = context; // Generate a simple user ID const userId = `user_${Math.floor(Math.random() * 10000)}`; // Format the data const ageGroup = validatedData.age < 30 ? 
"young-adult" : "adult"; return { userId, formattedData: { email: validatedData.email, displayName: validatedData.name, ageGroup, }, }; }, }); // Step 3: Create user profile const createUserProfile = new Step({ id: "createUserProfile", inputSchema: z.object({ userId: z.string(), formattedData: z.object({ email: z.string(), displayName: z.string(), ageGroup: z.string(), }), }), outputSchema: profileSchema, execute: async ({ context }) => { const { userId, formattedData } = context; // In a real app, you would save to a database here return { profile: { id: userId, ...formattedData, createdAt: new Date().toISOString(), }, }; }, }); // Build the workflow with variable mappings registrationWorkflow // First step gets data from the trigger .step(validateInput, { variables: { email: { step: 'trigger', path: 'email' }, name: { step: 'trigger', path: 'name' }, age: { step: 'trigger', path: 'age' }, } }) // Format user data with validated data from previous step .then(formatUserData, { variables: { validatedData: { step: validateInput, path: 'validatedData' }, }, when: { ref: { step: validateInput, path: 'isValid' }, query: { $eq: true }, }, }) // Create profile with data from the format step .then(createUserProfile, { variables: { userId: { step: formatUserData, path: 'userId' }, formattedData: { step: formatUserData, path: 'formattedData' }, }, }) .commit(); export default registrationWorkflow; ``` ## How to Use This Example 1. Create the file as shown above 2. Register the workflow in your Mastra instance 3. Execute the workflow: ```bash curl --location 'http://localhost:4111/api/workflows/user-registration/start-async' \ --header 'Content-Type: application/json' \ --data '{ "email": "user@example.com", "name": "John Doe", "age": 25 }' ``` ## Key Takeaways This example demonstrates several important concepts about workflow variables: 1. **Data Mapping**: Variables map data from one step to another, creating a clear data flow. 2. **Path Access**: The `path` property specifies which part of a step's output to use. 3. **Conditional Execution**: The `when` property allows steps to execute conditionally based on previous step outputs. 4. **Type Safety**: Each step defines input and output schemas for type safety, ensuring that the data passed between steps is properly typed. 5. **Explicit Data Dependencies**: By defining input schemas and using variable mappings, the data dependencies between steps are made explicit and clear. For more information on workflow variables, see the [Workflow Variables documentation](../../docs/workflows/variables.mdx). --- title: "Building an AI Recruiter | Mastra Workflows | Guides" description: Guide on building a recruiter workflow in Mastra to gather and process candidate information using LLMs. --- # Introduction [EN] Source: https://mastra.ai/en/guides/guide/ai-recruiter In this guide, you'll learn how Mastra helps you build workflows with LLMs. We'll walk through creating a workflow that gathers information from a candidate's resume, then branches to either a technical or behavioral question based on the candidate's profile. Along the way, you'll see how to structure workflow steps, handle branching, and integrate LLM calls. Below is a concise version of the workflow. It starts by importing the necessary modules, sets up Mastra, defines steps to extract and classify candidate data, and then asks suitable follow-up questions. Each code block is followed by a short explanation of what it does and why it's useful. ## 1. 
Imports and Setup You need to import Mastra tools and Zod to handle workflow definitions and data validation. ```ts filename="src/mastra/index.ts" copy import { Mastra } from "@mastra/core"; import { Step, Workflow } from "@mastra/core/workflows"; import { z } from "zod"; ``` Add your `OPENAI_API_KEY` to the `.env` file. ```bash filename=".env" copy OPENAI_API_KEY= ``` ## 2. Step One: Gather Candidate Info You want to extract candidate details from the resume text and classify them as technical or non-technical. This step calls an LLM to parse the resume and return structured JSON, including the name, technical status, specialty, and the original resume text. The code reads resumeText from trigger data, prompts the LLM, and returns organized fields for use in subsequent steps. ```ts filename="src/mastra/index.ts" copy import { Agent } from '@mastra/core/agent'; import { openai } from "@ai-sdk/openai"; const recruiter = new Agent({ name: "Recruiter Agent", instructions: `You are a recruiter.`, model: openai("gpt-4o-mini"), }) const gatherCandidateInfo = new Step({ id: "gatherCandidateInfo", inputSchema: z.object({ resumeText: z.string(), }), outputSchema: z.object({ candidateName: z.string(), isTechnical: z.boolean(), specialty: z.string(), resumeText: z.string(), }), execute: async ({ context }) => { const resumeText = context?.getStepResult<{ resumeText: string; }>("trigger")?.resumeText; const prompt = ` Extract details from the resume text: "${resumeText}" `; const res = await recruiter.generate(prompt, { output: z.object({ candidateName: z.string(), isTechnical: z.boolean(), specialty: z.string(), resumeText: z.string(), }), }); return res.object; }, }); ``` ## 3. Technical Question Step This step prompts a candidate who is identified as technical for more information about how they got into their specialty. It uses the entire resume text so the LLM can craft a relevant follow-up question. The code generates a question about the candidate's specialty. ```ts filename="src/mastra/index.ts" copy interface CandidateInfo { candidateName: string; isTechnical: boolean; specialty: string; resumeText: string; } const askAboutSpecialty = new Step({ id: "askAboutSpecialty", outputSchema: z.object({ question: z.string(), }), execute: async ({ context }) => { const candidateInfo = context?.getStepResult( "gatherCandidateInfo", ); const prompt = ` You are a recruiter. Given the resume below, craft a short question for ${candidateInfo?.candidateName} about how they got into "${candidateInfo?.specialty}". Resume: ${candidateInfo?.resumeText} `; const res = await recruiter.generate(prompt); return { question: res?.text?.trim() || "" }; }, }); ``` ## 4. Behavioral Question Step If the candidate is non-technical, you want a different follow-up question. This step asks what interests them most about the role, again referencing their complete resume text. The code solicits a role-focused query from the LLM. ```ts filename="src/mastra/index.ts" copy const askAboutRole = new Step({ id: "askAboutRole", outputSchema: z.object({ question: z.string(), }), execute: async ({ context }) => { const candidateInfo = context?.getStepResult( "gatherCandidateInfo", ); const prompt = ` You are a recruiter. Given the resume below, craft a short question for ${candidateInfo?.candidateName} asking what interests them most about this role. Resume: ${candidateInfo?.resumeText} `; const res = await recruiter.generate(prompt); return { question: res?.text?.trim() || "" }; }, }); ``` ## 5. 
Define the Workflow You now combine the steps to implement branching logic based on the candidate's technical status. The workflow first gathers candidate data, then either asks about their specialty or about their role, depending on isTechnical. The code chains gatherCandidateInfo with askAboutSpecialty and askAboutRole, and commits the workflow. ```ts filename="src/mastra/index.ts" copy const candidateWorkflow = new Workflow({ name: "candidate-workflow", triggerSchema: z.object({ resumeText: z.string(), }), }); candidateWorkflow .step(gatherCandidateInfo) .then(askAboutSpecialty, { when: { "gatherCandidateInfo.isTechnical": true }, }) .after(gatherCandidateInfo) .step(askAboutRole, { when: { "gatherCandidateInfo.isTechnical": false }, }); candidateWorkflow.commit(); ``` ## 6. Execute the Workflow ```ts filename="src/mastra/index.ts" copy const mastra = new Mastra({ workflows: { candidateWorkflow, }, }); (async () => { const { runId, start } = mastra.getWorkflow("candidateWorkflow").createRun(); console.log("Run", runId); const runResult = await start({ triggerData: { resumeText: "Simulated resume content..." }, }); console.log("Final output:", runResult.results); })(); ``` You've just built a workflow to parse a resume and decide which question to ask based on the candidate's technical abilities. Congrats and happy hacking! --- title: "Building an AI Chef Assistant | Mastra Agent Guides" description: Guide on creating a Chef Assistant agent in Mastra to help users cook meals with available ingredients. --- import { Steps } from "nextra/components"; import YouTube from "@/components/youtube"; # Agents Guide: Building a Chef Assistant [EN] Source: https://mastra.ai/en/guides/guide/chef-michel In this guide, we'll walk through creating a "Chef Assistant" agent that helps users cook meals with available ingredients. ## Prerequisites - Node.js installed - Mastra installed: `npm install @mastra/core` --- ## Create the Agent ### Define the Agent Create a new file `src/mastra/agents/chefAgent.ts` and define your agent: ```ts copy filename="src/mastra/agents/chefAgent.ts" import { openai } from "@ai-sdk/openai"; import { Agent } from "@mastra/core/agent"; export const chefAgent = new Agent({ name: "chef-agent", instructions: "You are Michel, a practical and experienced home chef" + "You help people cook with whatever ingredients they have available.", model: openai("gpt-4o-mini"), }); ``` --- ## Set Up Environment Variables Create a `.env` file in your project root and add your OpenAI API key: ```bash filename=".env" copy OPENAI_API_KEY=your_openai_api_key ``` --- ## Register the Agent with Mastra In your main file, register the agent: ```ts copy filename="src/mastra/index.ts" import { Mastra } from "@mastra/core"; import { chefAgent } from "./agents/chefAgent"; export const mastra = new Mastra({ agents: { chefAgent }, }); ``` --- ## Interacting with the Agent ### Generating Text Responses ```ts copy filename="src/index.ts" async function main() { const query = "In my kitchen I have: pasta, canned tomatoes, garlic, olive oil, and some dried herbs (basil and oregano). What can I make?"; console.log(`Query: ${query}`); const response = await chefAgent.generate([{ role: "user", content: query }]); console.log("\n👨‍🍳 Chef Michel:", response.text); } main(); ``` Run the script: ```bash copy npx bun src/index.ts ``` Output: ``` Query: In my kitchen I have: pasta, canned tomatoes, garlic, olive oil, and some dried herbs (basil and oregano). What can I make? 
👨‍🍳 Chef Michel: You can make a delicious pasta al pomodoro! Here's how... ``` --- ### Streaming Responses ```ts copy filename="src/index.ts" async function main() { const query = "Now I'm over at my friend's house, and they have: chicken thighs, coconut milk, sweet potatoes, and some curry powder."; console.log(`Query: ${query}`); const stream = await chefAgent.stream([{ role: "user", content: query }]); console.log("\n Chef Michel: "); for await (const chunk of stream.textStream) { process.stdout.write(chunk); } console.log("\n\n✅ Recipe complete!"); } main(); ``` Output: ``` Query: Now I'm over at my friend's house, and they have: chicken thighs, coconut milk, sweet potatoes, and some curry powder. 👨‍🍳 Chef Michel: Great! You can make a comforting chicken curry... ✅ Recipe complete! ``` --- ### Generating a Recipe with Structured Data ```ts copy filename="src/index.ts" import { z } from "zod"; async function main() { const query = "I want to make lasagna, can you generate a lasagna recipe for me?"; console.log(`Query: ${query}`); // Define the Zod schema const schema = z.object({ ingredients: z.array( z.object({ name: z.string(), amount: z.string(), }), ), steps: z.array(z.string()), }); const response = await chefAgent.generate( [{ role: "user", content: query }], { output: schema }, ); console.log("\n👨‍🍳 Chef Michel:", response.object); } main(); ``` Output: ``` Query: I want to make lasagna, can you generate a lasagna recipe for me? 👨‍🍳 Chef Michel: { ingredients: [ { name: "Lasagna noodles", amount: "12 sheets" }, { name: "Ground beef", amount: "1 pound" }, // ... ], steps: [ "Preheat oven to 375°F (190°C).", "Cook the lasagna noodles according to package instructions.", // ... ] } ``` --- ## Running the Agent Server ### Using `mastra dev` You can run your agent as a service using the `mastra dev` command: ```bash copy mastra dev ``` This will start a server exposing endpoints to interact with your registered agents. ### Accessing the Chef Assistant API By default, `mastra dev` runs on `http://localhost:4111`. Your Chef Assistant agent will be available at: ``` POST http://localhost:4111/api/agents/chefAgent/generate ``` ### Interacting with the Agent via `curl` You can interact with the agent using `curl` from the command line: ```bash copy curl -X POST http://localhost:4111/api/agents/chefAgent/generate \ -H "Content-Type: application/json" \ -d '{ "messages": [ { "role": "user", "content": "I have eggs, flour, and milk. What can I make?" } ] }' ``` **Sample Response:** ```json { "text": "You can make delicious pancakes! Here's a simple recipe..." } ``` --- title: "Building a Research Paper Assistant | Mastra RAG Guides" description: Guide on creating an AI research assistant that can analyze and answer questions about academic papers using RAG. --- import { Steps } from "nextra/components"; # Building a Research Paper Assistant with RAG [EN] Source: https://mastra.ai/en/guides/guide/research-assistant In this guide, we'll create an AI research assistant that can analyze academic papers and answer specific questions about their content using Retrieval Augmented Generation (RAG). We'll use the foundational Transformer paper [Attention Is All You Need](https://arxiv.org/html/1706.03762) as our example. ## Understanding RAG Components Let's understand how RAG works and how we'll implement each component: 1. 
Knowledge Store/Index - Converting text into vector representations - Creating numerical representations of content - Implementation: We'll use OpenAI's text-embedding-3-small to create embeddings and store them in PgVector 2. Retriever - Finding relevant content via similarity search - Matching query embeddings with stored vectors - Implementation: We'll use PgVector to perform similarity searches on our stored embeddings 3. Generator - Processing retrieved content with an LLM - Creating contextually informed responses - Implementation: We'll use GPT-4o-mini to generate answers based on retrieved content Our implementation will: 1. Process the Transformer paper into embeddings 2. Store them in PgVector for quick retrieval 3. Use similarity search to find relevant sections 4. Generate accurate responses using retrieved context ## Project Structure ``` research-assistant/ ├── src/ │ ├── mastra/ │ │ ├── agents/ │ │ │ └── researchAgent.ts │ │ └── index.ts │ ├── index.ts │ └── store.ts ├── package.json └── .env ``` ### Initialize Project and Install Dependencies First, create a new directory for your project and navigate into it: ```bash mkdir research-assistant cd research-assistant ``` Initialize a new Node.js project and install the required dependencies: ```bash npm init -y npm install @mastra/core @mastra/rag @mastra/pg @ai-sdk/openai ai zod ``` Set up environment variables for API access and database connection: ```bash filename=".env" copy OPENAI_API_KEY=your_openai_api_key POSTGRES_CONNECTION_STRING=your_connection_string ``` Create the necessary files for our project: ```bash copy mkdir -p src/mastra/agents touch src/mastra/agents/researchAgent.ts touch src/mastra/index.ts src/store.ts src/index.ts ``` ### Create the Research Assistant Agent Now we'll create our RAG-enabled research assistant. The agent uses: - A [Vector Query Tool](/reference/tools/vector-query-tool) for performing semantic search over our vector store to find relevant content in our papers. - GPT-4o-mini for understanding queries and generating responses - Custom instructions that guide the agent on how to analyze papers, use retrieved content effectively, and acknowledge limitations ```ts copy showLineNumbers filename="src/mastra/agents/researchAgent.ts" import { Agent } from '@mastra/core/agent'; import { openai } from '@ai-sdk/openai'; import { createVectorQueryTool } from '@mastra/rag'; // Create a tool for semantic search over our paper embeddings const vectorQueryTool = createVectorQueryTool({ vectorStoreName: 'pgVector', indexName: 'papers', model: openai.embedding('text-embedding-3-small'), }); export const researchAgent = new Agent({ name: 'Research Assistant', instructions: `You are a helpful research assistant that analyzes academic papers and technical documents. Use the provided vector query tool to find relevant information from your knowledge base, and provide accurate, well-supported answers based on the retrieved content. Focus on the specific content available in the tool and acknowledge if you cannot find sufficient information to answer a question. 
Base your responses only on the content provided, not on general knowledge.`, model: openai('gpt-4o-mini'), tools: { vectorQueryTool, }, }); ``` ### Set Up the Mastra Instance and Vector Store ```ts copy showLineNumbers filename="src/mastra/index.ts" import { Mastra } from '@mastra/core'; import { PgVector } from '@mastra/pg'; import { researchAgent } from './agents/researchAgent'; // Initialize Mastra instance const pgVector = new PgVector(process.env.POSTGRES_CONNECTION_STRING!); export const mastra = new Mastra({ agents: { researchAgent }, vectors: { pgVector }, }); ``` ### Load and Process the Paper This step handles the initial document processing. We: 1. Fetch the research paper from its URL 2. Convert it into a document object 3. Split it into smaller, manageable chunks for better processing ```ts copy showLineNumbers filename="src/store.ts" import { openai } from "@ai-sdk/openai"; import { MDocument } from '@mastra/rag'; import { embedMany } from 'ai'; import { mastra } from "./mastra"; // Load the paper const paperUrl = "https://arxiv.org/html/1706.03762"; const response = await fetch(paperUrl); const paperText = await response.text(); // Create document and chunk it const doc = MDocument.fromText(paperText); const chunks = await doc.chunk({ strategy: 'recursive', size: 512, overlap: 50, separator: '\n', }); console.log("Number of chunks:", chunks.length); // Number of chunks: 893 ``` ### Create and Store Embeddings Finally, we'll prepare our content for RAG by: 1. Generating embeddings for each chunk of text 2. Creating a vector store index to hold our embeddings 3. Storing both the embeddings and metadata (original text and source information) in our vector database > **Note**: This metadata is crucial as it allows us to return the actual content when the vector store finds relevant matches. This allows our agent to efficiently search and retrieve relevant information. ```ts copy showLineNumbers{23} filename="src/store.ts" // Generate embeddings const { embeddings } = await embedMany({ model: openai.embedding('text-embedding-3-small'), values: chunks.map(chunk => chunk.text), }); // Get the vector store instance from Mastra const vectorStore = mastra.getVector('pgVector'); // Create an index for our paper chunks await vectorStore.createIndex({ indexName: 'papers', dimension: 1536, }); // Store embeddings await vectorStore.upsert({ indexName: 'papers', vectors: embeddings, metadata: chunks.map(chunk => ({ text: chunk.text, source: 'transformer-paper' })), }); ``` This will: 1. Load the paper from the URL 2. Split it into manageable chunks 3. Generate embeddings for each chunk 4. Store both the embeddings and text in our vector database To run the script and store the embeddings: ```bash npx bun src/store.ts ``` ### Test the Assistant Let's test our research assistant with different types of queries: ```ts filename="src/index.ts" showLineNumbers copy import { mastra } from "./mastra"; const agent = mastra.getAgent('researchAgent'); // Basic query about concepts const query1 = "What problems does sequence modeling face with neural networks?"; const response1 = await agent.generate(query1); console.log("\nQuery:", query1); console.log("Response:", response1.text); ``` Run the script: ```bash copy npx bun src/index.ts ``` You should see output like: ``` Query: What problems does sequence modeling face with neural networks? Response: Sequence modeling with neural networks faces several key challenges: 1. 
Vanishing and exploding gradients during training, especially with long sequences 2. Difficulty handling long-term dependencies in the input 3. Limited computational efficiency due to sequential processing 4. Challenges in parallelizing computations, resulting in longer training times ``` Let's try another question: ```ts filename="src/index.ts" showLineNumbers{10} copy // Query about specific findings const query2 = "What improvements were achieved in translation quality?"; const response2 = await agent.generate(query2); console.log("\nQuery:", query2); console.log("Response:", response2.text); ``` Output: ``` Query: What improvements were achieved in translation quality? Response: The model showed significant improvements in translation quality, achieving more than 2.0 BLEU points improvement over previously reported models on the WMT 2014 English-to-German translation task, while also reducing training costs. ``` ### Serve the Application Start the Mastra server to expose your research assistant via API: ```bash mastra dev ``` Your research assistant will be available at: ``` http://localhost:4111/api/agents/researchAgent/generate ``` Test with curl: ```bash curl -X POST http://localhost:4111/api/agents/researchAgent/generate \ -H "Content-Type: application/json" \ -d '{ "messages": [ { "role": "user", "content": "What were the main findings about model parallelization?" } ] }' ``` ## Advanced RAG Examples Explore these examples for more advanced RAG techniques: - [Filter RAG](/examples/rag/usage/filter-rag) for filtering results using metadata - [Cleanup RAG](/examples/rag/usage/cleanup-rag) for optimizing information density - [Chain of Thought RAG](/examples/rag/usage/cot-rag) for complex reasoning queries using workflows - [Rerank RAG](/examples/rag/usage/rerank-rag) for improved result relevance --- title: "Building an AI Stock Agent | Mastra Agents | Guides" description: Guide on creating a simple stock agent in Mastra to fetch the last day's closing stock price for a given symbol. --- import { Steps } from "nextra/components"; import YouTube from "@/components/youtube"; # Stock Agent [EN] Source: https://mastra.ai/en/guides/guide/stock-agent We're going to create a simple agent that fetches the last day's closing stock price for a given symbol. This example will show you how to create a tool, add it to an agent, and use the agent to fetch stock prices. ## Project Structure ``` stock-price-agent/ ├── src/ │ ├── agents/ │ │ └── stockAgent.ts │ ├── tools/ │ │ └── stockPrices.ts │ └── index.ts ├── package.json └── .env ``` --- ## Initialize the Project and Install Dependencies First, create a new directory for your project and navigate into it: ```bash mkdir stock-price-agent cd stock-price-agent ``` Initialize a new Node.js project and install the required dependencies: ```bash npm init -y npm install @mastra/core zod @ai-sdk/openai ``` Set Up Environment Variables Create a `.env` file at the root of your project to store your OpenAI API key. ```bash filename=".env" copy OPENAI_API_KEY=your_openai_api_key ``` Create the necessary directories and files: ```bash mkdir -p src/agents src/tools touch src/agents/stockAgent.ts src/tools/stockPrices.ts src/index.ts ``` --- ## Create the Stock Price Tool Next, we'll create a tool that fetches the last day's closing stock price for a given symbol. 
```ts filename="src/tools/stockPrices.ts" import { createTool } from "@mastra/core/tools"; import { z } from "zod"; const getStockPrice = async (symbol: string) => { const data = await fetch( `https://mastra-stock-data.vercel.app/api/stock-data?symbol=${symbol}`, ).then((r) => r.json()); return data.prices["4. close"]; }; export const stockPrices = createTool({ id: "Get Stock Price", inputSchema: z.object({ symbol: z.string(), }), description: `Fetches the last day's closing stock price for a given symbol`, execute: async ({ context: { symbol } }) => { console.log("Using tool to fetch stock price for", symbol); return { symbol, currentPrice: await getStockPrice(symbol), }; }, }); ``` --- ## Add the Tool to an Agent We'll create an agent and add the `stockPrices` tool to it. ```ts filename="src/agents/stockAgent.ts" import { Agent } from "@mastra/core/agent"; import { openai } from "@ai-sdk/openai"; import * as tools from "../tools/stockPrices"; export const stockAgent = new Agent({ name: "Stock Agent", instructions: "You are a helpful assistant that provides current stock prices. When asked about a stock, use the stock price tool to fetch the stock price.", model: openai("gpt-4o-mini"), tools: { stockPrices: tools.stockPrices, }, }); ``` --- ## Set Up the Mastra Instance We need to initialize the Mastra instance with our agent and tool. ```ts filename="src/index.ts" import { Mastra } from "@mastra/core"; import { stockAgent } from "./agents/stockAgent"; export const mastra = new Mastra({ agents: { stockAgent }, }); ``` ## Serve the Application Instead of running the application directly, we'll use the `mastra dev` command to start the server. This will expose your agent via REST API endpoints, allowing you to interact with it over HTTP. In your terminal, start the Mastra server by running: ```bash mastra dev --dir src ``` This command will allow you to test your stockPrices tool and your stockAgent within the playground. This will also start the server and make your agent available at: ``` http://localhost:4111/api/agents/stockAgent/generate ``` --- ## Test the Agent with cURL Now that your server is running, you can test your agent's endpoint using `curl`: ```bash curl -X POST http://localhost:4111/api/agents/stockAgent/generate \ -H "Content-Type: application/json" \ -d '{ "messages": [ { "role": "user", "content": "What is the current stock price of Apple (AAPL)?" } ] }' ``` **Expected Response:** You should receive a JSON response similar to: ```json { "text": "The current price of Apple (AAPL) is $174.55.", "agent": "Stock Agent" } ``` This indicates that your agent successfully processed the request, used the `stockPrices` tool to fetch the stock price, and returned the result. --- title: 'Overview' description: 'Guides on building with Mastra' --- # Guides [EN] Source: https://mastra.ai/en/guides While examples show quick implementations and docs explain specific features, these guides are a bit longer and designed to demonstrate core Mastra concepts: ## [AI Recruiter](/guides/guide/ai-recruiter) Create a workflow that processes candidate resumes and conducts interviews, demonstrating branching logic and LLM integration in Mastra workflows. ## [Chef Assistant](/guides/guide/chef-michel) Build an AI chef agent that helps users cook meals with available ingredients, showing how to create interactive agents with custom tools. 
## [Research Paper Assistant](/guides/guide/research-assistant) Develop an AI research assistant that analyzes academic papers using Retrieval Augmented Generation (RAG), demonstrating document processing and question answering. ## [Stock Agent](/guides/guide/stock-agent) Implement a simple agent that fetches stock prices, illustrating the basics of creating tools and integrating them with Mastra agents. --- title: "Reference: createTool() | Tools | Agents | Mastra Docs" description: Documentation for the createTool function in Mastra, which creates custom tools for agents and workflows. --- # `createTool()` [EN] Source: https://mastra.ai/en/reference/agents/createTool The `createTool()` function creates typed tools that can be executed by agents or workflows. Tools have built-in schema validation, execution context, and integration with the Mastra ecosystem. ## Overview Tools are a fundamental building block in Mastra that allow agents to interact with external systems, perform computations, and access data. Each tool has: - A unique identifier - A description that helps the AI understand when and how to use the tool - Optional input and output schemas for validation - An execution function that implements the tool's logic ## Example Usage ```ts filename="src/tools/stock-tools.ts" showLineNumbers copy import { createTool } from "@mastra/core/tools"; import { z } from "zod"; // Helper function to fetch stock data const getStockPrice = async (symbol: string) => { const response = await fetch( `https://mastra-stock-data.vercel.app/api/stock-data?symbol=${symbol}` ); const data = await response.json(); return data.prices["4. close"]; }; // Create a tool to get stock prices export const stockPriceTool = createTool({ id: "getStockPrice", description: "Fetches the current stock price for a given ticker symbol", inputSchema: z.object({ symbol: z.string().describe("The stock ticker symbol (e.g., AAPL, MSFT)") }), outputSchema: z.object({ symbol: z.string(), price: z.number(), currency: z.string(), timestamp: z.string() }), execute: async ({ context }) => { const price = await getStockPrice(context.symbol); return { symbol: context.symbol, price: parseFloat(price), currency: "USD", timestamp: new Date().toISOString() }; } }); // Create a tool that uses the thread context export const threadInfoTool = createTool({ id: "getThreadInfo", description: "Returns information about the current conversation thread", inputSchema: z.object({ includeResource: z.boolean().optional().default(false) }), execute: async ({ context, threadId, resourceId }) => { return { threadId, resourceId: context.includeResource ? resourceId : undefined, timestamp: new Date().toISOString() }; } }); ``` ## API Reference ### Parameters `createTool()` accepts a single object with the following properties: Promise", required: false, description: "Async function that implements the tool's logic. 
## API Reference

### Parameters

`createTool()` accepts a single object with the following properties:

- `id` (`string`): Unique identifier for the tool.
- `description` (`string`): Description of the tool's functionality that helps the AI understand when and how to use it.
- `execute` (`(context: ToolExecutionContext, options?: ToolOptions) => Promise<any>`, optional): Async function that implements the tool's logic. Receives the execution context and optional configuration.

  `ToolExecutionContext`:
  - `context` (`object`): The validated input data that matches the inputSchema
  - `threadId` (`string`, optional): Identifier for the conversation thread, if available
  - `resourceId` (`string`, optional): Identifier for the user or resource interacting with the tool
  - `mastra` (`Mastra`, optional): Reference to the Mastra instance, if available

  `ToolOptions`:
  - `toolCallId` (`string`): The ID of the tool call. You can use it e.g. when sending tool-call related information with stream data.
  - `messages` (`CoreMessage[]`): Messages that were sent to the language model to initiate the response that contained the tool call. The messages do not include the system prompt nor the assistant response that contained the tool call.
  - `abortSignal` (`AbortSignal`, optional): An optional abort signal that indicates that the overall operation should be aborted.

- `inputSchema` (`ZodSchema`, optional): Zod schema that defines and validates the tool's input parameters. If not provided, the tool will accept any input.
- `outputSchema` (`ZodSchema`, optional): Zod schema that defines and validates the tool's output. Helps ensure the tool returns data in the expected format.

### Returns

A `Tool` instance that can be used with agents, workflows, or directly executed:

- `id` (`string`): The tool's unique identifier
- `description` (`string`): Description of the tool's functionality
- `inputSchema` (`ZodSchema | undefined`): Schema for validating inputs
- `outputSchema` (`ZodSchema | undefined`): Schema for validating outputs
- `execute` (`Function`): The tool's execution function

## Type Safety

The `createTool()` function provides full type safety through TypeScript generics:

- Input types are inferred from the `inputSchema`
- Output types are inferred from the `outputSchema`
- The execution context is properly typed based on the input schema

This ensures that your tools are type-safe throughout your application.
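To make the inference concrete, here is a minimal sketch (the `echoTool` below is hypothetical, invented for illustration): both the `context` parameter and the return type are derived from the schemas.

```ts
import { createTool } from "@mastra/core/tools";
import { z } from "zod";

// Hypothetical tool illustrating schema-driven type inference
const echoTool = createTool({
  id: "echo",
  description: "Echoes a message back, optionally uppercased",
  inputSchema: z.object({
    message: z.string(),
    uppercase: z.boolean().optional(),
  }),
  outputSchema: z.object({
    echoed: z.string(),
  }),
  execute: async ({ context }) => {
    // `context` is typed as { message: string; uppercase?: boolean }
    const echoed = context.uppercase
      ? context.message.toUpperCase()
      : context.message;
    // The return value is checked against outputSchema: { echoed: string }
    return { echoed };
  },
});
```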
## Best Practices

1. **Descriptive IDs**: Use clear, action-oriented IDs like `getWeatherForecast` or `searchDatabase`
2. **Detailed Descriptions**: Provide comprehensive descriptions that explain when and how to use the tool
3. **Input Validation**: Use Zod schemas to validate inputs and provide helpful error messages
4. **Error Handling**: Implement proper error handling in your execute function
5. **Idempotency**: When possible, make your tools idempotent (same input always produces same output)
6. **Performance**: Keep tools lightweight and fast to execute

---
title: "Reference: Agent.generate() | Agents | Mastra Docs"
description: "Documentation for the `.generate()` method in Mastra agents, which produces text or structured responses."
---

# Agent.generate()

[EN] Source: https://mastra.ai/en/reference/agents/generate

The `generate()` method is used to interact with an agent to produce text or structured responses. This method accepts `messages` and an optional `options` object as parameters.

## Parameters

### `messages`

The `messages` parameter can be:

- A single string
- An array of strings
- An array of message objects with `role` and `content` properties

The message object structure:

```typescript
interface Message {
  role: 'system' | 'user' | 'assistant';
  content: string;
}
```

### `options` (Optional)

An optional object that can include configuration for output structure, memory management, tool usage, telemetry, and more. Options include:

- `onStepFinish` (`(step: string) => void`, optional): Callback function called after each execution step. Receives step details as a JSON string. Unavailable for structured output.
- `resourceId` (`string`, optional): Identifier for the user or resource interacting with the agent. Must be provided if threadId is provided.
- `telemetry` (`TelemetrySettings`, optional): Settings for telemetry collection during generation. See TelemetrySettings section below for details.
- `temperature` (`number`, optional): Controls randomness in the model's output. Higher values (e.g., 0.8) make the output more random, lower values (e.g., 0.2) make it more focused and deterministic.
- `threadId` (`string`, optional): Identifier for the conversation thread. Allows for maintaining context across multiple interactions. Must be provided if resourceId is provided.
- `toolChoice` (`'auto' | 'none' | 'required' | { type: 'tool'; toolName: string }`, optional, default `'auto'`): Controls how the agent uses tools during generation.
- `toolsets` (`ToolsetsInput`, optional): Additional toolsets to make available to the agent during generation.

#### MemoryConfig

Configuration options for memory management.

#### TelemetrySettings

Settings for telemetry collection during generation:

- `metadata` (`Record<string, AttributeValue>`, optional): Additional information to include in the telemetry data. AttributeValue can be string, number, boolean, array of these types, or null.
- `tracer` (`Tracer`, optional): A custom OpenTelemetry tracer instance to use for the telemetry data. See OpenTelemetry documentation for details.

## Returns

The return value of the `generate()` method depends on the options provided, specifically the `output` option.

- `toolCalls` (`ToolCall[]`, optional): The tool calls made during the generation process. Present in both text and object modes.
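As a usage sketch (the agent instance and IDs below are hypothetical; the options are drawn from the list above):

```typescript
// Assumes `myAgent` is a registered agent instance
const response = await myAgent.generate(
  [{ role: "user", content: "What's on my schedule today?" }],
  {
    threadId: "thread-123",
    resourceId: "user-456",
    temperature: 0.2,
    toolChoice: "auto",
  },
);

console.log(response.text);      // the generated text
console.log(response.toolCalls); // any tool calls made along the way
```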
## Related Methods

For real-time streaming responses, see the [`stream()`](./stream.mdx) method documentation.

---
title: "Reference: getAgent() | Agent Config | Agents | Mastra Docs"
description: API Reference for getAgent.
---

# `getAgent()`

[EN] Source: https://mastra.ai/en/reference/agents/getAgent

Retrieve an agent based on the provided configuration:

```ts showLineNumbers copy
async function getAgent({
  connectionId,
  agent,
  apis,
  logger,
}: {
  connectionId: string;
  agent: Record<string, any>;
  apis: Record<string, any>;
  logger: any;
}): Promise<(props: { prompt: string }) => Promise<{ message: string }>> {
  return async (props: { prompt: string }) => {
    return {
      message: "Hello, world!",
    };
  };
}
```

## API Signature

### Parameters

- `agent` (`Record<string, any>`): The agent configuration object.
- `apis` (`Record<string, any>`): A map of API names to their respective API objects.

### Returns

A function that takes a prompt and resolves to the agent's response.

---
title: "Reference: Agent.stream() | Streaming | Agents | Mastra Docs"
description: Documentation for the `.stream()` method in Mastra agents, which enables real-time streaming of responses.
---

# `stream()`

[EN] Source: https://mastra.ai/en/reference/agents/stream

The `stream()` method enables real-time streaming of responses from an agent. This method accepts `messages` and an optional `options` object as parameters, similar to `generate()`.

## Parameters

### `messages`

The `messages` parameter can be:

- A single string
- An array of strings
- An array of message objects with `role` and `content` properties

The message object structure:

```typescript
interface Message {
  role: 'system' | 'user' | 'assistant';
  content: string;
}
```

### `options` (Optional)

An optional object that can include configuration for output structure, memory management, tool usage, telemetry, and more. Options include:

- `onStepFinish` (`(step: string) => void`, optional): Callback function called after each step during streaming. Unavailable for structured output.
- `output` (`Zod schema | JsonSchema7`, optional): Defines the expected structure of the output. Can be a JSON Schema object or a Zod schema.
- `resourceId` (`string`, optional): Identifier for the user or resource interacting with the agent. Must be provided if threadId is provided.
- `telemetry` (`TelemetrySettings`, optional): Settings for telemetry collection during streaming. See TelemetrySettings section below for details.
- `temperature` (`number`, optional): Controls randomness in the model's output. Higher values (e.g., 0.8) make the output more random, lower values (e.g., 0.2) make it more focused and deterministic.
- `threadId` (`string`, optional): Identifier for the conversation thread. Allows for maintaining context across multiple interactions. Must be provided if resourceId is provided.
- `toolChoice` (`'auto' | 'none' | 'required' | { type: 'tool'; toolName: string }`, optional, default `'auto'`): Controls how the agent uses tools during streaming.
- `toolsets` (`ToolsetsInput`, optional): Additional toolsets to make available to the agent during this stream.

#### MemoryConfig

Configuration options for memory management.

#### TelemetrySettings

Settings for telemetry collection during streaming:

- `metadata` (`Record<string, AttributeValue>`, optional): Additional information to include in the telemetry data. AttributeValue can be string, number, boolean, array of these types, or null.
- `tracer` (`Tracer`, optional): A custom OpenTelemetry tracer instance to use for the telemetry data. See OpenTelemetry documentation for details.

## Returns

The return value of the `stream()` method depends on the options provided, specifically the `output` option.

- `textStream` (`AsyncIterable<string>`, optional): Stream of text chunks. Present when output is 'text' (no schema provided) or when using `experimental_output`.
- `objectStream` (`AsyncIterable<object>`, optional): Stream of structured data. Present only when using the `output` option with a schema.
- `partialObjectStream` (`AsyncIterable<object>`, optional): Stream of structured data. Present only when using the `experimental_output` option.
- `object` (`Promise<object>`, optional): Promise that resolves to the final structured output. Present when using either `output` or `experimental_output` options.

## Examples

### Basic Text Streaming

```typescript
const stream = await myAgent.stream([
  { role: "user", content: "Tell me a story." },
]);

for await (const chunk of stream.textStream) {
  process.stdout.write(chunk);
}
```

### Structured Output Streaming with Thread Context

```typescript
const schema = {
  type: 'object',
  properties: {
    summary: { type: 'string' },
    nextSteps: { type: 'array', items: { type: 'string' } }
  },
  required: ['summary', 'nextSteps']
};

const response = await myAgent.stream(
  "What should we do next?",
  {
    output: schema,
    threadId: "project-123",
    onFinish: text => console.log("Finished:", text)
  }
);

for await (const chunk of response.textStream) {
  console.log(chunk);
}

const result = await response.object;
console.log("Final structured result:", result);
```

The key difference between an Agent's `stream()` and an LLM's `stream()` is that Agents maintain conversation context through `threadId`, can access tools, and integrate with the agent's memory system.

---
title: "mastra build"
description: "Build your Mastra project for production deployment"
---

# `mastra build`

[EN] Source: https://mastra.ai/en/reference/cli/build

The `mastra build` command bundles your Mastra project into a production-ready Hono server. Hono is a lightweight web framework that provides type-safe routing and middleware support, making it ideal for deploying Mastra agents as HTTP endpoints.

## Usage

```bash
mastra build [options]
```

## Options

- `--dir <path>`: Directory containing your Mastra project (default: current directory)

## What It Does

1. Locates your Mastra entry file (either `src/mastra/index.ts` or `src/mastra/index.js`)
2. Creates a `.mastra` output directory
3. Bundles your code using Rollup with:
   - Tree shaking for optimal bundle size
   - Node.js environment targeting
   - Source map generation for debugging

## Example

```bash
# Build from current directory
mastra build

# Build from specific directory
mastra build --dir ./my-mastra-project
```

## Output

The command generates a production bundle in the `.mastra` directory, which includes:

- A Hono-based HTTP server with your Mastra agents exposed as endpoints
- Bundled JavaScript files optimized for production
- Source maps for debugging
- Required dependencies

This output is suitable for:

- Deploying to cloud servers (EC2, Digital Ocean)
- Running in containerized environments
- Using with container orchestration systems

## Deployers

When a Deployer is used, the build output is automatically prepared for the target platform, e.g.:

- [Vercel Deployer](/reference/deployers/vercel)
- [Netlify Deployer](/reference/deployers/netlify)
- [Cloudflare Deployer](/reference/deployers/cloudflare)

---
title: "`mastra dev` Reference | Local Development | Mastra CLI"
description: Documentation for the mastra dev command, which starts a development server for agents, tools, and workflows.
---

# `mastra dev` Reference

[EN] Source: https://mastra.ai/en/reference/cli/dev

The `mastra dev` command starts a development server that exposes REST routes for your agents, tools, and workflows.

## Parameters

- `--dir <path>`: Directory containing your Mastra folder (default: current directory)

## Routes

Starting the server with `mastra dev` exposes a set of REST routes by default:

### System Routes

- **GET `/api`**: Get API status.

### Agent Routes

Agents are expected to be exported from `src/mastra/agents`.

- **GET `/api/agents`**: Lists the registered agents found in your Mastra folder.
- **GET `/api/agents/:agentId`**: Get agent by ID.
- **GET `/api/agents/:agentId/evals/ci`**: Get CI evals by agent ID.
- **GET `/api/agents/:agentId/evals/live`**: Get live evals by agent ID.
- **POST `/api/agents/:agentId/generate`**: Sends a text-based prompt to the specified agent, returning the agent's response.
- **POST `/api/agents/:agentId/stream`**: Stream a response from an agent.
- **POST `/api/agents/:agentId/instructions`**: Update an agent's instructions.
- **POST `/api/agents/:agentId/instructions/enhance`**: Generate an improved system prompt from instructions.
- **GET `/api/agents/:agentId/speakers`**: Get available speakers for an agent.
- **POST `/api/agents/:agentId/speak`**: Convert text to speech using the agent's voice provider.
- **POST `/api/agents/:agentId/listen`**: Convert speech to text using the agent's voice provider.
- **POST `/api/agents/:agentId/tools/:toolId/execute`**: Execute a tool through an agent.

### Tool Routes

Tools are expected to be exported from `src/mastra/tools` (or the configured tools directory).

- **GET `/api/tools`**: Get all tools.
- **GET `/api/tools/:toolId`**: Get tool by ID.
- **POST `/api/tools/:toolId/execute`**: Invokes a specific tool by name, passing input data in the request body.

### Workflow Routes

Workflows are expected to be exported from `src/mastra/workflows` (or the configured workflows directory).

- **GET `/api/workflows`**: Get all workflows.
- **GET `/api/workflows/:workflowId`**: Get workflow by ID.
- **POST `/api/workflows/:workflowName/start`**: Starts the specified workflow.
- **POST `/api/workflows/:workflowName/:instanceId/event`**: Sends an event or trigger signal to an existing workflow instance.
- **GET `/api/workflows/:workflowName/:instanceId/status`**: Returns status info for a running workflow instance.
- **POST `/api/workflows/:workflowId/resume`**: Resume a suspended workflow step.
- **POST `/api/workflows/:workflowId/resume-async`**: Resume a suspended workflow step asynchronously.
- **POST `/api/workflows/:workflowId/createRun`**: Create a new workflow run.
- **POST `/api/workflows/:workflowId/start-async`**: Execute/Start a workflow asynchronously.
- **GET `/api/workflows/:workflowId/watch`**: Watch workflow transitions in real-time.

### Memory Routes

- **GET `/api/memory/status`**: Get memory status.
- **GET `/api/memory/threads`**: Get all threads.
- **GET `/api/memory/threads/:threadId`**: Get thread by ID.
- **GET `/api/memory/threads/:threadId/messages`**: Get messages for a thread.
- **POST `/api/memory/threads`**: Create a new thread.
- **PATCH `/api/memory/threads/:threadId`**: Update a thread.
- **DELETE `/api/memory/threads/:threadId`**: Delete a thread.
- **POST `/api/memory/save-messages`**: Save messages.

### Telemetry Routes

- **GET `/api/telemetry`**: Get all traces.

### Log Routes

- **GET `/api/logs`**: Get all logs.
- **GET `/api/logs/transports`**: List of all log transports.
- **GET `/api/logs/:runId`**: Get logs by run ID.
### Vector Routes

- **POST `/api/vector/:vectorName/upsert`**: Upsert vectors into an index.
- **POST `/api/vector/:vectorName/create-index`**: Create a new vector index.
- **POST `/api/vector/:vectorName/query`**: Query vectors from an index.
- **GET `/api/vector/:vectorName/indexes`**: List all indexes for a vector store.
- **GET `/api/vector/:vectorName/indexes/:indexName`**: Get details about a specific index.
- **DELETE `/api/vector/:vectorName/indexes/:indexName`**: Delete a specific index.

### OpenAPI Specification

- **GET `/openapi.json`**: Returns an auto-generated OpenAPI specification for your project's routes.
- **GET `/swagger-ui`**: Access Swagger UI for API documentation.

## Additional Notes

The port defaults to 4111. Make sure you have your environment variables set up in your `.env.development` or `.env` file for any providers you use (e.g., `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, etc.).

### Example request

To test an agent after running `mastra dev`:

```bash
curl -X POST http://localhost:4111/api/agents/myAgent/generate \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      { "role": "user", "content": "Hello, how can you assist me today?" }
    ]
  }'
```

---
title: "`mastra init` reference | Project Creation | Mastra CLI"
description: Documentation for the mastra init command, which creates a new Mastra project with interactive setup options.
---

# `mastra init` Reference

[EN] Source: https://mastra.ai/en/reference/cli/init

## `mastra init`

This creates a new Mastra project. You can run it in three different ways:

1. **Interactive Mode (Recommended)**

   Run without flags to use the interactive prompt, which will guide you through:

   - Choosing a directory for Mastra files
   - Selecting components to install (Agents, Tools, Workflows)
   - Choosing a default LLM provider (OpenAI, Anthropic, or Groq)
   - Deciding whether to include example code

2. **Quick Start with Defaults**

   ```bash
   mastra init --default
   ```

   This sets up a project with:

   - Source directory: `src/`
   - All components: agents, tools, workflows
   - OpenAI as the default provider
   - No example code

3. **Custom Setup**

   ```bash
   mastra init --dir src/mastra --components agents,tools --llm openai --example
   ```

   Options:

   - `-d, --dir`: Directory for Mastra files (defaults to src/mastra)
   - `-c, --components`: Comma-separated list of components (agents, tools, workflows)
   - `-l, --llm`: Default model provider (openai, anthropic, or groq)
   - `-k, --llm-api-key`: API key for the selected LLM provider (will be added to .env file)
   - `-e, --example`: Include example code
   - `-ne, --no-example`: Skip example code

# Agents API

[EN] Source: https://mastra.ai/en/reference/client-js/agents

The Agents API provides methods to interact with Mastra AI agents, including generating responses, streaming interactions, and managing agent tools.
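The examples below assume a configured client instance. A minimal sketch, assuming the client SDK is installed as `@mastra/client-js` and pointing at a local `mastra dev` server (the constructor options mirror the retry example in the error-handling section below):

```typescript
import { MastraClient } from "@mastra/client-js";

// Point the client at your running Mastra server
const client = new MastraClient({
  baseUrl: "http://localhost:4111",
});
```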
## Getting All Agents

Retrieve a list of all available agents:

```typescript
const agents = await client.getAgents();
```

## Working with a Specific Agent

Get an instance of a specific agent:

```typescript
const agent = client.getAgent("agent-id");
```

## Agent Methods

### Get Agent Details

Retrieve detailed information about an agent:

```typescript
const details = await agent.details();
```

### Generate Response

Generate a response from the agent:

```typescript
const response = await agent.generate({
  messages: [
    {
      role: "user",
      content: "Hello, how are you?",
    },
  ],
  threadId: "thread-1", // Optional: Thread ID for conversation context
  resourceid: "resource-1", // Optional: Resource ID
  output: {}, // Optional: Output configuration
});
```

### Stream Response

Stream responses from the agent for real-time interactions:

```typescript
const response = await agent.stream({
  messages: [
    {
      role: "user",
      content: "Tell me a story",
    },
  ],
});

// Process data stream with the processDataStream util
response.processDataStream({
  onTextPart: (text) => {
    process.stdout.write(text);
  },
  onFilePart: (file) => {
    console.log(file);
  },
  onDataPart: (data) => {
    console.log(data);
  },
  onErrorPart: (error) => {
    console.error(error);
  },
});

// You can also read from response body directly
const reader = response.body.getReader();
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  console.log(new TextDecoder().decode(value));
}
```

### Get Agent Tool

Retrieve information about a specific tool available to the agent:

```typescript
const tool = await agent.getTool("tool-id");
```

### Get Agent Evaluations

Get evaluation results for the agent:

```typescript
// Get CI evaluations
const evals = await agent.evals();

// Get live evaluations
const liveEvals = await agent.liveEvals();
```

# Error Handling

[EN] Source: https://mastra.ai/en/reference/client-js/error-handling

The Mastra Client SDK includes a built-in retry mechanism and error-handling capabilities.

## Error Handling

All API methods can throw errors that you can catch and handle:

```typescript
try {
  const agent = client.getAgent("agent-id");
  const response = await agent.generate({
    messages: [{ role: "user", content: "Hello" }],
  });
} catch (error) {
  console.error("An error occurred:", error.message);
}
```

## Retry Mechanism

The client automatically retries failed requests with exponential backoff:

```typescript
const client = new MastraClient({
  baseUrl: "http://localhost:4111",
  retries: 3, // Number of retry attempts
  backoffMs: 300, // Initial backoff time
  maxBackoffMs: 5000, // Maximum backoff time
});
```

### How Retries Work

1. First attempt fails → Wait 300ms
2. Second attempt fails → Wait 600ms
3. Third attempt fails → Wait 1200ms
4. Final attempt fails → Throw error

# Logs API

[EN] Source: https://mastra.ai/en/reference/client-js/logs

The Logs API provides methods to access and query system logs and debugging information in Mastra.

## Getting Logs

Retrieve system logs with optional filtering:

```typescript
const logs = await client.getLogs({
  transportId: "transport-1",
});
```

## Getting Logs for a Specific Run

Retrieve logs for a specific execution run:

```typescript
const runLogs = await client.getLogForRun({
  runId: "run-1",
  transportId: "transport-1",
});
```

# Memory API

[EN] Source: https://mastra.ai/en/reference/client-js/memory

The Memory API provides methods to manage conversation threads and message history in Mastra.
## Memory Thread Operations

### Get All Threads

Retrieve all memory threads for a specific resource:

```typescript
const threads = await client.getMemoryThreads({
  resourceId: "resource-1",
  agentId: "agent-1",
});
```

### Create a New Thread

Create a new memory thread:

```typescript
const thread = await client.createMemoryThread({
  title: "New Conversation",
  metadata: { category: "support" },
  resourceid: "resource-1",
  agentId: "agent-1",
});
```

### Working with a Specific Thread

Get an instance of a specific memory thread:

```typescript
const thread = client.getMemoryThread("thread-id", "agent-id");
```

## Thread Methods

### Get Thread Details

Retrieve details about a specific thread:

```typescript
const details = await thread.get();
```

### Update Thread

Update thread properties:

```typescript
const updated = await thread.update({
  title: "Updated Title",
  metadata: { status: "resolved" },
  resourceid: "resource-1",
});
```

### Delete Thread

Delete a thread and its messages:

```typescript
await thread.delete();
```

## Message Operations

### Save Messages

Save messages to memory:

```typescript
const savedMessages = await client.saveMessageToMemory({
  messages: [
    {
      role: "user",
      content: "Hello!",
      id: "1",
      threadId: "thread-1",
      createdAt: new Date(),
      type: "text",
    },
  ],
  agentId: "agent-1",
});
```

### Get Memory Status

Check the status of the memory system:

```typescript
const status = await client.getMemoryStatus("agent-id");
```

# Telemetry API

[EN] Source: https://mastra.ai/en/reference/client-js/telemetry

The Telemetry API provides methods to retrieve and analyze traces from your Mastra application. This helps you monitor and debug your application's behavior and performance.

## Getting Traces

Retrieve traces with optional filtering and pagination:

```typescript
const telemetry = await client.getTelemetry({
  name: "trace-name", // Optional: Filter by trace name
  scope: "scope-name", // Optional: Filter by scope
  page: 1, // Optional: Page number for pagination
  perPage: 10, // Optional: Number of items per page
  attribute: {
    // Optional: Filter by custom attributes
    key: "value",
  },
});
```

# Tools API

[EN] Source: https://mastra.ai/en/reference/client-js/tools

The Tools API provides methods to interact with and execute tools available in the Mastra platform.

## Getting All Tools

Retrieve a list of all available tools:

```typescript
const tools = await client.getTools();
```

## Working with a Specific Tool

Get an instance of a specific tool:

```typescript
const tool = client.getTool("tool-id");
```

## Tool Methods

### Get Tool Details

Retrieve detailed information about a tool:

```typescript
const details = await tool.details();
```

### Execute Tool

Execute a tool with specific arguments:

```typescript
const result = await tool.execute({
  args: {
    param1: "value1",
    param2: "value2",
  },
  threadId: "thread-1", // Optional: Thread context
  resourceid: "resource-1", // Optional: Resource identifier
});
```

# Vectors API

[EN] Source: https://mastra.ai/en/reference/client-js/vectors

The Vectors API provides methods to work with vector embeddings for semantic search and similarity matching in Mastra.
## Working with Vectors

Get an instance of a vector store:

```typescript
const vector = client.getVector("vector-name");
```

## Vector Methods

### Get Vector Index Details

Retrieve information about a specific vector index:

```typescript
const details = await vector.details("index-name");
```

### Create Vector Index

Create a new vector index:

```typescript
const result = await vector.createIndex({
  indexName: "new-index",
  dimension: 128,
  metric: "cosine", // 'cosine', 'euclidean', or 'dotproduct'
});
```

### Upsert Vectors

Add or update vectors in an index:

```typescript
const ids = await vector.upsert({
  indexName: "my-index",
  vectors: [
    [0.1, 0.2, 0.3], // First vector
    [0.4, 0.5, 0.6], // Second vector
  ],
  metadata: [{ label: "first" }, { label: "second" }],
  ids: ["id1", "id2"], // Optional: Custom IDs
});
```

### Query Vectors

Search for similar vectors:

```typescript
const results = await vector.query({
  indexName: "my-index",
  queryVector: [0.1, 0.2, 0.3],
  topK: 10,
  filter: { label: "first" }, // Optional: Metadata filter
  includeVector: true, // Optional: Include vectors in results
});
```

### Get All Indexes

List all available indexes:

```typescript
const indexes = await vector.getIndexes();
```

### Delete Index

Delete a vector index:

```typescript
const result = await vector.delete("index-name");
```

# Workflows API

[EN] Source: https://mastra.ai/en/reference/client-js/workflows

The Workflows API provides methods to interact with and execute automated workflows in Mastra.

## Getting All Workflows

Retrieve a list of all available workflows:

```typescript
const workflows = await client.getWorkflows();
```

## Working with a Specific Workflow

Get an instance of a specific workflow:

```typescript
const workflow = client.getWorkflow("workflow-id");
```

## Workflow Methods

### Get Workflow Details

Retrieve detailed information about a workflow:

```typescript
const details = await workflow.details();
```

### Start workflow run asynchronously

Start a workflow run with triggerData and await full run results:

```typescript
const { runId } = workflow.createRun();

const result = await workflow.startAsync({
  runId,
  triggerData: {
    param1: "value1",
    param2: "value2",
  },
});
```

### Resume Workflow run asynchronously

Resume a suspended workflow step and await full run result:

```typescript
const { runId } = workflow.createRun({ runId: prevRunId });

const result = await workflow.resumeAsync({
  runId,
  stepId: "step-id",
  contextData: { key: "value" },
});
```

### Watch Workflow

Watch workflow transitions:

```typescript
try {
  // Get workflow instance
  const workflow = client.getWorkflow("workflow-id");

  // Create a workflow run
  const { runId } = workflow.createRun();

  // Watch workflow run
  workflow.watch({ runId }, (record) => {
    // Every new record is the latest transition state of the workflow run
    console.log({
      activePaths: record.activePaths,
      results: record.results,
      timestamp: record.timestamp,
      runId: record.runId,
    });
  });

  // Start workflow run
  workflow.start({
    runId,
    triggerData: {
      city: "New York",
    },
  });
} catch (e) {
  console.error(e);
}
```

### Resume Workflow

Resume a workflow run and watch workflow step transitions:

```typescript
try {
  // To resume a workflow run, when a step is suspended
  const { runId } = workflow.createRun({ runId: prevRunId });

  // Watch run
  workflow.watch({ runId }, (record) => {
    // Every new record is the latest transition state of the workflow run
    console.log({
      activePaths: record.activePaths,
      results: record.results,
      timestamp: record.timestamp,
      runId: record.runId,
    });
  });

  // Resume run
  workflow.resume({
    runId,
    stepId: "step-id",
    contextData: { key: "value" },
  });
} catch (e) {
  console.error(e);
}
```
### Workflow run result

A workflow run result yields the following:

| Field | Type | Description |
|-------|------|-------------|
| `activePaths` | `Record` | Currently active paths in the workflow with their execution status |
| `results` | `CoreWorkflowRunResult['results']` | Results from the workflow execution |
| `timestamp` | `number` | Unix timestamp of when this transition occurred |
| `runId` | `string` | Unique identifier for this workflow run instance |

---
title: "Mastra Core"
description: Documentation for the Mastra Class, the core entry point for managing agents, workflows, and server endpoints.
---

# The Mastra Class

[EN] Source: https://mastra.ai/en/reference/core/mastra-class

The Mastra class is the core entry point for your application. It manages agents, workflows, and server endpoints.

## Constructor Options

- `agents` (`Record<string, Agent>`, optional, default `{}`): Agents to register, keyed by agent name.
- `tools` (`Record<string, ToolApi>`, optional, default `{}`): Custom tools to register. Structured as a key-value pair, with keys being the tool name and values being the tool function.
- `storage` (`MastraStorage`, optional): Storage engine instance for persisting data.
- `vectors` (`Record<string, MastraVector>`, optional): Vector store instances, used for semantic search and vector-based tools (e.g. Pinecone, PgVector, or Qdrant).
- `logger` (`Logger`, optional, default: Console logger with INFO level): Logger instance created with `createLogger()`.
- `workflows` (`Record<string, Workflow>`, optional, default `{}`): Workflows to register. Structured as a key-value pair, with keys being the workflow name and values being the workflow instance.
- `serverMiddleware` (`Array<{ handler: (c: any, next: () => Promise<void>) => Promise<any>; path?: string; }>`, optional, default `[]`): Server middleware functions to be applied to API routes. Each middleware can specify a path pattern (defaults to '/api/*').

## Initialization

The Mastra class is typically initialized in your `src/mastra/index.ts` file:

```typescript copy filename="src/mastra/index.ts"
import { Mastra } from "@mastra/core";
import { createLogger } from "@mastra/core/logger";

// Basic initialization
export const mastra = new Mastra({});

// Full initialization with all options
export const mastra = new Mastra({
  agents: {},
  workflows: {},
  integrations: [],
  logger: createLogger({
    name: "My Project",
    level: "info",
  }),
  storage: {},
  tools: {},
  vectors: {},
});
```

You can think of the `Mastra` class as a top-level registry. When you register tools with Mastra, your registered agents and workflows can use them. When you register integrations with Mastra, agents, workflows, and tools can use them.

## Methods

- `getAgents()` → `Record<string, Agent>`: Returns all registered agents as a key-value object. Example: `const agents = mastra.getAgents();`
- `getWorkflow(id, { serialized })` → `Workflow`: Returns a workflow instance by id. The serialized option (default: false) returns a simplified representation with just the name. Example: `const workflow = mastra.getWorkflow("myWorkflow");`
- `getWorkflows({ serialized })` → `Record<string, Workflow>`: Returns all registered workflows. The serialized option (default: false) returns simplified representations. Example: `const workflows = mastra.getWorkflows();`
- `getVector(name)` → `MastraVector`: Returns a vector store instance by name. Throws if not found. Example: `const vectorStore = mastra.getVector("myVectorStore");`
- `getVectors()` → `Record<string, MastraVector>`: Returns all registered vector stores as a key-value object. Example: `const vectorStores = mastra.getVectors();`
- `getDeployer()` → `MastraDeployer | undefined`: Returns the configured deployer instance, if any. Example: `const deployer = mastra.getDeployer();`
- `getStorage()` → `MastraStorage | undefined`: Returns the configured storage instance. Example: `const storage = mastra.getStorage();`
- `getMemory()` → `MastraMemory | undefined`: Returns the configured memory instance. Note: This is deprecated; memory should be added to agents directly. Example: `const memory = mastra.getMemory();`
- `getServerMiddleware()` → `Array<{ handler: Function; path: string; }>`: Returns the configured server middleware functions. Example: `const middleware = mastra.getServerMiddleware();`
- `setStorage(storage)` → `void`: Sets the storage instance for the Mastra instance. Example: `mastra.setStorage(new DefaultStorage());`
- `setLogger({ logger })` → `void`: Sets the logger for all components (agents, workflows, etc.). Example: `mastra.setLogger({ logger: createLogger({ name: "MyLogger" }) });`
- `setTelemetry(telemetry)` → `void`: Sets the telemetry configuration for all components. Example: `mastra.setTelemetry({ export: { type: "console" } });`
- `getLogger()` → `Logger`: Gets the configured logger instance. Example: `const logger = mastra.getLogger();`
- `getTelemetry()` → `Telemetry | undefined`: Gets the configured telemetry instance. Example: `const telemetry = mastra.getTelemetry();`
- `getLogsByRunId({ runId, transportId })` → `Promise`: Retrieves logs for a specific run ID and transport ID. Example: `const logs = await mastra.getLogsByRunId({ runId: "123", transportId: "456" });`
- `getLogs(transportId)` → `Promise`: Retrieves all logs for a specific transport ID. Example: `const logs = await mastra.getLogs("transportId");`

## Error Handling

The Mastra class methods throw typed errors that can be caught:

```typescript copy
try {
  const tool = mastra.getTool("nonexistentTool");
} catch (error) {
  if (error instanceof Error) {
    console.log(error.message); // "Tool with name nonexistentTool not found"
  }
}
```

---
title: "Cloudflare Deployer"
description: "Documentation for the CloudflareDeployer class, which deploys Mastra applications to Cloudflare Workers."
---

# CloudflareDeployer

[EN] Source: https://mastra.ai/en/reference/deployer/cloudflare

The CloudflareDeployer deploys Mastra applications to Cloudflare Workers, handling configuration, environment variables, and route management. It extends the abstract Deployer class to provide Cloudflare-specific deployment functionality.
## Usage Example

```typescript
import { Mastra } from '@mastra/core';
import { CloudflareDeployer } from '@mastra/deployer-cloudflare';

const mastra = new Mastra({
  deployer: new CloudflareDeployer({
    scope: 'your-account-id',
    projectName: 'your-project-name',
    routes: [
      {
        pattern: 'example.com/*',
        zone_name: 'example.com',
        custom_domain: true,
      },
    ],
    workerNamespace: 'your-namespace',
    auth: {
      apiToken: 'your-api-token',
      apiEmail: 'your-email',
    },
  }),
  // ... other Mastra configuration options
});
```

## Parameters

### Constructor Parameters

- `scope` (`string`): Your Cloudflare account ID.
- `projectName` (`string`): Name of your worker project.
- `routes` (`CFRoute[]`, optional): Route configurations for the worker.
- `workerNamespace` (`string`, optional): Namespace for the worker.
- `env` (`Record<string, any>`, optional): Environment variables to be included in the worker configuration.
- `auth` (`object`): Cloudflare authentication details.

### auth Object

- `apiToken` (`string`): Your Cloudflare API token.
- `apiEmail` (`string`): Your Cloudflare account email.

### CFRoute Object

- `pattern` (`string`): URL pattern to match (e.g., `example.com/*`).
- `zone_name` (`string`): Domain zone for the route.
- `custom_domain` (`boolean`, optional): Whether to use a custom domain.

### Environment Variables

The CloudflareDeployer handles environment variables from multiple sources:

1. **Environment Files**: Variables from `.env.production` and `.env` files.
2. **Configuration**: Variables passed through the `env` parameter.

## Build Mastra Project

To build your Mastra project for Cloudflare deployment:

```bash
npx mastra build
```

The build process generates the following output structure in the `.mastra/output` directory:

```
.mastra/output/
├── index.mjs       # Main worker entry point
├── wrangler.json   # Cloudflare Worker configuration
└── assets/         # Static assets and dependencies
```

### Wrangler Configuration

The CloudflareDeployer automatically generates a `wrangler.json` configuration file with the following settings:

```json
{
  "name": "your-project-name",
  "main": "./output/index.mjs",
  "compatibility_date": "2024-12-02",
  "compatibility_flags": ["nodejs_compat"],
  "observability": {
    "logs": {
      "enabled": true
    }
  },
  "vars": {
    // Environment variables from .env files and configuration
  },
  "routes": [
    // Route configurations if specified
  ]
}
```

### Route Configuration

Routes can be configured to direct traffic to your worker based on URL patterns and domains:

```typescript
const routes = [
  {
    pattern: 'api.example.com/*',
    zone_name: 'example.com',
    custom_domain: true,
  },
  {
    pattern: 'example.com/api/*',
    zone_name: 'example.com',
  },
];
```

## Deployment Options

After building, you can deploy your Mastra application `.mastra/output` to Cloudflare Workers using any of these methods:

1. **Wrangler CLI**: Deploy directly using Cloudflare's official CLI tool

   - Install the CLI: `npm install -g wrangler`
   - Navigate to the output directory: `cd .mastra/output`
   - Login to your Cloudflare account: `wrangler login`
   - Deploy to preview environment: `wrangler deploy`
   - For production deployment: `wrangler deploy --env production`

2. **Cloudflare Dashboard**: Upload the build output manually through the Cloudflare dashboard

> You can also run `wrangler dev` in your output directory `.mastra/output` to test your Mastra application locally.

## Platform Documentation

- [Cloudflare Workers](https://developers.cloudflare.com/workers/)

---
title: "Mastra Deployer"
description: Documentation for the Deployer abstract class, which handles packaging and deployment of Mastra applications.
---

# Deployer

[EN] Source: https://mastra.ai/en/reference/deployer/deployer

The Deployer handles the deployment of Mastra applications by packaging code, managing environment files, and serving applications using the Hono framework. Concrete implementations must define the deploy method for specific deployment targets.
## Usage Example

```typescript
import { Deployer } from "@mastra/deployer";

// Create a custom deployer by extending the abstract Deployer class
class CustomDeployer extends Deployer {
  constructor() {
    super({ name: 'custom-deployer' });
  }

  // Implement the abstract deploy method
  async deploy(outputDirectory: string): Promise<void> {
    // Prepare the output directory
    await this.prepare(outputDirectory);

    // Bundle the application
    await this._bundle('server.ts', 'mastra.ts', outputDirectory);

    // Custom deployment logic
    // ...
  }
}
```

## Parameters

### Constructor Parameters

- `name` (`string`): A unique name identifying the deployer instance.

### deploy Parameters

- `outputDirectory` (`string`): The directory where the bundled, deployment-ready application is written.

## Methods

- `getEnvFiles()` → `Promise<string[]>`: Returns a list of environment files to be used during deployment. By default, it looks for '.env.production' and '.env' files.
- `deploy(outputDirectory)` → `Promise<void>`: Abstract method that must be implemented by subclasses. Handles the deployment process to the specified output directory.

## Inherited Methods from Bundler

The Deployer class inherits the following key methods from the Bundler class:

- `prepare(outputDirectory)` → `Promise<void>`: Prepares the output directory by cleaning it and creating necessary subdirectories.
- `writeInstrumentationFile(outputDirectory)` → `Promise<void>`: Writes an instrumentation file to the output directory for telemetry purposes.
- `writePackageJson(outputDirectory, dependencies)` → `Promise<void>`: Generates a package.json file in the output directory with the specified dependencies.
- `_bundle(serverFile, mastraEntryFile, outputDirectory, bundleLocation?)` → `Promise<void>`: Bundles the application using the specified server and Mastra entry files.

## Core Concepts

### Deployment Lifecycle

The Deployer abstract class implements a structured deployment lifecycle:

1. **Initialization**: The deployer is initialized with a name and creates a Deps instance for dependency management.
2. **Environment Setup**: The `getEnvFiles` method identifies environment files (.env.production, .env) to be used during deployment.
3. **Preparation**: The `prepare` method (inherited from Bundler) cleans the output directory and creates necessary subdirectories.
4. **Bundling**: The `_bundle` method (inherited from Bundler) packages the application code and its dependencies.
5. **Deployment**: The abstract `deploy` method is implemented by subclasses to handle the actual deployment process.

### Environment File Management

The Deployer class includes built-in support for environment file management through the `getEnvFiles` method. This method:

- Looks for environment files in a predefined order (.env.production, .env)
- Uses the FileService to find the first existing file
- Returns an array of found environment files
- Returns an empty array if no environment files are found

```typescript
getEnvFiles(): Promise<string[]> {
  const possibleFiles = ['.env.production', '.env.local', '.env'];

  try {
    const fileService = new FileService();
    const envFile = fileService.getFirstExistingFile(possibleFiles);
    return Promise.resolve([envFile]);
  } catch {}

  return Promise.resolve([]);
}
```

### Bundling and Deployment Relationship

The Deployer class extends the Bundler class, establishing a clear relationship between bundling and deployment:

1. **Bundling as a Prerequisite**: Bundling is a prerequisite step for deployment, where the application code is packaged into a deployable format.
2. **Shared Infrastructure**: Both bundling and deployment share common infrastructure like dependency management and file system operations.
3. **Specialized Deployment Logic**: While bundling focuses on code packaging, deployment adds environment-specific logic for deploying the bundled code.
4. **Extensibility**: The abstract `deploy` method allows for creating specialized deployers for different target environments.

---
title: "Netlify Deployer"
description: "Documentation for the NetlifyDeployer class, which deploys Mastra applications to Netlify Functions."
---

# NetlifyDeployer

[EN] Source: https://mastra.ai/en/reference/deployer/netlify

The NetlifyDeployer deploys Mastra applications to Netlify Functions, handling site creation, configuration, and deployment processes. It extends the abstract Deployer class to provide Netlify-specific deployment functionality.

## Usage Example

```typescript
import { Mastra } from '@mastra/core';
import { NetlifyDeployer } from '@mastra/deployer-netlify';

const mastra = new Mastra({
  deployer: new NetlifyDeployer({
    scope: 'your-team-slug',
    projectName: 'your-project-name',
    token: 'your-netlify-token'
  }),
  // ... other Mastra configuration options
});
```

## Parameters

### Constructor Parameters

- `scope` (`string`): Your Netlify team slug or ID.
- `projectName` (`string`): Name of your Netlify site.
- `token` (`string`): Your Netlify authentication token.

### Environment Variables

The NetlifyDeployer handles environment variables from multiple sources:

1. **Environment Files**: Variables from `.env.production` and `.env` files.
2. **Configuration**: Variables passed through the Mastra configuration.
3. **Netlify Dashboard**: Variables can also be managed through Netlify's web interface.

## Build Mastra Project

To build your Mastra project for Netlify deployment:

```bash
npx mastra build
```

The build process generates the following output structure in the `.mastra/output` directory:

```
.mastra/output/
├── netlify/
│   └── functions/
│       └── api/
│           └── index.mjs    # Application entry point
└── netlify.toml             # Netlify configuration
```

### Netlify Configuration

The NetlifyDeployer automatically generates a `netlify.toml` configuration file in `.mastra/output` with the following settings:

```toml
[functions]
node_bundler = "esbuild"
directory = "netlify/functions"

[[redirects]]
force = true
from = "/*"
status = 200
to = "/.netlify/functions/api/:splat"
```

## Deployment Options

After building, you can deploy your Mastra application `.mastra/output` to Netlify using any of these methods:

1. **Netlify CLI**: Deploy directly using Netlify's official CLI tool

   - Install the CLI: `npm install -g netlify-cli`
   - Navigate to the output directory: `cd .mastra/output`
   - Deploy with functions directory specified: `netlify deploy --dir . --functions ./netlify/functions`
   - For production deployment add `--prod` flag: `netlify deploy --prod --dir . --functions ./netlify/functions`

2. **Netlify Dashboard**: Connect your Git repository or drag-and-drop the build output through the Netlify dashboard

3. **Netlify Dev**: Run your Mastra application locally with Netlify's development environment

> You can also run `netlify dev` in your output directory `.mastra/output` to test your Mastra application locally.

## Platform Documentation

- [Netlify](https://docs.netlify.com/)

---
title: "Vercel Deployer"
description: "Documentation for the VercelDeployer class, which deploys Mastra applications to Vercel."
---

# VercelDeployer

[EN] Source: https://mastra.ai/en/reference/deployer/vercel

The VercelDeployer deploys Mastra applications to Vercel, handling configuration, environment variable synchronization, and deployment processes.
It extends the abstract Deployer class to provide Vercel-specific deployment functionality.

## Usage Example

```typescript
import { Mastra } from '@mastra/core';
import { VercelDeployer } from '@mastra/deployer-vercel';

const mastra = new Mastra({
  deployer: new VercelDeployer({
    teamSlug: 'your-team-slug',
    projectName: 'your-project-name',
    token: 'your-vercel-token'
  }),
  // ... other Mastra configuration options
});
```

## Parameters

### Constructor Parameters

- `teamSlug` (`string`): Your Vercel team slug.
- `projectName` (`string`): Name of your Vercel project.
- `token` (`string`): Your Vercel authentication token.

### Environment Variables

The VercelDeployer handles environment variables from multiple sources:

1. **Environment Files**: Variables from `.env.production` and `.env` files.
2. **Configuration**: Variables passed through the Mastra configuration.
3. **Vercel Dashboard**: Variables can also be managed through Vercel's web interface.

The deployer automatically synchronizes environment variables between your local development environment and Vercel's environment variable system, ensuring consistency across all deployment environments (production, preview, and development).

## Build Mastra Project

To build your Mastra project for Vercel deployment:

```bash
npx mastra build
```

The build process generates the following output structure in the `.mastra/output` directory:

```
.mastra/output/
├── vercel.json   # Vercel configuration
└── index.mjs     # Application entry point
```

### Vercel Configuration

The VercelDeployer automatically generates a `vercel.json` configuration file in `.mastra/output` with the following settings:

```json
{
  "version": 2,
  "installCommand": "npm install --omit=dev",
  "builds": [
    {
      "src": "index.mjs",
      "use": "@vercel/node",
      "config": { "includeFiles": ["**"] }
    }
  ],
  "routes": [
    {
      "src": "/(.*)",
      "dest": "index.mjs"
    }
  ]
}
```

## Deployment Options

After building, you can deploy your Mastra application `.mastra/output` to Vercel using any of these methods:

1. **Vercel CLI**: Deploy directly using Vercel's official CLI tool

   - Install the CLI: `npm install -g vercel`
   - Navigate to the output directory: `cd .mastra/output`
   - Deploy to preview environment: `vercel`
   - For production deployment: `vercel --prod`

2. **Vercel Dashboard**: Connect your Git repository or drag-and-drop the build output through the Vercel dashboard

> You can also run `vercel dev` in your output directory `.mastra/output` to test your Mastra application locally.

## Platform Documentation

- [Vercel](https://vercel.com/docs)

---
title: "Reference: Answer Relevancy | Metrics | Evals | Mastra Docs"
description: Documentation for the Answer Relevancy Metric in Mastra, which evaluates how well LLM outputs address the input query.
---

# AnswerRelevancyMetric

[EN] Source: https://mastra.ai/en/reference/evals/answer-relevancy

The `AnswerRelevancyMetric` class evaluates how well an LLM's output answers or addresses the input query. It uses a judge-based system to determine relevancy and provides detailed scoring and reasoning.
## Basic Usage

```typescript
import { openai } from "@ai-sdk/openai";
import { AnswerRelevancyMetric } from "@mastra/evals/llm";

// Configure the model for evaluation
const model = openai("gpt-4o-mini");

const metric = new AnswerRelevancyMetric(model, {
  uncertaintyWeight: 0.3,
  scale: 1,
});

const result = await metric.measure(
  "What is the capital of France?",
  "Paris is the capital of France.",
);

console.log(result.score); // Score from 0-1
console.log(result.info.reason); // Explanation of the score
```

## Constructor Parameters

- `model`: The model used to judge relevancy (e.g., `openai("gpt-4o-mini")`).
- `options` (`AnswerRelevancyMetricOptions`, optional): Configuration options.

### AnswerRelevancyMetricOptions

- `uncertaintyWeight` (`number`, optional, default `0.3`): Weight given to "unsure" verdicts when scoring.
- `scale` (`number`, optional, default `1`): Maximum score value.

## measure() Parameters

- `input` (`string`): The original query or prompt.
- `output` (`string`): The LLM's response to evaluate.

## Returns

- `score` (`number`): Relevancy score from 0 to the configured scale.
- `info.reason` (`string`): Explanation of the score.

## Scoring Details

The metric evaluates relevancy through query-answer alignment, considering completeness, accuracy, and detail level.

### Scoring Process

1. Statement Analysis:
   - Breaks output into meaningful statements while preserving context
   - Evaluates each statement against query requirements
2. Evaluates relevance of each statement:
   - "yes": Full weight for direct matches
   - "unsure": Partial weight (default: 0.3) for approximate matches
   - "no": Zero weight for irrelevant content

Final score: `((direct + uncertainty * partial) / total_statements) * scale`

For example, with 3 direct matches and 1 "unsure" statement out of 5 total statements, the score is ((3 + 0.3 × 1) / 5) × 1 = 0.66.

### Score interpretation

(0 to scale, default 0-1)

- 1.0: Perfect relevance - complete and accurate
- 0.7-0.9: High relevance - minor gaps or imprecisions
- 0.4-0.6: Moderate relevance - significant gaps
- 0.1-0.3: Low relevance - major issues
- 0.0: No relevance - incorrect or off-topic

## Example with Custom Configuration

```typescript
import { openai } from "@ai-sdk/openai";
import { AnswerRelevancyMetric } from "@mastra/evals/llm";

// Configure the model for evaluation
const model = openai("gpt-4o-mini");

const metric = new AnswerRelevancyMetric(
  model,
  {
    uncertaintyWeight: 0.5, // Higher weight for uncertain verdicts
    scale: 5, // Use 0-5 scale instead of 0-1
  },
);

const result = await metric.measure(
  "What are the benefits of exercise?",
  "Regular exercise improves cardiovascular health, builds strength, and boosts mental wellbeing.",
);

// Example output:
// {
//   score: 4.5,
//   info: {
//     reason: "The score is 4.5 out of 5 because the response directly addresses the query
//       with specific, accurate benefits of exercise. It covers multiple aspects
//       (cardiovascular, muscular, and mental health) in a clear and concise manner.
//       The answer is highly relevant and provides appropriate detail without
//       including unnecessary information."
//   }
// }
```

## Related

- [Prompt Alignment Metric](./prompt-alignment)
- [Context Precision Metric](./context-precision)
- [Faithfulness Metric](./faithfulness)

---
title: "Reference: Bias | Output Metrics | Evals | Mastra Docs"
description: Documentation for the Bias Metric in Mastra, which evaluates LLM outputs for various forms of bias, including gender, political, racial/ethnic, or geographical bias.
---

# BiasMetric

[EN] Source: https://mastra.ai/en/reference/evals/bias

The `BiasMetric` class evaluates whether an LLM's output contains various forms of bias, including gender, political, racial/ethnic, or geographical bias. This metric is particularly useful for detecting unintended biases that may emerge after fine-tuning models or applying optimizations.
## Basic Usage

```typescript
import { openai } from "@ai-sdk/openai";
import { BiasMetric } from "@mastra/evals/llm";

// Configure the model for evaluation
const model = openai("gpt-4o-mini");

const metric = new BiasMetric(model, {
  scale: 1
});

const result = await metric.measure(
  "Describe leadership qualities in the workplace",
  "Strong leaders are typically men who can make tough decisions..."
);

console.log(result.score); // Score from 0-1, higher means more biased
```

## Constructor Parameters

- `model`: The model used to evaluate bias (e.g., `openai("gpt-4o-mini")`).
- `options` (`BiasMetricOptions`, optional): Configuration options.

### BiasMetricOptions

- `scale` (`number`, optional, default `1`): Maximum score value.

## measure() Parameters

- `input` (`string`): The original prompt.
- `output` (`string`): The LLM's response to evaluate.

## Returns

- `score` (`number`): Bias score from 0 to the configured scale; higher means more biased.

## Bias Categories

The metric evaluates several types of bias:

1. **Gender Bias**: Discrimination or stereotypes based on gender
2. **Political Bias**: Prejudice against political ideologies or beliefs
3. **Racial/Ethnic Bias**: Discrimination based on race, ethnicity, or national origin
4. **Geographical Bias**: Prejudice based on location or regional stereotypes

## Scoring Details

The metric evaluates bias through opinion analysis based on:

- Opinion identification and extraction
- Presence of discriminatory language
- Use of stereotypes or generalizations
- Balance in perspective presentation
- Loaded or prejudicial terminology

### Scoring Process

1. Extracts opinions from text:
   - Identifies subjective statements
   - Excludes factual claims
   - Includes cited opinions
2. Evaluates each opinion:
   - Checks for discriminatory language
   - Assesses stereotypes and generalizations
   - Analyzes perspective balance

Final score: `(biased_opinions / total_opinions) * scale`

### Score interpretation

(0 to scale, default 0-1)

- 1.0: Complete bias - all opinions contain bias
- 0.7-0.9: Significant bias - majority of opinions show bias
- 0.4-0.6: Moderate bias - mix of biased and neutral opinions
- 0.1-0.3: Minimal bias - most opinions show balanced perspective
- 0.0: No detectable bias - opinions are balanced and neutral

## Example with Different Types of Bias

```typescript
import { openai } from "@ai-sdk/openai";
import { BiasMetric } from "@mastra/evals/llm";

// Configure the model for evaluation
const model = openai("gpt-4o-mini");

const metric = new BiasMetric(model);

// Testing for different types of bias
const results = await Promise.all([
  metric.measure(
    "Describe voting patterns",
    "These radical right-wing voters consistently vote against their interests..."
  ),
  metric.measure(
    "Describe workplace dynamics",
    "Modern offices have diverse teams working together based on merit..."
  )
]);

// Example outputs:
// Political bias example: { score: 1.0 }
// Unbiased example: { score: 0.0 }
```

## Related

- [Toxicity Metric](./toxicity)
- [Faithfulness Metric](./faithfulness)
- [Hallucination Metric](./hallucination)
- [Context Relevancy Metric](./context-relevancy)

---
title: "Reference: Completeness | Metrics | Evals | Mastra Docs"
description: Documentation for the Completeness Metric in Mastra, which evaluates how thoroughly LLM outputs cover key elements present in the input.
---

# CompletenessMetric

[EN] Source: https://mastra.ai/en/reference/evals/completeness

The `CompletenessMetric` class evaluates how thoroughly an LLM's output covers the key elements present in the input. It analyzes nouns, verbs, topics, and terms to determine coverage and provides a detailed completeness score.
## Basic Usage

```typescript
import { CompletenessMetric } from "@mastra/evals/nlp";

const metric = new CompletenessMetric();

const result = await metric.measure(
  "Explain how photosynthesis works in plants using sunlight, water, and carbon dioxide.",
  "Plants use sunlight to convert water and carbon dioxide into glucose through photosynthesis."
);

console.log(result.score); // Coverage score from 0-1
console.log(result.info); // Object containing detailed metrics about element coverage
```

## measure() Parameters

- `input` (`string`): The reference text containing the key elements to cover.
- `output` (`string`): The LLM's response to evaluate for coverage.

## Returns

- `score` (`number`): Coverage score from 0 to 1.
- `info` (`object`): Detailed metrics about element coverage, including `inputElements`, `outputElements`, `missingElements`, and `elementCounts`.

## Element Extraction Details

The metric extracts and analyzes several types of elements:

- Nouns: Key objects, concepts, and entities
- Verbs: Actions and states (converted to infinitive form)
- Topics: Main subjects and themes
- Terms: Individual significant words

The extraction process includes:

- Normalization of text (removing diacritics, converting to lowercase)
- Splitting camelCase words
- Handling of word boundaries
- Special handling of short words (3 characters or less)
- Deduplication of elements

## Scoring Details

The metric evaluates completeness through linguistic element coverage analysis.

### Scoring Process

1. Extracts key elements:
   - Nouns and named entities
   - Action verbs
   - Topic-specific terms
   - Normalized word forms
2. Calculates coverage of input elements:
   - Exact matches for short terms (≤3 chars)
   - Substantial overlap (>60%) for longer terms

Final score: `(covered_elements / total_input_elements) * scale`

### Score interpretation

(0 to scale, default 0-1)

- 1.0: Complete coverage - contains all input elements
- 0.7-0.9: High coverage - includes most key elements
- 0.4-0.6: Partial coverage - contains some key elements
- 0.1-0.3: Low coverage - missing most key elements
- 0.0: No coverage - output lacks all input elements

## Example with Analysis

```typescript
import { CompletenessMetric } from "@mastra/evals/nlp";

const metric = new CompletenessMetric();

const result = await metric.measure(
  "The quick brown fox jumps over the lazy dog",
  "A brown fox jumped over a dog"
);

// Example output:
// {
//   score: 0.75,
//   info: {
//     inputElements: ["quick", "brown", "fox", "jump", "lazy", "dog"],
//     outputElements: ["brown", "fox", "jump", "dog"],
//     missingElements: ["quick", "lazy"],
//     elementCounts: { input: 6, output: 4 }
//   }
// }
```

## Related

- [Answer Relevancy Metric](./answer-relevancy)
- [Content Similarity Metric](./content-similarity)
- [Textual Difference Metric](./textual-difference)
- [Keyword Coverage Metric](./keyword-coverage)

---
title: "Reference: Content Similarity | Evals | Mastra Docs"
description: Documentation for the Content Similarity Metric in Mastra, which measures textual similarity between strings and provides a matching score.
---

# ContentSimilarityMetric

[EN] Source: https://mastra.ai/en/reference/evals/content-similarity

The `ContentSimilarityMetric` class measures the textual similarity between two strings, providing a score that indicates how closely they match. It supports configurable options for case sensitivity and whitespace handling.
## Basic Usage ```typescript import { ContentSimilarityMetric } from "@mastra/evals/nlp"; const metric = new ContentSimilarityMetric({ ignoreCase: true, ignoreWhitespace: true }); const result = await metric.measure( "Hello, world!", "hello world" ); console.log(result.score); // Similarity score from 0-1 console.log(result.info); // Detailed similarity metrics ``` ## Constructor Parameters ### ContentSimilarityOptions ## measure() Parameters ## Returns ## Scoring Details The metric evaluates textual similarity through character-level matching and configurable text normalization. ### Scoring Process 1. Normalizes text: - Case normalization (if ignoreCase: true) - Whitespace normalization (if ignoreWhitespace: true) 2. Compares processed strings using string-similarity algorithm: - Analyzes character sequences - Aligns word boundaries - Considers relative positions - Accounts for length differences Final score: `similarity_value * scale` ### Score interpretation (0 to scale, default 0-1) - 1.0: Perfect match - identical texts - 0.7-0.9: High similarity - mostly matching content - 0.4-0.6: Moderate similarity - partial matches - 0.1-0.3: Low similarity - few matching patterns - 0.0: No similarity - completely different texts ## Example with Different Options ```typescript import { ContentSimilarityMetric } from "@mastra/evals/nlp"; // Case-sensitive comparison const caseSensitiveMetric = new ContentSimilarityMetric({ ignoreCase: false, ignoreWhitespace: true }); const result1 = await caseSensitiveMetric.measure( "Hello World", "hello world" ); // Lower score due to case difference // Example output: // { // score: 0.75, // info: { similarity: 0.75 } // } // Strict whitespace comparison const strictWhitespaceMetric = new ContentSimilarityMetric({ ignoreCase: true, ignoreWhitespace: false }); const result2 = await strictWhitespaceMetric.measure( "Hello World", "Hello   World" ); // Lower score due to whitespace difference // Example output: // { // score: 0.85, // info: { similarity: 0.85 } // } ``` ## Related - [Completeness Metric](./completeness) - [Textual Difference Metric](./textual-difference) - [Answer Relevancy Metric](./answer-relevancy) - [Keyword Coverage Metric](./keyword-coverage) --- title: "Reference: Context Position | Metrics | Evals | Mastra Docs" description: Documentation for the Context Position Metric in Mastra, which evaluates the ordering of context nodes based on their relevance to the query and output. --- # ContextPositionMetric [EN] Source: https://mastra.ai/en/reference/evals/context-position The `ContextPositionMetric` class evaluates how well context nodes are ordered based on their relevance to the query and output. It uses position-weighted scoring to emphasize the importance of having the most relevant context pieces appear earlier in the sequence.
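The position weighting described under Scoring Details below can be sketched in a few lines. Note that normalizing against the best possible arrangement of the relevant pieces is one plausible reading of "normalizes by maximum possible score"; treat this as an illustration rather than the exact algorithm:

```typescript
// Illustrative position-weighted scoring: weight = 1 / (position + 1).
// verdicts[i] is true when the judge marks context piece i as relevant.
function positionScore(verdicts: boolean[], scale = 1): number {
  const weights = verdicts.map((_, i) => 1 / (i + 1));
  const achieved = weights.reduce((sum, w, i) => sum + (verdicts[i] ? w : 0), 0);
  const relevantCount = verdicts.filter(Boolean).length;
  // Best case: all relevant pieces packed at the front of the sequence
  const best = weights.slice(0, relevantCount).reduce((sum, w) => sum + w, 0);
  return best === 0 ? 0 : (achieved / best) * scale;
}

// Relevant pieces at positions 1 and 2 of four score lower than if they led:
console.log(positionScore([false, true, true, false])); // ≈ 0.56 under this reading
```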
## Basic Usage ```typescript import { openai } from "@ai-sdk/openai"; import { ContextPositionMetric } from "@mastra/evals/llm"; // Configure the model for evaluation const model = openai("gpt-4o-mini"); const metric = new ContextPositionMetric(model, { context: [ "Photosynthesis is a biological process used by plants to create energy from sunlight.", "The process of photosynthesis produces oxygen as a byproduct.", "Plants need water and nutrients from the soil to grow.", ], }); const result = await metric.measure( "What is photosynthesis?", "Photosynthesis is the process by which plants convert sunlight into energy.", ); console.log(result.score); // Position score from 0-1 console.log(result.info.reason); // Explanation of the score ``` ## Constructor Parameters ### ContextPositionMetricOptions ## measure() Parameters ## Returns ## Scoring Details The metric evaluates context positioning through binary relevance assessment and position-based weighting. ### Scoring Process 1. Evaluates context relevance: - Assigns binary verdict (yes/no) to each piece - Records position in sequence - Documents relevance reasoning 2. Applies position weights: - Earlier positions weighted more heavily (weight = 1/(position + 1)) - Sums weights of relevant pieces - Normalizes by maximum possible score Final score: `(weighted_sum / max_possible_sum) * scale` ### Score interpretation (0 to scale, default 0-1) - 1.0: Optimal - most relevant context first - 0.7-0.9: Good - relevant context mostly early - 0.4-0.6: Mixed - relevant context scattered - 0.1-0.3: Suboptimal - relevant context mostly later - 0.0: Poor ordering - relevant context at end or missing ## Example with Analysis ```typescript import { openai } from "@ai-sdk/openai"; import { ContextPositionMetric } from "@mastra/evals/llm"; // Configure the model for evaluation const model = openai("gpt-4o-mini"); const metric = new ContextPositionMetric(model, { context: [ "A balanced diet is important for health.", "Exercise strengthens the heart and improves blood circulation.", "Regular physical activity reduces stress and anxiety.", "Exercise equipment can be expensive.", ], }); const result = await metric.measure( "What are the benefits of exercise?", "Regular exercise improves cardiovascular health and mental wellbeing.", ); // Example output: // { // score: 0.5, // info: { // reason: "The score is 0.5 because while the second and third contexts are highly // relevant to the benefits of exercise, they are not optimally positioned at // the beginning of the sequence. The first and last contexts are not relevant // to the query, which impacts the position-weighted scoring." // } // } ``` ## Related - [Context Precision Metric](./context-precision) - [Answer Relevancy Metric](./answer-relevancy) - [Completeness Metric](./completeness) - [Context Relevancy Metric](./context-relevancy) --- title: "Reference: Context Precision | Metrics | Evals | Mastra Docs" description: Documentation for the Context Precision Metric in Mastra, which evaluates the relevance and precision of retrieved context nodes for generating expected outputs. --- # ContextPrecisionMetric [EN] Source: https://mastra.ai/en/reference/evals/context-precision The `ContextPrecisionMetric` class evaluates how relevant and precise the retrieved context nodes are for generating the expected output. It uses a judge-based system to analyze each context piece's contribution and provides weighted scoring based on position.
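Mean Average Precision over binary relevance verdicts, as used in the Scoring Details below, can be illustrated with a short sketch (the standard MAP computation, not the library's exact code):

```typescript
// MAP: for each relevant position, take precision within the items up to and
// including that position, then average over all relevant items.
function meanAveragePrecision(verdicts: boolean[], scale = 1): number {
  let relevantSeen = 0;
  let precisionSum = 0;
  verdicts.forEach((isRelevant, index) => {
    if (isRelevant) {
      relevantSeen += 1;
      precisionSum += relevantSeen / (index + 1); // precision at this position
    }
  });
  return relevantSeen === 0 ? 0 : (precisionSum / relevantSeen) * scale;
}

// Relevant pieces at positions 0 and 2 of four:
console.log(meanAveragePrecision([true, false, true, false])); // (1 + 2/3) / 2 ≈ 0.83
```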
## Basic Usage ```typescript import { openai } from "@ai-sdk/openai"; import { ContextPrecisionMetric } from "@mastra/evals/llm"; // Configure the model for evaluation const model = openai("gpt-4o-mini"); const metric = new ContextPrecisionMetric(model, { context: [ "Photosynthesis is a biological process used by plants to create energy from sunlight.", "Plants need water and nutrients from the soil to grow.", "The process of photosynthesis produces oxygen as a byproduct.", ], }); const result = await metric.measure( "What is photosynthesis?", "Photosynthesis is the process by which plants convert sunlight into energy.", ); console.log(result.score); // Precision score from 0-1 console.log(result.info.reason); // Explanation of the score ``` ## Constructor Parameters ### ContextPrecisionMetricOptions ## measure() Parameters ## Returns ## Scoring Details The metric evaluates context precision through binary relevance assessment and Mean Average Precision (MAP) scoring. ### Scoring Process 1. Assigns binary relevance scores: - Relevant context: 1 - Irrelevant context: 0 2. Calculates Mean Average Precision: - Computes precision at each position - Weights earlier positions more heavily - Normalizes to configured scale Final score: `Mean Average Precision * scale` ### Score interpretation (0 to scale, default 0-1) - 1.0: All relevant context in optimal order - 0.7-0.9: Mostly relevant context with good ordering - 0.4-0.6: Mixed relevance or suboptimal ordering - 0.1-0.3: Limited relevance or poor ordering - 0.0: No relevant context ## Example with Analysis ```typescript import { openai } from "@ai-sdk/openai"; import { ContextPrecisionMetric } from "@mastra/evals/llm"; // Configure the model for evaluation const model = openai("gpt-4o-mini"); const metric = new ContextPrecisionMetric(model, { context: [ "Exercise strengthens the heart and improves blood circulation.", "A balanced diet is important for health.", "Regular physical activity reduces stress and anxiety.", "Exercise equipment can be expensive.", ], }); const result = await metric.measure( "What are the benefits of exercise?", "Regular exercise improves cardiovascular health and mental wellbeing.", ); // Example output: // { // score: 0.75, // info: { // reason: "The score is 0.75 because the first and third contexts are highly relevant // to the benefits mentioned in the output, while the second and fourth contexts // are not directly related to exercise benefits. The relevant contexts are well-positioned // at the beginning and middle of the sequence." // } // } ``` ## Related - [Answer Relevancy Metric](./answer-relevancy) - [Context Position Metric](./context-position) - [Completeness Metric](./completeness) - [Context Relevancy Metric](./context-relevancy) --- title: "Reference: Context Relevancy | Evals | Mastra Docs" description: Documentation for the Context Relevancy Metric, which evaluates the relevance of retrieved context in RAG pipelines. --- # ContextRelevancyMetric [EN] Source: https://mastra.ai/en/reference/evals/context-relevancy The `ContextRelevancyMetric` class evaluates the quality of your RAG (Retrieval-Augmented Generation) pipeline's retriever by measuring how relevant the retrieved context is to the input query. It uses an LLM-based evaluation system that first extracts statements from the context and then assesses their relevance to the input. 
## Basic Usage ```typescript import { openai } from "@ai-sdk/openai"; import { ContextRelevancyMetric } from "@mastra/evals/llm"; // Configure the model for evaluation const model = openai("gpt-4o-mini"); const metric = new ContextRelevancyMetric(model, { context: [ "All data is encrypted at rest and in transit", "Two-factor authentication is mandatory", "The platform supports multiple languages", "Our offices are located in San Francisco" ] }); const result = await metric.measure( "What are our product's security features?", "Our product uses encryption and requires 2FA.", ); console.log(result.score); // Score from 0-1 console.log(result.info.reason); // Explanation of the relevancy assessment ``` ## Constructor Parameters ### ContextRelevancyMetricOptions ## measure() Parameters ## Returns ## Scoring Details The metric evaluates how well retrieved context matches the query through binary relevance classification. ### Scoring Process 1. Extracts statements from context: - Breaks down context into meaningful units - Preserves semantic relationships 2. Evaluates statement relevance: - Assesses each statement against query - Counts relevant statements - Calculates relevance ratio Final score: `(relevant_statements / total_statements) * scale` ### Score interpretation (0 to scale, default 0-1) - 1.0: Perfect relevancy - all retrieved context is relevant - 0.7-0.9: High relevancy - most context is relevant with few irrelevant pieces - 0.4-0.6: Moderate relevancy - a mix of relevant and irrelevant context - 0.1-0.3: Low relevancy - mostly irrelevant context - 0.0: No relevancy - completely irrelevant context ## Example with Custom Configuration ```typescript import { openai } from "@ai-sdk/openai"; import { ContextRelevancyMetric } from "@mastra/evals/llm"; // Configure the model for evaluation const model = openai("gpt-4o-mini"); const metric = new ContextRelevancyMetric(model, { scale: 100, // Use 0-100 scale instead of 0-1 context: [ "Basic plan costs $10/month", "Pro plan includes advanced features at $30/month", "Enterprise plan has custom pricing", "Our company was founded in 2020", "We have offices worldwide" ] }); const result = await metric.measure( "What are our pricing plans?", "We offer Basic, Pro, and Enterprise plans.", ); // Example output: // { // score: 60, // info: { // reason: "3 out of 5 statements are relevant to pricing plans. The statements about // company founding and office locations are not relevant to the pricing query." // } // } ``` ## Related - [Contextual Recall Metric](./contextual-recall) - [Context Precision Metric](./context-precision) - [Context Position Metric](./context-position) --- title: "Reference: Contextual Recall | Metrics | Evals | Mastra Docs" description: Documentation for the Contextual Recall Metric, which evaluates the completeness of LLM responses in incorporating relevant context. --- # ContextualRecallMetric [EN] Source: https://mastra.ai/en/reference/evals/contextual-recall The `ContextualRecallMetric` class evaluates how effectively an LLM's response incorporates all relevant information from the provided context. It measures whether important information from the reference documents was successfully included in the response, focusing on completeness rather than precision. 
## Basic Usage ```typescript import { openai } from "@ai-sdk/openai"; import { ContextualRecallMetric } from "@mastra/evals/llm"; // Configure the model for evaluation const model = openai("gpt-4o-mini"); const metric = new ContextualRecallMetric(model, { context: [ "Product features: cloud synchronization capability", "Offline mode available for all users", "Supports multiple devices simultaneously", "End-to-end encryption for all data" ] }); const result = await metric.measure( "What are the key features of the product?", "The product includes cloud sync, offline mode, and multi-device support.", ); console.log(result.score); // Score from 0-1 ``` ## Constructor Parameters ### ContextualRecallMetricOptions ## measure() Parameters ## Returns ## Scoring Details The metric evaluates recall through comparison of response content against relevant context items. ### Scoring Process 1. Evaluates information recall: - Identifies relevant items in context - Tracks correctly recalled information - Measures completeness of recall 2. Calculates recall score: - Counts correctly recalled items - Compares against total relevant items - Computes coverage ratio Final score: `(correctly_recalled_items / total_relevant_items) * scale` ### Score interpretation (0 to scale, default 0-1) - 1.0: Perfect recall - all relevant information included - 0.7-0.9: High recall - most relevant information included - 0.4-0.6: Moderate recall - some relevant information missed - 0.1-0.3: Low recall - significant information missed - 0.0: No recall - no relevant information included ## Example with Custom Configuration ```typescript import { openai } from "@ai-sdk/openai"; import { ContextualRecallMetric } from "@mastra/evals/llm"; // Configure the model for evaluation const model = openai("gpt-4o-mini"); const metric = new ContextualRecallMetric( model, { scale: 100, // Use 0-100 scale instead of 0-1 context: [ "All data is encrypted at rest and in transit", "Two-factor authentication (2FA) is mandatory", "Regular security audits are performed", "Incident response team available 24/7" ] } ); const result = await metric.measure( "Summarize the company's security measures", "The company implements encryption for data protection and requires 2FA for all users.", ); // Example output: // { // score: 50, // Only half of the security measures were mentioned // info: { // reason: "The score is 50 because only half of the security measures were mentioned // in the response. The response missed the regular security audits and incident // response team information." // } // } ``` ## Related - [Context Relevancy Metric](./context-relevancy) - [Completeness Metric](./completeness) - [Summarization Metric](./summarization) --- title: "Reference: Faithfulness | Metrics | Evals | Mastra Docs" description: Documentation for the Faithfulness Metric in Mastra, which evaluates the factual accuracy of LLM outputs compared to the provided context. --- # FaithfulnessMetric Reference [EN] Source: https://mastra.ai/en/reference/evals/faithfulness The `FaithfulnessMetric` in Mastra evaluates how factually accurate an LLM's output is compared to the provided context. It extracts claims from the output and verifies them against the context, making it essential for measuring the reliability of RAG pipeline responses.
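The arithmetic behind the score is a simple verdict tally: per the Scoring Details below, "unsure" claims count toward the total but not toward the supported count. A hypothetical helper illustrating the documented formula:

```typescript
// Hypothetical tally matching the documented formula:
// score = (supported_claims / total_claims) * scale
type Verdict = "yes" | "no" | "unsure";

function faithfulnessScore(verdicts: Verdict[], scale = 1): number {
  if (verdicts.length === 0) return 0;
  const supported = verdicts.filter((v) => v === "yes").length;
  return (supported / verdicts.length) * scale;
}

// Two supported claims plus one unverifiable claim, as in the Advanced Example below:
console.log(faithfulnessScore(["yes", "yes", "unsure"])); // 2/3 ≈ 0.67
```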
## Basic Usage ```typescript import { openai } from "@ai-sdk/openai"; import { FaithfulnessMetric } from "@mastra/evals/llm"; // Configure the model for evaluation const model = openai("gpt-4o-mini"); const metric = new FaithfulnessMetric(model, { context: [ "The company was established in 1995.", "Currently employs around 450-550 people.", ], }); const result = await metric.measure( "Tell me about the company.", "The company was founded in 1995 and has 500 employees.", ); console.log(result.score); // 1.0 console.log(result.info.reason); // "All claims are supported by the context." ``` ## Constructor Parameters ### FaithfulnessMetricOptions ## measure() Parameters ## Returns ## Scoring Details The metric evaluates faithfulness through claim verification against provided context. ### Scoring Process 1. Analyzes claims and context: - Extracts all claims (factual and speculative) - Verifies each claim against context - Assigns one of three verdicts: - "yes" - claim supported by context - "no" - claim contradicts context - "unsure" - claim unverifiable 2. Calculates faithfulness score: - Counts supported claims - Divides by total claims - Scales to configured range Final score: `(supported_claims / total_claims) * scale` ### Score interpretation (0 to scale, default 0-1) - 1.0: All claims supported by context - 0.7-0.9: Most claims supported, few unverifiable - 0.4-0.6: Mixed support with some contradictions - 0.1-0.3: Limited support, many contradictions - 0.0: No supported claims ## Advanced Example ```typescript import { openai } from "@ai-sdk/openai"; import { FaithfulnessMetric } from "@mastra/evals/llm"; // Configure the model for evaluation const model = openai("gpt-4o-mini"); const metric = new FaithfulnessMetric(model, { context: [ "The company had 100 employees in 2020.", "Current employee count is approximately 500.", ], }); // Example with mixed claim types const result = await metric.measure( "What's the company's growth like?", "The company has grown from 100 employees in 2020 to 500 now, and might expand to 1000 by next year.", ); // Example output: // { // score: 0.67, // info: { // reason: "The score is 0.67 because two claims are supported by the context // (initial employee count of 100 in 2020 and current count of 500), // while the future expansion claim is marked as unsure as it cannot // be verified against the context." // } // } ``` ### Related - [Answer Relevancy Metric](./answer-relevancy) - [Hallucination Metric](./hallucination) - [Context Relevancy Metric](./context-relevancy) --- title: "Reference: Hallucination | Metrics | Evals | Mastra Docs" description: Documentation for the Hallucination Metric in Mastra, which evaluates the factual correctness of LLM outputs by identifying contradictions with provided context. --- # HallucinationMetric [EN] Source: https://mastra.ai/en/reference/evals/hallucination The `HallucinationMetric` evaluates whether an LLM generates factually correct information by comparing its output against the provided context. This metric measures hallucination by identifying direct contradictions between the context and the output. 
## Basic Usage ```typescript import { openai } from "@ai-sdk/openai"; import { HallucinationMetric } from "@mastra/evals/llm"; // Configure the model for evaluation const model = openai("gpt-4o-mini"); const metric = new HallucinationMetric(model, { context: [ "Tesla was founded in 2003 by Martin Eberhard and Marc Tarpenning in San Carlos, California.", ], }); const result = await metric.measure( "Tell me about Tesla's founding.", "Tesla was founded in 2004 by Elon Musk in California.", ); console.log(result.score); // Score from 0-1 console.log(result.info.reason); // Explanation of the score // Example output: // { // score: 0.67, // info: { // reason: "The score is 0.67 because two out of three statements from the context // (founding year and founders) were contradicted by the output, while the // location statement was not contradicted." // } // } ``` ## Constructor Parameters ### HallucinationMetricOptions ## measure() Parameters ## Returns ## Scoring Details The metric evaluates hallucination through contradiction detection and unsupported claim analysis. ### Scoring Process 1. Analyzes factual content: - Extracts statements from context - Identifies numerical values and dates - Maps statement relationships 2. Analyzes output for hallucinations: - Compares against context statements - Marks direct conflicts as hallucinations - Identifies unsupported claims as hallucinations - Evaluates numerical accuracy - Considers approximation context 3. Calculates hallucination score: - Counts hallucinated statements (contradictions and unsupported claims) - Divides by total statements - Scales to configured range Final score: `(hallucinated_statements / total_statements) * scale` ### Important Considerations - Claims not present in context are treated as hallucinations - Subjective claims are hallucinations unless explicitly supported - Speculative language ("might", "possibly") about facts IN context is allowed - Speculative language about facts NOT in context is treated as hallucination - Empty outputs result in zero hallucinations - Numerical evaluation considers: - Scale-appropriate precision - Contextual approximations - Explicit precision indicators ### Score interpretation (0 to scale, default 0-1) - 1.0: Complete hallucination - contradicts all context statements - 0.75: High hallucination - contradicts 75% of context statements - 0.5: Moderate hallucination - contradicts half of context statements - 0.25: Low hallucination - contradicts 25% of context statements - 0.0: No hallucination - output aligns with all context statements **Note:** The score represents the degree of hallucination - lower scores indicate better factual alignment with the provided context ## Example with Analysis ```typescript import { openai } from "@ai-sdk/openai"; import { HallucinationMetric } from "@mastra/evals/llm"; // Configure the model for evaluation const model = openai("gpt-4o-mini"); const metric = new HallucinationMetric(model, { context: [ "OpenAI was founded in December 2015 by Sam Altman, Greg Brockman, and others.", "The company launched with a $1 billion investment commitment.", "Elon Musk was an early supporter but left the board in 2018.", ], }); const result = await metric.measure( "What are the key details about OpenAI?", "OpenAI was founded in 2015 by Elon Musk and Sam Altman with a $2 billion investment.", ); // Example output: // { // score: 0.33, // info: { // reason: "The score is 0.33 because one out of three statements from the context // was contradicted (the
investment amount was stated as $2 billion instead // of $1 billion). The founding date was correct, and while the output's // description of founders was incomplete, it wasn't strictly contradictory." // } // } ``` ## Related - [Faithfulness Metric](./faithfulness) - [Answer Relevancy Metric](./answer-relevancy) - [Context Precision Metric](./context-precision) - [Context Relevancy Metric](./context-relevancy) --- title: "Reference: Keyword Coverage | Metrics | Evals | Mastra Docs" description: Documentation for the Keyword Coverage Metric in Mastra, which evaluates how well LLM outputs cover important keywords from the input. --- # KeywordCoverageMetric [EN] Source: https://mastra.ai/en/reference/evals/keyword-coverage The `KeywordCoverageMetric` class evaluates how well an LLM's output covers the important keywords from the input. It analyzes keyword presence and matches while ignoring common words and stop words. ## Basic Usage ```typescript import { KeywordCoverageMetric } from "@mastra/evals/nlp"; const metric = new KeywordCoverageMetric(); const result = await metric.measure( "What are the key features of Python programming language?", "Python is a high-level programming language known for its simple syntax and extensive libraries." ); console.log(result.score); // Coverage score from 0-1 console.log(result.info); // Object containing detailed metrics about keyword coverage ``` ## measure() Parameters ## Returns ## Scoring Details The metric evaluates keyword coverage by matching keywords with the following features: - Common word and stop word filtering (e.g., "the", "a", "and") - Case-insensitive matching - Word form variation handling - Special handling of technical terms and compound words ### Scoring Process 1. Processes keywords from input and output: - Filters out common words and stop words - Normalizes case and word forms - Handles special terms and compounds 2. 
Calculates keyword coverage: - Matches keywords between texts - Counts successful matches - Computes coverage ratio Final score: `(matched_keywords / total_keywords) * scale` ### Score interpretation (0 to scale, default 0-1) - 1.0: Perfect keyword coverage - 0.7-0.9: Good coverage with most keywords present - 0.4-0.6: Moderate coverage with some keywords missing - 0.1-0.3: Poor coverage with many keywords missing - 0.0: No keyword matches ## Examples with Analysis ```typescript import { KeywordCoverageMetric } from "@mastra/evals/nlp"; const metric = new KeywordCoverageMetric(); // Perfect coverage example const result1 = await metric.measure( "The quick brown fox jumps over the lazy dog", "A quick brown fox jumped over a lazy dog" ); // { // score: 1.0, // info: { // matchedKeywords: 6, // totalKeywords: 6 // } // } // Partial coverage example const result2 = await metric.measure( "Python features include easy syntax, dynamic typing, and extensive libraries", "Python has simple syntax and many libraries" ); // { // score: 0.67, // info: { // matchedKeywords: 4, // totalKeywords: 6 // } // } // Technical terms example const result3 = await metric.measure( "Discuss React.js component lifecycle and state management", "React components have lifecycle methods and manage state" ); // { // score: 1.0, // info: { // matchedKeywords: 4, // totalKeywords: 4 // } // } ``` ## Special Cases The metric handles several special cases: - Empty input/output: Returns score of 1.0 if both empty, 0.0 if only one is empty - Single word: Treated as a single keyword - Technical terms: Preserves compound technical terms (e.g., "React.js", "machine learning") - Case differences: "JavaScript" matches "javascript" - Common words: Ignored in scoring to focus on meaningful keywords ## Related - [Completeness Metric](./completeness) - [Content Similarity Metric](./content-similarity) - [Answer Relevancy Metric](./answer-relevancy) - [Textual Difference Metric](./textual-difference) - [Context Relevancy Metric](./context-relevancy) --- title: "Reference: Prompt Alignment | Metrics | Evals | Mastra Docs" description: Documentation for the Prompt Alignment Metric in Mastra, which evaluates how well LLM outputs adhere to given prompt instructions. --- # PromptAlignmentMetric [EN] Source: https://mastra.ai/en/reference/evals/prompt-alignment The `PromptAlignmentMetric` class evaluates how strictly an LLM's output follows a set of given prompt instructions. It uses a judge-based system to verify each instruction is followed exactly and provides detailed reasoning for any deviations. ## Basic Usage ```typescript import { openai } from "@ai-sdk/openai"; import { PromptAlignmentMetric } from "@mastra/evals/llm"; // Configure the model for evaluation const model = openai("gpt-4o-mini"); const instructions = [ "Start sentences with capital letters", "End each sentence with a period", "Use present tense", ]; const metric = new PromptAlignmentMetric(model, { instructions, scale: 1, }); const result = await metric.measure( "describe the weather", "The sun is shining. Clouds float in the sky. 
A gentle breeze blows.", ); console.log(result.score); // Alignment score from 0-1 console.log(result.info.reason); // Explanation of the score ``` ## Constructor Parameters ### PromptAlignmentOptions ## measure() Parameters ## Returns ## Scoring Details The metric evaluates instruction alignment through: - Applicability assessment for each instruction - Strict compliance evaluation for applicable instructions - Detailed reasoning for all verdicts - Proportional scoring based on applicable instructions ### Instruction Verdicts Each instruction receives one of three verdicts: - "yes": Instruction is applicable and completely followed - "no": Instruction is applicable but not followed or only partially followed - "n/a": Instruction is not applicable to the given context ### Scoring Process 1. Evaluates instruction applicability: - Determines if each instruction applies to the context - Marks irrelevant instructions as "n/a" - Considers domain-specific requirements 2. Assesses compliance for applicable instructions: - Evaluates each applicable instruction independently - Requires complete compliance for "yes" verdict - Documents specific reasons for all verdicts 3. Calculates alignment score: - Counts followed instructions ("yes" verdicts) - Divides by total applicable instructions (excluding "n/a") - Scales to configured range Final score: `(followed_instructions / applicable_instructions) * scale` ### Important Considerations - Empty outputs: - All formatting instructions are considered applicable - Marked as "no" since they cannot satisfy requirements - Domain-specific instructions: - Always applicable if about the queried domain - Marked as "no" if not followed, not "n/a" - "n/a" verdicts: - Only used for completely different domains - Do not affect the final score calculation ### Score interpretation (0 to scale, default 0-1) - 1.0: All applicable instructions followed perfectly - 0.7-0.9: Most applicable instructions followed - 0.4-0.6: Mixed compliance with applicable instructions - 0.1-0.3: Limited compliance with applicable instructions - 0.0: No applicable instructions followed ## Example with Analysis ```typescript import { openai } from "@ai-sdk/openai"; import { PromptAlignmentMetric } from "@mastra/evals/llm"; // Configure the model for evaluation const model = openai("gpt-4o-mini"); const metric = new PromptAlignmentMetric(model, { instructions: [ "Use bullet points for each item", "Include exactly three examples", "End each point with a semicolon" ], scale: 1 }); const result = await metric.measure( "List three fruits", "• Apple is red and sweet; • Banana is yellow and curved; • Orange is citrus and round." ); // Example output: // { // score: 1.0, // info: { // reason: "The score is 1.0 because all instructions were followed exactly: // bullet points were used, exactly three examples were provided, and // each point ends with a semicolon." // } // } const result2 = await metric.measure( "List three fruits", "1. Apple 2. Banana 3. Orange and Grape" ); // Example output: // { // score: 0.33, // info: { // reason: "The score is 0.33 because: numbered lists were used instead of bullet points, // no semicolons were used, and four fruits were listed instead of exactly three." 
// } // } ``` ## Related - [Answer Relevancy Metric](./answer-relevancy) - [Keyword Coverage Metric](./keyword-coverage) --- title: "Reference: Summarization | Metrics | Evals | Mastra Docs" description: Documentation for the Summarization Metric in Mastra, which evaluates the quality of LLM-generated summaries for content and factual accuracy. --- # SummarizationMetric [EN] Source: https://mastra.ai/en/reference/evals/summarization The `SummarizationMetric` evaluates how well an LLM's summary captures the original text's content while maintaining factual accuracy. It combines two aspects: alignment (factual correctness) and coverage (inclusion of key information), taking the minimum of the two scores so that both qualities are required for a good summary. ## Basic Usage ```typescript import { openai } from "@ai-sdk/openai"; import { SummarizationMetric } from "@mastra/evals/llm"; // Configure the model for evaluation const model = openai("gpt-4o-mini"); const metric = new SummarizationMetric(model); const result = await metric.measure( "The company was founded in 1995 by John Smith. It started with 10 employees and grew to 500 by 2020. The company is based in Seattle.", "Founded in 1995 by John Smith, the company grew from 10 to 500 employees by 2020.", ); console.log(result.score); // Score from 0-1 console.log(result.info); // Object containing detailed metrics about the summary ``` ## Constructor Parameters ### SummarizationMetricOptions ## measure() Parameters ## Returns ## Scoring Details The metric evaluates summaries through two essential components: 1. **Alignment Score**: Measures factual correctness - Extracts claims from the summary - Verifies each claim against the original text - Assigns "yes", "no", or "unsure" verdicts 2. **Coverage Score**: Measures inclusion of key information - Generates key questions from the original text - Checks whether the summary answers these questions - Assesses information inclusion and comprehensiveness ### Scoring Process 1. Calculates alignment score: - Extracts claims from summary - Verifies against source text - Computes: `supported_claims / total_claims` 2. Determines coverage score: - Generates questions from source - Checks summary for answers - Evaluates completeness - Calculates: `answerable_questions / total_questions` Final score: `min(alignment_score, coverage_score) * scale` ### Score interpretation (0 to scale, default 0-1) - 1.0: Perfect summary - completely factual and covers all key information - 0.7-0.9: Strong summary with minor omissions or slight inaccuracies - 0.4-0.6: Moderate quality with significant gaps or inaccuracies - 0.1-0.3: Poor summary with major omissions or factual errors - 0.0: Invalid summary - either completely inaccurate or missing critical information ## Example with Analysis ```typescript import { openai } from "@ai-sdk/openai"; import { SummarizationMetric } from "@mastra/evals/llm"; // Configure the model for evaluation const model = openai("gpt-4o-mini"); const metric = new SummarizationMetric(model); const result = await metric.measure( "The electric car company Tesla was founded in 2003 by Martin Eberhard and Marc Tarpenning. Elon Musk joined in 2004 as the largest investor and became CEO in 2008.
The company's first car, the Roadster, was launched in 2008.", "Tesla, founded by Elon Musk in 2003, revolutionized the electric car industry starting with the Roadster in 2008.", ); // Example output: // { // score: 0.5, // info: { // reason: "The score is 0.5 because while the coverage is good (0.75) - mentioning the founding year, // first car model, and launch date - the alignment score is lower (0.5) due to incorrectly // attributing the company's founding to Elon Musk instead of Martin Eberhard and Marc Tarpenning. // The final score takes the minimum of these two scores to ensure both factual accuracy and // coverage are necessary for a good summary." // alignmentScore: 0.5, // coverageScore: 0.75, // } // } ``` ## Related - [Faithfulness Metric](./faithfulness) - [Completeness Metric](./completeness) - [Contextual Recall Metric](./contextual-recall) - [Hallucination Metric](./hallucination) --- title: "Reference: Textual Difference | Evals | Mastra Docs" description: Documentation for the Textual Difference Metric in Mastra, which measures textual differences between strings using sequence matching. --- # TextualDifferenceMetric [EN] Source: https://mastra.ai/en/reference/evals/textual-difference The `TextualDifferenceMetric` class uses sequence matching to measure the textual differences between two strings. It provides detailed information about changes, including the number of operations needed to transform one text into another. ## Basic Usage ```typescript import { TextualDifferenceMetric } from "@mastra/evals/nlp"; const metric = new TextualDifferenceMetric(); const result = await metric.measure( "The quick brown fox", "The fast brown fox" ); console.log(result.score); // Similarity ratio from 0-1 console.log(result.info); // Detailed change metrics ``` ## measure() Parameters ## Returns ## Scoring Details The metric calculates several measures: - **Similarity Ratio**: Based on sequence matching between texts (0-1) - **Changes**: Count of non-matching operations needed - **Length Difference**: Normalized difference in text lengths - **Confidence**: Inversely proportional to length difference ### Scoring Process 1. Analyzes textual differences: - Performs sequence matching between input and output - Counts the number of change operations required - Measures length differences 2. Calculates metrics: - Computes similarity ratio - Determines confidence score - Combines into weighted score Final score: `(similarity_ratio * confidence) * scale` ### Score interpretation (0 to scale, default 0-1) - 1.0: Identical texts - no differences - 0.7-0.9: Minor differences - few changes needed - 0.4-0.6: Moderate differences - significant changes - 0.1-0.3: Major differences - extensive changes - 0.0: Completely different texts ## Example with Analysis ```typescript import { TextualDifferenceMetric } from "@mastra/evals/nlp"; const metric = new TextualDifferenceMetric(); const result = await metric.measure( "Hello world! How are you?", "Hello there! How is it going?" ); // Example output: // { // score: 0.65, // info: { // confidence: 0.95, // ratio: 0.65, // changes: 2, // lengthDiff: 0.05 // } // } ``` ## Related - [Content Similarity Metric](./content-similarity) - [Completeness Metric](./completeness) - [Keyword Coverage Metric](./keyword-coverage) --- title: "Reference: Tone Consistency | Metrics | Evals | Mastra Docs" description: Documentation for the Tone Consistency Metric in Mastra, which evaluates emotional tone and sentiment consistency in text. 
--- # ToneConsistencyMetric [EN] Source: https://mastra.ai/en/reference/evals/tone-consistency The `ToneConsistencyMetric` class evaluates the text's emotional tone and sentiment consistency. It can operate in two modes: comparing tone between input/output pairs or analyzing tone stability within a single text. ## Basic Usage ```typescript import { ToneConsistencyMetric } from "@mastra/evals/nlp"; const metric = new ToneConsistencyMetric(); // Compare tone between input and output const result1 = await metric.measure( "I love this amazing product!", "This product is wonderful and fantastic!" ); // Analyze tone stability in a single text const result2 = await metric.measure( "The service is excellent. The staff is friendly. The atmosphere is perfect.", "" // Empty string for single-text analysis ); console.log(result1.score); // Tone consistency score from 0-1 console.log(result2.score); // Tone stability score from 0-1 ``` ## measure() Parameters ## Returns ### info Object (Tone Comparison) ### info Object (Tone Stability) ## Scoring Details The metric evaluates sentiment consistency through tone pattern analysis and mode-specific scoring. ### Scoring Process 1. Analyzes tone patterns: - Extracts sentiment features - Computes sentiment scores - Measures tone variations 2. Calculates mode-specific score: **Tone Consistency** (input and output): - Compares sentiment between texts - Calculates sentiment difference - Score = 1 - (sentiment_difference / max_difference) **Tone Stability** (single input): - Analyzes sentiment across sentences - Calculates sentiment variance - Score = 1 - (sentiment_variance / max_variance) Final score: `mode_specific_score * scale` ### Score interpretation (0 to scale, default 0-1) - 1.0: Perfect tone consistency/stability - 0.7-0.9: Strong consistency with minor variations - 0.4-0.6: Moderate consistency with noticeable shifts - 0.1-0.3: Poor consistency with major tone changes - 0.0: No consistency - completely different tones ## Example with Both Modes ```typescript import { ToneConsistencyMetric } from "@mastra/evals/nlp"; const metric = new ToneConsistencyMetric(); // Tone Consistency Mode const consistencyResult = await metric.measure( "This product is fantastic and amazing!", "The product is excellent and wonderful!" ); // Example output: // { // score: 0.95, // info: { // responseSentiment: 0.8, // referenceSentiment: 0.75, // difference: 0.05 // } // } // Tone Stability Mode const stabilityResult = await metric.measure( "Great service! Friendly staff. Perfect atmosphere.", "" ); // Example output: // { // score: 0.9, // info: { // avgSentiment: 0.6, // sentimentVariance: 0.1 // } // } ``` ## Related - [Content Similarity Metric](./content-similarity) - [Toxicity Metric](./toxicity) --- title: "Reference: Toxicity | Metrics | Evals | Mastra Docs" description: Documentation for the Toxicity Metric in Mastra, which evaluates LLM outputs for racist, biased, or toxic elements. --- # ToxicityMetric [EN] Source: https://mastra.ai/en/reference/evals/toxicity The `ToxicityMetric` class evaluates whether an LLM's output contains racist, biased, or toxic elements. It uses a judge-based system to analyze responses for various forms of toxicity including personal attacks, mockery, hate speech, dismissive statements, and threats. 
## Basic Usage ```typescript import { openai } from "@ai-sdk/openai"; import { ToxicityMetric } from "@mastra/evals/llm"; // Configure the model for evaluation const model = openai("gpt-4o-mini"); const metric = new ToxicityMetric(model, { scale: 1, // Default scale is 0-1 }); const result = await metric.measure( "How is Sarah as a person?", "Sarah is a dedicated team member who always delivers quality work.", ); console.log(result.score); // Score from 0-1 (0 = not toxic, 1 = toxic) console.log(result.info.reason); // Explanation of the toxicity assessment ``` ## Constructor Parameters ### ToxicityMetricOptions ## measure() Parameters ## Returns ## Scoring Details The metric evaluates toxicity through multiple aspects: - Personal attacks - Mockery or sarcasm - Hate speech - Dismissive statements - Threats or intimidation ### Scoring Process 1. Analyzes toxic elements: - Identifies personal attacks and mockery - Detects hate speech and threats - Evaluates dismissive statements - Assesses severity levels 2. Calculates toxicity score: - Weighs detected elements - Combines severity ratings - Normalizes to scale Final score: `(toxicity_weighted_sum / max_toxicity) * scale` ### Score interpretation (0 to scale, default 0-1) - 0.8-1.0: Severe toxicity - 0.4-0.7: Moderate toxicity - 0.1-0.3: Mild toxicity - 0.0: No toxic elements detected ## Example with Custom Configuration ```typescript import { openai } from "@ai-sdk/openai"; import { ToxicityMetric } from "@mastra/evals/llm"; const model = openai("gpt-4o-mini"); const metric = new ToxicityMetric(model, { scale: 10, // Use 0-10 scale instead of 0-1 }); const result = await metric.measure( "What do you think about the new team member?", "The new team member shows promise but needs significant improvement in basic skills.", ); ``` ## Related - [Tone Consistency Metric](./tone-consistency) - [Bias Metric](./bias) --- title: "API Reference" description: "Mastra API Reference" --- import { ReferenceCards } from "@/components/reference-cards"; # Reference [EN] Source: https://mastra.ai/en/reference The Reference section provides documentation of Mastra's API, including parameters, types and usage examples. # Memory Class Reference [EN] Source: https://mastra.ai/en/reference/memory/Memory The `Memory` class provides a robust system for managing conversation history and thread-based message storage in Mastra. It enables persistent storage of conversations, semantic search capabilities, and efficient message retrieval. By default, it uses LibSQL for storage and vector search, and FastEmbed for embeddings.
## Basic Usage ```typescript copy showLineNumbers import { Memory } from "@mastra/memory"; import { Agent } from "@mastra/core/agent"; const agent = new Agent({ memory: new Memory(), ...otherOptions, }); ``` ## Custom Configuration ```typescript copy showLineNumbers import { Memory } from "@mastra/memory"; import { LibSQLStore } from "@mastra/core/storage/libsql"; import { LibSQLVector } from "@mastra/core/vector/libsql"; import { Agent } from "@mastra/core/agent"; const memory = new Memory({ // Optional storage configuration - libsql will be used by default storage: new LibSQLStore({ url: "file:memory.db", }), // Optional vector database for semantic search - libsql will be used by default vector: new LibSQLVector({ url: "file:vector.db", }), // Memory configuration options options: { // Number of recent messages to include lastMessages: 20, // Semantic search configuration semanticRecall: { topK: 3, // Number of similar messages to retrieve messageRange: { // Messages to include around each result before: 2, after: 1, }, }, // Working memory configuration workingMemory: { enabled: true, template: ` # User - First Name: - Last Name: `, }, }, }); const agent = new Agent({ memory, ...otherOptions, }); ``` ### Working Memory The working memory feature allows agents to maintain persistent information across conversations. When enabled, the Memory class will automatically manage working memory updates through either text stream tags or tool calls. There are two modes for handling working memory updates: 1. **text-stream** (default): The agent includes working memory updates directly in its responses using XML tags containing Markdown (`# User \n ## Preferences...`). These tags are automatically processed and stripped from the visible output. 2. **tool-call**: The agent uses a dedicated tool to update working memory. This mode should be used when working with `toDataStream()` as text-stream mode is not compatible with data streaming. Additionally, this mode provides more explicit control over memory updates and may be preferred when working with agents that are better at using tools than managing text tags. Example configuration: ```typescript copy showLineNumbers const memory = new Memory({ options: { workingMemory: { enabled: true, template: "# User\n- **First Name**:\n- **Last Name**:", use: "tool-call", // or 'text-stream' }, }, }); ``` If no template is provided, the Memory class uses a default template that includes fields for user details, preferences, goals, and other contextual information in Markdown format. See the [Working Memory guide](/docs/memory/working-memory.mdx#designing-effective-templates) for detailed usage examples and best practices. ### embedder By default, Memory uses FastEmbed with the `bge-small-en-v1.5` model, which provides a good balance of performance and model size (~130MB). You only need to specify an embedder if you want to use a different model or provider. For environments where local embedding isn't supported, you can use an API-based embedder: ```typescript {6} import { Memory } from "@mastra/memory"; import { openai } from "@ai-sdk/openai"; const agent = new Agent({ memory: new Memory({ embedder: openai.embedding("text-embedding-3-small"), // Adds network request }), }); ``` Mastra supports many embedding models through the [Vercel AI SDK](https://sdk.vercel.ai/docs/ai-sdk-core/embeddings), including options from OpenAI, Google, Mistral, and Cohere. 
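Once memory is configured, it is applied per conversation by passing identifiers when calling the agent. A minimal sketch; the `resourceId`/`threadId` options follow the usage shown in the memory guides, and the identifiers here are placeholders:

```typescript
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { openai } from "@ai-sdk/openai";

const agent = new Agent({
  name: "Support Agent",
  instructions: "You are a helpful support assistant.",
  model: openai("gpt-4o-mini"),
  memory: new Memory(),
});

// resourceId groups threads by user or entity; threadId selects one conversation.
// Recent history and semantic recall are injected into the context automatically.
const response = await agent.generate("My last order hasn't arrived yet.", {
  resourceId: "user-123",
  threadId: "thread-123",
});
console.log(response.text);
```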
## Parameters ### options ### Related - [Getting Started with Memory](/docs/memory/overview.mdx) - [Semantic Recall](/docs/memory/semantic-recall.mdx) - [Working Memory](/docs/memory/working-memory.mdx) - [Memory Processors](/docs/memory/memory-processors.mdx) - [createThread](/reference/memory/createThread.mdx) - [query](/reference/memory/query.mdx) - [getThreadById](/reference/memory/getThreadById.mdx) - [getThreadsByResourceId](/reference/memory/getThreadsByResourceId.mdx) # createThread [EN] Source: https://mastra.ai/en/reference/memory/createThread Creates a new conversation thread in the memory system. Each thread represents a distinct conversation or context and can contain multiple messages. ## Usage Example ```typescript import { Memory } from "@mastra/memory"; const memory = new Memory({ /* config */ }); const thread = await memory.createThread({ resourceId: "user-123", title: "Support Conversation", metadata: { category: "support", priority: "high" } }); ``` ## Parameters - `metadata` (`Record<string, unknown>`, optional): Optional metadata to associate with the thread ## Returns - `metadata` (`Record<string, unknown>`): Additional metadata associated with the thread ### Related - [Memory Class Reference](/reference/memory/Memory.mdx) - [Getting Started with Memory](/docs/memory/overview.mdx) (Covers threads concept) - [getThreadById](/reference/memory/getThreadById.mdx) - [getThreadsByResourceId](/reference/memory/getThreadsByResourceId.mdx) - [query](/reference/memory/query.mdx) # getThreadById Reference [EN] Source: https://mastra.ai/en/reference/memory/getThreadById The `getThreadById` function retrieves a specific thread by its ID from storage. ## Usage Example ```typescript import { Memory } from "@mastra/memory"; const memory = new Memory(config); const thread = await memory.getThreadById({ threadId: "thread-123" }); ``` ## Parameters ## Returns ### Related - [Memory Class Reference](/reference/memory/Memory.mdx) - [Getting Started with Memory](/docs/memory/overview.mdx) (Covers threads concept) - [createThread](/reference/memory/createThread.mdx) - [getThreadsByResourceId](/reference/memory/getThreadsByResourceId.mdx) # getThreadsByResourceId Reference [EN] Source: https://mastra.ai/en/reference/memory/getThreadsByResourceId The `getThreadsByResourceId` function retrieves all threads associated with a specific resource ID from storage. ## Usage Example ```typescript import { Memory } from "@mastra/memory"; const memory = new Memory(config); const threads = await memory.getThreadsByResourceId({ resourceId: "resource-123", }); ``` ## Parameters ## Returns ### Related - [Memory Class Reference](/reference/memory/Memory.mdx) - [Getting Started with Memory](/docs/memory/overview.mdx) (Covers threads/resources concept) - [createThread](/reference/memory/createThread.mdx) - [getThreadById](/reference/memory/getThreadById.mdx) # query [EN] Source: https://mastra.ai/en/reference/memory/query Retrieves messages from a specific thread, with support for pagination and filtering options.
## Usage Example ```typescript import { Memory } from "@mastra/memory"; const memory = new Memory({ /* config */ }); // Get last 50 messages const { messages, uiMessages } = await memory.query({ threadId: "thread-123", selectBy: { last: 50, }, }); // Get messages with context around specific messages const { messages: contextMessages } = await memory.query({ threadId: "thread-123", selectBy: { include: [ { id: "msg-123", // Get just this message (no context) }, { id: "msg-456", // Get this message with custom context withPreviousMessages: 3, // 3 messages before withNextMessages: 1, // 1 message after }, ], }, }); // Semantic search in messages const { messages: searchResults } = await memory.query({ threadId: "thread-123", selectBy: { vectorSearchString: "What was discussed about deployment?", }, threadConfig: { historySearch: true, }, }); ``` ## Parameters ### selectBy ### include ## Returns ## Additional Notes The `query` function returns two different message formats: - `messages`: Core message format used internally - `uiMessages`: Formatted messages suitable for UI display, including proper threading of tool calls and results ### Related - [Memory Class Reference](/reference/memory/Memory.mdx) - [Getting Started with Memory](/docs/memory/overview.mdx) - [Semantic Recall](/docs/memory/semantic-recall.mdx) - [createThread](/reference/memory/createThread.mdx) --- title: 'AgentNetwork (Experimental)' description: 'Reference documentation for the AgentNetwork class' --- # AgentNetwork (Experimental) [EN] Source: https://mastra.ai/en/reference/networks/agent-network > **Note:** The AgentNetwork feature is experimental and may change in future releases. The `AgentNetwork` class provides a way to create a network of specialized agents that can collaborate to solve complex tasks. Unlike Workflows, which require explicit control over execution paths, AgentNetwork uses an LLM-based router to dynamically determine which agent to call next.
## Key Concepts - **LLM-based Routing**: AgentNetwork exclusively uses an LLM to figure out the best way to use your agents - **Agent Collaboration**: Multiple specialized agents can work together to solve complex tasks - **Dynamic Decision Making**: The router decides which agent to call based on the task requirements ## Usage ```typescript import { AgentNetwork } from '@mastra/core/network'; import { Agent } from '@mastra/core/agent'; import { openai } from '@ai-sdk/openai'; // Create specialized agents const webSearchAgent = new Agent({ name: 'Web Search Agent', instructions: 'You search the web for information.', model: openai('gpt-4o'), tools: { /* web search tools */ }, }); const dataAnalysisAgent = new Agent({ name: 'Data Analysis Agent', instructions: 'You analyze data and provide insights.', model: openai('gpt-4o'), tools: { /* data analysis tools */ }, }); // Create the network const researchNetwork = new AgentNetwork({ name: 'Research Network', instructions: 'Coordinate specialized agents to research topics thoroughly.', model: openai('gpt-4o'), agents: [webSearchAgent, dataAnalysisAgent], }); // Use the network const result = await researchNetwork.generate('Research the impact of climate change on agriculture'); console.log(result.text); ``` ## Constructor ```typescript constructor(config: AgentNetworkConfig) ``` ### Parameters - `config`: Configuration object for the AgentNetwork - `name`: Name of the network - `instructions`: Instructions for the routing agent - `model`: Language model to use for routing - `agents`: Array of specialized agents in the network ## Methods ### generate() Generates a response using the agent network. This method has replaced the deprecated `run()` method for consistency with the rest of the codebase. ```typescript async generate( messages: string | string[] | CoreMessage[], args?: AgentGenerateOptions ): Promise ``` ### stream() Streams a response using the agent network. ```typescript async stream( messages: string | string[] | CoreMessage[], args?: AgentStreamOptions ): Promise ``` ### getRoutingAgent() Returns the routing agent used by the network. ```typescript getRoutingAgent(): Agent ``` ### getAgents() Returns the array of specialized agents in the network. ```typescript getAgents(): Agent[] ``` ### getAgentHistory() Returns the history of interactions for a specific agent. ```typescript getAgentHistory(agentId: string): Array<{ input: string; output: string; timestamp: string; }> ``` ### getAgentInteractionHistory() Returns the history of all agent interactions that have occurred in the network. ```typescript getAgentInteractionHistory(): Record< string, Array<{ input: string; output: string; timestamp: string; }> > ``` ### getAgentInteractionSummary() Returns a formatted summary of agent interactions in chronological order. ```typescript getAgentInteractionSummary(): string ``` ## When to Use AgentNetwork vs Workflows - **Use AgentNetwork when:** You want the AI to figure out the best way to use your agents, with dynamic routing based on the task requirements. - **Use Workflows when:** You need explicit control over execution paths, with predetermined sequences of agent calls and conditional logic. ## Internal Tools The AgentNetwork uses a special `transmit` tool that allows the routing agent to call specialized agents.
This tool handles: - Single agent calls - Multiple parallel agent calls - Context sharing between agents ## Limitations - The AgentNetwork approach may use more tokens than a well-designed Workflow for the same task - Debugging can be more challenging as the routing decisions are made by the LLM - Performance may vary based on the quality of the routing instructions and the capabilities of the specialized agents --- title: "Reference: createLogger() | Mastra Observability Docs" description: Documentation for the createLogger function, which instantiates a logger based on a given configuration. --- # createLogger() [EN] Source: https://mastra.ai/en/reference/observability/create-logger The `createLogger()` function is used to instantiate a logger based on a given configuration. You can create console-based, file-based, or Upstash Redis-based loggers by specifying the type and any additional parameters relevant to that type. ### Usage #### Console Logger (Development) ```typescript showLineNumbers copy const consoleLogger = createLogger({ name: "Mastra", level: "debug" }); consoleLogger.info("App started"); ``` #### File Transport (Structured Logs) ```typescript showLineNumbers copy import { FileTransport } from "@mastra/loggers/file"; const fileLogger = createLogger({ name: "Mastra", transports: { file: new FileTransport({ path: "test-dir/test.log" }) }, level: "warn", }); fileLogger.warn("Low disk space", { destinationPath: "system", type: "WORKFLOW", }); ``` #### Upstash Logger (Remote Log Drain) ```typescript showLineNumbers copy import { UpstashTransport } from "@mastra/loggers/upstash"; const logger = createLogger({ name: "Mastra", transports: { upstash: new UpstashTransport({ listName: "production-logs", upstashUrl: process.env.UPSTASH_URL!, upstashToken: process.env.UPSTASH_TOKEN!, }), }, level: "info", }); logger.info({ message: "User signed in", destinationPath: "auth", type: "AGENT", runId: "run_123", }); ``` ### Parameters --- title: "Reference: Logger Instance | Mastra Observability Docs" description: Documentation for Logger instances, which provide methods to record events at various severity levels. --- # Logger Instance [EN] Source: https://mastra.ai/en/reference/observability/logger A Logger instance is created by `createLogger()` and provides methods to record events at various severity levels. Depending on the logger type, messages may be written to the console, file, or an external service. ## Example ```typescript showLineNumbers copy // Using a console logger const logger = createLogger({ name: 'Mastra', level: 'info' }); logger.debug('Debug message'); // Won't be logged because level is INFO logger.info({ message: 'User action occurred', destinationPath: 'user-actions', type: 'AGENT' }); // Logged logger.error('An error occurred'); // Logged as ERROR ``` ## Methods void | Promise', description: 'Write a DEBUG-level log. Only recorded if level ≤ DEBUG.', }, { name: 'info', type: '(message: BaseLogMessage | string, ...args: any[]) => void | Promise', description: 'Write an INFO-level log. Only recorded if level ≤ INFO.', }, { name: 'warn', type: '(message: BaseLogMessage | string, ...args: any[]) => void | Promise', description: 'Write a WARN-level log. Only recorded if level ≤ WARN.', }, { name: 'error', type: '(message: BaseLogMessage | string, ...args: any[]) => void | Promise', description: 'Write an ERROR-level log. 
**Note:** Some loggers require a `BaseLogMessage` object (with `message`, `destinationPath`, `type` fields). For instance, the `File` and `Upstash` loggers need structured messages.

---
title: "Reference: OtelConfig | Mastra Observability Docs"
description: Documentation for the OtelConfig object, which configures OpenTelemetry instrumentation, tracing, and exporting behavior.
---

# `OtelConfig`

[EN] Source: https://mastra.ai/en/reference/observability/otel-config

The `OtelConfig` object is used to configure OpenTelemetry instrumentation, tracing, and exporting behavior within your application. By adjusting its properties, you can control how telemetry data (such as traces) is collected, sampled, and exported.

To use the `OtelConfig` within Mastra, pass it as the value of the `telemetry` key when initializing Mastra. This will configure Mastra to use your custom OpenTelemetry settings for tracing and instrumentation.

```typescript showLineNumbers copy
import { Mastra } from 'mastra';

const otelConfig: OtelConfig = {
  serviceName: 'my-awesome-service',
  enabled: true,
  sampling: {
    type: 'ratio',
    probability: 0.5,
  },
  export: {
    type: 'otlp',
    endpoint: 'https://otel-collector.example.com/v1/traces',
    headers: {
      Authorization: 'Bearer YOUR_TOKEN_HERE',
    },
  },
};

export const mastra = new Mastra({
  telemetry: otelConfig,
});
```

### Properties

- `serviceName` (string): Name used to identify your service in telemetry backends.
- `enabled` (boolean): Whether tracing and instrumentation are enabled.
- `sampling` (object, optional): Sampling strategy for traces, e.g. `{ type: 'ratio', probability: 0.5 }`.
- `export` (object, optional): Export settings:
  - `type`: `'otlp'` for an OTLP collector, or `'custom'` for a custom exporter.
  - `endpoint` (string, optional): URL to send OTLP traces to.
  - `headers` (`Record<string, string>`, optional): Additional headers to send with OTLP requests, useful for authentication or routing.
  - `exporter` (optional): Custom exporter instance, used when `type` is `'custom'`.

---
title: "Reference: Braintrust | Observability | Mastra Docs"
description: Documentation for integrating Braintrust with Mastra, an evaluation and monitoring platform for LLM applications.
---

# Braintrust

[EN] Source: https://mastra.ai/en/reference/observability/providers/braintrust

Braintrust is an evaluation and monitoring platform for LLM applications.

## Configuration

To use Braintrust with Mastra, configure these environment variables:

```env
OTEL_EXPORTER_OTLP_ENDPOINT=https://api.braintrust.dev/otel
OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer your_api_key, x-bt-parent=project_id:your_project_id"
```

## Implementation

Here's how to configure Mastra to use Braintrust:

```typescript
import { Mastra } from "@mastra/core";

export const mastra = new Mastra({
  // ... other config
  telemetry: {
    serviceName: "your-service-name",
    enabled: true,
    export: {
      type: "otlp",
    },
  },
});
```

## Dashboard

Access your Braintrust dashboard at [braintrust.dev](https://www.braintrust.dev/)

---
title: "Reference: Dash0 Integration | Mastra Observability Docs"
description: Documentation for integrating Mastra with Dash0, an OpenTelemetry-native observability solution.
---

# Dash0

[EN] Source: https://mastra.ai/en/reference/observability/providers/dash0

Dash0 is an OpenTelemetry-native observability solution that provides full-stack monitoring capabilities as well as integrations with other CNCF projects like Perses and Prometheus.

## Configuration

To use Dash0 with Mastra, configure these environment variables:

```env
OTEL_EXPORTER_OTLP_ENDPOINT=https://ingress.{region}.dash0.com
OTEL_EXPORTER_OTLP_HEADERS=Authorization=Bearer your_auth_token, Dash0-Dataset=your_dataset
```

## Implementation

Here's how to configure Mastra to use Dash0:

```typescript
import { Mastra } from "@mastra/core";

export const mastra = new Mastra({
  // ... other config
  telemetry: {
    serviceName: "your-service-name",
    enabled: true,
    export: {
      type: "otlp",
    },
  },
});
```

## Dashboard

Access your Dash0 dashboards at [dash0.com](https://www.dash0.com/), and find more integrations, such as [Distributed Tracing](https://www.dash0.com/distributed-tracing), in the [Dash0 Integration Hub](https://www.dash0.com/hub/integrations)

---
title: "Reference: Provider List | Observability | Mastra Docs"
description: Overview of observability providers supported by Mastra, including Dash0, SigNoz, Braintrust, Langfuse, and more.
---

# Observability Providers

[EN] Source: https://mastra.ai/en/reference/observability/providers

Observability providers include:

- [Braintrust](./providers/braintrust.mdx)
- [Dash0](./providers/dash0.mdx)
- [Laminar](./providers/laminar.mdx)
- [Langfuse](./providers/langfuse.mdx)
- [Langsmith](./providers/langsmith.mdx)
- [LangWatch](./providers/langwatch.mdx)
- [New Relic](./providers/new-relic.mdx)
- [SigNoz](./providers/signoz.mdx)
- [Traceloop](./providers/traceloop.mdx)

---
title: "Reference: Laminar Integration | Mastra Observability Docs"
description: Documentation for integrating Laminar with Mastra, a specialized observability platform for LLM applications.
---

# Laminar

[EN] Source: https://mastra.ai/en/reference/observability/providers/laminar

Laminar is a specialized observability platform for LLM applications.

## Configuration

To use Laminar with Mastra, configure these environment variables:

```env
OTEL_EXPORTER_OTLP_ENDPOINT=https://api.lmnr.ai:8443
OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer your_api_key, x-laminar-team-id=your_team_id"
```

## Implementation

Here's how to configure Mastra to use Laminar:

```typescript
import { Mastra } from "@mastra/core";

export const mastra = new Mastra({
  // ... other config
  telemetry: {
    serviceName: "your-service-name",
    enabled: true,
    export: {
      type: "otlp",
      protocol: "grpc",
    },
  },
});
```

## Dashboard

Access your Laminar dashboard at [https://lmnr.ai/](https://lmnr.ai/)

---
title: "Reference: Langfuse Integration | Mastra Observability Docs"
description: Documentation for integrating Langfuse with Mastra, an open-source observability platform for LLM applications.
---

# Langfuse

[EN] Source: https://mastra.ai/en/reference/observability/providers/langfuse

Langfuse is an open-source observability platform designed specifically for LLM applications.

> **Note**: Currently, only AI-related calls will contain detailed telemetry data. Other operations will create traces but with limited information.

## Configuration

To use Langfuse with Mastra, you'll need to configure the following environment variables:

```env
LANGFUSE_PUBLIC_KEY=your_public_key
LANGFUSE_SECRET_KEY=your_secret_key
LANGFUSE_BASEURL=https://cloud.langfuse.com # Optional - defaults to cloud.langfuse.com
```

**Important**: When configuring the telemetry export settings, the `serviceName` parameter must be set to `"ai"` for the Langfuse integration to work properly.

## Implementation

Here's how to configure Mastra to use Langfuse:

```typescript
import { Mastra } from "@mastra/core";
import { LangfuseExporter } from "langfuse-vercel";

export const mastra = new Mastra({
  // ... other config
  telemetry: {
    serviceName: "ai", // this must be set to "ai" so that the LangfuseExporter thinks it's an AI SDK trace
    enabled: true,
    export: {
      type: "custom",
      exporter: new LangfuseExporter({
        publicKey: process.env.LANGFUSE_PUBLIC_KEY,
        secretKey: process.env.LANGFUSE_SECRET_KEY,
        baseUrl: process.env.LANGFUSE_BASEURL,
      }),
    },
  },
});
```

## Dashboard

Once configured, you can view your traces and analytics in the Langfuse dashboard at [cloud.langfuse.com](https://cloud.langfuse.com)

---
title: "Reference: LangSmith Integration | Mastra Observability Docs"
description: Documentation for integrating LangSmith with Mastra, a platform for debugging, testing, evaluating, and monitoring LLM applications.
---

# LangSmith

[EN] Source: https://mastra.ai/en/reference/observability/providers/langsmith

LangSmith is LangChain's platform for debugging, testing, evaluating, and monitoring LLM applications.

> **Note**: Currently, this integration only traces AI-related calls in your application. Other types of operations are not captured in the telemetry data.

## Configuration

To use LangSmith with Mastra, you'll need to configure the following environment variables:

```env
LANGSMITH_TRACING=true
LANGSMITH_ENDPOINT=https://api.smith.langchain.com
LANGSMITH_API_KEY=your-api-key
LANGSMITH_PROJECT=your-project-name
```

## Implementation

Here's how to configure Mastra to use LangSmith:

```typescript
import { Mastra } from "@mastra/core";
import { AISDKExporter } from "langsmith/vercel";

export const mastra = new Mastra({
  // ... other config
  telemetry: {
    serviceName: "your-service-name",
    enabled: true,
    export: {
      type: "custom",
      exporter: new AISDKExporter(),
    },
  },
});
```

## Dashboard

Access your traces and analytics in the LangSmith dashboard at [smith.langchain.com](https://smith.langchain.com)

> **Note**: Traces from your workflow runs may not appear in a newly created project right away. Sort by the Name column to see all projects, select your project, then filter by LLM Calls instead of Root Runs.

---
title: "Reference: LangWatch Integration | Mastra Observability Docs"
description: Documentation for integrating LangWatch with Mastra, a specialized observability platform for LLM applications.
---

# LangWatch

[EN] Source: https://mastra.ai/en/reference/observability/providers/langwatch

LangWatch is a specialized observability platform for LLM applications.

## Configuration

To use LangWatch with Mastra, configure these environment variables:

```env
LANGWATCH_API_KEY=your_api_key
LANGWATCH_PROJECT_ID=your_project_id
```

## Implementation

Here's how to configure Mastra to use LangWatch:

```typescript
import { Mastra } from "@mastra/core";
import { LangWatchExporter } from "langwatch";

export const mastra = new Mastra({
  // ... other config
  telemetry: {
    serviceName: "your-service-name",
    enabled: true,
    export: {
      type: "custom",
      exporter: new LangWatchExporter({
        apiKey: process.env.LANGWATCH_API_KEY,
        projectId: process.env.LANGWATCH_PROJECT_ID,
      }),
    },
  },
});
```

## Dashboard

Access your LangWatch dashboard at [app.langwatch.ai](https://app.langwatch.ai)

---
title: "Reference: New Relic Integration | Mastra Observability Docs"
description: Documentation for integrating New Relic with Mastra, a comprehensive observability platform supporting OpenTelemetry for full-stack monitoring.
---

# New Relic

[EN] Source: https://mastra.ai/en/reference/observability/providers/new-relic

New Relic is a comprehensive observability platform that supports OpenTelemetry (OTLP) for full-stack monitoring.
## Configuration To use New Relic with Mastra via OTLP, configure these environment variables: ```env OTEL_EXPORTER_OTLP_ENDPOINT=https://otlp.nr-data.net:4317 OTEL_EXPORTER_OTLP_HEADERS="api-key=your_license_key" ``` ## Implementation Here's how to configure Mastra to use New Relic: ```typescript import { Mastra } from "@mastra/core"; export const mastra = new Mastra({ // ... other config telemetry: { serviceName: "your-service-name", enabled: true, export: { type: "otlp", }, }, }); ``` ## Dashboard View your telemetry data in the New Relic One dashboard at [one.newrelic.com](https://one.newrelic.com) --- title: "Reference: SigNoz Integration | Mastra Observability Docs" description: Documentation for integrating SigNoz with Mastra, an open-source APM and observability platform providing full-stack monitoring through OpenTelemetry. --- # SigNoz [EN] Source: https://mastra.ai/en/reference/observability/providers/signoz SigNoz is an open-source APM and observability platform that provides full-stack monitoring capabilities through OpenTelemetry. ## Configuration To use SigNoz with Mastra, configure these environment variables: ```env OTEL_EXPORTER_OTLP_ENDPOINT=https://ingest.{region}.signoz.cloud:443 OTEL_EXPORTER_OTLP_HEADERS=signoz-ingestion-key=your_signoz_token ``` ## Implementation Here's how to configure Mastra to use SigNoz: ```typescript import { Mastra } from "@mastra/core"; export const mastra = new Mastra({ // ... other config telemetry: { serviceName: "your-service-name", enabled: true, export: { type: "otlp", }, }, }); ``` ## Dashboard Access your SigNoz dashboard at [signoz.io](https://signoz.io/) --- title: "Reference: Traceloop Integration | Mastra Observability Docs" description: Documentation for integrating Traceloop with Mastra, an OpenTelemetry-native observability platform for LLM applications. --- # Traceloop [EN] Source: https://mastra.ai/en/reference/observability/providers/traceloop Traceloop is an OpenTelemetry-native observability platform specifically designed for LLM applications. ## Configuration To use Traceloop with Mastra, configure these environment variables: ```env OTEL_EXPORTER_OTLP_ENDPOINT=https://api.traceloop.com OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer your_api_key, x-traceloop-destination-id=your_destination_id" ``` ## Implementation Here's how to configure Mastra to use Traceloop: ```typescript import { Mastra } from "@mastra/core"; export const mastra = new Mastra({ // ... other config telemetry: { serviceName: "your-service-name", enabled: true, export: { type: "otlp", }, }, }); ``` ## Dashboard Access your traces and analytics in the Traceloop dashboard at [app.traceloop.com](https://app.traceloop.com) --- title: "Reference: Astra Vector Store | Vector Databases | RAG | Mastra Docs" description: Documentation for the AstraVector class in Mastra, which provides vector search using DataStax Astra DB. --- # Astra Vector Store [EN] Source: https://mastra.ai/en/reference/rag/astra The AstraVector class provides vector search using [DataStax Astra DB](https://www.datastax.com/products/datastax-astra), a cloud-native, serverless database built on Apache Cassandra. It provides vector search capabilities with enterprise-grade scalability and high availability. 
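Before the method-by-method reference, here is a minimal sketch of the typical flow. The `@mastra/astra` package name and the `token`/`endpoint` constructor options are assumptions inferred from the environment variables documented below; check your installed version for the exact names.

```typescript
import { AstraVector } from "@mastra/astra"; // package name assumed

// Constructor options assumed from the environment variables below
const store = new AstraVector({
  token: process.env.ASTRA_DB_TOKEN!,
  endpoint: process.env.ASTRA_DB_ENDPOINT!,
});

// Create an index, insert vectors, then search
await store.createIndex({ indexName: "docs", dimension: 1536 });

await store.upsert({
  indexName: "docs",
  vectors: [[0.1, 0.2 /* ... */]],
  metadata: [{ text: "first document" }],
});

const results = await store.query({
  indexName: "docs",
  queryVector: [0.1, 0.2 /* ... */],
  topK: 5,
});
```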
## Constructor Options

The constructor accepts the connection details for your Astra DB instance (see the environment variables below):

- `token` (string): Your Astra DB API token
- `endpoint` (string): Your Astra DB API endpoint

## Methods

### createIndex()

- `indexName` (string): Name of the index to create
- `dimension` (number): Vector dimension size
- `metric` (`"cosine" | "euclidean" | "dotproduct"`, optional): Distance metric for similarity search

### upsert()

- `indexName` (string): Name of the index to insert into
- `vectors` (`number[][]`): Array of embedding vectors
- `metadata` (`Record<string, any>[]`, optional): Metadata for each vector
- `ids` (`string[]`, optional): Optional vector IDs (auto-generated if not provided)

### query()

- `indexName` (string): Name of the index to query
- `queryVector` (`number[]`): Query vector to search with
- `topK` (number, optional): Number of results to return
- `filter` (`Record<string, any>`, optional): Metadata filters for the query
- `includeVector` (boolean, optional, defaults to `false`): Whether to include vectors in the results

### listIndexes()

Returns an array of index names as strings.

### describeIndex()

Returns:

```typescript copy
interface IndexStats {
  dimension: number;
  count: number;
  metric: "cosine" | "euclidean" | "dotproduct";
}
```

### deleteIndex()

- `indexName` (string): Name of the index to delete

### updateIndexById()

- `indexName` (string): Name of the index containing the vector
- `id` (string): ID of the vector to update
- `update` (object):
  - `vector` (`number[]`, optional): New vector values
  - `metadata` (`Record<string, any>`, optional): New metadata values

### deleteIndexById()

- `indexName` (string): Name of the index containing the vector
- `id` (string): ID of the vector to delete

## Response Types

Query results are returned in this format:

```typescript copy
interface QueryResult {
  id: string;
  score: number;
  metadata: Record<string, any>;
  vector?: number[]; // Only included if includeVector is true
}
```

## Error Handling

The store throws typed errors that can be caught:

```typescript copy
try {
  await store.query({
    indexName: "index_name",
    queryVector: queryVector,
  });
} catch (error) {
  if (error instanceof VectorStoreError) {
    console.log(error.code); // 'connection_failed' | 'invalid_dimension' | etc
    console.log(error.details); // Additional error context
  }
}
```

## Environment Variables

Required environment variables:

- `ASTRA_DB_TOKEN`: Your Astra DB API token
- `ASTRA_DB_ENDPOINT`: Your Astra DB API endpoint

## Related

- [Metadata Filters](./metadata-filters)

---
title: "Reference: Chroma Vector Store | Vector Databases | RAG | Mastra Docs"
description: Documentation for the ChromaVector class in Mastra, which provides vector search using ChromaDB.
---

# Chroma Vector Store

[EN] Source: https://mastra.ai/en/reference/rag/chroma

The ChromaVector class provides vector search using [ChromaDB](https://www.trychroma.com/), an open-source embedding database. It offers efficient vector search with metadata filtering and hybrid search capabilities.

## Constructor Options

### auth

## Methods

### createIndex()

- `indexName` (string): Name of the index to create
- `dimension` (number): Vector dimension size
- `metric` (`"cosine" | "euclidean" | "dotproduct"`, optional): Distance metric for similarity search

### upsert()

- `indexName` (string): Name of the index to insert into
- `vectors` (`number[][]`): Array of embedding vectors
- `metadata` (`Record<string, any>[]`, optional): Metadata for each vector
- `ids` (`string[]`, optional): Optional vector IDs (auto-generated if not provided)
- `documents` (`string[]`, optional): Chroma-specific: Original text documents associated with the vectors

### query()

- `indexName` (string): Name of the index to query
- `queryVector` (`number[]`): Query vector to search with
- `topK` (number, optional): Number of results to return
- `filter` (`Record<string, any>`, optional): Metadata filters for the query
- `includeVector` (boolean, optional, defaults to `false`): Whether to include vectors in the results
- `documentFilter` (`Record<string, any>`, optional): Chroma-specific: Filter to apply on the document content
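Since `documents` and `documentFilter` are Chroma-specific, here is a short sketch of how they pair up. This assumes an already-constructed `ChromaVector` instance (`store`) plus `articles`, `embeddings`, and `queryEmbedding` inputs; the `$contains` operator follows Chroma's document filter syntax:

```typescript
// Store original text alongside each vector (Chroma-specific `documents`)
await store.upsert({
  indexName: "articles",
  vectors: embeddings, // number[][]
  metadata: articles.map((a) => ({ source: a.source })),
  documents: articles.map((a) => a.text),
});

// Filter on document content at query time (Chroma-specific `documentFilter`)
const results = await store.query({
  indexName: "articles",
  queryVector: queryEmbedding,
  topK: 5,
  documentFilter: { $contains: "climate" },
});
```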
### listIndexes()

Returns an array of index names as strings.

### describeIndex()

Returns:

```typescript copy
interface IndexStats {
  dimension: number;
  count: number;
  metric: "cosine" | "euclidean" | "dotproduct";
}
```

### deleteIndex()

- `indexName` (string): Name of the index to delete

### updateIndexById()

The `update` object can contain:

- `vector` (`number[]`, optional): New vector values
- `metadata` (`Record<string, any>`, optional): New metadata to replace the existing metadata

### deleteIndexById()

- `indexName` (string): Name of the index containing the vector
- `id` (string): ID of the vector to delete

## Response Types

Query results are returned in this format:

```typescript copy
interface QueryResult {
  id: string;
  score: number;
  metadata: Record<string, any>;
  document?: string; // Chroma-specific: Original document if it was stored
  vector?: number[]; // Only included if includeVector is true
}
```

## Error Handling

The store throws typed errors that can be caught:

```typescript copy
try {
  await store.query({
    indexName: "index_name",
    queryVector: queryVector,
  });
} catch (error) {
  if (error instanceof VectorStoreError) {
    console.log(error.code); // 'connection_failed' | 'invalid_dimension' | etc
    console.log(error.details); // Additional error context
  }
}
```

## Related

- [Metadata Filters](./metadata-filters)

---
title: "Reference: .chunk() | Document Processing | RAG | Mastra Docs"
description: Documentation for the chunk function in Mastra, which splits documents into smaller segments using various strategies.
---

# Reference: .chunk()

[EN] Source: https://mastra.ai/en/reference/rag/chunk

The `.chunk()` function splits documents into smaller segments using various strategies and options.

## Example

```typescript
import { MDocument } from '@mastra/rag';

const doc = MDocument.fromMarkdown(`
# Introduction
This is a sample document that we want to split into chunks.

## Section 1
Here is the first section with some content.

## Section 2
Here is another section with different content.
`);

// Basic chunking with defaults
const chunks = await doc.chunk();

// Markdown-specific chunking with header extraction
const chunksWithMetadata = await doc.chunk({
  strategy: 'markdown',
  headers: [['#', 'title'], ['##', 'section']],
  extract: {
    summary: true,  // Extract summaries with default settings
    keywords: true  // Extract keywords with default settings
  }
});
```

## Parameters

## Strategy-Specific Options

Strategy-specific options are passed as top-level parameters alongside the strategy parameter. For example:

```typescript showLineNumbers copy
// HTML strategy example
const chunks = await doc.chunk({
  strategy: 'html',
  headers: [['h1', 'title'], ['h2', 'subtitle']], // HTML-specific option
  sections: [['div.content', 'main']], // HTML-specific option
  size: 500 // general option
});

// Markdown strategy example
const chunks = await doc.chunk({
  strategy: 'markdown',
  headers: [['#', 'title'], ['##', 'section']], // Markdown-specific option
  stripHeaders: true, // Markdown-specific option
  overlap: 50 // general option
});

// Token strategy example
const chunks = await doc.chunk({
  strategy: 'token',
  encodingName: 'gpt2', // Token-specific option
  modelName: 'gpt-3.5-turbo', // Token-specific option
  size: 1000 // general option
});
```

The options documented below are passed directly at the top level of the configuration object, not nested within a separate options object.
### HTML

- `headers` (`Array<[string, string]>`): Array of [selector, metadata key] pairs for header-based splitting
- `sections` (`Array<[string, string]>`): Array of [selector, metadata key] pairs for section-based splitting
- `returnEachLine` (boolean, optional): Whether to return each line as a separate chunk

### Markdown

- `headers` (`Array<[string, string]>`): Array of [header level, metadata key] pairs
- `stripHeaders` (boolean, optional): Whether to remove headers from the output
- `returnEachLine` (boolean, optional): Whether to return each line as a separate chunk

### Token

### JSON

## Return Value

Returns an `MDocument` instance containing the chunked documents. Each chunk includes:

```typescript
interface DocumentNode {
  text: string;
  metadata: Record<string, any>;
  embedding?: number[];
}
```

---
title: "Reference: MDocument | Document Processing | RAG | Mastra Docs"
description: Documentation for the MDocument class in Mastra, which handles document processing and chunking.
---

# MDocument

[EN] Source: https://mastra.ai/en/reference/rag/document

The MDocument class processes documents for RAG applications. The main methods are `.chunk()` and `.extractMetadata()`.

## Constructor

- `docs` (`Array<{ text: string; metadata?: Record<string, any> }>`): Array of document chunks with their text content and optional metadata
- `type` (`'text' | 'html' | 'markdown' | 'json' | 'latex'`): Type of document content

## Static Methods

### fromText()

Creates a document from plain text content.

```typescript
static fromText(text: string, metadata?: Record<string, any>): MDocument
```

### fromHTML()

Creates a document from HTML content.

```typescript
static fromHTML(html: string, metadata?: Record<string, any>): MDocument
```

### fromMarkdown()

Creates a document from Markdown content.

```typescript
static fromMarkdown(markdown: string, metadata?: Record<string, any>): MDocument
```

### fromJSON()

Creates a document from JSON content.

```typescript
static fromJSON(json: string, metadata?: Record<string, any>): MDocument
```

## Instance Methods

### chunk()

Splits document into chunks and optionally extracts metadata.

```typescript
async chunk(params?: ChunkParams): Promise
```

See [chunk() reference](./chunk) for detailed options.

### getDocs()

Returns array of processed document chunks.

```typescript
getDocs(): Chunk[]
```

### getText()

Returns array of text strings from chunks.

```typescript
getText(): string[]
```

### getMetadata()

Returns array of metadata objects from chunks.

```typescript
getMetadata(): Record<string, any>[]
```

### extractMetadata()

Extracts metadata using specified extractors. See [ExtractParams reference](./extract-params) for details.

```typescript
async extractMetadata(params: ExtractParams): Promise
```

## Examples

```typescript
import { MDocument } from '@mastra/rag';

// Create document from text
const doc = MDocument.fromText('Your content here');

// Split into chunks with metadata extraction
const chunks = await doc.chunk({
  strategy: 'markdown',
  headers: [['#', 'title'], ['##', 'section']],
  extract: {
    summary: true,  // Extract summaries with default settings
    keywords: true  // Extract keywords with default settings
  }
});

// Get processed chunks
const docs = doc.getDocs();
const texts = doc.getText();
const metadata = doc.getMetadata();
```

---
title: "Reference: embed() | Document Embedding | RAG | Mastra Docs"
description: Documentation for embedding functionality in Mastra using the AI SDK.
---

# Embed

[EN] Source: https://mastra.ai/en/reference/rag/embeddings

Mastra uses the AI SDK's `embed` and `embedMany` functions to generate vector embeddings for text inputs, enabling similarity search and RAG workflows.

## Single Embedding

The `embed` function generates a vector embedding for a single text input:

```typescript
import { embed } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await embed({
  model: openai.embedding('text-embedding-3-small'),
  value: "Your text to embed",
  maxRetries: 2  // optional, defaults to 2
});
```

### Parameters

- `model` (`EmbeddingModel`): The embedding model to use (e.g. `openai.embedding('text-embedding-3-small')`)
- `value` (`string | Record<string, any>`): The text content or object to embed
- `maxRetries` (number, optional, defaults to `2`): Maximum number of retries per embedding call. Set to 0 to disable retries.
- `abortSignal` (`AbortSignal`, optional): Optional abort signal to cancel the request
- `headers` (`Record<string, string>`, optional): Additional HTTP headers for the request (only for HTTP-based providers)

### Return Value

- `embedding` (`number[]`): The embedding vector for the input value
- `value`: The original value that was embedded
- `usage` (`{ tokens: number }`): Token usage for the embedding call

## Multiple Embeddings

For embedding multiple texts at once, use the `embedMany` function:

```typescript
import { embedMany } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await embedMany({
  model: openai.embedding('text-embedding-3-small'),
  values: ["First text", "Second text", "Third text"],
  maxRetries: 2  // optional, defaults to 2
});
```

### Parameters

- `model` (`EmbeddingModel`): The embedding model to use
- `values` (`string[] | Record<string, any>[]`): Array of text content or objects to embed
- `maxRetries` (number, optional, defaults to `2`): Maximum number of retries per embedding call. Set to 0 to disable retries.
- `abortSignal` (`AbortSignal`, optional): Optional abort signal to cancel the request
- `headers` (`Record<string, string>`, optional): Additional HTTP headers for the request (only for HTTP-based providers)

### Return Value

- `embeddings` (`number[][]`): Embedding vectors in the same order as the input values
- `values`: The original values that were embedded
- `usage` (`{ tokens: number }`): Token usage for the call

## Example Usage

```typescript
import { embed, embedMany } from 'ai';
import { openai } from '@ai-sdk/openai';

// Single embedding
const singleResult = await embed({
  model: openai.embedding('text-embedding-3-small'),
  value: "What is the meaning of life?",
});

// Multiple embeddings
const multipleResult = await embedMany({
  model: openai.embedding('text-embedding-3-small'),
  values: [
    "First question about life",
    "Second question about universe",
    "Third question about everything"
  ],
});
```

For more detailed information about embeddings in the Vercel AI SDK, see:

- [AI SDK Embeddings Overview](https://sdk.vercel.ai/docs/ai-sdk-core/embeddings)
- [embed()](https://sdk.vercel.ai/docs/reference/ai-sdk-core/embed)
- [embedMany()](https://sdk.vercel.ai/docs/reference/ai-sdk-core/embed-many)

---
title: "Reference: ExtractParams | Document Processing | RAG | Mastra Docs"
description: Documentation for metadata extraction configuration in Mastra.
---

# ExtractParams

[EN] Source: https://mastra.ai/en/reference/rag/extract-params

ExtractParams configures metadata extraction from document chunks using LLM analysis.
## Example ```typescript showLineNumbers copy import { MDocument } from "@mastra/rag"; const doc = MDocument.fromText(text); const chunks = await doc.chunk({ extract: { title: true, // Extract titles using default settings summary: true, // Generate summaries using default settings keywords: true // Extract keywords using default settings } }); // Example output: // chunks[0].metadata = { // documentTitle: "AI Systems Overview", // sectionSummary: "Overview of artificial intelligence concepts and applications", // excerptKeywords: "KEYWORDS: AI, machine learning, algorithms" // } ``` ## Parameters The `extract` parameter accepts the following fields: ## Extractor Arguments ### TitleExtractorsArgs ### SummaryExtractArgs ### QuestionAnswerExtractArgs ### KeywordExtractArgs ## Advanced Example ```typescript showLineNumbers copy import { MDocument } from "@mastra/rag"; const doc = MDocument.fromText(text); const chunks = await doc.chunk({ extract: { // Title extraction with custom settings title: { nodes: 2, // Extract 2 title nodes nodeTemplate: "Generate a title for this: {context}", combineTemplate: "Combine these titles: {context}" }, // Summary extraction with custom settings summary: { summaries: ["self"], // Generate summaries for current chunk promptTemplate: "Summarize this: {context}" }, // Question generation with custom settings questions: { questions: 3, // Generate 3 questions promptTemplate: "Generate {numQuestions} questions about: {context}", embeddingOnly: false }, // Keyword extraction with custom settings keywords: { keywords: 5, // Extract 5 keywords promptTemplate: "Extract {maxKeywords} key terms from: {context}" } } }); // Example output: // chunks[0].metadata = { // documentTitle: "AI in Modern Computing", // sectionSummary: "Overview of AI concepts and their applications in computing", // questionsThisExcerptCanAnswer: "1. What is machine learning?\n2. How do neural networks work?", // excerptKeywords: "1. Machine learning\n2. Neural networks\n3. Training data" // } ``` ## Document Grouping for Title Extraction When using the `TitleExtractor`, you can group multiple chunks together for title extraction by specifying a shared `docId` in the `metadata` field of each chunk. All chunks with the same `docId` will receive the same extracted title. If no `docId` is set, each chunk is treated as its own document for title extraction. **Example:** ```ts import { MDocument } from "@mastra/rag"; const doc = new MDocument({ docs: [ { text: "chunk 1", metadata: { docId: "docA" } }, { text: "chunk 2", metadata: { docId: "docA" } }, { text: "chunk 3", metadata: { docId: "docB" } }, ], type: "text", }); await doc.extractMetadata({ title: true }); // The first two chunks will share a title, while the third chunk will be assigned a separate title. ``` --- title: "Reference: GraphRAG | Graph-based RAG | RAG | Mastra Docs" description: Documentation for the GraphRAG class in Mastra, which implements a graph-based approach to retrieval augmented generation. --- # GraphRAG [EN] Source: https://mastra.ai/en/reference/rag/graph-rag The `GraphRAG` class implements a graph-based approach to retrieval augmented generation. It creates a knowledge graph from document chunks where nodes represent documents and edges represent semantic relationships, enabling both direct similarity matching and discovery of related content through graph traversal. 
## Basic Usage

```typescript
import { GraphRAG } from "@mastra/rag";

const graphRag = new GraphRAG({
  dimension: 1536,
  threshold: 0.7
});

// Create the graph from chunks and embeddings
graphRag.createGraph(documentChunks, embeddings);

// Query the graph with embedding
const results = await graphRag.query({
  query: queryEmbedding,
  topK: 10,
  randomWalkSteps: 100,
  restartProb: 0.15
});
```

## Constructor Parameters

## Methods

### createGraph

Creates a knowledge graph from document chunks and their embeddings.

```typescript
createGraph(chunks: GraphChunk[], embeddings: GraphEmbedding[]): void
```

#### Parameters

### query

Performs a graph-based search combining vector similarity and graph traversal.

```typescript
query({
  query,
  topK = 10,
  randomWalkSteps = 100,
  restartProb = 0.15
}: {
  query: number[];
  topK?: number;
  randomWalkSteps?: number;
  restartProb?: number;
}): RankedNode[]
```

#### Parameters

#### Returns

Returns an array of `RankedNode` objects, where each node contains:

- `id` (string): Unique identifier of the node
- `content` (string): Text content of the chunk
- `metadata` (`Record<string, any>`): Additional metadata associated with the chunk
- `score` (number): Combined relevance score from graph traversal

## Advanced Example

```typescript
const graphRag = new GraphRAG({
  dimension: 1536,
  threshold: 0.8  // Stricter similarity threshold
});

// Create graph from chunks and embeddings
graphRag.createGraph(documentChunks, embeddings);

// Query with custom parameters
const results = await graphRag.query({
  query: queryEmbedding,
  topK: 5,
  randomWalkSteps: 200,
  restartProb: 0.2
});
```

## Related

- [createGraphRAGTool](../tools/graph-rag-tool)

---
title: "Default Vector Store | Vector Databases | RAG | Mastra Docs"
description: Documentation for the LibSQLVector class in Mastra, which provides vector search using LibSQL with vector extensions.
---

# LibSQLVector Store

[EN] Source: https://mastra.ai/en/reference/rag/libsql

The LibSQL storage implementation provides a SQLite-compatible vector search solution using [LibSQL](https://github.com/tursodatabase/libsql), a fork of SQLite with vector extensions, and [Turso](https://turso.tech/), offering a lightweight and efficient vector database. It's part of the `@mastra/core` package and offers efficient vector similarity search with metadata filtering.

## Installation

The default vector store is included in the core package:

```bash copy
npm install @mastra/core
```

## Usage

```typescript copy showLineNumbers
import { LibSQLVector } from "@mastra/core/vector/libsql";

// Create a new vector store instance
const store = new LibSQLVector({
  connectionUrl: process.env.DATABASE_URL,
  // Optional: for Turso cloud databases
  authToken: process.env.DATABASE_AUTH_TOKEN,
});

// Create an index
await store.createIndex({
  indexName: "myCollection",
  dimension: 1536,
});

// Add vectors with metadata
const vectors = [[0.1, 0.2, ...], [0.3, 0.4, ...]];
const metadata = [
  { text: "first document", category: "A" },
  { text: "second document", category: "B" }
];
await store.upsert({
  indexName: "myCollection",
  vectors,
  metadata,
});

// Query similar vectors
const queryVector = [0.1, 0.2, ...];
const results = await store.query({
  indexName: "myCollection",
  queryVector,
  topK: 10, // top K results
  filter: { category: "A" } // optional metadata filter
});
```

## Constructor Options

## Methods

### createIndex()

Creates a new vector collection. The index name must start with a letter or underscore and can only contain letters, numbers, and underscores. The dimension must be a positive integer.
### upsert()

Adds or updates vectors and their metadata in the index. Uses a transaction to ensure all vectors are inserted atomically - if any insert fails, the entire operation is rolled back.

- `indexName` (string): Name of the index to insert into
- `vectors` (`number[][]`): Array of embedding vectors
- `metadata` (`Record<string, any>[]`, optional): Metadata for each vector
- `ids` (`string[]`, optional): Optional vector IDs (auto-generated if not provided)

### query()

Searches for similar vectors with optional metadata filtering.

### describeIndex()

Gets information about an index.

Returns:

```typescript copy
interface IndexStats {
  dimension: number;
  count: number;
  metric: "cosine" | "euclidean" | "dotproduct";
}
```

### deleteIndex()

Deletes an index and all its data.

### listIndexes()

Lists all vector indexes in the database.

Returns: `Promise<string[]>`

### truncateIndex()

Removes all vectors from an index while keeping the index structure.

### updateIndexById()

Updates a specific vector entry by its ID with new vector data and/or metadata.

- `indexName` (string): Name of the index containing the vector
- `id` (string): ID of the vector entry to update
- `update` (object):
  - `vector` (`number[]`, optional): New vector values
  - `metadata` (`Record<string, any>`, optional): New metadata to update

### deleteIndexById()

Deletes a specific vector entry from an index by its ID.

## Response Types

Query results are returned in this format:

```typescript copy
interface QueryResult {
  id: string;
  score: number;
  metadata: Record<string, any>;
  vector?: number[]; // Only included if includeVector is true
}
```

## Error Handling

The store throws specific errors for different failure cases:

```typescript copy
try {
  await store.query({
    indexName: "my-collection",
    queryVector: queryVector,
  });
} catch (error) {
  // Handle specific error cases
  if (error.message.includes("Invalid index name format")) {
    console.error(
      "Index name must start with a letter/underscore and contain only alphanumeric characters",
    );
  } else if (error.message.includes("Table not found")) {
    console.error("The specified index does not exist");
  } else {
    console.error("Vector store error:", error.message);
  }
}
```

Common error cases include:

- Invalid index name format
- Invalid vector dimensions
- Table/index not found
- Database connection issues
- Transaction failures during upsert

## Related

- [Metadata Filters](./metadata-filters)

---
title: "Reference: Metadata Filters | Metadata Filtering | RAG | Mastra Docs"
description: Documentation for metadata filtering capabilities in Mastra, which allow for precise querying of vector search results across different vector stores.
---

# Metadata Filters

[EN] Source: https://mastra.ai/en/reference/rag/metadata-filters

Mastra provides a unified metadata filtering syntax across all vector stores, based on MongoDB/Sift query syntax. Each vector store translates these filters into its native format.

## Basic Example

```typescript
import { PgVector } from '@mastra/pg';

const store = new PgVector(connectionString);

const results = await store.query({
  indexName: "my_index",
  queryVector: queryVector,
  topK: 10,
  filter: {
    category: "electronics",       // Simple equality
    price: { $gt: 100 },           // Numeric comparison
    tags: { $in: ["sale", "new"] } // Array membership
  }
});
```

## Supported Operators

## Common Rules and Restrictions

1. Field names cannot:
   - Contain dots (.) unless referring to nested fields
   - Start with $ or contain null characters
   - Be empty strings
2. Values must be:
   - Valid JSON types (string, number, boolean, object, array)
   - Not undefined
   - Properly typed for the operator (e.g., numbers for numeric comparisons)
3.
Logical operators: - Must contain valid conditions - Cannot be empty - Must be properly nested - Can only be used at top level or nested within other logical operators - Cannot be used at field level or nested inside a field - Cannot be used inside an operator - Valid: `{ "$and": [{ "field": { "$gt": 100 } }] }` - Valid: `{ "$or": [{ "$and": [{ "field": { "$gt": 100 } }] }] }` - Invalid: `{ "field": { "$and": [{ "$gt": 100 }] } }` - Invalid: `{ "field": { "$gt": { "$and": [{...}] } } }` 4. $not operator: - Must be an object - Cannot be empty - Can be used at field level or top level - Valid: `{ "$not": { "field": "value" } }` - Valid: `{ "field": { "$not": { "$eq": "value" } } }` 5. Operator nesting: - Logical operators must contain field conditions, not direct operators - Valid: `{ "$and": [{ "field": { "$gt": 100 } }] }` - Invalid: `{ "$and": [{ "$gt": 100 }] }` ## Store-Specific Notes ### Astra - Nested field queries are supported using dot notation - Array fields must be explicitly defined as arrays in the metadata - Metadata values are case-sensitive ### ChromaDB - Where filters only return results where the filtered field exists in metadata - Empty metadata fields are not included in filter results - Metadata fields must be present for negative matches (e.g., $ne won't match documents missing the field) ### Cloudflare Vectorize - Requires explicit metadata indexing before filtering can be used - Use `createMetadataIndex()` to index fields you want to filter on - Up to 10 metadata indexes per Vectorize index - String values are indexed up to first 64 bytes (truncated on UTF-8 boundaries) - Number values use float64 precision - Filter JSON must be under 2048 bytes - Field names cannot contain dots (.) or start with $ - Field names limited to 512 characters - Vectors must be re-upserted after creating new metadata indexes to be included in filtered results - Range queries may have reduced accuracy with very large datasets (~10M+ vectors) ### LibSQL - Supports nested object queries with dot notation - Array fields are validated to ensure they contain valid JSON arrays - Numeric comparisons maintain proper type handling - Empty arrays in conditions are handled gracefully - Metadata is stored in a JSONB column for efficient querying ### PgVector - Full support for PostgreSQL's native JSON querying capabilities - Efficient handling of array operations using native array functions - Proper type handling for numbers, strings, and booleans - Nested field queries use PostgreSQL's JSON path syntax internally - Metadata is stored in a JSONB column for efficient indexing ### Pinecone - Metadata field names are limited to 512 characters - Numeric values must be within the range of ±1e38 - Arrays in metadata are limited to 64KB total size - Nested objects are flattened with dot notation - Metadata updates replace the entire metadata object ### Qdrant - Supports advanced filtering with nested conditions - Payload (metadata) fields must be explicitly indexed for filtering - Efficient handling of geo-spatial queries - Special handling for null and empty values - Vector-specific filtering capabilities - Datetime values must be in RFC 3339 format ### Upstash - 512-character limit for metadata field keys - Query size is limited (avoid large IN clauses) - No support for null/undefined values in filters - Translates to SQL-like syntax internally - Case-sensitive string comparisons - Metadata updates are atomic ## Related - [Astra](./astra) - [Chroma](./chroma) - [Cloudflare Vectorize](./vectorize) - 
[LibSQL](./libsql)
- [PgStore](./pg)
- [Pinecone](./pinecone)
- [Qdrant](./qdrant)
- [Upstash](./upstash)

---
title: "Reference: PG Vector Store | Vector Databases | RAG | Mastra Docs"
description: Documentation for the PgVector class in Mastra, which provides vector search using PostgreSQL with pgvector extension.
---

# PG Vector Store

[EN] Source: https://mastra.ai/en/reference/rag/pg

The PgVector class provides vector search using [PostgreSQL](https://www.postgresql.org/) with the [pgvector](https://github.com/pgvector/pgvector) extension, offering robust vector similarity search capabilities within your existing PostgreSQL database.

## Constructor Options

## Constructor Examples

You can instantiate `PgVector` in two ways:

```ts
import { PgVector } from '@mastra/pg';

// Using a connection string (string form)
const vectorStore1 = new PgVector('postgresql://user:password@localhost:5432/mydb');

// Using a config object (with optional schemaName)
const vectorStore2 = new PgVector({
  connectionString: 'postgresql://user:password@localhost:5432/mydb',
  schemaName: 'custom_schema', // optional
});
```

## Methods

### createIndex()

#### IndexConfig

#### Memory Requirements

HNSW indexes require significant shared memory during construction. For 100K vectors:

- Small dimensions (64d): ~60MB with default settings
- Medium dimensions (256d): ~180MB with default settings
- Large dimensions (384d+): ~250MB+ with default settings

Higher M values or efConstruction values will increase memory requirements significantly. Adjust your system's shared memory limits if needed.

### upsert()

- `indexName` (string): Name of the index to insert into
- `vectors` (`number[][]`): Array of embedding vectors
- `metadata` (`Record<string, any>[]`, optional): Metadata for each vector
- `ids` (`string[]`, optional): Optional vector IDs (auto-generated if not provided)

### query()

- `indexName` (string): Name of the index to query
- `queryVector` (`number[]`): Query vector to search with
- `topK` (number, optional): Number of results to return
- `filter` (`Record<string, any>`, optional): Metadata filters
- `includeVector` (boolean, optional, defaults to `false`): Whether to include the vector in the result
- `minScore` (number, optional, defaults to `0`): Minimum similarity score threshold
- `options` (`{ ef?: number; probes?: number }`, optional): Additional options for HNSW and IVF indexes:
  - `ef` (number, optional): HNSW search parameter
  - `probes` (number, optional): IVF search parameter
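Putting the search parameters together, here is a minimal sketch of a query against an HNSW-indexed table with a score floor. It assumes an already-constructed `pgVector` instance and a `queryEmbedding` vector matching the index dimension:

```typescript
const results = await pgVector.query({
  indexName: "embeddings",
  queryVector: queryEmbedding,   // number[] matching the index dimension
  topK: 10,
  filter: { category: "docs" },  // metadata filter
  minScore: 0.7,                 // drop weak matches
  options: { ef: 200 },          // HNSW search-time parameter
});
```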
### listIndexes()

Returns an array of index names as strings.

### describeIndex()

Returns:

```typescript copy
interface PGIndexStats {
  dimension: number;
  count: number;
  metric: "cosine" | "euclidean" | "dotproduct";
  type: "flat" | "hnsw" | "ivfflat";
  config: {
    m?: number;
    efConstruction?: number;
    lists?: number;
    probes?: number;
  };
}
```

### deleteIndex()

### updateIndexById()

- `indexName` (string): Name of the index containing the vector
- `id` (string): ID of the vector to update
- `update` (object):
  - `vector` (`number[]`, optional): New vector values
  - `metadata` (`Record<string, any>`, optional): New metadata values

Updates an existing vector by ID. At least one of vector or metadata must be provided.

```typescript copy
// Update just the vector
await pgVector.updateIndexById("my_vectors", "vector123", {
  vector: [0.1, 0.2, 0.3],
});

// Update just the metadata
await pgVector.updateIndexById("my_vectors", "vector123", {
  metadata: { label: "updated" },
});

// Update both vector and metadata
await pgVector.updateIndexById("my_vectors", "vector123", {
  vector: [0.1, 0.2, 0.3],
  metadata: { label: "updated" },
});
```

### deleteIndexById()

Deletes a single vector by ID from the specified index.

```typescript copy
await pgVector.deleteIndexById("my_vectors", "vector123");
```

### disconnect()

Closes the database connection pool. Should be called when done using the store.

### buildIndex()

Builds or rebuilds an index with specified metric and configuration. Will drop any existing index before creating the new one.

```typescript copy
// Define HNSW index
await pgVector.buildIndex("my_vectors", "cosine", {
  type: "hnsw",
  hnsw: {
    m: 8,
    efConstruction: 32,
  },
});

// Define IVF index
await pgVector.buildIndex("my_vectors", "cosine", {
  type: "ivfflat",
  ivf: {
    lists: 100,
  },
});

// Define flat index
await pgVector.buildIndex("my_vectors", "cosine", {
  type: "flat",
});
```

## Response Types

Query results are returned in this format:

```typescript copy
interface QueryResult {
  id: string;
  score: number;
  metadata: Record<string, any>;
  vector?: number[]; // Only included if includeVector is true
}
```

## Error Handling

The store throws typed errors that can be caught:

```typescript copy
try {
  await store.query({
    indexName: "index_name",
    queryVector: queryVector,
  });
} catch (error) {
  if (error instanceof VectorStoreError) {
    console.log(error.code); // 'connection_failed' | 'invalid_dimension' | etc
    console.log(error.details); // Additional error context
  }
}
```

## Best Practices

- Regularly evaluate your index configuration to ensure optimal performance.
- Adjust parameters like `lists` and `m` based on dataset size and query requirements.
- Rebuild indexes periodically to maintain efficiency, especially after significant data changes.

## Related

- [Metadata Filters](./metadata-filters)

---
title: "Reference: Pinecone Vector Store | Vector DBs | RAG | Mastra Docs"
description: Documentation for the PineconeVector class in Mastra, which provides an interface to Pinecone's vector database.
---

# Pinecone Vector Store

[EN] Source: https://mastra.ai/en/reference/rag/pinecone

The PineconeVector class provides an interface to [Pinecone](https://www.pinecone.io/)'s vector database, offering real-time vector search with features like hybrid search, metadata filtering, and namespace management.

## Constructor Options

## Methods

### createIndex()

### upsert()

- `indexName` (string): Name of the index to insert into
- `vectors` (`number[][]`): Array of embedding vectors
- `metadata` (`Record<string, any>[]`, optional): Metadata for each vector
- `ids` (`string[]`, optional): Optional vector IDs (auto-generated if not provided)
- `namespace` (string, optional): Optional namespace to store vectors in. Vectors in different namespaces are isolated from each other.

### query()

- `indexName` (string): Name of the index to query
- `queryVector` (`number[]`): Query vector to search with
- `topK` (number, optional): Number of results to return
- `filter` (`Record<string, any>`, optional): Metadata filters for the query
- `includeVector` (boolean, optional, defaults to `false`): Whether to include the vector in the result
- `namespace` (string, optional): Optional namespace to query vectors from. Only returns results from the specified namespace.

### listIndexes()

Returns an array of index names as strings.
### describeIndex()

Returns:

```typescript copy
interface IndexStats {
  dimension: number;
  count: number;
  metric: "cosine" | "euclidean" | "dotproduct";
}
```

### deleteIndex()

### updateIndexById()

- `indexName` (string): Name of the index containing the vector
- `id` (string): ID of the vector to update
- `update` (object):
  - `vector` (`number[]`, optional): New vector values
  - `metadata` (`Record<string, any>`, optional): New metadata to update

### deleteIndexById()

## Response Types

Query results are returned in this format:

```typescript copy
interface QueryResult {
  id: string;
  score: number;
  metadata: Record<string, any>;
  vector?: number[]; // Only included if includeVector is true
}
```

## Error Handling

The store throws typed errors that can be caught:

```typescript copy
try {
  await store.query({
    indexName: "index_name",
    queryVector: queryVector,
  });
} catch (error) {
  if (error instanceof VectorStoreError) {
    console.log(error.code); // 'connection_failed' | 'invalid_dimension' | etc
    console.log(error.details); // Additional error context
  }
}
```

### Environment Variables

Required environment variables:

- `PINECONE_API_KEY`: Your Pinecone API key
- `PINECONE_ENVIRONMENT`: Pinecone environment (e.g., 'us-west1-gcp')

## Hybrid Search

Pinecone supports hybrid search by combining dense and sparse vectors. To use hybrid search:

1. Create an index with `metric: 'dotproduct'`
2. During upsert, provide sparse vectors using the `sparseVectors` parameter
3. During query, provide a sparse vector using the `sparseVector` parameter
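A sketch of that flow, assuming an already-constructed `PineconeVector` instance (`store`) and precomputed dense vectors; the sparse-vector shape (`indices`/`values`) follows Pinecone's convention and is an assumption here:

```typescript
// Upsert dense vectors together with their sparse counterparts
await store.upsert({
  indexName: "hybrid-index", // created with metric: 'dotproduct'
  vectors: denseVectors,     // number[][]
  sparseVectors: [{ indices: [10, 42], values: [0.5, 0.8] }], // one per vector
  metadata: [{ text: "first document" }],
});

// Query with both a dense and a sparse vector
const results = await store.query({
  indexName: "hybrid-index",
  queryVector: denseQueryVector, // number[]
  sparseVector: { indices: [10, 42], values: [0.5, 0.8] },
  topK: 5,
});
```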
## Related

- [Metadata Filters](./metadata-filters)

---
title: "Reference: Qdrant Vector Store | Vector Databases | RAG | Mastra Docs"
description: Documentation for integrating Qdrant with Mastra, a vector similarity search engine for managing vectors and payloads.
---

# Qdrant Vector Store

[EN] Source: https://mastra.ai/en/reference/rag/qdrant

The QdrantVector class provides vector search using [Qdrant](https://qdrant.tech/), a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage vectors with additional payload and extended filtering support.

## Constructor Options

## Methods

### createIndex()

### upsert()

- `indexName` (string): Name of the index to insert into
- `vectors` (`number[][]`): Array of embedding vectors
- `metadata` (`Record<string, any>[]`, optional): Metadata for each vector
- `ids` (`string[]`, optional): Optional vector IDs (auto-generated if not provided)

### query()

- `indexName` (string): Name of the index to query
- `queryVector` (`number[]`): Query vector to search with
- `topK` (number, optional): Number of results to return
- `filter` (`Record<string, any>`, optional): Metadata filters for the query
- `includeVector` (boolean, optional, defaults to `false`): Whether to include vectors in the results

### listIndexes()

Returns an array of index names as strings.

### describeIndex()

Returns:

```typescript copy
interface IndexStats {
  dimension: number;
  count: number;
  metric: "cosine" | "euclidean" | "dotproduct";
}
```

### deleteIndex()

### updateIndexById()

- `indexName` (string): Name of the index containing the vector
- `id` (string): ID of the vector to update
- `update` (`{ vector?: number[]; metadata?: Record<string, any>; }`): Object containing the vector and/or metadata to update

Updates a vector and/or its metadata in the specified index. If both vector and metadata are provided, both will be updated. If only one is provided, only that will be updated.

### deleteIndexById()

Deletes a vector from the specified index by its ID.

## Response Types

Query results are returned in this format:

```typescript copy
interface QueryResult {
  id: string;
  score: number;
  metadata: Record<string, any>;
  vector?: number[]; // Only included if includeVector is true
}
```

## Error Handling

The store throws typed errors that can be caught:

```typescript copy
try {
  await store.query({
    indexName: "index_name",
    queryVector: queryVector,
  });
} catch (error) {
  if (error instanceof VectorStoreError) {
    console.log(error.code); // 'connection_failed' | 'invalid_dimension' | etc
    console.log(error.details); // Additional error context
  }
}
```

## Related

- [Metadata Filters](./metadata-filters)

---
title: "Reference: Rerank | Document Retrieval | RAG | Mastra Docs"
description: Documentation for the rerank function in Mastra, which provides advanced reranking capabilities for vector search results.
---

# rerank()

[EN] Source: https://mastra.ai/en/reference/rag/rerank

The `rerank()` function provides advanced reranking capabilities for vector search results by combining semantic relevance, vector similarity, and position-based scoring.

```typescript
function rerank(
  results: QueryResult[],
  query: string,
  modelConfig: ModelConfig,
  options?: RerankerFunctionOptions
): Promise<RerankResult[]>
```

## Usage Example

```typescript
import { openai } from "@ai-sdk/openai";
import { rerank } from "@mastra/rag";

const model = openai("gpt-4o-mini");

const rerankedResults = await rerank(
  vectorSearchResults,
  "How do I deploy to production?",
  model,
  {
    weights: {
      semantic: 0.5,
      vector: 0.3,
      position: 0.2
    },
    topK: 3
  }
);
```

## Parameters

The rerank function accepts any LanguageModel from the Vercel AI SDK. When using the Cohere model `rerank-v3.5`, it will automatically use Cohere's reranking capabilities.

> **Note:** For semantic scoring to work properly during re-ranking, each result must include the text content in its `metadata.text` field.

### RerankerFunctionOptions

## Returns

The function returns an array of `RerankResult` objects:

### ScoringDetails

## Related

- [createVectorQueryTool](../tools/vector-query-tool)

---
title: "Reference: Turbopuffer Vector Store | Vector Databases | RAG | Mastra Docs"
description: Documentation for integrating Turbopuffer with Mastra, a high-performance vector database for efficient similarity search.
---

# Turbopuffer Vector Store

[EN] Source: https://mastra.ai/en/reference/rag/turbopuffer

The TurbopufferVector class provides vector search using [Turbopuffer](https://turbopuffer.com/), a high-performance vector database optimized for RAG applications. Turbopuffer offers fast vector similarity search with advanced filtering capabilities and efficient storage management.

## Constructor Options

## Methods

### createIndex()

### upsert()

- `indexName` (string): Name of the index to insert into
- `vectors` (`number[][]`): Array of embedding vectors
- `metadata` (`Record<string, any>[]`, optional): Metadata for each vector
- `ids` (`string[]`, optional): Optional vector IDs (auto-generated if not provided)

### query()

- `indexName` (string): Name of the index to query
- `queryVector` (`number[]`): Query vector to search with
- `topK` (number, optional): Number of results to return
- `filter` (`Record<string, any>`, optional): Metadata filters for the query
- `includeVector` (boolean, optional, defaults to `false`): Whether to include vectors in the results

### listIndexes()

Returns an array of index names as strings.
### describeIndex()

Returns:

```typescript copy
interface IndexStats {
  dimension: number;
  count: number;
  metric: "cosine" | "euclidean" | "dotproduct";
}
```

### deleteIndex()

## Response Types

Query results are returned in this format:

```typescript copy
interface QueryResult {
  id: string;
  score: number;
  metadata: Record<string, any>;
  vector?: number[]; // Only included if includeVector is true
}
```

## Schema Configuration

The `schemaConfigForIndex` option allows you to define explicit schemas for different indexes:

```typescript copy
schemaConfigForIndex: (indexName: string) => {
  // Mastra's default embedding model and index for memory messages:
  if (indexName === "memory_messages_384") {
    return {
      dimensions: 384,
      schema: {
        thread_id: {
          type: "string",
          filterable: true,
        },
      },
    };
  } else {
    throw new Error(`TODO: add schema for index: ${indexName}`);
  }
};
```

## Error Handling

The store throws typed errors that can be caught:

```typescript copy
try {
  await store.query({
    indexName: "index_name",
    queryVector: queryVector,
  });
} catch (error) {
  if (error instanceof VectorStoreError) {
    console.log(error.code); // 'connection_failed' | 'invalid_dimension' | etc
    console.log(error.details); // Additional error context
  }
}
```

## Related

- [Metadata Filters](./metadata-filters)

---
title: "Reference: Upstash Vector Store | Vector Databases | RAG | Mastra Docs"
description: Documentation for the UpstashVector class in Mastra, which provides vector search using Upstash Vector.
---

# Upstash Vector Store

[EN] Source: https://mastra.ai/en/reference/rag/upstash

The UpstashVector class provides vector search using [Upstash Vector](https://upstash.com/vector), a serverless vector database service that provides vector similarity search with metadata filtering capabilities.

## Constructor Options

## Methods

### createIndex()

Note: This method is a no-op for Upstash as indexes are created automatically.

### upsert()

- `indexName` (string): Name of the index to insert into
- `vectors` (`number[][]`): Array of embedding vectors
- `metadata` (`Record<string, any>[]`, optional): Metadata for each vector
- `ids` (`string[]`, optional): Optional vector IDs (auto-generated if not provided)

### query()

- `indexName` (string): Name of the index to query
- `queryVector` (`number[]`): Query vector to search with
- `topK` (number, optional): Number of results to return
- `filter` (`Record<string, any>`, optional): Metadata filters for the query
- `includeVector` (boolean, optional, defaults to `false`): Whether to include vectors in the results

### listIndexes()

Returns an array of index names (namespaces) as strings.

### describeIndex()

Returns:

```typescript copy
interface IndexStats {
  dimension: number;
  count: number;
  metric: "cosine" | "euclidean" | "dotproduct";
}
```

### deleteIndex()

### updateIndexById()

The `update` object can have the following properties:

- `vector` (optional): An array of numbers representing the new vector.
- `metadata` (optional): A record of key-value pairs for metadata.

Throws an error if neither `vector` nor `metadata` is provided, or if only `metadata` is provided.

### deleteIndexById()

Attempts to delete an item by its ID from the specified index. Logs an error message if the deletion fails.
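A short sketch of the update and delete calls described above. It assumes an already-constructed `UpstashVector` instance (`store`), and the positional `(indexName, id, update)` signature is assumed to match the other stores' `updateIndexById` form:

```typescript
// Update a vector (metadata-only updates throw, per the note above)
await store.updateIndexById("my-namespace", "vector-123", {
  vector: [0.1, 0.2, 0.3],
  metadata: { category: "updated" },
});

// Delete a single entry by ID
await store.deleteIndexById("my-namespace", "vector-123");
```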
## Response Types

Query results are returned in this format:

```typescript copy
interface QueryResult {
  id: string;
  score: number;
  metadata: Record<string, any>;
  vector?: number[]; // Only included if includeVector is true
}
```

## Error Handling

The store throws typed errors that can be caught:

```typescript copy
try {
  await store.query({
    indexName: "index_name",
    queryVector: queryVector,
  });
} catch (error) {
  if (error instanceof VectorStoreError) {
    console.log(error.code); // 'connection_failed' | 'invalid_dimension' | etc
    console.log(error.details); // Additional error context
  }
}
```

## Environment Variables

Required environment variables:

- `UPSTASH_VECTOR_URL`: Your Upstash Vector database URL
- `UPSTASH_VECTOR_TOKEN`: Your Upstash Vector API token

## Related

- [Metadata Filters](./metadata-filters)

---
title: "Reference: Cloudflare Vector Store | Vector Databases | RAG | Mastra Docs"
description: Documentation for the CloudflareVector class in Mastra, which provides vector search using Cloudflare Vectorize.
---

# Cloudflare Vector Store

[EN] Source: https://mastra.ai/en/reference/rag/vectorize

The CloudflareVector class provides vector search using [Cloudflare Vectorize](https://developers.cloudflare.com/vectorize/), a vector database service integrated with Cloudflare's edge network.

## Constructor Options

## Methods

### createIndex()

### upsert()

- `indexName` (string): Name of the index to insert into
- `vectors` (`number[][]`): Array of embedding vectors
- `metadata` (`Record<string, any>[]`, optional): Metadata for each vector
- `ids` (`string[]`, optional): Optional vector IDs (auto-generated if not provided)

### query()

- `indexName` (string): Name of the index to query
- `queryVector` (`number[]`): Query vector to search with
- `topK` (number, optional): Number of results to return
- `filter` (`Record<string, any>`, optional): Metadata filters for the query
- `includeVector` (boolean, optional, defaults to `false`): Whether to include vectors in the results

### listIndexes()

Returns an array of index names as strings.

### describeIndex()

Returns:

```typescript copy
interface IndexStats {
  dimension: number;
  count: number;
  metric: "cosine" | "euclidean" | "dotproduct";
}
```

### deleteIndex()

### createMetadataIndex()

Creates an index on a metadata field to enable filtering.

### deleteMetadataIndex()

Removes an index from a metadata field.

### listMetadataIndexes()

Lists all metadata field indexes for an index.
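Because Vectorize only filters on indexed metadata fields, the order of operations matters: index the field, (re-)upsert, then query. A minimal sketch, assuming an already-constructed `CloudflareVector` instance (`store`); the exact `createMetadataIndex` argument shape is an assumption, so check your installed version:

```typescript
// 1. Index the metadata field you want to filter on (argument shape assumed)
await store.createMetadataIndex({ indexName: "docs", propertyName: "category" });

// 2. Upsert (or re-upsert) vectors so they are included in filtered results
await store.upsert({
  indexName: "docs",
  vectors: embeddings, // number[][]
  metadata: docs.map((d) => ({ category: d.category })),
});

// 3. Filter on the indexed field at query time
const results = await store.query({
  indexName: "docs",
  queryVector: queryEmbedding,
  topK: 10,
  filter: { category: "blog" },
});
```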
---

# Cloudflare D1 Storage

[EN] Source: https://mastra.ai/en/reference/storage/cloudflare-d1

The Cloudflare D1 storage implementation provides a serverless SQL database solution using Cloudflare D1, supporting relational operations and transactional consistency.

## Installation

```bash
npm install @mastra/cloudflare-d1
```

## Usage

```typescript copy showLineNumbers
import { D1Store } from "@mastra/cloudflare-d1";

// --- Example 1: Using Workers Binding ---
const storageWorkers = new D1Store({
  binding: D1Database, // D1Database binding provided by the Workers runtime
  tablePrefix: 'dev_', // Optional: isolate tables per environment
});

// --- Example 2: Using REST API ---
const storageRest = new D1Store({
  accountId: process.env.CLOUDFLARE_ACCOUNT_ID!, // Cloudflare Account ID
  databaseId: process.env.CLOUDFLARE_D1_DATABASE_ID!, // D1 Database ID
  apiToken: process.env.CLOUDFLARE_API_TOKEN!, // Cloudflare API Token
  tablePrefix: 'dev_', // Optional: isolate tables per environment
});
```

## Parameters

## Additional Notes

### Schema Management

The storage implementation handles schema creation and updates automatically. It creates the following tables:

- `threads`: Stores conversation threads
- `messages`: Stores individual messages
- `metadata`: Stores additional metadata for threads and messages

### Transactions & Consistency

Cloudflare D1 provides transactional guarantees, meaning that multiple operations can be executed as a single, all-or-nothing unit of work.

### Table Creation & Migrations

Tables are created automatically when storage is initialized (and can be isolated per environment using the `tablePrefix` option), but advanced schema changes—such as adding columns, changing data types, or modifying indexes—require manual migration and careful planning to avoid data loss.

---
title: "Cloudflare Storage | Storage System | Mastra Core"
description: Documentation for the Cloudflare KV storage implementation in Mastra.
---

# Cloudflare Storage

[EN] Source: https://mastra.ai/en/reference/storage/cloudflare

The Cloudflare KV storage implementation provides a globally distributed, serverless key-value store solution using Cloudflare Workers KV.
## Installation

```bash
npm install @mastra/cloudflare
```

## Usage

```typescript copy showLineNumbers
import { CloudflareStore } from "@mastra/cloudflare";

// --- Example 1: Using Workers Binding ---
const storageWorkers = new CloudflareStore({
  bindings: {
    threads: THREADS_KV, // KVNamespace binding for threads table
    messages: MESSAGES_KV, // KVNamespace binding for messages table
    // Add other tables as needed
  },
  keyPrefix: 'dev_', // Optional: isolate keys per environment
});

// --- Example 2: Using REST API ---
const storageRest = new CloudflareStore({
  accountId: process.env.CLOUDFLARE_ACCOUNT_ID!, // Cloudflare Account ID
  apiToken: process.env.CLOUDFLARE_API_TOKEN!, // Cloudflare API Token
  namespacePrefix: 'dev_', // Optional: isolate namespaces per environment
});
```

## Parameters

- `bindings` (`Record<string, KVNamespace>`, optional): Cloudflare Workers KV bindings (for Workers runtime)
- `accountId` (`string`, optional): Cloudflare Account ID (for REST API)
- `apiToken` (`string`, optional): Cloudflare API Token (for REST API)
- `namespacePrefix` (`string`, optional): Optional prefix for all namespace names (useful for environment isolation)
- `keyPrefix` (`string`, optional): Optional prefix for all keys (useful for environment isolation)

## Additional Notes

### Schema Management

The storage implementation handles schema creation and updates automatically. It creates the following tables:

- `threads`: Stores conversation threads
- `messages`: Stores individual messages
- `metadata`: Stores additional metadata for threads and messages

### Consistency & Propagation

Cloudflare KV is an eventually consistent store, meaning that data may not be immediately available across all regions after a write.

### Key Structure & Namespacing

Keys in Cloudflare KV are structured as a combination of a configurable prefix and a table-specific format (e.g., `threads:threadId`). For Workers deployments, `keyPrefix` is used to isolate data within a namespace; for REST API deployments, `namespacePrefix` is used to isolate entire namespaces between environments or applications.

---
title: "LibSQL Storage | Storage System | Mastra Core"
description: Documentation for the LibSQL storage implementation in Mastra.
---

# LibSQL Storage

[EN] Source: https://mastra.ai/en/reference/storage/libsql

The LibSQL storage implementation provides a SQLite-compatible storage solution that can run both in-memory and as a persistent database.

## Installation

```bash
npm install @mastra/storage-libsql
```

## Usage

```typescript copy showLineNumbers
import { LibSQLStore } from "@mastra/core/storage/libsql";

// File database (development)
const storage = new LibSQLStore({
  config: {
    url: 'file:storage.db',
  }
});

// Persistent database (production)
const storage = new LibSQLStore({
  config: {
    url: process.env.DATABASE_URL,
  }
});
```

## Parameters

## Additional Notes

### In-Memory vs Persistent Storage

The file configuration (`file:storage.db`) is useful for:

- Development and testing
- Temporary storage
- Quick prototyping

For production use cases, use a persistent database URL: `libsql://your-database.turso.io`

### Schema Management

The storage implementation handles schema creation and updates automatically.
It creates the following tables:

- `threads`: Stores conversation threads
- `messages`: Stores individual messages
- `metadata`: Stores additional metadata for threads and messages

---
title: "PostgreSQL Storage | Storage System | Mastra Core"
description: Documentation for the PostgreSQL storage implementation in Mastra.
---

# PostgreSQL Storage

[EN] Source: https://mastra.ai/en/reference/storage/postgresql

The PostgreSQL storage implementation provides a production-ready storage solution using PostgreSQL databases.

## Installation

```bash
npm install @mastra/pg
```

## Usage

```typescript copy showLineNumbers
import { PostgresStore } from "@mastra/pg";

const storage = new PostgresStore({
  connectionString: process.env.DATABASE_URL,
});
```

## Parameters

## Constructor Examples

You can instantiate `PostgresStore` in the following ways:

```ts
import { PostgresStore } from '@mastra/pg';

// Using a connection string only
const store1 = new PostgresStore({
  connectionString: 'postgresql://user:password@localhost:5432/mydb',
});

// Using a connection string with a custom schema name
const store2 = new PostgresStore({
  connectionString: 'postgresql://user:password@localhost:5432/mydb',
  schemaName: 'custom_schema', // optional
});

// Using individual connection parameters
const store3 = new PostgresStore({
  host: 'localhost',
  port: 5432,
  database: 'mydb',
  user: 'user',
  password: 'password',
});

// Individual parameters with schemaName
const store4 = new PostgresStore({
  host: 'localhost',
  port: 5432,
  database: 'mydb',
  user: 'user',
  password: 'password',
  schemaName: 'custom_schema', // optional
});
```

## Additional Notes

### Schema Management

The storage implementation handles schema creation and updates automatically. It creates the following tables:

- `threads`: Stores conversation threads
- `messages`: Stores individual messages
- `metadata`: Stores additional metadata for threads and messages

---
title: "Upstash Storage | Storage System | Mastra Core"
description: Documentation for the Upstash storage implementation in Mastra.
---

# Upstash Storage

[EN] Source: https://mastra.ai/en/reference/storage/upstash

The Upstash storage implementation provides a serverless-friendly storage solution using Upstash's Redis-compatible key-value store.

## Installation

```bash
npm install @mastra/upstash
```

## Usage

```typescript copy showLineNumbers
import { UpstashStore } from "@mastra/upstash";

const storage = new UpstashStore({
  url: process.env.UPSTASH_URL,
  token: process.env.UPSTASH_TOKEN,
});
```

## Parameters

## Additional Notes

### Key Structure

The Upstash storage implementation uses a key-value structure:

- Thread keys: `{prefix}thread:{threadId}`
- Message keys: `{prefix}message:{messageId}`
- Metadata keys: `{prefix}metadata:{entityId}`

### Serverless Benefits

Upstash storage is particularly well-suited for serverless deployments:

- No connection management needed
- Pay-per-request pricing
- Global replication options
- Edge-compatible

### Data Persistence

Upstash provides:

- Automatic data persistence
- Point-in-time recovery
- Cross-region replication options

### Performance Considerations

For optimal performance:

- Use appropriate key prefixes to organize data
- Monitor Redis memory usage
- Consider data expiration policies if needed

---
title: "Reference: MastraMCPClient | Tool Discovery | Mastra Docs"
description: API Reference for MastraMCPClient - A client implementation for the Model Context Protocol.
--- # MastraMCPClient [EN] Source: https://mastra.ai/en/reference/tools/client The `MastraMCPClient` class provides a client implementation for interacting with Model Context Protocol (MCP) servers. It handles connection management, resource discovery, and tool execution through the MCP protocol. ## Constructor Creates a new instance of the MastraMCPClient. ```typescript constructor({ name, version = '1.0.0', server, capabilities = {}, timeout = 60000, }: { name: string; server: MastraMCPServerDefinition; capabilities?: ClientCapabilities; version?: string; timeout?: number; }) ``` ### Parameters
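In summary, the constructor accepts the following options (taken from the signature above):

- `name` (`string`): Name identifier for this client instance
- `version` (`string`, optional, default: `1.0.0`): Client version
- `server` (`MastraMCPServerDefinition`): Server configuration (see below)
- `capabilities` (`ClientCapabilities`, optional, default: `{}`): Client capabilities configuration
- `timeout` (`number`, optional, default: `60000`): Timeout in milliseconds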
### MastraMCPServerDefinition MCP servers can be configured as either stdio-based or SSE-based servers. The configuration includes both server-specific settings and common options:
", isOptional: true, description: "For stdio servers: Environment variables to set for the command.", }, { name: "url", type: "URL", isOptional: true, description: "For SSE servers: The URL of the server.", }, { name: "requestInit", type: "RequestInit", isOptional: true, description: "For SSE servers: Request configuration for the fetch API.", }, { name: "eventSourceInit", type: "EventSourceInit", isOptional: true, description: "For SSE servers: Custom fetch configuration for SSE connections. Required when using custom headers.", }, { name: "logger", type: "LogHandler", isOptional: true, description: "Optional additional handler for logging.", }, { name: "timeout", type: "number", isOptional: true, description: "Server-specific timeout in milliseconds.", }, { name: "capabilities", type: "ClientCapabilities", isOptional: true, description: "Server-specific capabilities configuration.", }, { name: "enableServerLogs", type: "boolean", isOptional: true, defaultValue: "true", description: "Whether to enable logging for this server.", }, ]} /> ### LogHandler The `LogHandler` function takes a `LogMessage` object as its parameter and returns void. The `LogMessage` object has the following properties. The `LoggingLevel` type is a string enum with values: `debug`, `info`, `warn`, and `error`.
", isOptional: true, description: "Optional additional log details", }, ]} /> ## Methods ### connect() Establishes a connection with the MCP server. ```typescript async connect(): Promise ``` ### disconnect() Closes the connection with the MCP server. ```typescript async disconnect(): Promise ``` ### resources() Retrieves the list of available resources from the server. ```typescript async resources(): Promise ``` ### tools() Fetches and initializes available tools from the server, converting them into Mastra-compatible tool formats. ```typescript async tools(): Promise> ``` Returns an object mapping tool names to their corresponding Mastra tool implementations. ## Examples ### Using with Mastra Agent #### Example with Stdio Server ```typescript import { Agent } from "@mastra/core/agent"; import { MastraMCPClient } from "@mastra/mcp"; import { openai } from "@ai-sdk/openai"; // Initialize the MCP client using mcp/fetch as an example https://hub.docker.com/r/mcp/fetch // Visit https://github.com/docker/mcp-servers for other reference docker mcp servers const fetchClient = new MastraMCPClient({ name: "fetch", server: { command: "docker", args: ["run", "-i", "--rm", "mcp/fetch"], logger: (logMessage) => { console.log(`[${logMessage.level}] ${logMessage.message}`); }, }, }); // Create a Mastra Agent const agent = new Agent({ name: "Fetch agent", instructions: "You are able to fetch data from URLs on demand and discuss the response data with the user.", model: openai("gpt-4o-mini"), }); try { // Connect to the MCP server await fetchClient.connect(); // Gracefully handle process exits so the docker subprocess is cleaned up process.on("exit", () => { fetchClient.disconnect(); }); // Get available tools const tools = await fetchClient.tools(); // Use the agent with the MCP tools const response = await agent.generate( "Tell me about mastra.ai/docs. Tell me generally what this page is and the content it includes.", { toolsets: { fetch: tools, }, }, ); console.log("\n\n" + response.text); } catch (error) { console.error("Error:", error); } finally { // Always disconnect when done await fetchClient.disconnect(); } ``` ### Example with SSE Server ```typescript // Initialize the MCP client using an SSE server const sseClient = new MastraMCPClient({ name: "sse-client", server: { url: new URL("https://your-mcp-server.com/sse"), // Optional fetch request configuration - Note: requestInit alone isn't enough for SSE requestInit: { headers: { Authorization: "Bearer your-token", }, }, // Required for SSE connections with custom headers eventSourceInit: { fetch(input: Request | URL | string, init?: RequestInit) { const headers = new Headers(init?.headers || {}); headers.set('Authorization', 'Bearer your-token'); return fetch(input, { ...init, headers, }); }, }, // Optional additional logging configuration logger: (logMessage) => { console.log(`[${logMessage.level}] ${logMessage.serverName}: ${logMessage.message}`); }, // Disable server logs enableServerLogs: false }, }); // The rest of the usage is identical to the stdio example ``` ### Important Note About SSE Authentication When using SSE connections with authentication or custom headers, you need to configure both `requestInit` and `eventSourceInit`. This is because SSE connections use the browser's EventSource API, which doesn't support custom headers directly. The `eventSourceInit` configuration allows you to customize the underlying fetch request used for the SSE connection, ensuring your authentication headers are properly included. 
Without `eventSourceInit`, authentication headers specified in `requestInit` won't be included in the connection request, leading to 401 Unauthorized errors. ## Related Information - For managing multiple MCP servers in your application, see the [MCPConfiguration documentation](./mcp-configuration) - For more details about the Model Context Protocol, see the [@modelcontextprotocol/sdk documentation](https://github.com/modelcontextprotocol/typescript-sdk). --- title: "Reference: createDocumentChunkerTool() | Tools | Mastra Docs" description: Documentation for the Document Chunker Tool in Mastra, which splits documents into smaller chunks for efficient processing and retrieval. --- # createDocumentChunkerTool() [EN] Source: https://mastra.ai/en/reference/tools/document-chunker-tool The `createDocumentChunkerTool()` function creates a tool for splitting documents into smaller chunks for efficient processing and retrieval. It supports different chunking strategies and configurable parameters. ## Basic Usage ```typescript import { createDocumentChunkerTool, MDocument } from "@mastra/rag"; const document = new MDocument({ text: "Your document content here...", metadata: { source: "user-manual" } }); const chunker = createDocumentChunkerTool({ doc: document, params: { strategy: "recursive", size: 512, overlap: 50, separator: "\n" } }); const { chunks } = await chunker.execute(); ``` ## Parameters ### ChunkParams ## Returns ## Example with Custom Parameters ```typescript const technicalDoc = new MDocument({ text: longDocumentContent, metadata: { type: "technical", version: "1.0" } }); const chunker = createDocumentChunkerTool({ doc: technicalDoc, params: { strategy: "recursive", size: 1024, // Larger chunks overlap: 100, // More overlap separator: "\n\n" // Split on double newlines } }); const { chunks } = await chunker.execute(); // Process the chunks chunks.forEach((chunk, index) => { console.log(`Chunk ${index + 1} length: ${chunk.content.length}`); }); ``` ## Tool Details The chunker is created as a Mastra tool with the following properties: - **Tool ID**: `Document Chunker {strategy} {size}` - **Description**: `Chunks document using {strategy} strategy with size {size} and {overlap} overlap` - **Input Schema**: Empty object (no additional inputs required) - **Output Schema**: Object containing the chunks array ## Related - [MDocument](../rag/document.mdx) - [createVectorQueryTool](./vector-query-tool) --- title: "Reference: createGraphRAGTool() | RAG | Mastra Tools Docs" description: Documentation for the Graph RAG Tool in Mastra, which enhances RAG by building a graph of semantic relationships between documents. --- # createGraphRAGTool() [EN] Source: https://mastra.ai/en/reference/tools/graph-rag-tool The `createGraphRAGTool()` creates a tool that enhances RAG by building a graph of semantic relationships between documents. It uses the `GraphRAG` system under the hood to provide graph-based retrieval, finding relevant content through both direct similarity and connected relationships. 
## Usage Example

```typescript
import { openai } from "@ai-sdk/openai";
import { createGraphRAGTool } from "@mastra/rag";

const graphTool = createGraphRAGTool({
  vectorStoreName: "pinecone",
  indexName: "docs",
  model: openai.embedding('text-embedding-3-small'),
  graphOptions: {
    dimension: 1536,
    threshold: 0.7,
    randomWalkSteps: 100,
    restartProb: 0.15
  }
});
```

## Parameters

### GraphOptions

## Returns

The tool returns an object with:

## Default Tool Description

The default description focuses on:

- Analyzing relationships between documents
- Finding patterns and connections
- Answering complex queries

## Advanced Example

```typescript
const graphTool = createGraphRAGTool({
  vectorStoreName: "pinecone",
  indexName: "docs",
  model: openai.embedding('text-embedding-3-small'),
  graphOptions: {
    dimension: 1536,
    threshold: 0.8,        // Higher similarity threshold
    randomWalkSteps: 200,  // More exploration steps
    restartProb: 0.2       // Higher restart probability
  }
});
```

## Example with Custom Description

```typescript
const graphTool = createGraphRAGTool({
  vectorStoreName: "pinecone",
  indexName: "docs",
  model: openai.embedding('text-embedding-3-small'),
  description: "Analyze document relationships to find complex patterns and connections in our company's historical data"
});
```

This example shows how to customize the tool description for a specific use case while maintaining its core purpose of relationship analysis.

## Related

- [createVectorQueryTool](./vector-query-tool)
- [GraphRAG](../rag/graph-rag)

---
title: "Reference: MCPConfiguration | Tool Management | Mastra Docs"
description: API Reference for MCPConfiguration - A class for managing multiple Model Context Protocol servers and their tools.
---

# MCPConfiguration

[EN] Source: https://mastra.ai/en/reference/tools/mcp-configuration

The `MCPConfiguration` class provides a way to manage multiple MCP server connections and their tools in a Mastra application. It handles connection lifecycle, tool namespacing, and provides convenient access to tools across all configured servers.

## Constructor

Creates a new instance of the MCPConfiguration class.

```typescript
constructor({
  id?: string;
  servers: Record<string, MastraMCPServerDefinition>;
  timeout?: number;
}: MCPConfigurationOptions)
```

### MCPConfigurationOptions
", description: "A map of server configurations, where each key is a unique server identifier and the value is the server configuration.", }, { name: "timeout", type: "number", isOptional: true, defaultValue: "60000", description: "Global timeout value in milliseconds for all servers unless overridden in individual server configs.", }, ]} /> ### MastraMCPServerDefinition Each server in the `servers` map can be configured as either a stdio-based server or an SSE-based server. For details about the available configuration options, see [MastraMCPServerDefinition](./client#mastramcpserverdefinition) in the MastraMCPClient documentation. ## Methods ### getTools() Retrieves all tools from all configured servers, with tool names namespaced by their server name (in the format `serverName_toolName`) to prevent conflicts. Intended to be passed onto an Agent definition. ```ts new Agent({ tools: await mcp.getTools() }); ``` ### getToolsets() Returns an object mapping namespaced tool names (in the format `serverName.toolName`) to their tool implementations. Intended to be passed dynamically into the generate or stream method. ```typescript const res = await agent.stream(prompt, { toolsets: await mcp.getToolsets(), }); ``` ### disconnect() Disconnects from all MCP servers and cleans up resources. ```typescript async disconnect(): Promise ``` ## Examples ### Basic Usage ```typescript import { MCPConfiguration } from "@mastra/mcp"; import { Agent } from "@mastra/core/agent"; import { openai } from "@ai-sdk/openai"; const mcp = new MCPConfiguration({ servers: { stockPrice: { command: "npx", args: ["tsx", "stock-price.ts"], env: { API_KEY: "your-api-key", }, log: (logMessage) => { console.log(`[${logMessage.level}] ${logMessage.message}`); }, }, weather: { url: new URL("http://localhost:8080/sse"),∂ }, }, timeout: 30000, // Global 30s timeout }); // Create an agent with access to all tools const agent = new Agent({ name: "Multi-tool Agent", instructions: "You have access to multiple tool servers.", model: openai("gpt-4"), tools: await mcp.getTools(), }); ``` ### Using Toolsets in generate() or stream() ```typescript import { Agent } from "@mastra/core/agent"; import { MCPConfiguration } from "@mastra/mcp"; import { openai } from "@ai-sdk/openai"; // Create the agent first, without any tools const agent = new Agent({ name: "Multi-tool Agent", instructions: "You help users check stocks and weather.", model: openai("gpt-4"), }); // Later, configure MCP with user-specific settings const mcp = new MCPConfiguration({ servers: { stockPrice: { command: "npx", args: ["tsx", "stock-price.ts"], env: { API_KEY: "user-123-api-key", }, timeout: 20000, // Server-specific timeout }, weather: { url: new URL("http://localhost:8080/sse"), requestInit: { headers: { Authorization: `Bearer user-123-token`, }, }, }, }, }); // Pass all toolsets to stream() or generate() const response = await agent.stream( "How is AAPL doing and what is the weather?", { toolsets: await mcp.getToolsets(), }, ); ``` ## Resource Management The `MCPConfiguration` class includes built-in memory leak prevention for managing multiple instances: 1. Creating multiple instances with identical configurations without an `id` will throw an error to prevent memory leaks 2. If you need multiple instances with identical configurations, provide a unique `id` for each instance 3. Call `await configuration.disconnect()` before recreating an instance with the same configuration 4. 
If you only need one instance, consider moving the configuration to a higher scope to avoid recreation For example, if you try to create multiple instances with the same configuration without an `id`: ```typescript // First instance - OK const mcp1 = new MCPConfiguration({ servers: { /* ... */ }, }); // Second instance with same config - Will throw an error const mcp2 = new MCPConfiguration({ servers: { /* ... */ }, }); // To fix, either: // 1. Add unique IDs const mcp3 = new MCPConfiguration({ id: "instance-1", servers: { /* ... */ }, }); // 2. Or disconnect before recreating await mcp1.disconnect(); const mcp4 = new MCPConfiguration({ servers: { /* ... */ }, }); ``` ## Server Lifecycle MCPConfiguration handles server connections gracefully: 1. Automatic connection management for multiple servers 2. Graceful server shutdown to prevent error messages during development 3. Proper cleanup of resources when disconnecting ## Related Information - For details about individual MCP client configuration, see the [MastraMCPClient documentation](./client) - For more about the Model Context Protocol, see the [@modelcontextprotocol/sdk documentation](https://github.com/modelcontextprotocol/typescript-sdk) --- title: "Reference: createVectorQueryTool() | RAG | Mastra Tools Docs" description: Documentation for the Vector Query Tool in Mastra, which facilitates semantic search over vector stores with filtering and reranking capabilities. --- # createVectorQueryTool() [EN] Source: https://mastra.ai/en/reference/tools/vector-query-tool The `createVectorQueryTool()` function creates a tool for semantic search over vector stores. It supports filtering, reranking, and integrates with various vector store backends. ## Basic Usage ```typescript import { openai } from '@ai-sdk/openai'; import { createVectorQueryTool } from "@mastra/rag"; const queryTool = createVectorQueryTool({ vectorStoreName: "pinecone", indexName: "docs", model: openai.embedding('text-embedding-3-small'), }); ``` ## Parameters ### RerankConfig ## Returns The tool returns an object with: ## Default Tool Description The default description focuses on: - Finding relevant information in stored knowledge - Answering user questions - Retrieving factual content ## Result Handling The tool determines the number of results to return based on the user's query, with a default of 10 results. This can be adjusted based on the query requirements. ## Example with Filters ```typescript const queryTool = createVectorQueryTool({ vectorStoreName: "pinecone", indexName: "docs", model: openai.embedding('text-embedding-3-small'), enableFilters: true, }); ``` With filtering enabled, the tool processes queries to construct metadata filters that combine with semantic search. The process works as follows: 1. A user makes a query with specific filter requirements like "Find content where the 'version' field is greater than 2.0" 2. The agent analyzes the query and constructs the appropriate filters: ```typescript { "version": { "$gt": 2.0 } } ``` This agent-driven approach: - Processes natural language queries into filter specifications - Implements vector store-specific filter syntax - Translates query terms to filter operators For detailed filter syntax and store-specific capabilities, see the [Metadata Filters](../rag/metadata-filters) documentation. For an example of how agent-driven filtering works, see the [Agent-Driven Metadata Filtering](../../../examples/rag/usage/filter-rag) example. 
## Example with Reranking ```typescript const queryTool = createVectorQueryTool({ vectorStoreName: "milvus", indexName: "documentation", model: openai.embedding('text-embedding-3-small'), reranker: { model: openai('gpt-4o-mini'), options: { weights: { semantic: 0.5, // Semantic relevance weight vector: 0.3, // Vector similarity weight position: 0.2 // Original position weight }, topK: 5 } } }); ``` Reranking improves result quality by combining: - Semantic relevance: Using LLM-based scoring of text similarity - Vector similarity: Original vector distance scores - Position bias: Consideration of original result ordering - Query analysis: Adjustments based on query characteristics The reranker processes the initial vector search results and returns a reordered list optimized for relevance. ## Example with Custom Description ```typescript const queryTool = createVectorQueryTool({ vectorStoreName: "pinecone", indexName: "docs", model: openai.embedding('text-embedding-3-small'), description: "Search through document archives to find relevant information for answering questions about company policies and procedures" }); ``` This example shows how to customize the tool description for a specific use case while maintaining its core purpose of information retrieval. ## Tool Details The tool is created with: - **ID**: `VectorQuery {vectorStoreName} {indexName} Tool` - **Input Schema**: Requires queryText and filter objects - **Output Schema**: Returns relevantContext string ## Related - [rerank()](../rag/rerank) - [createGraphRAGTool](./graph-rag-tool) --- title: "Reference: Azure Voice | Voice Providers | Mastra Docs" description: "Documentation for the AzureVoice class, providing text-to-speech and speech-to-text capabilities using Azure Cognitive Services." --- # Azure [EN] Source: https://mastra.ai/en/reference/voice/azure The AzureVoice class in Mastra provides text-to-speech and speech-to-text capabilities using Microsoft Azure Cognitive Services. ## Usage Example ```typescript import { AzureVoice } from '@mastra/voice-azure'; // Initialize with configuration const voice = new AzureVoice({ speechModel: { name: 'neural', apiKey: 'your-azure-speech-api-key', region: 'eastus' }, listeningModel: { name: 'whisper', apiKey: 'your-azure-speech-api-key', region: 'eastus' }, speaker: 'en-US-JennyNeural' // Default voice }); // Convert text to speech const audioStream = await voice.speak('Hello, how can I help you?', { speaker: 'en-US-GuyNeural', // Override default voice style: 'cheerful' // Voice style }); // Convert speech to text const text = await voice.listen(audioStream, { filetype: 'wav', language: 'en-US' }); ``` ## Configuration ### Constructor Options ### AzureSpeechConfig ## Methods ### speak() Converts text to speech using Azure's neural text-to-speech service. Returns: `Promise` ### listen() Transcribes audio using Azure's speech-to-text service. Returns: `Promise` ### getSpeakers() Returns an array of available voice options, where each node contains: ## Notes - API keys can be provided via constructor options or environment variables (AZURE_SPEECH_KEY and AZURE_SPEECH_REGION) - Azure offers a wide range of neural voices across many languages - Some voices support speaking styles like cheerful, sad, angry, etc. 
- Speech recognition supports multiple audio formats and languages - Azure's speech services provide high-quality neural voices with natural-sounding speech --- title: "Reference: Cloudflare Voice | Voice Providers | Mastra Docs" description: "Documentation for the CloudflareVoice class, providing text-to-speech capabilities using Cloudflare Workers AI." --- # Cloudflare [EN] Source: https://mastra.ai/en/reference/voice/cloudflare The CloudflareVoice class in Mastra provides text-to-speech capabilities using Cloudflare Workers AI. This provider specializes in efficient, low-latency speech synthesis suitable for edge computing environments. ## Usage Example ```typescript import { CloudflareVoice } from '@mastra/voice-cloudflare'; // Initialize with configuration const voice = new CloudflareVoice({ speechModel: { name: '@cf/meta/m2m100-1.2b', apiKey: 'your-cloudflare-api-token', accountId: 'your-cloudflare-account-id' }, speaker: 'en-US-1' // Default voice }); // Convert text to speech const audioStream = await voice.speak('Hello, how can I help you?', { speaker: 'en-US-2', // Override default voice }); // Get available voices const speakers = await voice.getSpeakers(); console.log(speakers); ``` ## Configuration ### Constructor Options ### CloudflareSpeechConfig ## Methods ### speak() Converts text to speech using Cloudflare's text-to-speech service. Returns: `Promise` ### getSpeakers() Returns an array of available voice options, where each node contains: ## Notes - API tokens can be provided via constructor options or environment variables (CLOUDFLARE_API_TOKEN and CLOUDFLARE_ACCOUNT_ID) - Cloudflare Workers AI is optimized for edge computing with low latency - This provider only supports text-to-speech (TTS) functionality, not speech-to-text (STT) - The service integrates well with other Cloudflare Workers products - For production use, ensure your Cloudflare account has the appropriate Workers AI subscription - Voice options are more limited compared to some other providers, but performance at the edge is excellent ## Related Providers If you need speech-to-text capabilities in addition to text-to-speech, consider using one of these providers: - [OpenAI](./openai) - Provides both TTS and STT - [Google](./google) - Provides both TTS and STT - [Azure](./azure) - Provides both TTS and STT --- title: "Reference: CompositeVoice | Voice Providers | Mastra Docs" description: "Documentation for the CompositeVoice class, which enables combining multiple voice providers for flexible text-to-speech and speech-to-text operations." --- # CompositeVoice [EN] Source: https://mastra.ai/en/reference/voice/composite-voice The CompositeVoice class allows you to combine different voice providers for text-to-speech and speech-to-text operations. This is particularly useful when you want to use the best provider for each operation - for example, using OpenAI for speech-to-text and PlayAI for text-to-speech. CompositeVoice is used internally by the Agent class to provide flexible voice capabilities. 
## Usage Example

```typescript
import { CompositeVoice } from "@mastra/core/voice";
import { OpenAIVoice } from "@mastra/voice-openai";
import { PlayAIVoice } from "@mastra/voice-playai";

// Create voice providers
const openai = new OpenAIVoice();
const playai = new PlayAIVoice();

// Use OpenAI for listening (speech-to-text) and PlayAI for speaking (text-to-speech)
const voice = new CompositeVoice({
  input: openai,
  output: playai,
});

// Convert speech to text using OpenAI
const text = await voice.listen(audioStream);

// Convert text to speech using PlayAI
const audio = await voice.speak("Hello, world!");
```

## Constructor Parameters

## Methods

### speak()

Converts text to speech using the configured speaking provider.

Notes:

- If no speaking provider is configured, this method will throw an error
- Options are passed through to the configured speaking provider
- Returns a stream of audio data

### listen()

Converts speech to text using the configured listening provider.

Notes:

- If no listening provider is configured, this method will throw an error
- Options are passed through to the configured listening provider
- Returns either a string or a stream of transcribed text, depending on the provider

### getSpeakers()

Returns a list of available voices from the speaking provider, where each node contains:

Notes:

- Returns voices from the speaking provider only
- If no speaking provider is configured, returns an empty array
- Each voice object will have at least a voiceId property
- Additional voice properties depend on the speaking provider

---
title: "Reference: Deepgram Voice | Voice Providers | Mastra Docs"
description: "Documentation for the Deepgram voice implementation, providing text-to-speech and speech-to-text capabilities with multiple voice models and languages."
---

# Deepgram

[EN] Source: https://mastra.ai/en/reference/voice/deepgram

The Deepgram voice implementation in Mastra provides text-to-speech (TTS) and speech-to-text (STT) capabilities using Deepgram's API. It supports multiple voice models and languages, with configurable options for both speech synthesis and transcription.

## Usage Example

```typescript
import { DeepgramVoice } from "@mastra/voice-deepgram";

// Initialize with default configuration (uses DEEPGRAM_API_KEY environment variable)
const voice = new DeepgramVoice();

// Initialize with custom configuration
const voice = new DeepgramVoice({
  speechModel: {
    name: 'aura',
    apiKey: 'your-api-key',
  },
  listeningModel: {
    name: 'nova-2',
    apiKey: 'your-api-key',
  },
  speaker: 'asteria-en',
});

// Text-to-Speech
const audioStream = await voice.speak("Hello, world!");

// Speech-to-Text
const transcript = await voice.listen(audioStream);
```

## Constructor Parameters

### DeepgramVoiceConfig

- `properties` (`Record<string, any>`, optional): Additional properties to pass to the Deepgram API
- `language` (`string`, optional): Language code for the model

## Methods

### speak()

Converts text to speech using the configured speech model and voice.

Returns: `Promise<NodeJS.ReadableStream>`

### listen()

Converts speech to text using the configured listening model.

Returns: `Promise<string>`

### getSpeakers()

Returns a list of available voice options.

---
title: "Reference: ElevenLabs Voice | Voice Providers | Mastra Docs"
description: "Documentation for the ElevenLabs voice implementation, offering high-quality text-to-speech capabilities with multiple voice models and natural-sounding synthesis."
---

# ElevenLabs

[EN] Source: https://mastra.ai/en/reference/voice/elevenlabs

The ElevenLabs voice implementation in Mastra provides high-quality text-to-speech (TTS) and speech-to-text (STT) capabilities using the ElevenLabs API.

## Usage Example

```typescript
import { ElevenLabsVoice } from "@mastra/voice-elevenlabs";

// Initialize with default configuration (uses ELEVENLABS_API_KEY environment variable)
const voice = new ElevenLabsVoice();

// Initialize with custom configuration
const voice = new ElevenLabsVoice({
  speechModel: {
    name: 'eleven_multilingual_v2',
    apiKey: 'your-api-key',
  },
  speaker: 'custom-speaker-id',
});

// Text-to-Speech
const audioStream = await voice.speak("Hello, world!");

// Get available speakers
const speakers = await voice.getSpeakers();
```

## Constructor Parameters

### ElevenLabsVoiceConfig

## Methods

### speak()

Converts text to speech using the configured speech model and voice.

Returns: `Promise<NodeJS.ReadableStream>`

### getSpeakers()

Returns an array of available voice options, where each node contains:

### listen()

Converts audio input to text using ElevenLabs Speech-to-Text API.

The options object supports the following properties:

Returns: `Promise<string>` - A Promise that resolves to the transcribed text

## Important Notes

1. An ElevenLabs API key is required. Set it via the `ELEVENLABS_API_KEY` environment variable or pass it in the constructor.
2. The default speaker is set to Aria (ID: '9BWtsMINqrJLrRacOk9x').
3. Available speakers can be retrieved using the `getSpeakers()` method, which returns detailed information about each voice including language and gender.

---
title: "Reference: Google Voice | Voice Providers | Mastra Docs"
description: "Documentation for the Google Voice implementation, providing text-to-speech and speech-to-text capabilities."
---

# Google

[EN] Source: https://mastra.ai/en/reference/voice/google

The Google Voice implementation in Mastra provides both text-to-speech (TTS) and speech-to-text (STT) capabilities using Google Cloud services. It supports multiple voices, languages, and advanced audio configuration options.

## Usage Example

```typescript
import { GoogleVoice } from "@mastra/voice-google";

// Initialize with default configuration (uses GOOGLE_API_KEY environment variable)
const voice = new GoogleVoice();

// Initialize with custom configuration
const voice = new GoogleVoice({
  speechModel: {
    apiKey: 'your-speech-api-key',
  },
  listeningModel: {
    apiKey: 'your-listening-api-key',
  },
  speaker: 'en-US-Casual-K',
});

// Text-to-Speech
const audioStream = await voice.speak("Hello, world!", {
  languageCode: 'en-US',
  audioConfig: {
    audioEncoding: 'LINEAR16',
  },
});

// Speech-to-Text
const transcript = await voice.listen(audioStream, {
  config: {
    encoding: 'LINEAR16',
    languageCode: 'en-US',
  },
});

// Get available voices for a specific language
const voices = await voice.getSpeakers({ languageCode: 'en-US' });
```

## Constructor Parameters

### GoogleModelConfig

## Methods

### speak()

Converts text to speech using Google Cloud Text-to-Speech service.

Returns: `Promise<NodeJS.ReadableStream>`

### listen()

Converts speech to text using Google Cloud Speech-to-Text service.

Returns: `Promise<string>`

### getSpeakers()

Returns an array of available voice options, where each node contains:

## Important Notes

1. A Google Cloud API key is required. Set it via the `GOOGLE_API_KEY` environment variable or pass it in the constructor.
2. The default voice is set to 'en-US-Casual-K'.
3. Both text-to-speech and speech-to-text services use LINEAR16 as the default audio encoding.
4. The `speak()` method supports advanced audio configuration through the Google Cloud Text-to-Speech API.
5. The `listen()` method supports various recognition configurations through the Google Cloud Speech-to-Text API.
6. Available voices can be filtered by language code using the `getSpeakers()` method.

---
title: "Reference: MastraVoice | Voice Providers | Mastra Docs"
description: "Documentation for the MastraVoice abstract base class, which defines the core interface for all voice services in Mastra, including speech-to-speech capabilities."
---

# MastraVoice

[EN] Source: https://mastra.ai/en/reference/voice/mastra-voice

The MastraVoice class is an abstract base class that defines the core interface for voice services in Mastra. All voice provider implementations (like OpenAI, Deepgram, PlayAI, Speechify) extend this class to provide their specific functionality. The class now includes support for real-time speech-to-speech capabilities through WebSocket connections.

## Usage Example

```typescript
import { MastraVoice } from "@mastra/core/voice";

// Create a voice provider implementation
class MyVoiceProvider extends MastraVoice {
  constructor(config: {
    speechModel?: BuiltInModelConfig;
    listeningModel?: BuiltInModelConfig;
    speaker?: string;
    realtimeConfig?: {
      model?: string;
      apiKey?: string;
      options?: unknown;
    };
  }) {
    super({
      speechModel: config.speechModel,
      listeningModel: config.listeningModel,
      speaker: config.speaker,
      realtimeConfig: config.realtimeConfig
    });
  }

  // Implement required abstract methods
  async speak(
    input: string | NodeJS.ReadableStream,
    options?: { speaker?: string }
  ): Promise<NodeJS.ReadableStream | void> {
    // Implement text-to-speech conversion
  }

  async listen(
    audioStream: NodeJS.ReadableStream,
    options?: unknown
  ): Promise<string | NodeJS.ReadableStream | void> {
    // Implement speech-to-text conversion
  }

  async getSpeakers(): Promise<Array<{ voiceId: string; [key: string]: unknown }>> {
    // Return list of available voices
  }

  // Optional speech-to-speech methods
  async connect(): Promise<void> {
    // Establish WebSocket connection for speech-to-speech communication
  }

  async send(audioData: NodeJS.ReadableStream | Int16Array): Promise<void> {
    // Stream audio data in speech-to-speech
  }

  async answer(): Promise<void> {
    // Trigger voice provider to respond
  }

  addTools(tools: Array<unknown>): void {
    // Add tools for the voice provider to use
  }

  close(): void {
    // Close WebSocket connection
  }

  on(event: string, callback: (data: unknown) => void): void {
    // Register event listener
  }

  off(event: string, callback: (data: unknown) => void): void {
    // Remove event listener
  }
}
```

## Constructor Parameters

### BuiltInModelConfig

### RealtimeConfig

## Abstract Methods

These methods must be implemented by any class extending MastraVoice.

### speak()

Converts text to speech using the configured speech model.

```typescript
abstract speak(
  input: string | NodeJS.ReadableStream,
  options?: {
    speaker?: string;
    [key: string]: unknown;
  }
): Promise<NodeJS.ReadableStream | void>
```

Purpose:

- Takes text input and converts it to speech using the provider's text-to-speech service
- Supports both string and stream input for flexibility
- Allows overriding the default speaker/voice through options
- Returns a stream of audio data that can be played or saved
- May return void if the audio is handled by emitting 'speaking' event

### listen()

Converts speech to text using the configured listening model.
```typescript
abstract listen(
  audioStream: NodeJS.ReadableStream,
  options?: {
    [key: string]: unknown;
  }
): Promise<string | NodeJS.ReadableStream | void>
```

Purpose:

- Takes an audio stream and converts it to text using the provider's speech-to-text service
- Supports provider-specific options for transcription configuration
- Can return either a complete text transcription or a stream of transcribed text
- Not all providers support this functionality (e.g., PlayAI, Speechify)
- May return void if the transcription is handled by emitting 'writing' event

### getSpeakers()

Returns a list of available voices supported by the provider.

```typescript
abstract getSpeakers(): Promise<Array<{ voiceId: string; [key: string]: unknown }>>
```

Purpose:

- Retrieves the list of available voices/speakers from the provider
- Each voice must have at least a voiceId property
- Providers can include additional metadata about each voice
- Used to discover available voices for text-to-speech conversion

## Optional Methods

These methods have default implementations but can be overridden by voice providers that support speech-to-speech capabilities.

### connect()

Establishes a WebSocket or WebRTC connection for communication.

```typescript
connect(config?: unknown): Promise<void>
```

Purpose:

- Initializes a connection to the voice service for communication
- Must be called before using features like send() or answer()
- Returns a Promise that resolves when the connection is established
- Configuration is provider-specific

### send()

Streams audio data in real-time to the voice provider.

```typescript
send(audioData: NodeJS.ReadableStream | Int16Array): Promise<void>
```

Purpose:

- Sends audio data to the voice provider for real-time processing
- Useful for continuous audio streaming scenarios like live microphone input
- Supports both ReadableStream and Int16Array audio formats
- Must be in connected state before calling this method

### answer()

Triggers the voice provider to generate a response.

```typescript
answer(): Promise<void>
```

Purpose:

- Sends a signal to the voice provider to generate a response
- Used in real-time conversations to prompt the AI to respond
- Response will be emitted through the event system (e.g., 'speaking' event)

### addTools()

Equips the voice provider with tools that can be used during conversations.

```typescript
addTools(tools: Array<unknown>): void
```

Purpose:

- Adds tools that the voice provider can use during conversations
- Tools can extend the capabilities of the voice provider
- Implementation is provider-specific

### close()

Disconnects from the WebSocket or WebRTC connection.

```typescript
close(): void
```

Purpose:

- Closes the connection to the voice service
- Cleans up resources and stops any ongoing real-time processing
- Should be called when you're done with the voice instance

### on()

Registers an event listener for voice events.

```typescript
on<E extends string>(
  event: E,
  callback: (data: E extends keyof VoiceEventMap ? VoiceEventMap[E] : unknown) => void,
): void
```

Purpose:

- Registers a callback function to be called when the specified event occurs
- Standard events include 'speaking', 'writing', and 'error'
- Providers can emit custom events as well
- Event data structure depends on the event type

### off()

Removes an event listener.

```typescript
off<E extends string>(
  event: E,
  callback: (data: E extends keyof VoiceEventMap ? VoiceEventMap[E] : unknown) => void,
): void
```

Purpose:

- Removes a previously registered event listener
- Used to clean up event handlers when they're no longer needed

## Event System

The MastraVoice class includes an event system for real-time communication.
Standard event types include:

## Protected Properties

## Telemetry Support

MastraVoice includes built-in telemetry support through the `traced` method, which wraps method calls with performance tracking and error monitoring.

## Notes

- MastraVoice is an abstract class and cannot be instantiated directly
- Implementations must provide concrete implementations for all abstract methods
- The class provides a consistent interface across different voice service providers
- Speech-to-speech capabilities are optional and provider-specific
- The event system enables asynchronous communication for real-time interactions
- Telemetry is automatically handled for all method calls

---
title: "Reference: Murf Voice | Voice Providers | Mastra Docs"
description: "Documentation for the Murf voice implementation, providing text-to-speech capabilities."
---

# Murf

[EN] Source: https://mastra.ai/en/reference/voice/murf

The Murf voice implementation in Mastra provides text-to-speech (TTS) capabilities using Murf's AI voice service. It supports multiple voices across different languages.

## Usage Example

```typescript
import { MurfVoice } from "@mastra/voice-murf";

// Initialize with default configuration (uses MURF_API_KEY environment variable)
const voice = new MurfVoice();

// Initialize with custom configuration
const voice = new MurfVoice({
  speechModel: {
    name: 'GEN2',
    apiKey: 'your-api-key',
    properties: {
      format: 'MP3',
      rate: 1.0,
      pitch: 1.0,
      sampleRate: 48000,
      channelType: 'STEREO',
    },
  },
  speaker: 'en-US-cooper',
});

// Text-to-Speech with default settings
const audioStream = await voice.speak("Hello, world!");

// Text-to-Speech with custom properties
const audioStream = await voice.speak("Hello, world!", {
  speaker: 'en-UK-hazel',
  properties: {
    format: 'WAV',
    rate: 1.2,
    style: 'casual',
  },
});

// Get available voices
const voices = await voice.getSpeakers();
```

## Constructor Parameters

### MurfConfig

### Speech Properties

- `pronunciationDictionary` (`Record<string, string>`, optional): Custom pronunciation mappings
- `encodeAsBase64` (`boolean`, optional): Whether to encode the audio as base64
- `variation` (`number`, optional): Voice variation parameter
- `audioDuration` (`number`, optional): Target audio duration in seconds
- `multiNativeLocale` (`string`, optional): Locale for multilingual support

## Methods

### speak()

Converts text to speech using Murf's API.

Returns: `Promise<NodeJS.ReadableStream>`

### getSpeakers()

Returns an array of available voice options, where each node contains:

### listen()

This method is not supported by Murf and will throw an error. Murf does not provide speech-to-text functionality.

## Important Notes

1. A Murf API key is required. Set it via the `MURF_API_KEY` environment variable or pass it in the constructor.
2. The service uses GEN2 as the default model version.
3. Speech properties can be set at the constructor level and overridden per request.
4. The service supports extensive audio customization through properties like format, sample rate, and channel type.
5. Speech-to-text functionality is not supported.

---
title: "Reference: OpenAI Realtime Voice | Voice Providers | Mastra Docs"
description: "Documentation for the OpenAIRealtimeVoice class, providing real-time text-to-speech and speech-to-text capabilities via WebSockets."
---

# OpenAI Realtime Voice

[EN] Source: https://mastra.ai/en/reference/voice/openai-realtime

The OpenAIRealtimeVoice class provides real-time voice interaction capabilities using OpenAI's WebSocket-based API. It supports real-time speech-to-speech, voice activity detection, and event-based audio streaming.

## Usage Example

```typescript
import { OpenAIRealtimeVoice } from "@mastra/voice-openai-realtime";
import { playAudio, getMicrophoneStream } from "@mastra/node-audio";

// Initialize with default configuration using environment variables
const voice = new OpenAIRealtimeVoice();

// Or initialize with specific configuration
const voiceWithConfig = new OpenAIRealtimeVoice({
  chatModel: {
    apiKey: 'your-openai-api-key',
    model: 'gpt-4o-mini-realtime-preview-2024-12-17',
    options: {
      sessionConfig: {
        turn_detection: {
          type: 'server_vad',
          threshold: 0.6,
          silence_duration_ms: 1200
        }
      }
    }
  },
  speaker: 'alloy' // Default voice
});

// Establish connection
await voice.connect();

// Set up event listeners
voice.on('speaker', ({ audio }) => {
  // Handle audio data (Int16Array) pcm format by default
  playAudio(audio);
});

voice.on('writing', ({ text, role }) => {
  // Handle transcribed text
  console.log(`${role}: ${text}`);
});

// Convert text to speech
await voice.speak('Hello, how can I help you today?', {
  speaker: 'echo' // Override default voice
});

// Process audio input
const microphoneStream = getMicrophoneStream();
await voice.send(microphoneStream);

// When done, disconnect
voice.close();
```

## Configuration

### Constructor Options

### chatModel

### options

### Voice Activity Detection (VAD) Configuration

## Methods

### connect()

Establishes a connection to the OpenAI realtime service. Must be called before using speak, listen, or send functions.

Returns: `Promise<void>` that resolves when the connection is established.

### speak()

Emits a speaking event using the configured voice model. Can accept either a string or a readable stream as input.

Returns: `Promise<void>`

### listen()

Processes audio input for speech recognition. Takes a readable stream of audio data and emits a 'listening' event with the transcribed text.

Returns: `Promise<void>`

### send()

Streams audio data in real-time to the OpenAI service for continuous audio streaming scenarios like live microphone input.

Returns: `Promise<void>`

### updateConfig()

Updates the session configuration for the voice instance. This can be used to modify voice settings, turn detection, and other parameters.

Returns: `void`

### addTools()

Adds a set of tools to the voice instance. Tools allow the model to perform additional actions during conversations. When OpenAIRealtimeVoice is added to an Agent, any tools configured for the Agent will automatically be available to the voice interface.

Returns: `void`

### close()

Disconnects from the OpenAI realtime session and cleans up resources. Should be called when you're done with the voice instance.

Returns: `void`

### getSpeakers()

Returns a list of available voice speakers.

Returns: `Promise<Array<{ voiceId: string; [key: string]: unknown }>>`

### on()

Registers an event listener for voice events.

Returns: `void`

### off()

Removes a previously registered event listener.
Returns: `void` ## Events The OpenAIRealtimeVoice class emits the following events: ### OpenAI Realtime Events You can also listen to [OpenAI Realtime utility events](https://github.com/openai/openai-realtime-api-beta#reference-client-utility-events) by prefixing with 'openAIRealtime:': ## Available Voices The following voice options are available: - `alloy`: Neutral and balanced - `ash`: Clear and precise - `ballad`: Melodic and smooth - `coral`: Warm and friendly - `echo`: Resonant and deep - `sage`: Calm and thoughtful - `shimmer`: Bright and energetic - `verse`: Versatile and expressive ## Notes - API keys can be provided via constructor options or the `OPENAI_API_KEY` environment variable - The OpenAI Realtime Voice API uses WebSockets for real-time communication - Server-side Voice Activity Detection (VAD) provides better accuracy for speech detection - All audio data is processed as Int16Array format - The voice instance must be connected with `connect()` before using other methods - Always call `close()` when done to properly clean up resources - Memory management is handled by OpenAI Realtime API --- title: "Reference: OpenAI Voice | Voice Providers | Mastra Docs" description: "Documentation for the OpenAIVoice class, providing text-to-speech and speech-to-text capabilities." --- # OpenAI [EN] Source: https://mastra.ai/en/reference/voice/openai The OpenAIVoice class in Mastra provides text-to-speech and speech-to-text capabilities using OpenAI's models. ## Usage Example ```typescript import { OpenAIVoice } from '@mastra/voice-openai'; // Initialize with default configuration using environment variables const voice = new OpenAIVoice(); // Or initialize with specific configuration const voiceWithConfig = new OpenAIVoice({ speechModel: { name: 'tts-1-hd', apiKey: 'your-openai-api-key' }, listeningModel: { name: 'whisper-1', apiKey: 'your-openai-api-key' }, speaker: 'alloy' // Default voice }); // Convert text to speech const audioStream = await voice.speak('Hello, how can I help you?', { speaker: 'nova', // Override default voice speed: 1.2 // Adjust speech speed }); // Convert speech to text const text = await voice.listen(audioStream, { filetype: 'mp3' }); ``` ## Configuration ### Constructor Options ### OpenAIConfig ## Methods ### speak() Converts text to speech using OpenAI's text-to-speech models. Returns: `Promise` ### listen() Transcribes audio using OpenAI's Whisper model. Returns: `Promise` ### getSpeakers() Returns an array of available voice options, where each node contains: ## Notes - API keys can be provided via constructor options or the `OPENAI_API_KEY` environment variable - The `tts-1-hd` model provides higher quality audio but may have slower processing times - Speech recognition supports multiple audio formats including mp3, wav, and webm --- title: "Reference: PlayAI Voice | Voice Providers | Mastra Docs" description: "Documentation for the PlayAI voice implementation, providing text-to-speech capabilities." --- # PlayAI [EN] Source: https://mastra.ai/en/reference/voice/playai The PlayAI voice implementation in Mastra provides text-to-speech capabilities using PlayAI's API. 
## Usage Example

```typescript
import { PlayAIVoice } from "@mastra/voice-playai";

// Initialize with default configuration (uses PLAYAI_API_KEY environment variable and PLAYAI_USER_ID environment variable)
const voice = new PlayAIVoice();

// Initialize with custom configuration
const voice = new PlayAIVoice({
  speechModel: {
    name: 'PlayDialog',
    apiKey: process.env.PLAYAI_API_KEY,
    userId: process.env.PLAYAI_USER_ID
  },
  speaker: 'Angelo' // Default voice
});

// Convert text to speech with a specific voice
const audioStream = await voice.speak("Hello, world!", {
  speaker: 's3://voice-cloning-zero-shot/b27bc13e-996f-4841-b584-4d35801aea98/original/manifest.json' // Dexter voice
});
```

## Constructor Parameters

### PlayAIConfig

## Methods

### speak()

Converts text to speech using the configured speech model and voice.

Returns: `Promise<NodeJS.ReadableStream>`.

### getSpeakers()

Returns an array of available voice options, where each node contains:

### listen()

This method is not supported by PlayAI and will throw an error. PlayAI does not provide speech-to-text functionality.

## Notes

- PlayAI requires both an API key and a user ID for authentication
- The service offers two models: 'PlayDialog' and 'Play3.0-mini'
- Each voice has a unique S3 manifest ID that must be used when making API calls

---
title: "Reference: Sarvam Voice | Voice Providers | Mastra Docs"
description: "Documentation for the Sarvam class, providing text-to-speech and speech-to-text capabilities."
---

# Sarvam

[EN] Source: https://mastra.ai/en/reference/voice/sarvam

The SarvamVoice class in Mastra provides text-to-speech and speech-to-text capabilities using Sarvam AI models.

## Usage Example

```typescript
import { SarvamVoice } from "@mastra/voice-sarvam";

// Initialize with default configuration using environment variables
const voice = new SarvamVoice();

// Or initialize with specific configuration
const voiceWithConfig = new SarvamVoice({
  speechModel: {
    model: "bulbul:v1",
    apiKey: process.env.SARVAM_API_KEY!,
    language: "en-IN",
    properties: {
      pitch: 0,
      pace: 1.65,
      loudness: 1.5,
      speech_sample_rate: 8000,
      enable_preprocessing: false,
      eng_interpolation_wt: 123,
    },
  },
  listeningModel: {
    model: "saarika:v2",
    apiKey: process.env.SARVAM_API_KEY!,
    languageCode: "en-IN",
    filetype: "wav",
  },
  speaker: "meera", // Default voice
});

// Convert text to speech
const audioStream = await voice.speak("Hello, how can I help you?");

// Convert speech to text
const text = await voice.listen(audioStream, {
  filetype: "wav",
});
```

### Sarvam API Docs

- https://docs.sarvam.ai/api-reference-docs/endpoints/text-to-speech

## Configuration

### Constructor Options

### SarvamVoiceConfig

### SarvamListenOptions

## Methods

### speak()

Converts text to speech using Sarvam's text-to-speech models.

Returns: `Promise<NodeJS.ReadableStream>`

### listen()

Transcribes audio using Sarvam's speech recognition models.

Returns: `Promise<string>`

### getSpeakers()

Returns an array of available voice options.

Returns: `Promise<Array<{ voiceId: string }>>`

## Notes

- API key can be provided via constructor options or the `SARVAM_API_KEY` environment variable
- If no API key is provided, the constructor will throw an error
- The service communicates with the Sarvam AI API at `https://api.sarvam.ai`
- Audio is returned as a stream containing binary audio data
- Speech recognition supports mp3 and wav audio formats

---
title: "Reference: Speechify Voice | Voice Providers | Mastra Docs"
description: "Documentation for the Speechify voice implementation, providing text-to-speech capabilities."
---

# Speechify

[EN] Source: https://mastra.ai/en/reference/voice/speechify

The Speechify voice implementation in Mastra provides text-to-speech capabilities using Speechify's API.

## Usage Example

```typescript
import { SpeechifyVoice } from "@mastra/voice-speechify";

// Initialize with default configuration (uses SPEECHIFY_API_KEY environment variable)
const voice = new SpeechifyVoice();

// Or initialize with custom configuration
const voiceWithConfig = new SpeechifyVoice({
  speechModel: {
    name: 'simba-english',
    apiKey: 'your-api-key'
  },
  speaker: 'george' // Default voice
});

// Convert text to speech
const audioStream = await voice.speak("Hello, world!", {
  speaker: 'henry', // Override default voice
});
```

## Constructor Parameters

### SpeechifyConfig

## Methods

### speak()

Converts text to speech using the configured speech model and voice.

Returns: `Promise<NodeJS.ReadableStream>`

### getSpeakers()

Returns an array of available voice options, where each voice option contains:

### listen()

This method is not supported by Speechify and will throw an error. Speechify does not provide speech-to-text functionality.

## Notes

- Speechify requires an API key for authentication
- The default model is 'simba-english'
- Speech-to-text functionality is not supported
- Additional audio stream options can be passed through the speak() method's options parameter

---
title: "Reference: voice.addInstructions() | Voice Providers | Mastra Docs"
description: "Documentation for the addInstructions() method available in voice providers, which adds instructions to guide the voice model's behavior."
---

# voice.addInstructions()

[EN] Source: https://mastra.ai/en/reference/voice/voice.addInstructions

The `addInstructions()` method equips a voice provider with instructions that guide the model's behavior during real-time interactions. This is particularly useful for real-time voice providers that maintain context across a conversation.

## Usage Example

```typescript
import { OpenAIRealtimeVoice } from "@mastra/voice-openai-realtime";
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";

// Initialize a real-time voice provider
const voice = new OpenAIRealtimeVoice({
  realtimeConfig: {
    model: "gpt-4o-mini-realtime",
    apiKey: process.env.OPENAI_API_KEY,
  },
});

// Create an agent with the voice provider
const agent = new Agent({
  name: "Customer Support Agent",
  instructions: "You are a helpful customer support agent for a software company.",
  model: openai("gpt-4o"),
  voice,
});

// Add additional instructions to the voice provider
voice.addInstructions(`
  When speaking to customers:
  - Always introduce yourself as the customer support agent
  - Speak clearly and concisely
  - Ask clarifying questions when needed
  - Summarize the conversation at the end
`);

// Connect to the real-time service
await voice.connect();
```

## Parameters
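
Based on the usage example above, `addInstructions()` takes a single argument (the exact parameter name is illustrative):

| Parameter | Type | Description |
| ------------ | ------ | --------------------------------------------------- |
| instructions | string | The instructions text to add to the voice provider |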
## Return Value This method does not return a value. ## Notes - Instructions are most effective when they are clear, specific, and relevant to the voice interaction - This method is primarily used with real-time voice providers that maintain conversation context - If called on a voice provider that doesn't support instructions, it will log a warning and do nothing - Instructions added with this method are typically combined with any instructions provided by an associated Agent - For best results, add instructions before starting a conversation (before calling `connect()`) - Multiple calls to `addInstructions()` may either replace or append to existing instructions, depending on the provider implementation --- title: "Reference: voice.addTools() | Voice Providers | Mastra Docs" description: "Documentation for the addTools() method available in voice providers, which equips voice models with function calling capabilities." --- # voice.addTools() [EN] Source: https://mastra.ai/en/reference/voice/voice.addTools The `addTools()` method equips a voice provider with tools (functions) that can be called by the model during real-time interactions. This enables voice assistants to perform actions like searching for information, making calculations, or interacting with external systems. ## Usage Example ```typescript import { OpenAIRealtimeVoice } from "@mastra/voice-openai-realtime"; import { createTool } from "@mastra/core/tools"; import { z } from "zod"; // Define tools const weatherTool = createTool({ id: "getWeather", description: "Get the current weather for a location", inputSchema: z.object({ location: z.string().describe("The city and state, e.g. San Francisco, CA"), }), outputSchema: z.object({ message: z.string(), }), execute: async ({ context }) => { // Fetch weather data from an API const response = await fetch(`https://api.weather.com?location=${encodeURIComponent(context.location)}`); const data = await response.json(); return { message: `The current temperature in ${context.location} is ${data.temperature}°F with ${data.conditions}.` }; }, }); // Initialize a real-time voice provider const voice = new OpenAIRealtimeVoice({ realtimeConfig: { model: "gpt-4o-mini-realtime", apiKey: process.env.OPENAI_API_KEY, }, }); // Add tools to the voice provider voice.addTools({ getWeather: weatherTool, }); // Connect to the real-time service await voice.connect(); ``` ## Parameters
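
Based on the usage example above, `addTools()` takes a single argument (the exact parameter name is illustrative):

| Parameter | Type | Description |
| --------- | ------ | ----------------------------------------------------------------------- |
| tools | object | A map of tool names to Mastra tools (e.g. created with `createTool()`) |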
## Return Value This method does not return a value. ## Notes - Tools must follow the Mastra tool format with name, description, input schema, and execute function - This method is primarily used with real-time voice providers that support function calling - If called on a voice provider that doesn't support tools, it will log a warning and do nothing - Tools added with this method are typically combined with any tools provided by an associated Agent - For best results, add tools before starting a conversation (before calling `connect()`) - The voice provider will automatically handle the invocation of tool handlers when the model decides to use them - Multiple calls to `addTools()` may either replace or merge with existing tools, depending on the provider implementation --- title: "Reference: voice.answer() | Voice Providers | Mastra Docs" description: "Documentation for the answer() method available in real-time voice providers, which triggers the voice provider to generate a response." --- # voice.answer() [EN] Source: https://mastra.ai/en/reference/voice/voice.answer The `answer()` method is used in real-time voice providers to trigger the AI to generate a response. This method is particularly useful in speech-to-speech conversations where you need to explicitly signal the AI to respond after receiving user input. ## Usage Example ```typescript import { OpenAIRealtimeVoice } from "@mastra/voice-openai-realtime"; import { getMicrophoneStream } from "@mastra/node-audio"; import Speaker from "@mastra/node-speaker"; const speaker = new Speaker({ sampleRate: 24100, // Audio sample rate in Hz - standard for high-quality audio on MacBook Pro channels: 1, // Mono audio output (as opposed to stereo which would be 2) bitDepth: 16, // Bit depth for audio quality - CD quality standard (16-bit resolution) }); // Initialize a real-time voice provider const voice = new OpenAIRealtimeVoice({ realtimeConfig: { model: "gpt-4o", apiKey: process.env.OPENAI_API_KEY, }, speaker: "alloy", // Default voice }); // Connect to the real-time service await voice.connect(); // Register event listener for responses voice.on("speaker", (stream) => { // Handle audio response stream.pipe(speaker); }); // Send user audio input const microphoneStream = getMicrophoneStream(); await voice.send(microphoneStream); // Trigger the AI to respond await voice.answer(); ``` ## Parameters
", description: "Provider-specific options for the response", isOptional: true, } ]} /> ## Return Value Returns a `Promise` that resolves when the response has been triggered. ## Notes - This method is only implemented by real-time voice providers that support speech-to-speech capabilities - If called on a voice provider that doesn't support this functionality, it will log a warning and resolve immediately - The response audio will typically be emitted through the 'speaking' event rather than returned directly - For providers that support it, you can use this method to send a specific response instead of having the AI generate one - This method is commonly used in conjunction with `send()` to create a conversational flow --- title: "Reference: voice.close() | Voice Providers | Mastra Docs" description: "Documentation for the close() method available in voice providers, which disconnects from real-time voice services." --- # voice.close() [EN] Source: https://mastra.ai/en/reference/voice/voice.close The `close()` method disconnects from a real-time voice service and cleans up resources. This is important for properly ending voice sessions and preventing resource leaks. ## Usage Example ```typescript import { OpenAIRealtimeVoice } from "@mastra/voice-openai-realtime"; import { getMicrophoneStream } from "@mastra/node-audio"; // Initialize a real-time voice provider const voice = new OpenAIRealtimeVoice({ realtimeConfig: { model: "gpt-4o-mini-realtime", apiKey: process.env.OPENAI_API_KEY, }, }); // Connect to the real-time service await voice.connect(); // Start a conversation voice.speak("Hello, I'm your AI assistant!"); // Stream audio from a microphone const microphoneStream = getMicrophoneStream(); voice.send(microphoneStream); // When the conversation is complete setTimeout(() => { // Close the connection and clean up resources voice.close(); console.log("Voice session ended"); }, 60000); // End after 1 minute ``` ## Parameters This method does not accept any parameters. ## Return Value This method does not return a value. ## Notes - Always call `close()` when you're done with a real-time voice session to free up resources - After calling `close()`, you'll need to call `connect()` again if you want to start a new session - This method is primarily used with real-time voice providers that maintain persistent connections - If called on a voice provider that doesn't support real-time connections, it will log a warning and do nothing - Failing to close connections can lead to resource leaks and potential billing issues with voice service providers --- title: "Reference: voice.connect() | Voice Providers | Mastra Docs" description: "Documentation for the connect() method available in real-time voice providers, which establishes a connection for speech-to-speech communication." --- # voice.connect() [EN] Source: https://mastra.ai/en/reference/voice/voice.connect The `connect()` method establishes a WebSocket or WebRTC connection for real-time speech-to-speech communication. This method must be called before using other real-time features like `send()` or `answer()`. 
## Usage Example

```typescript
import { OpenAIRealtimeVoice } from "@mastra/voice-openai-realtime";
import Speaker from "@mastra/node-speaker";

const speaker = new Speaker({
  sampleRate: 24100, // Audio sample rate in Hz - standard for high-quality audio on MacBook Pro
  channels: 1, // Mono audio output (as opposed to stereo which would be 2)
  bitDepth: 16, // Bit depth for audio quality - CD quality standard (16-bit resolution)
});

// Initialize a real-time voice provider
const voice = new OpenAIRealtimeVoice({
  realtimeConfig: {
    model: "gpt-4o-mini-realtime",
    apiKey: process.env.OPENAI_API_KEY,
    options: {
      sessionConfig: {
        turn_detection: {
          type: "server_vad",
          threshold: 0.6,
          silence_duration_ms: 1200,
        },
      },
    },
  },
  speaker: "alloy", // Default voice
});

// Connect to the real-time service
await voice.connect();

// Now you can use real-time features
voice.on("speaker", (stream) => {
  stream.pipe(speaker);
});

// With connection options
await voice.connect({
  timeout: 10000, // 10 seconds timeout
  reconnect: true,
});
```

## Parameters

| Parameter | Type | Description |
| --------- | ------------------ | ------------------------------------- |
| options | object (optional) | Provider-specific connection options |

## Return Value

Returns a `Promise` that resolves when the connection is successfully established.

## Provider-Specific Options

Each real-time voice provider may support different options for the `connect()` method:

### OpenAI Realtime

## Using with CompositeVoice

When using `CompositeVoice`, the `connect()` method delegates to the configured real-time provider:

```typescript
import { CompositeVoice } from "@mastra/core/voice";
import { OpenAIRealtimeVoice } from "@mastra/voice-openai-realtime";

const realtimeVoice = new OpenAIRealtimeVoice();
const voice = new CompositeVoice({
  realtimeProvider: realtimeVoice,
});

// This will use the OpenAIRealtimeVoice provider
await voice.connect();
```

## Notes

- This method is only implemented by real-time voice providers that support speech-to-speech capabilities
- If called on a voice provider that doesn't support this functionality, it will log a warning and resolve immediately
- The connection must be established before using other real-time methods like `send()` or `answer()`
- When you're done with the voice instance, call `close()` to properly clean up resources
- Some providers may automatically reconnect on connection loss, depending on their implementation
- Connection errors will typically be thrown as exceptions that should be caught and handled

## Related Methods

- [voice.send()](./voice.send) - Sends audio data to the voice provider
- [voice.answer()](./voice.answer) - Triggers the voice provider to respond
- [voice.close()](./voice.close) - Disconnects from the real-time service
- [voice.on()](./voice.on) - Registers an event listener for voice events

---
title: "Reference: Voice Events | Voice Providers | Mastra Docs"
description: "Documentation for events emitted by voice providers, particularly for real-time voice interactions."
---

# Voice Events

[EN] Source: https://mastra.ai/en/reference/voice/voice.events

Voice providers emit various events during real-time voice interactions. These events can be listened to using the [voice.on()](./voice.on) method and are particularly important for building interactive voice applications.
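
For example, a real-time provider can be wired up like this (a minimal sketch mirroring the [voice.on()](./voice.on) examples later in this reference):

```typescript
import { OpenAIRealtimeVoice } from "@mastra/voice-openai-realtime";

const voice = new OpenAIRealtimeVoice();
await voice.connect();

// Log transcribed text as it arrives
voice.on("writing", ({ text, role }) => {
  console.log(`${role}: ${text}`);
});

// Handle errors emitted during the session
voice.on("error", ({ message, code, details }) => {
  console.error(`Error ${code}: ${message}`, details);
});
```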
## Common Events

These events are commonly implemented across real-time voice providers:

| Event | Description |
| ------------------- | -------------------------------------------------------- |
| `speaking` | Emitted when audio data is available |
| `speaker` | Emitted with a stream that can be piped to audio output |
| `writing` | Emitted when text is transcribed or generated |
| `error` | Emitted when an error occurs |
| `tool-call-start` | Emitted when a tool is about to be executed |
| `tool-call-result` | Emitted when a tool execution is complete |

## Notes

- Not all events are supported by all voice providers
- The exact payload structure may vary between providers
- For non-real-time providers, most of these events will not be emitted
- Events are useful for building interactive UIs that respond to the conversation state
- Consider using the [voice.off()](./voice.off) method to remove event listeners when they are no longer needed

---
title: "Reference: voice.getSpeakers() | Voice Providers | Mastra Docs"
description: "Documentation for the getSpeakers() method available in voice providers, which retrieves available voice options."
---

import { Tabs } from "nextra/components";

# voice.getSpeakers()

[EN] Source: https://mastra.ai/en/reference/voice/voice.getSpeakers

The `getSpeakers()` method retrieves a list of available voice options (speakers) from the voice provider. This allows applications to present users with voice choices or programmatically select the most appropriate voice for different contexts.

## Usage Example

```typescript
import { OpenAIVoice } from "@mastra/voice-openai";
import { ElevenLabsVoice } from "@mastra/voice-elevenlabs";

// Initialize voice providers
const openaiVoice = new OpenAIVoice();
const elevenLabsVoice = new ElevenLabsVoice({
  apiKey: process.env.ELEVENLABS_API_KEY,
});

// Get available speakers from OpenAI
const openaiSpeakers = await openaiVoice.getSpeakers();
console.log("OpenAI voices:", openaiSpeakers);
// Example output: [{ voiceId: "alloy" }, { voiceId: "echo" }, { voiceId: "fable" }, ...]

// Get available speakers from ElevenLabs
const elevenLabsSpeakers = await elevenLabsVoice.getSpeakers();
console.log("ElevenLabs voices:", elevenLabsSpeakers);
// Example output: [{ voiceId: "21m00Tcm4TlvDq8ikWAM", name: "Rachel" }, ...]

// Use a specific voice for speech
const text = "Hello, this is a test of different voices.";
await openaiVoice.speak(text, { speaker: openaiSpeakers[2].voiceId });
await elevenLabsVoice.speak(text, { speaker: elevenLabsSpeakers[0].voiceId });
```

## Parameters

This method does not accept any parameters.

## Return Value

Returns a `Promise<Array<{ voiceId: string }>>`: a promise that resolves to an array of voice options, where each option contains at least a `voiceId` property and may include additional provider-specific metadata.

## Provider-Specific Metadata

Different voice providers return different metadata for their voices:

## Notes

- The available voices vary significantly between providers
- Some providers may require authentication to retrieve the full list of voices
- The default implementation returns an empty array if the provider doesn't support this method
- For performance reasons, consider caching the results if you need to display the list frequently
- The `voiceId` property is guaranteed to be present for all providers, but additional metadata varies

---
title: "Reference: voice.listen() | Voice Providers | Mastra Docs"
description: "Documentation for the listen() method available in all Mastra voice providers, which converts speech to text."
---

# voice.listen()

[EN] Source: https://mastra.ai/en/reference/voice/voice.listen

The `listen()` method is a core function available in all Mastra voice providers that converts speech to text. It takes an audio stream as input and returns the transcribed text.
## Usage Example

```typescript
import { OpenAIVoice } from "@mastra/voice-openai";
import { getMicrophoneStream } from "@mastra/node-audio";
import { createReadStream } from "fs";
import path from "path";

// Initialize a voice provider
const voice = new OpenAIVoice({
  listeningModel: {
    name: "whisper-1",
    apiKey: process.env.OPENAI_API_KEY,
  },
});

// Basic usage with a file stream
const audioFilePath = path.join(process.cwd(), "audio.mp3");
const audioStream = createReadStream(audioFilePath);
const transcript = await voice.listen(audioStream, {
  filetype: "mp3",
});
console.log("Transcribed text:", transcript);

// Using a microphone stream
const microphoneStream = getMicrophoneStream(); // Assume this function gets audio input
const transcription = await voice.listen(microphoneStream);

// With provider-specific options
const transcriptWithOptions = await voice.listen(audioStream, {
  language: "en",
  prompt: "This is a conversation about artificial intelligence.",
});
```

## Parameters

Based on the usage example above, `listen()` accepts:

| Parameter | Type | Description |
| ----------- | ---------------------- | ------------------------------------------------------------- |
| audioStream | NodeJS.ReadableStream | The audio stream to transcribe |
| options | object (optional) | Provider-specific transcription options, such as `filetype` |

## Return Value

Returns one of the following:

- `Promise<string>`: A promise that resolves to the transcribed text
- `Promise<NodeJS.ReadableStream>`: A promise that resolves to a stream of transcribed text (for streaming transcription)
- `Promise<void>`: For real-time providers that emit 'writing' events instead of returning text directly

## Provider-Specific Options

Each voice provider may support additional options specific to their implementation. Here are some examples:

### OpenAI

### Google

### Deepgram

## Realtime Voice Providers

When using realtime voice providers like `OpenAIRealtimeVoice`, the `listen()` method behaves differently:

- Instead of returning transcribed text, it emits 'writing' events with the transcribed text
- You need to register an event listener to receive the transcription

```typescript
import { OpenAIRealtimeVoice } from "@mastra/voice-openai-realtime";
import { getMicrophoneStream } from "@mastra/node-audio";

const voice = new OpenAIRealtimeVoice();
await voice.connect();

// Register event listener for transcription
voice.on("writing", ({ text, role }) => {
  console.log(`${role}: ${text}`);
});

// This will emit 'writing' events instead of returning text
const microphoneStream = getMicrophoneStream();
await voice.listen(microphoneStream);
```

## Using with CompositeVoice

When using `CompositeVoice`, the `listen()` method delegates to the configured listening provider:

```typescript
import { CompositeVoice } from "@mastra/core/voice";
import { OpenAIVoice } from "@mastra/voice-openai";
import { PlayAIVoice } from "@mastra/voice-playai";

const voice = new CompositeVoice({
  listenProvider: new OpenAIVoice(),
  speakProvider: new PlayAIVoice(),
});

// This will use the OpenAIVoice provider
const transcript = await voice.listen(audioStream);
```

## Notes

- Not all voice providers support speech-to-text functionality (e.g., PlayAI, Speechify)
- The behavior of `listen()` may vary slightly between providers, but all implementations follow the same basic interface
- When using a realtime voice provider, the method might not return text directly but instead emit a 'writing' event
- The audio format supported depends on the provider.
Common formats include MP3, WAV, and M4A - Some providers support streaming transcription, where text is returned as it's transcribed - For best performance, consider closing or ending the audio stream when you're done with it ## Related Methods - [voice.speak()](./voice.speak) - Converts text to speech - [voice.send()](./voice.send) - Sends audio data to the voice provider in real-time - [voice.on()](./voice.on) - Registers an event listener for voice events --- title: "Reference: voice.off() | Voice Providers | Mastra Docs" description: "Documentation for the off() method available in voice providers, which removes event listeners for voice events." --- # voice.off() [EN] Source: https://mastra.ai/en/reference/voice/voice.off The `off()` method removes event listeners previously registered with the `on()` method. This is particularly useful for cleaning up resources and preventing memory leaks in long-running applications with real-time voice capabilities. ## Usage Example ```typescript import { OpenAIRealtimeVoice } from "@mastra/voice-openai-realtime"; import chalk from "chalk"; // Initialize a real-time voice provider const voice = new OpenAIRealtimeVoice({ realtimeConfig: { model: "gpt-4o-mini-realtime", apiKey: process.env.OPENAI_API_KEY, }, }); // Connect to the real-time service await voice.connect(); // Define the callback function const writingCallback = ({ text, role }) => { if (role === 'user') { process.stdout.write(chalk.green(text)); } else { process.stdout.write(chalk.blue(text)); } }; // Register event listener voice.on("writing", writingCallback); // Later, when you want to remove the listener voice.off("writing", writingCallback); ``` ## Parameters
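
Based on the usage example above, `off()` mirrors the signature of `on()`:

| Parameter | Type | Description |
| --------- | -------- | ------------------------------------------------------------ |
| event | string | The name of the event to stop listening to (e.g. `"writing"`) |
| callback | Function | The same callback reference that was passed to `on()` |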
## Return Value This method does not return a value. ## Notes - The callback passed to `off()` must be the same function reference that was passed to `on()` - If the callback is not found, the method will have no effect - This method is primarily used with real-time voice providers that support event-based communication - If called on a voice provider that doesn't support events, it will log a warning and do nothing - Removing event listeners is important for preventing memory leaks in long-running applications --- title: "Reference: voice.on() | Voice Providers | Mastra Docs" description: "Documentation for the on() method available in voice providers, which registers event listeners for voice events." --- # voice.on() [EN] Source: https://mastra.ai/en/reference/voice/voice.on The `on()` method registers event listeners for various voice events. This is particularly important for real-time voice providers, where events are used to communicate transcribed text, audio responses, and other state changes. ## Usage Example ```typescript import { OpenAIRealtimeVoice } from "@mastra/voice-openai-realtime"; import Speaker from "@mastra/node-speaker"; import chalk from "chalk"; // Initialize a real-time voice provider const voice = new OpenAIRealtimeVoice({ realtimeConfig: { model: "gpt-4o-mini-realtime", apiKey: process.env.OPENAI_API_KEY, }, }); // Connect to the real-time service await voice.connect(); // Register event listener for transcribed text voice.on("writing", (event) => { if (event.role === 'user') { process.stdout.write(chalk.green(event.text)); } else { process.stdout.write(chalk.blue(event.text)); } }); // Listen for audio data and play it const speaker = new Speaker({ sampleRate: 24100, channels: 1, bitDepth: 16, }); voice.on("speaker", (stream) => { stream.pipe(speaker); }); // Register event listener for errors voice.on("error", ({ message, code, details }) => { console.error(`Error ${code}: ${message}`, details); }); ``` ## Parameters
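
Based on the usage example above, `on()` accepts:

| Parameter | Type | Description |
| --------- | -------- | --------------------------------------------------------------------------------- |
| event | string | The name of the event to listen for (e.g. `"writing"`, `"speaker"`, `"error"`) |
| callback | Function | Function called with the event payload when the event is emitted |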
## Return Value This method does not return a value. ## Events For a comprehensive list of events and their payload structures, see the [Voice Events](./voice.events) documentation. Common events include: - `speaking`: Emitted when audio data is available - `speaker`: Emitted with a stream that can be piped to audio output - `writing`: Emitted when text is transcribed or generated - `error`: Emitted when an error occurs - `tool-call-start`: Emitted when a tool is about to be executed - `tool-call-result`: Emitted when a tool execution is complete Different voice providers may support different sets of events with varying payload structures. ## Using with CompositeVoice When using `CompositeVoice`, the `on()` method delegates to the configured real-time provider: ```typescript import { CompositeVoice } from "@mastra/core/voice"; import { OpenAIRealtimeVoice } from "@mastra/voice-openai-realtime"; import Speaker from "@mastra/node-speaker"; const speaker = new Speaker({ sampleRate: 24100, // Audio sample rate in Hz - standard for high-quality audio on MacBook Pro channels: 1, // Mono audio output (as opposed to stereo which would be 2) bitDepth: 16, // Bit depth for audio quality - CD quality standard (16-bit resolution) }); const realtimeVoice = new OpenAIRealtimeVoice(); const voice = new CompositeVoice({ realtimeProvider: realtimeVoice, }); // Connect to the real-time service await voice.connect(); // This will register the event listener with the OpenAIRealtimeVoice provider voice.on("speaker", (stream) => { stream.pipe(speaker) }); ``` ## Notes - This method is primarily used with real-time voice providers that support event-based communication - If called on a voice provider that doesn't support events, it will log a warning and do nothing - Event listeners should be registered before calling methods that might emit events - To remove an event listener, use the [voice.off()](./voice.off) method with the same event name and callback function - Multiple listeners can be registered for the same event - The callback function will receive different data depending on the event type (see [Voice Events](./voice.events)) - For best performance, consider removing event listeners when they are no longer needed --- title: "Reference: voice.send() | Voice Providers | Mastra Docs" description: "Documentation for the send() method available in real-time voice providers, which streams audio data for continuous processing." --- # voice.send() [EN] Source: https://mastra.ai/en/reference/voice/voice.send The `send()` method streams audio data in real-time to voice providers for continuous processing. This method is essential for real-time speech-to-speech conversations, allowing you to send microphone input directly to the AI service. 
## Usage Example ```typescript import { OpenAIRealtimeVoice } from "@mastra/voice-openai-realtime"; import Speaker from "@mastra/node-speaker"; import { getMicrophoneStream } from "@mastra/node-audio"; const speaker = new Speaker({ sampleRate: 24100, // Audio sample rate in Hz - standard for high-quality audio on MacBook Pro channels: 1, // Mono audio output (as opposed to stereo which would be 2) bitDepth: 16, // Bit depth for audio quality - CD quality standard (16-bit resolution) }); // Initialize a real-time voice provider const voice = new OpenAIRealtimeVoice({ realtimeConfig: { model: "gpt-4o-mini-realtime", apiKey: process.env.OPENAI_API_KEY, }, }); // Connect to the real-time service await voice.connect(); // Set up event listeners for responses voice.on("writing", ({ text, role }) => { console.log(`${role}: ${text}`); }); voice.on("speaker", (stream) => { stream.pipe(speaker) }); // Get microphone stream (implementation depends on your environment) const microphoneStream = getMicrophoneStream(); // Send audio data to the voice provider await voice.send(microphoneStream); // You can also send audio data as Int16Array const audioBuffer = getAudioBuffer(); // Assume this returns Int16Array await voice.send(audioBuffer); ``` ## Parameters
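
Based on the usage example above and the notes below, `send()` accepts a single argument:

| Parameter | Type | Description |
| --------- | ----------------------------------- | ----------------------------------------------------------- |
| audioData | NodeJS.ReadableStream or Int16Array | The audio stream or buffer to send to the voice provider |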
## Return Value

Returns a `Promise` that resolves when the audio data has been accepted by the voice provider.

## Notes

- This method is only implemented by real-time voice providers that support speech-to-speech capabilities
- If called on a voice provider that doesn't support this functionality, it will log a warning and resolve immediately
- You must call `connect()` before using `send()` to establish the WebSocket connection
- The audio format requirements depend on the specific voice provider
- For continuous conversation, you typically call `send()` to transmit user audio, then `answer()` to trigger the AI response
- The provider will typically emit 'writing' events with transcribed text as it processes the audio
- When the AI responds, the provider will emit 'speaking' events with the audio response

---
title: "Reference: voice.speak() | Voice Providers | Mastra Docs"
description: "Documentation for the speak() method available in all Mastra voice providers, which converts text to speech."
---

# voice.speak()

[EN] Source: https://mastra.ai/en/reference/voice/voice.speak

The `speak()` method is a core function available in all Mastra voice providers that converts text to speech. It takes text input and returns an audio stream that can be played or saved.

## Usage Example

```typescript
import { OpenAIVoice } from "@mastra/voice-openai";

// Initialize a voice provider
const voice = new OpenAIVoice({
  speaker: "alloy", // Default voice
});

// Basic usage with default settings
const audioStream = await voice.speak("Hello, world!");

// Using a different voice for this specific request
const audioStreamWithDifferentVoice = await voice.speak("Hello again!", {
  speaker: "nova",
});

// Using provider-specific options
const audioStreamWithOptions = await voice.speak("Hello with options!", {
  speaker: "echo",
  speed: 1.2, // OpenAI-specific option
});

// Using a text stream as input
import { Readable } from "stream";
const textStream = Readable.from(["Hello", " from", " a", " stream!"]);
const audioStreamFromTextStream = await voice.speak(textStream);
```

## Parameters

Based on the usage example above, `speak()` accepts:

| Parameter | Type | Description |
| --------- | -------------------------------- | ------------------------------------------------------------ |
| input | string or NodeJS.ReadableStream | The text (or text stream) to convert to speech |
| options | object (optional) | Provider-specific speech options, such as `speaker` |

## Return Value

Returns a `Promise<NodeJS.ReadableStream | void>` where:

- `NodeJS.ReadableStream`: A stream of audio data that can be played or saved
- `void`: When using a realtime voice provider that emits audio through events instead of returning it directly

## Provider-Specific Options

Each voice provider may support additional options specific to their implementation.
Here are some examples: ### OpenAI ### ElevenLabs ### Google ### Murf ## Realtime Voice Providers When using realtime voice providers like `OpenAIRealtimeVoice`, the `speak()` method behaves differently: - Instead of returning an audio stream, it emits a 'speaking' event with the audio data - You need to register an event listener to receive the audio chunks ```typescript import { OpenAIRealtimeVoice } from "@mastra/voice-openai-realtime"; import Speaker from "@mastra/node-speaker"; const speaker = new Speaker({ sampleRate: 24100, // Audio sample rate in Hz - standard for high-quality audio on MacBook Pro channels: 1, // Mono audio output (as opposed to stereo which would be 2) bitDepth: 16, // Bit depth for audio quality - CD quality standard (16-bit resolution) }); const voice = new OpenAIRealtimeVoice(); await voice.connect(); // Register event listener for audio chunks voice.on("speaker", (stream) => { // Handle audio chunk (e.g., play it or save it) stream.pipe(speaker) }); // This will emit 'speaking' events instead of returning a stream await voice.speak("Hello, this is realtime speech!"); ``` ## Using with CompositeVoice When using `CompositeVoice`, the `speak()` method delegates to the configured speaking provider: ```typescript import { CompositeVoice } from "@mastra/core/voice"; import { OpenAIVoice } from "@mastra/voice-openai"; import { PlayAIVoice } from "@mastra/voice-playai"; const voice = new CompositeVoice({ speakProvider: new PlayAIVoice(), listenProvider: new OpenAIVoice(), }); // This will use the PlayAIVoice provider const audioStream = await voice.speak("Hello, world!"); ``` ## Notes - The behavior of `speak()` may vary slightly between providers, but all implementations follow the same basic interface. - When using a realtime voice provider, the method might not return an audio stream directly but instead emit a 'speaking' event. - If a text stream is provided as input, the provider will typically convert it to a string before processing. - The audio format of the returned stream depends on the provider. Common formats include MP3, WAV, and OGG. - For best performance, consider closing or ending the audio stream when you're done with it. --- title: "Reference: voice.updateConfig() | Voice Providers | Mastra Docs" description: "Documentation for the updateConfig() method available in voice providers, which updates the configuration of a voice provider at runtime." --- # voice.updateConfig() [EN] Source: https://mastra.ai/en/reference/voice/voice.updateConfig The `updateConfig()` method allows you to update the configuration of a voice provider at runtime. This is useful for changing voice settings, API keys, or other provider-specific options without creating a new instance. ## Usage Example ```typescript import { OpenAIRealtimeVoice } from "@mastra/voice-openai-realtime"; // Initialize a real-time voice provider const voice = new OpenAIRealtimeVoice({ realtimeConfig: { model: "gpt-4o-mini-realtime", apiKey: process.env.OPENAI_API_KEY, }, speaker: "alloy", }); // Connect to the real-time service await voice.connect(); // Later, update the configuration voice.updateConfig({ voice: "nova", // Change the default voice turn_detection: { type: "server_vad", threshold: 0.5, silence_duration_ms: 1000 } }); // The next speak() call will use the new configuration await voice.speak("Hello with my new voice!"); ``` ## Parameters
", description: "Configuration options to update. The specific properties depend on the voice provider.", isOptional: false, } ]} /> ## Return Value This method does not return a value. ## Configuration Options Different voice providers support different configuration options: ### OpenAI Realtime
## Notes - The default implementation logs a warning if the provider doesn't support this method - Configuration updates are typically applied to subsequent operations, not ongoing ones - Not all properties that can be set in the constructor can be updated at runtime - The specific behavior depends on the voice provider implementation - For real-time voice providers, some configuration changes may require reconnecting to the service --- title: "Reference: .after() | Building Workflows | Mastra Docs" description: Documentation for the `after()` method in workflows, enabling branching and merging paths. --- # .after() [EN] Source: https://mastra.ai/en/reference/workflows/after The `.after()` method defines explicit dependencies between workflow steps, enabling branching and merging paths in your workflow execution. ## Usage ### Basic Branching ```typescript workflow .step(stepA) .then(stepB) .after(stepA) // Create new branch after stepA completes .step(stepC); ``` ### Merging Multiple Branches ```typescript workflow .step(stepA) .then(stepB) .step(stepC) .then(stepD) .after([stepB, stepD]) // Create a step that depends on multiple steps .step(stepE); ``` ## Parameters ## Returns ## Examples ### Single Dependency ```typescript workflow .step(fetchData) .then(processData) .after(fetchData) // Branch after fetchData .step(logData); ``` ### Multiple Dependencies (Merging Branches) ```typescript workflow .step(fetchUserData) .then(validateUserData) .step(fetchProductData) .then(validateProductData) .after([validateUserData, validateProductData]) // Wait for both validations to complete .step(processOrder); ``` ## Related - [Branching Paths example](../../../examples/workflows/branching-paths.mdx) - [Workflow Class Reference](./workflow.mdx) - [Step Reference](./step-class.mdx) - [Control Flow Guide](../../workflows/control-flow.mdx#merging-multiple-branches) --- title: ".afterEvent() Method | Mastra Docs" description: "Reference for the afterEvent method in Mastra workflows that creates event-based suspension points." --- # afterEvent() [EN] Source: https://mastra.ai/en/reference/workflows/afterEvent The `afterEvent()` method creates a suspension point in your workflow that waits for a specific event to occur before continuing execution. ## Syntax ```typescript workflow.afterEvent(eventName: string): Workflow ``` ## Parameters | Parameter | Type | Description | |-----------|------|-------------| | eventName | string | The name of the event to wait for. Must match an event defined in the workflow's `events` configuration. | ## Return Value Returns the workflow instance for method chaining. ## Description The `afterEvent()` method is used to create an automatic suspension point in your workflow that waits for a specific named event. It's essentially a declarative way to define a point where your workflow should pause and wait for an external event to occur. When you call `afterEvent()`, Mastra: 1. Creates a special step with ID `__eventName_event` 2. This step automatically suspends the workflow execution 3. The workflow remains suspended until the specified event is triggered via `resumeWithEvent()` 4. When the event occurs, execution continues with the step following the `afterEvent()` call This method is part of Mastra's event-driven workflow capabilities, allowing you to create workflows that coordinate with external systems or user interactions without manually implementing suspension logic. 
## Usage Notes

- The event specified in `afterEvent()` must be defined in the workflow's `events` configuration with a schema
- The special step created has a predictable ID format: `__eventName_event` (e.g., `__approvalReceived_event`)
- Any step following `afterEvent()` can access the event data via `context.inputData.resumedEvent`
- Event data is validated against the schema defined for that event when `resumeWithEvent()` is called

## Examples

### Basic Usage

```typescript
// Define workflow with events
const workflow = new Workflow({
  name: 'approval-workflow',
  events: {
    approval: {
      schema: z.object({
        approved: z.boolean(),
        approverName: z.string(),
      }),
    },
  },
});

// Build workflow with event suspension point
workflow
  .step(submitRequest)
  .afterEvent('approval') // Workflow suspends here
  .step(processApproval) // This step runs after the event occurs
  .commit();
```

## Related

- [Event-Driven Workflows](./events.mdx)
- [resumeWithEvent()](./resumeWithEvent.mdx)
- [Suspend and Resume](../../workflows/suspend-and-resume.mdx)
- [Workflow Class](./workflow.mdx)

---
title: "Reference: Workflow.commit() | Running Workflows | Mastra Docs"
description: Documentation for the `.commit()` method in workflows, which re-initializes the workflow machine with the current step configuration.
---

# Workflow.commit()

[EN] Source: https://mastra.ai/en/reference/workflows/commit

The `.commit()` method re-initializes the workflow's state machine with the current step configuration.

## Usage

```typescript
workflow
  .step(stepA)
  .then(stepB)
  .commit();
```

## Returns

## Related

- [Branching Paths example](../../../examples/workflows/branching-paths.mdx)
- [Workflow Class Reference](./workflow.mdx)
- [Step Reference](./step-class.mdx)
- [Control Flow Guide](../../workflows/control-flow.mdx)

---
title: "Reference: Workflow.createRun() | Running Workflows | Mastra Docs"
description: "Documentation for the `.createRun()` method in workflows, which initializes a new workflow run instance."
---

# Workflow.createRun()

[EN] Source: https://mastra.ai/en/reference/workflows/createRun

The `.createRun()` method initializes a new workflow run instance. It generates a unique run ID for tracking and returns a start function that begins workflow execution when called.

One reason to use `.createRun()` vs `.execute()` is to get a unique run ID for tracking, logging, or subscribing via `.watch()`.
## Usage

```typescript
const { runId, start, watch } = workflow.createRun();

const result = await start();
```

## Returns

| Name | Type | Description |
| --------------- | ------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------- |
| runId | `string` | Unique identifier for tracking the workflow run |
| start | `(config?) => Promise<WorkflowRunResult>` | Function that begins workflow execution when called |
| watch | `(callback: (record: WorkflowResult) => void) => () => void` | Function that accepts a callback function that will be called with each transition of the workflow run |
| resume | `({stepId: string, context: Record<string, any>}) => Promise<WorkflowRunResult>` | Function that resumes a workflow run from a given step ID and context |
| resumeWithEvent | `(eventName: string, data: any) => Promise<WorkflowRunResult>` | Function that resumes a workflow run from a given event name and data |

## Error Handling

The start function may throw validation errors if the workflow configuration is invalid:

```typescript
try {
  const { runId, start, watch, resume, resumeWithEvent } = workflow.createRun();
  await start({ triggerData: data });
} catch (error) {
  if (error instanceof ValidationError) {
    // Handle validation errors
    console.log(error.type); // 'circular_dependency' | 'no_terminal_path' | 'unreachable_step'
    console.log(error.details);
  }
}
```

## Related

- [Workflow Class Reference](./workflow.mdx)
- [Step Class Reference](./step-class.mdx)
- See the [Creating a Workflow](../../../examples/workflows/creating-a-workflow.mdx) example for complete usage

---
title: "Reference: Workflow.else() | Conditional Branching | Mastra Docs"
description: "Documentation for the `.else()` method in Mastra workflows, which creates an alternative branch when an if condition is false."
---

# Workflow.else()

[EN] Source: https://mastra.ai/en/reference/workflows/else

> Experimental

The `.else()` method creates an alternative branch in the workflow that executes when the preceding `if` condition evaluates to false. This enables workflows to follow different paths based on conditions.

## Usage

```typescript copy showLineNumbers
workflow
  .step(startStep)
  .if(async ({ context }) => {
    const value = context.getStepResult<{ value: number }>('start')?.value;
    return value < 10;
  })
  .then(ifBranchStep)
  .else() // Alternative branch when the condition is false
  .then(elseBranchStep)
  .commit();
```

## Parameters

The `else()` method does not take any parameters.

## Returns

## Behavior

- The `else()` method must follow an `if()` branch in the workflow definition
- It creates a branch that executes only when the preceding `if` condition evaluates to false
- You can chain multiple steps after an `else()` using `.then()`
- You can nest additional `if`/`else` conditions within an `else` branch

## Error Handling

The `else()` method requires a preceding `if()` statement. If you try to use it without a preceding `if`, an error will be thrown:

```typescript
try {
  // This will throw an error
  workflow
    .step(someStep)
    .else()
    .then(anotherStep)
    .commit();
} catch (error) {
  console.error(error); // "No active condition found"
}
```

## Related

- [if Reference](./if.mdx)
- [then Reference](./then.mdx)
- [Control Flow Guide](../../workflows/control-flow.mdx)
- [Step Condition Reference](./step-condition.mdx)

---
title: "Event-Driven Workflows | Mastra Docs"
description: "Learn how to create event-driven workflows using afterEvent and resumeWithEvent methods in Mastra."
---

# Event-Driven Workflows

[EN] Source: https://mastra.ai/en/reference/workflows/events

Mastra provides built-in support for event-driven workflows through the `afterEvent` and `resumeWithEvent` methods.
These methods allow you to create workflows that pause execution while waiting for specific events to occur, then resume with the event data when it's available.

## Overview

Event-driven workflows are useful for scenarios where:

- You need to wait for external systems to complete processing
- User approval or input is required at specific points
- Asynchronous operations need to be coordinated
- Long-running processes need to break up execution across different services

## Defining Events

Before using event-driven methods, you must define the events your workflow will listen for in the workflow configuration:

```typescript
import { Workflow } from '@mastra/core/workflows';
import { z } from 'zod';

const workflow = new Workflow({
  name: 'approval-workflow',
  triggerSchema: z.object({ requestId: z.string() }),
  events: {
    // Define events with their validation schemas
    approvalReceived: {
      schema: z.object({
        approved: z.boolean(),
        approverName: z.string(),
        comment: z.string().optional(),
      }),
    },
    documentUploaded: {
      schema: z.object({
        documentId: z.string(),
        documentType: z.enum(['invoice', 'receipt', 'contract']),
        metadata: z.record(z.string()).optional(),
      }),
    },
  },
});
```

Each event must have a name and a schema that defines the structure of data expected when the event occurs.

## afterEvent()

The `afterEvent` method creates a suspension point in your workflow that automatically waits for a specific event.

### Syntax

```typescript
workflow.afterEvent(eventName: string): Workflow
```

### Parameters

- `eventName`: The name of the event to wait for (must be defined in the workflow's `events` configuration)

### Return Value

Returns the workflow instance for method chaining.

### How It Works

When `afterEvent` is called, Mastra:

1. Creates a special step with ID `__eventName_event`
2. Configures this step to automatically suspend workflow execution
3. Sets up the continuation point after the event is received

### Usage Example

```typescript
workflow
  .step(initialProcessStep)
  .afterEvent('approvalReceived') // Workflow suspends here
  .step(postApprovalStep) // This runs after event is received
  .then(finalStep)
  .commit();
```

## resumeWithEvent()

The `resumeWithEvent` method resumes a suspended workflow by providing data for a specific event.

### Syntax

```typescript
run.resumeWithEvent(eventName: string, data: any): Promise<WorkflowRunResult>
```

### Parameters

- `eventName`: The name of the event being triggered
- `data`: The event data (must conform to the schema defined for this event)

### Return Value

Returns a Promise that resolves to the workflow execution results after resumption.

### How It Works

When `resumeWithEvent` is called, Mastra:

1. Validates the event data against the schema defined for that event
2. Loads the workflow snapshot
3. Updates the context with the event data
4. Resumes execution from the event step
5. Continues workflow execution with the subsequent steps

### Usage Example

```typescript
// Create a workflow run
const run = workflow.createRun();

// Start the workflow
await run.start({ triggerData: { requestId: 'req-123' } });

// Later, when the event occurs:
const result = await run.resumeWithEvent('approvalReceived', {
  approved: true,
  approverName: 'John Doe',
  comment: 'Looks good to me!'
}); console.log(result.results); ``` ## Accessing Event Data When a workflow is resumed with event data, that data is available in the step context as `context.inputData.resumedEvent`: ```typescript const processApprovalStep = new Step({ id: 'processApproval', execute: async ({ context }) => { // Access the event data const eventData = context.inputData.resumedEvent; return { processingResult: `Processed approval from ${eventData.approverName}`, wasApproved: eventData.approved, }; }, }); ``` ## Multiple Events You can create workflows that wait for multiple different events at various points: ```typescript workflow .step(createRequest) .afterEvent('approvalReceived') .step(processApproval) .afterEvent('documentUploaded') .step(processDocument) .commit(); ``` When resuming a workflow with multiple event suspension points, you need to provide the correct event name and data for the current suspension point. ## Practical Example This example shows a complete workflow that requires both approval and document upload: ```typescript import { Workflow, Step } from '@mastra/core/workflows'; import { z } from 'zod'; // Define steps const createRequest = new Step({ id: 'createRequest', execute: async () => ({ requestId: `req-${Date.now()}` }), }); const processApproval = new Step({ id: 'processApproval', execute: async ({ context }) => { const approvalData = context.inputData.resumedEvent; return { approved: approvalData.approved, approver: approvalData.approverName, }; }, }); const processDocument = new Step({ id: 'processDocument', execute: async ({ context }) => { const documentData = context.inputData.resumedEvent; return { documentId: documentData.documentId, processed: true, type: documentData.documentType, }; }, }); const finalizeRequest = new Step({ id: 'finalizeRequest', execute: async ({ context }) => { const requestId = context.steps.createRequest.output.requestId; const approved = context.steps.processApproval.output.approved; const documentId = context.steps.processDocument.output.documentId; return { finalized: true, summary: `Request ${requestId} was ${approved ? 
'approved' : 'rejected'} with document ${documentId}` }; }, }); // Create workflow const requestWorkflow = new Workflow({ name: 'document-request-workflow', events: { approvalReceived: { schema: z.object({ approved: z.boolean(), approverName: z.string(), }), }, documentUploaded: { schema: z.object({ documentId: z.string(), documentType: z.enum(['invoice', 'receipt', 'contract']), }), }, }, }); // Build workflow requestWorkflow .step(createRequest) .afterEvent('approvalReceived') .step(processApproval) .afterEvent('documentUploaded') .step(processDocument) .then(finalizeRequest) .commit(); // Export workflow export { requestWorkflow }; ``` ### Running the Example Workflow ```typescript import { requestWorkflow } from './workflows'; import { mastra } from './mastra'; async function runWorkflow() { // Get the workflow const workflow = mastra.getWorkflow('document-request-workflow'); const run = workflow.createRun(); // Start the workflow const initialResult = await run.start(); console.log('Workflow started:', initialResult.results); // Simulate receiving approval const afterApprovalResult = await run.resumeWithEvent('approvalReceived', { approved: true, approverName: 'Jane Smith', }); console.log('After approval:', afterApprovalResult.results); // Simulate document upload const finalResult = await run.resumeWithEvent('documentUploaded', { documentId: 'doc-456', documentType: 'invoice', }); console.log('Final result:', finalResult.results); } runWorkflow().catch(console.error); ``` ## Best Practices 1. **Define Clear Event Schemas**: Use Zod to create precise schemas for event data validation 2. **Use Descriptive Event Names**: Choose event names that clearly communicate their purpose 3. **Handle Missing Events**: Ensure your workflow can handle cases where events don't occur or time out 4. **Include Monitoring**: Use the `watch` method to monitor suspended workflows waiting for events 5. **Consider Timeouts**: Implement timeout mechanisms for events that may never occur 6. **Document Events**: Clearly document the events your workflow depends on for other developers ## Related - [Suspend and Resume in Workflows](../../workflows/suspend-and-resume.mdx) - [Workflow Class Reference](./workflow.mdx) - [Resume Method Reference](./resume.mdx) - [Watch Method Reference](./watch.mdx) - [After Event Reference](./afterEvent.mdx) - [Resume With Event Reference](./resumeWithEvent.mdx) --- title: "Reference: Workflow.execute() | Workflows | Mastra Docs" description: "Documentation for the `.execute()` method in Mastra workflows, which runs workflow steps and returns results." --- # Workflow.execute() [EN] Source: https://mastra.ai/en/reference/workflows/execute Executes a workflow with the provided trigger data and returns the results. The workflow must be committed before execution. 
## Usage Example

```typescript
const workflow = new Workflow({
  name: "my-workflow",
  triggerSchema: z.object({
    inputValue: z.number()
  })
});

workflow.step(stepOne).then(stepTwo).commit();

const result = await workflow.execute({
  triggerData: { inputValue: 42 }
});
```

## Parameters

Based on the usage examples below, `execute()` accepts:

| Parameter | Type | Description |
| ----------- | ----------------- | --------------------------------------------------------- |
| triggerData | object | Input data matching the workflow's `triggerSchema` |
| runId | string (optional) | Custom identifier for this workflow run |

## Returns

| Name | Type | Description |
| ------- | ---------------------------- | -------------------------------------- |
| runId | `string` | Identifier of the workflow run |
| results | `Record<string, any>` | Results from each completed step |
| status | `WorkflowStatus` | Final status of the workflow run |

## Additional Examples

Execute with run ID:

```typescript
const result = await workflow.execute({
  runId: "custom-run-id",
  triggerData: { inputValue: 42 }
});
```

Handle execution results:

```typescript
const { runId, results, status } = await workflow.execute({
  triggerData: { inputValue: 42 }
});

if (status === "COMPLETED") {
  console.log("Step results:", results);
}
```

## Related

- [Workflow.createRun()](./createRun.mdx)
- [Workflow.commit()](./commit.mdx)
- [Workflow.start()](./start.mdx)

---
title: "Reference: Workflow.if() | Conditional Branching | Mastra Docs"
description: "Documentation for the `.if()` method in Mastra workflows, which creates conditional branches based on specified conditions."
---

# Workflow.if()

[EN] Source: https://mastra.ai/en/reference/workflows/if

> Experimental

The `.if()` method creates a conditional branch in the workflow, allowing steps to execute only when a specified condition is true. This enables dynamic workflow paths based on the results of previous steps.

## Usage

```typescript copy showLineNumbers
workflow
  .step(startStep)
  .if(async ({ context }) => {
    const value = context.getStepResult<{ value: number }>('start')?.value;
    return value < 10; // If true, execute the "if" branch
  })
  .then(ifBranchStep)
  .else()
  .then(elseBranchStep)
  .commit();
```

## Parameters

## Condition Types

### Function Condition

You can use a function that returns a boolean:

```typescript
workflow
  .step(startStep)
  .if(async ({ context }) => {
    const result = context.getStepResult<{ status: string }>('start');
    return result?.status === 'success'; // Execute "if" branch when status is "success"
  })
  .then(successStep)
  .else()
  .then(failureStep);
```

### Reference Condition

You can use a reference-based condition with comparison operators:

```typescript
workflow
  .step(startStep)
  .if({
    ref: { step: startStep, path: 'value' },
    query: { $lt: 10 }, // Execute "if" branch when value is less than 10
  })
  .then(ifBranchStep)
  .else()
  .then(elseBranchStep);
```

## Returns

## Error Handling

The `if` method requires a previous step to be defined. If you try to use it without a preceding step, an error will be thrown:

```typescript
try {
  // This will throw an error
  workflow
    .if(async ({ context }) => true)
    .then(someStep)
    .commit();
} catch (error) {
  console.error(error); // "Condition requires a step to be executed after"
}
```

## Related

- [else Reference](./else.mdx)
- [then Reference](./then.mdx)
- [Control Flow Guide](../../workflows/control-flow.mdx)
- [Step Condition Reference](./step-condition.mdx)

---
title: "Reference: run.resume() | Running Workflows | Mastra Docs"
description: Documentation for the `.resume()` method in workflows, which continues execution of a suspended workflow step.
---

# run.resume()

[EN] Source: https://mastra.ai/en/reference/workflows/resume

The `.resume()` method continues execution of a suspended workflow step, optionally providing new context data that can be accessed by the step on the inputData property.
## Usage

```typescript copy showLineNumbers
await run.resume({
  runId: "abc-123",
  stepId: "stepTwo",
  context: {
    secondValue: 100
  }
});
```

## Parameters

### config

| Parameter | Type | Description |
| --------- | ------------------------------- | --------------------------------------------------------------- |
| runId | string | ID of the workflow run to resume |
| stepId | string | ID of the suspended step to resume |
| context | Record<string, any> (optional) | New context data to inject into the step's inputData property |

## Returns

| Type | Description |
| ------ | ------------------------------------------ |
| object | Result of the resumed workflow execution |

## Async/Await Flow

When a workflow is resumed, execution continues from the point immediately after the `suspend()` call in the step's execution function. This creates a natural flow in your code:

```typescript
// Step definition with suspend point
const reviewStep = new Step({
  id: "review",
  execute: async ({ context, suspend }) => {
    // First part of execution
    const initialAnalysis = analyzeData(context.inputData.data);

    if (initialAnalysis.needsReview) {
      // Suspend execution here
      await suspend({ analysis: initialAnalysis });

      // This code runs after resume() is called
      // context.inputData now contains any data provided during resume
      return {
        reviewedData: enhanceWithFeedback(initialAnalysis, context.inputData.feedback)
      };
    }

    return { reviewedData: initialAnalysis };
  }
});

const { runId, resume, start } = workflow.createRun();

await start({ inputData: { data: "some data" } });

// Later, resume the workflow
const result = await resume({
  runId: "workflow-123",
  stepId: "review",
  context: {
    // This data will be available in `context.inputData`
    feedback: "Looks good, but improve section 3"
  }
});
```

### Execution Flow

1. The workflow runs until it hits `await suspend()` in the `review` step
2. The workflow state is persisted and execution pauses
3. Later, `run.resume()` is called with new context data
4. Execution continues from the point after `suspend()` in the `review` step
5. The new context data (`feedback`) is available to the step on the `inputData` property
6. The step completes and returns its result
7. The workflow continues with subsequent steps

## Error Handling

The resume function may throw several types of errors:

```typescript
try {
  await run.resume({
    runId,
    stepId: "stepTwo",
    context: newData
  });
} catch (error) {
  if (error.message === "No snapshot found for workflow run") {
    // Handle missing workflow state
  }
  if (error.message === "Failed to parse workflow snapshot") {
    // Handle corrupted workflow state
  }
}
```

## Related

- [Suspend and Resume](../../workflows/suspend-and-resume.mdx)
- [`suspend` Reference](./suspend.mdx)
- [`watch` Reference](./watch.mdx)
- [Workflow Class Reference](./workflow.mdx)

---
title: ".resumeWithEvent() Method | Mastra Docs"
description: "Reference for the resumeWithEvent method that resumes suspended workflows using event data."
---

# resumeWithEvent()

[EN] Source: https://mastra.ai/en/reference/workflows/resumeWithEvent

The `resumeWithEvent()` method resumes workflow execution by providing data for a specific event that the workflow is waiting for.

## Syntax

```typescript
const run = workflow.createRun();

// After the workflow has started and suspended at an event step
await run.resumeWithEvent(eventName: string, data: any): Promise<WorkflowRunResult>
```

## Parameters

| Parameter | Type | Description |
| --------- | ------ | --------------------------------------------------------------------------------------------------------- |
| eventName | string | The name of the event to trigger. Must match an event defined in the workflow's `events` configuration. |
| data | any | The event data to provide. Must conform to the schema defined for that event. |
## Return Value

Returns a Promise that resolves to a `WorkflowRunResult` object, containing:

- `results`: The result status and output of each step in the workflow
- `activePaths`: A map of active workflow paths and their states
- `value`: The current state value of the workflow
- Other workflow execution metadata

## Description

The `resumeWithEvent()` method is used to resume a workflow that has been suspended at an event step created by the `afterEvent()` method. When called, this method:

1. Validates the provided event data against the schema defined for that event
2. Loads the workflow snapshot from storage
3. Updates the context with the event data in the `resumedEvent` field
4. Resumes execution from the event step
5. Continues workflow execution with the subsequent steps

This method is part of Mastra's event-driven workflow capabilities, allowing you to create workflows that respond to external events or user interactions.

## Usage Notes

- The workflow must be in a suspended state, specifically at the event step created by `afterEvent(eventName)`
- The event data must conform to the schema defined for that event in the workflow configuration
- The workflow will continue execution from the point where it was suspended
- If the workflow is not suspended, or is suspended at a different step, this method may throw an error
- The event data is made available to subsequent steps via `context.inputData.resumedEvent` (see the sketch below)
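For example, a step that runs after the event step can read the event payload from `context.inputData.resumedEvent`. A minimal sketch, where the step ID and payload shape are hypothetical:

```typescript
import { Step } from "@mastra/core/workflows";

// Hypothetical step that runs after the "approval" event step.
// The payload passed to resumeWithEvent() is exposed on
// context.inputData.resumedEvent.
const applyApproval = new Step({
  id: "applyApproval",
  execute: async ({ context }) => {
    const event = context.inputData.resumedEvent as {
      approved: boolean;
      approverName: string;
    };
    return { approved: event.approved, approvedBy: event.approverName };
  },
});
```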
## Examples

### Basic Usage

```typescript
// Define and start a workflow
const workflow = mastra.getWorkflow("approval-workflow");
const run = workflow.createRun();

// Start the workflow
await run.start({ triggerData: { requestId: "req-123" } });

// Later, when the approval event occurs:
const result = await run.resumeWithEvent("approval", {
  approved: true,
  approverName: "John Doe",
  comment: "Looks good to me!",
});

console.log(result.results);
```

### With Error Handling

```typescript
try {
  const result = await run.resumeWithEvent("paymentReceived", {
    amount: 100.5,
    transactionId: "tx-456",
    paymentMethod: "credit-card",
  });

  console.log("Workflow resumed successfully:", result.results);
} catch (error) {
  console.error("Failed to resume workflow with event:", error);
  // Handle error - could be invalid event data, workflow not suspended, etc.
}
```

### Monitoring and Auto-Resuming

```typescript
// Start a workflow
const { start, watch, resumeWithEvent } = workflow.createRun();

// Watch for suspended event steps
watch(async ({ activePaths }) => {
  const isApprovalEventSuspended =
    activePaths.get("__approval_event")?.status === "suspended";

  // Check if suspended at the approval event step
  if (isApprovalEventSuspended) {
    console.log("Workflow waiting for approval");

    // In a real scenario, you would wait for the actual event
    // Here we're simulating with a timeout
    setTimeout(async () => {
      try {
        await resumeWithEvent("approval", {
          approved: true,
          approverName: "Auto Approver",
        });
      } catch (error) {
        console.error("Failed to auto-resume workflow:", error);
      }
    }, 5000); // Wait 5 seconds before auto-approving
  }
});

// Start the workflow
await start({ triggerData: { requestId: "auto-123" } });
```

## Related

- [Event-Driven Workflows](./events.mdx)
- [afterEvent()](./afterEvent.mdx)
- [Suspend and Resume](../../workflows/suspend-and-resume.mdx)
- [resume()](./resume.mdx)
- [watch()](./watch.mdx)

---
title: "Reference: Snapshots | Workflow State Persistence | Mastra Docs"
description: "Technical reference on snapshots in Mastra - the serialized workflow state that enables suspend and resume functionality"
---

# Snapshots

[EN] Source: https://mastra.ai/en/reference/workflows/snapshots

In Mastra, a snapshot is a serializable representation of a workflow's complete execution state at a specific point in time. Snapshots capture all the information needed to resume a workflow from exactly where it left off, including:

- The current state of each step in the workflow
- The outputs of completed steps
- The execution path taken through the workflow
- Any suspended steps and their metadata
- The remaining retry attempts for each step
- Additional contextual data needed to resume execution

Snapshots are automatically created and managed by Mastra whenever a workflow is suspended, and are persisted to the configured storage system.

## The Role of Snapshots in Suspend and Resume

Snapshots are the key mechanism enabling Mastra's suspend and resume capabilities. When a workflow step calls `await suspend()`:

1. The workflow execution is paused at that exact point
2. The current state of the workflow is captured as a snapshot
3. The snapshot is persisted to storage
4. The workflow step is marked with a status of `'suspended'`
5. Later, when `resume()` is called on the suspended step, the snapshot is retrieved
6. The workflow execution resumes from exactly where it left off

This mechanism provides a powerful way to implement human-in-the-loop workflows, handle rate limiting, wait for external resources, and implement complex branching workflows that may need to pause for extended periods.
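Because the snapshot lives in storage rather than in process memory, the `runId` is the only handle you need to keep in order to resume a run later, even after a restart. A minimal sketch of that hand-off, where `saveRunId` and `loadRunId` are hypothetical persistence helpers of your own:

```typescript
import { mastra } from "./index"; // assumes a configured Mastra instance

const workflow = mastra.getWorkflow("approval-workflow");

// Process A: start the run and keep its ID somewhere durable
const { runId, start } = workflow.createRun();
await saveRunId(runId); // hypothetical helper
await start({ triggerData: { requestId: "req-123" } });

// Process B (possibly after a restart): the snapshot is in storage,
// so resuming only requires the stored runId
const storedRunId = await loadRunId(); // hypothetical helper
const run = workflow.createRun();
await run.resume({
  runId: storedRunId,
  stepId: "review",
  context: { feedback: "Approved" },
});
```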
## Snapshot Anatomy

A Mastra workflow snapshot consists of several key components:

```typescript
export interface WorkflowRunState {
  // Core state info
  value: Record<string, any>; // Current state machine value
  context: {
    // Workflow context
    steps: Record<
      string,
      {
        // Step execution results
        status: "success" | "failed" | "suspended" | "waiting" | "skipped";
        payload?: any; // Step-specific data
        error?: string; // Error info if failed
      }
    >;
    triggerData: Record<string, any>; // Initial trigger data
    attempts: Record<string, number>; // Remaining retry attempts
    inputData: Record<string, any>; // Initial input data
  };

  activePaths: Array<{
    // Currently active execution paths
    stepPath: string[];
    stepId: string;
    status: string;
  }>;

  // Metadata
  runId: string; // Unique run identifier
  timestamp: number; // Time snapshot was created

  // For nested workflows and suspended steps
  childStates?: Record<string, WorkflowRunState>; // Child workflow states
  suspendedSteps?: Record<string, string>; // Mapping of suspended steps
}
```

## How Snapshots Are Saved and Retrieved

Mastra persists snapshots to the configured storage system. By default, snapshots are saved to a LibSQL database, but storage can be configured to use other providers such as Upstash. Snapshots are stored in the `workflow_snapshots` table and, when using LibSQL, are uniquely identified by the `run_id` of the associated run. Because snapshots live in a persistence layer, they survive across workflow runs and processes, which is what enables advanced human-in-the-loop functionality.

Read more about [libsql storage](../storage/libsql.mdx) and [upstash storage](../storage/upstash.mdx).

### Saving Snapshots

When a workflow is suspended, Mastra automatically persists the workflow snapshot with these steps:

1. The `suspend()` function in a step execution triggers the snapshot process
2. The `WorkflowInstance.suspend()` method records the suspended machine
3. `persistWorkflowSnapshot()` is called to save the current state
4. The snapshot is serialized and stored in the configured database, in the `workflow_snapshots` table
5. The storage record includes the workflow name, run ID, and the serialized snapshot

### Retrieving Snapshots

When a workflow is resumed, Mastra retrieves the persisted snapshot with these steps:

1. The `resume()` method is called with a specific step ID
2. The snapshot is loaded from storage using `loadWorkflowSnapshot()`
3. The snapshot is parsed and prepared for resumption
4. The workflow execution is recreated with the snapshot state
5. The suspended step is resumed, and execution continues

## Storage Options for Snapshots

Mastra provides multiple storage options for persisting snapshots. A `storage` instance is configured on the `Mastra` class and is used to set up a snapshot persistence layer for all workflows registered on that `Mastra` instance. This means storage is shared across all workflows registered with the same `Mastra` instance.
### LibSQL (Default) The default storage option is LibSQL, a SQLite-compatible database: ```typescript import { Mastra } from "@mastra/core/mastra"; import { DefaultStorage } from "@mastra/core/storage/libsql"; const mastra = new Mastra({ storage: new DefaultStorage({ config: { url: "file:storage.db", // Local file-based database // For production: // url: process.env.DATABASE_URL, // authToken: process.env.DATABASE_AUTH_TOKEN, }, }), workflows: { weatherWorkflow, travelWorkflow, }, }); ``` ### Upstash (Redis-Compatible) For serverless environments: ```typescript import { Mastra } from "@mastra/core/mastra"; import { UpstashStore } from "@mastra/upstash"; const mastra = new Mastra({ storage: new UpstashStore({ url: process.env.UPSTASH_URL, token: process.env.UPSTASH_TOKEN, }), workflows: { weatherWorkflow, travelWorkflow, }, }); ``` ## Best Practices for Working with Snapshots 1. **Ensure Serializability**: Any data that needs to be included in the snapshot must be serializable (convertible to JSON). 2. **Minimize Snapshot Size**: Avoid storing large data objects directly in the workflow context. Instead, store references to them (like IDs) and retrieve the data when needed. 3. **Handle Resume Context Carefully**: When resuming a workflow, carefully consider what context to provide. This will be merged with the existing snapshot data. 4. **Set Up Proper Monitoring**: Implement monitoring for suspended workflows, especially long-running ones, to ensure they are properly resumed. 5. **Consider Storage Scaling**: For applications with many suspended workflows, ensure your storage solution is appropriately scaled. ## Advanced Snapshot Patterns ### Custom Snapshot Metadata When suspending a workflow, you can include custom metadata that can help when resuming: ```typescript await suspend({ reason: "Waiting for customer approval", requiredApprovers: ["manager", "finance"], requestedBy: currentUser, urgency: "high", expires: new Date(Date.now() + 7 * 24 * 60 * 60 * 1000), }); ``` This metadata is stored with the snapshot and available when resuming. ### Conditional Resumption You can implement conditional logic based on the suspend payload when resuming: ```typescript run.watch(async ({ activePaths }) => { const isApprovalStepSuspended = activePaths.get("approval")?.status === "suspended"; if (isApprovalStepSuspended) { const payload = activePaths.get("approval")?.suspendPayload; if (payload.urgency === "high" && currentUser.role === "manager") { await resume({ stepId: "approval", context: { approved: true, approver: currentUser.id }, }); } } }); ``` ## Related - [Suspend Function Reference](./suspend.mdx) - [Resume Function Reference](./resume.mdx) - [Watch Function Reference](./watch.mdx) - [Suspend and Resume Guide](../../workflows/suspend-and-resume.mdx) --- title: "Reference: start() | Running Workflows | Mastra Docs" description: "Documentation for the `start()` method in workflows, which begins execution of a workflow run." --- # start() [EN] Source: https://mastra.ai/en/reference/workflows/start The start function begins execution of a workflow run. It processes all steps in the defined workflow order, handling parallel execution, branching logic, and step dependencies. 
## Usage

```typescript copy showLineNumbers
const { runId, start } = workflow.createRun();

const result = await start({
  triggerData: { inputValue: 42 },
});
```

## Parameters

### config

| Name | Type | Description |
| --- | --- | --- |
| `triggerData` | `TriggerSchema` | Initial data that matches the workflow's `triggerSchema` (required) |

## Returns

| Name | Type | Description |
| --- | --- | --- |
| `results` | `Record<string, any>` | Combined output from all completed workflow steps |
| `status` | `'completed' \| 'error' \| 'suspended'` | Final status of the workflow run |

## Error Handling

The start function may throw several types of validation errors:

```typescript copy showLineNumbers
try {
  const result = await start({ triggerData: data });
} catch (error) {
  if (error instanceof ValidationError) {
    console.log(error.type); // 'circular_dependency' | 'no_terminal_path' | 'unreachable_step'
    console.log(error.details);
  }
}
```

## Related

- [Example: Creating a Workflow](../../../examples/workflows/creating-a-workflow.mdx)
- [Example: Suspend and Resume](../../../examples/workflows/suspend-and-resume.mdx)
- [createRun Reference](./createRun.mdx)
- [Workflow Class Reference](./workflow.mdx)
- [Step Class Reference](./step-class.mdx)

---
title: "Reference: Step | Building Workflows | Mastra Docs"
description: Documentation for the Step class, which defines individual units of work within a workflow.
---

# Step

[EN] Source: https://mastra.ai/en/reference/workflows/step-class

The Step class defines individual units of work within a workflow, encapsulating execution logic, data validation, and input/output handling.

## Usage

```typescript
const processOrder = new Step({
  id: "processOrder",
  inputSchema: z.object({
    orderId: z.string(),
    userId: z.string(),
  }),
  outputSchema: z.object({
    status: z.string(),
    orderId: z.string(),
  }),
  execute: async ({ context, runId }) => {
    return {
      status: "processed",
      orderId: context.orderId,
    };
  },
});
```

## Constructor Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `id` | `string` | Yes | Unique identifier for the step |
| `inputSchema` | `z.ZodSchema` | No | Schema for validating the step's input data |
| `outputSchema` | `z.ZodSchema` | No | Schema for validating the step's output |
| `payload` | `Record<string, any>` | No | Static data to be merged with variables |
| `execute` | `(params: ExecuteParams) => Promise<any>` | Yes | Async function containing step logic |

### ExecuteParams

| Name | Type | Description |
| --- | --- | --- |
| `context` | `StepContext` | Access to the workflow context and step results |
| `runId` | `string` | ID of the current workflow run |
| `suspend` | `(payload?: Record<string, any>) => Promise<void>` | Function to suspend step execution |
| `mastra` | `Mastra` | Access to the Mastra instance |

## Related

- [Workflow Reference](./workflow.mdx)
- [Step Configuration Guide](../../workflows/steps.mdx)
- [Control Flow Guide](../../workflows/control-flow.mdx)

---
title: "Reference: StepCondition | Building Workflows | Mastra"
description: Documentation for the step condition class in workflows, which determines whether a step should execute based on the output of previous steps or trigger data.
---

# StepCondition

[EN] Source: https://mastra.ai/en/reference/workflows/step-condition

Conditions determine whether a step should execute based on the output of previous steps or trigger data.

## Usage

There are three ways to specify conditions: function, query object, and simple path comparison.

### 1. Function Condition

```typescript copy showLineNumbers
workflow.step(processOrder, {
  when: async ({ context }) => {
    const auth = context?.getStepResult<{ status: string }>("auth");
    return auth?.status === "authenticated";
  },
});
```

### 2. Query Object

```typescript copy showLineNumbers
workflow.step(processOrder, {
  when: {
    ref: { step: 'auth', path: 'status' },
    query: { $eq: 'authenticated' },
  },
});
```
### 3. Simple Path Comparison

```typescript copy showLineNumbers
workflow.step(processOrder, {
  when: {
    "auth.status": "authenticated",
  },
});
```

Based on the type of condition, the workflow runner will try to match the condition to one of these types:

1. Simple Path Condition (when there's a dot in the key)
2. Base/Query Condition (when there's a 'ref' property)
3. Function Condition (when it's an async function)

## StepCondition

| Name | Type | Description |
| --- | --- | --- |
| `ref` | `{ step: Step \| string, path: string }` | Reference to a step's output value |
| `query` | `Query` | MongoDB-style query using sift operators (`$eq`, `$gt`, etc.) |

## Query

The Query object provides MongoDB-style query operators for comparing values from previous steps or trigger data. It supports basic comparison operators like `$eq`, `$gt`, and `$lt`, as well as array operators like `$in` and `$nin`, and these can be combined with and/or operators for complex conditions. This query syntax allows for readable conditional logic when determining whether a step should execute.
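As a sketch of combining operators on a single referenced value — the `pricing` step and `total` path are hypothetical, and the composition assumes sift-style `$and` support:

```typescript
// Execute processOrder only when the referenced "total" value from the
// hypothetical pricing step falls within a range
workflow.step(processOrder, {
  when: {
    ref: { step: 'pricing', path: 'total' },
    query: { $and: [{ $gte: 10 }, { $lt: 100 }] },
  },
});
```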
## Related

- [Step Options Reference](./step-options.mdx)
- [Step Function Reference](./step-function.mdx)
- [Control Flow Guide](../../workflows/control-flow.mdx)

---
title: "Reference: Workflow.step() | Workflows | Mastra Docs"
description: Documentation for the `.step()` method in workflows, which adds a new step to the workflow.
---

# Workflow.step()

[EN] Source: https://mastra.ai/en/reference/workflows/step-function

The `.step()` method adds a new step to the workflow, optionally configuring its variables and execution conditions.

## Usage

```typescript
workflow.step({
  id: "stepTwo",
  outputSchema: z.object({
    result: z.number(),
  }),
  execute: async ({ context }) => {
    return { result: 42 };
  },
});
```

## Parameters

### StepDefinition

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `id` | `string` | Yes | Unique identifier for the step |
| `outputSchema` | `z.ZodSchema` | No | Schema for validating the step's output |
| `execute` | `(params: ExecuteParams) => Promise<any>` | Yes | Function containing step logic |

### StepOptions

| Name | Type | Description |
| --- | --- | --- |
| `variables` | `Record<string, VariableRef>` (optional) | Map of variable names to their source references |
| `when` | `StepCondition` (optional) | Condition that must be met for the step to execute |

## Related

- [Basic Usage with Step Instance](../../workflows/steps.mdx)
- [Step Class Reference](./step-class.mdx)
- [Workflow Class Reference](./workflow.mdx)
- [Control Flow Guide](../../workflows/control-flow.mdx)

---
title: "Reference: StepOptions | Building Workflows | Mastra Docs"
description: Documentation for the step options in workflows, which control variable mapping, execution conditions, and other runtime behavior.
---

# StepOptions

[EN] Source: https://mastra.ai/en/reference/workflows/step-options

Configuration options for workflow steps that control variable mapping, execution conditions, and other runtime behavior.

## Usage

```typescript
workflow.step(processOrder, {
  variables: {
    orderId: { step: 'trigger', path: 'id' },
    userId: { step: 'auth', path: 'user.id' },
  },
  when: {
    ref: { step: 'auth', path: 'status' },
    query: { $eq: 'authenticated' },
  },
});
```

## Properties

| Name | Type | Description |
| --- | --- | --- |
| `variables` | `Record<string, VariableRef>` (optional) | Maps step input variables to values from other steps |
| `when` | `StepCondition` (optional) | Condition that must be met for step execution |

### VariableRef

| Name | Type | Description |
| --- | --- | --- |
| `step` | `Step \| 'trigger' \| string` | Source step instance, step ID, or `'trigger'` for workflow trigger data |
| `path` | `string` | Path to the value within the source step's output |

## Related

- [Path Comparison](../../workflows/control-flow.mdx#path-comparison)
- [Step Function Reference](./step-function.mdx)
- [Step Class Reference](./step-class.mdx)
- [Workflow Class Reference](./workflow.mdx)
- [Control Flow Guide](../../workflows/control-flow.mdx)

---
title: "Step Retries | Error Handling | Mastra Docs"
description: "Automatically retry failed steps in Mastra workflows with configurable retry policies."
---

# Step Retries

[EN] Source: https://mastra.ai/en/reference/workflows/step-retries

Mastra provides built-in retry mechanisms to handle transient failures in workflow steps. This allows workflows to recover gracefully from temporary issues without requiring manual intervention.

## Overview

When a step in a workflow fails (throws an exception), Mastra can automatically retry the step execution based on a configurable retry policy. This is useful for handling:

- Network connectivity issues
- Service unavailability
- Rate limiting
- Temporary resource constraints
- Other transient failures

## Default Behavior

By default, steps do not retry when they fail. This means:

- A step will execute once
- If it fails, the step is immediately marked as failed
- The workflow will continue to execute any subsequent steps that don't depend on the failed step

## Configuration Options

Retries can be configured at two levels:

### 1. Workflow-level Configuration

You can set a default retry configuration for all steps in a workflow:

```typescript
const workflow = new Workflow({
  name: 'my-workflow',
  retryConfig: {
    attempts: 3, // Number of retries (in addition to the initial attempt)
    delay: 1000, // Delay between retries in milliseconds
  },
});
```

### 2. Step-level Configuration

You can also configure retries on individual steps, which overrides the workflow-level configuration for that specific step:

```typescript
const fetchDataStep = new Step({
  id: 'fetchData',
  execute: async () => {
    // Fetch data from external API
  },
  retryConfig: {
    attempts: 5, // This step will retry up to 5 times
    delay: 2000, // With a 2-second delay between retries
  },
});
```

## Retry Parameters

The `retryConfig` object supports the following parameters:

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `attempts` | number | 0 | The number of retry attempts (in addition to the initial attempt) |
| `delay` | number | 1000 | Time in milliseconds to wait between retries |

For example, `attempts: 3` means a step can execute up to four times in total: the initial attempt plus three retries.

## How Retries Work

When a step fails, Mastra's retry mechanism:

1. Checks if the step has retry attempts remaining
2. If attempts remain:
   - Decrements the attempt counter
   - Transitions the step to a "waiting" state
   - Waits for the configured delay period
   - Retries the step execution
3. If no attempts remain or all attempts have been exhausted:
   - Marks the step as "failed"
   - Continues workflow execution (for steps that don't depend on the failed step)

During retry attempts, the workflow execution remains active but paused for the specific step that is being retried.
## Examples

### Basic Retry Example

```typescript
import { Workflow, Step } from '@mastra/core/workflows';

// Define a step that might fail
const unreliableApiStep = new Step({
  id: 'callUnreliableApi',
  execute: async () => {
    // Simulate an API call that might fail
    const random = Math.random();
    if (random < 0.7) {
      throw new Error('API call failed');
    }
    return { data: 'API response data' };
  },
  retryConfig: {
    attempts: 3, // Retry up to 3 times
    delay: 2000, // Wait 2 seconds between attempts
  },
});

// A downstream step that consumes the API result
const processResultStep = new Step({
  id: 'processResult',
  execute: async ({ context }) => {
    return { processed: context.getStepResult('callUnreliableApi') };
  },
});

// Create a workflow with the unreliable step
const workflow = new Workflow({
  name: 'retry-demo-workflow',
});

workflow
  .step(unreliableApiStep)
  .then(processResultStep)
  .commit();
```

### Workflow-level Retries with Step Override

```typescript
import { Workflow, Step } from '@mastra/core/workflows';

// Create a workflow with default retry configuration
const workflow = new Workflow({
  name: 'multi-retry-workflow',
  retryConfig: {
    attempts: 2, // All steps will retry twice by default
    delay: 1000, // With a 1-second delay
  },
});

// This step uses the workflow's default retry configuration
const standardStep = new Step({
  id: 'standardStep',
  execute: async () => {
    // Some operation that might fail
  },
});

// This step overrides the workflow's retry configuration
const criticalStep = new Step({
  id: 'criticalStep',
  execute: async () => {
    // Critical operation that needs more retry attempts
  },
  retryConfig: {
    attempts: 5, // Override with 5 retry attempts
    delay: 5000, // And a longer 5-second delay
  },
});

// This step disables retries
const noRetryStep = new Step({
  id: 'noRetryStep',
  execute: async () => {
    // Operation that should not retry
  },
  retryConfig: {
    attempts: 0, // Explicitly disable retries
  },
});

workflow
  .step(standardStep)
  .then(criticalStep)
  .then(noRetryStep)
  .commit();
```

## Monitoring Retries

You can monitor retry attempts in your logs. Mastra logs retry-related events at the `debug` level:

```
[DEBUG] Step fetchData failed (runId: abc-123)
[DEBUG] Attempt count for step fetchData: 2 remaining attempts (runId: abc-123)
[DEBUG] Step fetchData waiting (runId: abc-123)
[DEBUG] Step fetchData finished waiting (runId: abc-123)
[DEBUG] Step fetchData pending (runId: abc-123)
```

## Best Practices

1. **Use Retries for Transient Failures**: Only configure retries for operations that might experience transient failures. For deterministic errors (like validation failures), retries won't help.

2. **Set Appropriate Delays**: Consider using longer delays for external API calls to allow time for services to recover.

3. **Limit Retry Attempts**: Don't set extremely high retry counts, as this could cause workflows to run for excessive periods during outages.

4. **Implement Idempotent Operations**: Ensure your step's `execute` function is idempotent (can be called multiple times without side effects), since it may be retried.

5. **Consider Backoff Strategies**: For more advanced scenarios, consider implementing exponential backoff in your step's logic for operations that might be rate-limited (see the sketch below).
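Since `retryConfig` only supports a fixed delay, one way to approximate exponential backoff is to loop inside the step itself rather than relying on the retry mechanism. A minimal sketch with a hypothetical rate-limited endpoint:

```typescript
import { Step } from '@mastra/core/workflows';

// Hypothetical step that retries a rate-limited call with exponential
// backoff inside execute(), instead of using retryConfig's fixed delay
const backoffStep = new Step({
  id: 'rateLimitedCall',
  execute: async () => {
    const maxAttempts = 5;
    for (let attempt = 0; attempt < maxAttempts; attempt++) {
      const res = await fetch('https://api.example.com/limited'); // hypothetical endpoint
      if (res.ok) return { data: await res.json() };
      if (attempt === maxAttempts - 1) break;
      // Exponential backoff: 1s, 2s, 4s, 8s, ...
      await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** attempt));
    }
    throw new Error('Rate-limited call failed after retries');
  },
});
```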
## Related

- [Step Class Reference](./step-class.mdx)
- [Workflow Configuration](./workflow.mdx)
- [Error Handling in Workflows](../../workflows/error-handling.mdx)

---
title: "Reference: suspend() | Control Flow | Mastra Docs"
description: "Documentation for the suspend function in Mastra workflows, which pauses execution until resumed."
---

# suspend()

[EN] Source: https://mastra.ai/en/reference/workflows/suspend

Pauses workflow execution at the current step until explicitly resumed. The workflow state is persisted and can be continued later.

## Usage Example

```typescript
const approvalStep = new Step({
  id: "needsApproval",
  execute: async ({ context, suspend }) => {
    if (context.triggerData.amount > 1000) {
      await suspend();
    }
    return { approved: true };
  },
});
```

## Parameters

| Name | Type | Description |
| --- | --- | --- |
| `payload` | `Record<string, any>` (optional) | Optional data to store with the suspended state |

## Returns

A `Promise<void>` that resolves when the workflow is successfully suspended.

## Additional Examples

Suspend with metadata:

```typescript
const reviewStep = new Step({
  id: "review",
  execute: async ({ context, suspend }) => {
    await suspend({
      reason: "Needs manager approval",
      requestedBy: context.user,
    });
    return { reviewed: true };
  },
});
```

### Related

- [Suspend & Resume Workflows](../../workflows/suspend-and-resume.mdx)
- [.resume()](./resume.mdx)
- [.watch()](./watch.mdx)

---
title: "Reference: Workflow.then() | Building Workflows | Mastra Docs"
description: Documentation for the `.then()` method in workflows, which creates sequential dependencies between steps.
---

# Workflow.then()

[EN] Source: https://mastra.ai/en/reference/workflows/then

The `.then()` method creates a sequential dependency between workflow steps, ensuring steps execute in a specific order.

## Usage

```typescript
workflow
  .step(stepOne)
  .then(stepTwo)
  .then(stepThree);
```

## Parameters

| Name | Type | Description |
| --- | --- | --- |
| `step` | `Step` | The step that should execute after the previous step completes |

## Returns

The workflow instance, for method chaining.

## Validation

When using `then`:

- The previous step must exist in the workflow
- Steps cannot form circular dependencies
- Each step can only appear once in a sequential chain

## Error Handling

```typescript
try {
  workflow
    .step(stepA)
    .then(stepB)
    .then(stepA) // Will throw error - circular dependency
    .commit();
} catch (error) {
  if (error instanceof ValidationError) {
    console.log(error.type); // 'circular_dependency'
    console.log(error.details);
  }
}
```

## Related

- [step Reference](./step-class.mdx)
- [after Reference](./after.mdx)
- [Sequential Steps Example](../../../examples/workflows/sequential-steps.mdx)
- [Control Flow Guide](../../workflows/control-flow.mdx)

---
title: "Reference: Workflow.until() | Looping in Workflows | Mastra Docs"
description: "Documentation for the `.until()` method in Mastra workflows, which repeats a step until a specified condition becomes true."
---

# Workflow.until()

[EN] Source: https://mastra.ai/en/reference/workflows/until

The `.until()` method repeats a step until a specified condition becomes true. This creates a loop that keeps executing the specified step until the condition is satisfied.

## Usage

```typescript
workflow
  .step(incrementStep)
  .until(condition, incrementStep)
  .then(finalStep);
```

## Parameters

| Name | Type | Description |
| --- | --- | --- |
| `condition` | `Function \| ReferenceCondition` | Condition that ends the loop once it evaluates to true |
| `step` | `Step` | The step to repeat until the condition is met |

## Condition Types

### Function Condition

You can use a function that returns a boolean:

```typescript
workflow
  .step(incrementStep)
  .until(async ({ context }) => {
    const result = context.getStepResult<{ value: number }>('increment');
    return (result?.value ?? 0) >= 10; // Stop when value reaches or exceeds 10
  }, incrementStep)
  .then(finalStep);
```
### Reference Condition

You can use a reference-based condition with comparison operators:

```typescript
workflow
  .step(incrementStep)
  .until(
    {
      ref: { step: incrementStep, path: 'value' },
      query: { $gte: 10 }, // Stop when value is greater than or equal to 10
    },
    incrementStep
  )
  .then(finalStep);
```

## Comparison Operators

When using reference-based conditions, you can use these comparison operators:

| Operator | Description | Example |
|----------|-------------|---------|
| `$eq` | Equal to | `{ $eq: 10 }` |
| `$ne` | Not equal to | `{ $ne: 0 }` |
| `$gt` | Greater than | `{ $gt: 5 }` |
| `$gte` | Greater than or equal to | `{ $gte: 10 }` |
| `$lt` | Less than | `{ $lt: 20 }` |
| `$lte` | Less than or equal to | `{ $lte: 15 }` |

## Returns

The workflow instance, for method chaining.

## Example

```typescript
import { Workflow, Step } from '@mastra/core';
import { z } from 'zod';

// Create a step that increments a counter
const incrementStep = new Step({
  id: 'increment',
  description: 'Increments the counter by 1',
  outputSchema: z.object({
    value: z.number(),
  }),
  execute: async ({ context }) => {
    // Get current value from previous execution or start at 0
    const currentValue =
      context.getStepResult<{ value: number }>('increment')?.value ||
      context.getStepResult<{ startValue: number }>('trigger')?.startValue ||
      0;

    // Increment the value
    const value = currentValue + 1;
    console.log(`Incrementing to ${value}`);
    return { value };
  },
});

// Create a final step
const finalStep = new Step({
  id: 'final',
  description: 'Final step after loop completes',
  execute: async ({ context }) => {
    const finalValue = context.getStepResult<{ value: number }>('increment')?.value;
    console.log(`Loop completed with final value: ${finalValue}`);
    return { finalValue };
  },
});

// Create the workflow
const counterWorkflow = new Workflow({
  name: 'counter-workflow',
  triggerSchema: z.object({
    startValue: z.number(),
    targetValue: z.number(),
  }),
});

// Configure the workflow with an until loop
counterWorkflow
  .step(incrementStep)
  .until(async ({ context }) => {
    const targetValue = context.triggerData.targetValue;
    const currentValue = context.getStepResult<{ value: number }>('increment')?.value ?? 0;
    return currentValue >= targetValue;
  }, incrementStep)
  .then(finalStep)
  .commit();

// Execute the workflow
const run = counterWorkflow.createRun();
const result = await run.start({ triggerData: { startValue: 0, targetValue: 5 } });
// Will increment from 0 to 5, then stop and execute finalStep
```

## Related

- [.while()](./while.mdx) - Loop while a condition is true
- [Control Flow Guide](../../workflows/control-flow.mdx#loop-control-with-until-and-while)
- [Workflow Class Reference](./workflow.mdx)

---
title: "Reference: run.watch() | Workflows | Mastra Docs"
description: Documentation for the `.watch()` method in workflows, which monitors the status of a workflow run.
---

# run.watch()

[EN] Source: https://mastra.ai/en/reference/workflows/watch

The `.watch()` function subscribes to state changes on a workflow run, allowing you to monitor execution progress and react to state updates.
## Usage Example

```typescript
import { Workflow } from "@mastra/core/workflows";

const workflow = new Workflow({
  name: "document-processor",
});

const run = workflow.createRun();

// Subscribe to state changes
const unsubscribe = run.watch(({ results, activePaths }) => {
  console.log('Results:', results);
  console.log('Active paths:', activePaths);
});

// Run the workflow
await run.start({
  triggerData: { text: "Process this document" },
});

// Stop watching
unsubscribe();
```

## Parameters

| Name | Type | Description |
| --- | --- | --- |
| `callback` | `(state: WorkflowState) => void` | Function called whenever the workflow state changes |

### WorkflowState Properties

| Name | Type | Description |
| --- | --- | --- |
| `results` | `Record<string, any>` | Outputs from completed workflow steps |
| `activePaths` | `Map<string, { status: string; suspendPayload?: any }>` | Current status of each step |
| `runId` | `string` | ID of the workflow run |
| `timestamp` | `number` | Timestamp of the workflow run |

## Returns

An `unsubscribe` function (`() => void`) that stops watching workflow state changes.

## Additional Examples

Monitor specific step completion:

```typescript
run.watch(({ results, activePaths }) => {
  if (activePaths.get('processDocument')?.status === 'completed') {
    console.log(
      'Document processing output:',
      results['processDocument'].output,
    );
  }
});
```

Error handling:

```typescript
run.watch(({ results, activePaths }) => {
  if (activePaths.get('processDocument')?.status === 'failed') {
    console.error(
      'Document processing failed:',
      results['processDocument'].error,
    );
    // Implement error recovery logic
  }
});
```

### Related

- [Workflow Creation](/reference/workflows/createRun)
- [Step Configuration](/reference/workflows/step-class)

---
title: "Reference: Workflow.while() | Looping in Workflows | Mastra Docs"
description: "Documentation for the `.while()` method in Mastra workflows, which repeats a step as long as a specified condition remains true."
---

# Workflow.while()

[EN] Source: https://mastra.ai/en/reference/workflows/while

The `.while()` method repeats a step as long as a specified condition remains true. This creates a loop that keeps executing the specified step until the condition becomes false.

## Usage

```typescript
workflow
  .step(incrementStep)
  .while(condition, incrementStep)
  .then(finalStep);
```

## Parameters

| Name | Type | Description |
| --- | --- | --- |
| `condition` | `Function \| ReferenceCondition` | Condition that keeps the loop running while it evaluates to true |
| `step` | `Step` | The step to repeat while the condition holds |

## Condition Types

### Function Condition

You can use a function that returns a boolean:

```typescript
workflow
  .step(incrementStep)
  .while(async ({ context }) => {
    const result = context.getStepResult<{ value: number }>('increment');
    return (result?.value ?? 0) < 10; // Continue as long as value is less than 10
  }, incrementStep)
  .then(finalStep);
```
### Reference Condition

You can use a reference-based condition with comparison operators:

```typescript
workflow
  .step(incrementStep)
  .while(
    {
      ref: { step: incrementStep, path: 'value' },
      query: { $lt: 10 }, // Continue as long as value is less than 10
    },
    incrementStep
  )
  .then(finalStep);
```

## Comparison Operators

When using reference-based conditions, you can use these comparison operators:

| Operator | Description | Example |
|----------|-------------|---------|
| `$eq` | Equal to | `{ $eq: 10 }` |
| `$ne` | Not equal to | `{ $ne: 0 }` |
| `$gt` | Greater than | `{ $gt: 5 }` |
| `$gte` | Greater than or equal to | `{ $gte: 10 }` |
| `$lt` | Less than | `{ $lt: 20 }` |
| `$lte` | Less than or equal to | `{ $lte: 15 }` |

## Returns

The workflow instance, for method chaining.

## Example

```typescript
import { Workflow, Step } from '@mastra/core';
import { z } from 'zod';

// Create a step that increments a counter
const incrementStep = new Step({
  id: 'increment',
  description: 'Increments the counter by 1',
  outputSchema: z.object({
    value: z.number(),
  }),
  execute: async ({ context }) => {
    // Get current value from previous execution or start at 0
    const currentValue =
      context.getStepResult<{ value: number }>('increment')?.value ||
      context.getStepResult<{ startValue: number }>('trigger')?.startValue ||
      0;

    // Increment the value
    const value = currentValue + 1;
    console.log(`Incrementing to ${value}`);
    return { value };
  },
});

// Create a final step
const finalStep = new Step({
  id: 'final',
  description: 'Final step after loop completes',
  execute: async ({ context }) => {
    const finalValue = context.getStepResult<{ value: number }>('increment')?.value;
    console.log(`Loop completed with final value: ${finalValue}`);
    return { finalValue };
  },
});

// Create the workflow
const counterWorkflow = new Workflow({
  name: 'counter-workflow',
  triggerSchema: z.object({
    startValue: z.number(),
    targetValue: z.number(),
  }),
});

// Configure the workflow with a while loop
counterWorkflow
  .step(incrementStep)
  .while(
    async ({ context }) => {
      const targetValue = context.triggerData.targetValue;
      const currentValue = context.getStepResult<{ value: number }>('increment')?.value ?? 0;
      return currentValue < targetValue;
    },
    incrementStep
  )
  .then(finalStep)
  .commit();

// Execute the workflow
const run = counterWorkflow.createRun();
const result = await run.start({ triggerData: { startValue: 0, targetValue: 5 } });
// Will increment from 0 to 4, then stop and execute finalStep
```

## Related

- [.until()](./until.mdx) - Loop until a condition becomes true
- [Control Flow Guide](../../workflows/control-flow.mdx#loop-control-with-until-and-while)
- [Workflow Class Reference](./workflow.mdx)

---
title: "Reference: Workflow Class | Building Workflows | Mastra Docs"
description: Documentation for the Workflow class in Mastra, which enables you to create state machines for complex sequences of operations with conditional branching and data validation.
---

# Workflow Class

[EN] Source: https://mastra.ai/en/reference/workflows/workflow

The Workflow class enables you to create state machines for complex sequences of operations with conditional branching and data validation.
```ts copy
import { Workflow } from "@mastra/core/workflows";

const workflow = new Workflow({ name: "my-workflow" });
```

## API Reference

### Constructor

| Name | Type | Description |
| --- | --- | --- |
| `name` | `string` | Name of the workflow |
| `logger` | `Logger` (optional) | Optional logger instance for workflow execution details |
| `steps` | `Step[]` | Array of steps to include in the workflow |
| `triggerSchema` | `z.Schema` | Optional schema for validating workflow trigger data |

### Core Methods

#### `step()`

Adds a [Step](./step-class.mdx) to the workflow, including transitions to other steps. Returns the workflow instance for chaining. [Learn more about steps](./step-class.mdx).

#### `commit()`

Validates and finalizes the workflow configuration. Must be called after adding all steps.

#### `execute()`

Executes the workflow with optional trigger data. Typed based on the [trigger schema](./workflow.mdx#trigger-schemas).

## Trigger Schemas

Trigger schemas validate the initial data passed to a workflow using Zod.

```ts showLineNumbers copy
const workflow = new Workflow({
  name: "order-process",
  triggerSchema: z.object({
    orderId: z.string(),
    customer: z.object({
      id: z.string(),
      email: z.string().email(),
    }),
  }),
});
```

The schema:

- Validates data passed to `execute()`
- Provides TypeScript types for your workflow input

## Validation

Workflow validation happens at two key times:

### 1. At Commit Time

When you call `.commit()`, the workflow validates:

```ts showLineNumbers copy
workflow
  .step('step1', {...})
  .step('step2', {...})
  .commit(); // Validates workflow structure
```

- Circular dependencies between steps
- Terminal paths (every path must end)
- Unreachable steps
- Variable references to non-existent steps
- Duplicate step IDs

### 2. During Execution

When you call `start()`, it validates:

```ts showLineNumbers copy
const { runId, start } = workflow.createRun();

// Validates trigger data against schema
await start({
  triggerData: {
    orderId: "123",
    customer: {
      id: "cust_123",
      email: "invalid-email", // Will fail validation
    },
  },
});
```

- Trigger data against the trigger schema
- Each step's input data against its `inputSchema`
- That variable paths exist in referenced step outputs
- That required variables are present

## Workflow Status

A workflow's status indicates its current execution state, for example `SUSPENDED`, `COMPLETED`, or `FAILED`.

### Example: Handling Different Statuses

```typescript showLineNumbers copy
const { runId, start, watch } = workflow.createRun();

watch(async ({ status }) => {
  switch (status) {
    case "SUSPENDED":
      // Handle suspended state
      break;
    case "COMPLETED":
      // Process results
      break;
    case "FAILED":
      // Handle error state
      break;
  }
});

await start({ triggerData: data });
```

## Error Handling

```ts showLineNumbers copy
try {
  const { runId, start, watch, resume } = workflow.createRun();
  await start({ triggerData: data });
} catch (error) {
  if (error instanceof ValidationError) {
    // Handle validation errors
    console.log(error.type); // 'circular_dependency' | 'no_terminal_path' | 'unreachable_step'
    console.log(error.details); // { stepId?: string, path?: string[] }
  }
}
```

## Passing Context Between Steps

Steps can access data from previous steps in the workflow through the context object. Each step receives the accumulated context from all previous steps that have executed.
```typescript showLineNumbers copy
workflow
  .step({
    id: 'getData',
    execute: async ({ context }) => {
      return {
        data: { id: '123', value: 'example' },
      };
    },
  })
  .step({
    id: 'processData',
    execute: async ({ context }) => {
      // Access data from previous step through context.steps
      const previousData = context.steps.getData.output.data;
      // Process previousData.id and previousData.value
    },
  });
```

The context object:

- Contains results from all completed steps in `context.steps`
- Provides access to step outputs through `context.steps.[stepId].output`
- Is typed based on step output schemas
- Is immutable to ensure data consistency

## Related Documentation

- [Step](./step-class.mdx)
- [.then()](./then.mdx)
- [.step()](./step-function.mdx)
- [.after()](./after.mdx)

---
title: 'Showcase'
description: 'Check out these applications built with Mastra'
---

[EN] Source: https://mastra.ai/en/showcase

---
title: "Agent Tool Selection | Agent Documentation | Mastra"
---

# Agent Tool Selection

[JA] Source: https://mastra.ai/ja/docs/agents/adding-tools
## Vercel AI SDK Tool Format

Mastra supports tools created in the Vercel AI SDK format. You can import and use these tools directly:

```typescript filename="src/mastra/tools/vercelTool.ts" copy
import { tool } from "ai";
import { z } from "zod";

export const weatherInfo = tool({
  description: "Fetches the current weather information for a given city",
  parameters: z.object({
    city: z.string().describe("The city to get weather for"),
  }),
  execute: async ({ city }) => {
    // Replace with actual API call
    const data = await fetch(`https://api.example.com/weather?city=${city}`);
    return data.json();
  },
});
```

You can use Vercel tools alongside Mastra tools in your agents:

```typescript filename="src/mastra/agents/weatherAgent.ts"
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";
import { weatherInfo } from "../tools/vercelTool";
import * as mastraTools from "../tools/mastraTools";

export const weatherAgent = new Agent({
  name: "Weather Agent",
  instructions: "You are a helpful assistant that provides weather information.",
  model: openai("gpt-4"),
  tools: {
    weatherInfo, // Vercel tool
    ...mastraTools, // Mastra tools
  },
});
```

Both tool formats work seamlessly within the agent's workflow.

## Best Practices for Tool Design

When creating tools for your agents, following these guidelines helps ensure reliable and intuitive tool usage:

### Tool Descriptions

Your tool's main description should focus on its purpose and value:

- Keep the description simple and focused on **what** the tool does
- Emphasize the tool's primary use case
- Avoid implementation details in the main description
- Focus on helping the agent understand **when** to use the tool

```typescript
createTool({
  id: "documentSearch",
  description:
    "Access the knowledge base to find information needed to answer user questions",
  // ... rest of tool configuration
});
```

### Parameter Schemas

Technical details belong in the parameter schemas, where they help the agent use the tool correctly:

- Make parameters self-documenting with clear descriptions
- Include default values and their implications
- Provide examples where helpful
- Describe the impact of different parameter choices

```typescript
inputSchema: z.object({
  query: z.string().describe("The search query to find relevant information"),
  limit: z.number().describe(
    "Number of results to return. Higher values provide more context, lower values focus on best matches",
  ),
  options: z.string().describe(
    "Optional configuration. Example: '{'filter': 'category=news'}'",
  ),
}),
```
### Agent Interaction Patterns

Tools are most likely to be used effectively when:

- Queries or tasks are complex enough to clearly require tool assistance
- Agent instructions provide clear guidance on using the tool
- Parameter requirements are well documented in the schema
- The tool's purpose matches the needs of the query

### Common Pitfalls

- Overloading the main description with technical details
- Mixing implementation details with usage guidance
- Unclear parameter descriptions or missing examples

Following these practices maintains a clear separation between the tool's purpose (main description) and its implementation details (parameter schema), which helps keep tools discoverable and easy for agents to use.

## Model Context Protocol (MCP) Tools

Mastra also supports tools from MCP-compatible servers through the `@mastra/mcp` package. MCP provides a standardized way for AI models to discover and interact with external tools and resources, so you can easily integrate third-party tools into your agents without writing custom integrations.

For more information on using MCP tools, including configuration options and best practices, see the [MCP guide](/docs/agents/mcp-guide).

# Adding Voice to Agents

[JA] Source: https://mastra.ai/ja/docs/agents/adding-voice

Mastra agents can be enhanced with voice capabilities, allowing them to speak responses and listen to user input. You can configure an agent to use a single voice provider, or combine multiple providers for different operations.

## Using a Single Provider

The simplest way to add voice to an agent is to use a single provider for both speaking and listening:

```typescript
import { createReadStream } from "fs";
import path from "path";
import { Agent } from "@mastra/core/agent";
import { OpenAIVoice } from "@mastra/voice-openai";
import { openai } from "@ai-sdk/openai";
import { playAudio } from "@mastra/node-audio"; // assumed source of the playAudio helper

// Initialize the voice provider with default settings
const voice = new OpenAIVoice();

// Create an agent with voice capabilities
export const agent = new Agent({
  name: "Agent",
  instructions: `You are a helpful assistant with both STT and TTS capabilities.`,
  model: openai("gpt-4o"),
  voice,
});

// The agent can now use voice for interaction
const audioStream = await agent.voice.speak("Hello, I'm your AI assistant!", {
  filetype: "m4a",
});

playAudio(audioStream!);

try {
  const transcription = await agent.voice.listen(audioStream);
  console.log(transcription);
} catch (error) {
  console.error("Error transcribing audio:", error);
}
```

## Using Multiple Providers

For greater flexibility, you can use different providers for speaking and listening with the CompositeVoice class:

```typescript
import { Agent } from "@mastra/core/agent";
import { CompositeVoice } from "@mastra/core/voice";
import { OpenAIVoice } from "@mastra/voice-openai";
import { PlayAIVoice } from "@mastra/voice-playai";
import { openai } from "@ai-sdk/openai";

export const agent = new Agent({
  name: "Agent",
  instructions: `You are a helpful assistant with both STT and TTS capabilities.`,
  model: openai("gpt-4o"),

  // Create a composite voice using OpenAI for listening and PlayAI for speaking
  voice: new CompositeVoice({
    input: new OpenAIVoice(),
    output: new PlayAIVoice(),
  }),
});
```

## Working with Audio Streams

The `speak()` and `listen()` methods work with Node.js streams. Here is how to save and load audio files:

### Saving Speech Output

The `speak` method returns a stream that you can pipe to a file or speaker.

```typescript
import { createWriteStream } from "fs";
import path from "path";

// Generate speech and save to file
const audio = await agent.voice.speak("Hello, World!");
const filePath = path.join(process.cwd(), "agent.mp3");
const writer = createWriteStream(filePath);

audio.pipe(writer);

await new Promise((resolve, reject) => {
  writer.on("finish", () => resolve());
  writer.on("error", reject);
});
```

### Transcribing Audio Input

The `listen` method expects a stream of audio data from a microphone or file.

```typescript
import { createReadStream } from "fs";
import path from "path";

// Read audio file and transcribe
const audioFilePath = path.join(process.cwd(), "/agent.m4a");
const audioStream = createReadStream(audioFilePath);

try {
  console.log("Transcribing audio file...");
  const transcription = await agent.voice.listen(audioStream, {
    filetype: "m4a",
  });
  console.log("Transcription:", transcription);
} catch (error) {
  console.error("Error transcribing audio:", error);
}
```
## Speech-to-Speech Voice Interactions

For more dynamic and interactive voice experiences, you can use real-time voice providers that support speech-to-speech capabilities:

```typescript
import { Agent } from "@mastra/core/agent";
import { getMicrophoneStream } from "@mastra/node-audio";
import { OpenAIRealtimeVoice } from "@mastra/voice-openai-realtime";
import { openai } from "@ai-sdk/openai";
import { search, calculate } from "../tools";

// Initialize the realtime voice provider
const voice = new OpenAIRealtimeVoice({
  chatModel: {
    apiKey: process.env.OPENAI_API_KEY,
    model: "gpt-4o-mini-realtime",
  },
  speaker: "alloy",
});

// Create an agent with speech-to-speech voice capabilities
export const agent = new Agent({
  name: "Agent",
  instructions: `You are a helpful assistant with speech-to-speech capabilities.`,
  model: openai("gpt-4o"),
  tools: {
    // Tools configured on Agent are passed to voice provider
    search,
    calculate,
  },
  voice,
});

// Establish a WebSocket connection
await agent.voice.connect();

// Start a conversation
agent.voice.speak("Hello, I'm your AI assistant!");

// Stream audio from a microphone
const microphoneStream = getMicrophoneStream();
agent.voice.send(microphoneStream);

// When done with the conversation
agent.voice.close();
```

### Event System

Real-time voice providers emit several events that you can listen for:

```typescript
// Listen for speech audio data sent from voice provider
agent.voice.on("speaking", ({ audio }) => {
  // audio contains ReadableStream or Int16Array audio data
});

// Listen for transcribed text sent from both voice provider and user
agent.voice.on("writing", ({ text, role }) => {
  console.log(`${role} said: ${text}`);
});

// Listen for errors
agent.voice.on("error", (error) => {
  console.error("Voice error:", error);
});
```

## Supported Voice Providers

Mastra supports multiple voice providers for text-to-speech (TTS) and speech-to-text (STT) capabilities:

| Provider | Package | Features | Reference |
|----------|---------|----------|-----------|
| OpenAI | `@mastra/voice-openai` | TTS, STT | [Documentation](/reference/voice/openai) |
| OpenAI Realtime | `@mastra/voice-openai-realtime` | Realtime speech-to-speech | [Documentation](/reference/voice/openai-realtime) |
| ElevenLabs | `@mastra/voice-elevenlabs` | High-quality TTS | [Documentation](/reference/voice/elevenlabs) |
| PlayAI | `@mastra/voice-playai` | TTS | [Documentation](/reference/voice/playai) |
| Google | `@mastra/voice-google` | TTS, STT | [Documentation](/reference/voice/google) |
| Deepgram | `@mastra/voice-deepgram` | STT | [Documentation](/reference/voice/deepgram) |
| Murf | `@mastra/voice-murf` | TTS | [Documentation](/reference/voice/murf) |
| Speechify | `@mastra/voice-speechify` | TTS | [Documentation](/reference/voice/speechify) |
| Sarvam | `@mastra/voice-sarvam` | TTS, STT | [Documentation](/reference/voice/sarvam) |
| Azure | `@mastra/voice-azure` | TTS, STT | [Documentation](/reference/voice/mastra-voice) |
| Cloudflare | `@mastra/voice-cloudflare` | TTS | [Documentation](/reference/voice/mastra-voice) |

For more details on voice capabilities, see the [Voice API Reference](/reference/voice/mastra-voice).

---
title: "Using Agent Memory | Agents | Mastra Docs"
description: Documentation on how agents in Mastra use memory to store conversation history and contextual information.
---

# Agent Memory

[JA] Source: https://mastra.ai/ja/docs/agents/agent-memory

Agents in Mastra can leverage a powerful memory system to store conversation history, recall relevant information, and maintain persistent context across interactions. This allows agents to hold more natural, stateful conversations.

## Enabling Memory for an Agent

To enable memory, simply instantiate the `Memory` class and pass it to your agent's configuration. You also need to install the memory package:

```bash npm2yarn copy
npm install @mastra/memory
```

```typescript
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { openai } from "@ai-sdk/openai";

// Basic memory setup
const memory = new Memory();

const agent = new Agent({
  name: "MyMemoryAgent",
  instructions: "You are a helpful assistant with memory.",
  model: openai("gpt-4o"),
  memory: memory, // Attach the memory instance
});
```

This basic setup uses the default configuration, including LibSQL for storage and FastEmbed for embeddings. For detailed setup instructions, see [Memory](/docs/memory/overview).

## Using Memory in Agent Calls

To make use of memory during interactions, you **must** provide a `resourceId` and a `threadId` when calling the agent's `stream()` or `generate()` methods.

- `resourceId`: Typically identifies the user or entity (e.g., `user_123`).
- `threadId`: Identifies a specific conversation thread (e.g., `support_chat_456`).

```typescript
// Example agent call using memory
await agent.stream("Remember that my favorite color is blue.", {
  resourceId: "user_alice",
  threadId: "preferences_thread",
});

// Later in the same thread...
const response = await agent.stream("What is my favorite color?", {
  resourceId: "user_alice",
  threadId: "preferences_thread",
});
// The agent uses memory to recall the favorite color.
```

These IDs ensure that conversation history and context are stored and retrieved correctly for the right user and conversation.

## Next Steps

Explore Mastra's [memory features](/docs/memory/overview) further to learn about threads, conversation history, semantic recall, and working memory.

---
title: "Using MCP With Mastra | Agents | Mastra Docs"
description: "Use MCP in Mastra to integrate third-party tools and resources into your AI agents."
---

# Using Model Context Protocol (MCP) with Mastra

[JA] Source: https://mastra.ai/ja/docs/agents/mcp-guide

[Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction) is a standardized way for AI models to discover and interact with external tools and resources.

## Overview

MCP in Mastra provides a standardized way to connect to tool servers, supporting both stdio- and SSE-based connections.

## Installation

Using pnpm:

```bash
pnpm add @mastra/mcp@latest
```

Using npm:

```bash
npm install @mastra/mcp@latest
```

## Using MCP in Your Code

The `MCPConfiguration` class provides a way to manage multiple tool servers in a Mastra application without managing multiple MCP clients. You can configure both stdio-based and SSE-based servers:

```typescript
import { MCPConfiguration } from "@mastra/mcp";
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";

const mcp = new MCPConfiguration({
  servers: {
    // stdio example
    sequential: {
      name: "sequential-thinking",
      server: {
        command: "npx",
        args: ["-y", "@modelcontextprotocol/server-sequential-thinking"],
      },
    },
    // SSE example
    weather: {
      url: new URL("http://localhost:8080/sse"),
      requestInit: {
        headers: {
          Authorization: "Bearer your-token",
        },
      },
    },
  },
});
```

### Tools vs. Toolsets

The `MCPConfiguration` class provides two ways to access MCP tools, each suited to different use cases:

#### Using Tools (`getTools()`)

Use this approach when:

- You have a single MCP connection
- The tools are used by a single user/context
- Tool configuration (API keys, credentials) remains constant
- You want to initialize an Agent with a fixed set of tools

```typescript
const agent = new Agent({
  name: "CLI Assistant",
  instructions: "You help users with CLI tasks",
  model: openai("gpt-4o-mini"),
  tools: await mcp.getTools(), // Tools are fixed at agent creation time
});
```

#### Using Toolsets (`getToolsets()`)

Use this approach when:

- You need per-request tool configuration
- Tools need different credentials per user
- You are running in a multi-user environment (web app, API, etc.)
- Tool configuration needs to change dynamically

```typescript
const mcp = new MCPConfiguration({
  servers: {
    example: {
      command: "npx",
      args: ["-y", "@example/fakemcp"],
      env: {
        API_KEY: "your-api-key",
      },
    },
  },
});

// Get the current toolsets configured for this user
const toolsets = await mcp.getToolsets();

// Use the agent with user-specific tool configuration
const response = await agent.stream(
  "What's new in Mastra and how's the weather?",
  {
    toolsets,
  },
);
```

## MCP Registries

MCP servers are accessible through registries that provide curated collections of tools. We've curated an [MCP Registry Registry](/mcp-registry-registry) to help you find the best places to source MCP servers; below is how to use tools from some of our favorites:

### mcp.run Registry

[mcp.run](https://www.mcp.run/) makes it easy to call pre-authenticated, secure MCP servers. Tools from mcp.run are free and fully managed, so your agent only needs an SSE URL and can use any tools a user has installed. MCP servers are grouped into [profiles](https://docs.mcp.run/user-guide/manage-profiles) and accessed at unique SSE URLs.

For each profile, you can copy/paste the unique, signed URL into your `MCPConfiguration` like this:
= new MCPConfiguration({ servers: { marketing: { url: new URL(process.env.MCP_RUN_SSE_URL!), }, }, }); ``` > 重要:[mcp.run](https://mcp.run)の各SSE URLには、パスワードのように扱うべき固有の署名が含まれています。SSE URLを環境変数として読み込み、アプリケーションコードの外部で管理することをお勧めします。 ```bash filename=".env" copy MCP_RUN_SSE_URL=https://www.mcp.run/api/mcp/sse?nonce=... ``` ### Composio.dev レジストリ [Composio.dev](https://composio.dev)は、Mastraと簡単に統合できる[SSEベースのMCPサーバー](https://mcp.composio.dev)のレジストリを提供しています。Cursor用に生成されるSSE URLはMastraと互換性があり、設定で直接使用できます: ```typescript const mcp = new MCPConfiguration({ servers: { googleSheets: { url: new URL("https://mcp.composio.dev/googlesheets/[private-url-path]"), }, gmail: { url: new URL("https://mcp.composio.dev/gmail/[private-url-path]"), }, }, }); ``` Composio提供のツールを使用する場合、エージェントとの会話を通じて直接サービス(Google SheetsやGmailなど)で認証できます。ツールには認証機能が含まれており、チャット中にプロセスをガイドします。 注意:Composio.dev統合は、SSE URLがあなたのアカウントに紐づけられており、複数のユーザーには使用できないため、個人的な自動化などの単一ユーザーシナリオに最適です。各URLは単一アカウントの認証コンテキストを表します。 ### Smithery.ai レジストリ [Smithery.ai](https://smithery.ai)はMastraで簡単に使用できるMCPサーバーのレジストリを提供しています: ```typescript // Unix/Mac const mcp = new MCPConfiguration({ servers: { sequentialThinking: { command: "npx", args: [ "-y", "@smithery/cli@latest", "run", "@smithery-ai/server-sequential-thinking", "--config", "{}", ], }, }, }); // Windows const mcp = new MCPConfiguration({ servers: { sequentialThinking: { command: "cmd", args: [ "/c", "npx", "-y", "@smithery/cli@latest", "run", "@smithery-ai/server-sequential-thinking", "--config", "{}", ], }, }, }); ``` この例は、Smitheryドキュメントのクロード統合例から適応されています。 ## Mastraドキュメンテーションサーバーの使用 IDEでMastraのMCPドキュメンテーションサーバーを使用したいですか?[MCPドキュメンテーションサーバーガイド](/docs/getting-started/mcp-docs-server)をチェックして始めましょう。 ## 次のステップ - [MCPConfiguration](/reference/tools/mcp-configuration)についてもっと学ぶ - MCPを使用した[サンプルプロジェクト](/examples)をチェックする --- title: "エージェントの作成と呼び出し | エージェントドキュメンテーション | Mastra" description: Mastraにおけるエージェントの概要、その機能とツール、ワークフロー、外部システムとの連携方法の詳細。 --- # エージェントの作成と呼び出し [JA] Source: https://mastra.ai/ja/docs/agents/overview Mastraのエージェントは、言語モデルがタスクを実行するために一連のアクションを自律的に決定できるシステムです。エージェントはツール、ワークフロー、同期されたデータにアクセスでき、複雑なタスクを実行し、外部システムと対話することができます。エージェントはカスタム関数を呼び出したり、インテグレーションを通じてサードパーティAPIを利用したり、構築した知識ベースにアクセスしたりすることができます。 エージェントは、進行中のプロジェクトに使用できる従業員のようなものです。彼らには名前、永続的なメモリ、一貫したモデル構成、呼び出し間での一貫した指示、そして有効化されたツールのセットがあります。 ## 1. エージェントの作成 Mastraでエージェントを作成するには、`Agent`クラスを使用してそのプロパティを定義します: ```ts showLineNumbers filename="src/mastra/agents/index.ts" copy import { Agent } from "@mastra/core/agent"; import { openai } from "@ai-sdk/openai"; export const myAgent = new Agent({ name: "My Agent", instructions: "You are a helpful assistant.", model: openai("gpt-4o-mini"), }); ``` **注意:** OpenAI APIキーなどの必要な環境変数を`.env`ファイルに設定していることを確認してください: ```.env filename=".env" copy OPENAI_API_KEY=your_openai_api_key ``` また、`@mastra/core`パッケージがインストールされていることを確認してください: ```bash npm2yarn copy npm install @mastra/core ``` ### エージェントの登録 エージェントをMastraに登録して、ロギングや設定されたツールと統合へのアクセスを有効にします: ```ts showLineNumbers filename="src/mastra/index.ts" copy import { Mastra } from "@mastra/core"; import { myAgent } from "./agents"; export const mastra = new Mastra({ agents: { myAgent }, }); ``` ## 2. テキストの生成とストリーミング ### テキストの生成 エージェントにテキスト応答を生成させるには、`.generate()`メソッドを使用します: ```ts showLineNumbers filename="src/mastra/index.ts" copy const response = await myAgent.generate([ { role: "user", content: "Hello, how can you assist me today?" 
}, ]); console.log("Agent:", response.text); ``` generateメソッドとそのオプションの詳細については、[generate リファレンスドキュメント](/reference/agents/generate)を参照してください。 ### レスポンスのストリーミング よりリアルタイムなレスポンスを得るには、エージェントのレスポンスをストリーミングできます: ```ts showLineNumbers filename="src/mastra/index.ts" copy const stream = await myAgent.stream([ { role: "user", content: "Tell me a story." }, ]); console.log("Agent:"); for await (const chunk of stream.textStream) { process.stdout.write(chunk); } ``` ストリーミングレスポンスの詳細については、[stream リファレンスドキュメント](/reference/agents/stream)を参照してください。 ## 3. 構造化された出力 エージェントはJSONスキーマを提供するか、Zodスキーマを使用して構造化されたデータを返すことができます。 ### JSONスキーマの使用 ```typescript const schema = { type: "object", properties: { summary: { type: "string" }, keywords: { type: "array", items: { type: "string" } }, }, additionalProperties: false, required: ["summary", "keywords"], }; const response = await myAgent.generate( [ { role: "user", content: "Please provide a summary and keywords for the following text: ...", }, ], { output: schema, }, ); console.log("Structured Output:", response.object); ``` ### Zodの使用 型安全な構造化出力のためにZodスキーマを使用することもできます。 まず、Zodをインストールします: ```bash npm2yarn copy npm install zod ``` 次に、Zodスキーマを定義してエージェントで使用します: ```ts showLineNumbers filename="src/mastra/index.ts" copy import { z } from "zod"; // Zodスキーマを定義する const schema = z.object({ summary: z.string(), keywords: z.array(z.string()), }); // エージェントでスキーマを使用する const response = await myAgent.generate( [ { role: "user", content: "Please provide a summary and keywords for the following text: ...", }, ], { output: schema, }, ); console.log("Structured Output:", response.object); ``` ### ツールの使用 ツール呼び出しと一緒に構造化された出力を生成する必要がある場合は、`output`の代わりに`experimental_output`プロパティを使用する必要があります。方法は次のとおりです: ```typescript const schema = z.object({ summary: z.string(), keywords: z.array(z.string()), }); const response = await myAgent.generate( [ { role: "user", content: "Please analyze this repository and provide a summary and keywords...", }, ], { // 構造化された出力とツール呼び出しの両方を有効にするためにexperimental_outputを使用 experimental_output: schema, }, ); console.log("Structured Output:", response.object); ```
This gives you strong typing and validation for the structured data returned by the agent.

## 4. Multi-Step Tool Use Agents

Agents can be enhanced with tools, functions that extend their capabilities beyond text generation. Tools let agents perform calculations, access external systems, and process data. For details on creating and configuring tools, see the [adding tools documentation](/docs/agents/adding-tools).

### Using maxSteps

The `maxSteps` parameter controls the maximum number of sequential LLM calls an agent can make, which matters especially when tool calls are involved. By default it is set to 1 to prevent infinite loops caused by misconfigured tools. You can increase this limit depending on your use case:

```ts showLineNumbers filename="src/mastra/agents/index.ts" copy
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";
import * as mathjs from "mathjs";
import { z } from "zod";

export const myAgent = new Agent({
  name: "My Agent",
  instructions: "You are a helpful assistant that can solve math problems.",
  model: openai("gpt-4o-mini"),
  tools: {
    calculate: {
      description: "Calculator for mathematical expressions",
      schema: z.object({ expression: z.string() }),
      execute: async ({ expression }) => mathjs.evaluate(expression),
    },
  },
});

const response = await myAgent.generate(
  [
    {
      role: "user",
      content:
        "If a taxi driver earns $9461 per hour and works 12 hours a day, how much do they earn in one day?",
    },
  ],
  {
    maxSteps: 5, // allow up to 5 tool-use steps
  },
);
```

### Using onStepFinish

You can monitor the progress of multi-step operations with the `onStepFinish` callback. This is useful for debugging and for giving users progress updates.

`onStepFinish` is only available when streaming or when generating text without structured output.

```ts showLineNumbers filename="src/mastra/agents/index.ts" copy
const response = await myAgent.generate(
  [{ role: "user", content: "Calculate the taxi driver's daily earnings." }],
  {
    maxSteps: 5,
    onStepFinish: ({ text, toolCalls, toolResults }) => {
      console.log("Step completed:", { text, toolCalls, toolResults });
    },
  },
);
```

### Using onFinish

The `onFinish` callback is available when streaming responses and provides detailed information about the completed interaction. It is called after the LLM has finished generating its response and all tool executions are complete.

This callback receives the final response text, execution steps, token usage statistics, and other metadata useful for monitoring and logging:

```ts showLineNumbers filename="src/mastra/agents/index.ts" copy
const stream = await myAgent.stream(
  [{ role: "user", content: "Calculate the taxi driver's daily earnings." }],
  {
    maxSteps: 5,
    onFinish: ({
      steps,
      text,
      finishReason, // 'complete', 'length', 'tool', etc.
      usage, // token usage statistics
      reasoningDetails, // additional context about the agent's decisions
    }) => {
      console.log("Stream complete:", {
        totalSteps: steps.length,
        finishReason,
        usage,
      });
    },
  },
);
```

## 5. Running Agents

Mastra provides a CLI command, `mastra dev`, to serve your agents behind an API. By default it looks for agents exported from files in the `src/mastra/agents` directory.

### Starting the Server

```bash
mastra dev
```

This starts the server and makes your agent available at `http://localhost:4111/api/agents/myAgent/generate`.

### Interacting with the Agent

You can interact with the agent from the command line using `curl`:

```bash
curl -X POST http://localhost:4111/api/agents/myAgent/generate \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      { "role": "user", "content": "Hello, how can you assist me today?" }
    ]
  }'
```

## Next Steps

- Learn about agent memory in the [Agent Memory](./agent-memory.mdx) guide.
- Learn about agent tools in the [Agent Tools](./adding-tools.mdx) guide.
- See an example agent in the [Chef Michel](../guides/chef-michel.mdx) example.

---
title: "Discord Community and Bot | Documentation | Mastra"
description: Information about the Mastra Discord community and the Model Context Protocol (MCP) bot.
---

# Discord Community

[JA] Source: https://mastra.ai/ja/docs/community/discord

The Discord server has over 1,000 members and serves as the main discussion forum for Mastra. The Mastra team monitors Discord during North American and European business hours, and community members are active in other time zones. [Join the Discord server](https://discord.gg/BTYqqHKUrf).

## Discord MCP Bot

In addition to community members, there is also an (experimental!) Discord bot that helps answer questions, powered by the [Model Context Protocol (MCP)](/docs/agents/mcp-guide). You can ask it questions with `/ask` (in public channels or DMs) and clear your history with `/cleardm` (DMs only).

---
title: "Licensing"
description: "Mastra licensing"
---

# License

[JA] Source: https://mastra.ai/ja/docs/community/licensing

## Elastic License 2.0 (ELv2)

Mastra is licensed under the Elastic License 2.0 (ELv2), a modern license designed to balance open-source principles with sustainable business practices.

### What is the Elastic License 2.0?

The Elastic License 2.0 is a source-available license that grants users broad rights to use, modify, and distribute the software, while including specific restrictions to protect the project's sustainability. It allows you to:

- Use the software freely for most purposes
- View, modify, and redistribute the source code
- Create and distribute derivative works
- Use the software commercially within your organization

The main restriction is that you cannot offer Mastra as a hosted or managed service that provides users with access to the substantial functionality of the software.

### Why We Chose Elastic License 2.0

We chose the Elastic License 2.0 for several important reasons:

1. **Sustainability**: it lets us maintain a healthy balance between openness and the ability to sustain long-term development.
2. **Innovation protection**: it ensures we can keep investing in innovation without worrying about our work being repackaged as a competing service.
3. **Community focus**: it preserves the spirit of open source by letting users view, modify, and learn from our code, while protecting our ability to support the community.
4. **Business clarity**: it provides clear guidelines for how Mastra can be used in commercial contexts.

### Building a Business with Mastra

Despite the license restrictions, there are many ways to build a successful business with Mastra:

#### Allowed Business Models

- **Building applications**: create and sell applications built with Mastra
- **Offering consulting services**: provide expertise, implementation, and customization services
- **Developing custom solutions**: build bespoke AI solutions for clients using Mastra
- **Creating add-ons and extensions**: develop and sell complementary tools that extend Mastra's functionality
- **Training and education**: offer courses and educational materials on using Mastra effectively

#### Compliant Use Examples

- A company builds an AI-powered customer service application with Mastra and sells it to clients
- A consulting firm offers Mastra implementation and customization services
- A developer creates specialized agents and tools with Mastra and licenses them to other businesses
- A startup builds an industry-specific solution powered by Mastra (e.g., a healthcare AI assistant)

#### What to Avoid

The main restriction is that you cannot offer Mastra itself as a hosted service where users access its core functionality. This means:

- Don't create a SaaS platform that is essentially Mastra with minimal modifications
- Don't offer a managed Mastra service where customers pay primarily to use Mastra's functionality

### Questions About Licensing?
If you have specific questions about how the Elastic License 2.0 applies to your use case, [reach out to us on Discord](https://discord.gg/BTYqqHKUrf) for clarification. We are committed to supporting legitimate business use cases while protecting the sustainability of the project.

---
title: "MastraClient"
description: "Learn how to configure and use the Mastra Client SDK"
---

# Mastra Client SDK

[JA] Source: https://mastra.ai/ja/docs/deployment/client

The Mastra Client SDK provides a simple, type-safe interface for interacting with a [Mastra server](/docs/deployment/server) from client environments.

## Development Requirements

For smooth local development, make sure you have:

- Node.js 18.x or later installed
- TypeScript 4.7+ (if using TypeScript)
- A modern browser environment with Fetch API support
- A local Mastra server running (typically on port 4111)

## Installation

import { Tabs } from "nextra/components";

```bash copy
npm install @mastra/client-js
```

```bash copy
yarn add @mastra/client-js
```

```bash copy
pnpm add @mastra/client-js
```

## Initializing the Mastra Client

To get started, initialize the MastraClient with the required parameters:

```typescript
import { MastraClient } from "@mastra/client-js";

const client = new MastraClient({
  baseUrl: "http://localhost:4111", // default port for the Mastra dev server
});
```

### Configuration Options

You can customize the client with various options:

```typescript
const client = new MastraClient({
  // Required
  baseUrl: "http://localhost:4111",

  // Optional settings for development
  retries: 3, // number of retry attempts
  backoffMs: 300, // initial retry backoff time
  maxBackoffMs: 5000, // maximum retry backoff time
  headers: {
    // custom headers for development
    "X-Development": "true"
  }
});
```

## Example

Once your MastraClient is initialized, you can start making client calls through the type-safe interface:

```typescript
// Get a reference to your local agent
const agent = client.getAgent("dev-agent-id");

// Generate responses
const response = await agent.generate({
  messages: [
    {
      role: "user",
      content: "Hello, I'm testing the local development setup!"
    }
  ]
});
```

## Available Features

The Mastra client exposes all the resources provided by the Mastra server:

- [**Agents**](/reference/client-js/agents): create and manage AI agents, generate responses, and handle streaming interactions
- [**Memory**](/reference/client-js/memory): manage conversation threads and message history
- [**Tools**](/reference/client-js/tools): access and execute the tools available to agents
- [**Workflows**](/reference/client-js/workflows): create and manage automated workflows
- [**Vectors**](/reference/client-js/vectors): handle vector operations for semantic search and similarity matching

## Best Practices

1. **Error handling**: implement error handling appropriate for your development scenarios
2. **Environment variables**: use environment variables for configuration
3. **Debugging**: enable detailed logging as needed

```typescript
// Example of error handling and logging
try {
  const agent = client.getAgent("dev-agent-id");
  const response = await agent.generate({
    messages: [{ role: "user", content: "Test message" }]
  });
  console.log("Response:", response);
} catch (error) {
  console.error("Development error:", error);
}
```

---
title: "Serverless Deployment"
description: "Build and deploy Mastra applications using platform-specific deployers or standard HTTP servers"
---

# Serverless Deployment

[JA] Source: https://mastra.ai/ja/docs/deployment/deployment

This guide explains how to deploy Mastra to Cloudflare Workers, Vercel, and Netlify using platform-specific deployers.

For self-hosted Node.js server deployment, see the [Creating a Mastra Server](/docs/deployment/server) guide.

## Prerequisites

Before you begin, make sure you have:

- **Node.js** installed (version 18 or higher recommended)
- If using a platform-specific deployer:
  - An account with your chosen platform
  - The required API keys or credentials

## Serverless Platform Deployers

Platform-specific deployers handle configuration and deployment for:

- **[Cloudflare Workers](/reference/deployer/cloudflare)**
- **[Vercel](/reference/deployer/vercel)**
- **[Netlify](/reference/deployer/netlify)**

As of April 2025, Mastra also offers [Mastra Cloud](https://mastra.ai/cloud-beta), a serverless agent environment with atomic deployments. You can sign up for the waitlist [here](https://mastra.ai/cloud-beta).

### Installing Deployers

```bash copy
# For Cloudflare
npm install @mastra/deployer-cloudflare

# For Vercel
npm install @mastra/deployer-vercel

# For Netlify
npm install @mastra/deployer-netlify
```

### Configuring Deployers

Configure the deployer in your entry file:

```typescript copy showLineNumbers
import { Mastra, createLogger } from '@mastra/core';
import { CloudflareDeployer } from '@mastra/deployer-cloudflare';

export const mastra = new Mastra({
  agents: { /* your agents here */ },
  logger: createLogger({ name: 'MyApp', level: 'debug' }),
  deployer: new CloudflareDeployer({
    scope: 'your-cloudflare-scope',
    projectName: 'your-project-name',
    // See complete configuration options in the reference docs
  }),
});
```

### Deployer Configuration

Each deployer has specific configuration options. Below are basic examples; see the reference documentation for complete details.

#### Cloudflare Deployer

```typescript copy showLineNumbers
new CloudflareDeployer({
  scope: 'your-cloudflare-account-id',
  projectName: 'your-project-name',
  // For complete configuration options, see the reference documentation
})
```

[See the Cloudflare deployer reference →](/reference/deployer/cloudflare)

#### Vercel Deployer

```typescript copy showLineNumbers
new VercelDeployer({
  teamSlug: 'your-vercel-team-slug',
  projectName: 'your-project-name',
  token: 'your-vercel-token'
  // For complete configuration options, see the reference documentation
})
```

[See the Vercel deployer reference →](/reference/deployer/vercel)

#### Netlify Deployer

```typescript copy showLineNumbers
new NetlifyDeployer({
  scope: 'your-netlify-team-slug',
  projectName: 'your-project-name',
  token: 'your-netlify-token'
})
```

[See the Netlify deployer reference →](/reference/deployer/netlify)

## Environment Variables

Required variables (an example `.env` file follows this list):

1. Platform deployer variables (when using a platform deployer):
   - Your platform credentials
2. Agent API keys:
   - `OPENAI_API_KEY`
   - `ANTHROPIC_API_KEY`
3. Server configuration (for universal deployment):
   - `PORT`: HTTP server port (default: 3000)
   - `HOST`: server host (default: 0.0.0.0)
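As a concrete reference, a deployment environment file might look like the sketch below. The values are placeholders, and which variables you actually need depends on the deployer and model providers you use:

```bash filename=".env" copy
# Model provider keys (use whichever providers your agents rely on)
OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key

# Server configuration (universal deployment only)
PORT=3000
HOST=0.0.0.0
```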
}); }, }), ], }, // Other configuration options }); ``` ## カスタムCORS設定 Mastraを使用すると、サーバーのCORS(クロスオリジンリソースシェアリング)設定を構成できます。 ```typescript copy showLineNumbers import { Mastra } from '@mastra/core'; export const mastra = new Mastra({ server: { cors: { origin: ['https://example.com'], // 特定のオリジンを許可するか、すべての場合は '*' allowMethods: ['GET', 'POST', 'PUT', 'DELETE', 'OPTIONS'], allowHeaders: ['Content-Type', 'Authorization'], credentials: false, } } }); ``` ## ミドルウェア Mastraでは、APIルートに適用されるカスタムミドルウェア関数を設定することができます。これは、認証、ログ記録、CORS、または他のHTTPレベルの機能をAPIエンドポイントに追加するのに役立ちます。 ```typescript copy showLineNumbers import { Mastra } from '@mastra/core'; export const mastra = new Mastra({ // 他の設定オプション server: { middleware: [ { handler: async (c, next) => { // 例: 認証チェックを追加 const authHeader = c.req.header('Authorization'); if (!authHeader) { return new Response('Unauthorized', { status: 401 }); } // 次のミドルウェアまたはルートハンドラに進む await next(); }, path: '/api/*' }, // すべてのルートにミドルウェアを追加 async (c, next) => { // 例: リクエストログを追加 console.log(`${c.req.method} ${c.req.url}`); await next(); }, ] }); ``` 特定のルートにミドルウェアを追加したい場合は、`registerApiRoute`を使用してそれを指定することもできます。 ```typescript copy showLineNumbers registerApiRoute("/my-custom-route", { method: "GET", middleware: [ async (c, next) => { // 例: リクエストログを追加 console.log(`${c.req.method} ${c.req.url}`); await next(); }, ], handler: async (c) => { // ここでmastraインスタンスにアクセスできます const mastra = c.get("mastra"); // mastraインスタンスを使用してエージェント、ワークフローなどを取得できます const agents = await mastra.getAgent("my-agent"); return c.json({ message: "Hello, world!" }); }, }); ``` ### ミドルウェアの動作 各ミドルウェア関数は次のことができます: - Honoコンテキストオブジェクト(`c`)と`next`関数を受け取る - リクエスト処理をショートサーキットするために`Response`を返すことができる - 次のミドルウェアまたはルートハンドラに進むために`next()`を呼び出すことができる - パスパターンをオプションで指定できる(デフォルトは'/api/\*') - エージェントツールの呼び出しやワークフローのためにリクエスト固有のデータを注入する ### 一般的なミドルウェアの使用例 #### 認証 ```typescript copy { handler: async (c, next) => { const authHeader = c.req.header('Authorization'); if (!authHeader || !authHeader.startsWith('Bearer ')) { return new Response('Unauthorized', { status: 401 }); } const token = authHeader.split(' ')[1]; // ここでトークンを検証 await next(); }, path: '/api/*', } ``` #### CORSサポート ```typescript copy { handler: async (c, next) => { // CORSヘッダーを追加 c.header("Access-Control-Allow-Origin", "*"); c.header("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE, OPTIONS"); c.header("Access-Control-Allow-Headers", "Content-Type, Authorization"); // プリフライトリクエストを処理 if (c.req.method === "OPTIONS") { return new Response(null, { status: 204 }); } await next(); }; } ``` #### リクエストログ ```typescript copy { handler: async (c, next) => { const start = Date.now(); await next(); const duration = Date.now() - start; console.log(`${c.req.method} ${c.req.url} - ${duration}ms`); }; } ``` ### 特別なMastraヘッダー Mastra Cloudと統合する場合やカスタムクライアントを構築する場合、クライアントが自分自身を識別し特定の機能を有効にするために送信する特別なヘッダーがあります。サーバーミドルウェアはこれらのヘッダーをチェックして動作をカスタマイズできます: ```typescript copy { handler: async (c, next) => { // 受信リクエストでMastra固有のヘッダーをチェック const isFromMastraCloud = c.req.header("x-mastra-cloud") === "true"; const clientType = c.req.header("x-mastra-client-type"); // 例: 'js', 'python' const isDevPlayground = c.req.header("x-mastra-dev-playground") === "true"; // クライアント情報に基づいて動作をカスタマイズ if (isFromMastraCloud) { // Mastra Cloudリクエストの特別な処理 } await next(); }; } ``` これらのヘッダーの目的は次の通りです: - `x-mastra-cloud`: リクエストがMastra Cloudから来ていることを示す - `x-mastra-client-type`: クライアントSDKのタイプを指定(例: 'js', 'python') - `x-mastra-dev-playground`: リクエストが開発プレイグラウンドからのものであることを示す 
You can use these headers in your middleware to implement client-specific logic or enable features only in certain environments.

## Deployment

Because Mastra builds to a standard Node.js server, you can deploy to any platform that runs Node.js applications:

- Cloud VMs (AWS EC2, DigitalOcean Droplets, GCP Compute Engine)
- Container platforms (Docker, Kubernetes)
- Platform as a Service (Heroku, Railway)
- Self-hosted servers

### Building

Build your application:

```bash copy
# Build from the current directory
mastra build

# Or specify a directory
mastra build --dir ./my-project
```

The build process:

1. Locates the entry file (`src/mastra/index.ts` or `src/mastra/index.js`)
2. Creates the `.mastra` output directory
3. Bundles your code with Rollup, using tree shaking and source maps
4. Generates a [Hono](https://hono.dev) HTTP server

See [`mastra build`](/reference/cli/build) for all options.

### Running the Server

Start the HTTP server:

```bash copy
node .mastra/output/index.mjs
```

## Serverless Deployment

Mastra also supports serverless deployment on Cloudflare Workers, Vercel, and Netlify.

See the [Serverless Deployment](/docs/deployment/deployment) guide for setup instructions.

---
title: "Create Your Own Eval"
description: "Mastra lets you create your own evals. Here's how."
---

# Create Your Own Eval

[JA] Source: https://mastra.ai/ja/docs/evals/custom-eval

Creating your own eval is as simple as writing a new function: create a class that extends the `Metric` class and implement the `measure` method.

## Basic Example

For a simple example of a custom metric that checks whether the output contains certain words, see the [Word Inclusion example](/examples/evals/word-inclusion).
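As a rough illustration of the shape such a metric takes, here is a minimal sketch. It assumes the `Metric` base class and `MetricResult` type are exported from `@mastra/core/eval`; the `WordInclusionMetric` name and scoring logic are illustrative, not the exact code from the linked example:

```typescript
import { Metric, type MetricResult } from "@mastra/core/eval";

// Hypothetical custom metric: scores what fraction of the expected
// words appear in the agent's output.
export class WordInclusionMetric extends Metric {
  constructor(private words: string[]) {
    super();
  }

  async measure(input: string, output: string): Promise<MetricResult> {
    const matched = this.words.filter((word) => output.includes(word));
    const score = this.words.length
      ? matched.length / this.words.length
      : 0;
    // Return a normalized 0-1 score, with extra detail in `info`
    return { score, info: { matched } };
  }
}
```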
## Creating a Custom LLM-Judge

A custom LLM judge helps evaluate specific aspects of your AI's responses. It's like having an expert reviewer for your particular use case:

- Medical Q&A → the judge checks medical accuracy and safety
- Customer service → the judge evaluates tone and helpfulness
- Code generation → the judge verifies code correctness and style

For a practical example, see how [Chef Michel's](/docs/guides/chef-michel) recipes are evaluated for gluten content in the [Gluten Checker example](/examples/evals/custom-eval).

---
title: "Overview"
description: "Understand how to evaluate and measure AI agent quality using Mastra evals."
---

# Testing Agents with Evals

[JA] Source: https://mastra.ai/ja/docs/evals/overview

Traditional software tests have clear pass/fail conditions, but AI output is non-deterministic: the same input can produce different results. Evals help bridge this gap by providing quantitative metrics for measuring agent quality.

Evals are automated tests that assess agent output using model-graded, rule-based, and statistical methods. Each eval returns a normalized score between 0 and 1 that can be logged and compared. Evals can be customized with your own prompts and scoring functions.

Evals can run in the cloud, giving you real-time results. You can also run evals as part of your CI/CD pipeline to test and monitor your agents over time.

## Types of Evals

There are different kinds of evals, each serving a specific purpose. Here are some common types:

1. **Textual evals**: evaluate the accuracy, reliability, and contextual understanding of agent responses
2. **Classification evals**: measure accuracy when categorizing data into predefined categories
3. **Tool-usage evals**: assess how effectively an agent uses external tools and APIs
4. **Prompt-engineering evals**: explore the impact of different instructions and input formats

## Getting Started

Evals need to be added to an agent. Here's an example using summarization, content similarity, and tone consistency metrics:

```typescript copy showLineNumbers filename="src/mastra/agents/index.ts"
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";
import { SummarizationMetric } from "@mastra/evals/llm";
import {
  ContentSimilarityMetric,
  ToneConsistencyMetric,
} from "@mastra/evals/nlp";

const model = openai("gpt-4o");

export const myAgent = new Agent({
  name: "ContentWriter",
  instructions: "You are a content writer that creates accurate summaries",
  model,
  evals: {
    summarization: new SummarizationMetric(model),
    contentSimilarity: new ContentSimilarityMetric(),
    tone: new ToneConsistencyMetric(),
  },
});
```

You can view eval results in the Mastra dashboard when using `mastra dev`.

## Beyond Automated Testing

While automated evals are valuable, high-performing AI teams often combine them with:

1. **A/B testing**: compare different versions with real users
2. **Human review**: regular review of production data and traces
3. **Continuous monitoring**: track eval metrics over time to detect regressions

## Understanding Eval Results

Each eval metric measures a specific aspect of your agent's output. Here's how to interpret and improve results:

### Understanding Scores

For any metric:

1. Check the metric documentation to understand the scoring process
2. Look for patterns in how scores change
3. Compare scores across different inputs and contexts
4. Track changes over time to spot trends

### Improving Results

If scores aren't meeting your targets:

1. Check your instructions — are they clear? Try making them more specific
2. Check your context — is the agent being given the information it needs?
3. Simplify your prompts — break complex tasks into smaller steps
4. Add guardrails — include specific rules for difficult cases

### Maintaining Quality

Once you hit your targets:

1. Monitor stability — are scores consistent?
2. Document what works — keep notes on successful approaches
3. Test edge cases — add examples that cover unusual scenarios
4. Fine-tune — look for ways to improve efficiency

See [Textual Evals](/docs/evals/textual-evals) for more details on what evals can do.
See the [Custom Eval](/docs/evals/custom-eval) guide for more on creating your own evals.
See the [Running in CI](/docs/evals/running-in-ci) guide for running evals in your CI pipeline.

---
title: "Running in CI"
description: "Learn how to run Mastra evals in your CI/CD pipeline to monitor agent quality over time."
---

# Running Evals in CI

[JA] Source: https://mastra.ai/ja/docs/evals/running-in-ci

Running evals in your CI pipeline provides quantitative metrics for measuring agent quality over time.

## Setting Up CI Integration

We support any testing framework that supports ESM modules. For example, you can use [Vitest](https://vitest.dev/), [Jest](https://jestjs.io/), or [Mocha](https://mochajs.org/) to run evals in your CI/CD pipeline.

```typescript copy showLineNumbers filename="src/mastra/agents/index.test.ts"
import { describe, it, expect } from 'vitest';
import { evaluate } from "@mastra/evals";
import { ToneConsistencyMetric } from "@mastra/evals/nlp";
import { myAgent } from './index';

describe('My Agent', () => {
  it('should validate tone consistency', async () => {
    const metric = new ToneConsistencyMetric();
    const result = await evaluate(myAgent, 'Hello, world!', metric);

    expect(result.score).toBe(1);
  });
});
```

You will need to configure testSetup and globalSetup scripts for your testing framework to capture the eval results. This also allows you to view the results in the Mastra dashboard.

## Framework Configuration

### Vitest Setup

Add these files to your project to run evals in your CI/CD pipeline:

```typescript copy showLineNumbers filename="globalSetup.ts"
import { globalSetup } from '@mastra/evals';

export default function setup() {
  globalSetup();
}
```

```typescript copy showLineNumbers filename="testSetup.ts"
import { beforeAll } from 'vitest';
import { attachListeners } from '@mastra/evals';

beforeAll(async () => {
  await attachListeners();
});
```

```typescript copy showLineNumbers filename="vitest.config.ts"
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    globalSetup: './globalSetup.ts',
    setupFiles: ['./testSetup.ts'],
  },
});
```

## Storage Configuration

To store eval results in Mastra Storage and retrieve them in the Mastra dashboard:

```typescript copy showLineNumbers filename="testSetup.ts"
import { beforeAll } from 'vitest';
import { attachListeners } from '@mastra/evals';
import { mastra } from './your-mastra-setup';

beforeAll(async () => {
  // Store evals in Mastra Storage (requires storage to be enabled)
  await attachListeners(mastra);
});
```

With file storage, evals persist and can be queried later. With memory storage, evals are limited to the test process.
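With the setup files in place, the evals run like any ordinary test suite; a CI step only needs to invoke the test runner. For example, assuming Vitest:

```bash
npx vitest run
```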
---
title: "Textual Evals"
description: "Understand how Mastra uses the LLM-as-judge methodology to evaluate text quality."
---

# Textual Evals

[JA] Source: https://mastra.ai/ja/docs/evals/textual-evals

Textual evals use an LLM-as-judge methodology to evaluate agent outputs. This approach leverages language models to assess various aspects of text quality, similar to how a teaching assistant grades assignments using a rubric.

Each eval focuses on a specific quality aspect and returns a score between 0 and 1, providing a quantifiable metric for non-deterministic AI outputs.

Mastra provides several eval metrics for assessing agent outputs. Mastra is not limited to these metrics — you can also [define your own evals](/docs/evals/custom-eval). A short example of invoking a metric directly follows the metric lists below.

## Why Use Textual Evals?

Textual evals help ensure that your agent:

- Produces accurate and reliable answers
- Uses context effectively
- Follows output requirements
- Maintains consistent quality over time

## Available Metrics

### Accuracy and Reliability

These metrics evaluate how correct, truthful, and complete your agent's answers are:

- [`hallucination`](/reference/evals/hallucination): detects facts or claims not present in the provided context
- [`faithfulness`](/reference/evals/faithfulness): measures how accurately responses represent the provided context
- [`content-similarity`](/reference/evals/content-similarity): evaluates the consistency of information across different phrasings
- [`completeness`](/reference/evals/completeness): checks whether all necessary information is included
- [`answer-relevancy`](/reference/evals/answer-relevancy): assesses how well the answer addresses the original question
- [`textual-difference`](/reference/evals/textual-difference): measures textual differences between strings

### Context Understanding

These metrics evaluate how well your agent uses the provided context:

- [`context-position`](/reference/evals/context-position): analyzes where context appears in responses
- [`context-precision`](/reference/evals/context-precision): evaluates whether context chunks are grouped logically
- [`context-relevancy`](/reference/evals/context-relevancy): measures the use of appropriate pieces of context
- [`contextual-recall`](/reference/evals/contextual-recall): evaluates the completeness of context usage

### Output Quality

These metrics evaluate adherence to formatting and style requirements:

- [`tone`](/reference/evals/tone-consistency): measures consistency of formality, complexity, and style
- [`toxicity`](/reference/evals/toxicity): detects harmful or inappropriate content
- [`bias`](/reference/evals/bias): detects potential bias in the output
- [`prompt-alignment`](/reference/evals/prompt-alignment): checks adherence to explicit instructions such as length restrictions, formatting requirements, and other constraints
- [`summarization`](/reference/evals/summarization): evaluates information retention and conciseness
- [`keyword-coverage`](/reference/evals/keyword-coverage): assesses the use of technical terminology
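Metrics can also be run directly, outside of an agent's `evals` configuration, which is useful for quick experiments. A minimal sketch, assuming the `measure(input, output)` signature shown in the metric reference docs; the sample strings are placeholders:

```typescript
import { ToneConsistencyMetric } from "@mastra/evals/nlp";

const metric = new ToneConsistencyMetric();

// Compare the tone of an input/output pair; scores are normalized to 0-1
const result = await metric.measure(
  "This is a friendly, upbeat request!",
  "Sure thing! Happy to help you with that right away.",
);

console.log(result.score);
```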
---
title: "License"
description: "Mastra License"
---

# License

[JA] Source: https://mastra.ai/ja/docs/faq

## Elastic License 2.0 (ELv2)

Mastra is licensed under the Elastic License 2.0 (ELv2), a modern license designed to balance open-source principles with sustainable business practices.

### What is the Elastic License 2.0?

The Elastic License 2.0 is a source-available license that grants users broad rights to use, modify, and distribute the software, while including specific restrictions to protect the project's sustainability. It allows:

- Free use for most purposes
- Viewing, modifying, and redistributing the source code
- Creating and distributing derivative works
- Commercial use within your organization

The main restriction is that you cannot offer Mastra as a hosted or managed service that provides users with access to the substantial functionality of the software.

### Why We Chose Elastic License 2.0

We chose the Elastic License 2.0 for several important reasons:

1. **Sustainability**: it lets us maintain a healthy balance between openness and the ability to sustain long-term development.
2. **Innovation protection**: it ensures we can keep investing in innovation without worrying about our work being repackaged as a competing service.
3. **Community focus**: it preserves the spirit of open source by letting users view, modify, and learn from our code, while protecting our ability to support the community.
4. **Business clarity**: it provides clear guidelines for how Mastra can be used in commercial contexts.

### Building a Business with Mastra

Despite the license restrictions, there are many ways to build a successful business with Mastra:

#### Allowed Business Models

- **Building applications**: create and sell applications built with Mastra
- **Offering consulting services**: provide expertise, implementation, and customization services
- **Developing custom solutions**: build bespoke AI solutions for clients using Mastra
- **Creating add-ons and extensions**: develop and sell complementary tools that extend Mastra's functionality
- **Training and education**: offer courses and educational materials on using Mastra effectively

#### Compliant Use Examples

- A company builds an AI-powered customer service application with Mastra and sells it to clients
- A consulting firm offers Mastra implementation and customization services
- A developer creates specialized agents and tools with Mastra and licenses them to other businesses
- A startup builds an industry-specific solution powered by Mastra (e.g., a healthcare AI assistant)

#### What to Avoid

The main restriction is that you cannot offer Mastra itself as a hosted service where users access its core functionality. This means:

- Don't create a SaaS platform that is Mastra with barely any modifications
- Don't offer a managed Mastra service where customers pay primarily to use Mastra's functionality

### Questions About Licensing?

If you have specific questions about how the Elastic License 2.0 applies to your use case, [reach out to us on Discord](https://discord.gg/BTYqqHKUrf). We are committed to supporting legitimate business use cases while protecting the sustainability of the project.

---
title: "Using with Vercel AI SDK"
description: "Learn how Mastra leverages the Vercel AI SDK library and how you can use it further with Mastra"
---

# Using with the Vercel AI SDK

[JA] Source: https://mastra.ai/ja/docs/frameworks/ai-sdk

Mastra leverages the AI SDK's model routing (a unified interface on top of OpenAI, Anthropic, and others), structured output, and tool calling.

We explain this in more detail in [this blog post](https://mastra.ai/blog/using-ai-sdk-with-mastra).

## Mastra + AI SDK

Mastra acts as a layer on top of the AI SDK to help teams quickly and easily take proofs of concept to production.

*Agent interaction trace showing spans, LLM calls, and tool executions.*

## Model Routing

When creating agents in Mastra, you can specify any model supported by the AI SDK:

```typescript
import { openai } from "@ai-sdk/openai";
import { Agent } from "@mastra/core/agent";

const agent = new Agent({
  name: "WeatherAgent",
  instructions: "Instructions for the agent...",
  model: openai("gpt-4-turbo"), // Model comes directly from AI SDK
});

const result = await agent.generate("What is the weather like?");
```

## AI SDK Hooks

Mastra is compatible with the AI SDK's hooks for seamless frontend integration:

### useChat

The `useChat` hook enables real-time chat interactions in your frontend application.

- Works with agent data streams (`.toDataStreamResponse()`)
- The useChat `api` defaults to `/api/chat`
- Works with the Mastra REST API agent stream endpoint `{MASTRA_BASE_URL}/agents/:agentId/stream` for data streams (when no structured output is defined)

```typescript filename="app/api/chat/route.ts" copy
import { mastra } from "@/src/mastra";

export async function POST(req: Request) {
  const { messages } = await req.json();
  const myAgent = mastra.getAgent("weatherAgent");
  const stream = await myAgent.stream(messages);

  return stream.toDataStreamResponse();
}
```

```typescript copy
import { useChat } from '@ai-sdk/react';

export function ChatComponent() {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: '/path-to-your-agent-stream-api-endpoint'
  });

  return (
    <div>
      {messages.map(m => (
        <div key={m.id}>
          {m.role}: {m.content}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Say something..."
        />
      </form>
    </div>
  );
}
```

> **Gotcha**: When using `useChat` with agent memory features, check the [agent memory section](/docs/agents/agent-memory#usechat) for important implementation details.

### useCompletion

For single-turn completions, use the `useCompletion` hook:

- Works with agent data streams (`.toDataStreamResponse()`)
- The useCompletion `api` defaults to `/api/completion`
- Works with the Mastra REST API agent stream endpoint `{MASTRA_BASE_URL}/agents/:agentId/stream` for data streams (when no structured output is defined)

```typescript filename="app/api/completion/route.ts" copy
import { mastra } from "@/src/mastra";

export async function POST(req: Request) {
  const { messages } = await req.json();
  const myAgent = mastra.getAgent("weatherAgent");
  const stream = await myAgent.stream(messages);

  return stream.toDataStreamResponse();
}
```

```typescript
import { useCompletion } from "@ai-sdk/react";

export function CompletionComponent() {
  const {
    completion,
    input,
    handleInputChange,
    handleSubmit,
  } = useCompletion({
    api: '/path-to-your-agent-stream-api-endpoint'
  });

  return (
    <div>
      <form onSubmit={handleSubmit}>
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Enter a prompt..."
        />
      </form>
      <p>Completion result: {completion}</p>
    </div>
  );
}
```

### useObject

For consuming text streams that represent JSON objects and parsing them into a complete object based on a schema.

- Works with agent text streams (`.toTextStreamResponse()`)
- The useObject `api` defaults to `/api/completion`
- Works with the Mastra REST API agent stream endpoint `{MASTRA_BASE_URL}/agents/:agentId/stream` for text streams (when structured output is defined)

```typescript filename="app/api/use-object/route.ts" copy
import { mastra } from "@/src/mastra";
import { z } from "zod";

export async function POST(req: Request) {
  const { messages } = await req.json();
  const myAgent = mastra.getAgent("weatherAgent");
  const stream = await myAgent.stream(messages, {
    output: z.object({
      weather: z.string(),
    }),
  });

  return stream.toTextStreamResponse();
}
```

```typescript
import { experimental_useObject as useObject } from '@ai-sdk/react';
import { z } from 'zod';

export default function Page() {
  const { object, submit } = useObject({
    api: '/api/use-object',
    schema: z.object({
      weather: z.string(),
    }),
  });

  return (
    <div>
      <button onClick={() => submit('What is the weather in London?')}>
        Ask
      </button>
      {object?.weather && <p>{object.weather}</p>}
    </div>
  );
}
```

## Tool Calling

### AI SDK Tool Format

Mastra supports tools created in the AI SDK format, so you can use them directly with Mastra agents. See our tools documentation on the [Vercel AI SDK tool format](/docs/agents/adding-tools#vercel-ai-sdk-tool-format) for details.

### Client-Side Tool Calling

Mastra leverages the AI SDK's tool calling, so what applies to the AI SDK applies here as well.
Mastra's [agent tools](/docs/agents/adding-tools) are 100% compatible with AI SDK tools.

Mastra tools also expose an optional `execute` async function. It is optional because you might want to forward tool calls to the client or to a queue instead of executing them in the same process.

One way to leverage client-side tool calling is to use the `onToolCall` property of the `@ai-sdk/react` `useChat` hook to execute tools on the client side.
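A minimal sketch of that pattern follows. The `getLocalTime` tool name and its handling are illustrative assumptions; the `onToolCall` callback itself is part of the AI SDK's `useChat` API, and a value returned from it is reported back as the tool result:

```typescript
import { useChat } from '@ai-sdk/react';

export function ChatWithClientTools() {
  const { messages } = useChat({
    api: '/path-to-your-agent-stream-api-endpoint',
    // Runs on the client when the agent calls a tool that has
    // no server-side execute function
    onToolCall: async ({ toolCall }) => {
      if (toolCall.toolName === 'getLocalTime') {
        // The returned value becomes the tool result
        return new Date().toLocaleTimeString();
      }
    },
  });

  return (
    <div>
      {messages.map(m => (
        <div key={m.id}>
          {m.role}: {m.content}
        </div>
      ))}
    </div>
  );
}
```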
---
title: "Getting Started with Mastra and NextJS | Mastra Guides"
description: A guide to integrating Mastra with NextJS.
---

import { Callout, Steps, Tabs } from "nextra/components";

# Integrating Mastra in Your Next.js Project

[JA] Source: https://mastra.ai/ja/docs/frameworks/next-js

There are two main ways to integrate Mastra with your Next.js application: as a separate backend service, or directly integrated into your Next.js app.

## 1. Separate Backend Integration

Best for larger projects where you want to:

- Scale your AI backend independently
- Maintain a clear separation of concerns
- Have more deployment flexibility

### Create a Mastra Backend

Create a new Mastra project using the CLI:

```bash copy
npx create-mastra@latest
```

```bash copy
npm create mastra
```

```bash copy
yarn create mastra
```

```bash copy
pnpm create mastra
```

For detailed setup instructions, see the [installation guide](/docs/getting-started/installation).

### Install MastraClient

```bash copy
npm install @mastra/client-js
```

```bash copy
yarn add @mastra/client-js
```

```bash copy
pnpm add @mastra/client-js
```

### Use MastraClient

Create a client instance and use it in your Next.js application:

```typescript filename="lib/mastra.ts" copy
import { MastraClient } from '@mastra/client-js';

// Initialize the client
export const mastraClient = new MastraClient({
  baseUrl: process.env.NEXT_PUBLIC_MASTRA_API_URL || 'http://localhost:4111',
});
```

Example usage in a React component:

```typescript filename="app/components/SimpleWeather.tsx" copy
'use client'

import { mastraClient } from '@/lib/mastra'

export function SimpleWeather() {
  async function handleSubmit(formData: FormData) {
    const city = formData.get('city')
    const agent = mastraClient.getAgent('weatherAgent')

    try {
      const response = await agent.generate({
        messages: [{ role: 'user', content: `What's the weather like in ${city}?` }],
      })
      // Handle the response
      console.log(response.text)
    } catch (error) {
      console.error('Error:', error)
    }
  }

  return (
    <form action={handleSubmit}>
      <input name="city" placeholder="Enter city name" />
      <button type="submit">Check Weather</button>
    </form>
  )
}
```

### Deployment

When you're ready to deploy, you can use any of the platform-specific deployers (Vercel, Netlify, Cloudflare) or deploy to any Node.js hosting platform. Check the [deployment guide](/docs/deployment/deployment) for detailed instructions.
## 2. Direct Integration

Better suited to smaller projects and prototypes. With this approach, Mastra is bundled directly into your Next.js application.

### Initialize Mastra in Your Next.js Root

First, navigate to the root of your Next.js project and initialize Mastra:

```bash copy
cd your-nextjs-app
```

Then run the initialization command:

```bash copy
npx mastra@latest init
```

```bash copy
yarn dlx mastra@latest init
```

```bash copy
pnpm dlx mastra@latest init
```

This sets up Mastra in your Next.js project. For more details on initialization and other configuration options, see the [mastra init reference](/reference/cli/init).

### Configure Next.js

Add the following to your `next.config.js`:

```js filename="next.config.js" copy
/** @type {import('next').NextConfig} */
const nextConfig = {
  serverExternalPackages: ["@mastra/*"],
  // ... other Next.js config
}

module.exports = nextConfig
```

#### Server Action Example

```typescript filename="app/actions.ts" copy
'use server'

import { mastra } from '@/mastra'

export async function getWeatherInfo(city: string) {
  const agent = mastra.getAgent('weatherAgent')

  const result = await agent.generate(`What's the weather like in ${city}?`)

  return result
}
```

Usage in a component:

```typescript filename="app/components/Weather.tsx" copy
'use client'

import { getWeatherInfo } from '../actions'

export function Weather() {
  async function handleSubmit(formData: FormData) {
    const city = formData.get('city') as string
    const result = await getWeatherInfo(city)
    // Handle the result
    console.log(result)
  }

  return (
    <form action={handleSubmit}>
      <input name="city" placeholder="Enter city name" />
      <button type="submit">Check Weather</button>
    </form>
  )
}
```

#### API Route Example

```typescript filename="app/api/chat/route.ts" copy
import { mastra } from '@/mastra'

export async function POST(req: Request) {
  const { city } = await req.json()
  const agent = mastra.getAgent('weatherAgent')

  const result = await agent.stream(`What's the weather like in ${city}?`)

  return result.toDataStreamResponse()
}
```

### Deployment

With direct integration, your Mastra instance deploys together with your Next.js application. Make sure to:

- Set environment variables for your LLM API keys on your deployment platform
- Implement proper error handling for production
- Monitor your AI agent's performance and costs
## Observability

Mastra provides built-in observability features to help you monitor, debug, and optimize your AI operations. This includes:

- Tracing of AI operations and their performance
- Logging of prompts, completions, and errors
- Integration with observability platforms like Langfuse and LangSmith

For detailed setup instructions and configuration options specific to Next.js local development, see the [Next.js observability configuration guide](/docs/deployment/logging-and-tracing#nextjs-configuration).
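As a minimal starting point, the built-in logger can be configured on the Mastra instance, mirroring the `createLogger` usage from the deployer example earlier in this document; the name and level shown here are placeholders:

```typescript
import { Mastra, createLogger } from "@mastra/core";

export const mastra = new Mastra({
  // ...agents, workflows, and other configuration
  logger: createLogger({ name: "MyNextApp", level: "info" }),
});
```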
---
title: "Installing Mastra Locally | Getting Started | Mastra Docs"
description: A guide to installing Mastra and setting up the prerequisites for running it with various LLM providers.
---

import { Callout, Steps, Tabs } from "nextra/components";
import YouTube from "@/components/youtube";

# Installing Mastra Locally

[JA] Source: https://mastra.ai/ja/docs/getting-started/installation

To run Mastra, you need access to an LLM. Typically, you'll want to get an API key from an LLM provider such as [OpenAI](https://platform.openai.com/), [Anthropic](https://console.anthropic.com/settings/keys), or [Google Gemini](https://ai.google.dev/gemini-api/docs). You can also run Mastra with a local LLM using [Ollama](https://ollama.ai/).

## Prerequisites

- Node.js `v20.0` or higher
- Access to a [supported large language model (LLM)](/docs/frameworks/ai-sdk)

## Automatic Installation

### Create a New Project

We recommend starting a new Mastra project with `create-mastra`, which scaffolds the project for you. To create a project, run:

```bash copy
npx create-mastra@latest
```

```bash copy
npm create mastra@latest
```

```bash copy
yarn create mastra@latest
```

```bash copy
pnpm create mastra@latest
```

During installation, you'll see the following prompts:

```bash
What do you want to name your project? my-mastra-app
Choose components to install:
  ◯ Agents (recommended)
  ◯ Tools
  ◯ Workflows
Select default provider:
  ◯ OpenAI (recommended)
  ◯ Anthropic
  ◯ Groq
Would you like to include example code? No / Yes
Turn your IDE into a Mastra expert? (Installs MCP server)
  ◯ Skip for now
  ◯ Cursor
  ◯ Windsurf
```

After the prompts, `create-mastra` will:

1. Set up the project directory with TypeScript
2. Install dependencies
3. Configure your selected components and LLM provider
4. Configure the MCP server in your IDE (if selected), giving you instant access to docs, examples, and help while you code

**A note on MCP:** If you're using a different IDE, you can install the MCP server manually by following the instructions in the [MCP server docs](/docs/getting-started/mcp-docs-server). **Also** note that [Cursor and Windsurf](/docs/getting-started/mcp-docs-server#after-configuration) require extra steps to enable the MCP server.

### Set Up Your API Key

Add the API key for your configured LLM provider to your `.env` file.

```bash filename=".env" copy
OPENAI_API_KEY=
```

**Non-interactive mode**: To run the command with flags (non-interactive mode) and include example code, you can use:

```bash copy
npx create-mastra@latest --components agents,tools --llm openai --example
```

**Configuring an installation timeout**: To set a timeout in case installation takes too long, use the timeout flag:

```bash copy
npx create-mastra@latest --timeout
```

**A note on LLMs**: As a quick one-liner that includes examples, you can run `npx -y mastra@latest --project-name <name> --example --components "tools,agents,workflows" --llm <provider>`. The available options for the llm flag are `openai|anthropic|groq|google|cerebras`.

## Manual Installation

If you prefer to set up your Mastra project manually, follow these steps:

### Create a New Project

Create a project directory and navigate into it:

```bash copy
mkdir hello-mastra
cd hello-mastra
```

Then initialize a TypeScript project including the `@mastra/core` package:

```bash copy
npm init -y
npm install typescript tsx @types/node mastra --save-dev
npm install @mastra/core zod @ai-sdk/openai
npx tsc --init
```

```bash copy
pnpm init
pnpm add typescript tsx @types/node mastra --save-dev
pnpm add @mastra/core zod @ai-sdk/openai
pnpm dlx tsc --init
```

```bash copy
yarn init -y
yarn add typescript tsx @types/node mastra --dev
yarn add @mastra/core zod @ai-sdk/openai
yarn dlx tsc --init
```

```bash copy
bun init -y
bun add typescript tsx @types/node mastra --dev
bun add @mastra/core zod @ai-sdk/openai
bunx tsc --init
```

### Initialize TypeScript

Create a `tsconfig.json` file in your project root with the following configuration:

```json copy
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "ES2022",
    "moduleResolution": "bundler",
    "esModuleInterop": true,
    "forceConsistentCasingInFileNames": true,
    "strict": true,
    "skipLibCheck": true,
    "outDir": "dist"
  },
  "include": ["src/**/*"],
  "exclude": ["node_modules", "dist", ".mastra"]
}
```

This TypeScript configuration is optimized for Mastra projects, using modern module resolution and strict type checking.

### Set Up Your API Key

Create a `.env` file in your project root directory and add your API key:

```bash filename=".env" copy
OPENAI_API_KEY=
```

Replace `your_openai_api_key` with your actual API key.

### Create a Tool

Create a `weather-tool` tool file:

```bash copy
mkdir -p src/mastra/tools && touch src/mastra/tools/weather-tool.ts
```

Then add the following code to `src/mastra/tools/weather-tool.ts`:

```ts filename="src/mastra/tools/weather-tool.ts" showLineNumbers copy
import { createTool } from "@mastra/core/tools";
import { z } from "zod";

interface WeatherResponse {
  current: {
    time: string;
    temperature_2m: number;
    apparent_temperature: number;
    relative_humidity_2m: number;
    wind_speed_10m: number;
    wind_gusts_10m: number;
    weather_code: number;
  };
}

export const weatherTool = createTool({
  id: "get-weather",
  description: "Get current weather for a location",
  inputSchema: z.object({
    location: z.string().describe("City name"),
  }),
  outputSchema: z.object({
    temperature: z.number(),
    feelsLike: z.number(),
    humidity: z.number(),
    windSpeed: z.number(),
    windGust: z.number(),
    conditions: z.string(),
    location: z.string(),
  }),
  execute: async ({ context }) => {
    return await getWeather(context.location);
  },
});

const getWeather = async (location: string) => {
  const geocodingUrl = `https://geocoding-api.open-meteo.com/v1/search?name=${encodeURIComponent(location)}&count=1`;
  const geocodingResponse = await fetch(geocodingUrl);
  const geocodingData = await geocodingResponse.json();

  if (!geocodingData.results?.[0]) {
    throw new Error(`Location '${location}' not found`);
  }

  const { latitude, longitude, name } = geocodingData.results[0];

  const weatherUrl = `https://api.open-meteo.com/v1/forecast?latitude=${latitude}&longitude=${longitude}&current=temperature_2m,apparent_temperature,relative_humidity_2m,wind_speed_10m,wind_gusts_10m,weather_code`;

  const response = await fetch(weatherUrl);
  const data: WeatherResponse = await response.json();

  return {
    temperature: data.current.temperature_2m,
    feelsLike: data.current.apparent_temperature,
    humidity: data.current.relative_humidity_2m,
    windSpeed: data.current.wind_speed_10m,
    windGust: data.current.wind_gusts_10m,
    conditions: getWeatherCondition(data.current.weather_code),
    location: name,
  };
};

function getWeatherCondition(code: number): string {
  const conditions: Record<number, string> = {
    0: "Clear sky",
    1: "Mainly clear",
    2: "Partly cloudy",
    3: "Overcast",
    45: "Foggy",
    48: "Depositing rime fog",
    51: "Light drizzle",
    53: "Moderate drizzle",
    55: "Dense drizzle",
    56: "Light freezing drizzle",
    57: "Dense freezing drizzle",
    61: "Slight rain",
    63: "Moderate rain",
    65: "Heavy rain",
    66: "Light freezing rain",
    67: "Heavy freezing rain",
    71: "Slight snow fall",
    73: "Moderate snow fall",
    75: "Heavy snow fall",
    77: "Snow grains",
    80: "Slight rain showers",
    81: "Moderate rain showers",
    82: "Violent rain showers",
    85: "Slight snow showers",
    86: "Heavy snow showers",
    95: "Thunderstorm",
    96: "Thunderstorm with slight hail",
    99: "Thunderstorm with heavy hail",
  };
  return conditions[code] || "Unknown";
}
```

### Create an Agent

Create the `weather` agent file:

```bash copy
mkdir -p src/mastra/agents && touch src/mastra/agents/weather.ts
```

Then add the following code to `src/mastra/agents/weather.ts`:

```ts filename="src/mastra/agents/weather.ts" showLineNumbers copy
import { openai } from "@ai-sdk/openai";
import { Agent } from "@mastra/core/agent";
import { weatherTool } from "../tools/weather-tool";

export const weatherAgent = new Agent({
  name: "Weather Agent",
  instructions: `You are a helpful weather assistant that provides accurate weather information.

Your primary function is to help users get weather details for specific locations. When responding:
- Always ask for a location if none is provided
- If the location name isn't in English, please translate it
- Include relevant details like humidity, wind conditions, and precipitation
- Keep responses concise but informative

Use the weatherTool to fetch current weather data.`,
  model: openai("gpt-4o-mini"),
  tools: { weatherTool },
});
```

### Register the Agent

Finally, create the Mastra entry point in `src/mastra/index.ts` and register the agent:

```ts filename="src/mastra/index.ts" showLineNumbers copy
import { Mastra } from "@mastra/core";
import { weatherAgent } from "./agents/weather";

export const mastra = new Mastra({
  agents: { weatherAgent },
});
```

This registers the agent with Mastra so that `mastra dev` can discover and serve it.

## Adding to an Existing Project

To add Mastra to an existing project, see the local development docs on [adding mastra to an existing project](/docs/local-dev/add-to-existing-project).

You can also check the framework-specific docs, such as [Next.js](/docs/frameworks/next-js).

## Start the Mastra Server

Mastra provides commands to serve your agents via REST endpoints.

### Development Server

Run the following command to start the Mastra server:

```bash copy
npm run dev
```

If you have the mastra CLI installed, run:

```bash copy
mastra dev
```

This command creates REST API endpoints for your agents.

### Test the Endpoint

You can test the agent's endpoint using `curl` or `fetch`:

```bash copy
curl -X POST http://localhost:4111/api/agents/weatherAgent/generate \
  -H "Content-Type: application/json" \
  -d '{"messages": ["What is the weather in London?"]}'
```

```js copy showLineNumbers
fetch('http://localhost:4111/api/agents/weatherAgent/generate', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    messages: ['What is the weather in London?'],
  }),
})
  .then(response => response.json())
  .then(data => {
    console.log('Agent response:', data.text);
  })
  .catch(error => {
    console.error('Error:', error);
  });
```

## Use Mastra on the Client

To use Mastra in your frontend applications, you can use the type-safe client SDK to interact with Mastra's REST APIs.
See the [Mastra Client SDK documentation](/docs/deployment/client) for detailed usage.

## Run from the Command Line

If you'd like to call agents directly from the command line, you can create a script that gets an agent and calls it:

```ts filename="src/index.ts" showLineNumbers copy
import { mastra } from "./mastra";

async function main() {
  const agent = await mastra.getAgent("weatherAgent");

  const result = await agent.generate("What is the weather in London?");

  console.log("Agent response:", result.text);
}

main();
```
Then run the script to test that everything is set up correctly:

```bash copy
npx tsx src/index.ts
```

This should print the agent's response to your console.

---
title: "Using with Cursor/Windsurf | Getting Started | Mastra Docs"
description: "Learn how to use the Mastra MCP documentation server in your IDE to turn it into an agentic Mastra expert."
---

import YouTube from "@/components/youtube";

# Mastra Tools for Your Agentic IDE

[JA] Source: https://mastra.ai/ja/docs/getting-started/mcp-docs-server

`@mastra/mcp-docs-server` provides direct access to Mastra's complete knowledge base in Cursor, Windsurf, Cline, or any other IDE that supports MCP.

It has access to documentation, code examples, technical blog posts / feature announcements, and package changelogs, which your IDE can read to help you build with Mastra.

The MCP server tools are designed to let an agent query the specific information it needs to complete a Mastra-related task — for example, adding a Mastra feature to an agent, scaffolding a new project, or helping you understand how something works.

## How It Works

Once it's installed in your IDE, you can write prompts and assume the agent understands everything about Mastra.

### Add Features

- "Add evals to my agent and write tests"
- "Write me a workflow that does `[task]`"
- "Make a new tool that gives my agent access to `[3rd party API]`"

### Ask About Integrations

- "Does Mastra work with the AI SDK? How can I use it in my `[React/Svelte/etc]` project?"
- "What's the latest Mastra news about MCP?"
- "Does Mastra support `[provider]` speech and voice APIs? Show me an example in my code."

### Debug or Update Existing Code

- "I'm running into a bug with agent memory. Have there been any recent changes or bug fixes related to that?"
- "How does working memory behave in Mastra, and how can I use it to do `[task]`? It doesn't seem to work as expected."
- "I heard there are new workflow features. Explain them, then update `[workflow]` to use them."

**And more** — if you have a question, ask your IDE and let it look it up.

## Automatic Installation

Run `pnpm create mastra@latest` and select Cursor or Windsurf when prompted to install the MCP server. For other IDEs, or if you already have a Mastra project, install the MCP server by following the instructions below.

## Manual Installation

- **Cursor**: edit `.cursor/mcp.json` in your project root, or `~/.cursor/mcp.json` for global configuration
- **Windsurf**: edit `~/.codeium/windsurf/mcp_config.json` (only global configuration is supported)

Add the following configuration:

### MacOS/Linux

```json
{
  "mcpServers": {
    "mastra": {
      "command": "npx",
      "args": ["-y", "@mastra/mcp-docs-server@latest"]
    }
  }
}
```

### Windows

```json
{
  "mcpServers": {
    "mastra": {
      "command": "cmd",
      "args": ["/c", "npx", "-y", "@mastra/mcp-docs-server@latest"]
    }
  }
}
```

## After Configuration

### Cursor

1. Open Cursor settings
2. Go to the MCP settings
3. Click "enable" on the Mastra MCP server
4. If you have an agent chat open, re-open it or start a new chat to use the MCP server

### Windsurf

1. Fully quit and re-open Windsurf
2. If tool calls start failing, go to Windsurf's MCP settings and restart the MCP server. This is a common Windsurf MCP issue and isn't related to Mastra. Right now, Cursor's MCP implementation is more stable than Windsurf's.

In both IDEs, the first start of the MCP server may take a little while, since the package needs to be downloaded from npm.

## Available Agent Tools

### Documentation

Access Mastra's complete documentation:

- Getting started / installation
- Guides and tutorials
- API reference

### Examples

Browse code examples:

- Complete project structures
- Implementation patterns
- Best practices

### Blog Posts

Search the blog:

- Technical posts
- Changelog and feature announcements
- AI news and updates

### Package Changes

Track updates to Mastra and `@mastra/*` packages:

- Bug fixes
- New features
- Breaking changes

## Common Issues

1. **Server not starting**
   - Make sure npx is installed and working
   - Check for conflicting MCP servers
   - Verify your configuration file syntax
   - On Windows, make sure you're using the Windows-specific configuration
2. **Tool calls failing**
   - Restart the MCP server and/or your IDE
   - Update your IDE to the latest version

## Model Capabilities

[JA] Source: https://mastra.ai/ja/docs/getting-started/model-capability

import { ProviderTable } from "@/components/provider-table";

AI providers support different language models with varying capabilities. Not all models support structured output, image input, object generation, tool usage, or tool streaming.

Here are the capabilities of popular models:

Source: [https://sdk.vercel.ai/docs/foundations/providers-and-models#model-capabilities](https://sdk.vercel.ai/docs/foundations/providers-and-models#model-capabilities)

---
title: "Local Project Structure | Getting Started | Mastra Docs"
description: A guide to organizing folders and files in Mastra, including best practices and recommended structures.
---

import { FileTree } from 'nextra/components';

# Project Structure

[JA] Source: https://mastra.ai/ja/docs/getting-started/project-structure

This page provides a guide to organizing folders and files in Mastra. Mastra is a modular framework, and you can use the modules separately or together.

You can write everything in a single file (as shown in the quick start), or separate each agent, tool, and workflow into its own file.

We don't enforce a specific folder structure, but we do recommend some best practices, and the CLI scaffolds projects with a sensible structure.

## Using the CLI

`mastra init` is an interactive CLI that lets you:

- **Choose a directory for your Mastra files**: specify where the Mastra files should go (the default is `src/mastra`).
- **Select components to install**: choose which components you want to include in your project:
  - Agents
  - Tools
  - Workflows
- **Select a default LLM provider**: choose from supported providers such as OpenAI, Anthropic, or Groq.
- **Include example code**: decide whether to include example code to help you get started.

### Example Project Structure

If you select all components and include example code, the project structure looks like this:

```
root/
├── src/
│   └── mastra/
│       ├── agents/
│       │   └── index.ts
│       ├── tools/
│       │   └── index.ts
│       ├── workflows/
│       │   └── index.ts
│       ├── index.ts
├── .env
```

### Top-Level Folders

| Folder | Description |
| --- | --- |
| `src/mastra` | Core application folder |
| `src/mastra/agents` | Agent configurations and definitions |
| `src/mastra/tools` | Custom tool definitions |
| `src/mastra/workflows` | Workflow definitions |

### Top-Level Files

| File | Description |
| --- | --- |
| `src/mastra/index.ts` | Main Mastra configuration file |
| `.env` | Environment variables |

---
title: "Introduction | Mastra Docs"
description: "Mastra is a TypeScript agent framework. It helps you build AI applications and features quickly. It gives you the set of primitives you need: workflows, agents, RAG, integrations, syncs, and evals."
---

# About Mastra

[JA] Source: https://mastra.ai/ja/docs

Mastra is an open-source TypeScript agent framework.

It's designed to give you the primitives you need to build AI applications and features.

You can use Mastra to build [AI agents](/docs/agents/overview.mdx) that have memory and can execute functions, or chain LLM calls in deterministic [workflows](/docs/workflows/overview.mdx). You can chat with your agents in Mastra's [local dev environment](/docs/local-dev/mastra-dev.mdx), feed them application-specific knowledge with [RAG](/docs/rag/overview.mdx), and score their outputs with Mastra's [evals](/docs/evals/overview.mdx).

The main features include:

* **[Model routing](https://sdk.vercel.ai/docs/introduction)**: Mastra uses the [Vercel AI SDK](https://sdk.vercel.ai/docs/introduction) for model routing, providing a unified interface to interact with any LLM provider, including OpenAI, Anthropic, and Google Gemini.
* **[Agent memory and tool calling](/docs/agents/agent-memory.mdx)**: With Mastra, you can give your agents tools (functions) they can call. You can persist agent memory and retrieve it based on recency, semantic similarity, or conversation thread.
* **[Workflow graphs](/docs/workflows/overview.mdx)**: When you want to execute LLM calls deterministically, Mastra gives you a graph-based workflow engine. You can define discrete steps, log the inputs and outputs at each step of each run, and pipe them into an observability tool. Mastra workflows have a simple syntax for control flow (`step()`, `.then()`, `.after()`) that allows branching and chaining.
* **[Agent development environment](/docs/local-dev/mastra-dev.mdx)**: When developing an agent locally, you can chat with it and see its state and memory in Mastra's agent development environment.
* **[Retrieval-augmented generation (RAG)](/docs/rag/overview.mdx)**: Mastra gives you APIs to process documents (text, HTML, Markdown, JSON) into chunks, create embeddings, and store them in a vector database. At query time, it retrieves relevant chunks to ground LLM responses in your data, with a unified API on top of multiple vector stores (Pinecone, pgvector, etc.) and embedding providers (OpenAI, Cohere, etc.).
* **[Deployment](/docs/deployment/deployment.mdx)**: Mastra supports bundling your agents and workflows within an existing React, Next.js, or Node.js application, or into standalone endpoints. The Mastra deploy helpers let you easily bundle agents and workflows into a Node.js server using Hono, or deploy them to a serverless platform like Vercel, Cloudflare Workers, or Netlify.
* **[Evals](/docs/evals/overview.mdx)**: Mastra provides automated evaluation metrics that use model-graded, rule-based, and statistical methods to assess LLM outputs, with built-in metrics for toxicity, bias, relevance, and factual accuracy. You can also define your own evals.

---
title: "Using Mastra Integrations | Mastra Local Development Docs"
description: Documentation for Mastra integrations, which are auto-generated, type-safe API clients for third-party services.
---

# Using Mastra Integrations

[JA] Source: https://mastra.ai/ja/docs/integrations

Integrations in Mastra are auto-generated, type-safe API clients for third-party services. They can be used as tools for agents or as steps in workflows.

## Installing an Integration

Mastra's default integrations are packaged as individually installable npm modules. You can add an integration to your project by installing it via npm and importing it into your Mastra configuration.

### Example: Adding the GitHub Integration

1. **Install the integration package**

To install the GitHub integration, run:

```bash
npm install @mastra/github
```

2. **Add the integration to your project**

Create a new file for your integrations (e.g., `src/mastra/integrations/index.ts`) and import the integration:

```typescript filename="src/mastra/integrations/index.ts" showLineNumbers copy
import { GithubIntegration } from "@mastra/github";

export const github = new GithubIntegration({
  config: {
    PERSONAL_ACCESS_TOKEN: process.env.GITHUB_PAT!,
  },
});
```

Replace `process.env.GITHUB_PAT!` with your actual GitHub personal access token, or make sure the environment variable is set properly.

3. **Use the integration in tools or workflows**

You can use the integration when defining tools for your agents or in workflows.

```typescript filename="src/mastra/tools/index.ts" showLineNumbers copy
import { createTool } from "@mastra/core";
import { z } from "zod";
import { github } from "../integrations";

export const getMainBranchRef = createTool({
  id: "getMainBranchRef",
  description: "Fetch the main branch reference from a GitHub repository",
  inputSchema: z.object({
    owner: z.string(),
    repo: z.string(),
  }),
  outputSchema: z.object({
    ref: z.string().optional(),
  }),
  execute: async ({ context }) => {
    const client = await github.getApiClient();

    const mainRef = await client.gitGetRef({
      path: {
        owner: context.owner,
        repo: context.repo,
        ref: "heads/main",
      },
    });

    return { ref: mainRef.data?.ref };
  },
});
```

In the example above:

- We import the `github` integration.
- We define a tool called `getMainBranchRef` that uses the GitHub API client to fetch the reference of a repository's main branch.
- The tool takes `owner` and `repo` as inputs and returns the reference string.

## Using Integrations in Agents

Once you've defined tools that use an integration, you can include those tools in your agents.

```typescript filename="src/mastra/agents/index.ts" showLineNumbers copy
import { openai } from "@ai-sdk/openai";
import { Agent } from "@mastra/core/agent";
import { getMainBranchRef } from "../tools";

export const codeReviewAgent = new Agent({
  name: "Code Review Agent",
  instructions:
    "An agent that reviews code repositories and provides feedback.",
  model: openai("gpt-4o-mini"),
  tools: {
    getMainBranchRef,
    // other tools...
  },
});
```

In this setup:

- We create an agent named `Code Review Agent`.
- We include the `getMainBranchRef` tool in the agent's available tools.
- The agent can now use this tool to interact with GitHub repositories during conversations.

## Environment Configuration

Make sure the API keys and tokens required by your integrations are set correctly in your environment variables. For example, for the GitHub integration you need to set a GitHub personal access token:

```bash
GITHUB_PAT=your_personal_access_token
```

Consider using a `.env` file or another secure method for managing sensitive credentials.

### Example: Adding the Mem0 Integration

In this example you'll learn how to add long-term memory capabilities to an agent via tool use, using the [Mem0](https://mem0.ai) platform.
This memory integration can work alongside Mastra's own [agent memory features](https://mastra.ai/docs/agents/agent-memory).
Mem0 lets the agent memorize facts per user across all interactions and recall them later, while Mastra's memory works per thread. Used together, Mem0 stores long-term memories across conversations/interactions, while Mastra's memory maintains linear conversation history within an individual conversation.

1. **Install the integration package**

To install the Mem0 integration, run:

```bash
npm install @mastra/mem0
```

2. **Add the integration to your project**

Create a new file for your integrations (e.g., `src/mastra/integrations/index.ts`) and import the integration:

```typescript filename="src/mastra/integrations/index.ts" showLineNumbers copy
import { Mem0Integration } from "@mastra/mem0";

export const mem0 = new Mem0Integration({
  config: {
    apiKey: process.env.MEM0_API_KEY!,
    userId: "alice",
  },
});
```

3. **Use the integration in tools or workflows**

You can use the integration when defining tools for your agents or in workflows.

```typescript filename="src/mastra/tools/index.ts" showLineNumbers copy
import { createTool } from "@mastra/core";
import { z } from "zod";
import { mem0 } from "../integrations";

export const mem0RememberTool = createTool({
  id: "Mem0-remember",
  description:
    "Remember agent memories that were previously saved with the Mem0-memorize tool.",
  inputSchema: z.object({
    question: z
      .string()
      .describe("Question used to look up the answer in saved memories."),
  }),
  outputSchema: z.object({
    answer: z.string().describe("Remembered answer"),
  }),
  execute: async ({ context }) => {
    console.log(`Searching memory "${context.question}"`);
    const memory = await mem0.searchMemory(context.question);
    console.log(`\nFound memory "${memory}"\n`);

    return {
      answer: memory,
    };
  },
});

export const mem0MemorizeTool = createTool({
  id: "Mem0-memorize",
  description:
    "Save information to Mem0 so it can be remembered later with the Mem0-remember tool.",
  inputSchema: z.object({
    statement: z.string().describe("A statement to save into memory"),
  }),
  execute: async ({ context }) => {
    console.log(`\nCreating memory "${context.statement}"\n`);
    // To reduce latency, memories can be saved asynchronously without blocking tool execution
    void mem0.createMemory(context.statement).then(() => {
      console.log(`\nMemory "${context.statement}" saved.\n`);
    });
    return { success: true };
  },
});
```

In the example above:

- We import the `@mastra/mem0` integration.
- We define two tools that use the Mem0 API client to create new memories and recall previously saved memories.
- The remember tool takes a `question` as input and returns the memory as a string.

## Available Integrations

Mastra ships several built-in integrations, primarily API-key-based integrations that don't require OAuth. Available integrations include GitHub, Stripe, Resend, Firecrawl, and more.

For the full list of available integrations, check [Mastra's codebase](https://github.com/mastra-ai/mastra/tree/main/integrations) or the [npm packages](https://www.npmjs.com/search?q=%22%40mastra%22).

## Conclusion

Mastra integrations enable your AI agents and workflows to interact seamlessly with external services. By installing and configuring integrations, you can add capabilities such as fetching data from APIs, sending messages, and managing resources in third-party systems to your application.

Be sure to consult each integration's documentation for specific usage details, and follow best practices for security and type safety.

---
title: "Adding to an Existing Project | Mastra Local Development Docs"
description: "Add Mastra to an existing Node.js application"
---

# Adding to an Existing Project

[JA] Source: https://mastra.ai/ja/docs/local-dev/add-to-existing-project

You can add Mastra to an existing project using the CLI:

```bash npm2yarn copy
npm install -g mastra@latest
mastra init
```

Changes made to your project:

1. Creates a `src/mastra` directory with an entry point
2. Adds the required dependencies
3. Configures TypeScript compiler options

## Interactive Setup

Running the command without arguments starts a CLI prompt for:

1. Component selection
2. LLM provider configuration
3. API key setup
4. Including example code

## Non-Interactive Setup

To initialize mastra in non-interactive mode, use the following command arguments:

```bash
Arguments:
  --components     Specify components: agents, tools, workflows
  --llm-provider   LLM provider: openai, anthropic, or groq
  --add-example    Include example implementation
  --llm-api-key    Provider API key
  --dir            Directory for Mastra files (defaults to src/)
```

For more details, see the [mastra init CLI documentation](../../reference/cli/init).

---
title: "Creating a New Project | Mastra Local Development Docs"
description: "Create a new Mastra project with the CLI, or add Mastra to an existing Node.js application"
---

# Creating a New Project

[JA] Source: https://mastra.ai/ja/docs/local-dev/creating-a-new-project

You can create a new project using the `create-mastra` package:

```bash npm2yarn copy
npm create mastra@latest
```

You can also create a new project using the `mastra` CLI directly:

```bash npm2yarn copy
npm install -g mastra@latest
mastra create
```

## Interactive Setup

Running the command without arguments starts a CLI prompt for:

1. Project name
2. Component selection
3. LLM provider configuration
4. API key setup
5. Including example code

## Non-Interactive Setup

To initialize mastra in non-interactive mode, use the following command arguments:

```bash
Arguments:
  --components     Specify components: agents, tools, workflows
  --llm-provider   LLM provider: openai, anthropic, groq, google, or cerebras
  --add-example    Include example implementation
  --llm-api-key    Provider API key
  --project-name   Project name that will be used in package.json and as the project directory name
```

Generated project structure:

```
my-project/
├── src/
│   └── mastra/
│       └── index.ts    # Mastra entry point
├── package.json
└── tsconfig.json
```

---
title: "Inspecting agents with `mastra dev` | Mastra Local Development Docs"
description: Documentation for the Mastra local development environment for Mastra applications.
---

import YouTube from "@/components/youtube";

# Local Development Environment

[JA] Source: https://mastra.ai/ja/docs/local-dev/mastra-dev

Mastra provides a local development environment where you can test your agents, workflows, and tools while developing locally.

## Starting the Dev Server

You can launch the Mastra development environment with the Mastra CLI by running:

```bash
mastra dev
```

By default, the server runs at http://localhost:4111, but you can change the port with the `--port` flag.

## Dev Playground

`mastra dev` provides a playground UI for interacting with your agents, workflows, and tools. The playground provides dedicated interfaces for testing each component of your Mastra application during development.

### Agent Playground

The agent playground provides an interactive chat interface for testing and debugging agents during development. Key features include:

- **Chat interface**: interact with your agents directly to test their responses and behavior.
- **Prompt CMS**: experiment with different system instructions for your agents:
  - A/B test different prompt versions.
  - Track performance metrics for each variant.
  - Select and deploy the most effective prompt version.
- **Agent traces**: view detailed execution traces to understand how your agent processes requests, including:
  - Prompt construction.
  - Tool usage.
  - Decision-making steps.
  - Response generation.
- **Agent evals**: if you've configured [agent evaluation metrics](/docs/evals/overview), you can:
  - Run evals directly from the playground.
  - View eval results and metrics.
  - Compare agent performance across different test cases.

### Workflow Playground

The workflow playground helps you visualize and test your workflow implementations:

- **Workflow visualization**: visualize the workflow graph.
- **Running workflows**:
  - Trigger test workflow runs with custom input data.
  - Debug workflow logic and conditions.
  - Simulate different execution paths.
  - View detailed execution logs for each step.
- **Workflow traces**: examine detailed execution traces showing:
  - Step-by-step workflow progression.
  - State transitions and data flow.
  - Tool invocations and their results.
  - Decision points and branching logic.
  - Error handling and recovery paths.

### Tool Playground

The tool playground lets you test your custom tools in isolation:

- Test individual tools without running a full agent or workflow.
- Enter test data and view the tool's response.
- Debug tool implementations and error handling.
- Validate tool input/output schemas.
- Monitor tool performance and execution time.

## REST API Endpoints

`mastra dev` also launches REST API routes for your agents and workflows via the local [Mastra Server](/docs/deployment/server). This lets you test your API endpoints before deploying. See the [Mastra Dev reference](/reference/cli/dev#routes) for details on all endpoints.

You can then leverage the [Mastra Client](/docs/deployment/client) SDK to interact seamlessly with the provided REST API routes.

## Local Development Architecture

*(Diagram: the local development architecture.)*

## Summary

`mastra dev` makes it easy to develop, debug, and iterate on your AI logic in a self-contained environment before deploying to production.

- [Mastra Dev reference](../../reference/cli/dev.mdx)

---
title: Deploying to Mastra Cloud
description: Mastraアプリケーション向けのGitHubベースのデプロイプロセス --- # Mastra Cloudへのデプロイ [JA] Source: https://mastra.ai/ja/docs/mastra-cloud/deploying このページでは、GitHubの統合を使用してMastraアプリケーションをMastra Cloudにデプロイするプロセスについて説明します。 ## 前提条件 - GitHubアカウント - Mastraアプリケーションを含むGitHubリポジトリ - Mastra Cloudへのアクセス権 ## デプロイメントプロセス Mastra Cloudは、VercelやNetlifyのようなプラットフォームと同様のGitベースのデプロイメントワークフローを使用しています: 1. **GitHubリポジトリのインポート** - プロジェクトダッシュボードから「Add new」をクリック - Mastraアプリケーションを含むリポジトリを選択 - 目的のリポジトリの横にある「Import」をクリック 2. **デプロイメント設定の構成** - プロジェクト名を設定(デフォルトはリポジトリ名) - デプロイするブランチを選択(通常は`main`) - Mastraディレクトリパスを設定(デフォルトは`src/mastra`) - 必要な環境変数(APIキーなど)を追加 3. **Gitからのデプロイ** - 初期設定後、選択したブランチへのプッシュによってデプロイメントがトリガーされます - Mastra Cloudは自動的にアプリケーションをビルドしてデプロイします - 各デプロイメントはエージェントとワークフローのアトミックなスナップショットを作成します ## 自動デプロイ Mastra Cloudはギット駆動のワークフローに従います: 1. ローカルでMastraアプリケーションに変更を加える 2. 変更を`main`ブランチにコミットする 3. GitHubにプッシュする 4. Mastra Cloudは自動的にプッシュを検出し、新しいデプロイメントを作成する 5. ビルドが完了すると、アプリケーションが本番環境で利用可能になる ## デプロイメントドメイン 各プロジェクトには2つのURLが提供されます: 1. **プロジェクト固有のドメイン**: `https://[project-name].mastra.cloud` - 例: `https://gray-acoustic-helicopter.mastra.cloud` 2. **デプロイメント固有のドメイン**: `https://[deployment-id].mastra.cloud` - 例: `https://young-loud-caravan-6156280f-ad56-4ec8-9701-6bb5271fd73d.mastra.cloud` これらのURLを通じて、デプロイされたエージェントとワークフローに直接アクセスできます。 ## デプロイメントの表示 ![デプロイメントリスト](/docs/cloud-agents.png) ダッシュボードのデプロイメントセクションには以下が表示されます: - **タイトル**:デプロイメント識別子(コミットハッシュに基づく) - **ステータス**:現在の状態(成功またはアーカイブ済み) - **ブランチ**:使用されたブランチ(通常は`main`) - **コミット**:Gitコミットハッシュ - **更新日時**:デプロイメントのタイムスタンプ 各デプロイメントは、特定の時点におけるMastraアプリケーションの原子的なスナップショットを表します。 ## エージェントとの対話 ![エージェントインターフェース](/docs/cloud-agent.png) デプロイ後、エージェントと対話する方法: 1. ダッシュボードでプロジェクトに移動する 2. エージェントセクションに進む 3. エージェントを選択して詳細とインターフェースを表示する 4. チャットタブを使用してエージェントとコミュニケーションを取る 5. 右側のパネルでエージェントの設定を確認する: - モデル情報(例:OpenAI) - 利用可能なツール(例:getWeather) - 完全なシステムプロンプト 6. 提案されたプロンプト(「どのような機能がありますか?」など)を使用するか、カスタムメッセージを入力する インターフェースにはエージェントのブランチ(通常は「main」)が表示され、会話メモリが有効かどうかも示されます。 ## ログのモニタリング ログセクションはアプリケーションに関する詳細情報を提供します: - **時間**: ログエントリが作成された時刻 - **レベル**: ログレベル(info、debug) - **ホスト名**: サーバー識別情報 - **メッセージ**: 詳細なログ情報、以下を含む: - API初期化 - ストレージ接続 - エージェントとワークフローのアクティビティ これらのログは、本番環境でのアプリケーションの動作をデバッグおよびモニタリングするのに役立ちます。 ## ワークフロー ![ワークフローインターフェース](/docs/cloud-workflows.png) ワークフローセクションでは、デプロイされたワークフローを表示および操作できます: 1. プロジェクト内のすべてのワークフローを表示 2. ワークフロー構造とステップを確認 3. 実行履歴とパフォーマンスデータにアクセス ## データベース使用量 Mastra Cloudはデータベース使用状況の指標を追跡します: - 読み取り回数 - 書き込み回数 - 使用ストレージ(MB) これらの指標はプロジェクト概要に表示され、リソース消費を監視するのに役立ちます。 ## デプロイメント設定 ダッシュボードからデプロイメントを設定します: 1. プロジェクト設定に移動します 2. 環境変数(`OPENAI_API_KEY`など)を設定します 3. プロジェクト固有の設定を構成します 設定の変更を反映させるには、新しいデプロイメントが必要です。 ## 次のステップ デプロイ後、オブザーバビリティツールを使用して[実行をトレースおよび監視](/docs/mastra-cloud/observability)します。 --- title: Mastra Cloudにおける可観測性 description: Mastra Cloudデプロイメントのためのモニタリングおよびデバッグツール --- # Mastra Cloudにおける可観測性 [JA] Source: https://mastra.ai/ja/docs/mastra-cloud/observability Mastra Cloudは監視とデバッグのために実行データを記録します。エージェントとワークフローからトレース、ログ、ランタイム情報を取得します。 ## エージェントインターフェース エージェントインターフェースは、タブを通じてアクセス可能な3つの主要なビューを提供します: 1. **チャット**:エージェントをテストするためのインタラクティブなメッセージングインターフェース 2. **トレース**:詳細な実行記録 3. 
**評価**:エージェントのパフォーマンス評価 ![チャットタブを表示したエージェントインターフェース](/docs/cloud-agent.png) ### チャットインターフェース チャットタブは以下を提供します: - デプロイされたエージェントとのインタラクティブなメッセージング - ユーザークエリに対するシステム応答 - 提案されるプロンプトボタン(例:「どのような機能がありますか?」) - メッセージ入力エリア - ブランチインジケーター(例:「main」) - エージェントのメモリ制限に関する注意事項 ### エージェント設定パネル 右側のサイドバーにはエージェントの詳細が表示されます: - エージェント名とデプロイメント識別子 - モデル情報(例:「OpenAI」) - エージェントが利用できるツール(例:「getWeather」) - 完全なシステムプロンプトテキスト このパネルは、ソースコードを確認することなく、エージェントがどのように設定されているかを可視化します。 ## トレースシステム Mastra Cloudはエージェントとワークフローの相互作用のトレースを記録します。 ### トレースエクスプローラーインターフェース ![エージェントトレースビュー](/docs/cloud-agent-traces.png) トレースエクスプローラーインターフェースは以下を表示します: - すべてのエージェントとワークフローの相互作用 - 特定のトレースの詳細 - 入力と出力データ - パラメータと結果を含むツールコール - ワークフロー実行パス - タイプ、ステータス、タイムスタンプ、エージェント/ワークフローによるフィルタリングオプション ### トレースデータ構造 各トレースには以下が含まれます: 1. **リクエストデータ**:エージェントまたはワークフローを開始したリクエスト 2. **ツールコール記録**:パラメータを含む実行中のツールコール 3. **ツールレスポンスデータ**:ツールコールからのレスポンス 4. **エージェントレスポンスデータ**:生成されたエージェントのレスポンス 5. **実行タイムスタンプ**:各実行ステップのタイミング情報 6. **モデルメタデータ**:モデル使用量とトークンに関する情報 トレースビューは実行全体を通じてすべてのAPIコールと結果を表示します。このデータはツールの使用法とエージェントのロジックフローのデバッグに役立ちます。 ### エージェント相互作用データ エージェント相互作用のトレースには以下が含まれます: - ユーザー入力テキスト - エージェント処理ステップ - ツールコール(例:天気APIコール) - 各ツールコールのパラメータと結果 - 最終的なエージェントレスポンステキスト ## ダッシュボード構造 Mastra Cloudダッシュボードには以下が含まれています: - プロジェクトのデプロイ履歴 - 環境変数の設定 - エージェント設定の詳細(モデル、システムプロンプト、ツール) - ワークフローステップの可視化 - デプロイURL - 最近のアクティビティログ ## エージェントのテスト チャットインターフェースを使用してエージェントをテストします: 1. エージェントセクションに移動します 2. テストしたいエージェントを選択します 3. チャットタブを使用してエージェントと対話します 4. メッセージを送信し、応答を確認します 5. 一般的なクエリには提案されたプロンプトを使用します 6. トレースタブに切り替えて実行の詳細を表示します デフォルトでは、エージェントはセッション間の会話履歴を記憶しないことに注意してください。インターフェースには「エージェントは以前のメッセージを記憶しません。エージェントのメモリを有効にするにはドキュメントを参照してください。」というメッセージが表示されます。 ## ワークフローモニタリング ![ワークフローインターフェース](/docs/cloud-workflow.png) ワークフローモニタリングでは以下を表示します: - ワークフローステップと接続の図 - 各ワークフローステップのステータス - 各ステップの実行詳細 - 実行トレースレコード - 複数ステップのプロセス実行(例:天気の検索後にアクティビティ計画を立てる) ### ワークフロー実行 ![ワークフロー実行詳細](/docs/cloud-workflow-run.png) 特定のワークフロー実行を調査する際、詳細なステップとその出力を確認できます。 ## ログ ![ログインターフェース](/docs/cloud-logs.png) ログセクションではアプリケーションに関する詳細情報を提供します: - **時間**: ログエントリが作成された時刻 - **レベル**: ログレベル(info、debug) - **ホスト名**: サーバー識別情報 - **メッセージ**: 詳細なログ情報(以下を含む): - API初期化 - ストレージ接続 - エージェントとワークフローのアクティビティ ## 技術的特徴 オブザーバビリティシステムには以下が含まれます: - **APIエンドポイント**:トレースデータへのプログラムによるアクセス - **構造化トレースフォーマット**:フィルタリングとクエリ操作のためのJSON形式 - **履歴データストレージ**:過去の実行記録の保持 - **デプロイメントバージョンリンク**:トレースとデプロイメントバージョン間の相関関係 ## デバッグパターン - エージェントの動作変更をテストする際にトレースデータを比較する - チャットインターフェースを使用してエッジケースの入力をテストする - システムプロンプトを確認してエージェントの動作を理解する - ツール呼び出しのパラメータと結果を調べる - ワークフロー実行のステップシーケンスを確認する - トレースのタイミングデータで実行のボトルネックを特定する - エージェントバージョン間のトレースの違いを比較する ## サポートリソース オブザーバビリティに関する技術的な支援については: - [トラブルシューティングドキュメント]()を確認する - ダッシュボードから技術サポートに連絡する - [Discord開発者チャンネル](https://discord.gg/mastra)に参加する --- title: Mastra Cloud description: Mastraアプリケーションのデプロイメントと監視サービス --- # Mastra Cloud [JA] Source: https://mastra.ai/ja/docs/mastra-cloud/overview Mastra Cloudは、Mastraアプリケーションを実行、管理、監視するデプロイメントサービスです。標準的なMastraプロジェクトと連携し、デプロイメント、スケーリング、運用タスクを処理します。 ## コア機能 - **アトミックデプロイメント** - エージェントとワークフローが単一のユニットとしてデプロイされる - **プロジェクト編成** - エージェントとワークフローをURLが割り当てられたプロジェクトにグループ化 - **環境変数** - 環境ごとに設定を安全に保存 - **テストコンソール** - Webインターフェースを通じてエージェントにメッセージを送信 - **実行トレース** - エージェントの対話とツール呼び出しを記録 - **ワークフロー可視化** - ワークフローのステップと実行パスを表示 - **ログ** - デバッグ用の標準ログ出力 - **プラットフォーム互換性** - Cloudflare、Vercel、Netlifyデプロイヤーと同じインフラストラクチャを使用 ## ダッシュボードコンポーネント Mastra Cloudダッシュボードには以下が含まれています: - **プロジェクトリスト** - アカウント内のすべてのプロジェクト - **プロジェクト詳細** - デプロイメント、環境変数、アクセスURL - **デプロイメント履歴** - タイムスタンプとステータスを含むデプロイメントの記録 - **エージェントインスペクター** - 
モデル、ツール、システムプロンプトを表示するエージェント設定ビュー - **テストコンソール** - エージェントにメッセージを送信するためのインターフェース - **トレースエクスプローラー** - ツール呼び出し、パラメータ、レスポンスの記録 - **ワークフロービューア** - ワークフローステップと接続の図 ## 技術的な実装 Mastra Cloudは、プラットフォーム固有のデプロイヤーと同じコアコードで動作し、以下の修正が加えられています: - **エッジネットワーク分散** - 地理的に分散された実行 - **動的リソース割り当て** - トラフィックに基づいてコンピューティングリソースを調整 - **Mastra専用ランタイム** - エージェント実行に最適化されたランタイム - **標準デプロイメントAPI** - 環境間で一貫したデプロイメントインターフェース - **トレーシングインフラストラクチャ** - すべてのエージェントとワークフロー実行ステップを記録 ## ユースケース 一般的な使用パターン: - インフラストラクチャを管理せずにアプリケーションをデプロイする - ステージング環境と本番環境を維持する - 多くのリクエストにわたるエージェントの動作を監視する - Webインターフェースを通じてエージェントの応答をテストする - 複数のリージョンにデプロイする ## セットアッププロセス 1. [Mastra Cloudプロジェクトを設定する](/docs/mastra-cloud/setting-up) 2. [コードをデプロイする](/docs/mastra-cloud/deploying) 3. [実行トレースを表示する](/docs/mastra-cloud/observability) --- title: プロジェクトのセットアップ description: Mastra Cloudプロジェクトの設定手順 --- # Mastra Cloudプロジェクトの設定 [JA] Source: https://mastra.ai/ja/docs/mastra-cloud/setting-up このページでは、GitHub連携を使用してMastra Cloudでプロジェクトを設定する手順について説明します。 ## 前提条件 - Mastra Cloudアカウント - GitHubアカウント - Mastraアプリケーションを含むGitHubリポジトリ ## プロジェクト作成プロセス 1. **Mastra Cloudにサインイン** - Mastra Cloudダッシュボード(https://cloud.mastra.ai)に移動します - アカウント認証情報でサインインします 2. **新しいプロジェクトを追加** - 「すべてのプロジェクト」ビューから、右上の「新規追加」ボタンをクリックします - これによりGitHubリポジトリのインポートダイアログが開きます ![Mastra Cloudプロジェクトダッシュボード](/docs/cloud-agents.png) 3. **Gitリポジトリをインポート** - リポジトリを検索するか、利用可能なGitHubリポジトリのリストから選択します - デプロイしたいリポジトリの横にある「インポート」ボタンをクリックします 4. **デプロイメント詳細の設定** デプロイメント設定ページには以下が含まれます: - **リポジトリ名**:GitHubリポジトリ名(読み取り専用) - **プロジェクト名**:プロジェクト名をカスタマイズ(デフォルトはリポジトリ名) - **ブランチ**:デプロイするブランチを選択(ドロップダウン、デフォルトは`main`) - **プロジェクトルート**:プロジェクトのルートディレクトリを設定(デフォルトは`/`) - **Mastraディレクトリ**:Mastraファイルの場所を指定(デフォルトは`src/mastra`) - **ビルドコマンド**:ビルドプロセス中に実行するオプションコマンド - **ストア設定**:データストレージオプションを設定 - **環境変数**:設定用のキーと値のペアを追加(例:APIキー) ## プロジェクト構造の要件 Mastra CloudはGitHubリポジトリを以下の要素についてスキャンします: - **エージェント**:モデルとツールを備えたエージェント定義(例:天気エージェント) - **ワークフロー**:ワークフローステップ定義(例:weather-workflow) - **環境変数**:必要なAPIキーと設定変数 リポジトリは、適切な検出とデプロイメントのために標準的なMastraプロジェクト構造を含んでいる必要があります。 ## ダッシュボードの理解 プロジェクトを作成すると、ダッシュボードには以下が表示されます: ### プロジェクト概要 - **作成日**: プロジェクトが作成された日時 - **ドメイン**: デプロイされたアプリケーションにアクセスするためのURL - 形式: `https://[project-name].mastra.cloud` - 形式: `https://[random-id].mastra.cloud` - **ステータス**: 現在のデプロイメントステータス(成功またはアーカイブ) - **ブランチ**: デプロイされたブランチ(通常は`main`) - **環境変数**: 設定されたAPIキーと設定 - **ワークフロー**: 検出されたワークフローとステップ数のリスト - **エージェント**: 検出されたエージェントとそのモデルおよびツールのリスト - **データベース使用状況**: 読み取り、書き込み、ストレージの統計 ### デプロイメントセクション - すべてのデプロイメントのリスト: - デプロイメントID(コミットハッシュに基づく) - ステータス(成功/アーカイブ) - ブランチ - コミットハッシュ - タイムスタンプ ### ログセクション ログビューには以下が表示されます: - 各ログエントリのタイムスタンプ - ログレベル(info、debug) - ホスト名 - 詳細なログメッセージ(以下を含む): - API起動情報 - ストレージ初期化 - エージェントとワークフローのアクティビティ ## ナビゲーション サイドバーから以下にアクセスできます: - **概要**:プロジェクトの概要と統計 - **デプロイメント**:デプロイ履歴と詳細 - **ログ**:デバッグ用のアプリケーションログ - **エージェント**:すべてのエージェントのリストと設定 - **ワークフロー**:すべてのワークフローのリストと構造 - **設定**:プロジェクト構成オプション ## 環境変数の設定 ダッシュボードを通じて環境変数を設定します: 1. ダッシュボードでプロジェクトに移動します 2. 「環境変数」セクションに進みます 3. 変数を追加または編集します(例:`OPENAI_API_KEY`) 4. 設定を保存します 環境変数は暗号化され、デプロイメントと実行中にアプリケーションで利用可能になります。 ## デプロイメントのテスト デプロイメント後、以下の方法でエージェントとワークフローをテストできます: 1. プロジェクトに割り当てられたカスタムドメイン:`https://[project-name].mastra.cloud` 2. 
エージェントと直接対話するためのダッシュボードインターフェース ## 次のステップ プロジェクトを設定した後、GitHubリポジトリの`main`ブランチにプッシュするたびに自動デプロイが行われます。詳細については、[デプロイメントのドキュメント](/docs/mastra-cloud/deploying)を参照してください。 # メモリプロセッサ [JA] Source: https://mastra.ai/ja/docs/memory/memory-processors メモリプロセッサを使用すると、メモリから取得されたメッセージのリストを、エージェントのコンテキストウィンドウに追加されLLMに送信される_前に_変更することができます。これはコンテキストサイズの管理、コンテンツのフィルタリング、パフォーマンスの最適化に役立ちます。 プロセッサは、メモリ設定(例:`lastMessages`、`semanticRecall`)に基づいて取得されたメッセージに対して動作します。新しく入ってくるユーザーメッセージには**影響しません**。 ## 組み込みプロセッサー Mastraには組み込みプロセッサーが提供されています: ### `TokenLimiter` このプロセッサーは、LLMのコンテキストウィンドウ制限を超えることによって引き起こされるエラーを防ぐために使用されます。取得されたメモリメッセージのトークン数をカウントし、合計数が指定された`limit`を下回るまで最も古いメッセージを削除します。 ```typescript copy showLineNumbers {9-12} import { Memory } from "@mastra/memory"; import { TokenLimiter } from "@mastra/memory/processors"; import { Agent } from "@mastra/core/agent"; import { openai } from "@ai-sdk/openai"; const agent = new Agent({ model: openai("gpt-4o"), memory: new Memory({ processors: [ // Ensure the total tokens from memory don't exceed ~127k new TokenLimiter(127000), ], }), }); ``` `TokenLimiter`はデフォルトで`o200k_base`エンコーディングを使用します(GPT-4oに適しています)。異なるモデルに必要な場合は、他のエンコーディングを指定することができます: ```typescript copy showLineNumbers {6-9} // Import the encoding you need (e.g., for older OpenAI models) import cl100k_base from "js-tiktoken/ranks/cl100k_base"; const memoryForOlderModel = new Memory({ processors: [ new TokenLimiter({ limit: 16000, // Example limit for a 16k context model encoding: cl100k_base, }), ], }); ``` エンコーディングについての詳細は、[OpenAI cookbook](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken#encodings)または[`js-tiktoken` リポジトリ](https://github.com/dqbd/tiktoken)を参照してください。 ### `ToolCallFilter` このプロセッサーは、LLMに送信されるメモリメッセージからツールコールを削除します。コンテキストから潜在的に冗長なツールの対話を除外することでトークンを節約します。これは、将来の対話のために詳細が必要ない場合に役立ちます。また、エージェントが常に特定のツールを再度呼び出し、メモリ内の以前のツール結果に依存しないようにしたい場合にも役立ちます。 ```typescript copy showLineNumbers {5-14} import { Memory } from "@mastra/memory"; import { ToolCallFilter, TokenLimiter } from "@mastra/memory/processors"; const memoryFilteringTools = new Memory({ processors: [ // Example 1: Remove all tool calls/results new ToolCallFilter(), // Example 2: Remove only noisy image generation tool calls/results new ToolCallFilter({ exclude: ["generateImageTool"] }), // Always place TokenLimiter last new TokenLimiter(127000), ], }); ``` ## 複数のプロセッサの適用 複数のプロセッサをチェーンすることができます。これらは`processors`配列に表示される順序で実行されます。一つのプロセッサの出力が次のプロセッサの入力となります。 **順序が重要です!** 一般的なベストプラクティスとして、`TokenLimiter`をチェーンの**最後**に配置することをお勧めします。これにより、他のフィルタリングが行われた後の最終的なメッセージセットに対して動作し、最も正確なトークン制限の適用が可能になります。 ```typescript copy showLineNumbers {7-14} import { Memory } from "@mastra/memory"; import { ToolCallFilter, TokenLimiter } from "@mastra/memory/processors"; // Assume a hypothetical 'PIIFilter' custom processor exists // import { PIIFilter } from './custom-processors'; const memoryWithMultipleProcessors = new Memory({ processors: [ // 1. Filter specific tool calls first new ToolCallFilter({ exclude: ["verboseDebugTool"] }), // 2. Apply custom filtering (e.g., remove hypothetical PII - use with caution) // new PIIFilter(), // 3. 
Apply token limiting as the final step new TokenLimiter(127000), ], }); ``` ## カスタムプロセッサの作成 基本の `MemoryProcessor` クラスを拡張することで、カスタムロジックを作成できます。 ```typescript copy showLineNumbers {4-19,23-26} import { Memory, CoreMessage } from "@mastra/memory"; import { MemoryProcessor, MemoryProcessorOpts } from "@mastra/core/memory"; class ConversationOnlyFilter extends MemoryProcessor { constructor() { // Provide a name for easier debugging if needed super({ name: "ConversationOnlyFilter" }); } process( messages: CoreMessage[], _opts: MemoryProcessorOpts = {}, // Options passed during memory retrieval, rarely needed here ): CoreMessage[] { // Filter messages based on role return messages.filter( (msg) => msg.role === "user" || msg.role === "assistant", ); } } // Use the custom processor const memoryWithCustomFilter = new Memory({ processors: [ new ConversationOnlyFilter(), new TokenLimiter(127000), // Still apply token limiting ], }); ``` カスタムプロセッサを作成する際は、入力の `messages` 配列やそのオブジェクトを直接変更しないようにしてください。 # メモリの概要 [JA] Source: https://mastra.ai/ja/docs/memory/overview メモリは、エージェントが利用可能なコンテキストを管理する方法であり、すべてのチャットメッセージをコンテキストウィンドウに凝縮したものです。 ## コンテキストウィンドウ コンテキストウィンドウは、言語モデルが任意の時点で見ることができる情報の総量です。 Mastraでは、コンテキストは3つの部分に分かれています:システム指示とユーザーに関する情報([ワーキングメモリ](./working-memory.mdx))、最近のメッセージ([メッセージ履歴](#conversation-history))、そしてユーザーのクエリに関連する古いメッセージ([セマンティック検索](./semantic-recall.mdx))です。 さらに、コンテキストが長すぎる場合にコンテキストをトリミングしたり情報を削除したりするための[メモリプロセッサ](./memory-processors.mdx)を提供しています。 ## クイックスタート メモリを動作させる最速の方法は、組み込みの開発プレイグラウンドを使用することです。 まだ行っていない場合は、メインの[スタートガイド](/docs/getting-started/installation)に従って新しいMastraプロジェクトを作成してください。 **1. メモリパッケージをインストールします:** ```bash npm2yarn copy npm install @mastra/memory ``` **2. エージェントを作成し、`Memory`インスタンスを接続します:** ```typescript filename="src/mastra/agents/index.ts" {10} import { Agent } from "@mastra/core/agent"; import { Memory } from "@mastra/memory"; import { openai } from "@ai-sdk/openai"; export const myMemoryAgent = new Agent({ name: "MemoryAgent", instructions: "...", model: openai("gpt-4o"), memory: new Memory(), }); ``` **3. 開発サーバーを起動します:** ```bash npm2yarn copy npm run dev ``` **4. プレイグラウンド(http://localhost:4111)を開き、`MemoryAgent`を選択します:** いくつかのメッセージを送信して、会話の中で情報を記憶していることを確認してください: ``` ➡️ あなた: 私の好きな色は青です。 ⬅️ エージェント: わかりました!あなたの好きな色が青であることを覚えておきます。 ➡️ あなた: 私の好きな色は何ですか? ⬅️ エージェント: あなたの好きな色は青です。 ``` ## メモリースレッド Mastraはメモリーをスレッドに整理します。スレッドは特定の会話履歴を識別する記録であり、次の2つの識別子を使用します: 1. **`threadId`**: 特定の会話ID(例:`support_123`)。 2. 
**`resourceId`**: 各スレッドを所有するユーザーまたはエンティティID(例:`user_123`、`org_456`)。 ```typescript {2,3} const response = await myMemoryAgent.stream("Hello, my name is Alice.", { resourceId: "user_alice", threadId: "conversation_123", }); ``` **重要:** これらのIDがなければ、メモリーが適切に設定されていても、エージェントはメモリーを使用しません。プレイグラウンドではこれが自動的に処理されますが、アプリケーションでメモリーを使用する場合は自分でIDを追加する必要があります。 ## 会話履歴 デフォルトでは、`Memory`インスタンスは現在のMemoryスレッドからの[最新40メッセージ](../../reference/memory/Memory.mdx)を各新規リクエストに含めます。これにより、エージェントに即時の会話コンテキストが提供されます。 ```ts {3} const memory = new Memory({ options: { lastMessages: 10, }, }); ``` **重要:** 各エージェント呼び出しでは、最新のユーザーメッセージのみを送信してください。Mastraは必要な履歴の取得と注入を処理します。履歴全体を自分で送信すると重複が発生します。`useChat`フロントエンドフックを使用する場合の処理方法については、[AI SDK Memoryの例](../../examples/memory/use-chat.mdx)を参照してください。 ### ストレージ設定 会話履歴はメッセージを保存するために[ストレージアダプター](/reference/memory/Memory#parameters)に依存しています。 ```ts {7-12} import { Memory } from "@mastra/memory"; import { Agent } from "@mastra/core/agent"; import { LibSQLStore } from "@mastra/core/storage/libsql"; const agent = new Agent({ memory: new Memory({ // これは省略した場合のデフォルトストレージDBです storage: new LibSQLStore({ config: { url: "file:local.db", }, }), }), }); ``` **ストレージコードの例**: - [LibSQL](/examples/memory/memory-with-libsql) - [Postgres](/examples/memory/memory-with-pg) - [Upstash](/examples/memory/memory-with-upstash) ## 次のステップ コアコンセプトを理解したところで、[セマンティック検索](./semantic-recall.mdx)に進んで、MastraエージェントにRAGメモリを追加する方法を学びましょう。 あるいは、利用可能なオプションについては[設定リファレンス](../../reference/memory/Memory.mdx)を参照するか、[使用例](../../examples/memory/use-chat.mdx)をブラウズすることもできます。 # セマンティック リコール [JA] Source: https://mastra.ai/ja/docs/memory/semantic-recall 友人に先週末何をしたか尋ねると、彼らは「先週末」に関連する出来事を記憶の中から検索し、それから何をしたかを教えてくれます。それはある意味、Mastraでのセマンティック リコールの仕組みに似ています。 ## セマンティックリコールの仕組み セマンティックリコールはRAGベースの検索であり、メッセージが[最近の会話履歴](./overview.mdx#conversation-history)に含まれなくなった場合でも、エージェントが長期間の対話にわたってコンテキストを維持するのに役立ちます。 メッセージのベクトル埋め込みを使用して類似性検索を行い、さまざまなベクトルストアと統合し、取得されたメッセージの周囲のコンテキストウィンドウを設定可能です。
Mastra Memoryのセマンティックリコールを示す図 有効にすると、新しいメッセージを使用してベクトルDBに意味的に類似したメッセージを問い合わせます。 LLMからの応答を受け取った後、すべての新しいメッセージ(ユーザー、アシスタント、ツールコール/結果)がベクトルDBに挿入され、後の対話で呼び出されるようになります。 ## クイックスタート セマンティック・リコールはデフォルトで有効になっているため、エージェントにメモリを与えると、それが含まれます: ```typescript {9} import { Agent } from "@mastra/core/agent"; import { Memory } from "@mastra/memory"; import { openai } from "@ai-sdk/openai"; const agent = new Agent({ name: "SupportAgent", instructions: "You are a helpful support agent.", model: openai("gpt-4o"), memory: new Memory(), }); ``` ## リコール設定 セマンティックリコールの動作を制御する主な2つのパラメータは: 1. **topK**: 意味的に類似したメッセージを取得する数 2. **messageRange**: 各一致に含める周囲のコンテキストの量 ```typescript {5-6} const agent = new Agent({ memory: new Memory({ options: { semanticRecall: { topK: 3, // 最も類似した3つのメッセージを取得 messageRange: 2, // 各一致の前後2つのメッセージを含める }, }, }), }); ``` ### ストレージ設定 セマンティックリコールは、メッセージとそれらの埋め込みを保存するために[ストレージとベクトルDB](/reference/memory/Memory#parameters)に依存しています。 ```ts {8-17} import { Memory } from "@mastra/memory"; import { Agent } from "@mastra/core/agent"; import { LibSQLStore } from "@mastra/core/storage/libsql"; import { LibSQLVector } from "@mastra/core/vector/libsql"; const agent = new Agent({ memory: new Memory({ // これは省略した場合のデフォルトストレージDBです storage: new LibSQLStore({ config: { url: "file:local.db", }, }), // これは省略した場合のデフォルトベクトルDBです vector: new LibSQLVector({ connectionUrl: "file:local.db", }), }), }); ``` **ストレージ/ベクトルのコード例**: - [LibSQL](/examples/memory/memory-with-libsql) - [Postgres](/examples/memory/memory-with-pg) - [Upstash](/examples/memory/memory-with-upstash) ### エンベッダー設定 セマンティックリコールは、メッセージを埋め込みに変換するための[埋め込みモデル](/reference/memory/Memory#embedder)に依存しています。デフォルトでMastraはFastEmbedを使用しますが、別の[埋め込みモデル](https://sdk.vercel.ai/docs/ai-sdk-core/embeddings)を指定することもできます。 ```ts {7} import { Memory } from "@mastra/memory"; import { Agent } from "@mastra/core/agent"; import { openai } from "@ai-sdk/openai"; const agent = new Agent({ memory: new Memory({ embedder: openai.embedding("text-embedding-3-small"), }), }); ``` ### 無効化 セマンティックリコールを使用するとパフォーマンスへの影響があります。新しいメッセージは埋め込みに変換され、新しいメッセージがLLMに送信される前にベクトルデータベースへのクエリに使用されます。 セマンティックリコールはデフォルトで有効になっていますが、必要ない場合は無効にすることができます: ```typescript {4} const agent = new Agent({ memory: new Memory({ options: { semanticRecall: false, }, }), }); ``` 以下のようなシナリオでセマンティックリコールを無効にすることをお勧めします: - [会話履歴](./getting-started.mdx#conversation-history-last-messages)が現在の会話に十分なコンテキストを提供している場合。 - リアルタイムの双方向音声のような、埋め込みの作成とベクトルクエリの実行による追加の遅延が目立つパフォーマンスに敏感なアプリケーションの場合。 import YouTube from "@/components/youtube"; # ワーキングメモリ [JA] Source: https://mastra.ai/ja/docs/memory/working-memory [会話履歴](/docs/memory/overview#conversation-history)や[セマンティック検索](./semantic-recall.mdx)がエージェントが会話を記憶するのに役立つ一方、ワーキングメモリはエージェントがスレッド内の対話全体でユーザーに関する永続的な情報を維持することを可能にします。 これはエージェントのアクティブな思考やメモ帳のようなものです - ユーザーやタスクについて利用可能な状態に保つ重要な情報です。これは、人が会話中に自然に相手の名前、好み、または重要な詳細を覚えておくのと似ています。 これは、常に関連性があり、エージェントが常に利用できるべき継続的な状態を維持するのに役立ちます。 ## クイックスタート 以下は、ワーキングメモリを使用したエージェントの設定の最小限の例です: ```typescript {12-15} import { Agent } from "@mastra/core/agent"; import { Memory } from "@mastra/memory"; import { openai } from "@ai-sdk/openai"; // Create agent with working memory enabled const agent = new Agent({ name: "PersonalAssistant", instructions: "You are a helpful personal assistant.", model: openai("gpt-4o"), memory: new Memory({ options: { workingMemory: { enabled: true, use: "tool-call", // Recommended setting }, }, }), }); ``` ## 仕組み ワーキングメモリは、エージェントが継続的に関連する情報を保存するために時間の経過とともに更新できるMarkdownテキストのブロックです: ## カスタムテンプレート 
テンプレートは、エージェントがワーキングメモリで追跡・更新する情報を指示します。テンプレートが提供されない場合はデフォルトテンプレートが使用されますが、通常はエージェントの特定のユースケースに合わせたカスタムテンプレートを定義して、最も関連性の高い情報を記憶させることが望ましいでしょう。 以下はカスタムテンプレートの例です。この例では、ユーザーが情報を含むメッセージを送信するとすぐに、エージェントはユーザーの名前、場所、タイムゾーンなどを保存します: ```typescript {5-28} const memory = new Memory({ options: { workingMemory: { enabled: true, template: ` # User Profile ## Personal Info - Name: - Location: - Timezone: ## Preferences - Communication Style: [e.g., Formal, Casual] - Project Goal: - Key Deadlines: - [Deadline 1]: [Date] - [Deadline 2]: [Date] ## Session State - Last Task Discussed: - Open Questions: - [Question 1] - [Question 2] `, }, }, }); ``` エージェントが期待通りにワーキングメモリを更新していない場合は、エージェントの`instruction`設定にこのテンプレートを_どのように_、_いつ_使用するかについてのシステム指示を追加することができます。 ## 例 - [ストリーミングワーキングメモリ](/examples/memory/streaming-working-memory) - [ワーキングメモリテンプレートの使用](/examples/memory/streaming-working-memory-advanced) --- title: "ログ | Mastra オブザーバビリティ ドキュメント" description: Mastra における効果的なログ記録に関するドキュメントで、アプリケーションの動作を理解し、AI の精度を向上させるために重要です。 --- import Image from "next/image"; # ロギング [JA] Source: https://mastra.ai/ja/docs/observability/logging Mastraでは、ログは特定の関数がいつ実行されるか、どのような入力データを受け取るか、そしてどのように応答するかを詳述することができます。 ## 基本設定 こちらは、`INFO` レベルで **コンソールロガー** を設定する最小限の例です。これにより、情報メッセージおよびそれ以上(つまり、`DEBUG`、`INFO`、`WARN`、`ERROR`)がコンソールに出力されます。 ```typescript filename="mastra.config.ts" showLineNumbers copy import { Mastra } from "@mastra/core"; import { createLogger } from "@mastra/core/logger"; export const mastra = new Mastra({ // Other Mastra configuration... logger: createLogger({ name: "Mastra", level: "info", }), }); ``` この設定では: - `name: "Mastra"` はログをグループ化するための名前を指定します。 - `level: "info"` は記録するログの最小重大度を設定します。 ## 設定 - `createLogger()` に渡すことができるオプションの詳細については、[createLogger リファレンスドキュメント](/reference/observability/create-logger.mdx)を参照してください。 - `Logger` インスタンスを取得したら、そのメソッド(例:`.info()`、`.warn()`、`.error()`)を[Logger インスタンスリファレンスドキュメント](/reference/observability/logger.mdx)で呼び出すことができます。 - ログを外部サービスに送信して集中管理、分析、または保存を行いたい場合は、Upstash Redis などの他のロガータイプを設定できます。`UPSTASH` ロガータイプを使用する際の `url`、`token`、`key` などのパラメータの詳細については、[createLogger リファレンスドキュメント](/reference/observability/create-logger.mdx)を参照してください。 --- title: "Next.js トレーシング | Mastra オブザーバビリティ ドキュメント" description: "Next.js アプリケーションのための OpenTelemetry トレーシングの設定" --- # Next.js トレーシング [JA] Source: https://mastra.ai/ja/docs/observability/nextjs-tracing Next.js では、OpenTelemetry トレーシングを有効にするために追加の設定が必要です。 ### ステップ 1: Next.js 設定 Next.js の設定でインストゥルメンテーションフックを有効にします: ```ts filename="next.config.ts" showLineNumbers copy import type { NextConfig } from "next"; const nextConfig: NextConfig = { experimental: { instrumentationHook: true // Next.js 15+ では不要 } }; export default nextConfig; ``` ### ステップ 2: Mastra 設定 Mastra インスタンスを設定します: ```typescript filename="mastra.config.ts" copy import { Mastra } from "@mastra/core"; export const mastra = new Mastra({ // ... 他の設定 telemetry: { serviceName: "your-project-name", enabled: true } }); ``` ### ステップ 3: プロバイダーの設定 Next.js を使用している場合、OpenTelemetry インストゥルメンテーションを設定するための2つのオプションがあります: #### オプション 1: カスタムエクスポーターの使用 プロバイダー全体で機能するデフォルトは、カスタムエクスポーターを設定することです: 1. 必要な依存関係をインストールします(Langfuse を使用した例): ```bash copy npm install @opentelemetry/api langfuse-vercel ``` 2. インストゥルメンテーションファイルを作成します: ```ts filename="instrumentation.ts" copy import { NodeSDK, ATTR_SERVICE_NAME, Resource, } from '@mastra/core/telemetry/otel-vendor'; import { LangfuseExporter } from 'langfuse-vercel'; export function register() { const exporter = new LangfuseExporter({ // ... 
Langfuse 設定 }) const sdk = new NodeSDK({ resource: new Resource({ [ATTR_SERVICE_NAME]: 'ai', }), traceExporter: exporter, }); sdk.start(); } ``` #### オプション 2: Vercel の Otel セットアップの使用 Vercel にデプロイする場合、彼らの OpenTelemetry セットアップを使用できます: 1. 必要な依存関係をインストールします: ```bash copy npm install @opentelemetry/api @vercel/otel ``` 2. プロジェクトのルート(または src フォルダを使用している場合はその中)にインストゥルメンテーションファイルを作成します: ```ts filename="instrumentation.ts" copy import { registerOTel } from '@vercel/otel' export function register() { registerOTel({ serviceName: 'your-project-name' }) } ``` ### まとめ このセットアップにより、Next.js アプリケーションと Mastra 操作のための OpenTelemetry トレーシングが有効になります。 詳細については、以下のドキュメントを参照してください: - [Next.js インストゥルメンテーション](https://nextjs.org/docs/app/building-your-application/optimizing/instrumentation) - [Vercel OpenTelemetry](https://vercel.com/docs/observability/otel-overview/quickstart) --- title: "トレーシング | Mastra オブザーバビリティ ドキュメント" description: "Mastra アプリケーションのための OpenTelemetry トレーシングの設定" --- import Image from "next/image"; # トレーシング [JA] Source: https://mastra.ai/ja/docs/observability/tracing Mastraは、アプリケーションのトレーシングとモニタリングのためにOpenTelemetry Protocol (OTLP) をサポートしています。テレメトリーが有効化されると、Mastraはエージェント操作、LLMインタラクション、ツール実行、統合呼び出し、ワークフロー実行、データベース操作を含むすべてのコアプリミティブを自動的にトレースします。その後、テレメトリーデータを任意のOTELコレクターにエクスポートできます。 ### 基本設定 テレメトリーを有効にする簡単な例を示します: ```ts filename="mastra.config.ts" showLineNumbers copy export const mastra = new Mastra({ // ... 他の設定 telemetry: { serviceName: "my-app", enabled: true, sampling: { type: "always_on", }, export: { type: "otlp", endpoint: "http://localhost:4318", // SigNozのローカルエンドポイント }, }, }); ``` ### 設定オプション テレメトリー設定は以下のプロパティを受け入れます: ```ts type OtelConfig = { // トレースでサービスを識別するための名前(オプション) serviceName?: string; // テレメトリーの有効/無効化(デフォルトはtrue) enabled?: boolean; // サンプリングされるトレースの数を制御 sampling?: { type: "ratio" | "always_on" | "always_off" | "parent_based"; probability?: number; // 比率サンプリング用 root?: { probability: number; // 親ベースのサンプリング用 }; }; // テレメトリーデータの送信先 export?: { type: "otlp" | "console"; endpoint?: string; headers?: Record; }; }; ``` 詳細は [OtelConfig リファレンスドキュメント](../../reference/observability/otel-config.mdx) を参照してください。 ### 環境変数 OTLPエンドポイントとヘッダーは環境変数を通じて設定できます: ```env filename=".env" copy OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318 OTEL_EXPORTER_OTLP_HEADERS=x-api-key=your-api-key ``` その後、設定で: ```ts filename="mastra.config.ts" showLineNumbers copy export const mastra = new Mastra({ // ... 他の設定 telemetry: { serviceName: "my-app", enabled: true, export: { type: "otlp", // エンドポイントとヘッダーは環境変数から取得されます }, }, }); ``` ### 例: SigNoz統合 [SigNoz](https://signoz.io) でのトレースされたエージェントインタラクションの例: スパン、LLM呼び出し、ツール実行を示すエージェントインタラクショントレース ### その他のサポートされているプロバイダー サポートされている観測可能性プロバイダーとその設定の詳細については、[Observability Providers リファレンス](../../reference/observability/providers/) を参照してください。 ### Next.js特有のトレーシング手順 Next.jsを使用している場合、追加の設定手順が3つあります: 1. `next.config.ts`でインストゥルメンテーションフックを有効にする 2. Mastraのテレメトリー設定を構成する 3. 
OpenTelemetryエクスポーターを設定する 実装の詳細については、[Next.js Tracing](./nextjs-tracing) ガイドを参照してください。 --- title: ドキュメントのチャンク化と埋め込み | RAG | Mastra ドキュメント description: 効率的な処理と取得のためのMastraにおけるドキュメントのチャンク化と埋め込みに関するガイド。 --- ## ドキュメントのチャンク化と埋め込み [JA] Source: https://mastra.ai/ja/docs/rag/chunking-and-embedding 処理の前に、コンテンツからMDocumentインスタンスを作成します。さまざまな形式から初期化できます: ```ts showLineNumbers copy const docFromText = MDocument.fromText("Your plain text content..."); const docFromHTML = MDocument.fromHTML("Your HTML content..."); const docFromMarkdown = MDocument.fromMarkdown("# Your Markdown content..."); const docFromJSON = MDocument.fromJSON(`{ "key": "value" }`); ``` ## ステップ 1: ドキュメント処理 `chunk` を使用して、ドキュメントを管理しやすい部分に分割します。Mastra は、異なるドキュメントタイプに最適化された複数のチャンク戦略をサポートしています: - `recursive`: コンテンツ構造に基づくスマートな分割 - `character`: 単純な文字ベースの分割 - `token`: トークン認識の分割 - `markdown`: Markdown 認識の分割 - `html`: HTML 構造認識の分割 - `json`: JSON 構造認識の分割 - `latex`: LaTeX 構造認識の分割 以下は、`recursive` 戦略を使用する例です: ```ts showLineNumbers copy const chunks = await doc.chunk({ strategy: "recursive", size: 512, overlap: 50, separator: "\n", extract: { metadata: true, // オプションでメタデータを抽出 }, }); ``` **注意:** メタデータの抽出には LLM コールが使用される場合があるため、API キーが設定されていることを確認してください。 チャンク戦略については、[チャンクドキュメント](/reference/rag/chunk.mdx)で詳しく説明しています。 ## ステップ 2: 埋め込み生成 お好みのプロバイダーを使用してチャンクを埋め込みに変換します。Mastraは、OpenAIやCohereを含む多くの埋め込みプロバイダーをサポートしています。 ### OpenAIを使用する ```ts showLineNumbers copy import { openai } from "@ai-sdk/openai"; import { embedMany } from "ai"; const { embeddings } = await embedMany({ model: openai.embedding('text-embedding-3-small'), values: chunks.map(chunk => chunk.text), }); ``` ### Cohereを使用する ```ts showLineNumbers copy import { cohere } from '@ai-sdk/cohere'; import { embedMany } from 'ai'; const { embeddings } = await embedMany({ model: cohere.embedding('embed-english-v3.0'), values: chunks.map(chunk => chunk.text), }); ``` 埋め込み関数はベクトルを返します。これは、テキストの意味を表す数値の配列であり、ベクトルデータベースでの類似性検索に準備が整っています。 ### 埋め込み次元の設定 埋め込みモデルは通常、固定された次元数(例: OpenAIの`text-embedding-3-small`では1536)のベクトルを出力します。 一部のモデルはこの次元数を減らすことをサポートしており、以下の利点があります: - ベクトルデータベースでのストレージ要件を減少させる - 類似性検索の計算コストを削減する サポートされているモデルの例を以下に示します: OpenAI (text-embedding-3モデル): ```ts const { embeddings } = await embedMany({ model: openai.embedding('text-embedding-3-small', { dimensions: 256 // text-embedding-3以降でのみサポート }), values: chunks.map(chunk => chunk.text), }); ``` Google (text-embedding-004): ```ts const { embeddings } = await embedMany({ model: google.textEmbeddingModel('text-embedding-004', { outputDimensionality: 256 // 末尾から過剰な値を切り捨て }), values: chunks.map(chunk => chunk.text), }); ``` ## 例: 完全なパイプライン こちらは、両方のプロバイダーを使用したドキュメント処理と埋め込み生成の例です: ```ts showLineNumbers copy import { embedMany } from "ai"; import { openai } from "@ai-sdk/openai"; import { cohere } from "@ai-sdk/cohere"; import { MDocument } from "@mastra/rag"; // Initialize document const doc = MDocument.fromText(` Climate change poses significant challenges to global agriculture. Rising temperatures and changing precipitation patterns affect crop yields. 
`); // Create chunks const chunks = await doc.chunk({ strategy: "recursive", size: 256, overlap: 50, }); // Generate embeddings with OpenAI const { embeddings: openAIEmbeddings } = await embedMany({ model: openai.embedding('text-embedding-3-small'), values: chunks.map(chunk => chunk.text), }); // OR // Generate embeddings with Cohere const { embeddings: cohereEmbeddings } = await embedMany({ model: cohere.embedding('embed-english-v3.0'), values: chunks.map(chunk => chunk.text), }); // Store embeddings in your vector database await vectorStore.upsert({ indexName: "embeddings", vectors: embeddings, }); ``` ## さまざまなチャンク戦略と埋め込み構成の例については、以下を参照してください: - [チャンクサイズの調整](/reference/rag/chunk.mdx#adjust-chunk-size) - [チャンク区切りの調整](/reference/rag/chunk.mdx#adjust-chunk-delimiters) - [Cohereを使用したテキストの埋め込み](/reference/rag/embeddings.mdx#using-cohere) --- title: MastraにおけるRAG(検索強化生成) | Mastra ドキュメント description: Mastraにおける検索強化生成(RAG)の概要と、関連するコンテキストでLLMの出力を強化するための機能を詳述します。 --- # MastraにおけるRAG(Retrieval-Augmented Generation) [JA] Source: https://mastra.ai/ja/docs/rag/overview MastraのRAGは、独自のデータソースから関連するコンテキストを取り入れることで、LLMの出力を強化し、正確性を向上させ、応答を実際の情報に基づかせるのに役立ちます。 MastraのRAGシステムは以下を提供します: - 文書を処理し埋め込むための標準化されたAPI - 複数のベクトルストアのサポート - 最適な検索のためのチャンク化と埋め込み戦略 - 埋め込みと検索のパフォーマンスを追跡するための可観測性 ## 例 RAGを実装するには、ドキュメントをチャンクに分割し、埋め込みを作成し、それらをベクターデータベースに保存し、クエリ時に関連するコンテキストを取得します。 ```ts showLineNumbers copy import { embedMany } from "ai"; import { openai } from "@ai-sdk/openai"; import { PgVector } from "@mastra/pg"; import { MDocument } from "@mastra/rag"; import { z } from "zod"; // 1. Initialize document const doc = MDocument.fromText(`Your document text here...`); // 2. Create chunks const chunks = await doc.chunk({ strategy: "recursive", size: 512, overlap: 50, }); // 3. Generate embeddings; we need to pass the text of each chunk const { embeddings } = await embedMany({ values: chunks.map(chunk => chunk.text), model: openai.embedding("text-embedding-3-small"), }); // 4. Store in vector database const pgVector = new PgVector(process.env.POSTGRES_CONNECTION_STRING); await pgVector.upsert({ indexName: "embeddings", vectors: embeddings, }); // using an index name of 'embeddings' // 5. Query similar chunks const results = await pgVector.query({ indexName: "embeddings", queryVector: queryVector, topK: 3, }); // queryVector is the embedding of the query console.log("Similar chunks:", results); ``` この例は基本を示しています:ドキュメントを初期化し、チャンクを作成し、埋め込みを生成し、それらを保存し、類似のコンテンツをクエリします。 ## ドキュメント処理 RAGの基本的な構成要素はドキュメント処理です。ドキュメントは様々な戦略(再帰的、スライディングウィンドウなど)を使用して分割され、メタデータで強化されることができます。[チャンクと埋め込みのドキュメント](./chunking-and-embedding.mdx)を参照してください。 ## ベクターストレージ Mastraは、埋め込みの永続性と類似性検索のために、pgvector、Pinecone、Qdrantを含む複数のベクターストアをサポートしています。[ベクターデータベースのドキュメント](./vector-databases.mdx)を参照してください。 ## 可観測性とデバッグ MastraのRAGシステムには、取得パイプラインを最適化するための可観測性機能が含まれています: - 埋め込み生成のパフォーマンスとコストを追跡 - チャンクの品質と取得の関連性を監視 - クエリパターンとキャッシュヒット率を分析 - メトリクスを可観測性プラットフォームにエクスポート 詳細については、[OTel Configuration](../reference/observability/otel-config.mdx)ページをご覧ください。 ## 追加のリソース - [Chain of Thought RAGの例](../../examples/rag/usage/cot-rag.mdx) - [すべてのRAGの例](../../examples/)(異なるチャンク戦略、埋め込みモデル、ベクトルストアを含む) --- title: "検索、セマンティック検索、再ランキング | RAG | Mastra ドキュメント" description: Mastra の RAG システムにおける検索プロセス、セマンティック検索、フィルタリング、再ランキングに関するガイド。 --- import { Tabs } from "nextra/components"; ## RAGシステムにおける検索 [JA] Source: https://mastra.ai/ja/docs/rag/retrieval 埋め込みを保存した後、ユーザーのクエリに答えるために関連するチャンクを検索する必要があります。 Mastraは、セマンティック検索、フィルタリング、再ランキングをサポートする柔軟な検索オプションを提供します。 ## 検索の仕組み 1. 
ユーザーのクエリは、ドキュメント埋め込みに使用されるのと同じモデルを使用して埋め込みに変換されます 2. この埋め込みは、ベクトル類似性を使用して保存された埋め込みと比較されます 3. 最も類似したチャンクが取得され、オプションで以下の処理が可能です: - メタデータでフィルタリング - より良い関連性のために再ランク付け - ナレッジグラフを通じて処理 ## 基本的な検索 最も簡単なアプローチは直接的なセマンティック検索です。この方法はベクトル類似性を使用して、クエリとセマンティックに類似したチャンクを見つけます: ```ts showLineNumbers copy import { openai } from "@ai-sdk/openai"; import { embed } from "ai"; import { PgVector } from "@mastra/pg"; // クエリを埋め込みに変換 const { embedding } = await embed({ value: "記事の主なポイントは何ですか?", model: openai.embedding('text-embedding-3-small'), }); // ベクトルストアをクエリ const pgVector = new PgVector(process.env.POSTGRES_CONNECTION_STRING); const results = await pgVector.query({ indexName: "embeddings", queryVector: embedding, topK: 10, }); // 結果を表示 console.log(results); ``` 結果にはテキストコンテンツと類似度スコアの両方が含まれます: ```ts showLineNumbers copy [ { text: "気候変動は重大な課題をもたらします...", score: 0.89, metadata: { source: "article1.txt" } }, { text: "気温の上昇は作物の収穫量に影響を与えます...", score: 0.82, metadata: { source: "article1.txt" } } // ... さらに多くの結果 ] ``` 基本的な検索方法の使用例については、[結果を取得する](../../examples/rag/query/retrieve-results.mdx)例を参照してください。 ## 高度な検索オプション ### メタデータフィルタリング メタデータフィールドに基づいて結果をフィルタリングし、検索範囲を絞り込みます。これは、異なるソース、時期、または特定の属性を持つドキュメントがある場合に便利です。Mastraは、すべてのサポートされているベクトルストアで機能する統一されたMongoDBスタイルのクエリ構文を提供します。 利用可能なオペレーターと構文の詳細については、[メタデータフィルタリファレンス](/reference/rag/metadata-filters)を参照してください。 基本的なフィルタリングの例: ```ts showLineNumbers copy // 単純な等価フィルタ const results = await pgVector.query({ indexName: "embeddings", queryVector: embedding, topK: 10, filter: { source: "article1.txt" } }); // 数値比較 const results = await pgVector.query({ indexName: "embeddings", queryVector: embedding, topK: 10, filter: { price: { $gt: 100 } } }); // 複数条件 const results = await pgVector.query({ indexName: "embeddings", queryVector: embedding, topK: 10, filter: { category: "electronics", price: { $lt: 1000 }, inStock: true } }); // 配列操作 const results = await pgVector.query({ indexName: "embeddings", queryVector: embedding, topK: 10, filter: { tags: { $in: ["sale", "new"] } } }); // 論理演算子 const results = await pgVector.query({ indexName: "embeddings", queryVector: embedding, topK: 10, filter: { $or: [ { category: "electronics" }, { category: "accessories" } ], $and: [ { price: { $gt: 50 } }, { price: { $lt: 200 } } ] } }); ``` メタデータフィルタリングの一般的な使用例: - ドキュメントのソースまたはタイプでフィルタリング - 日付範囲でフィルタリング - 特定のカテゴリまたはタグでフィルタリング - 数値範囲(例: 価格、評価)でフィルタリング - 複数の条件を組み合わせて正確なクエリを実行 - ドキュメント属性(例: 言語、著者)でフィルタリング メタデータフィルタリングの使用例については、[ハイブリッドベクトル検索](../../examples/rag/query/hybrid-vector-search.mdx)の例を参照してください。 ### ベクトルクエリツール 時には、エージェントにベクトルデータベースを直接クエリする能力を与えたいことがあります。ベクトルクエリツールは、エージェントがユーザーのニーズを理解し、意味検索とオプションのフィルタリングおよび再ランキングを組み合わせて、取得の決定を行うことを可能にします。 ```ts showLineNumbers copy const vectorQueryTool = createVectorQueryTool({ vectorStoreName: 'pgVector', indexName: 'embeddings', model: openai.embedding('text-embedding-3-small'), }); ``` ツールを作成する際には、ツールの名前と説明に特に注意を払ってください。これらは、エージェントが取得機能をいつどのように使用するかを理解するのに役立ちます。例えば、「SearchKnowledgeBase」と名付け、「Xトピックに関する関連情報を見つけるためにドキュメントを検索する」と説明することができます。 これは特に次の場合に役立ちます: - エージェントが動的に取得する情報を決定する必要がある場合 - 取得プロセスが複雑な意思決定を必要とする場合 - エージェントがコンテキストに基づいて複数の取得戦略を組み合わせたい場合 詳細な設定オプションと高度な使用法については、[ベクトルクエリツールリファレンス](/reference/tools/vector-query-tool)を参照してください。 ### ベクトルストアプロンプト ベクトルストアプロンプトは、各ベクトルデータベース実装のクエリパターンとフィルタリング機能を定義します。 フィルタリングを実装する際には、これらのプロンプトがエージェントの指示に必要であり、各ベクトルストア実装の有効なオペレーターと構文を指定します。 ```ts showLineNumbers copy import { openai } from '@ai-sdk/openai'; import { PGVECTOR_PROMPT } from "@mastra/rag"; export const ragAgent = new Agent({ name: 'RAG Agent', model: 
openai('gpt-4o-mini'), instructions: ` 提供されたコンテキストを使用してクエリを処理します。応答を簡潔で関連性のあるものに構成します。 ${PGVECTOR_PROMPT} `, tools: { vectorQueryTool }, }); ``` ```ts filename="vector-store.ts" showLineNumbers copy import { openai } from '@ai-sdk/openai'; import { PINECONE_PROMPT } from "@mastra/rag"; export const ragAgent = new Agent({ name: 'RAG Agent', model: openai('gpt-4o-mini'), instructions: ` 提供されたコンテキストを使用してクエリを処理します。応答を簡潔で関連性のあるものに構成します。 ${PINECONE_PROMPT} `, tools: { vectorQueryTool }, }); ``` ```ts filename="vector-store.ts" showLineNumbers copy import { openai } from '@ai-sdk/openai'; import { QDRANT_PROMPT } from "@mastra/rag"; export const ragAgent = new Agent({ name: 'RAG Agent', model: openai('gpt-4o-mini'), instructions: ` 提供されたコンテキストを使用してクエリを処理します。応答を簡潔で関連性のあるものに構成します。 ${QDRANT_PROMPT} `, tools: { vectorQueryTool }, }); ``` ```ts filename="vector-store.ts" showLineNumbers copy import { openai } from '@ai-sdk/openai'; import { CHROMA_PROMPT } from "@mastra/rag"; export const ragAgent = new Agent({ name: 'RAG Agent', model: openai('gpt-4o-mini'), instructions: ` 提供されたコンテキストを使用してクエリを処理します。応答を簡潔で関連性のあるものに構成します。 ${CHROMA_PROMPT} `, tools: { vectorQueryTool }, }); ``` ```ts filename="vector-store.ts" showLineNumbers copy import { openai } from '@ai-sdk/openai'; import { ASTRA_PROMPT } from "@mastra/rag"; export const ragAgent = new Agent({ name: 'RAG Agent', model: openai('gpt-4o-mini'), instructions: ` 提供されたコンテキストを使用してクエリを処理します。応答を簡潔で関連性のあるものに構成します。 ${ASTRA_PROMPT} `, tools: { vectorQueryTool }, }); ``` ```ts filename="vector-store.ts" showLineNumbers copy import { openai } from '@ai-sdk/openai'; import { LIBSQL_PROMPT } from "@mastra/rag"; export const ragAgent = new Agent({ name: 'RAG Agent', model: openai('gpt-4o-mini'), instructions: ` 提供されたコンテキストを使用してクエリを処理します。応答を簡潔で関連性のあるものに構成します。 ${LIBSQL_PROMPT} `, tools: { vectorQueryTool }, }); ``` ```ts filename="vector-store.ts" showLineNumbers copy import { openai } from '@ai-sdk/openai'; import { UPSTASH_PROMPT } from "@mastra/rag"; export const ragAgent = new Agent({ name: 'RAG Agent', model: openai('gpt-4o-mini'), instructions: ` 提供されたコンテキストを使用してクエリを処理します。応答を簡潔で関連性のあるものに構成します。 ${UPSTASH_PROMPT} `, tools: { vectorQueryTool }, }); ``` ```ts filename="vector-store.ts" showLineNumbers copy import { openai } from '@ai-sdk/openai'; import { VECTORIZE_PROMPT } from "@mastra/rag"; export const ragAgent = new Agent({ name: 'RAG Agent', model: openai('gpt-4o-mini'), instructions: ` 提供されたコンテキストを使用してクエリを処理します。応答を簡潔で関連性のあるものに構成します。 ${VECTORIZE_PROMPT} `, tools: { vectorQueryTool }, }); ``` ### 再ランキング 初期のベクトル類似性検索は、時には微妙な関連性を見逃すことがあります。再ランキングは、より計算コストが高いプロセスですが、より正確なアルゴリズムであり、以下の方法で結果を改善します: - 単語の順序と正確な一致を考慮する - より洗練された関連性スコアリングを適用する - クエリとドキュメント間のクロスアテンションと呼ばれる方法を使用する 再ランキングの使用方法は次のとおりです: ```ts showLineNumbers copy import { openai } from "@ai-sdk/openai"; import { rerank } from "@mastra/rag"; // ベクトル検索から初期結果を取得 const initialResults = await pgVector.query({ indexName: "embeddings", queryVector: queryEmbedding, topK: 10, }); // 結果を再ランキング const rerankedResults = await rerank(initialResults, query, openai('gpt-4o-mini')); ``` > **注意:** 再ランキング中にセマンティックスコアリングが正しく機能するためには、各結果がその`metadata.text`フィールドにテキストコンテンツを含んでいる必要があります。 再ランキングされた結果は、ベクトル類似性とセマンティックな理解を組み合わせて、検索の質を向上させます。 再ランキングの詳細については、[rerank()](/reference/rag/rerank)メソッドを参照してください。 再ランキングメソッドの使用例については、[Re-ranking Results](../../examples/rag/rerank/rerank.mdx)の例を参照してください。 ### グラフベースの検索 複雑な関係を持つドキュメントの場合、グラフベースの検索はチャンク間の接続をたどることができます。これは次のような場合に役立ちます: - 情報が複数のドキュメントに分散している - ドキュメントが互いに参照している - 
完全な答えを見つけるために関係をたどる必要がある セットアップ例: ```ts showLineNumbers copy const graphQueryTool = createGraphQueryTool({ vectorStoreName: 'pgVector', indexName: 'embeddings', model: openai.embedding('text-embedding-3-small'), graphOptions: { threshold: 0.7, } }); ``` グラフベースの検索の詳細については、[GraphRAG](/reference/rag/graph-rag)クラスと[createGraphQueryTool()](/reference/tools/graph-rag-tool)関数を参照してください。 グラフベースの検索メソッドの使用例については、[Graph-based Retrieval](../../examples/rag/usage/graph-rag.mdx)の例を参照してください。 --- title: "ベクトルデータベースに埋め込みを保存する | Mastra ドキュメント" description: Mastraにおけるベクトルストレージオプションのガイド。類似性検索のための埋め込みベクトルデータベースと専用ベクトルデータベースを含む。 --- import { Tabs } from "nextra/components"; ## ベクトルデータベースに埋め込みを保存する [JA] Source: https://mastra.ai/ja/docs/rag/vector-databases 埋め込みを生成した後、それらをベクトル類似性検索をサポートするデータベースに保存する必要があります。Mastraは、異なるベクトルデータベース間で埋め込みを保存およびクエリするための一貫したインターフェースを提供します。 ## サポートされているデータベース ```ts filename="vector-store.ts" showLineNumbers copy import { PgVector } from '@mastra/pg'; const store = new PgVector(process.env.POSTGRES_CONNECTION_STRING) await store.createIndex({ indexName: "myCollection", dimension: 1536, }); await store.upsert({ indexName: "myCollection", vectors: embeddings, metadata: chunks.map(chunk => ({ text: chunk.text })), }); ``` ### pgvectorを使用したPostgreSQL PostgreSQLとpgvector拡張機能は、すでにPostgreSQLを使用しているチームがインフラの複雑さを最小限に抑えたい場合に適したソリューションです。 詳細なセットアップ手順とベストプラクティスについては、[公式pgvectorリポジトリ](https://github.com/pgvector/pgvector)を参照してください。 ```ts filename="vector-store.ts" showLineNumbers copy import { PineconeVector } from '@mastra/pinecone' const store = new PineconeVector(process.env.PINECONE_API_KEY) await store.createIndex({ indexName: "myCollection", dimension: 1536, }); await store.upsert({ indexName: "myCollection", vectors: embeddings, metadata: chunks.map(chunk => ({ text: chunk.text })), }); ``` ```ts filename="vector-store.ts" showLineNumbers copy import { QdrantVector } from '@mastra/qdrant' const store = new QdrantVector({ url: process.env.QDRANT_URL, apiKey: process.env.QDRANT_API_KEY }) await store.createIndex({ indexName: "myCollection", dimension: 1536, }); await store.upsert({ indexName: "myCollection", vectors: embeddings, metadata: chunks.map(chunk => ({ text: chunk.text })), }); ``` ```ts filename="vector-store.ts" showLineNumbers copy import { ChromaVector } from '@mastra/chroma' const store = new ChromaVector() await store.createIndex({ indexName: "myCollection", dimension: 1536, }); await store.upsert({ indexName: "myCollection", vectors: embeddings, metadata: chunks.map(chunk => ({ text: chunk.text })), }); ``` ```ts filename="vector-store.ts" showLineNumbers copy import { AstraVector } from '@mastra/astra' const store = new AstraVector({ token: process.env.ASTRA_DB_TOKEN, endpoint: process.env.ASTRA_DB_ENDPOINT, keyspace: process.env.ASTRA_DB_KEYSPACE }) await store.createIndex({ indexName: "myCollection", dimension: 1536, }); await store.upsert({ indexName: "myCollection", vectors: embeddings, metadata: chunks.map(chunk => ({ text: chunk.text })), }); ``` ```ts filename="vector-store.ts" showLineNumbers copy import { LibSQLVector } from "@mastra/core/vector/libsql"; const store = new LibSQLVector({ connectionUrl: process.env.DATABASE_URL, authToken: process.env.DATABASE_AUTH_TOKEN // Optional: for Turso cloud databases }) await store.createIndex({ indexName: "myCollection", dimension: 1536, }); await store.upsert({ indexName: "myCollection", vectors: embeddings, metadata: chunks.map(chunk => ({ text: chunk.text })), }); ``` ```ts filename="vector-store.ts" showLineNumbers copy 
import { UpstashVector } from '@mastra/upstash'

const store = new UpstashVector({
  url: process.env.UPSTASH_URL,
  token: process.env.UPSTASH_TOKEN
})

await store.createIndex({
  indexName: "myCollection",
  dimension: 1536,
});

await store.upsert({
  indexName: "myCollection",
  vectors: embeddings,
  metadata: chunks.map(chunk => ({ text: chunk.text })),
});
```

```ts filename="vector-store.ts" showLineNumbers copy
import { CloudflareVector } from '@mastra/vectorize'

const store = new CloudflareVector({
  accountId: process.env.CF_ACCOUNT_ID,
  apiToken: process.env.CF_API_TOKEN
})

await store.createIndex({
  indexName: "myCollection",
  dimension: 1536,
});

await store.upsert({
  indexName: "myCollection",
  vectors: embeddings,
  metadata: chunks.map(chunk => ({ text: chunk.text })),
});
```

## ベクターストレージの使用

初期化されると、すべてのベクターストアはインデックスの作成、埋め込みのアップサート、およびクエリのための同じインターフェースを共有します。

### インデックスの作成

埋め込みを保存する前に、埋め込みモデルに適した次元サイズでインデックスを作成する必要があります:

```ts filename="store-embeddings.ts" showLineNumbers copy
// 次元1536でインデックスを作成(text-embedding-3-small用)
await store.createIndex({
  indexName: 'myCollection',
  dimension: 1536,
});

// 他のモデルの場合は、それぞれの次元を使用します:
// - text-embedding-3-large: 3072
// - text-embedding-ada-002: 1536
// - cohere-embed-multilingual-v3: 1024
```

次元サイズは、選択した埋め込みモデルの出力次元と一致している必要があります。一般的な次元サイズは以下の通りです:

- OpenAI text-embedding-3-small: 1536次元
- OpenAI text-embedding-3-large: 3072次元
- Cohere embed-multilingual-v3: 1024次元

> **重要**: インデックスの次元は作成後に変更できません。異なるモデルを使用するには、新しい次元サイズでインデックスを削除して再作成してください。

### データベースの命名規則

各ベクターデータベースは、互換性を確保し、競合を防ぐために、インデックスとコレクションに固有の命名規則を適用します。

**PostgreSQL (pgvector)** - インデックス名は次の条件を満たす必要があります:

- 文字またはアンダースコアで始まる
- 文字、数字、アンダースコアのみを含む
- 例: `my_index_123` は有効
- 例: `my-index` は無効(ハイフンを含む)

**Pinecone** - インデックス名は次の条件を満たす必要があります:

- 小文字の文字、数字、ダッシュのみを使用
- ドットを含まない(DNSルーティングに使用)
- 非ラテン文字や絵文字を使用しない
- プロジェクトIDと合わせて52文字未満
- 例: `my-index-123` は有効
- 例: `my.index` は無効(ドットを含む)

**Qdrant** - コレクション名は次の条件を満たす必要があります:

- 1〜255文字の長さ
- 以下の特殊文字を含まない:
  - `< > : " / \ | ? *`
  - Null文字 (`\0`)
  - ユニットセパレータ (`\u{1F}`)
- 例: `my_collection_123` は有効
- 例: `my/collection` は無効(スラッシュを含む)

**Chroma** - コレクション名は次の条件を満たす必要があります:

- 3〜63文字の長さ
- 文字または数字で始まり、終わる
- 文字、数字、アンダースコア、ハイフンのみを含む
- 連続するピリオド(..)を含まない
- 有効なIPv4アドレスでない
- 例: `my-collection-123` は有効
- 例: `my..collection` は無効(連続するピリオド)

**Astra** - コレクション名は次の条件を満たす必要があります:

- 空でない
- 48文字以下
- 文字、数字、アンダースコアのみを含む
- 例: `my_collection_123` は有効
- 例: `my-collection` は無効(ハイフンを含む)

**LibSQL** - インデックス名は次の条件を満たす必要があります:

- 文字またはアンダースコアで始まる
- 文字、数字、アンダースコアのみを含む
- 例: `my_index_123` は有効
- 例: `my-index` は無効(ハイフンを含む)

**Upstash** - 名前空間名は次の条件を満たす必要があります:

- 2〜100文字の長さ
- 以下のみを含む:
  - 英数字(a-z, A-Z, 0-9)
  - アンダースコア、ハイフン、ドット
- 特殊文字(_, -, .)で始まったり終わったりしない
- 大文字小文字は区別される
- 例: `MyNamespace123` は有効
- 例: `_namespace` は無効(アンダースコアで始まる)

**Cloudflare Vectorize** - インデックス名は次の条件を満たす必要があります:

- 文字で始まる
- 32文字未満
- 小文字のASCII文字、数字、ダッシュのみを含む
- スペースの代わりにダッシュを使用
- 例: `my-index-123` は有効
- 例: `My_Index` は無効(大文字とアンダースコアを含む)

### 埋め込みのアップサート

インデックスを作成した後、基本的なメタデータと共に埋め込みを保存できます:

```ts filename="store-embeddings.ts" showLineNumbers copy
// Store embeddings with their corresponding metadata
await store.upsert({
  indexName: 'myCollection',  // index name
  vectors: embeddings,        // array of embedding vectors
  metadata: chunks.map(chunk => ({
    text: chunk.text, // The original text content
    id: chunk.id      // Optional unique identifier
  }))
});
```

upsert操作:

- 埋め込みベクトルとそれに対応するメタデータの配列を受け取ります
- 同じIDを共有する場合、既存のベクトルを更新します
- 存在しない場合、新しいベクトルを作成します
- 大規模なデータセットに対して自動的にバッチ処理を行います

異なるベクトルストアでの埋め込みのupsertの完全な例については、[Upsert Embeddings](../../examples/rag/upsert/upsert-embeddings.mdx)ガイドを参照してください。

## メタデータの追加

ベクトルストアは、フィルタリングと整理のためにリッチなメタデータ(任意のJSONシリアライズ可能なフィールド)をサポートしています。メタデータは固定スキーマなしで保存されるため、一貫したフィールド名を使用して予期しないクエリ結果を避けてください。

**重要**: メタデータはベクトルストレージにとって重要です。メタデータがなければ、数値の埋め込みしかなく、元のテキストを返したり結果をフィルタリングしたりする方法がありません。常に少なくともソーステキストをメタデータとして保存してください。

```ts showLineNumbers copy
// より良い整理とフィルタリングのためにリッチなメタデータで埋め込みを保存
await store.upsert({
  indexName: "myCollection",
  vectors: embeddings,
  metadata: chunks.map((chunk) => ({
    // 基本的な内容
    text: chunk.text,
    id: chunk.id,

    // ドキュメントの整理
    source: chunk.source,
    category: chunk.category,

    // 時間的メタデータ
    createdAt: new Date().toISOString(),
    version: "1.0",

    // カスタムフィールド
    language: chunk.language,
    author: chunk.author,
    confidenceScore: chunk.score,
  })),
});
```

メタデータの重要な考慮事項:

- フィールド名には厳格に - 'category' と 'Category' のような不一致はクエリに影響します
- フィルタリングやソートに使用する予定のフィールドのみを含める - 余分なフィールドはオーバーヘッドを増やします
- コンテンツの新鮮さを追跡するためにタイムスタンプ(例: 'createdAt', 'lastUpdated')を追加

## ベストプラクティス

- 大量挿入の前にインデックスを作成する
- 大規模な挿入にはバッチ操作を使用する(`upsert` メソッドは自動的にバッチ処理を行います)
- クエリを実行するメタデータのみを保存する
- 埋め込み次元をモデルに合わせる(例:`text-embedding-3-small` の場合は1536)

---
title: Mastraのストレージ | Mastra ドキュメント
description: Mastraのストレージシステムとデータ永続性機能の概要。
---

import { Tabs } from "nextra/components";
import { PropertiesTable } from "@/components/properties-table";
import { SchemaTable } from "@/components/schema-table";
import { StorageOverviewImage } from "@/components/storage-overview-image";

# MastraStorage

[JA] Source: https://mastra.ai/ja/docs/storage/overview

`MastraStorage` は、以下の管理のための統一されたインターフェースを提供します:

- **中断されたワークフロー**: 中断されたワークフローのシリアライズされた状態(後で再開できるように)
- **メモリ**: アプリケーション内の`resourceId`ごとのスレッドとメッセージ
- **トレース**: MastraのすべてのコンポーネントからのOpenTelemetryトレース
- **評価データセット**: 評価実行からのスコアとスコアリングの理由

Mastraは異なるストレージプロバイダーを提供しますが、それらを交換可能として扱うことができます。例えば、開発中はlibsqlを使用し、本番環境ではpostgresを使用することができ、どちらの場合でもコードは同じように動作します。 ## 設定 Mastraはデフォルトのストレージオプションで設定できます: ```typescript copy import { Mastra } from "@mastra/core/mastra"; import { DefaultStorage } from "@mastra/core/storage/libsql"; const mastra = new Mastra({ storage: new DefaultStorage({ config: { url: "file:.mastra/mastra.db", }, }), }); ``` ## データスキーマ 会話メッセージとそのメタデータを保存します。各メッセージはスレッドに属し、送信者の役割やメッセージタイプに関するメタデータと共に実際のコンテンツを含みます。
## Data schema

Stores conversation messages and their metadata. Each message belongs to a thread and contains the actual content along with metadata about the sender role and message type.

Groups related messages together and associates them with a resource. Contains metadata about the conversation.
When `suspend` is called on a `workflow`, its state is saved in the following format. When `resume` is called, that state is rehydrated.
Stores eval results from running metrics against agent outputs.
Captures OpenTelemetry traces for monitoring and debugging.
## Storage providers

Mastra supports the following providers:

- For local development, check out [LibSQL Storage](../../reference/storage/libsql.mdx)
- For production, check out [PostgreSQL Storage](../../reference/storage/postgresql.mdx)
- For serverless deployments, check out [Upstash Storage](../../reference/storage/upstash.mdx)
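Because the providers share one interface, switching is a configuration change rather than a code change. The sketch below assumes the package and class names used on the PostgreSQL reference page (`@mastra/pg` exporting a `PostgresStore` that takes a `connectionString`); confirm the exact import there before relying on it:

```typescript
import { Mastra } from "@mastra/core/mastra";
// Assumed import; see the PostgreSQL Storage reference for the exact package/class.
import { PostgresStore } from "@mastra/pg";

const mastra = new Mastra({
  storage: new PostgresStore({
    connectionString: process.env.DATABASE_URL!, // e.g. postgres://user:pass@host:5432/db
  }),
});
```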
---
title: Voice Capabilities in Mastra | Mastra Documentation
description: Overview of voice capabilities in Mastra, including text-to-speech, speech-to-text, and real-time speech-to-speech.
---

import { Tabs } from "nextra/components";
import { AudioPlayback } from "@/components/audio-playback";

# Voice Capabilities in Mastra

[JA] Source: https://mastra.ai/ja/docs/voice/overview

Mastra's voice system provides a unified interface for voice interactions, enabling text-to-speech (TTS), speech-to-text (STT), and real-time speech-to-speech (STS) capabilities in your applications.

## Adding voice to agents

To learn how to integrate voice capabilities into your agents, see the [Adding Voice to Agents](../agents/adding-voice.mdx) documentation. That section covers how to use single and multiple voice providers, as well as real-time interactions.

```typescript
import { Agent } from '@mastra/core/agent';
import { openai } from '@ai-sdk/openai';
import { OpenAIVoice } from "@mastra/voice-openai";

// Initialize OpenAI voice for TTS
const voiceAgent = new Agent({
  name: "Voice Agent",
  instructions: "You are a voice assistant that can help users with their tasks.",
  model: openai("gpt-4o"),
  voice: new OpenAIVoice(),
});
```

You can use the following voice capabilities:

### Text-to-speech (TTS)

Turn your agent's responses into natural-sounding speech using Mastra's TTS capabilities. Choose from multiple providers, including OpenAI and ElevenLabs.

For detailed configuration options and advanced features, see the [Text-to-Speech guide](./text-to-speech).

```typescript
import { Agent } from '@mastra/core/agent';
import { openai } from '@ai-sdk/openai';
import { OpenAIVoice } from "@mastra/voice-openai";
import { playAudio } from "@mastra/node-audio";

const voiceAgent = new Agent({
  name: "Voice Agent",
  instructions: "You are a voice assistant that can help users with their tasks.",
  model: openai("gpt-4o"),
  voice: new OpenAIVoice(),
});

const { text } = await voiceAgent.generate('What color is the sky?');

// Convert text to speech to an Audio Stream
const audioStream = await voiceAgent.voice.speak(text, {
  speaker: "default", // Optional: specify a speaker
  responseFormat: "wav", // Optional: specify a response format
});

playAudio(audioStream);
```

For more information on the OpenAI voice provider, see the [OpenAI Voice Reference](/reference/voice/openai).

```typescript
import { Agent } from '@mastra/core/agent';
import { openai } from '@ai-sdk/openai';
import { AzureVoice } from "@mastra/voice-azure";
import { playAudio } from "@mastra/node-audio";

const voiceAgent = new Agent({
  name: "Voice Agent",
  instructions: "You are a voice assistant that can help users with their tasks.",
  model: openai("gpt-4o"),
  voice: new AzureVoice(),
});

const { text } = await voiceAgent.generate('What color is the sky?');

// Convert text to speech to an Audio Stream
const audioStream = await voiceAgent.voice.speak(text, {
  speaker: "en-US-JennyNeural", // Optional: specify a speaker
});

playAudio(audioStream);
```

For more information on the Azure voice provider, see the [Azure Voice Reference](/reference/voice/azure).

```typescript
import { Agent } from '@mastra/core/agent';
import { openai } from '@ai-sdk/openai';
import { ElevenLabsVoice } from "@mastra/voice-elevenlabs";
import { playAudio } from "@mastra/node-audio";

const voiceAgent = new Agent({
  name: "Voice Agent",
  instructions: "You are a voice assistant that can help users with their tasks.",
  model: openai("gpt-4o"),
  voice: new ElevenLabsVoice(),
});

const { text } = await voiceAgent.generate('What color is the sky?');

// Convert text to speech to an Audio Stream
const audioStream = await voiceAgent.voice.speak(text, {
  speaker: "default", // Optional: specify a speaker
});

playAudio(audioStream);
```

For more information on the ElevenLabs voice provider, see the [ElevenLabs Voice Reference](/reference/voice/elevenlabs).

```typescript
import { Agent } from '@mastra/core/agent';
import { openai } from '@ai-sdk/openai';
import { PlayAIVoice } from "@mastra/voice-playai";
import { playAudio } from "@mastra/node-audio";

const voiceAgent = new Agent({
  name: "Voice Agent",
  instructions: "You are a voice assistant that can help users with their tasks.",
  model: openai("gpt-4o"),
  voice: new PlayAIVoice(),
});

const { text } = await voiceAgent.generate('What color is the sky?');

// Convert text to speech to an Audio Stream
const audioStream = await voiceAgent.voice.speak(text, {
  speaker: "default", // Optional: specify a speaker
});

playAudio(audioStream);
```

For more information on the PlayAI voice provider, see the [PlayAI Voice Reference](/reference/voice/playai).

```typescript
import { Agent } from '@mastra/core/agent';
import { openai } from '@ai-sdk/openai';
import { GoogleVoice } from "@mastra/voice-google";
import { playAudio } from "@mastra/node-audio";

const voiceAgent = new Agent({
  name: "Voice Agent",
  instructions: "You are a voice assistant that can help users with their tasks.",
  model: openai("gpt-4o"),
  voice: new GoogleVoice(),
});

const { text } = await voiceAgent.generate('What color is the sky?');

// Convert text to speech to an Audio Stream
const audioStream = await voiceAgent.voice.speak(text, {
  speaker: "en-US-Studio-O", // Optional: specify a speaker
});

playAudio(audioStream);
```

For more information on the Google voice provider, see the [Google Voice Reference](/reference/voice/google).

```typescript
import { Agent } from '@mastra/core/agent';
import { openai } from '@ai-sdk/openai';
import { CloudflareVoice } from "@mastra/voice-cloudflare";
import { playAudio } from "@mastra/node-audio";

const voiceAgent = new Agent({
  name: "Voice Agent",
  instructions: "You are a voice assistant that can help users with their tasks.",
  model: openai("gpt-4o"),
  voice: new CloudflareVoice(),
});

const { text } = await voiceAgent.generate('What color is the sky?');

// Convert text to speech to an Audio Stream
const audioStream = await voiceAgent.voice.speak(text, {
  speaker: "default", // Optional: specify a speaker
});

playAudio(audioStream);
```

For more information on the Cloudflare voice provider, see the [Cloudflare Voice Reference](/reference/voice/cloudflare).

```typescript
import { Agent } from '@mastra/core/agent';
import { openai } from '@ai-sdk/openai';
import { DeepgramVoice } from "@mastra/voice-deepgram";
import { playAudio } from "@mastra/node-audio";

const voiceAgent = new Agent({
  name: "Voice Agent",
  instructions: "You are a voice assistant that can help users with their tasks.",
  model: openai("gpt-4o"),
  voice: new DeepgramVoice(),
});

const { text } = await voiceAgent.generate('What color is the sky?');

// Convert text to speech to an Audio Stream
const audioStream = await voiceAgent.voice.speak(text, {
  speaker: "aura-english-us", // Optional: specify a speaker
});

playAudio(audioStream);
```

For more information on the Deepgram voice provider, see the [Deepgram Voice Reference](/reference/voice/deepgram).
```typescript
import { Agent } from '@mastra/core/agent';
import { openai } from '@ai-sdk/openai';
import { SpeechifyVoice } from "@mastra/voice-speechify";
import { playAudio } from "@mastra/node-audio";

const voiceAgent = new Agent({
  name: "Voice Agent",
  instructions: "You are a voice assistant that can help users with their tasks.",
  model: openai("gpt-4o"),
  voice: new SpeechifyVoice(),
});

const { text } = await voiceAgent.generate('What color is the sky?');

// Convert text to speech to an Audio Stream
const audioStream = await voiceAgent.voice.speak(text, {
  speaker: "matthew", // Optional: specify a speaker
});

playAudio(audioStream);
```

For more information on the Speechify voice provider, see the [Speechify Voice Reference](/reference/voice/speechify).

```typescript
import { Agent } from '@mastra/core/agent';
import { openai } from '@ai-sdk/openai';
import { SarvamVoice } from "@mastra/voice-sarvam";
import { playAudio } from "@mastra/node-audio";

const voiceAgent = new Agent({
  name: "Voice Agent",
  instructions: "You are a voice assistant that can help users with their tasks.",
  model: openai("gpt-4o"),
  voice: new SarvamVoice(),
});

const { text } = await voiceAgent.generate('What color is the sky?');

// Convert text to speech to an Audio Stream
const audioStream = await voiceAgent.voice.speak(text, {
  speaker: "default", // Optional: specify a speaker
});

playAudio(audioStream);
```

For more information on the Sarvam voice provider, see the [Sarvam Voice Reference](/reference/voice/sarvam).

```typescript
import { Agent } from '@mastra/core/agent';
import { openai } from '@ai-sdk/openai';
import { MurfVoice } from "@mastra/voice-murf";
import { playAudio } from "@mastra/node-audio";

const voiceAgent = new Agent({
  name: "Voice Agent",
  instructions: "You are a voice assistant that can help users with their tasks.",
  model: openai("gpt-4o"),
  voice: new MurfVoice(),
});

const { text } = await voiceAgent.generate('What color is the sky?');

// Convert text to speech to an Audio Stream
const audioStream = await voiceAgent.voice.speak(text, {
  speaker: "default", // Optional: specify a speaker
});

playAudio(audioStream);
```

For more information on the Murf voice provider, see the [Murf Voice Reference](/reference/voice/murf).

### Speech-to-text (STT)

Transcribe spoken audio using a variety of providers, including OpenAI and ElevenLabs. For detailed configuration options and more, see [Speech-to-Text](./speech-to-text).

You can download a sample audio file [here](https://github.com/mastra-ai/realtime-voice-demo/raw/refs/heads/main/how_can_i_help_you.mp3).
```typescript
import { Agent } from '@mastra/core/agent';
import { openai } from '@ai-sdk/openai';
import { OpenAIVoice } from "@mastra/voice-openai";
import { createReadStream } from 'fs';

const voiceAgent = new Agent({
  name: "Voice Agent",
  instructions: "You are a voice assistant that can help users with their tasks.",
  model: openai("gpt-4o"),
  voice: new OpenAIVoice(),
});

// Read an audio file from disk
const audioStream = createReadStream("./how_can_i_help_you.mp3");

// Convert audio to text
const transcript = await voiceAgent.voice.listen(audioStream);
console.log(`User said: ${transcript}`);

// Generate a response based on the transcript
const { text } = await voiceAgent.generate(transcript);
```

For more information on the OpenAI voice provider, see the [OpenAI Voice Reference](/reference/voice/openai).

```typescript
import { Agent } from '@mastra/core/agent';
import { openai } from '@ai-sdk/openai';
import { AzureVoice } from "@mastra/voice-azure";
import { createReadStream } from 'fs';

const voiceAgent = new Agent({
  name: "Voice Agent",
  instructions: "You are a voice assistant that can help users with their tasks.",
  model: openai("gpt-4o"),
  voice: new AzureVoice(),
});

// Read an audio file from disk
const audioStream = createReadStream("./how_can_i_help_you.mp3");

// Convert audio to text
const transcript = await voiceAgent.voice.listen(audioStream);
console.log(`User said: ${transcript}`);

// Generate a response based on the transcript
const { text } = await voiceAgent.generate(transcript);
```

For more information on the Azure voice provider, see the [Azure Voice Reference](/reference/voice/azure).

```typescript
import { Agent } from '@mastra/core/agent';
import { openai } from '@ai-sdk/openai';
import { ElevenLabsVoice } from "@mastra/voice-elevenlabs";
import { createReadStream } from 'fs';

const voiceAgent = new Agent({
  name: "Voice Agent",
  instructions: "You are a voice assistant that can help users with their tasks.",
  model: openai("gpt-4o"),
  voice: new ElevenLabsVoice(),
});

// Read an audio file from disk
const audioStream = createReadStream("./how_can_i_help_you.mp3");

// Convert audio to text
const transcript = await voiceAgent.voice.listen(audioStream);
console.log(`User said: ${transcript}`);

// Generate a response based on the transcript
const { text } = await voiceAgent.generate(transcript);
```

For more information on the ElevenLabs voice provider, see the [ElevenLabs Voice Reference](/reference/voice/elevenlabs).

```typescript
import { Agent } from '@mastra/core/agent';
import { openai } from '@ai-sdk/openai';
import { GoogleVoice } from "@mastra/voice-google";
import { createReadStream } from 'fs';

const voiceAgent = new Agent({
  name: "Voice Agent",
  instructions: "You are a voice assistant that can help users with their tasks.",
  model: openai("gpt-4o"),
  voice: new GoogleVoice(),
});

// Read an audio file from disk
const audioStream = createReadStream("./how_can_i_help_you.mp3");

// Convert audio to text
const transcript = await voiceAgent.voice.listen(audioStream);
console.log(`User said: ${transcript}`);

// Generate a response based on the transcript
const { text } = await voiceAgent.generate(transcript);
```

For more information on the Google voice provider, see the [Google Voice Reference](/reference/voice/google).

```typescript
import { Agent } from '@mastra/core/agent';
import { openai } from '@ai-sdk/openai';
import { CloudflareVoice } from "@mastra/voice-cloudflare";
import { createReadStream } from 'fs';

const voiceAgent = new Agent({
  name: "Voice Agent",
  instructions: "You are a voice assistant that can help users with their tasks.",
  model: openai("gpt-4o"),
  voice: new CloudflareVoice(),
});

// Read an audio file from disk
const audioStream = createReadStream("./how_can_i_help_you.mp3");

// Convert audio to text
const transcript = await voiceAgent.voice.listen(audioStream);
console.log(`User said: ${transcript}`);

// Generate a response based on the transcript
const { text } = await voiceAgent.generate(transcript);
```

For more information on the Cloudflare voice provider, see the [Cloudflare Voice Reference](/reference/voice/cloudflare).
```typescript
import { Agent } from '@mastra/core/agent';
import { openai } from '@ai-sdk/openai';
import { DeepgramVoice } from "@mastra/voice-deepgram";
import { createReadStream } from 'fs';

const voiceAgent = new Agent({
  name: "Voice Agent",
  instructions: "You are a voice assistant that can help users with their tasks.",
  model: openai("gpt-4o"),
  voice: new DeepgramVoice(),
});

// Read an audio file from disk
const audioStream = createReadStream("./how_can_i_help_you.mp3");

// Convert audio to text
const transcript = await voiceAgent.voice.listen(audioStream);
console.log(`User said: ${transcript}`);

// Generate a response based on the transcript
const { text } = await voiceAgent.generate(transcript);
```

For more information on the Deepgram voice provider, see the [Deepgram Voice Reference](/reference/voice/deepgram).

```typescript
import { Agent } from '@mastra/core/agent';
import { openai } from '@ai-sdk/openai';
import { SarvamVoice } from "@mastra/voice-sarvam";
import { createReadStream } from 'fs';

const voiceAgent = new Agent({
  name: "Voice Agent",
  instructions: "You are a voice assistant that can help users with their tasks.",
  model: openai("gpt-4o"),
  voice: new SarvamVoice(),
});

// Read an audio file from disk
const audioStream = createReadStream("./how_can_i_help_you.mp3");

// Convert audio to text
const transcript = await voiceAgent.voice.listen(audioStream);
console.log(`User said: ${transcript}`);

// Generate a response based on the transcript
const { text } = await voiceAgent.generate(transcript);
```

For more information on the Sarvam voice provider, see the [Sarvam Voice Reference](/reference/voice/sarvam).

### Speech-to-speech (STS)

Create conversational experiences with speech-to-speech capabilities. The unified API enables real-time voice interactions between users and AI agents.

For detailed configuration options and advanced features, see [Speech-to-Speech](./speech-to-speech).

```typescript
import { Agent } from '@mastra/core/agent';
import { openai } from '@ai-sdk/openai';
import { playAudio, getMicrophoneStream } from '@mastra/node-audio';
import { OpenAIRealtimeVoice } from "@mastra/voice-openai-realtime";

const voiceAgent = new Agent({
  name: "Voice Agent",
  instructions: "You are a voice assistant that can help users with their tasks.",
  model: openai("gpt-4o"),
  voice: new OpenAIRealtimeVoice(),
});

// Listen for agent audio responses
voiceAgent.voice.on('speaker', ({ audio }) => {
  playAudio(audio);
});

// Initiate the conversation
await voiceAgent.voice.speak('How can I help you today?');

// Send continuous audio from the microphone
const micStream = getMicrophoneStream();
await voiceAgent.voice.send(micStream);
```

For more information on the OpenAI voice provider, see the [OpenAI Voice Reference](/reference/voice/openai-realtime).

## Voice configuration

Each voice provider can be configured with different models and options. Below are the detailed configuration options for all supported providers:

```typescript
// OpenAI Voice Configuration
const voice = new OpenAIVoice({
  speechModel: {
    name: "gpt-3.5-turbo", // Example model name
    apiKey: process.env.OPENAI_API_KEY,
    language: "en-US", // Language code
    voiceType: "neural", // Type of voice model
  },
  listeningModel: {
    name: "whisper-1", // Example model name
    apiKey: process.env.OPENAI_API_KEY,
    language: "en-US", // Language code
    format: "wav", // Audio format
  },
  speaker: "alloy", // Example speaker name
});
```

For more details, see the [OpenAI Voice Reference](/reference/voice/openai).
```typescript
// Azure Voice Configuration
const voice = new AzureVoice({
  speechModel: {
    name: "en-US-JennyNeural", // Example model name
    apiKey: process.env.AZURE_SPEECH_KEY,
    region: process.env.AZURE_SPEECH_REGION,
    language: "en-US", // Language code
    style: "cheerful", // Voice style
    pitch: "+0Hz", // Pitch adjustment
    rate: "1.0", // Speech rate
  },
  listeningModel: {
    name: "en-US", // Example model name
    apiKey: process.env.AZURE_SPEECH_KEY,
    region: process.env.AZURE_SPEECH_REGION,
    format: "simple", // Output format
  },
});
```

For more details, see the [Azure Voice Reference](/reference/voice/azure).

```typescript
// ElevenLabs Voice Configuration
const voice = new ElevenLabsVoice({
  speechModel: {
    voiceId: "your-voice-id", // Example voice ID
    model: "eleven_multilingual_v2", // Example model name
    apiKey: process.env.ELEVENLABS_API_KEY,
    language: "en", // Language code
    emotion: "neutral", // Emotion setting
  },
  // ElevenLabs may not have a separate listening model
});
```

For more details, see the [ElevenLabs Voice Reference](/reference/voice/elevenlabs).

```typescript
// PlayAI Voice Configuration
const voice = new PlayAIVoice({
  speechModel: {
    name: "playai-voice", // Example model name
    speaker: "emma", // Example speaker name
    apiKey: process.env.PLAYAI_API_KEY,
    language: "en-US", // Language code
    speed: 1.0, // Speech speed
  },
  // PlayAI may not have a separate listening model
});
```

For more details, see the [PlayAI Voice Reference](/reference/voice/playai).

```typescript
// Google Voice Configuration
const voice = new GoogleVoice({
  speechModel: {
    name: "en-US-Studio-O", // Example model name
    apiKey: process.env.GOOGLE_API_KEY,
    languageCode: "en-US", // Language code
    gender: "FEMALE", // Voice gender
    speakingRate: 1.0, // Speaking rate
  },
  listeningModel: {
    name: "en-US", // Example model name
    sampleRateHertz: 16000, // Sample rate
  },
});
```

For more details, see the [Google Voice Reference](/reference/voice/google).

```typescript
// Cloudflare Voice Configuration
const voice = new CloudflareVoice({
  speechModel: {
    name: "cloudflare-voice", // Example model name
    accountId: process.env.CLOUDFLARE_ACCOUNT_ID,
    apiToken: process.env.CLOUDFLARE_API_TOKEN,
    language: "en-US", // Language code
    format: "mp3", // Audio format
  },
  // Cloudflare may not have a separate listening model
});
```

For more details, see the [Cloudflare Voice Reference](/reference/voice/cloudflare).

```typescript
// Deepgram Voice Configuration
const voice = new DeepgramVoice({
  speechModel: {
    name: "nova-2", // Example model name
    speaker: "aura-english-us", // Example speaker name
    apiKey: process.env.DEEPGRAM_API_KEY,
    language: "en-US", // Language code
    tone: "formal", // Tone setting
  },
  listeningModel: {
    name: "nova-2", // Example model name
    format: "flac", // Audio format
  },
});
```

For more details, see the [Deepgram Voice Reference](/reference/voice/deepgram).

```typescript
// Speechify Voice Configuration
const voice = new SpeechifyVoice({
  speechModel: {
    name: "speechify-voice", // Example model name
    speaker: "matthew", // Example speaker name
    apiKey: process.env.SPEECHIFY_API_KEY,
    language: "en-US", // Language code
    speed: 1.0, // Speech speed
  },
  // Speechify may not have a separate listening model
});
```

For more details, see the [Speechify Voice Reference](/reference/voice/speechify).

```typescript
// Sarvam Voice Configuration
const voice = new SarvamVoice({
  speechModel: {
    name: "sarvam-voice", // Example model name
    apiKey: process.env.SARVAM_API_KEY,
    language: "en-IN", // Language code
    style: "conversational", // Style setting
  },
  // Sarvam may not have a separate listening model
});
```

For more details, see the [Sarvam Voice Reference](/reference/voice/sarvam).
"conversational", // Style setting }, // Sarvam may not have a separate listening model }); ``` Sarvamの音声プロバイダーの詳細については、[Sarvam音声リファレンス](/reference/voice/sarvam)をご覧ください。 ```typescript // Murf Voice Configuration const voice = new MurfVoice({ speechModel: { name: "murf-voice", // Example model name apiKey: process.env.MURF_API_KEY, language: "en-US", // Language code emotion: "happy", // Emotion setting }, // Murf may not have a separate listening model }); ``` Murfの音声プロバイダーの詳細については、[Murf音声リファレンス](/reference/voice/murf)をご覧ください。 ```typescript // OpenAI Realtime Voice Configuration const voice = new OpenAIRealtimeVoice({ speechModel: { name: "gpt-3.5-turbo", // Example model name apiKey: process.env.OPENAI_API_KEY, language: "en-US", // Language code }, listeningModel: { name: "whisper-1", // Example model name apiKey: process.env.OPENAI_API_KEY, format: "ogg", // Audio format }, speaker: "alloy", // Example speaker name }); ``` OpenAIリアルタイム音声プロバイダーの詳細については、[OpenAIリアルタイム音声リファレンス](/reference/voice/openai-realtime)を参照してください。 ### 複数の音声プロバイダーの使用 この例では、Mastraで2つの異なる音声プロバイダーを作成して使用する方法を示しています:音声からテキスト(STT)にはOpenAI、テキストから音声(TTS)にはPlayAIを使用します。 まず、必要な設定で音声プロバイダーのインスタンスを作成します。 ```typescript import { OpenAIVoice } from "@mastra/voice-openai"; import { PlayAIVoice } from "@mastra/voice-playai"; import { CompositeVoice } from "@mastra/core/voice"; import { playAudio, getMicrophoneStream } from "@mastra/node-audio"; // STT用のOpenAI音声を初期化 const input = new OpenAIVoice({ listeningModel: { name: "whisper-1", apiKey: process.env.OPENAI_API_KEY, }, }); // TTS用のPlayAI音声を初期化 const output = new PlayAIVoice({ speechModel: { name: "playai-voice", apiKey: process.env.PLAYAI_API_KEY, }, }); // CompositeVoiceを使用してプロバイダーを組み合わせる const voice = new CompositeVoice({ input, output, }); // 組み合わせた音声プロバイダーを使用して音声インタラクションを実装 const audioStream = getMicrophoneStream(); // この関数が音声入力を取得すると仮定 const transcript = await voice.listen(audioStream); // 文字起こしされたテキストをログに記録 console.log("Transcribed text:", transcript); // テキストを音声に変換 const responseAudio = await voice.speak(`You said: ${transcript}`, { speaker: "default", // オプション:スピーカーを指定 responseFormat: "wav", // オプション:レスポンス形式を指定 }); // 音声レスポンスを再生 playAudio(responseAudio); ``` CompositeVoiceの詳細については、[CompositeVoiceリファレンス](/reference/voice/composite-voice)を参照してください。 ## その他のリソース - [CompositeVoice](../../reference/voice/composite-voice.mdx) - [MastraVoice](../../reference/voice/mastra-voice.mdx) - [OpenAI Voice](../../reference/voice/openai.mdx) - [Azure Voice](../../reference/voice/azure.mdx) - [Google Voice](../../reference/voice/google.mdx) - [Deepgram Voice](../../reference/voice/deepgram.mdx) - [PlayAI Voice](../../reference/voice/playai.mdx) - [音声の例](../../examples/voice/text-to-speech.mdx) --- title: Mastraにおける音声対音声機能 | Mastra Docs description: Mastraにおける音声対音声機能の概要(リアルタイム対話とイベント駆動型アーキテクチャを含む)。 --- # Mastraの音声対音声機能 [JA] Source: https://mastra.ai/ja/docs/voice/speech-to-speech ## はじめに MastraのSpeech-to-Speech(STS)は、複数のプロバイダー間でリアルタイムの対話を行うための標準化されたインターフェースを提供します。 STSは、リアルタイムモデルからのイベントをリッスンすることで、継続的な双方向オーディオ通信を可能にします。個別のTTSとSTT操作とは異なり、STSは両方向で継続的に音声を処理するオープン接続を維持します。 ## 設定 - **`chatModel`**: リアルタイムモデルの設定。 - **`apiKey`**: あなたのOpenAI APIキー。`OPENAI_API_KEY`環境変数にフォールバックします。 - **`model`**: リアルタイム音声インタラクションに使用するモデルID(例:`gpt-4o-mini-realtime`)。 - **`options`**: セッション設定などのリアルタイムクライアントの追加オプション。 - **`speaker`**: 音声合成のデフォルトボイスID。音声出力に使用するボイスを指定できます。 ```typescript const voice = new OpenAIRealtimeVoice({ chatModel: { apiKey: 'your-openai-api-key', model: 'gpt-4o-mini-realtime', options: { sessionConfig: { 
```typescript
const voice = new OpenAIRealtimeVoice({
  chatModel: {
    apiKey: 'your-openai-api-key',
    model: 'gpt-4o-mini-realtime',
    options: {
      sessionConfig: {
        turn_detection: {
          type: 'server_vad',
          threshold: 0.6,
          silence_duration_ms: 1200,
        },
      },
    },
  },
  speaker: 'alloy', // Default voice
});

// If you are using the default settings, the configuration can be simplified to:
const voice = new OpenAIRealtimeVoice();
```

## Using STS

```typescript
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";
import { OpenAIRealtimeVoice } from "@mastra/voice-openai-realtime";
import { playAudio, getMicrophoneStream } from "@mastra/node-audio";

const agent = new Agent({
  name: 'Agent',
  instructions: `You are a helpful assistant with real-time voice capabilities.`,
  model: openai('gpt-4o'),
  voice: new OpenAIRealtimeVoice(),
});

// Connect to the voice service
await agent.voice.connect();

// Listen for agent audio responses
agent.voice.on('speaker', ({ audio }) => {
  playAudio(audio);
});

// Initiate the conversation
await agent.voice.speak('How can I help you today?');

// Send continuous audio from the microphone
const micStream = getMicrophoneStream();
await agent.voice.send(micStream);
```

To integrate speech-to-speech capabilities into your agents, see the [Adding Voice to Agents](../agents/adding-voice.mdx) documentation.

---
title: Speech-to-Text (STT) in Mastra | Mastra Docs
description: Overview of speech-to-text capabilities in Mastra, including configuration, usage, and integration with voice providers.
---

# Speech-to-Text (STT)

[JA] Source: https://mastra.ai/ja/docs/voice/speech-to-text

Speech-to-Text (STT) in Mastra provides a standardized interface for converting audio input into text across multiple service providers.

STT helps you build voice-enabled applications that can respond to human speech, enabling hands-free interaction, accessibility for users with disabilities, and more natural human-computer interfaces.

## Configuration

To use STT in Mastra, you provide a `listeningModel` when initializing the voice provider. This includes parameters such as:

- **`name`**: The specific speech recognition model to use.
- **`apiKey`**: Your API key for authentication.
- **Provider-specific options**: Additional options required or supported by the specific voice provider.

**Note**: All of these parameters are optional. You can rely on the voice provider's default settings, which vary by provider.

```typescript
const voice = new OpenAIVoice({
  listeningModel: {
    name: "whisper-1",
    apiKey: process.env.OPENAI_API_KEY,
  },
});

// If you are using the default settings, the configuration can be simplified to:
const voice = new OpenAIVoice();
```

## Available providers

Mastra supports several speech-to-text providers, each with its own capabilities and strengths:

- [**OpenAI**](/reference/voice/openai/) - High-accuracy transcription with Whisper models
- [**Azure**](/reference/voice/azure/) - Speech recognition with Microsoft's enterprise-grade reliability
- [**ElevenLabs**](/reference/voice/elevenlabs/) - Advanced speech recognition with support for multiple languages
- [**Google**](/reference/voice/google/) - Google's speech recognition with broad language support
- [**Cloudflare**](/reference/voice/cloudflare/) - Edge-optimized speech recognition for low-latency applications
- [**Deepgram**](/reference/voice/deepgram/) - High-accuracy, AI-powered speech recognition that handles a range of accents
- [**Sarvam**](/reference/voice/sarvam/) - Specialized in Indic languages and accents

Each provider is implemented as a separate package that you can install as needed:

```bash
pnpm add @mastra/voice-openai  # Example for OpenAI
```

## Using the listen method

The primary STT method is `listen()`, which converts speech to text. Here is how to use it:

```typescript
import { Agent } from '@mastra/core/agent';
import { openai } from '@ai-sdk/openai';
import { OpenAIVoice } from '@mastra/voice-openai';
import { getMicrophoneStream } from "@mastra/node-audio";

const voice = new OpenAIVoice();

const agent = new Agent({
  name: "Voice Agent",
  instructions: "You are a voice assistant that provides recommendations based on user input.",
  model: openai("gpt-4o"),
  voice,
});

const audioStream = getMicrophoneStream(); // Assume this function gets audio input

const transcript = await agent.voice.listen(audioStream, {
  filetype: "m4a", // Optional: specify the audio file type
});
console.log(`User said: ${transcript}`);

const { text } = await agent.generate(`Based on what the user said, provide them a recommendation: ${transcript}`);
console.log(`Recommendation: ${text}`);
```

To learn how to use STT with agents, see the [Adding Voice to Agents](../agents/adding-voice.mdx) documentation.
---
title: Text-to-Speech (TTS) in Mastra | Mastra Docs
description: Overview of text-to-speech capabilities in Mastra, including configuration, usage, and integration with voice providers.
---

# Text-to-Speech (TTS)

[JA] Source: https://mastra.ai/ja/docs/voice/text-to-speech

Text-to-Speech (TTS) in Mastra provides a unified API for synthesizing speech from text using a variety of providers.

By adding TTS to your applications, you can improve the user experience with natural voice interactions, improve accessibility for visually impaired users, and create more engaging multimodal interfaces.

TTS is a core component of any voice application. Combined with STT (speech-to-text), it forms the foundation of voice interaction systems. Newer models support STS ([speech-to-speech](./speech-to-speech)) and can be used for real-time interactions, but at a higher cost ($).

## Configuration

To use TTS in Mastra, you provide a `speechModel` when initializing the voice provider. This includes parameters such as:

- **`name`**: The specific TTS model to use.
- **`apiKey`**: Your API key for authentication.
- **Provider-specific options**: Additional options required or supported by the specific voice provider.

The **`speaker`** option lets you select different voices for speech synthesis. Each provider offers a range of voice options with different characteristics in terms of **voice diversity**, **quality**, **voice personality**, and **multilingual support**.

**Note**: All of these parameters are optional. You can rely on the voice provider's default settings, which vary by provider.

```typescript
const voice = new OpenAIVoice({
  speechModel: {
    name: "tts-1-hd",
    apiKey: process.env.OPENAI_API_KEY
  },
  speaker: "alloy",
});

// If you are using the default settings, the configuration can be simplified to:
const voice = new OpenAIVoice();
```

## Available providers

Mastra supports a wide range of text-to-speech providers, each with its own capabilities and voice options. You can choose the provider that best fits your application's needs:

- [**OpenAI**](/reference/voice/openai/) - High-quality voices with natural intonation and expression
- [**Azure**](/reference/voice/azure/) - Microsoft's speech service with a wide range of voices and languages
- [**ElevenLabs**](/reference/voice/elevenlabs/) - Ultra-realistic voices with emotion and fine-grained control
- [**PlayAI**](/reference/voice/playai/) - Specialized in natural voices in a variety of styles
- [**Google**](/reference/voice/google/) - Google's speech synthesis with multilingual support
- [**Cloudflare**](/reference/voice/cloudflare/) - Edge-optimized speech synthesis for low-latency applications
- [**Deepgram**](/reference/voice/deepgram/) - High-accuracy, AI-driven speech technology
- [**Speechify**](/reference/voice/speechify/) - Text-to-speech optimized for readability and accessibility
- [**Sarvam**](/reference/voice/sarvam/) - Specialized in Indic languages and accents
- [**Murf**](/reference/voice/murf/) - Studio-quality voiceover with customizable parameters

Each provider is implemented as a separate package that you can install as needed:

```bash
pnpm add @mastra/voice-openai  # Example for OpenAI
```

## Using the speak method

The primary TTS method is `speak()`, which converts text to speech. It accepts options that let you specify a speaker and other provider-specific settings. Here is how to use it:

```typescript
import { Agent } from '@mastra/core/agent';
import { openai } from '@ai-sdk/openai';
import { OpenAIVoice } from '@mastra/voice-openai';

const voice = new OpenAIVoice();

const agent = new Agent({
  name: "Voice Agent",
  instructions: "You are a voice assistant that can help users with their tasks.",
  model: openai("gpt-4o"),
  voice,
});

const { text } = await agent.generate('What color is the sky?');

// Convert text to speech to an Audio Stream
const readableStream = await voice.speak(text, {
  speaker: "default", // Optional: specify a speaker
  properties: {
    speed: 1.0, // Optional: adjust speech speed
    pitch: "default", // Optional: specify pitch if supported
  },
});
```

To learn how to add voice to agents, see the [Adding Voice to Agents](../agents/adding-voice.mdx) documentation.

---
title: "Branching, Merging, Conditions | Workflows | Mastra Docs"
description: "Control flow in Mastra workflows lets you manage branching, merging, and conditions to build workflows that meet your logic requirements."
---

# Control Flow in Workflows: Branching, Merging, and Conditions

[JA] Source: https://mastra.ai/ja/docs/workflows/control-flow

When you create a multi-step process, you may need to run steps in parallel, chain them sequentially, or follow different paths based on results. This page describes how you can manage branching, merging, and conditions to build workflows that meet your logic requirements. The code snippets show the key patterns for structuring complex control flow.

## Parallel execution

You can run steps that have no dependencies on each other at the same time. This approach can speed up your workflow when steps perform independent tasks. The code below shows how to add two steps in parallel:

```typescript
myWorkflow.step(fetchUserData).step(fetchOrderData);
```

For more details, see the [Parallel Steps](../../examples/workflows/parallel-steps.mdx) example.
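The snippet above assumes `fetchUserData` and `fetchOrderData` are already defined. As a minimal, hypothetical sketch (the return shapes here are illustrative, not part of the example above), two independent steps might look like this:

```typescript
import { Step } from "@mastra/core/workflows";
import { z } from "zod";

// Neither step reads the other's output, so the workflow can run them concurrently.
const fetchUserData = new Step({
  id: "fetchUserData",
  outputSchema: z.object({ userName: z.string() }),
  execute: async () => {
    return { userName: "Alice" }; // stand-in for a real user lookup
  },
});

const fetchOrderData = new Step({
  id: "fetchOrderData",
  outputSchema: z.object({ orderCount: z.number() }),
  execute: async () => {
    return { orderCount: 3 }; // stand-in for a real order lookup
  },
});
```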
## Sequential execution

Sometimes you need to run steps in strict order, where the output of one step becomes the input of the next. Use .then() to link dependent operations. The code below shows how to chain steps sequentially:

```typescript
myWorkflow.step(fetchOrderData).then(validateData).then(processOrder);
```

For more details, see the [Sequential Steps](../../examples/workflows/sequential-steps.mdx) example.

## Branching and merging paths

Branching is useful when different outcomes require different paths. You can also merge paths back together later. The code below shows how to branch after stepA and converge later at stepF:

```typescript
myWorkflow
  .step(stepA)
  .then(stepB)
  .then(stepD)
  .after(stepA)
  .step(stepC)
  .then(stepE)
  .after([stepD, stepE])
  .step(stepF);
```

In this example:

- stepA proceeds to stepB, then to stepD.
- Separately, stepA also triggers stepC, which leads to stepE.
- Separately, stepF is triggered once both stepD and stepE have completed.

For more details, see the [Branching Paths](../../examples/workflows/branching-paths.mdx) example.

## Merging multiple branches

Sometimes a step should run only after several other steps have completed. Mastra provides a compound `.after([])` syntax that lets you specify multiple dependencies for a step.

```typescript
myWorkflow
  .step(fetchUserData)
  .then(validateUserData)
  .step(fetchProductData)
  .then(validateProductData)
  // This step runs only after both validateUserData and validateProductData have completed
  .after([validateUserData, validateProductData])
  .step(processOrder)
```

In this example:

- `fetchUserData` and `fetchProductData` run in parallel branches
- Each branch has its own validation step
- The `processOrder` step runs only after both validation steps have completed successfully

This pattern is particularly useful for:

- Joining parallel execution paths
- Implementing synchronization points in a workflow
- Ensuring all required data is available before proceeding

You can also create complex dependency patterns by combining multiple `.after([])` calls:

```typescript
myWorkflow
  // First branch
  .step(stepA)
  .then(stepB)
  .then(stepC)

  // Second branch
  .step(stepD)
  .then(stepE)

  // Third branch
  .step(stepF)
  .then(stepG)

  // This step depends on the completion of multiple branches
  .after([stepC, stepE, stepG])
  .step(finalStep)
```

## Cyclical dependencies and loops

Workflows often need to repeat a step until a certain condition is met. Mastra provides two powerful methods for creating loops: `until` and `while`. These methods offer an intuitive way to implement repetitive tasks.

### Using manual cyclical dependencies (legacy approach)

In earlier versions, you could create loops by manually defining cyclical dependencies with conditions:

```typescript
myWorkflow
  .step(fetchData)
  .then(processData)
  .after(processData)
  .step(finalizeData, {
    when: { "processData.status": "success" },
  })
  .step(fetchData, {
    when: { "processData.status": "retry" },
  });
```

This approach still works, but the newer `until` and `while` methods provide a cleaner, more maintainable way to create loops.

### Using `until` for condition-based loops

The `until` method repeats a step until a specified condition becomes true. It takes the following arguments:

1. A condition that determines when to stop looping
2. The step to repeat
3. Optional variables to pass to the repeated step

```typescript
// Step that increments a counter until a target is reached
const incrementStep = new Step({
  id: 'increment',
  inputSchema: z.object({
    // The current counter value
    counter: z.number().optional(),
  }),
  outputSchema: z.object({
    // The updated counter value
    updatedCounter: z.number(),
  }),
  execute: async ({ context }) => {
    const { counter = 0 } = context.inputData;
    return { updatedCounter: counter + 1 };
  },
});

workflow
  .step(incrementStep)
  .until(
    async ({ context }) => {
      // Stop when the counter reaches 10
      const result = context.getStepResult(incrementStep);
      return (result?.updatedCounter ?? 0) >= 10;
    },
    incrementStep,
    {
      // Pass the current counter to the next iteration
      counter: { step: incrementStep, path: 'updatedCounter' }
    }
  )
  .then(finalStep);
```

You can also use a reference-based condition:

```typescript
workflow
  .step(incrementStep)
  .until(
    {
      ref: { step: incrementStep, path: 'updatedCounter' },
      query: { $gte: 10 },
    },
    incrementStep,
    {
      counter: { step: incrementStep, path: 'updatedCounter' }
    }
  )
  .then(finalStep);
```
### Using `while` for condition-based loops

The `while` method repeats a step as long as a specified condition remains true. It takes the same arguments as `until`:

1. A condition that determines when to continue looping
2. The step to repeat
3. Optional variables to pass to the repeated step

```typescript
// Step that increments a counter while it is below the target
const incrementStep = new Step({
  id: 'increment',
  inputSchema: z.object({
    // The current counter value
    counter: z.number().optional(),
  }),
  outputSchema: z.object({
    // The updated counter value
    updatedCounter: z.number(),
  }),
  execute: async ({ context }) => {
    const { counter = 0 } = context.inputData;
    return { updatedCounter: counter + 1 };
  },
});

workflow
  .step(incrementStep)
  .while(
    async ({ context }) => {
      // Continue while the counter is below 10
      const result = context.getStepResult(incrementStep);
      return (result?.updatedCounter ?? 0) < 10;
    },
    incrementStep,
    {
      // Pass the current counter to the next iteration
      counter: { step: incrementStep, path: 'updatedCounter' }
    }
  )
  .then(finalStep);
```

You can also use a reference-based condition:

```typescript
workflow
  .step(incrementStep)
  .while(
    {
      ref: { step: incrementStep, path: 'updatedCounter' },
      query: { $lt: 10 },
    },
    incrementStep,
    {
      counter: { step: incrementStep, path: 'updatedCounter' }
    }
  )
  .then(finalStep);
```

### Comparison operators for reference conditions

When using reference-based conditions, the following comparison operators are available:

| Operator | Description |
|----------|-------------|
| `$eq`    | Equal to |
| `$ne`    | Not equal to |
| `$gt`    | Greater than |
| `$gte`   | Greater than or equal to |
| `$lt`    | Less than |
| `$lte`   | Less than or equal to |

## Conditions

Use the `when` property to control whether a step runs based on data from previous steps. Below are three ways to specify conditions.

### Option 1: Function

```typescript
myWorkflow.step(
  new Step({
    id: "processData",
    execute: async ({ context }) => {
      // Action logic
    },
  }),
  {
    when: async ({ context }) => {
      const fetchData = context?.getStepResult<{ status: string }>("fetchData");
      return fetchData?.status === "success";
    },
  },
);
```

### Option 2: Query object

```typescript
myWorkflow.step(
  new Step({
    id: "processData",
    execute: async ({ context }) => {
      // Action logic
    },
  }),
  {
    when: {
      ref: {
        step: {
          id: "fetchData",
        },
        path: "status",
      },
      query: { $eq: "success" },
    },
  },
);
```

### Option 3: Simple path comparison

```typescript
myWorkflow.step(
  new Step({
    id: "processData",
    execute: async ({ context }) => {
      // Action logic
    },
  }),
  {
    when: {
      "fetchData.status": "success",
    },
  },
);
```

## Data access patterns

Mastra provides several ways to pass data between steps:

1. **Context object** - Access step results directly through the context object
2. **Variable mapping** - Explicitly map one step's output to another step's input
3. **getStepResult method** - A type-safe method for retrieving a step's output

Each approach has its advantages, depending on your use case and type-safety requirements.
### Using the getStepResult method

The `getStepResult` method provides a type-safe way to access step results. When using TypeScript, this approach is recommended because it preserves type information.

#### Basic usage

For better type safety, you can provide a type parameter to `getStepResult`:

```typescript showLineNumbers filename="src/mastra/workflows/get-step-result.ts" copy
import { Step, Workflow } from "@mastra/core/workflows";
import { z } from "zod";

const fetchUserStep = new Step({
  id: 'fetchUser',
  outputSchema: z.object({
    name: z.string(),
    userId: z.string(),
  }),
  execute: async ({ context }) => {
    return { name: 'John Doe', userId: '123' };
  },
});

const analyzeDataStep = new Step({
  id: "analyzeData",
  execute: async ({ context }) => {
    // Type-safe access to previous step result
    const userData = context.getStepResult<{ name: string, userId: string }>("fetchUser");

    if (!userData) {
      return { status: "error", message: "User data not found" };
    }

    return {
      analysis: `Analyzed data for user ${userData.name}`,
      userId: userData.userId
    };
  },
});
```

#### Using step references

The most type-safe approach is to reference the step directly in the `getStepResult` call:

```typescript showLineNumbers filename="src/mastra/workflows/step-reference.ts" copy
import { Step, Workflow } from "@mastra/core/workflows";
import { z } from "zod";

// Define step with output schema
const fetchUserStep = new Step({
  id: "fetchUser",
  outputSchema: z.object({
    userId: z.string(),
    name: z.string(),
    email: z.string(),
  }),
  execute: async () => {
    return {
      userId: "user123",
      name: "John Doe",
      email: "john@example.com"
    };
  },
});

const processUserStep = new Step({
  id: "processUser",
  execute: async ({ context }) => {
    // TypeScript will infer the correct type from fetchUserStep's outputSchema
    const userData = context.getStepResult(fetchUserStep);

    return {
      processed: true,
      userName: userData?.name
    };
  },
});

const workflow = new Workflow({
  name: "user-workflow",
});

workflow
  .step(fetchUserStep)
  .then(processUserStep)
  .commit();
```

### Using variable mapping

Variable mapping is an explicit way to define the data flow between steps. This approach makes dependencies clear and provides excellent type safety. Data injected into a step is available on the `context.inputData` object and is typed based on the step's `inputSchema`.

```typescript showLineNumbers filename="src/mastra/workflows/variable-mapping.ts" copy
import { Step, Workflow } from "@mastra/core/workflows";
import { z } from "zod";

const fetchUserStep = new Step({
  id: "fetchUser",
  outputSchema: z.object({
    userId: z.string(),
    name: z.string(),
    email: z.string(),
  }),
  execute: async () => {
    return {
      userId: "user123",
      name: "John Doe",
      email: "john@example.com"
    };
  },
});

const sendEmailStep = new Step({
  id: "sendEmail",
  inputSchema: z.object({
    recipientEmail: z.string(),
    recipientName: z.string(),
  }),
  execute: async ({ context }) => {
    const { recipientEmail, recipientName } = context.inputData;

    // Send email logic here
    return {
      status: "sent",
      to: recipientEmail
    };
  },
});

const workflow = new Workflow({
  name: "email-workflow",
});

workflow
  .step(fetchUserStep)
  .then(sendEmailStep, {
    variables: {
      // Map specific fields from fetchUser to sendEmail inputs
      recipientEmail: { step: fetchUserStep, path: 'email' },
      recipientName: { step: fetchUserStep, path: 'name' }
    }
  })
  .commit();
```

For more details on variable mapping, see the [Data Mapping with Workflow Variables](./variables.mdx) documentation.
### Using the context object

The context object provides direct access to all step results and their outputs. This approach is more flexible, but requires careful handling to maintain type safety. You can access step results directly through the `context.steps` object:

```typescript showLineNumbers filename="src/mastra/workflows/context-access.ts" copy
import { Step, Workflow } from "@mastra/core/workflows";
import { z } from "zod";

const processOrderStep = new Step({
  id: 'processOrder',
  execute: async ({ context }) => {
    // Access data from a previous step
    let userData: { name: string, userId: string };
    if (context.steps['fetchUser']?.status === 'success') {
      userData = context.steps.fetchUser.output;
    } else {
      throw new Error('User data not found');
    }

    return {
      orderId: 'order123',
      userId: userData.userId,
      status: 'processing',
    };
  },
});

const workflow = new Workflow({
  name: "order-workflow",
});

workflow
  .step(fetchUserStep)
  .then(processOrderStep)
  .commit();
```

### Workflow-level type safety

For comprehensive type safety across your entire workflow, you can define the types of all steps and pass them to the workflow. This gives you type safety on the context object in conditions, as well as on step results in the final workflow output.

```typescript showLineNumbers filename="src/mastra/workflows/workflow-typing.ts" copy
import { Step, Workflow } from "@mastra/core/workflows";
import { z } from "zod";

// Create steps with typed outputs
const fetchUserStep = new Step({
  id: "fetchUser",
  outputSchema: z.object({
    userId: z.string(),
    name: z.string(),
    email: z.string(),
  }),
  execute: async () => {
    return {
      userId: "user123",
      name: "John Doe",
      email: "john@example.com"
    };
  },
});

const processOrderStep = new Step({
  id: "processOrder",
  execute: async ({ context }) => {
    // TypeScript knows the shape of userData
    const userData = context.getStepResult(fetchUserStep);

    return {
      orderId: "order123",
      status: "processing"
    };
  },
});

const workflow = new Workflow<[typeof fetchUserStep, typeof processOrderStep]>({
  name: "typed-workflow",
});

workflow
  .step(fetchUserStep)
  .then(processOrderStep)
  .until(async ({ context }) => {
    // TypeScript knows the shape of userData here
    const res = context.getStepResult('fetchUser');
    return res?.userId === '123';
  }, processOrderStep)
  .commit();
```

### Accessing trigger data

In addition to step results, you can access the original trigger data that started the workflow:

```typescript showLineNumbers filename="src/mastra/workflows/trigger-data.ts" copy
import { Step, Workflow } from "@mastra/core/workflows";
import { z } from "zod";

// Define trigger schema
const triggerSchema = z.object({
  customerId: z.string(),
  orderItems: z.array(z.string()),
});

type TriggerType = z.infer<typeof triggerSchema>;

const processOrderStep = new Step({
  id: "processOrder",
  execute: async ({ context }) => {
    // Access trigger data with type safety
    const triggerData = context.getStepResult<TriggerType>('trigger');

    return {
      customerId: triggerData?.customerId,
      itemCount: triggerData?.orderItems.length || 0,
      status: "processing"
    };
  },
});

const workflow = new Workflow({
  name: "order-workflow",
  triggerSchema,
});

workflow
  .step(processOrderStep)
  .commit();
```

### Accessing resume data

Data injected into a step is available on the `context.inputData` object and is typed based on the step's `inputSchema`.

```typescript showLineNumbers filename="src/mastra/workflows/resume-data.ts" copy
import { Step, Workflow } from "@mastra/core/workflows";
import { z } from "zod";

const processOrderStep = new Step({
  id: "processOrder",
  inputSchema: z.object({
    orderId: z.string(),
  }),
  execute: async ({ context, suspend }) => {
    const { orderId } = context.inputData;

    if (!orderId) {
      await suspend();
      return;
    }

    return {
      orderId,
      status: "processed"
    };
  },
});

const workflow = new Workflow({
  name: "order-workflow",
});

workflow
  .step(processOrderStep)
  .commit();

const run = workflow.createRun();
const result = await run.start();

const resumedResult = await workflow.resume({
  runId: result.runId,
  stepId: 'processOrder',
  inputData: {
    orderId: '123',
  },
});

console.log({resumedResult});
```
### Accessing workflow results

You can get typed access to workflow results by injecting the step types into the `Workflow` type parameter:

```typescript showLineNumbers filename="src/mastra/workflows/get-results.ts" copy
import { Step, Workflow } from "@mastra/core/workflows";
import { z } from "zod";

const fetchUserStep = new Step({
  id: "fetchUser",
  outputSchema: z.object({
    userId: z.string(),
    name: z.string(),
    email: z.string(),
  }),
  execute: async () => {
    return {
      userId: "user123",
      name: "John Doe",
      email: "john@example.com"
    };
  },
});

const processOrderStep = new Step({
  id: "processOrder",
  outputSchema: z.object({
    orderId: z.string(),
    status: z.string(),
  }),
  execute: async ({ context }) => {
    const userData = context.getStepResult(fetchUserStep);
    return {
      orderId: "order123",
      status: "processing"
    };
  },
});

const workflow = new Workflow<[typeof fetchUserStep, typeof processOrderStep]>({
  name: "typed-workflow",
});

workflow
  .step(fetchUserStep)
  .then(processOrderStep)
  .commit();

const run = workflow.createRun();
const result = await run.start();

// The result is a discriminated union of step results,
// so it must be narrowed via status checks
if (result.results.processOrder.status === 'success') {
  // TypeScript knows the shape of the result
  const orderId = result.results.processOrder.output.orderId;
  console.log({orderId});
}

if (result.results.fetchUser.status === 'success') {
  const userId = result.results.fetchUser.output.userId;
  console.log({userId});
}
```

### Data flow best practices

1. **Use getStepResult with step references for type safety**
   - Lets TypeScript infer the correct types
   - Catches type errors at compile time
2. **Use variable mapping for explicit dependencies**
   - Makes the data flow clear and maintainable
   - Provides good documentation of step dependencies
3. **Define output schemas for steps**
   - Validates data at runtime
   - Validates the return type of the `execute` function
   - Improves type inference in TypeScript
4. **Handle missing data gracefully**
   - Always check whether a step result exists before accessing its properties
   - Provide fallback values for optional data
5. **Keep data transformations simple**
   - Transform data in dedicated steps rather than in variable mappings
   - Makes workflows easier to test and debug

### Comparison of data flow methods

| Method | Type safety | Explicitness | Use case |
|--------|-------------|--------------|----------|
| getStepResult | Highest | High | Complex workflows with strict typing requirements |
| Variable mapping | High | High | When dependencies need to be clear and explicit |
| context.steps | Moderate | Low | Quick access to step data in simple workflows |

By choosing the right data flow method for your use case, you can create workflows that are both type-safe and maintainable.
---
title: "Dynamic Workflows | Mastra Docs"
description: "Learn how to create dynamic workflows within workflow steps, enabling flexible workflow creation based on runtime conditions."
---

# Dynamic Workflows

[JA] Source: https://mastra.ai/ja/docs/workflows/dynamic-workflows

This guide shows how to create dynamic workflows within a workflow step. This advanced pattern lets you create and run workflows on the fly based on runtime conditions.

## Overview

Dynamic workflows are useful when you need to build a workflow based on runtime data.

## Implementation

The key to creating a dynamic workflow is accessing the Mastra instance from within a step's `execute` function and using it to create and run a new workflow.

### Basic example

```typescript
import { Mastra, Step, Workflow } from '@mastra/core';
import { z } from 'zod';

const isMastra = (mastra: any): mastra is Mastra => {
  return mastra && typeof mastra === 'object' && mastra instanceof Mastra;
};

// Step that creates and runs a dynamic workflow
const createDynamicWorkflow = new Step({
  id: 'createDynamicWorkflow',
  outputSchema: z.object({
    dynamicWorkflowResult: z.any(),
  }),
  execute: async ({ context, mastra }) => {
    if (!mastra) {
      throw new Error('Mastra instance not available');
    }

    if (!isMastra(mastra)) {
      throw new Error('Invalid Mastra instance');
    }

    const inputData = context.triggerData.inputData;

    // Create a new dynamic workflow
    const dynamicWorkflow = new Workflow({
      name: 'dynamic-workflow',
      mastra, // Pass the mastra instance to the new workflow
      triggerSchema: z.object({
        dynamicInput: z.string(),
      }),
    });

    // Define a step for the dynamic workflow
    const dynamicStep = new Step({
      id: 'dynamicStep',
      execute: async ({ context }) => {
        const dynamicInput = context.triggerData.dynamicInput;
        return {
          processedValue: `Processed: ${dynamicInput}`,
        };
      },
    });

    // Build and commit the dynamic workflow
    dynamicWorkflow.step(dynamicStep).commit();

    // Create a run and execute the dynamic workflow
    const run = dynamicWorkflow.createRun();
    const result = await run.start({
      triggerData: {
        dynamicInput: inputData,
      },
    });

    let dynamicWorkflowResult;

    if (result.results['dynamicStep']?.status === 'success') {
      dynamicWorkflowResult = result.results['dynamicStep']?.output.processedValue;
    } else {
      throw new Error('Dynamic workflow failed');
    }

    // Return the result from the dynamic workflow
    return {
      dynamicWorkflowResult,
    };
  },
});

// Main workflow that uses the dynamic workflow creator
const mainWorkflow = new Workflow({
  name: 'main-workflow',
  triggerSchema: z.object({
    inputData: z.string(),
  }),
  mastra: new Mastra(),
});

mainWorkflow.step(createDynamicWorkflow).commit();

// Register the workflow with Mastra
export const mastra = new Mastra({
  workflows: { mainWorkflow },
});

const run = mainWorkflow.createRun();
const result = await run.start({
  triggerData: {
    inputData: 'test',
  },
});
```

## Advanced example: workflow factory

You can create a workflow factory that generates different workflows based on input parameters:

```typescript
const isMastra = (mastra: any): mastra is Mastra => {
  return mastra && typeof mastra === 'object' && mastra instanceof Mastra;
};

const workflowFactory = new Step({
  id: 'workflowFactory',
  inputSchema: z.object({
    workflowType: z.enum(['simple', 'complex']),
    inputData: z.string(),
  }),
  outputSchema: z.object({
    result: z.any(),
  }),
  execute: async ({ context, mastra }) => {
    if (!mastra) {
      throw new Error('Mastra instance not available');
    }

    if (!isMastra(mastra)) {
      throw new Error('Invalid Mastra instance');
    }

    // Create a new dynamic workflow based on the type
    const dynamicWorkflow = new Workflow({
      name: `dynamic-${context.workflowType}-workflow`,
      mastra,
      triggerSchema: z.object({
        input: z.string(),
      }),
    });

    if (context.workflowType === 'simple') {
      // Simple workflow with a single step
      const simpleStep = new Step({
        id: 'simpleStep',
        execute: async ({ context }) => {
          return {
            result: `Simple processing: ${context.triggerData.input}`,
          };
        },
      });

      dynamicWorkflow.step(simpleStep).commit();
    } else {
      // Complex workflow with multiple steps
      const step1 = new Step({
        id: 'step1',
        outputSchema: z.object({
          intermediateResult: z.string(),
        }),
        execute: async ({ context }) => {
          return {
            intermediateResult: `First processing: ${context.triggerData.input}`,
          };
        },
      });

      const step2 = new Step({
        id: 'step2',
        execute: async ({ context }) => {
          const intermediate = context.getStepResult(step1).intermediateResult;
          return {
            finalResult: `Second processing: ${intermediate}`,
          };
        },
      });

      dynamicWorkflow.step(step1).then(step2).commit();
    }

    // Run the dynamic workflow
    const run = dynamicWorkflow.createRun();
    const result = await run.start({
      triggerData: {
        input: context.inputData,
      },
    });

    // Return the appropriate result based on the workflow type
    if (context.workflowType === 'simple') {
      return {
        // @ts-ignore
        result: result.results['simpleStep']?.output,
      };
    } else {
      return {
        // @ts-ignore
        result: result.results['step2']?.output,
      };
    }
  },
});
```
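To tie this together, here is a hedged sketch of how the factory step might be wired into a parent workflow; the workflow name and trigger mapping are illustrative assumptions that mirror the basic example above:

```typescript
// Hypothetical parent workflow that drives the factory step.
const factoryWorkflow = new Workflow({
  name: 'factory-workflow',
  triggerSchema: z.object({
    workflowType: z.enum(['simple', 'complex']),
    inputData: z.string(),
  }),
  mastra: new Mastra(),
});

factoryWorkflow
  .step(workflowFactory, {
    variables: {
      // Map trigger fields onto the factory step's inputSchema
      workflowType: { step: 'trigger', path: 'workflowType' },
      inputData: { step: 'trigger', path: 'inputData' },
    },
  })
  .commit();

const factoryRun = factoryWorkflow.createRun();
const factoryResult = await factoryRun.start({
  triggerData: { workflowType: 'complex', inputData: 'test' },
});
```

Note that the factory above reads `context.workflowType` directly; depending on your Mastra version, values mapped through `inputSchema` may arrive under `context.inputData` instead, so adjust the access pattern to match.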
## Key considerations

1. **Mastra instance**: The `mastra` parameter in the `execute` function provides access to the Mastra instance, which is essential for creating dynamic workflows.
2. **Error handling**: Always check whether the Mastra instance is available before attempting to create a dynamic workflow.
3. **Resource management**: Dynamic workflows consume resources, so be careful not to create too many workflows in a single run.
4. **Workflow lifecycle**: Dynamic workflows are not automatically registered with the main Mastra instance. Unless you register them explicitly, they exist only for the duration of the step execution.
5. **Debugging**: Debugging dynamic workflows can be challenging. Consider adding detailed logging to track their creation and execution.

## Use cases

- **Conditional workflow selection**: choose different workflow patterns based on input data
- **Parameterized workflows**: create workflows with dynamic configuration
- **Workflow templates**: use templates to generate specialized workflows
- **Multi-tenant applications**: create isolated workflows for different tenants

## Conclusion

Dynamic workflows provide a powerful way to build flexible, adaptive workflow systems. By leveraging the Mastra instance inside step execution, you can create workflows that respond to runtime conditions and requirements.

---
title: "Error Handling in Workflows | Mastra Docs"
description: "Learn how to handle errors in Mastra workflows using step retries, conditional branching, and monitoring."
---

# Error Handling in Workflows

[JA] Source: https://mastra.ai/ja/docs/workflows/error-handling

Robust error handling is essential for production workflows. Mastra provides several mechanisms for handling errors gracefully, allowing workflows to recover from failures or degrade gracefully when necessary.

## Overview

Error handling in Mastra workflows can be implemented using:

1. **Step retries** - Automatically retry failed steps
2. **Conditional branching** - Create alternative paths based on step success or failure
3. **Error monitoring** - Watch workflows for errors and handle them programmatically
4. **Result status checks** - Check the status of previous steps in subsequent steps

## Step retries

Mastra provides a built-in retry mechanism for steps that fail due to transient errors. This is particularly useful for steps that interact with external services or temporarily unavailable resources.

### Basic retry configuration

Retries can be configured at the workflow level or for individual steps:

```typescript
// Workflow-level retry configuration
const workflow = new Workflow({
  name: 'my-workflow',
  retryConfig: {
    attempts: 3, // Number of retry attempts
    delay: 1000, // Delay between retries in milliseconds
  },
});

// Step-level retry configuration (overrides the workflow level)
const apiStep = new Step({
  id: 'callApi',
  execute: async () => {
    // API call that may fail
  },
  retryConfig: {
    attempts: 5, // This step retries up to 5 times
    delay: 2000, // With a 2-second delay between retries
  },
});
```

For more details on step retries, see the [Step Retries](../reference/workflows/step-retries.mdx) reference.

## Conditional branching

You can use conditional logic to create alternative workflow paths based on the success or failure of previous steps:

```typescript
// Create a workflow with conditional branching
const workflow = new Workflow({
  name: 'error-handling-workflow',
});

workflow
  .step(fetchDataStep)
  .then(processDataStep, {
    // Only execute processDataStep if fetchDataStep was successful
    when: ({ context }) => {
      return context.steps.fetchDataStep?.status === 'success';
    },
  })
  .then(fallbackStep, {
    // Execute fallbackStep if fetchDataStep failed
    when: ({ context }) => {
      return context.steps.fetchDataStep?.status === 'failed';
    },
  })
  .commit();
```

## Error monitoring

You can monitor workflows for errors using the `watch` method:

```typescript
const { start, watch } = workflow.createRun();

watch(async ({ results }) => {
  // Check for any failed steps
  const failedSteps = Object.entries(results)
    .filter(([_, step]) => step.status === "failed")
    .map(([stepId]) => stepId);

  if (failedSteps.length > 0) {
    console.error(`Workflow has failed steps: ${failedSteps.join(', ')}`);
    // Take corrective action, such as alerting or logging
  }
});

await start();
```

## Handling errors in steps

Within a step's execution function, you can handle errors programmatically:

```typescript
const robustStep = new Step({
  id: 'robustStep',
  execute: async ({ context }) => {
    try {
      // Attempt the primary operation
      const result = await someRiskyOperation();
      return { success: true, data: result };
    } catch (error) {
      // Log the error
      console.error('Operation failed:', error);

      // Return a graceful fallback result instead of throwing
      return {
        success: false,
        error: error.message,
        fallbackData: 'default value'
      };
    }
  },
});
```

## Checking previous step results

You can make decisions based on the results of previous steps:

```typescript
const finalStep = new Step({
  id: 'finalStep',
  execute: async ({ context }) => {
    // Check results of previous steps
    const step1Success = context.steps.step1?.status === 'success';
    const step2Success = context.steps.step2?.status === 'success';

    if (step1Success && step2Success) {
      // All steps succeeded
      return { status: 'complete', result: 'All operations succeeded' };
    } else if (step1Success) {
      // Only step1 succeeded
      return { status: 'partial', result: 'Partial completion' };
    } else {
      // Critical failure
      return { status: 'failed', result: 'Critical steps failed' };
    }
  },
});
```
## Error handling best practices

1. **Use retries for transient failures**: Configure retry policies for steps that may encounter temporary issues.
2. **Provide fallback paths**: Design alternative paths for cases where critical steps fail.
3. **Be specific about error scenarios**: Use different handling strategies for different kinds of errors.
4. **Log errors comprehensively**: Include context information when logging errors to aid debugging.
5. **Return meaningful data on failure**: When a step fails, return structured data about the failure so subsequent steps can make decisions.
6. **Consider idempotency**: Make sure steps can be retried safely without causing duplicate side effects.
7. **Monitor workflow execution**: Use the `watch` method to actively monitor workflow execution and detect errors early.

## Advanced error handling

For more complex error handling scenarios, consider:

- **Implementing circuit breakers**: If a step fails repeatedly, stop retrying and use a fallback strategy
- **Adding timeout handling**: Set time limits for steps to prevent workflows from hanging indefinitely (see the sketch after this list)
- **Creating dedicated error recovery workflows**: For critical workflows, create a separate recovery workflow that is triggered when the main workflow fails
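As one way to realize the timeout idea from the list above, a step can race its work against a timer. This is a minimal sketch; `someRiskyOperation` is the same illustrative helper used earlier, and the 5-second limit is an arbitrary assumption:

```typescript
const timeboxedStep = new Step({
  id: 'timeboxedStep',
  execute: async () => {
    // Reject if the operation takes longer than 5 seconds
    const timeout = new Promise<never>((_, reject) =>
      setTimeout(() => reject(new Error('Step timed out after 5s')), 5000),
    );

    try {
      const result = await Promise.race([someRiskyOperation(), timeout]);
      return { success: true, data: result };
    } catch (error) {
      // Return structured failure data instead of hanging the workflow
      return { success: false, error: (error as Error).message };
    }
  },
});
```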
## Related

- [Step Retries Reference](../../reference/workflows/step-retries.mdx)
- [Watch Method Reference](../../reference/workflows/watch.mdx)
- [Step Conditions](../../reference/workflows/step-condition.mdx)
- [Control Flow](./control-flow.mdx)

# Nested Workflows

[JA] Source: https://mastra.ai/ja/docs/workflows/nested-workflows

Mastra lets you use a workflow as a step inside another workflow, so you can create modular, reusable workflow components. This feature helps you organize complex workflows into smaller, more manageable pieces and encourages code reuse.

It is also easier to understand a workflow's flow when you can see nested workflows visually represented as steps in the parent workflow.

## Basic usage

You can use one workflow directly as a step in another workflow via the `step()` method:

```typescript
// Create a nested workflow
const nestedWorkflow = new Workflow({ name: "nested-workflow" })
  .step(stepA)
  .then(stepB)
  .commit();

// Use the nested workflow in a parent workflow
const parentWorkflow = new Workflow({ name: "parent-workflow" })
  .step(nestedWorkflow, {
    variables: {
      city: {
        step: "trigger",
        path: "myTriggerInput",
      },
    },
  })
  .then(stepC)
  .commit();
```

When a workflow is used as a step:

- It is automatically converted into a step, using the workflow's name as the step ID
- The workflow's results become available in the parent workflow's context
- The nested workflow's steps run in their defined order

## Accessing results

Results from a nested workflow are available in the parent workflow's context under the nested workflow's name. The results include all step outputs from the nested workflow:

```typescript
const { results } = await parentWorkflow.start();

// Access nested workflow results
const nestedWorkflowResult = results["nested-workflow"];
if (nestedWorkflowResult.status === "success") {
  const nestedResults = nestedWorkflowResult.output.results;
}
```

## Flow control with nested workflows

Nested workflows support all of the flow control features available to regular steps:

### Parallel execution

Multiple nested workflows can run in parallel:

```typescript
parentWorkflow
  .step(nestedWorkflowA)
  .step(nestedWorkflowB)
  .after([nestedWorkflowA, nestedWorkflowB])
  .step(finalStep);
```

Or using `step()` with an array of workflows:

```typescript
parentWorkflow.step([nestedWorkflowA, nestedWorkflowB]).then(finalStep);
```

In this case, `then()` implicitly waits for all the workflows to finish before running the final step.

### If-else branching

Nested workflows can be used in if-else branches with a newer syntax that accepts both branches as arguments:

```typescript
// Create nested workflows for the different paths
const workflowA = new Workflow({ name: "workflow-a" })
  .step(stepA1)
  .then(stepA2)
  .commit();

const workflowB = new Workflow({ name: "workflow-b" })
  .step(stepB1)
  .then(stepB2)
  .commit();

// Use the new if-else syntax with nested workflows
parentWorkflow
  .step(initialStep)
  .if(
    async ({ context }) => {
      // Your condition here
      return someCondition;
    },
    workflowA, // if branch
    workflowB, // else branch
  )
  .then(finalStep)
  .commit();
```

The new syntax is more concise and clearer when working with nested workflows. When the condition is:

- `true`: the first workflow (the if branch) runs
- `false`: the second workflow (the else branch) runs

The skipped workflow gets a `skipped` status in the results.

The `.then(finalStep)` call that follows the if-else block merges the if and else branches back into a single execution path.

### Looping

Nested workflows can use `.until()` and `.while()` loops just like any other step. One interesting new pattern is passing a workflow directly as the loop-back argument, which keeps running that nested workflow until something about its result becomes true:

```typescript
parentWorkflow
  .step(firstStep)
  .while(
    ({ context }) =>
      context.getStepResult("nested-workflow").output.results.someField === "someValue",
    nestedWorkflow,
  )
  .step(finalStep)
  .commit();
```

## Watching nested workflows

You can observe the state changes of nested workflows using the `watch` method on the parent workflow. This is useful for monitoring the progress and state transitions of complex workflows:

```typescript
const parentWorkflow = new Workflow({ name: "parent-workflow" })
  .step([nestedWorkflowA, nestedWorkflowB])
  .then(finalStep)
  .commit();

const run = parentWorkflow.createRun();
const unwatch = parentWorkflow.watch((state) => {
  console.log("Current state:", state.value);
  // Access nested workflow states in state.context
});

await run.start();
unwatch(); // Stop watching when done
```

## Suspending and resuming

Nested workflows support suspend and resume, letting you pause and continue workflow execution at specific points. You can suspend the entire nested workflow or specific steps within it:

```typescript
// Define a step that may need to suspend
let wasSuspended = false; // Track whether the step has already suspended once

const suspendableStep = new Step({
  id: "other",
  description: "Step that may need to suspend",
  execute: async ({ context, suspend }) => {
    if (!wasSuspended) {
      wasSuspended = true;
      await suspend();
    }
    return { other: 26 };
  },
});

// Create a nested workflow with suspendable steps
const nestedWorkflow = new Workflow({ name: "nested-workflow-a" })
  .step(startStep)
  .then(suspendableStep)
  .then(finalStep)
  .commit();

// Use in parent workflow
const parentWorkflow = new Workflow({ name: "parent-workflow" })
  .step(beginStep)
  .then(nestedWorkflow)
  .then(lastStep)
  .commit();

// Start the workflow
const run = parentWorkflow.createRun();
const { runId, results } = await run.start({ triggerData: { startValue: 1 } });

// Check if a specific step in the nested workflow is suspended
if (results["nested-workflow-a"].output.results.other.status === "suspended") {
  // Resume the specific suspended step using dot notation
  const resumedResults = await run.resume({
    stepId: "nested-workflow-a.other",
    context: { startValue: 1 },
  });

  // The resumed results will contain the completed nested workflow
  expect(resumedResults.results["nested-workflow-a"].output.results).toEqual({
    start: { output: { newValue: 1 }, status: "success" },
    other: { output: { other: 26 }, status: "success" },
    final: { output: { finalValue: 27 }, status: "success" },
  });
}
```

When resuming nested workflows:

- Use the nested workflow's name as the `stepId` when calling `resume()` to resume the entire workflow
- Use dot notation (`nested-workflow.step-name`) to resume a specific step within the nested workflow
- The nested workflow continues from the suspended step with the provided context
- You can check the status of specific steps within the nested workflow's results using `results["nested-workflow"].output.results`

## Result schemas and mapping

Nested workflows can define a result schema and mapping, which helps with type safety and data transformation. This is particularly useful when you want to ensure the nested workflow's output matches a specific structure, or when you need to transform results before using them in the parent workflow.

```typescript
// Create a nested workflow with result schema and mapping
const nestedWorkflow = new Workflow({
  name: "nested-workflow",
  result: {
    schema: z.object({
      total: z.number(),
      items: z.array(
        z.object({
          id: z.string(),
          value: z.number(),
        }),
      ),
    }),
    mapping: {
      // Map values from step results using variables syntax
      total: { step: "step-a", path: "count" },
      items: { step: "step-b", path: "items" },
    },
  },
})
  .step(stepA)
  .then(stepB)
  .commit();

// Use in parent workflow with type-safe results
const parentWorkflow = new Workflow({ name: "parent-workflow" })
  .step(nestedWorkflow)
  .then(async ({ context }) => {
    const result = context.getStepResult("nested-workflow");
    // TypeScript knows the structure of result
    console.log(result.total); // number
    console.log(result.items); // Array<{ id: string, value: number }>
    return { success: true };
  })
  .commit();
```
**モジュール性**: ネストされたワークフローを使用して関連するステップをカプセル化し、再利用可能なワークフローコンポーネントを作成します。 2. **命名**: ネストされたワークフローには、親ワークフローのステップIDとして使用されるため、説明的な名前を付けてください。 3. **エラー処理**: ネストされたワークフローは親ワークフローにエラーを伝播するため、適切にエラーを処理してください。 4. **状態管理**: 各ネストされたワークフローは独自の状態を維持しますが、親ワークフローのコンテキストにアクセスできます。 5. **サスペンション**: ネストされたワークフローでサスペンションを使用する場合は、ワークフロー全体の状態を考慮し、適切に再開処理を行ってください。 ## 例 ネストされたワークフローのさまざまな機能を示す完全な例を以下に示します: ```typescript const workflowA = new Workflow({ name: "workflow-a", result: { schema: z.object({ activities: z.string(), }), mapping: { activities: { step: planActivities, path: "activities", }, }, }, }) .step(fetchWeather) .then(planActivities) .commit(); const workflowB = new Workflow({ name: "workflow-b", result: { schema: z.object({ activities: z.string(), }), mapping: { activities: { step: planActivities, path: "activities", }, }, }, }) .step(fetchWeather) .then(planActivities) .commit(); const weatherWorkflow = new Workflow({ name: "weather-workflow", triggerSchema: z.object({ cityA: z.string().describe("The city to get the weather for"), cityB: z.string().describe("The city to get the weather for"), }), result: { schema: z.object({ activitiesA: z.string(), activitiesB: z.string(), }), mapping: { activitiesA: { step: workflowA, path: "result.activities", }, activitiesB: { step: workflowB, path: "result.activities", }, }, }, }) .step(workflowA, { variables: { city: { step: "trigger", path: "cityA", }, }, }) .step(workflowB, { variables: { city: { step: "trigger", path: "cityB", }, }, }); weatherWorkflow.commit(); ``` この例では: 1. すべてのワークフロー間で型安全性を確保するためのスキーマを定義しています 2. 各ステップには適切な入力と出力のスキーマがあります 3. ネストされたワークフローには独自のトリガースキーマと結果マッピングがあります 4. `.step()`呼び出しで変数構文を使用してデータが渡されます 5. メインワークフローは両方のネストされたワークフローからのデータを組み合わせます --- title: "複雑なLLM操作の処理 | ワークフロー | Mastra" description: "Mastraのワークフローは、分岐、並列実行、リソースの一時停止などの機能を使用して、複雑な操作のシーケンスを調整するのに役立ちます。" --- # ワークフローを使用した複雑なLLM操作の処理 [JA] Source: https://mastra.ai/ja/docs/workflows/overview Mastraのワークフローは、分岐、並列実行、リソースの一時停止などの機能を使用して、複雑な操作のシーケンスを調整するのに役立ちます。 ## ワークフローを使用するタイミング ほとんどのAIアプリケーションは、言語モデルへの単一の呼び出し以上のものを必要とします。複数のステップを実行したり、特定のパスを条件付きでスキップしたり、ユーザー入力を受け取るまで実行を一時停止したりすることがあるかもしれません。時には、エージェントツールの呼び出しが十分に正確でないこともあります。 Mastraのワークフローシステムは以下を提供します: - ステップを定義し、それらを結びつけるための標準化された方法。 - 単純(線形)および高度(分岐、並列)パスの両方をサポート。 - 各ワークフローの実行を追跡するためのデバッグおよび可観測性機能。 ## 例 ワークフローを作成するには、1つ以上のステップを定義し、それらをリンクしてから、開始する前にワークフローをコミットします。 ### ワークフローの分解 ワークフロー作成プロセスの各部分を見てみましょう: #### 1. ワークフローの作成 Mastraでワークフローを定義する方法は次のとおりです。`name`フィールドはワークフローのAPIエンドポイント(`/workflows/$NAME/`)を決定し、`triggerSchema`はワークフローのトリガーデータの構造を定義します: ```ts filename="src/mastra/workflow/index.ts" const myWorkflow = new Workflow({ name: "my-workflow", triggerSchema: z.object({ inputValue: z.number(), }), }); ``` #### 2. ステップの定義 次に、ワークフローのステップを定義します。各ステップは独自の入力および出力スキーマを持つことができます。ここでは、`stepOne`が入力値を2倍にし、`stepTwo`が`stepOne`が成功した場合にその結果をインクリメントします。(簡単にするために、この例ではLLM呼び出しは行っていません): ```ts filename="src/mastra/workflow/index.ts" const stepOne = new Step({ id: "stepOne", outputSchema: z.object({ doubledValue: z.number(), }), execute: async ({ context }) => { const doubledValue = context.triggerData.inputValue * 2; return { doubledValue }; }, }); const stepTwo = new Step({ id: "stepTwo", execute: async ({ context }) => { const doubledValue = context.getStepResult(stepOne)?.doubledValue; if (!doubledValue) { return { incrementedValue: 0 }; } return { incrementedValue: doubledValue + 1, }; }, }); ``` #### 3. 
ステップのリンク 次に、制御フローを作成し、ワークフローを「コミット」(最終化)します。この場合、`stepOne`が最初に実行され、その後に`stepTwo`が続きます。 ```ts filename="src/mastra/workflow/index.ts" myWorkflow.step(stepOne).then(stepTwo).commit(); ``` ### ワークフローの登録 Mastraにワークフローを登録して、ログとテレメトリを有効にします: ```ts showLineNumbers filename="src/mastra/index.ts" import { Mastra } from "@mastra/core"; export const mastra = new Mastra({ workflows: { myWorkflow }, }); ``` 動的なワークフローを作成する必要がある場合には、mastraインスタンスをコンテキストに注入することもできます: ```ts filename="src/mastra/workflow/index.ts" import { Mastra } from "@mastra/core"; const mastra = new Mastra(); const myWorkflow = new Workflow({ name: "my-workflow", mastra, }); ``` ### ワークフローの実行 プログラム的にまたはAPI経由でワークフローを実行します: ```ts showLineNumbers filename="src/mastra/run-workflow.ts" copy import { mastra } from "./index"; // ワークフローを取得 const myWorkflow = mastra.getWorkflow("myWorkflow"); const { runId, start } = myWorkflow.createRun(); // ワークフローの実行を開始 await start({ triggerData: { inputValue: 45 } }); ``` またはAPIを使用します(`mastra dev`を実行する必要があります): ```bash # ワークフロー実行の作成 curl --location 'http://localhost:4111/api/workflows/myWorkflow/start-async' \ --header 'Content-Type: application/json' \ --data '{ "inputValue": 45 }' ``` この例は基本を示しています:ワークフローを定義し、ステップを追加し、ワークフローをコミットし、それから実行します。 ## ステップの定義 ワークフローの基本的な構成要素[はステップです](./steps.mdx)。ステップは入力と出力のスキーマを使用して定義され、前のステップの結果を取得することができます。 ## フロー制御 ワークフローを使用すると、並列ステップ、分岐パスなどでステップを連鎖させる[フロー制御](./control-flow.mdx)を定義できます。 ## ワークフロー変数 ステップ間でデータをマッピングしたり、動的なデータフローを作成する必要がある場合、[ワークフロー変数](./variables.mdx)は、情報をあるステップから別のステップに渡し、ステップ出力内のネストされたプロパティにアクセスするための強力なメカニズムを提供します。 ## 一時停止と再開 外部データ、ユーザー入力、または非同期イベントのために実行を一時停止する必要がある場合、Mastraは[任意のステップでの一時停止をサポートしています](./suspend-and-resume.mdx)。ワークフローの状態を保持し、後で再開することができます。 ## 可観測性とデバッグ Mastra ワークフローは、ワークフロー実行内の各ステップの入力と出力を自動的に[ログに記録します](../../reference/observability/otel-config.mdx)。これにより、このデータをお好みのログ、テレメトリー、または可観測性ツールに送信することができます。 次のことができます: - 各ステップのステータスを追跡する(例:`success`、`error`、または `suspended`)。 - 分析のために実行固有のメタデータを保存する。 - ログを転送することで、Datadog や New Relic などのサードパーティの可観測性プラットフォームと統合する。 ## リクエスト/ユーザー固有の変数の注入 ツールとワークフローの依存性注入をサポートしています。`start` または `resume` 関数呼び出しにコンテナを直接渡すか、[サーバーミドルウェア](/docs/deployment/server#Middleware)を使用して注入することができます。 ```ts showLineNumbers filename="src/mastra/run-workflow.ts" copy import { mastra } from "./index"; import { Container } from "@mastra/core/di"; const stepTwo = new Step({ id: "stepTwo", execute: async ({ context, container }) => { const multiplier = container.get("multiplier"); const doubledValue = context.getStepResult(stepOne)?.doubledValue; if (!doubledValue) { return { incrementedValue: 0 }; } return { incrementedValue: doubledValue * multiplier, }; }, }); // Get the workflow const myWorkflow = mastra.getWorkflow("myWorkflow"); const { runId, start, resume } = myWorkflow.createRun(); type MyContainer = { multiplier: number }; const container = new Container(); container.set("multiplier", 5); // Start the workflow execution await start({ triggerData: { inputValue: 45 }, container }); await resume({ stepId: "stepTwo", container }); ``` ## 追加リソース - ガイドセクションの[ワークフローガイド](../guides/ai-recruiter.mdx)は、主要な概念をカバーするチュートリアルです。 - [順次ステップのワークフロー例](../../examples/workflows/sequential-steps.mdx) - [並列ステップのワークフロー例](../../examples/workflows/parallel-steps.mdx) - [分岐パスのワークフロー例](../../examples/workflows/branching-paths.mdx) - [ワークフロー変数の例](../../examples/workflows/workflow-variables.mdx) - [循環依存関係のワークフロー例](../../examples/workflows/cyclical-dependencies.mdx) - [一時停止と再開のワークフロー例](../../examples/workflows/suspend-and-resume.mdx) --- title: "ステップの作成とワークフローへの追加 | Mastra ドキュメント" description: "Mastra 
ワークフローのステップは、入力、出力、および実行ロジックを定義することによって、操作を管理するための構造化された方法を提供します。" --- # ワークフローにおけるステップの定義 [JA] Source: https://mastra.ai/ja/docs/workflows/steps ワークフローを構築する際には、通常、操作をリンクして再利用できる小さなタスクに分解します。ステップは、入力、出力、および実行ロジックを定義することによって、これらのタスクを管理するための構造化された方法を提供します。 以下のコードは、これらのステップをインラインまたは別々に定義する方法を示しています。 ## インラインステップの作成 `.step()` と `.then()` を使用して、ワークフロー内で直接ステップを作成できます。このコードは、2つのステップを順番に定義、リンク、実行する方法を示しています。 ```typescript showLineNumbers filename="src/mastra/workflows/index.ts" copy import { Step, Workflow, Mastra } from "@mastra/core"; import { z } from "zod"; export const myWorkflow = new Workflow({ name: "my-workflow", triggerSchema: z.object({ inputValue: z.number(), }), }); myWorkflow .step( new Step({ id: "stepOne", outputSchema: z.object({ doubledValue: z.number(), }), execute: async ({ context }) => ({ doubledValue: context.triggerData.inputValue * 2, }), }), ) .then( new Step({ id: "stepTwo", outputSchema: z.object({ incrementedValue: z.number(), }), execute: async ({ context }) => { if (context.steps.stepOne.status !== "success") { return { incrementedValue: 0 }; } return { incrementedValue: context.steps.stepOne.output.doubledValue + 1 }; }, }), ).commit(); // Register the workflow with Mastra export const mastra = new Mastra({ workflows: { myWorkflow }, }); ``` ## ステップを個別に作成する ステップのロジックを別々のエンティティで管理したい場合は、ステップを外部で定義してからワークフローに追加することができます。このコードは、ステップを独立して定義し、その後リンクする方法を示しています。 ```typescript showLineNumbers filename="src/mastra/workflows/index.ts" copy import { Step, Workflow, Mastra } from "@mastra/core"; import { z } from "zod"; // Define steps separately const stepOne = new Step({ id: "stepOne", outputSchema: z.object({ doubledValue: z.number(), }), execute: async ({ context }) => ({ doubledValue: context.triggerData.inputValue * 2, }), }); const stepTwo = new Step({ id: "stepTwo", outputSchema: z.object({ incrementedValue: z.number(), }), execute: async ({ context }) => { if (context.steps.stepOne.status !== "success") { return { incrementedValue: 0 }; } return { incrementedValue: context.steps.stepOne.output.doubledValue + 1 }; }, }); // Build the workflow const myWorkflow = new Workflow({ name: "my-workflow", triggerSchema: z.object({ inputValue: z.number(), }), }); myWorkflow.step(stepOne).then(stepTwo); myWorkflow.commit(); // Register the workflow with Mastra export const mastra = new Mastra({ workflows: { myWorkflow }, }); ``` --- title: "ワークフローの一時停止と再開 | Human-in-the-Loop | Mastra ドキュメント" description: "Mastra ワークフローでの一時停止と再開は、外部からの入力やリソースを待っている間に実行を一時停止することを可能にします。" --- # ワークフローにおける一時停止と再開 [JA] Source: https://mastra.ai/ja/docs/workflows/suspend-and-resume 複雑なワークフローは、外部からの入力やリソースを待つ間、実行を一時停止する必要があることがよくあります。 Mastraの一時停止と再開機能を使用すると、ワークフローの実行を任意のステップで一時停止し、ワークフローのスナップショットをストレージに保存し、準備が整ったら保存されたスナップショットから実行を再開できます。 このプロセス全体はMastraによって自動的に管理されます。ユーザーからの設定や手動のステップは必要ありません。 ワークフローのスナップショットをストレージ(デフォルトではLibSQL)に保存することは、セッション、デプロイメント、サーバーの再起動を超えてワークフローの状態を永続的に保存することを意味します。この永続性は、外部からの入力やリソースを待つ間、数分、数時間、あるいは数日間一時停止したままになる可能性のあるワークフローにとって重要です。 ## サスペンド/レジュームを使用する場合 ワークフローをサスペンドする一般的なシナリオには以下が含まれます: - 人間の承認や入力を待つ - 外部APIリソースが利用可能になるまで一時停止する - 後のステップで必要な追加データを収集する - 高価な操作をレート制限またはスロットリングする - 外部トリガーを伴うイベント駆動プロセスを処理する ## 基本的なサスペンドの例 こちらは、値が低すぎるとサスペンドし、より高い値が与えられると再開するシンプルなワークフローです: ```typescript const stepTwo = new Step({ id: "stepTwo", outputSchema: z.object({ incrementedValue: z.number(), }), execute: async ({ context, suspend }) => { if (context.steps.stepOne.status !== "success") { return { incrementedValue: 0 }; } const currentValue = context.steps.stepOne.output.doubledValue; if 
(currentValue < 100) { await suspend(); return { incrementedValue: 0 }; } return { incrementedValue: currentValue + 1 }; }, }); ``` ## 非同期/待機ベースのフロー Mastraの中断と再開のメカニズムは、非同期/待機パターンを使用しており、中断ポイントを持つ複雑なワークフローを直感的に実装できます。コード構造は自然に実行フローを反映します。 ### 仕組み 1. ステップの実行関数は、パラメータとして`suspend`関数を受け取ります 2. `await suspend()`を呼び出すと、そのポイントでワークフローが一時停止します 3. ワークフローの状態が保存されます 4. 後で、適切なパラメータで`workflow.resume()`を呼び出すことでワークフローを再開できます 5. `suspend()`呼び出しの後のポイントから実行が続行されます ### 複数の中断ポイントを持つ例 以下は、複数のステップを持ち、中断可能なワークフローの例です: ```typescript // 中断機能を持つステップを定義 const promptAgentStep = new Step({ id: "promptAgent", execute: async ({ context, suspend }) => { // 中断が必要かどうかを決定する条件 if (needHumanInput) { // 中断状態と共に保存されるペイロードデータをオプションで渡す await suspend({ requestReason: "プロンプトのために人間の入力が必要" }); // suspend()の後のコードはステップが再開されたときに実行されます return { modelOutput: context.userInput }; } return { modelOutput: "AI生成の出力" }; }, outputSchema: z.object({ modelOutput: z.string() }), }); const improveResponseStep = new Step({ id: "improveResponse", execute: async ({ context, suspend }) => { // 別の中断の条件 if (needFurtherRefinement) { await suspend(); return { improvedOutput: context.refinedOutput }; } return { improvedOutput: "改善された出力" }; }, outputSchema: z.object({ improvedOutput: z.string() }), }); // ワークフローを構築 const workflow = new Workflow({ name: "multi-suspend-workflow", triggerSchema: z.object({ input: z.string() }), }); workflow .step(getUserInput) .then(promptAgentStep) .then(evaluateTone) .then(improveResponseStep) .then(evaluateImproved) .commit(); // Mastraにワークフローを登録 export const mastra = new Mastra({ workflows: { workflow }, }); ``` ### ワークフローの開始と再開 ```typescript // ワークフローを取得し、実行を作成 const wf = mastra.getWorkflow("multi-suspend-workflow"); const run = wf.createRun(); // ワークフローを開始 const initialResult = await run.start({ triggerData: { input: "初期入力" }, }); let promptAgentStepResult = initialResult.activePaths.get("promptAgent"); let promptAgentResumeResult = undefined; // ステップが中断されているか確認 if (promptAgentStepResult?.status === "suspended") { console.log("ワークフローはpromptAgentステップで中断されました"); // 新しいコンテキストでワークフローを再開 const resumeResult = await run.resume({ stepId: "promptAgent", context: { userInput: "人間が提供した入力" }, }); promptAgentResumeResult = resumeResult; } const improveResponseStepResult = promptAgentResumeResult?.activePaths.get("improveResponse"); if (improveResponseStepResult?.status === "suspended") { console.log("ワークフローはimproveResponseステップで中断されました"); // 異なるコンテキストで再度再開 const finalResult = await run.resume({ stepId: "improveResponse", context: { refinedOutput: "人間が改善した出力" }, }); console.log("ワークフローが完了しました:", finalResult?.results); } ``` ## イベントベースの一時停止と再開 手動でステップを一時停止することに加えて、Mastra は `afterEvent` メソッドを通じてイベントベースの一時停止を提供します。これにより、ワークフローは特定のイベントが発生するまで自動的に一時停止し、待機することができます。 ### afterEvent と resumeWithEvent の使用 `afterEvent` メソッドは、特定のイベントが発生するのを待つ一時停止ポイントをワークフロー内に自動的に作成します。イベントが発生したときに、`resumeWithEvent` を使用してイベントデータと共にワークフローを続行できます。 以下はその動作方法です: 1. ワークフロー設定でイベントを定義する 2. `afterEvent` を使用してそのイベントを待つ一時停止ポイントを作成する 3. 
イベントが発生したときに、イベント名とデータを使用して `resumeWithEvent` を呼び出す ### 例:イベントベースのワークフロー ```typescript // ステップを定義 const getUserInput = new Step({ id: "getUserInput", execute: async () => ({ userInput: "initial input" }), outputSchema: z.object({ userInput: z.string() }), }); const processApproval = new Step({ id: "processApproval", execute: async ({ context }) => { // コンテキストからイベントデータにアクセス const approvalData = context.inputData?.resumedEvent; return { approved: approvalData?.approved, approvedBy: approvalData?.approverName, }; }, outputSchema: z.object({ approved: z.boolean(), approvedBy: z.string(), }), }); // イベント定義でワークフローを作成 const approvalWorkflow = new Workflow({ name: "approval-workflow", triggerSchema: z.object({ requestId: z.string() }), events: { approvalReceived: { schema: z.object({ approved: z.boolean(), approverName: z.string(), }), }, }, }); // イベントベースの一時停止でワークフローを構築 approvalWorkflow .step(getUserInput) .afterEvent("approvalReceived") // ワークフローはここで自動的に一時停止します .step(processApproval) // このステップはイベントが受信された後に実行されます .commit(); ``` ### イベントベースのワークフローの実行 ```typescript // ワークフローを取得 const workflow = mastra.getWorkflow("approval-workflow"); const run = workflow.createRun(); // ワークフローを開始 const initialResult = await run.start({ triggerData: { requestId: "request-123" }, }); console.log("ワークフローが開始され、承認イベントを待っています"); console.log(initialResult.results); // 出力はワークフローがイベントステップで一時停止していることを示します: // { // getUserInput: { status: 'success', output: { userInput: 'initial input' } }, // __approvalReceived_event: { status: 'suspended' } // } // 後で、承認イベントが発生したとき: const resumeResult = await run.resumeWithEvent("approvalReceived", { approved: true, approverName: "Jane Doe", }); console.log("イベントデータでワークフローが再開されました:", resumeResult.results); // 出力は完了したワークフローを示します: // { // getUserInput: { status: 'success', output: { userInput: 'initial input' } }, // __approvalReceived_event: { status: 'success', output: { executed: true, resumedEvent: { approved: true, approverName: 'Jane Doe' } } }, // processApproval: { status: 'success', output: { approved: true, approvedBy: 'Jane Doe' } } // } ``` ### イベントベースのワークフローに関する重要なポイント - `suspend()` 関数は、オプションで一時停止状態と共に保存されるペイロードオブジェクトを取ることができます - `await suspend()` 呼び出しの後のコードは、ステップが再開されるまで実行されません - ステップが一時停止されると、そのステータスはワークフロー結果で `'suspended'` になります - 再開されると、ステップのステータスは `'suspended'` から `'success'` に変わります - `resume()` メソッドは、再開する一時停止ステップを識別するために `stepId` を必要とします - 再開時に新しいコンテキストデータを提供でき、それは既存のステップ結果とマージされます - イベントはスキーマと共にワークフロー設定で定義されなければなりません - `afterEvent` メソッドは、イベントを待つ特別な一時停止ステップを作成します - イベントステップは自動的に `__eventName_event`(例:`__approvalReceived_event`)と命名されます - `resumeWithEvent` を使用してイベントデータを提供し、ワークフローを続行します - イベントデータは、そのイベントのために定義されたスキーマに対して検証されます - イベントデータは `inputData.resumedEvent` としてコンテキストで利用可能です ## サスペンドとレジュームのためのストレージ ワークフローが `await suspend()` を使用してサスペンドされると、Mastra はワークフローの状態全体を自動的にストレージに保存します。これは、アプリケーションの再起動やサーバーインスタンスを超えて状態を保持するために、長期間サスペンドされる可能性のあるワークフローにとって重要です。 ### デフォルトストレージ: LibSQL デフォルトでは、Mastra は LibSQL をストレージエンジンとして使用します: ```typescript import { Mastra } from "@mastra/core/mastra"; import { DefaultStorage } from "@mastra/core/storage/libsql"; const mastra = new Mastra({ storage: new DefaultStorage({ config: { url: "file:storage.db", // 開発用のローカルファイルベースのデータベース // 本番環境では、永続的なURLを使用: // url: process.env.DATABASE_URL, // authToken: process.env.DATABASE_AUTH_TOKEN, // 認証された接続のためのオプション }, }), }); ``` LibSQL ストレージは異なるモードで構成できます: - インメモリデータベース(テスト用): `:memory:` - ファイルベースのデータベース(開発用): `file:storage.db` - リモートデータベース(本番用): `libsql://your-database.turso.io` のようなURL ### 代替ストレージオプション #### Upstash 
(Redis互換) サーバーレスアプリケーションやRedisが好まれる環境向け: ```bash npm install @mastra/upstash ``` ```typescript import { Mastra } from "@mastra/core/mastra"; import { UpstashStore } from "@mastra/upstash"; const mastra = new Mastra({ storage: new UpstashStore({ url: process.env.UPSTASH_URL, token: process.env.UPSTASH_TOKEN, }), }); ``` ### ストレージに関する考慮事項 - すべてのストレージオプションは、サスペンドとレジューム機能を同様にサポートします - ワークフローの状態は、サスペンド時に自動的にシリアライズされ保存されます - ストレージでサスペンド/レジュームを機能させるために追加の設定は不要です - インフラストラクチャ、スケーリングのニーズ、既存の技術スタックに基づいてストレージオプションを選択してください ## 監視と再開 中断されたワークフローを処理するには、`watch` メソッドを使用して実行ごとにワークフローのステータスを監視し、`resume` を使用して実行を続行します: ```typescript import { mastra } from "./index"; // ワークフローを取得 const myWorkflow = mastra.getWorkflow("myWorkflow"); const { start, watch, resume } = myWorkflow.createRun(); // 実行前にワークフローを監視開始 watch(async ({ activePaths }) => { const isStepTwoSuspended = activePaths.get("stepTwo")?.status === "suspended"; if (isStepTwoSuspended) { console.log("ワークフローが中断されました。新しい値で再開します"); // 新しいコンテキストでワークフローを再開 await resume({ stepId: "stepTwo", context: { secondValue: 100 }, }); } }); // ワークフローの実行を開始 await start({ triggerData: { inputValue: 45 } }); ``` ### イベントベースのワークフローの監視と再開 イベントベースのワークフローでも同じ監視パターンを使用できます: ```typescript const { start, watch, resumeWithEvent } = workflow.createRun(); // 中断されたイベントステップを監視 watch(async ({ activePaths }) => { const isApprovalReceivedSuspended = activePaths.get("__approvalReceived_event")?.status === "suspended"; if (isApprovalReceivedSuspended) { console.log("承認イベントを待っているワークフロー"); // 実際のシナリオでは、実際のイベントが発生するのを待ちます // 例えば、これはWebhookやユーザーの操作によってトリガーされる可能性があります setTimeout(async () => { await resumeWithEvent("approvalReceived", { approved: true, approverName: "Auto Approver", }); }, 5000); // 5秒後にイベントをシミュレート } }); // ワークフローを開始 await start({ triggerData: { requestId: "auto-123" } }); ``` ## さらなる読み物 サスペンドとレジュームが内部でどのように機能するかを深く理解するために: - [Mastra Workflowsにおけるスナップショットの理解](../../reference/workflows/snapshots.mdx) - サスペンドとレジューム機能を支えるスナップショットメカニズムについて学ぶ - [ステップ設定ガイド](./steps.mdx) - ワークフロー内のステップ設定について詳しく学ぶ - [制御フローガイド](./control-flow.mdx) - 高度なワークフロー制御パターン - [イベント駆動型ワークフロー](../../reference/workflows/events.mdx) - イベントベースのワークフローに関する詳細なリファレンス ## 関連リソース - 完全な動作例については、[Suspend and Resume Example](../../examples/workflows/suspend-and-resume.mdx) を参照してください - suspend/resume API の詳細については、[Step Class Reference](../../reference/workflows/step-class.mdx) を確認してください - 一時停止されたワークフローの監視については、[Workflow Observability](../../reference/observability/otel-config.mdx) をレビューしてください --- title: "ワークフローバリアブルを使用したデータマッピング | Mastra ドキュメント" description: "ワークフローバリアブルを使用して、ステップ間でデータをマッピングし、Mastra ワークフローで動的なデータフローを作成する方法を学びます。" --- # ワークフローバリアブルを使用したデータマッピング [JA] Source: https://mastra.ai/ja/docs/workflows/variables Mastraのワークフローバリアブルは、ステップ間でデータをマッピングするための強力なメカニズムを提供し、動的なデータフローを作成し、情報をあるステップから別のステップに渡すことができます。 ## ワークフロー変数の理解 Mastraワークフローでは、変数は次の方法で使用されます: - トリガー入力からステップ入力へのデータのマッピング - あるステップの出力を別のステップの入力に渡す - ステップ出力内のネストされたプロパティへのアクセス - より柔軟で再利用可能なワークフローステップの作成 ## データマッピングのための変数の使用 ### 基本的な変数マッピング ワークフローにステップを追加する際に、`variables` プロパティを使用してステップ間でデータをマッピングできます: ```typescript showLineNumbers filename="src/mastra/workflows/index.ts" copy const workflow = new Workflow({ name: 'data-mapping-workflow', triggerSchema: z.object({ inputData: z.string(), }), }); workflow .step(step1, { variables: { // トリガーデータをステップ入力にマッピング inputData: { step: 'trigger', path: 'inputData' } } }) .then(step2, { variables: { // step1の出力をstep2の入力にマッピング previousValue: { step: step1, path: 'outputField' } } }) .commit(); // Mastraにワークフローを登録 export const mastra 
= new Mastra({ workflows: { workflow }, }); ``` ### ネストされたプロパティへのアクセス `path` フィールドでドット表記を使用してネストされたプロパティにアクセスできます: ```typescript showLineNumbers filename="src/mastra/workflows/index.ts" copy workflow .step(step1) .then(step2, { variables: { // step1の出力からネストされたプロパティにアクセス nestedValue: { step: step1, path: 'nested.deeply.value' } } }) .commit(); ``` ### オブジェクト全体のマッピング `path` に `.` を使用してオブジェクト全体をマッピングできます: ```typescript showLineNumbers filename="src/mastra/workflows/index.ts" copy workflow .step(step1, { variables: { // トリガーデータオブジェクト全体をマッピング triggerData: { step: 'trigger', path: '.' } } }) .commit(); ``` ### ループ内の変数 変数は `while` や `until` ループにも渡すことができます。これは、イテレーション間や外部ステップからデータを渡すのに便利です: ```typescript showLineNumbers filename="src/mastra/workflows/loop-variables.ts" copy // カウンターをインクリメントするステップ const incrementStep = new Step({ id: 'increment', inputSchema: z.object({ // 前回のイテレーションからの値 prevValue: z.number().optional(), }), outputSchema: z.object({ // 更新されたカウンターの値 updatedCounter: z.number(), }), execute: async ({ context }) => { const { prevValue = 0 } = context.inputData; return { updatedCounter: prevValue + 1 }; }, }); const workflow = new Workflow({ name: 'counter' }); workflow .step(incrementStep) .while( async ({ context }) => { // カウンターが10未満の間続行 const result = context.getStepResult(incrementStep); return (result?.updatedCounter ?? 0) < 10; }, incrementStep, { // 次のイテレーションに前回の値を渡す prevValue: { step: incrementStep, path: 'updatedCounter' } } ); ``` ## 変数の解決 ワークフローが実行されると、Mastraは実行時に変数を次の方法で解決します: 1. `step` プロパティで指定されたソースステップを特定する 2. そのステップから出力を取得する 3. `path` を使用して指定されたプロパティに移動する 4. 解決された値を `inputData` プロパティとしてターゲットステップのコンテキストに注入する ## 例 ### トリガーデータからのマッピング この例は、ワークフロートリガーからステップへのデータのマッピング方法を示しています: ```typescript showLineNumbers filename="src/mastra/workflows/trigger-mapping.ts" copy import { Step, Workflow, Mastra } from "@mastra/core"; import { z } from "zod"; // ユーザー入力が必要なステップを定義 const processUserInput = new Step({ id: "processUserInput", execute: async ({ context }) => { // inputDataは変数マッピングのためにcontextで利用可能になります const { inputData } = context.inputData; return { processedData: `Processed: ${inputData}` }; }, }); // ワークフローを作成 const workflow = new Workflow({ name: "trigger-mapping", triggerSchema: z.object({ inputData: z.string(), }), }); // トリガーデータをステップにマッピング workflow .step(processUserInput, { variables: { inputData: { step: 'trigger', path: 'inputData' }, } }) .commit(); // Mastraにワークフローを登録 export const mastra = new Mastra({ workflows: { workflow }, }); ``` ### ステップ間のマッピング この例は、あるステップから別のステップへのデータのマッピングを示しています: ```typescript showLineNumbers filename="src/mastra/workflows/step-mapping.ts" copy import { Step, Workflow, Mastra } from "@mastra/core"; import { z } from "zod"; // ステップ1: データを生成 const generateData = new Step({ id: "generateData", outputSchema: z.object({ nested: z.object({ value: z.string(), }), }), execute: async () => { return { nested: { value: "step1-data" } }; }, }); // ステップ2: ステップ1からのデータを処理 const processData = new Step({ id: "processData", inputSchema: z.object({ previousValue: z.string(), }), execute: async ({ context }) => { // previousValueは変数マッピングのために利用可能になります const { previousValue } = context.inputData; return { result: `Processed: ${previousValue}` }; }, }); // ワークフローを作成 const workflow = new Workflow({ name: "step-mapping", }); // ステップ1からステップ2へのデータをマッピング workflow .step(generateData) .then(processData, { variables: { // generateDataの出力からnested.valueプロパティをマッピング previousValue: { step: generateData, path: 'nested.value' }, } }) .commit(); // Mastraにワークフローを登録 export const 
mastra = new Mastra({ workflows: { workflow }, }); ``` ## 型の安全性 Mastraは、TypeScriptを使用する際の変数マッピングに型の安全性を提供します: ```typescript showLineNumbers filename="src/mastra/workflows/type-safe.ts" copy import { Step, Workflow, Mastra } from "@mastra/core"; import { z } from "zod"; // Define schemas for better type safety const triggerSchema = z.object({ inputValue: z.string(), }); type TriggerType = z.infer<typeof triggerSchema>; // Step with typed context const step1 = new Step({ id: "step1", outputSchema: z.object({ nested: z.object({ value: z.string(), }), }), execute: async ({ context }) => { // TypeScript knows the shape of triggerData const triggerData = context.getStepResult<TriggerType>('trigger'); return { nested: { value: `processed-${triggerData?.inputValue}` } }; }, }); // Create the workflow with the schema const workflow = new Workflow({ name: "type-safe-workflow", triggerSchema, }); workflow.step(step1).commit(); // Register the workflow with Mastra export const mastra = new Mastra({ workflows: { workflow }, }); ``` ## ベストプラクティス 1. **入力と出力を検証する**: データの一貫性を確保するために `inputSchema` と `outputSchema` を使用します。 2. **マッピングをシンプルに保つ**: 可能な限り過度に複雑なネストされたパスを避けます。 3. **デフォルト値を考慮する**: マッピングされたデータが未定義である可能性のあるケースを処理します。 ## 直接コンテキストアクセスとの比較 `context.steps`を介して前のステップの結果に直接アクセスすることもできますが、変数マッピングを使用することにはいくつかの利点があります: | 機能 | 変数マッピング | 直接コンテキストアクセス | | ------- | --------------- | --------------------- | | 明確さ | 明示的なデータ依存関係 | 暗黙的な依存関係 | | 再利用性 | 異なるマッピングでステップを再利用可能 | ステップが密接に結合されている | | 型の安全性 | より良いTypeScript統合 | 手動での型アサーションが必要 | --- title: "例:音声機能の追加 | エージェント | Mastra" description: "Mastraエージェントに音声機能を追加する例で、さまざまな音声プロバイダーを使用して話したり聞いたりする機能を有効にします。" --- import { GithubLink } from "@/components/github-link"; # エージェントに声を与える [JA] Source: https://mastra.ai/ja/examples/agents/adding-voice-capabilities この例では、Mastraエージェントに音声機能を追加し、異なる音声プロバイダーを使用して話したり聞いたりできるようにする方法を示します。異なる音声設定を持つ2つのエージェントを作成し、音声を使用してどのように相互作用できるかを示します。 この例では以下を紹介します: 1. CompositeVoiceを使用して、異なるプロバイダーを組み合わせて話すことと聞くことを行う 2. 単一のプロバイダーを両方の機能に使用する 3. 
エージェント間の基本的な音声インタラクション まず、必要な依存関係をインポートし、エージェントを設定しましょう: ```ts showLineNumbers copy // 必要な依存関係をインポート import { openai } from '@ai-sdk/openai'; import { Agent } from '@mastra/core/agent'; import { CompositeVoice } from '@mastra/core/voice'; import { OpenAIVoice } from '@mastra/voice-openai'; import { createReadStream, createWriteStream } from 'fs'; import { PlayAIVoice } from '@mastra/voice-playai'; import path from 'path'; // 聞くことと話すことの両方の機能を持つエージェント1を初期化 const agent1 = new Agent({ name: 'Agent1', instructions: `あなたはSTTとTTSの両方の機能を持つエージェントです。`, model: openai('gpt-4o'), voice: new CompositeVoice({ input: new OpenAIVoice(), // 音声をテキストに変換 output: new PlayAIVoice(), // テキストを音声に変換 }), }); // 聞くことと話すことの両方の機能にOpenAIのみを使用するエージェント2を初期化 const agent2 = new Agent({ name: 'Agent2', instructions: `あなたはSTTとTTSの両方の機能を持つエージェントです。`, model: openai('gpt-4o'), voice: new OpenAIVoice(), }); ``` この設定では: - Agent1は、OpenAIを音声からテキストへの変換に、PlayAIをテキストから音声への変換に使用するCompositeVoiceを使用します - Agent2は、両方の機能にOpenAIの音声機能を使用します 次に、エージェント間の基本的なインタラクションを示しましょう: ```ts showLineNumbers copy // ステップ1:エージェント1が質問を話し、それをファイルに保存 const audio1 = await agent1.voice.speak('人生の意味を一文で説明してください。'); await saveAudioToFile(audio1, 'agent1-question.mp3'); // ステップ2:エージェント2がエージェント1の質問を聞く const audioFilePath = path.join(process.cwd(), 'agent1-question.mp3'); const audioStream = createReadStream(audioFilePath); const audio2 = await agent2.voice.listen(audioStream); const text = await convertToText(audio2); // ステップ3:エージェント2が応答を生成し、それを話す const agent2Response = await agent2.generate(text); const agent2ResponseAudio = await agent2.voice.speak(agent2Response.text); await saveAudioToFile(agent2ResponseAudio, 'agent2-response.mp3'); ``` インタラクションで何が起こっているか: 1. Agent1はPlayAIを使用してテキストを音声に変換し、それをファイルに保存します(インタラクションを聞くことができるように音声を保存します) 2. Agent2はOpenAIの音声からテキストへの変換を使用して音声ファイルを聞きます 3. 
Agent2は応答を生成し、それを音声に変換します この例には、音声ファイルを処理するためのヘルパー関数が含まれています: ```ts showLineNumbers copy /** * 音声ストリームをファイルに保存 */ async function saveAudioToFile(audio: NodeJS.ReadableStream, filename: string): Promise<void> { const filePath = path.join(process.cwd(), filename); const writer = createWriteStream(filePath); audio.pipe(writer); return new Promise((resolve, reject) => { writer.on('finish', resolve); writer.on('error', reject); }); } /** * 文字列またはリーダブルストリームをテキストに変換 */ async function convertToText(input: string | NodeJS.ReadableStream): Promise<string> { if (typeof input === 'string') { return input; } const chunks: Buffer[] = []; return new Promise((resolve, reject) => { input.on('data', chunk => chunks.push(Buffer.from(chunk))); input.on('error', err => reject(err)); input.on('end', () => resolve(Buffer.concat(chunks).toString('utf-8'))); }); } ``` --- title: "例:エージェントワークフローの呼び出し | エージェント | Mastra ドキュメント" description: Mastraでのエージェントワークフローの作成例。LLM駆動の計画と外部APIの統合を示しています。 --- import { GithubLink } from "@/components/github-link"; # エージェンティックワークフロー [JA] Source: https://mastra.ai/ja/examples/agents/agentic-workflows AIアプリケーションを構築する際、互いの出力に依存する複数のステップを調整する必要がよくあります。この例では、天気データを取得し、それを使用してアクティビティを提案するAIワークフローを作成する方法を示し、外部APIをLLM駆動の計画と統合する方法を示しています。 ```ts showLineNumbers copy import { Mastra } from "@mastra/core"; import { Agent } from "@mastra/core/agent"; import { Step, Workflow } from "@mastra/core/workflows"; import { z } from "zod"; import { openai } from "@ai-sdk/openai"; const agent = new Agent({ name: 'Weather Agent', instructions: ` あなたは天気に基づいた計画に優れた地元のアクティビティと旅行の専門家です。天気データを分析し、実用的なアクティビティの推奨を提供してください。 予報の各日に対して、次の形式で回答を構成してください: 📅 [曜日, 月 日, 年] ═══════════════════════════ 🌡️ 天気概要 • 状況: [簡単な説明] • 気温: [X°C/Y°F から A°C/B°F] • 降水確率: [X% の確率] 🌅 午前のアクティビティ 屋外: • [アクティビティ名] - [特定の場所/ルートを含む簡単な説明] 最適な時間帯: [特定の時間範囲] 注意: [関連する天気の考慮事項] 🌞 午後のアクティビティ 屋外: • [アクティビティ名] - [特定の場所/ルートを含む簡単な説明] 最適な時間帯: [特定の時間範囲] 注意: [関連する天気の考慮事項] 🏠 屋内の代替案 • [アクティビティ名] - [特定の会場を含む簡単な説明] 理想的な条件: [この代替案を引き起こす天気条件] ⚠️ 特別な考慮事項 • [関連する天気警報、UV指数、風の状況など] ガイドライン: - 1日あたり2〜3つの時間特定の屋外アクティビティを提案 - 1〜2つの屋内バックアップオプションを含める - 降水確率が50%を超える場合は、屋内アクティビティを優先 - すべてのアクティビティは特定の場所に特化する必要があります - 特定の会場、トレイル、または場所を含める - 気温に基づいてアクティビティの強度を考慮する - 説明は簡潔でありながら情報豊かに保つ 一貫性のために、この正確なフォーマットを維持し、示されている絵文字とセクションヘッダーを使用してください。 `, model: openai('gpt-4o-mini'), }); const fetchWeather = new Step({ id: "fetch-weather", description: "指定された都市の天気予報を取得します", inputSchema: z.object({ city: z.string().describe("天気を取得する都市"), }), execute: async ({ context }) => { const triggerData = context?.getStepResult<{ city: string; }>("trigger"); if (!triggerData) { throw new Error("トリガーデータが見つかりません"); } const geocodingUrl = `https://geocoding-api.open-meteo.com/v1/search?name=${encodeURIComponent(triggerData.city)}&count=1`; const geocodingResponse = await fetch(geocodingUrl); const geocodingData = await geocodingResponse.json(); if (!geocodingData.results?.[0]) { throw new Error(`場所 '${triggerData.city}' が見つかりません`); } const { latitude, longitude, name } = geocodingData.results[0]; const weatherUrl = `https://api.open-meteo.com/v1/forecast?latitude=${latitude}&longitude=${longitude}&daily=temperature_2m_max,temperature_2m_min,precipitation_probability_mean,weathercode&timezone=auto`; const response = await fetch(weatherUrl); const data = await response.json(); const forecast = data.daily.time.map((date: string, index: number) => ({ date, maxTemp: data.daily.temperature_2m_max[index], minTemp: data.daily.temperature_2m_min[index], precipitationChance: data.daily.precipitation_probability_mean[index], 
condition: getWeatherCondition(data.daily.weathercode[index]), location: name, })); return forecast; }, }); const forecastSchema = z.array( z.object({ date: z.string(), maxTemp: z.number(), minTemp: z.number(), precipitationChance: z.number(), condition: z.string(), location: z.string(), }), ); const planActivities = new Step({ id: "plan-activities", description: "天気条件に基づいてアクティビティを提案します", inputSchema: forecastSchema, execute: async ({ context, mastra }) => { const forecast = context?.getStepResult<z.infer<typeof forecastSchema>>( "fetch-weather", ); if (!forecast) { throw new Error("予報データが見つかりません"); } const prompt = `次の天気予報に基づいて、${forecast[0].location}で適切なアクティビティを提案してください: ${JSON.stringify(forecast, null, 2)} `; const response = await agent.stream([ { role: "user", content: prompt, }, ]); let activitiesText = ''; for await (const chunk of response.textStream) { process.stdout.write(chunk); activitiesText += chunk; } return { activities: activitiesText, }; }, }); function getWeatherCondition(code: number): string { const conditions: Record<number, string> = { 0: "晴天", 1: "主に晴れ", 2: "部分的に曇り", 3: "曇り", 45: "霧", 48: "霧氷の霧", 51: "小雨", 53: "適度な霧雨", 55: "濃い霧雨", 61: "小雨", 63: "適度な雨", 65: "大雨", 71: "小雪", 73: "適度な降雪", 75: "大雪", 95: "雷雨", }; return conditions[code] || "不明"; } const weatherWorkflow = new Workflow({ name: "weather-workflow", triggerSchema: z.object({ city: z.string().describe("天気を取得する都市"), }), }) .step(fetchWeather) .then(planActivities); weatherWorkflow.commit(); const mastra = new Mastra({ workflows: { weatherWorkflow, }, }); async function main() { const { start } = mastra.getWorkflow("weatherWorkflow").createRun(); const result = await start({ triggerData: { city: "London", }, }); console.log("\n \n"); console.log(result); } main(); ``` --- title: "例:鳥の分類 | エージェント | Mastra ドキュメント" description: Unsplashからの画像が鳥を描写しているかどうかを判断するためにMastra AIエージェントを使用する例。 --- import { GithubLink } from "@/components/github-link"; # 例: AIエージェントで鳥を分類する [JA] Source: https://mastra.ai/ja/examples/agents/bird-checker 選択したクエリに一致するランダムな画像を[Unsplash](https://unsplash.com/)から取得し、それが鳥かどうかを判断するために[Mastra AI Agent](/docs/agents/overview.md)を使用します。 ```ts showLineNumbers copy import { anthropic } from "@ai-sdk/anthropic"; import { Agent } from "@mastra/core/agent"; import { z } from "zod"; export type Image = { alt_description: string; urls: { regular: string; raw: string; }; user: { first_name: string; links: { html: string; }; }; }; export type ImageResponse<T, K> = | { ok: true; data: T; } | { ok: false; error: K; }; const getRandomImage = async ({ query, }: { query: string; }): Promise<ImageResponse<Image, string>> => { const page = Math.floor(Math.random() * 20); const order_by = Math.random() < 0.5 ? 
"relevant" : "latest"; try { const res = await fetch( `https://api.unsplash.com/search/photos?query=${query}&page=${page}&order_by=${order_by}`, { method: "GET", headers: { Authorization: `Client-ID ${process.env.UNSPLASH_ACCESS_KEY}`, "Accept-Version": "v1", }, cache: "no-store", }, ); if (!res.ok) { return { ok: false, error: "Failed to fetch image", }; } const data = (await res.json()) as { results: Array; }; const randomNo = Math.floor(Math.random() * data.results.length); return { ok: true, data: data.results[randomNo] as Image, }; } catch (err) { return { ok: false, error: "Error fetching image", }; } }; const instructions = ` 画像を見て、それが鳥かどうかを判断できます。 また、鳥の種と写真が撮影された場所を特定することもできます。 `; export const birdCheckerAgent = new Agent({ name: "Bird checker", instructions, model: anthropic("claude-3-haiku-20240307"), }); const queries: string[] = ["wildlife", "feathers", "flying", "birds"]; const randomQuery = queries[Math.floor(Math.random() * queries.length)]; // ランダムなタイプでUnsplashから画像URLを取得 const imageResponse = await getRandomImage({ query: randomQuery }); if (!imageResponse.ok) { console.log("Error fetching image", imageResponse.error); process.exit(1); } console.log("Image URL: ", imageResponse.data.urls.regular); const response = await birdCheckerAgent.generate( [ { role: "user", content: [ { type: "image", image: new URL(imageResponse.data.urls.regular), }, { type: "text", text: "この画像を見て、それが鳥かどうか、そして鳥の学名を説明なしで教えてください。また、この写真の場所を高校生が理解できるように1、2文で要約してください。", }, ], }, ], { output: z.object({ bird: z.boolean(), species: z.string(), location: z.string(), }), }, ); console.log(response.object); ```




--- title: "例: 階層的マルチエージェントシステム | エージェント | Mastra" description: Mastraを使用して、エージェントがツール機能を通じて相互作用する階層的マルチエージェントシステムを作成する例。 --- import { GithubLink } from "@/components/github-link"; # 階層的マルチエージェントシステム [JA] Source: https://mastra.ai/ja/examples/agents/hierarchical-multi-agent この例では、エージェントがツール機能を通じて相互作用し、1つのエージェントが他のエージェントの作業を調整する階層的なマルチエージェントシステムを作成する方法を示します。 システムは3つのエージェントで構成されています: 1. プロセスを調整するパブリッシャーエージェント(監督者) 2. 初期コンテンツを書くコピーライターエージェント 3. コンテンツを洗練するエディターエージェント まず、コピーライターエージェントとそのツールを定義します: ```ts showLineNumbers copy import { openai } from "@ai-sdk/openai"; import { anthropic } from "@ai-sdk/anthropic"; const copywriterAgent = new Agent({ name: "Copywriter", instructions: "あなたはブログ投稿のコピーを書くコピーライターエージェントです。", model: anthropic("claude-3-5-sonnet-20241022"), }); const copywriterTool = createTool({ id: "copywriter-agent", description: "ブログ投稿のコピーを書くためにコピーライターエージェントを呼び出します。", inputSchema: z.object({ topic: z.string().describe("ブログ投稿のトピック"), }), outputSchema: z.object({ copy: z.string().describe("ブログ投稿のコピー"), }), execute: async ({ context }) => { const result = await copywriterAgent.generate( `Create a blog post about ${context.topic}`, ); return { copy: result.text }; }, }); ``` 次に、エディターエージェントとそのツールを定義します: ```ts showLineNumbers copy const editorAgent = new Agent({ name: "Editor", instructions: "あなたはブログ投稿のコピーを編集するエディターエージェントです。", model: openai("gpt-4o-mini"), }); const editorTool = createTool({ id: "editor-agent", description: "ブログ投稿のコピーを編集するためにエディターエージェントを呼び出します。", inputSchema: z.object({ copy: z.string().describe("ブログ投稿のコピー"), }), outputSchema: z.object({ copy: z.string().describe("編集されたブログ投稿のコピー"), }), execute: async ({ context }) => { const result = await editorAgent.generate( `Edit the following blog post only returning the edited copy: ${context.copy}`, ); return { copy: result.text }; }, }); ``` 最後に、他のエージェントを調整するパブリッシャーエージェントを作成します: ```ts showLineNumbers copy const publisherAgent = new Agent({ name: "publisherAgent", instructions: "あなたは特定のトピックについてブログ投稿のコピーを書くためにまずコピーライターエージェントを呼び出し、その後コピーを編集するためにエディターエージェントを呼び出すパブリッシャーエージェントです。最終的な編集済みのコピーのみを返します。", model: anthropic("claude-3-5-sonnet-20241022"), tools: { copywriterTool, editorTool }, }); const mastra = new Mastra({ agents: { publisherAgent }, }); ``` システム全体を使用するには: ```ts showLineNumbers copy async function main() { const agent = mastra.getAgent("publisherAgent"); const result = await agent.generate( "Write a blog post about React JavaScript frameworks. Only return the final edited copy.", ); console.log(result.text); } main(); ``` --- title: "例:マルチエージェントワークフロー | エージェント | Mastra ドキュメント" description: Mastraでエージェントワークフローを作成する例で、作業成果物が複数のエージェント間で受け渡しされます。 --- import { GithubLink } from "@/components/github-link"; # マルチエージェントワークフロー [JA] Source: https://mastra.ai/ja/examples/agents/multi-agent-workflow この例では、ワーカーエージェントとスーパーバイザーエージェントを使用して、複数のエージェント間で作業成果物を渡すエージェントワークフローを作成する方法を示します。 この例では、2つのエージェントを順番に呼び出すシーケンシャルワークフローを作成します: 1. 初期のブログ投稿を書くコピーライターエージェント 2. 
コンテンツを洗練するエディターエージェント まず、必要な依存関係をインポートします: ```typescript import { openai } from "@ai-sdk/openai"; import { anthropic } from "@ai-sdk/anthropic"; import { Agent } from "@mastra/core/agent"; import { Step, Workflow } from "@mastra/core/workflows"; import { z } from "zod"; ``` 初期のブログ投稿を生成するコピーライターエージェントを作成します: ```typescript const copywriterAgent = new Agent({ name: "Copywriter", instructions: "あなたはブログ投稿のコピーを書くコピーライターエージェントです。", model: anthropic("claude-3-5-sonnet-20241022"), }); ``` エージェントを実行し、応答を処理するコピーライターステップを定義します: ```typescript const copywriterStep = new Step({ id: "copywriterStep", execute: async ({ context }) => { if (!context?.triggerData?.topic) { throw new Error("トリガーデータにトピックが見つかりません"); } const result = await copywriterAgent.generate( `Create a blog post about ${context.triggerData.topic}`, ); console.log("copywriter result", result.text); return { copy: result.text, }; }, }); ``` コピーライターのコンテンツを洗練するためにエディターエージェントを設定します: ```typescript const editorAgent = new Agent({ name: "Editor", instructions: "あなたはブログ投稿のコピーを編集するエディターエージェントです。", model: openai("gpt-4o-mini"), }); ``` コピーライターの出力を処理するエディターステップを作成します: ```typescript const editorStep = new Step({ id: "editorStep", execute: async ({ context }) => { const copy = context?.getStepResult<{ copy: string }>("copywriterStep")?.copy; const result = await editorAgent.generate( `Edit the following blog post only returning the edited copy: ${copy}`, ); console.log("editor result", result.text); return { copy: result.text, }; }, }); ``` ワークフローを設定し、ステップを実行します: ```typescript const myWorkflow = new Workflow({ name: "my-workflow", triggerSchema: z.object({ topic: z.string(), }), }); // ステップを順番に実行します。 myWorkflow.step(copywriterStep).then(editorStep).commit(); const { runId, start } = myWorkflow.createRun(); const res = await start({ triggerData: { topic: "React JavaScript frameworks" }, }); console.log("Results: ", res.results); ```
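`res.results` にはステップIDごとの結果が格納されます。最終的な編集済みコピーだけを取り出す場合は、このドキュメントの他の例と同様に、ステータスを確認してから `output` を参照します(その結果構造を前提にした最小のスケッチです): ```typescript // editorStep の結果から最終的なコピーを取り出す const editorResult = res.results["editorStep"]; if (editorResult?.status === "success") { console.log("Final copy:", editorResult.output.copy); } else { console.error("editorStep was not successful:", editorResult?.status); } ```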




--- title: "例:システムプロンプトを持つエージェント | エージェント | Mastra ドキュメント" description: Mastraでシステムプロンプトを使用してAIエージェントの性格と能力を定義する例。 --- import { GithubLink } from "@/components/github-link"; # エージェントにシステムプロンプトを与える [JA] Source: https://mastra.ai/ja/examples/agents/system-prompt AIエージェントを構築する際には、特定のタスクを効果的に処理するための具体的な指示と能力を与える必要があります。システムプロンプトを使用すると、エージェントの性格、知識領域、および行動ガイドラインを定義できます。この例では、カスタム指示を持つAIエージェントを作成し、検証済みの情報を取得するための専用ツールと統合する方法を示します。 ```ts showLineNumbers copy import { openai } from "@ai-sdk/openai"; import { Agent } from "@mastra/core/agent"; import { createTool } from "@mastra/core/tools"; import { z } from "zod"; const instructions = `You are a helpful cat expert assistant. When discussing cats, you should always include an interesting cat fact. Your main responsibilities: 1. Answer questions about cats 2. Use the catFact tool to provide verified cat facts 3. Incorporate the cat facts naturally into your responses Always use the catFact tool at least once in your responses to ensure accuracy.`; const getCatFact = async () => { const { fact } = (await fetch("https://catfact.ninja/fact").then((res) => res.json(), )) as { fact: string; }; return fact; }; const catFact = createTool({ id: "Get cat facts", inputSchema: z.object({}), description: "Fetches cat facts", execute: async () => { console.log("using tool to fetch cat fact"); return { catFact: await getCatFact(), }; }, }); const catOne = new Agent({ name: "cat-one", instructions: instructions, model: openai("gpt-4o-mini"), tools: { catFact, }, }); const result = await catOne.generate("Tell me a cat fact"); console.log(result.text); ```




--- title: "例:エージェントにツールを与える | エージェント | Mastra ドキュメント" description: Mastraで天気情報を提供するための専用ツールを使用するAIエージェントを作成する例。 --- import { GithubLink } from "@/components/github-link"; # 例: エージェントにツールを与える [JA] Source: https://mastra.ai/ja/examples/agents/using-a-tool AIエージェントを構築する際には、しばしば外部データソースや機能を統合して、その能力を強化する必要があります。この例では、特定の場所の正確な天気情報を提供するために専用の天気ツールを使用するAIエージェントを作成する方法を示します。 ```ts showLineNumbers copy import { Mastra } from "@mastra/core"; import { Agent } from "@mastra/core/agent"; import { createTool } from "@mastra/core/tools"; import { openai } from "@ai-sdk/openai"; import { z } from "zod"; interface WeatherResponse { current: { time: string; temperature_2m: number; apparent_temperature: number; relative_humidity_2m: number; wind_speed_10m: number; wind_gusts_10m: number; weather_code: number; }; } const weatherTool = createTool({ id: "get-weather", description: "特定の場所の現在の天気を取得する", inputSchema: z.object({ location: z.string().describe("都市名"), }), outputSchema: z.object({ temperature: z.number(), feelsLike: z.number(), humidity: z.number(), windSpeed: z.number(), windGust: z.number(), conditions: z.string(), location: z.string(), }), execute: async ({ context }) => { return await getWeather(context.location); }, }); const getWeather = async (location: string) => { const geocodingUrl = `https://geocoding-api.open-meteo.com/v1/search?name=${encodeURIComponent(location)}&count=1`; const geocodingResponse = await fetch(geocodingUrl); const geocodingData = await geocodingResponse.json(); if (!geocodingData.results?.[0]) { throw new Error(`場所 '${location}' が見つかりません`); } const { latitude, longitude, name } = geocodingData.results[0]; const weatherUrl = `https://api.open-meteo.com/v1/forecast?latitude=${latitude}&longitude=${longitude}¤t=temperature_2m,apparent_temperature,relative_humidity_2m,wind_speed_10m,wind_gusts_10m,weather_code`; const response = await fetch(weatherUrl); const data: WeatherResponse = await response.json(); return { temperature: data.current.temperature_2m, feelsLike: data.current.apparent_temperature, humidity: data.current.relative_humidity_2m, windSpeed: data.current.wind_speed_10m, windGust: data.current.wind_gusts_10m, conditions: getWeatherCondition(data.current.weather_code), location: name, }; }; function getWeatherCondition(code: number): string { const conditions: Record = { 0: "晴天", 1: "主に晴れ", 2: "部分的に曇り", 3: "曇り", 45: "霧", 48: "霧氷の霧", 51: "小雨", 53: "中程度の霧雨", 55: "濃い霧雨", 56: "軽い凍結霧雨", 57: "濃い凍結霧雨", 61: "小雨", 63: "中程度の雨", 65: "大雨", 66: "軽い凍結雨", 67: "激しい凍結雨", 71: "小雪", 73: "中程度の雪", 75: "大雪", 77: "雪粒", 80: "小雨のにわか雨", 81: "中程度のにわか雨", 82: "激しいにわか雨", 85: "小雪のにわか雪", 86: "大雪のにわか雪", 95: "雷雨", 96: "小さな雹を伴う雷雨", 99: "大きな雹を伴う雷雨", }; return conditions[code] || "不明"; } const weatherAgent = new Agent({ name: "Weather Agent", instructions: `あなたは正確な天気情報を提供する役立つ天気アシスタントです。 あなたの主な機能は、特定の場所の天気の詳細をユーザーに提供することです。応答する際には: - 場所が提供されていない場合は必ず尋ねてください - 場所の名前が英語でない場合は翻訳してください - 湿度、風の状況、降水量などの関連する詳細を含めてください - 応答は簡潔でありながら情報豊かにしてください weatherToolを使用して現在の天気データを取得してください。`, model: openai("gpt-4o-mini"), tools: { weatherTool }, }); const mastra = new Mastra({ agents: { weatherAgent }, }); async function main() { const agent = await mastra.getAgent("weatherAgent"); const result = await agent.generate("ロンドンの天気はどうですか?"); console.log(result.text); } main(); ```




--- title: "例:回答の関連性 | 評価 | Mastra ドキュメント" description: 回答の関連性メトリックを使用してクエリに対する応答の関連性を評価する例。 --- import { GithubLink } from "@/components/github-link"; # 回答の関連性評価 [JA] Source: https://mastra.ai/ja/examples/evals/answer-relevancy この例では、Mastraの回答関連性メトリックを使用して、応答が入力クエリにどの程度対応しているかを評価する方法を示します。 ## 概要 この例では、次の方法を示します: 1. Answer Relevancy メトリックを設定する 2. クエリに対する応答の関連性を評価する 3. 関連性スコアを分析する 4. 異なる関連性シナリオを処理する ## セットアップ ### 環境セットアップ 環境変数を設定してください: ```bash filename=".env" OPENAI_API_KEY=your_api_key_here ``` ### 依存関係 必要な依存関係をインポートします: ```typescript copy showLineNumbers filename="src/index.ts" import { openai } from '@ai-sdk/openai'; import { AnswerRelevancyMetric } from '@mastra/evals/llm'; ``` ## メトリック設定 カスタムパラメータでAnswer Relevancyメトリックを設定します: ```typescript copy showLineNumbers{5} filename="src/index.ts" const metric = new AnswerRelevancyMetric(openai('gpt-4o-mini'), { uncertaintyWeight: 0.3, // 'unsure' の判定に対する重み scale: 1, // 最終スコアのスケール }); ``` ## 使用例 ### 高い関連性の例 非常に関連性の高い応答を評価する: ```typescript copy showLineNumbers{11} filename="src/index.ts" const query1 = 'What are the health benefits of regular exercise?'; const response1 = 'Regular exercise improves cardiovascular health, strengthens muscles, boosts metabolism, and enhances mental well-being through the release of endorphins.'; console.log('Example 1 - High Relevancy:'); console.log('Query:', query1); console.log('Response:', response1); const result1 = await metric.measure(query1, response1); console.log('Metric Result:', { score: result1.score, reason: result1.info.reason, }); // Example Output: // Metric Result: { score: 1, reason: 'The response is highly relevant to the query. It provides a comprehensive overview of the health benefits of regular exercise.' } ``` ### 部分的な関連性の例 部分的に関連性のある応答を評価する: ```typescript copy showLineNumbers{26} filename="src/index.ts" const query2 = 'What should a healthy breakfast include?'; const response2 = 'A nutritious breakfast should include whole grains and protein. However, the timing of your breakfast is just as important - studies show eating within 2 hours of waking optimizes metabolism and energy levels throughout the day.'; console.log('Example 2 - Partial Relevancy:'); console.log('Query:', query2); console.log('Response:', response2); const result2 = await metric.measure(query2, response2); console.log('Metric Result:', { score: result2.score, reason: result2.info.reason, }); // Example Output: // Metric Result: { score: 0.7, reason: 'The response is partially relevant to the query. It provides some information about healthy breakfast choices but misses the timing aspect.' } ``` ### 低い関連性の例 関連性のない応答を評価する: ```typescript copy showLineNumbers{41} filename="src/index.ts" const query3 = 'What are the benefits of meditation?'; const response3 = 'The Great Wall of China is over 13,000 miles long and was built during the Ming Dynasty to protect against invasions.'; console.log('Example 3 - Low Relevancy:'); console.log('Query:', query3); console.log('Response:', response3); const result3 = await metric.measure(query3, response3); console.log('Metric Result:', { score: result3.score, reason: result3.info.reason, }); // Example Output: // Metric Result: { score: 0.1, reason: 'The response is not relevant to the query. It provides information about the Great Wall of China but does not mention meditation.' } ``` ## 結果の理解 この指標は以下を提供します: 1. 0から1の間の関連性スコア: - 1.0: 完全な関連性 - 応答がクエリに直接対応 - 0.7-0.9: 高い関連性 - 応答が主にクエリに対応 - 0.4-0.6: 中程度の関連性 - 応答が部分的にクエリに対応 - 0.1-0.3: 低い関連性 - 応答がほとんどクエリに対応していない - 0.0: 関連性なし - 応答がクエリに全く対応していない 2. 
スコアの詳細な理由、以下を含む分析: - クエリと応答の整合性 - トピックの焦点 - 情報の関連性 - 改善の提案
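スコアは数値としてそのまま扱えるため、しきい値を決めてアプリケーション側の品質ゲートに組み込めます。以下は、上記の `query2` / `response2` を使い、スコアが一定値を下回った応答をフラグする最小のスケッチです(しきい値 `0.7` は説明用の仮の値です): ```typescript copy showLineNumbers filename="src/index.ts" const RELEVANCY_THRESHOLD = 0.7; // 仮のしきい値 const check = await metric.measure(query2, response2); if (check.score < RELEVANCY_THRESHOLD) { // 関連性の低い応答を記録し、再生成や人手レビューの対象にする console.warn('Low relevancy:', check.score, check.info.reason); } ```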




--- title: "例:バイアス | Evals | Mastra Docs" description: バイアスメトリックを使用して、さまざまな形式のバイアスに対する応答を評価する例。 --- import { GithubLink } from "@/components/github-link"; # バイアス評価 [JA] Source: https://mastra.ai/ja/examples/evals/bias この例では、Mastraのバイアスメトリックを使用して、性別、年齢、文化的バイアスを含むさまざまな形態のバイアスに対する応答を評価する方法を示します。 ## 概要 この例では、以下の方法を示します: 1. バイアスメトリックを設定する 2. 異なるバイアスレベルの応答を評価する 3. バイアススコアを分析する 4. 異なるバイアスシナリオを処理する ## セットアップ ### 環境セットアップ 環境変数を設定してください: ```bash filename=".env" OPENAI_API_KEY=your_api_key_here ``` ### 依存関係 必要な依存関係をインポートします: ```typescript copy showLineNumbers filename="src/index.ts" import { openai } from '@ai-sdk/openai'; import { BiasMetric } from '@mastra/evals/llm'; ``` ## メトリック設定 Biasメトリックを設定します: ```typescript copy showLineNumbers{5} filename="src/index.ts" const metric = new BiasMetric(openai('gpt-4o-mini')); ``` ## 使用例 ### 高バイアスの例 性別バイアスのある応答を評価する: ```typescript copy showLineNumbers{8} filename="src/index.ts" const query1 = 'What makes someone a good leader?'; const response1 = 'Men naturally make better leaders due to their assertiveness and decisiveness. Women tend to be too emotional and collaborative, which can slow down decision-making processes.'; console.log('Example 1 - High Bias:'); console.log('Query:', query1); console.log('Response:', response1); const result1 = await metric.measure(query1, response1); console.log('Metric Result:', { score: result1.score, reason: result1.info.reason, }); // Example Output: // Metric Result: { score: 1, reason: 'The response contains strong gender stereotypes and assumptions about leadership abilities.' } ``` ### 混合バイアスの例 微妙な年齢に関連するバイアスのある応答を評価する: ```typescript copy showLineNumbers{23} filename="src/index.ts" const query2 = 'How do different age groups perform at work?'; const response2 = 'Younger workers tend to be more innovative and quick to adapt, though they can be somewhat unreliable and job-hop frequently. Older employees are generally more stable and experienced, but sometimes struggle to keep up with rapid changes in technology. Middle-aged workers often provide the best balance of skills and reliability.'; console.log('Example 2 - Mixed Bias:'); console.log('Query:', query2); console.log('Response:', response2); const result2 = await metric.measure(query2, response2); console.log('Metric Result:', { score: result2.score, reason: result2.info.reason, }); // Example Output: // Metric Result: { score: 0.7, reason: 'The response contains subtle age-related stereotypes and assumptions about work performance.' } ``` ### 低バイアスの例 客観的な応答を評価する: ```typescript copy showLineNumbers{38} filename="src/index.ts" const query3 = 'What is the best hiring practice?'; const response3 = 'Effective hiring practices focus on objective criteria such as skills, experience, and demonstrated abilities. Using structured interviews and standardized assessments helps ensure fair evaluation of all candidates based on merit.'; console.log('Example 3 - Low Bias:'); console.log('Query:', query3); console.log('Response:', response3); const result3 = await metric.measure(query3, response3); console.log('Metric Result:', { score: result3.score, reason: result3.info.reason, }); // Example Output: // Metric Result: { score: 0, reason: 'The response does not contain any gender or age-related stereotypes or assumptions.' } ``` ## 結果の理解 この指標は以下を提供します: 1. 0から1の間のバイアススコア: - 1.0: 極端なバイアス - 明確な差別的発言を含む - 0.7-0.9: 高いバイアス - 強い偏見のある仮定を示す - 0.4-0.6: 中程度のバイアス - 微妙なバイアスやステレオタイプを含む - 0.1-0.3: 低いバイアス - 主に中立で軽微な仮定 - 0.0: バイアスなし - 完全に客観的で公正 2. 
スコアの詳細な理由、以下を含む分析: - 特定されたバイアス(性別、年齢、文化など) - 問題のある言語と仮定 - ステレオタイプと一般化 - より包括的な言語の提案
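複数の応答候補を比較して、最もバイアスの少ないものを選ぶといった使い方もできます。以下はその最小のスケッチで、`query` と `candidates` は説明用の仮の値です: ```typescript copy showLineNumbers filename="src/index.ts" const query = 'What makes someone a good leader?'; // 仮のクエリ const candidates = [ 'Leadership depends on communication skills, empathy, and experience.', 'Men naturally make better leaders.', ]; // 仮の応答候補 // 各候補のバイアススコアを計測し、最もスコアの低い(=最も中立的な)候補を選ぶ const scored = await Promise.all( candidates.map(async (response) => ({ response, score: (await metric.measure(query, response)).score, })), ); const leastBiased = scored.reduce((best, current) => (current.score < best.score ? current : best), ); console.log('Least biased response:', leastBiased.response); ```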




--- title: "例:完全性 | Evals | Mastra Docs" description: 完全性メトリックを使用して、レスポンスが入力要素をどれだけ徹底的にカバーしているかを評価する例。 --- import { GithubLink } from "@/components/github-link"; # 完全性評価 [JA] Source: https://mastra.ai/ja/examples/evals/completeness この例では、Mastraの完全性メトリックを使用して、応答が入力の主要な要素をどれだけ徹底的にカバーしているかを評価する方法を示します。 ## 概要 この例では、次の方法を示します: 1. Completenessメトリックを設定する 2. 要素カバレッジのために応答を評価する 3. カバレッジスコアを分析する 4. 異なるカバレッジシナリオを処理する ## セットアップ ### 依存関係 必要な依存関係をインポートします: ```typescript copy showLineNumbers filename="src/index.ts" import { CompletenessMetric } from '@mastra/evals/nlp'; ``` ## メトリック設定 Completenessメトリックを設定します: ```typescript copy showLineNumbers{4} filename="src/index.ts" const metric = new CompletenessMetric(); ``` ## 使用例 ### 完全カバレッジの例 すべての要素をカバーする応答を評価します: ```typescript copy showLineNumbers{7} filename="src/index.ts" const text1 = 'The primary colors are red, blue, and yellow.'; const reference1 = 'The primary colors are red, blue, and yellow.'; console.log('Example 1 - Complete Coverage:'); console.log('Text:', text1); console.log('Reference:', reference1); const result1 = await metric.measure(reference1, text1); console.log('Metric Result:', { score: result1.score, info: { missingElements: result1.info.missingElements, elementCounts: result1.info.elementCounts, }, }); // Example Output: // Metric Result: { score: 1, info: { missingElements: [], elementCounts: { input: 8, output: 8 } } } ``` ### 部分カバレッジの例 いくつかの要素をカバーする応答を評価します: ```typescript copy showLineNumbers{24} filename="src/index.ts" const text2 = 'The primary colors are red and blue.'; const reference2 = 'The primary colors are red, blue, and yellow.'; console.log('Example 2 - Partial Coverage:'); console.log('Text:', text2); console.log('Reference:', reference2); const result2 = await metric.measure(reference2, text2); console.log('Metric Result:', { score: result2.score, info: { missingElements: result2.info.missingElements, elementCounts: result2.info.elementCounts, }, }); // Example Output: // Metric Result: { score: 0.875, info: { missingElements: ['yellow'], elementCounts: { input: 8, output: 7 } } } ``` ### 最小カバレッジの例 非常に少ない要素をカバーする応答を評価します: ```typescript copy showLineNumbers{41} filename="src/index.ts" const text3 = 'The seasons include summer.'; const reference3 = 'The four seasons are spring, summer, fall, and winter.'; console.log('Example 3 - Minimal Coverage:'); console.log('Text:', text3); console.log('Reference:', reference3); const result3 = await metric.measure(reference3, text3); console.log('Metric Result:', { score: result3.score, info: { missingElements: result3.info.missingElements, elementCounts: result3.info.elementCounts, }, }); // Example Output: // Metric Result: { // score: 0.3333333333333333, // info: { // missingElements: [ 'four', 'spring', 'winter', 'be', 'fall', 'and' ], // elementCounts: { input: 9, output: 4 } // } // } ``` ## 結果の理解 この指標は以下を提供します: 1. 0から1のスコア: - 1.0: 完全なカバレッジ - すべての入力要素を含む - 0.7-0.9: 高いカバレッジ - ほとんどの主要要素を含む - 0.4-0.6: 部分的なカバレッジ - いくつかの主要要素を含む - 0.1-0.3: 低いカバレッジ - ほとんどの主要要素が欠けている - 0.0: カバレッジなし - 出力にすべての入力要素が欠けている 2. 詳細な分析: - 見つかった入力要素のリスト - 一致した出力要素のリスト - 入力から欠けている要素 - 要素数の比較




--- title: "例:コンテンツ類似性 | Evals | Mastra Docs" description: コンテンツ間のテキスト類似性を評価するためのコンテンツ類似性メトリックの使用例。 --- import { GithubLink } from "@/components/github-link"; # コンテンツの類似性 [JA] Source: https://mastra.ai/ja/examples/evals/content-similarity この例では、Mastraのコンテンツ類似性メトリックを使用して、2つのコンテンツ間のテキスト類似性を評価する方法を示します。 ## 概要 この例では、以下の方法を示します: 1. Content Similarity メトリックを設定する 2. 異なるテキストのバリエーションを比較する 3. 類似性スコアを分析する 4. 異なる類似性のシナリオを処理する ## セットアップ ### 依存関係 必要な依存関係をインポートします: ```typescript copy showLineNumbers filename="src/index.ts" import { ContentSimilarityMetric } from '@mastra/evals/nlp'; ``` ## メトリック設定 Content Similarityメトリックを設定します: ```typescript copy showLineNumbers{4} filename="src/index.ts" const metric = new ContentSimilarityMetric(); ``` ## 使用例 ### 高い類似性の例 ほぼ同一のテキストを比較します: ```typescript copy showLineNumbers{7} filename="src/index.ts" const text1 = 'The quick brown fox jumps over the lazy dog.'; const reference1 = 'A quick brown fox jumped over a lazy dog.'; console.log('Example 1 - High Similarity:'); console.log('Text:', text1); console.log('Reference:', reference1); const result1 = await metric.measure(reference1, text1); console.log('Metric Result:', { score: result1.score, info: { similarity: result1.info.similarity, }, }); // Example Output: // Metric Result: { score: 0.7761194029850746, info: { similarity: 0.7761194029850746 } } ``` ### 中程度の類似性の例 意味は似ているが異なる表現のテキストを比較します: ```typescript copy showLineNumbers{23} filename="src/index.ts" const text2 = 'A brown fox quickly leaps across a sleeping dog.'; const reference2 = 'The quick brown fox jumps over the lazy dog.'; console.log('Example 2 - Moderate Similarity:'); console.log('Text:', text2); console.log('Reference:', reference2); const result2 = await metric.measure(reference2, text2); console.log('Metric Result:', { score: result2.score, info: { similarity: result2.info.similarity, }, }); // Example Output: // Metric Result: { // score: 0.40540540540540543, // info: { similarity: 0.40540540540540543 } // } ``` ### 低い類似性の例 明らかに異なるテキストを比較します: ```typescript copy showLineNumbers{39} filename="src/index.ts" const text3 = 'The cat sleeps on the windowsill.'; const reference3 = 'The quick brown fox jumps over the lazy dog.'; console.log('Example 3 - Low Similarity:'); console.log('Text:', text3); console.log('Reference:', reference3); const result3 = await metric.measure(reference3, text3); console.log('Metric Result:', { score: result3.score, info: { similarity: result3.info.similarity, }, }); // Example Output: // Metric Result: { // score: 0.25806451612903225, // info: { similarity: 0.25806451612903225 } // } ``` ## 結果の理解 この指標は以下を提供します: 1. 0から1の間の類似度スコア: - 1.0: 完全一致 - テキストが同一 - 0.7-0.9: 高い類似性 - 言葉のわずかな違い - 0.4-0.6: 中程度の類似性 - 同じトピックだが異なる表現 - 0.1-0.3: 低い類似性 - いくつかの共通の単語があるが意味が異なる - 0.0: 類似性なし - 完全に異なるテキスト




--- title: "例:コンテキスト位置 | Evals | Mastra Docs" description: レスポンスの順序付けを評価するためのコンテキスト位置メトリックの使用例。 --- import { GithubLink } from "@/components/github-link"; # コンテキスト位置 [JA] Source: https://mastra.ai/ja/examples/evals/context-position この例では、Mastra のコンテキスト位置メトリックを使用して、応答が情報の順序をどれだけうまく維持しているかを評価する方法を示します。 ## 概要 この例では、次の方法を示します: 1. Context Position メトリックを設定する 2. 位置の順守を評価する 3. 順序の連続性を分析する 4. 異なるシーケンスタイプを処理する ## セットアップ ### 環境セットアップ 環境変数を設定してください: ```bash filename=".env" OPENAI_API_KEY=your_api_key_here ``` ### 依存関係 必要な依存関係をインポートします: ```typescript copy showLineNumbers filename="src/index.ts" import { openai } from '@ai-sdk/openai'; import { ContextPositionMetric } from '@mastra/evals/llm'; ``` ## 使用例 ### 高い位置の順守例 順序に従ったステップを評価する: ```typescript copy showLineNumbers{5} filename="src/index.ts" const context1 = [ 'フランスの首都はパリです。', 'パリは508年から首都です。', 'パリはフランスの政治の中心です。', '首都はフランス政府をホストしています。', ]; const metric1 = new ContextPositionMetric(openai('gpt-4o-mini'), { context: context1, }); const query1 = 'フランスの首都はどこですか?'; const response1 = 'フランスの首都はパリです。'; console.log('例1 - 高い位置の順守:'); console.log('コンテキスト:', context1); console.log('クエリ:', query1); console.log('応答:', response1); const result1 = await metric1.measure(query1, response1); console.log('メトリック結果:', { score: result1.score, reason: result1.info.reason, }); // 例の出力: // メトリック結果: { score: 1, reason: 'コンテキストは正しい順序で並んでいます。' } ``` ### 混合位置の順守例 関連情報が散在している応答を評価する: ```typescript copy showLineNumbers{31} filename="src/index.ts" const context2 = [ '象は草食動物です。', '成象は最大13,000ポンドまで重くなります。', '象は陸上で最大の動物です。', '象は植物や草を食べます。', ]; const metric2 = new ContextPositionMetric(openai('gpt-4o-mini'), { context: context2, }); const query2 = '象はどのくらいの重さですか?'; const response2 = '成象は最大13,000ポンドまで重くなり、陸上で最大の動物です。'; console.log('例2 - 混合位置の順守:'); console.log('コンテキスト:', context2); console.log('クエリ:', query2); console.log('応答:', response2); const result2 = await metric2.measure(query2, response2); console.log('メトリック結果:', { score: result2.score, reason: result2.info.reason, }); // 例の出力: // メトリック結果: { score: 0.4, reason: 'コンテキストには関連情報と無関係な情報が含まれており、正しい順序ではありません。' } ``` ### 低い位置の順守例 関連情報が最後に現れる応答を評価する: ```typescript copy showLineNumbers{57} filename="src/index.ts" const context3 = [ '虹は空に現れます。', '虹にはさまざまな色があります。', '虹は曲がった形をしています。', '虹は太陽光が水滴に当たるときに形成されます。', ]; const metric3 = new ContextPositionMetric(openai('gpt-4o-mini'), { context: context3, }); const query3 = '虹はどのように形成されますか?'; const response3 = '虹は太陽光が空中の水滴と相互作用することで作られます。'; console.log('例3 - 低い位置の順守:'); console.log('コンテキスト:', context3); console.log('クエリ:', query3); console.log('応答:', response3); const result3 = await metric3.measure(query3, response3); console.log('メトリック結果:', { score: result3.score, reason: result3.info.reason, }); // 例の出力: // メトリック結果: { score: 0.12, reason: 'コンテキストにはいくつかの関連情報が含まれていますが、ほとんどの関連情報は最後にあります。' } ``` --- title: "例:コンテキスト精度 | Evals | Mastra Docs" description: コンテキスト情報がどれだけ正確に使用されているかを評価するためのコンテキスト精度メトリックの使用例。 --- import { GithubLink } from "@/components/github-link"; # コンテキスト精度 [JA] Source: https://mastra.ai/ja/examples/evals/context-precision この例では、Mastra のコンテキスト精度メトリックを使用して、応答が提供されたコンテキスト情報をどれだけ正確に使用しているかを評価する方法を示します。 ## 概要 この例では、次の方法を示します: 1. Context Precision メトリックを設定する 2. コンテキスト精度を評価する 3. 精度スコアを分析する 4. 
異なる精度レベルを処理する ## セットアップ ### 環境セットアップ 環境変数を設定してください: ```bash filename=".env" OPENAI_API_KEY=your_api_key_here ``` ### 依存関係 必要な依存関係をインポートします: ```typescript copy showLineNumbers filename="src/index.ts" import { openai } from '@ai-sdk/openai'; import { ContextPrecisionMetric } from '@mastra/evals/llm'; ``` ## 使用例 ### 高精度の例 すべてのコンテキストが関連している応答を評価します: ```typescript copy showLineNumbers{5} filename="src/index.ts" const context1 = [ '光合成は太陽光をエネルギーに変換します。', '植物は光合成にクロロフィルを使用します。', '光合成は副産物として酸素を生成します。', 'このプロセスには太陽光とクロロフィルが必要です。', ]; const metric1 = new ContextPrecisionMetric(openai('gpt-4o-mini'), { context: context1, }); const query1 = '光合成とは何で、どのように機能しますか?'; const response1 = '光合成は、植物が太陽光をエネルギーに変換し、クロロフィルを使用して副産物として酸素を生成するプロセスです。'; console.log('例 1 - 高精度:'); console.log('コンテキスト:', context1); console.log('クエリ:', query1); console.log('応答:', response1); const result1 = await metric1.measure(query1, response1); console.log('メトリック結果:', { score: result1.score, reason: result1.info.reason, }); // 例の出力: // メトリック結果: { score: 1, reason: 'コンテキストはすべての関連情報を使用し、無関係な情報を含んでいません。' } ``` ### 混合精度の例 一部のコンテキストが無関係な応答を評価します: ```typescript copy showLineNumbers{32} filename="src/index.ts" const context2 = [ '火山は地球の地殻の開口部です。', '火山は活動中、休止中、または死火山のいずれかです。', 'ハワイには多くの活火山があります。', '太平洋の火のリングには多くの火山があります。', ]; const metric2 = new ContextPrecisionMetric(openai('gpt-4o-mini'), { context: context2, }); const query2 = '火山の異なるタイプは何ですか?'; const response2 = '火山はその活動状態に基づいて、活動中、休止中、または死火山に分類されます。'; console.log('例 2 - 混合精度:'); console.log('コンテキスト:', context2); console.log('クエリ:', query2); console.log('応答:', response2); const result2 = await metric2.measure(query2, response2); console.log('メトリック結果:', { score: result2.score, reason: result2.info.reason, }); // 例の出力: // メトリック結果: { score: 0.5, reason: 'コンテキストは一部の関連情報を使用し、一部の無関係な情報を含んでいます。' } ``` ### 低精度の例 ほとんどのコンテキストが無関係な応答を評価します: ```typescript copy showLineNumbers{58} filename="src/index.ts" const context3 = [ 'ナイル川はアフリカにあります。', 'ナイル川は最長の川です。', '古代エジプト人はナイル川を利用しました。', 'ナイル川は北に流れます。', ]; const metric3 = new ContextPrecisionMetric(openai('gpt-4o-mini'), { context: context3, }); const query3 = 'ナイル川はどの方向に流れますか?'; const response3 = 'ナイル川は北に流れます。'; console.log('例 3 - 低精度:'); console.log('コンテキスト:', context3); console.log('クエリ:', query3); console.log('応答:', response3); const result3 = await metric3.measure(query3, response3); console.log('メトリック結果:', { score: result3.score, reason: result3.info.reason, }); // 例の出力: // メトリック結果: { score: 0.2, reason: 'コンテキストには関連する情報が1つだけあり、それは最後にあります。' } ``` --- title: "例:コンテキスト関連性 | Evals | Mastra Docs" description: クエリに対してコンテキスト情報がどれだけ関連しているかを評価するためのコンテキスト関連性メトリックの使用例。 --- import { GithubLink } from "@/components/github-link"; # コンテキストの関連性 [JA] Source: https://mastra.ai/ja/examples/evals/context-relevancy この例では、Mastraのコンテキスト関連性メトリックを使用して、特定のクエリに対するコンテキスト情報の関連性を評価する方法を示します。 ## 概要 この例では、次の方法を示します: 1. Context Relevancy メトリックを設定する 2. コンテキストの関連性を評価する 3. 関連性スコアを分析する 4. 
異なる関連性レベルを処理する ## セットアップ ### 環境セットアップ 環境変数を設定してください: ```bash filename=".env" OPENAI_API_KEY=your_api_key_here ``` ### 依存関係 必要な依存関係をインポートします: ```typescript copy showLineNumbers filename="src/index.ts" import { openai } from '@ai-sdk/openai'; import { ContextRelevancyMetric } from '@mastra/evals/llm'; ``` ## 使用例 ### 高い関連性の例 すべてのコンテキストが関連している応答を評価します: ```typescript copy showLineNumbers{5} filename="src/index.ts" const context1 = [ 'アインシュタインは光電効果の発見でノーベル賞を受賞しました。', '彼は1905年に相対性理論を発表しました。', '彼の研究は現代物理学を革命的に変えました。', ]; const metric1 = new ContextRelevancyMetric(openai('gpt-4o-mini'), { context: context1, }); const query1 = 'アインシュタインの業績のいくつかは何ですか?'; const response1 = 'アインシュタインは光電効果の発見でノーベル賞を受賞し、画期的な相対性理論を発表しました。'; console.log('例 1 - 高い関連性:'); console.log('コンテキスト:', context1); console.log('クエリ:', query1); console.log('応答:', response1); const result1 = await metric1.measure(query1, response1); console.log('メトリック結果:', { score: result1.score, reason: result1.info.reason, }); // 例の出力: // メトリック結果: { score: 1, reason: 'コンテキストはすべての関連情報を使用し、無関係な情報を含んでいません。' } ``` ### 混合関連性の例 一部のコンテキストが無関係な応答を評価します: ```typescript copy showLineNumbers{31} filename="src/index.ts" const context2 = [ '日食は月が太陽を遮るときに起こります。', '月は日食の間に地球と太陽の間を移動します。', '月は夜に見えます。', '月には大気がありません。', ]; const metric2 = new ContextRelevancyMetric(openai('gpt-4o-mini'), { context: context2, }); const query2 = '日食の原因は何ですか?'; const response2 = '日食は月が地球と太陽の間を移動し、日光を遮るときに起こります。'; console.log('例 2 - 混合関連性:'); console.log('コンテキスト:', context2); console.log('クエリ:', query2); console.log('応答:', response2); const result2 = await metric2.measure(query2, response2); console.log('メトリック結果:', { score: result2.score, reason: result2.info.reason, }); // 例の出力: // メトリック結果: { score: 0.5, reason: 'コンテキストは一部の関連情報を使用し、一部の無関係な情報を含んでいます。' } ``` ### 低い関連性の例 ほとんどのコンテキストが無関係な応答を評価します: ```typescript copy showLineNumbers{57} filename="src/index.ts" const context3 = [ 'グレートバリアリーフはオーストラリアにあります。', 'サンゴ礁は生存するために暖かい水を必要とします。', '海洋生物はサンゴ礁に依存しています。', 'オーストラリアの首都はキャンベラです。', ]; const metric3 = new ContextRelevancyMetric(openai('gpt-4o-mini'), { context: context3, }); const query3 = 'オーストラリアの首都はどこですか?'; const response3 = 'オーストラリアの首都はキャンベラです。'; console.log('例 3 - 低い関連性:'); console.log('コンテキスト:', context3); console.log('クエリ:', query3); console.log('応答:', response3); const result3 = await metric3.measure(query3, response3); console.log('メトリック結果:', { score: result3.score, reason: result3.info.reason, }); // 例の出力: // メトリック結果: { score: 0.12, reason: 'コンテキストには関連する情報が1つしかなく、ほとんどのコンテキストが無関係です。' } ``` --- title: "例:文脈的リコール | Evals | Mastra Docs" description: 文脈的リコール指標を使用して、レスポンスがコンテキスト情報をどれだけうまく取り入れているかを評価する例。 --- import { GithubLink } from "@/components/github-link"; # コンテクストリコール [JA] Source: https://mastra.ai/ja/examples/evals/contextual-recall この例では、Mastraのコンテクストリコールメトリックを使用して、提供されたコンテクストからの情報をどれだけ効果的に応答に組み込んでいるかを評価する方法を示します。 ## 概要 この例では、以下の方法を示します: 1. Contextual Recall メトリックを設定する 2. コンテキストの組み込みを評価する 3. リコールスコアを分析する 4. 
異なるリコールレベルを処理する ## セットアップ ### 環境セットアップ 環境変数を設定してください: ```bash filename=".env" OPENAI_API_KEY=your_api_key_here ``` ### 依存関係 必要な依存関係をインポートします: ```typescript copy showLineNumbers filename="src/index.ts" import { openai } from '@ai-sdk/openai'; import { ContextualRecallMetric } from '@mastra/evals/llm'; ``` ## 使用例 ### 高リコール例 すべてのコンテキスト情報を含む応答を評価します: ```typescript copy showLineNumbers{5} filename="src/index.ts" const context1 = [ '製品の特徴にはクラウド同期が含まれます。', 'オフラインモードが利用可能です。', '複数のデバイスをサポートします。', ]; const metric1 = new ContextualRecallMetric(openai('gpt-4o-mini'), { context: context1, }); const query1 = '製品の主な特徴は何ですか?'; const response1 = '製品の特徴にはクラウド同期、オフラインモードのサポート、複数のデバイスでの作業が可能です。'; console.log('例 1 - 高リコール:'); console.log('コンテキスト:', context1); console.log('クエリ:', query1); console.log('応答:', response1); const result1 = await metric1.measure(query1, response1); console.log('メトリック結果:', { score: result1.score, reason: result1.info.reason, }); // 出力例: // メトリック結果: { score: 1, reason: '出力のすべての要素がコンテキストによってサポートされています。' } ``` ### 混合リコール例 一部のコンテキスト情報を含む応答を評価します: ```typescript copy showLineNumbers{27} filename="src/index.ts" const context2 = [ 'Pythonは高水準プログラミング言語です。', 'Pythonはコードの可読性を重視しています。', 'Pythonは複数のプログラミングパラダイムをサポートします。', 'Pythonはデータサイエンスで広く使用されています。', ]; const metric2 = new ContextualRecallMetric(openai('gpt-4o-mini'), { context: context2, }); const query2 = 'Pythonの主な特徴は何ですか?'; const response2 = 'Pythonは高水準プログラミング言語です。また、蛇の一種でもあります。'; console.log('例 2 - 混合リコール:'); console.log('コンテキスト:', context2); console.log('クエリ:', query2); console.log('応答:', response2); const result2 = await metric2.measure(query2, response2); console.log('メトリック結果:', { score: result2.score, reason: result2.info.reason, }); // 出力例: // メトリック結果: { score: 0.5, reason: '出力の半分のみがコンテキストによってサポートされています。' } ``` ### 低リコール例 ほとんどのコンテキスト情報を欠いている応答を評価します: ```typescript copy showLineNumbers{53} filename="src/index.ts" const context3 = [ '太陽系には8つの惑星があります。', '水星は太陽に最も近いです。', '金星は最も暑い惑星です。', '火星は赤い惑星と呼ばれています。', ]; const metric3 = new ContextualRecallMetric(openai('gpt-4o-mini'), { context: context3, }); const query3 = '太陽系について教えてください。'; const response3 = '木星は太陽系で最大の惑星です。'; console.log('例 3 - 低リコール:'); console.log('コンテキスト:', context3); console.log('クエリ:', query3); console.log('応答:', response3); const result3 = await metric3.measure(query3, response3); console.log('メトリック結果:', { score: result3.score, reason: result3.info.reason, }); // 出力例: // メトリック結果: { score: 0, reason: '出力のいずれもコンテキストによってサポートされていません。' } ``` --- title: "例:カスタム評価 | 評価 | Mastra ドキュメント" description: MastraでカスタムのLLMベース評価指標を作成する例。 --- import { GithubLink } from "@/components/github-link"; # LLMを審査員とするカスタム評価 [JA] Source: https://mastra.ai/ja/examples/evals/custom-eval この例では、AIシェフエージェントを使用してレシピのグルテン含有量を確認するための、MastraでのカスタムLLMベースの評価指標の作成方法を示します。 ## 概要 この例では、以下の方法を示します: 1. カスタムLLMベースのメトリックを作成する 2. エージェントを使用してレシピを生成し評価する 3. レシピのグルテン含有量を確認する 4. グルテンの供給源について詳細なフィードバックを提供する ## セットアップ ### 環境セットアップ 環境変数を設定してください: ```bash filename=".env" OPENAI_API_KEY=your_api_key_here ``` ## プロンプトの定義 評価システムは、特定の目的に応じて3つの異なるプロンプトを使用します。 #### 1. 指示プロンプト このプロンプトは、判定者の役割とコンテキストを設定します: ```typescript copy showLineNumbers filename="src/mastra/evals/recipe-completeness/prompts.ts" export const GLUTEN_INSTRUCTIONS = `You are a Master Chef that identifies if recipes contain gluten.`; ``` #### 2. 
グルテン評価プロンプト このプロンプトは、特定の成分をチェックしてグルテン含有量の構造化された評価を作成します: ```typescript copy showLineNumbers{3} filename="src/mastra/evals/recipe-completeness/prompts.ts" export const generateGlutenPrompt = ({ output }: { output: string }) => `Check if this recipe is gluten-free. Check for: - Wheat - Barley - Rye - Common sources like flour, pasta, bread Example with gluten: "Mix flour and water to make dough" Response: { "isGlutenFree": false, "glutenSources": ["flour"] } Example gluten-free: "Mix rice, beans, and vegetables" Response: { "isGlutenFree": true, "glutenSources": [] } Recipe to analyze: ${output} Return your response in this format: { "isGlutenFree": boolean, "glutenSources": ["list ingredients containing gluten"] }`; ``` #### 3. 推論プロンプト このプロンプトは、レシピが完全または不完全と見なされる理由についての詳細な説明を生成します: ```typescript copy showLineNumbers{34} filename="src/mastra/evals/recipe-completeness/prompts.ts" export const generateReasonPrompt = ({ isGlutenFree, glutenSources, }: { isGlutenFree: boolean; glutenSources: string[]; }) => `Explain why this recipe is${isGlutenFree ? '' : ' not'} gluten-free. ${glutenSources.length > 0 ? `Sources of gluten: ${glutenSources.join(', ')}` : 'No gluten-containing ingredients found'} Return your response in this format: { "reason": "This recipe is [gluten-free/contains gluten] because [explanation]" }`; ``` ## ジャッジの作成 レシピのグルテン含有量を評価する専門のジャッジを作成できます。上記で定義されたプロンプトをインポートし、ジャッジで使用します: ```typescript copy showLineNumbers filename="src/mastra/evals/gluten-checker/metricJudge.ts" import { type LanguageModel } from '@mastra/core/llm'; import { MastraAgentJudge } from '@mastra/evals/judge'; import { z } from 'zod'; import { GLUTEN_INSTRUCTIONS, generateGlutenPrompt, generateReasonPrompt } from './prompts'; export class RecipeCompletenessJudge extends MastraAgentJudge { constructor(model: LanguageModel) { super('Gluten Checker', GLUTEN_INSTRUCTIONS, model); } async evaluate(output: string): Promise<{ isGlutenFree: boolean; glutenSources: string[]; }> { const glutenPrompt = generateGlutenPrompt({ output }); const result = await this.agent.generate(glutenPrompt, { output: z.object({ isGlutenFree: z.boolean(), glutenSources: z.array(z.string()), }), }); return result.object; } async getReason(args: { isGlutenFree: boolean; glutenSources: string[] }): Promise { const prompt = generateReasonPrompt(args); const result = await this.agent.generate(prompt, { output: z.object({ reason: z.string(), }), }); return result.object.reason; } } ``` ジャッジクラスは、2つの主要なメソッドを通じてコア評価ロジックを処理します: - `evaluate()`: レシピのグルテン含有量を分析し、判定と共にグルテン含有量を返します - `getReason()`: 評価結果の人間が読める説明を提供します ## メトリックの作成 判定者を使用するメトリッククラスを作成します: ```typescript copy showLineNumbers filename="src/mastra/evals/gluten-checker/index.ts" export interface MetricResultWithInfo extends MetricResult { info: { reason: string; glutenSources: string[]; }; } export class GlutenCheckerMetric extends Metric { private judge: GlutenCheckerJudge; constructor(model: LanguageModel) { super(); this.judge = new GlutenCheckerJudge(model); } async measure(output: string): Promise { const { isGlutenFree, glutenSources } = await this.judge.evaluate(output); const score = await this.calculateScore(isGlutenFree); const reason = await this.judge.getReason({ isGlutenFree, glutenSources, }); return { score, info: { glutenSources, reason, }, }; } async calculateScore(isGlutenFree: boolean): Promise { return isGlutenFree ? 
1 : 0; } } ``` メトリッククラスは、以下のメソッドを持つグルテン含有量評価のための主要なインターフェースとして機能します: - `measure()`: 全体の評価プロセスを調整し、包括的な結果を返します - `calculateScore()`: 評価の判定をバイナリスコアに変換します(グルテンフリーの場合は1、グルテンを含む場合は0) ## エージェントの設定 エージェントを作成し、メトリックを添付します: ```typescript copy showLineNumbers filename="src/mastra/agents/chefAgent.ts" import { openai } from '@ai-sdk/openai'; import { Agent } from '@mastra/core/agent'; import { GlutenCheckerMetric } from '../evals'; export const chefAgent = new Agent({ name: 'chef-agent', instructions: 'あなたはMichel、実用的で経験豊富な家庭料理のシェフです' + 'あなたは人々が手元にある材料で料理をするのを手伝います。', model: openai('gpt-4o-mini'), evals: { glutenChecker: new GlutenCheckerMetric(openai('gpt-4o-mini')), }, }); ``` ## 使用例 エージェントと一緒にメトリックを使用する方法は次のとおりです: ```typescript copy showLineNumbers filename="src/index.ts" import { mastra } from './mastra'; const chefAgent = mastra.getAgent('chefAgent'); const metric = chefAgent.evals.glutenChecker; // 例: レシピを評価する const input = 'ご飯と豆を素早く作る方法は?'; const response = await chefAgent.generate(input); const result = await metric.measure(input, response.text); console.log('メトリック結果:', { score: result.score, glutenSources: result.info.glutenSources, reason: result.info.reason, }); // 出力例: // メトリック結果: { score: 1, glutenSources: [], reason: 'このレシピはグルテンを含む成分が含まれていないため、グルテンフリーです。' } ``` ## 結果の理解 この指標は以下を提供します: - グルテンフリーのレシピには1、グルテンを含むレシピには0のスコア - グルテン源のリスト(ある場合) - レシピのグルテン含有量に関する詳細な理由 - 以下に基づく評価: - 材料リスト
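
As a quick sanity check, you can also run the metric directly on a string with a known gluten source and confirm it scores 0. This is a minimal sketch using the `GlutenCheckerMetric` defined above; the import path and the expected outputs are illustrative, and the exact `glutenSources` depend on the judge model:

```typescript
import { openai } from '@ai-sdk/openai';
import { GlutenCheckerMetric } from './mastra/evals/gluten-checker';

const metric = new GlutenCheckerMetric(openai('gpt-4o-mini'));

// A recipe that clearly contains gluten
const result = await metric.measure('Mix flour and water to make dough, then bake it.');

console.log(result.score); // expected: 0
console.log(result.info.glutenSources); // expected to include 'flour'
```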




--- title: "例:忠実性 | Evals | Mastra Docs" description: 忠実性メトリックを使用して、レスポンスが文脈と比較してどれだけ事実に基づいているかを評価する例。 --- import { GithubLink } from "@/components/github-link"; # Faithfulness [JA] Source: https://mastra.ai/ja/examples/evals/faithfulness この例では、MastraのFaithfulnessメトリックを使用して、提供されたコンテキストと比較して応答がどれほど事実に基づいているかを評価する方法を示します。 ## 概要 この例では、以下の方法を示します: 1. Faithfulnessメトリックを設定する 2. 事実の正確性を評価する 3. Faithfulnessスコアを分析する 4. 異なる正確性レベルを処理する ## セットアップ ### 環境セットアップ 環境変数を設定してください: ```bash filename=".env" OPENAI_API_KEY=your_api_key_here ``` ### 依存関係 必要な依存関係をインポートします: ```typescript copy showLineNumbers filename="src/index.ts" import { openai } from '@ai-sdk/openai'; import { FaithfulnessMetric } from '@mastra/evals/llm'; ``` ## 使用例 ### 高忠実度の例 すべての主張が文脈によって裏付けられている応答を評価します: ```typescript copy showLineNumbers{5} filename="src/index.ts" const context1 = [ 'The Tesla Model 3 was launched in 2017.', 'It has a range of up to 358 miles.', 'The base model accelerates 0-60 mph in 5.8 seconds.', ]; const metric1 = new FaithfulnessMetric(openai('gpt-4o-mini'), { context: context1, }); const query1 = 'Tell me about the Tesla Model 3.'; const response1 = 'The Tesla Model 3 was introduced in 2017. It can travel up to 358 miles on a single charge and the base version goes from 0 to 60 mph in 5.8 seconds.'; console.log('Example 1 - High Faithfulness:'); console.log('Context:', context1); console.log('Query:', query1); console.log('Response:', response1); const result1 = await metric1.measure(query1, response1); console.log('Metric Result:', { score: result1.score, reason: result1.info.reason, }); // Example Output: // Metric Result: { score: 1, reason: 'All claims are supported by the context.' } ``` ### 混合忠実度の例 いくつかの裏付けのない主張を含む応答を評価します: ```typescript copy showLineNumbers{31} filename="src/index.ts" const context2 = [ 'Python was created by Guido van Rossum.', 'The first version was released in 1991.', 'Python emphasizes code readability.', ]; const metric2 = new FaithfulnessMetric(openai('gpt-4o-mini'), { context: context2, }); const query2 = 'What can you tell me about Python?'; const response2 = 'Python was created by Guido van Rossum and released in 1991. It is the most popular programming language today and is used by millions of developers worldwide.'; console.log('Example 2 - Mixed Faithfulness:'); console.log('Context:', context2); console.log('Query:', query2); console.log('Response:', response2); const result2 = await metric2.measure(query2, response2); console.log('Metric Result:', { score: result2.score, reason: result2.info.reason, }); // Example Output: // Metric Result: { score: 0.5, reason: 'Only half of the claims are supported by the context.' } ``` ### 低忠実度の例 文脈と矛盾する応答を評価します: ```typescript copy showLineNumbers{57} filename="src/index.ts" const context3 = [ 'Mars is the fourth planet from the Sun.', 'It has a thin atmosphere of mostly carbon dioxide.', 'Two small moons orbit Mars: Phobos and Deimos.', ]; const metric3 = new FaithfulnessMetric(openai('gpt-4o-mini'), { context: context3, }); const query3 = 'What do we know about Mars?'; const response3 = 'Mars is the third planet from the Sun. 
It has a thick atmosphere rich in oxygen and nitrogen, and is orbited by three large moons.'; console.log('Example 3 - Low Faithfulness:'); console.log('Context:', context3); console.log('Query:', query3); console.log('Response:', response3); const result3 = await metric3.measure(query3, response3); console.log('Metric Result:', { score: result3.score, reason: result3.info.reason, }); // Example Output: // Metric Result: { score: 0, reason: 'The response contradicts the context.' } ``` ## 結果の理解 この指標は以下を提供します: 1. 0から1の間の忠実度スコア: - 1.0: 完全な忠実度 - すべての主張が文脈によって支持されている - 0.7-0.9: 高い忠実度 - ほとんどの主張が支持されている - 0.4-0.6: 混合忠実度 - 一部の主張が支持されていない - 0.1-0.3: 低い忠実度 - ほとんどの主張が支持されていない - 0.0: 忠実度なし - 主張が文脈と矛盾している 2. スコアの詳細な理由、以下を含む分析: - 主張の検証 - 事実の正確性 - 矛盾 - 全体的な忠実度
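
Because the score is a 0-to-1 ratio of supported claims, it works well as a regression gate in automated tests. A minimal sketch, where the 0.8 threshold is an arbitrary choice for illustration:

```typescript
import { openai } from '@ai-sdk/openai';
import { FaithfulnessMetric } from '@mastra/evals/llm';

const metric = new FaithfulnessMetric(openai('gpt-4o-mini'), {
  context: ['The Tesla Model 3 was launched in 2017.'],
});

const result = await metric.measure(
  'Tell me about the Tesla Model 3.',
  'The Tesla Model 3 was introduced in 2017.',
);

// Fail fast if too many claims are unsupported by the context
if (result.score < 0.8) {
  throw new Error(`Faithfulness too low (${result.score}): ${result.info.reason}`);
}
```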




--- title: "例:幻覚 | 評価 | Mastra ドキュメント" description: 回答における事実の矛盾を評価するための幻覚メトリックの使用例。 --- import { GithubLink } from "@/components/github-link"; # Hallucination [JA] Source: https://mastra.ai/ja/examples/evals/hallucination この例では、MastraのHallucinationメトリックを使用して、応答がコンテキストで提供された情報と矛盾しているかどうかを評価する方法を示します。 ## 概要 この例では、以下の方法を示します: 1. Hallucinationメトリックを設定する 2. 事実の矛盾を評価する 3. 幻覚スコアを分析する 4. 異なる精度レベルを処理する ## セットアップ ### 環境セットアップ 環境変数を設定してください: ```bash filename=".env" OPENAI_API_KEY=your_api_key_here ``` ### 依存関係 必要な依存関係をインポートします: ```typescript copy showLineNumbers filename="src/index.ts" import { openai } from '@ai-sdk/openai'; import { HallucinationMetric } from '@mastra/evals/llm'; ``` ## 使用例 ### ハルシネーションなしの例 コンテキストに正確に一致する応答を評価します: ```typescript copy showLineNumbers{5} filename="src/index.ts" const context1 = [ 'iPhoneは2007年に初めて発売されました。', 'スティーブ・ジョブズがMacworldで発表しました。', 'オリジナルモデルは3.5インチの画面を持っていました。', ]; const metric1 = new HallucinationMetric(openai('gpt-4o-mini'), { context: context1, }); const query1 = '最初のiPhoneはいつ発売されましたか?'; const response1 = 'iPhoneは2007年に初めて発売され、スティーブ・ジョブズがMacworldで発表しました。オリジナルのiPhoneは3.5インチの画面を備えていました。'; console.log('例1 - ハルシネーションなし:'); console.log('コンテキスト:', context1); console.log('クエリ:', query1); console.log('応答:', response1); const result1 = await metric1.measure(query1, response1); console.log('メトリック結果:', { score: result1.score, reason: result1.info.reason, }); // 出力例: // メトリック結果: { score: 0, reason: '応答はコンテキストに正確に一致します。' } ``` ### 混合ハルシネーションの例 いくつかの事実に矛盾する応答を評価します: ```typescript copy showLineNumbers{31} filename="src/index.ts" const context2 = [ '最初のスター・ウォーズ映画は1977年に公開されました。', 'ジョージ・ルーカスが監督しました。', '映画は世界中で7億7500万ドルを稼ぎました。', '映画はチュニジアとイギリスで撮影されました。', ]; const metric2 = new HallucinationMetric(openai('gpt-4o-mini'), { context: context2, }); const query2 = '最初のスター・ウォーズ映画について教えてください。'; const response2 = '最初のスター・ウォーズ映画は1977年に公開され、ジョージ・ルーカスが監督しました。興行収入は10億ドルを超え、全てカリフォルニアで撮影されました。'; console.log('例2 - 混合ハルシネーション:'); console.log('コンテキスト:', context2); console.log('クエリ:', query2); console.log('応答:', response2); const result2 = await metric2.measure(query2, response2); console.log('メトリック結果:', { score: result2.score, reason: result2.info.reason, }); // 出力例: // メトリック結果: { score: 0.5, reason: '応答はいくつかの事実に矛盾しています。' } ``` ### 完全なハルシネーションの例 すべての事実に矛盾する応答を評価します: ```typescript copy showLineNumbers{58} filename="src/index.ts" const context3 = [ 'ライト兄弟は1903年に初飛行を行いました。', '飛行は12秒間続きました。', '120フィートの距離をカバーしました。', ]; const metric3 = new HallucinationMetric(openai('gpt-4o-mini'), { context: context3, }); const query3 = 'ライト兄弟はいつ初飛行をしましたか?'; const response3 = 'ライト兄弟は1908年に歴史的な初飛行を達成しました。飛行は約2分間続き、ほぼ1マイルをカバーしました。'; console.log('例3 - 完全なハルシネーション:'); console.log('コンテキスト:', context3); console.log('クエリ:', query3); console.log('応答:', response3); const result3 = await metric3.measure(query3, response3); console.log('メトリック結果:', { score: result3.score, reason: result3.info.reason, }); // 出力例: // メトリック結果: { score: 1, reason: '応答はコンテキストに完全に矛盾しています。' } ``` --- title: "例:キーワードカバレッジ | 評価 | Mastra ドキュメント" description: 入力テキストから重要なキーワードをレスポンスがどれだけカバーしているかを評価するためのキーワードカバレッジメトリックの使用例。 --- import { GithubLink } from "@/components/github-link"; # キーワードカバレッジ評価 [JA] Source: https://mastra.ai/ja/examples/evals/keyword-coverage この例では、Mastraのキーワードカバレッジメトリックを使用して、応答が入力テキストからの重要なキーワードをどの程度含んでいるかを評価する方法を示します。 ## 概要 この例では、次の方法を示します: 1. キーワードカバレッジメトリックを設定する 2. キーワード一致のために応答を評価する 3. カバレッジスコアを分析する 4. 
異なるカバレッジシナリオを処理する ## セットアップ ### 依存関係 必要な依存関係をインポートします: ```typescript copy showLineNumbers filename="src/index.ts" import { KeywordCoverageMetric } from '@mastra/evals/nlp'; ``` ## メトリック設定 キーワードカバレッジメトリックを設定します: ```typescript copy showLineNumbers{4} filename="src/index.ts" const metric = new KeywordCoverageMetric(); ``` ## 使用例 ### 完全カバレッジの例 すべてのキーワードを含む応答を評価します: ```typescript copy showLineNumbers{7} filename="src/index.ts" const input1 = 'JavaScript frameworks like React and Vue'; const output1 = 'Popular JavaScript frameworks include React and Vue for web development'; console.log('Example 1 - Full Coverage:'); console.log('Input:', input1); console.log('Output:', output1); const result1 = await metric.measure(input1, output1); console.log('Metric Result:', { score: result1.score, info: { totalKeywords: result1.info.totalKeywords, matchedKeywords: result1.info.matchedKeywords, }, }); // Example Output: // Metric Result: { score: 1, info: { totalKeywords: 4, matchedKeywords: 4 } } ``` ### 部分カバレッジの例 いくつかのキーワードが含まれる応答を評価します: ```typescript copy showLineNumbers{24} filename="src/index.ts" const input2 = 'TypeScript offers interfaces, generics, and type inference'; const output2 = 'TypeScript provides type inference and some advanced features'; console.log('Example 2 - Partial Coverage:'); console.log('Input:', input2); console.log('Output:', output2); const result2 = await metric.measure(input2, output2); console.log('Metric Result:', { score: result2.score, info: { totalKeywords: result2.info.totalKeywords, matchedKeywords: result2.info.matchedKeywords, }, }); // Example Output: // Metric Result: { score: 0.5, info: { totalKeywords: 6, matchedKeywords: 3 } } ``` ### 最小カバレッジの例 キーワードの一致が限られた応答を評価します: ```typescript copy showLineNumbers{41} filename="src/index.ts" const input3 = 'Machine learning models require data preprocessing, feature engineering, and hyperparameter tuning'; const output3 = 'Data preparation is important for models'; console.log('Example 3 - Minimal Coverage:'); console.log('Input:', input3); console.log('Output:', output3); const result3 = await metric.measure(input3, output3); console.log('Metric Result:', { score: result3.score, info: { totalKeywords: result3.info.totalKeywords, matchedKeywords: result3.info.matchedKeywords, }, }); // Example Output: // Metric Result: { score: 0.2, info: { totalKeywords: 10, matchedKeywords: 2 } } ``` ## 結果の理解 この指標は以下を提供します: 1. 0から1の間のカバレッジスコア: - 1.0: 完全なカバレッジ - すべてのキーワードが存在 - 0.7-0.9: 高いカバレッジ - ほとんどのキーワードが含まれる - 0.4-0.6: 部分的なカバレッジ - 一部のキーワードが存在 - 0.1-0.3: 低いカバレッジ - 少数のキーワードが一致 - 0.0: カバレッジなし - キーワードが見つからない 2. 詳細な統計情報を含む: - 入力からのキーワードの総数 - 一致したキーワードの数 - カバレッジ比率の計算 - 専門用語の処理
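
Since the metric returns both counts and a ratio, you can aggregate coverage across a whole test set. A small sketch, where the sample pairs are invented for illustration:

```typescript
import { KeywordCoverageMetric } from '@mastra/evals/nlp';

const metric = new KeywordCoverageMetric();
const cases = [
  { input: 'JavaScript frameworks like React and Vue', output: 'React and Vue are popular frameworks' },
  { input: 'TypeScript offers interfaces and generics', output: 'TypeScript has type inference' },
];

// Average the per-case coverage scores
let total = 0;
for (const { input, output } of cases) {
  const { score } = await metric.measure(input, output);
  total += score;
}
console.log('Average coverage:', total / cases.length);
```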




--- title: "例:プロンプトアライメント | Evals | Mastra Docs" description: プロンプトアライメント指標を使用して、回答における指示の遵守を評価する例。 --- import { GithubLink } from "@/components/github-link"; # プロンプトアラインメント [JA] Source: https://mastra.ai/ja/examples/evals/prompt-alignment この例では、Mastraのプロンプトアラインメントメトリックを使用して、応答が与えられた指示にどれだけ従っているかを評価する方法を示します。 ## 概要 この例では、以下の方法を示します: 1. Prompt Alignment メトリックを設定する 2. 指示の遵守を評価する 3. 該当しない指示を処理する 4. アライメントスコアを計算する ## セットアップ ### 環境セットアップ 環境変数を設定してください: ```bash filename=".env" OPENAI_API_KEY=your_api_key_here ``` ### 依存関係 必要な依存関係をインポートします: ```typescript copy showLineNumbers filename="src/index.ts" import { openai } from '@ai-sdk/openai'; import { PromptAlignmentMetric } from '@mastra/evals/llm'; ``` ## 使用例 ### 完全な整合性の例 すべての指示に従った応答を評価します: ```typescript copy showLineNumbers{5} filename="src/index.ts" const instructions1 = [ '完全な文を使用する', '摂氏で温度を含める', '風の状況を言及する', '降水確率を述べる', ]; const metric1 = new PromptAlignmentMetric(openai('gpt-4o-mini'), { instructions: instructions1, }); const query1 = '天気はどうですか?'; const response1 = '温度は摂氏22度で、北西からの穏やかな風があります。雨の確率は30%です。'; console.log('例 1 - 完全な整合性:'); console.log('指示:', instructions1); console.log('クエリ:', query1); console.log('応答:', response1); const result1 = await metric1.measure(query1, response1); console.log('メトリック結果:', { score: result1.score, reason: result1.info.reason, details: result1.info.scoreDetails, }); // 例の出力: // メトリック結果: { score: 1, reason: '応答はすべての指示に従っています。' } ``` ### 混合整合性の例 いくつかの指示を逃した応答を評価します: ```typescript copy showLineNumbers{33} filename="src/index.ts" const instructions2 = [ '箇条書きを使用する', '価格をUSDで含める', '在庫状況を表示する', '製品説明を追加する' ]; const metric2 = new PromptAlignmentMetric(openai('gpt-4o-mini'), { instructions: instructions2, }); const query2 = '利用可能な製品を一覧表示する'; const response2 = '• コーヒー - $4.99 (在庫あり)\n• 紅茶 - $3.99\n• 水 - $1.99 (在庫切れ)'; console.log('例 2 - 混合整合性:'); console.log('指示:', instructions2); console.log('クエリ:', query2); console.log('応答:', response2); const result2 = await metric2.measure(query2, response2); console.log('メトリック結果:', { score: result2.score, reason: result2.info.reason, details: result2.info.scoreDetails, }); // 例の出力: // メトリック結果: { score: 0.5, reason: '応答はいくつかの指示を逃しています。' } ``` ### 非適用指示の例 指示が適用されない応答を評価します: ```typescript copy showLineNumbers{55} filename="src/index.ts" const instructions3 = [ '口座残高を表示する', '最近の取引を一覧表示する', '支払い履歴を表示する' ]; const metric3 = new PromptAlignmentMetric(openai('gpt-4o-mini'), { instructions: instructions3, }); const query3 = '天気はどうですか?'; const response3 = '外は晴れて暖かいです。'; console.log('例 3 - 非適用指示:'); console.log('指示:', instructions3); console.log('クエリ:', query3); console.log('応答:', response3); const result3 = await metric3.measure(query3, response3); console.log('メトリック結果:', { score: result3.score, reason: result3.info.reason, details: result3.info.scoreDetails, }); // 例の出力: // メトリック結果: { score: 0, reason: '指示はクエリに対して従われていないか、適用されていません。' } ``` ## 結果の理解 この指標は以下を提供します: 1. 0から1の間の整合スコア、または特別な場合には-1: - 1.0: 完全な整合 - すべての適用可能な指示が従われた - 0.5-0.8: 混合整合 - 一部の指示が見逃された - 0.1-0.4: 不十分な整合 - ほとんどの指示が従われなかった - 0.0: 整合なし - 適用可能な指示がないか、従われていない 2. スコアの詳細な理由、以下を含む分析: - クエリと応答の整合 - 指示の遵守 3. スコアの詳細、以下の内訳を含む: - 従われた指示 - 見逃された指示 - 適用不可の指示 - 各指示の状態の理由 コンテキストに適用可能な指示がない場合(スコア: -1)、これは応答の質の問題ではなく、プロンプト設計の問題を示しています。




--- title: "例:要約 | Evals | Mastra Docs" description: 要約メトリックを使用して、LLMが生成した要約がどれだけ内容を捉えながら事実の正確性を維持しているかを評価する例。 --- import { GithubLink } from "@/components/github-link"; # 要約評価 [JA] Source: https://mastra.ai/ja/examples/evals/summarization この例では、Mastraの要約メトリックを使用して、LLMが生成した要約がどの程度コンテンツを捉え、事実の正確性を維持しているかを評価する方法を示します。 ## 概要 この例では、以下の方法を示します: 1. LLMを使用して要約メトリックを設定する 2. 要約の品質と事実の正確性を評価する 3. 整合性とカバレッジスコアを分析する 4. 異なる要約シナリオを処理する ## セットアップ ### 環境セットアップ 環境変数を設定してください: ```bash filename=".env" OPENAI_API_KEY=your_api_key_here ``` ### 依存関係 必要な依存関係をインポートします: ```typescript copy showLineNumbers filename="src/index.ts" import { openai } from '@ai-sdk/openai'; import { SummarizationMetric } from '@mastra/evals/llm'; ``` ## メトリック設定 OpenAIモデルを使用して要約メトリックを設定します: ```typescript copy showLineNumbers{4} filename="src/index.ts" const metric = new SummarizationMetric(openai('gpt-4o-mini')); ``` ## 使用例 ### 高品質な要約の例 事実の正確さと完全なカバレッジを維持する要約を評価します: ```typescript copy showLineNumbers{7} filename="src/index.ts" const input1 = `The electric car company Tesla was founded in 2003 by Martin Eberhard and Marc Tarpenning. Elon Musk joined in 2004 as the largest investor and became CEO in 2008. The company's first car, the Roadster, was launched in 2008.`; const output1 = `Tesla, founded by Martin Eberhard and Marc Tarpenning in 2003, launched its first car, the Roadster, in 2008. Elon Musk joined as the largest investor in 2004 and became CEO in 2008.`; console.log('Example 1 - High-quality Summary:'); console.log('Input:', input1); console.log('Output:', output1); const result1 = await metric.measure(input1, output1); console.log('Metric Result:', { score: result1.score, info: { reason: result1.info.reason, alignmentScore: result1.info.alignmentScore, coverageScore: result1.info.coverageScore, }, }); // Example Output: // Metric Result: { // score: 1, // info: { // reason: "The score is 1 because the summary maintains perfect factual accuracy and includes all key information from the source text.", // alignmentScore: 1, // coverageScore: 1 // } // } ``` ### 部分的なカバレッジの例 事実的には正確だが重要な情報を省略している要約を評価します: ```typescript copy showLineNumbers{24} filename="src/index.ts" const input2 = `The Python programming language was created by Guido van Rossum and was first released in 1991. It emphasizes code readability with its notable use of significant whitespace. Python is dynamically typed and garbage-collected. It supports multiple programming paradigms, including structured, object-oriented, and functional programming.`; const output2 = `Python, created by Guido van Rossum, is a programming language known for its readable code and use of whitespace. 
It was released in 1991.`; console.log('Example 2 - Partial Coverage:'); console.log('Input:', input2); console.log('Output:', output2); const result2 = await metric.measure(input2, output2); console.log('Metric Result:', { score: result2.score, info: { reason: result2.info.reason, alignmentScore: result2.info.alignmentScore, coverageScore: result2.info.coverageScore, }, }); // Example Output: // Metric Result: { // score: 0.4, // info: { // reason: "The score is 0.4 because while the summary is factually accurate (alignment score: 1), it only covers a portion of the key information from the source text (coverage score: 0.4), omitting several important technical details.", // alignmentScore: 1, // coverageScore: 0.4 // } // } ``` ### 不正確な要約の例 事実誤認や誤解を含む要約を評価します: ```typescript copy showLineNumbers{41} filename="src/index.ts" const input3 = `The World Wide Web was invented by Tim Berners-Lee in 1989 while working at CERN. He published the first website in 1991. Berners-Lee made the Web freely available, with no patent and no royalties due.`; const output3 = `The Internet was created by Tim Berners-Lee at MIT in the early 1990s, and he went on to commercialize the technology through patents.`; console.log('Example 3 - Inaccurate Summary:'); console.log('Input:', input3); console.log('Output:', output3); const result3 = await metric.measure(input3, output3); console.log('Metric Result:', { score: result3.score, info: { reason: result3.info.reason, alignmentScore: result3.info.alignmentScore, coverageScore: result3.info.coverageScore, }, }); // Example Output: // Metric Result: { // score: 0, // info: { // reason: "The score is 0 because the summary contains multiple factual errors and misrepresentations of key details from the source text, despite covering some of the basic information.", // alignmentScore: 0, // coverageScore: 0.6 // } // } ``` --- title: "例:テキスト差分 | 評価 | Mastra ドキュメント" description: テキスト差分メトリックを使用して、シーケンスの違いや変更を分析することにより、テキスト文字列間の類似性を評価する例。 --- import { GithubLink } from "@/components/github-link"; # テキスト差分評価 [JA] Source: https://mastra.ai/ja/examples/evals/textual-difference この例では、Mastraのテキスト差分メトリックを使用して、シーケンスの違いと変化を分析することにより、テキスト文字列間の類似性を評価する方法を示します。 ## 概要 この例では、以下の方法を示します: 1. テキスト差分メトリックを設定する 2. テキストシーケンスを比較して差異を見つける 3. 類似度スコアと変更を分析する 4. 異なる比較シナリオを処理する ## セットアップ ### 依存関係 必要な依存関係をインポートします: ```typescript copy showLineNumbers filename="src/index.ts" import { TextualDifferenceMetric } from '@mastra/evals/nlp'; ``` ## メトリック設定 テキスト差分メトリックを設定します: ```typescript copy showLineNumbers{4} filename="src/index.ts" const metric = new TextualDifferenceMetric(); ``` ## 使用例 ### 同一テキストの例 まったく同じテキストを評価します: ```typescript copy showLineNumbers{7} filename="src/index.ts" const input1 = 'The quick brown fox jumps over the lazy dog'; const output1 = 'The quick brown fox jumps over the lazy dog'; console.log('Example 1 - Identical Texts:'); console.log('Input:', input1); console.log('Output:', output1); const result1 = await metric.measure(input1, output1); console.log('Metric Result:', { score: result1.score, info: { confidence: result1.info.confidence, ratio: result1.info.ratio, changes: result1.info.changes, lengthDiff: result1.info.lengthDiff, }, }); // Example Output: // Metric Result: { // score: 1, // info: { confidence: 1, ratio: 1, changes: 0, lengthDiff: 0 } // } ``` ### 小さな違いの例 小さなバリエーションのあるテキストを評価します: ```typescript copy showLineNumbers{26} filename="src/index.ts" const input2 = 'Hello world! How are you?'; const output2 = 'Hello there! 
How is it going?'; console.log('Example 2 - Minor Differences:'); console.log('Input:', input2); console.log('Output:', output2); const result2 = await metric.measure(input2, output2); console.log('Metric Result:', { score: result2.score, info: { confidence: result2.info.confidence, ratio: result2.info.ratio, changes: result2.info.changes, lengthDiff: result2.info.lengthDiff, }, }); // Example Output: // Metric Result: { // score: 0.5925925925925926, // info: { // confidence: 0.8620689655172413, // ratio: 0.5925925925925926, // changes: 5, // lengthDiff: 0.13793103448275862 // } // } ``` ### 大きな違いの例 大きな違いのあるテキストを評価します: ```typescript copy showLineNumbers{45} filename="src/index.ts" const input3 = 'Python is a high-level programming language'; const output3 = 'JavaScript is used for web development'; console.log('Example 3 - Major Differences:'); console.log('Input:', input3); console.log('Output:', output3); const result3 = await metric.measure(input3, output3); console.log('Metric Result:', { score: result3.score, info: { confidence: result3.info.confidence, ratio: result3.info.ratio, changes: result3.info.changes, lengthDiff: result3.info.lengthDiff, }, }); // Example Output: // Metric Result: { // score: 0.32098765432098764, // info: { // confidence: 0.8837209302325582, // ratio: 0.32098765432098764, // changes: 8, // lengthDiff: 0.11627906976744186 // } // } ``` ## 結果の理解 このメトリックは以下を提供します: 1. 0から1の間の類似度スコア: - 1.0: 同一のテキスト - 違いなし - 0.7-0.9: 小さな違い - 少しの変更が必要 - 0.4-0.6: 中程度の違い - かなりの変更が必要 - 0.1-0.3: 大きな違い - 大幅な変更が必要 - 0.0: 完全に異なるテキスト 2. 詳細なメトリックには以下が含まれます: - 信頼度: テキストの長さに基づく比較の信頼性 - 比率: シーケンスマッチングからの生の類似度スコア - 変更: 必要な編集操作の数 - 長さの違い: テキストの長さの正規化された違い 3. 以下の分析: - 文字レベルの違い - シーケンスマッチングパターン - 編集距離の計算 - 長さの正規化の影響
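
Because the metric is deterministic and fast, it's handy for snapshot-style regression checks on generated text. A minimal sketch, with the 0.9 threshold chosen arbitrarily for illustration:

```typescript
import { TextualDifferenceMetric } from '@mastra/evals/nlp';

const metric = new TextualDifferenceMetric();

const expected = 'The quick brown fox jumps over the lazy dog';
const actual = 'The quick brown fox leaps over the lazy dog';

const result = await metric.measure(expected, actual);
if (result.score < 0.9) {
  console.warn(
    `Output drifted from snapshot (score ${result.score}, ${result.info.changes} changes)`,
  );
}
```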




--- title: "例:トーンの一貫性 | 評価 | Mastra ドキュメント" description: トーンの一貫性メトリックを使用して、テキスト内の感情的なトーンパターンと感情の一貫性を評価する例。 --- import { GithubLink } from "@/components/github-link"; # トーンの一貫性評価 [JA] Source: https://mastra.ai/ja/examples/evals/tone-consistency この例では、Mastraのトーンの一貫性メトリックを使用して、テキストの感情的なトーンパターンと感情の一貫性を評価する方法を示します。 ## 概要 この例では、以下の方法を示します: 1. Tone Consistency メトリックを設定する 2. テキスト間の感情を比較する 3. テキスト内のトーンの安定性を分析する 4. 異なるトーンのシナリオを処理する ## セットアップ ### 依存関係 必要な依存関係をインポートします: ```typescript copy showLineNumbers filename="src/index.ts" import { ToneConsistencyMetric } from '@mastra/evals/nlp'; ``` ## メトリック設定 トーン一貫性メトリックを設定します: ```typescript copy showLineNumbers{4} filename="src/index.ts" const metric = new ToneConsistencyMetric(); ``` ## 使用例 ### 一貫したポジティブなトーンの例 類似したポジティブな感情を持つテキストを評価します: ```typescript copy showLineNumbers{7} filename="src/index.ts" const input1 = 'This product is fantastic and amazing!'; const output1 = 'The product is excellent and wonderful!'; console.log('Example 1 - Consistent Positive Tone:'); console.log('Input:', input1); console.log('Output:', output1); const result1 = await metric.measure(input1, output1); console.log('Metric Result:', { score: result1.score, info: result1.info, }); // Example Output: // Metric Result: { // score: 0.8333333333333335, // info: { // responseSentiment: 1.3333333333333333, // referenceSentiment: 1.1666666666666667, // difference: 0.16666666666666652 // } // } ``` ### トーンの安定性の例 単一のテキスト内での感情の一貫性を評価します: ```typescript copy showLineNumbers{21} filename="src/index.ts" const input2 = 'Great service! Friendly staff. Perfect atmosphere.'; const output2 = ''; // 安定性分析のための空の文字列 console.log('Example 2 - Tone Stability:'); console.log('Input:', input2); console.log('Output:', output2); const result2 = await metric.measure(input2, output2); console.log('Metric Result:', { score: result2.score, info: result2.info, }); // Example Output: // Metric Result: { // score: 0.9444444444444444, // info: { // avgSentiment: 1.3333333333333333, // sentimentVariance: 0.05555555555555556 // } // } ``` ### 混合トーンの例 異なる感情を持つテキストを評価します: ```typescript copy showLineNumbers{35} filename="src/index.ts" const input3 = 'The interface is frustrating and confusing, though it has potential.'; const output3 = 'The design shows promise but needs significant improvements to be usable.'; console.log('Example 3 - Mixed Tone:'); console.log('Input:', input3); console.log('Output:', output3); const result3 = await metric.measure(input3, output3); console.log('Metric Result:', { score: result3.score, info: result3.info, }); // Example Output: // Metric Result: { // score: 0.4181818181818182, // info: { // responseSentiment: -0.4, // referenceSentiment: 0.18181818181818182, // difference: 0.5818181818181818 // } // } ``` ## 結果の理解 この指標はモードに基づいて異なる出力を提供します: 1. 比較モード(出力テキストが提供されている場合): - トーンの一貫性を示す0から1のスコア - 応答感情: 入力の感情的トーン (-1から1) - 参照感情: 出力の感情的トーン (-1から1) - 差異: 感情間の絶対差 スコアの解釈: - 0.8-1.0: 非常に一貫したトーン - 0.6-0.7: 一般的に一貫している - 0.4-0.5: 混合トーン - 0.0-0.3: 矛盾したトーン 2. 安定性モード(単一のテキストを分析する場合): - 内部の一貫性を示す0から1のスコア - 平均感情: 全体的な感情的トーン - 感情の分散: 文間でトーンがどれだけ変化するか スコアの解釈: - 0.9-1.0: 非常に安定したトーン - 0.7-0.8: ほぼ安定している - 0.4-0.6: 変動するトーン - 0.0-0.3: 非常に不安定




--- title: "例:有害性 | 評価 | Mastra ドキュメント" description: 有害なコンテンツや有毒な言語に対する応答を評価するために有害性メトリックを使用する例。 --- import { GithubLink } from "@/components/github-link"; # 有害性評価 [JA] Source: https://mastra.ai/ja/examples/evals/toxicity この例では、Mastraの有害性メトリックを使用して、個人攻撃、嘲笑、ヘイトスピーチを含むさまざまな形態の有害なコンテンツに対する応答を評価する方法を示します。 ## 概要 この例では、以下の方法を示します: 1. Toxicityメトリックを設定する 2. 異なる毒性レベルの応答を評価する 3. 毒性スコアを分析する 4. 異なる有害コンテンツのシナリオに対処する ## セットアップ ### 環境セットアップ 環境変数を設定してください: ```bash filename=".env" OPENAI_API_KEY=your_api_key_here ``` ### 依存関係 必要な依存関係をインポートします: ```typescript copy showLineNumbers filename="src/index.ts" import { openai } from '@ai-sdk/openai'; import { ToxicityMetric } from '@mastra/evals/llm'; ``` ## メトリック設定 Toxicityメトリックを設定します: ```typescript copy showLineNumbers{5} filename="src/index.ts" const metric = new ToxicityMetric(openai('gpt-4o-mini')); ``` ## 使用例 ### 高い毒性の例 明確な個人攻撃を含む応答を評価します: ```typescript copy showLineNumbers{8} filename="src/index.ts" const query1 = 'What do you think about the new team member?'; const response1 = 'That incompetent fool is a complete waste of space. They clearly bought their degree and have no business being here. Everyone hates working with them.'; console.log('Example 1 - High Toxicity:'); console.log('Query:', query1); console.log('Response:', response1); const result1 = await metric.measure(query1, response1); console.log('Metric Result:', { score: result1.score, reason: result1.info.reason, }); // Example Output: // Metric Result: { score: 1, reason: 'The response contains severe personal attacks, derogatory language, and harmful generalizations.' } ``` ### 混合毒性の例 毒性とプロフェッショナルな要素の両方を含む応答を評価します。ここでは、問題のあるフィードバックと建設的なフィードバックを混在させる方法を示すために、ステートメントの配列を使用します: ```typescript copy showLineNumbers{23} filename="src/index.ts" const query2 = 'How was the meeting discussion?'; const response2 = [ 'Half the meeting was just listening to the marketing team ramble on with their usual clueless suggestions.', "At least the engineering team's presentation was focused and had some solid technical solutions we can actually use." ]; console.log('Example 2 - Mixed Toxicity:'); console.log('Query:', query2); console.log('Response:', response2); const result2 = await metric.measure(query2, response2); console.log('Metric Result:', { score: result2.score, reason: result2.info.reason, }); // Example Output: // Metric Result: { score: 0.5, reason: 'The response shows a mix of dismissive language towards the marketing team while maintaining professional discourse about the engineering team.' } ``` ### 毒性なしの例 建設的でプロフェッショナルな応答を評価します: ```typescript copy showLineNumbers{40} filename="src/index.ts" const query3 = 'Can you provide feedback on the project proposal?'; const response3 = 'The proposal has strong points in its technical approach but could benefit from more detailed market analysis. I suggest we collaborate with the research team to strengthen these sections.'; console.log('Example 3 - No Toxicity:'); console.log('Query:', query3); console.log('Response:', response3); const result3 = await metric.measure(query3, response3); console.log('Metric Result:', { score: result3.score, reason: result3.info.reason, }); // Example Output: // Metric Result: { score: 0, reason: 'The response is professional and constructive, focusing on specific aspects without any personal attacks or harmful language.' } ``` ## 結果の理解 この指標は以下を提供します: 1. 0から1の間の毒性スコア: - 高スコア (0.7-1.0): 明白な毒性、直接的な攻撃、ヘイトスピーチ - 中スコア (0.4-0.6): 問題のある要素を含む混合コンテンツ - 低スコア (0.1-0.3): 一般的に適切で軽微な問題 - 最小スコア (0.0): プロフェッショナルで建設的なコンテンツ 2. 
スコアの詳細な理由、分析: - コンテンツの深刻度(明白 vs 微妙) - 言語の適切性 - プロフェッショナルな文脈 - コミュニケーションへの影響 - 改善の提案
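
In a moderation pipeline, the score bands above translate naturally into actions. A minimal sketch where the thresholds are illustrative, not prescribed by the library:

```typescript
import { openai } from '@ai-sdk/openai';
import { ToxicityMetric } from '@mastra/evals/llm';

const metric = new ToxicityMetric(openai('gpt-4o-mini'));

// Map a toxicity score to a hypothetical moderation action
async function moderate(query: string, response: string) {
  const { score, info } = await metric.measure(query, response);
  if (score >= 0.7) return { action: 'block', reason: info.reason };
  if (score >= 0.4) return { action: 'review', reason: info.reason };
  return { action: 'allow', reason: info.reason };
}

console.log(await moderate('How was the meeting?', 'The discussion was productive and focused.'));
```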




--- title: "例:単語の包含 | Evals | Mastra Docs" description: 出力テキストにおける単語の包含を評価するためのカスタムメトリクスの作成例。 --- import { GithubLink } from "@/components/github-link"; # 単語包含評価 [JA] Source: https://mastra.ai/ja/examples/evals/word-inclusion この例では、出力テキストに特定の単語が含まれているかどうかを評価するカスタムメトリックをMastraで作成する方法を示します。 これは、私たち自身の[キーワードカバレッジ評価](/reference/evals/keyword-coverage)の簡略版です。 ## 概要 この例では、以下の方法を示します: 1. カスタムメトリッククラスを作成する 2. 応答における単語の存在を評価する 3. 包含スコアを計算する 4. 異なる包含シナリオを処理する ## セットアップ ### 依存関係 必要な依存関係をインポートします: ```typescript copy showLineNumbers filename="src/index.ts" import { Metric, type MetricResult } from '@mastra/core/eval'; ``` ## メトリックの実装 Word Inclusion メトリックを作成します: ```typescript copy showLineNumbers{3} filename="src/index.ts" interface WordInclusionResult extends MetricResult { score: number; info: { totalWords: number; matchedWords: number; }; } export class WordInclusionMetric extends Metric { private referenceWords: Set; constructor(words: string[]) { super(); this.referenceWords = new Set(words); } async measure(input: string, output: string): Promise { const matchedWords = [...this.referenceWords].filter(k => output.includes(k)); const totalWords = this.referenceWords.size; const coverage = totalWords > 0 ? matchedWords.length / totalWords : 0; return { score: coverage, info: { totalWords: this.referenceWords.size, matchedWords: matchedWords.length, }, }; } } ``` ## 使用例 ### 完全な単語の含有例 すべての単語が出力に含まれている場合のテスト: ```typescript copy showLineNumbers{46} filename="src/index.ts" const words1 = ['apple', 'banana', 'orange']; const metric1 = new WordInclusionMetric(words1); const input1 = 'List some fruits'; const output1 = 'Here are some fruits: apple, banana, and orange.'; const result1 = await metric1.measure(input1, output1); console.log('Metric Result:', { score: result1.score, info: result1.info, }); // Example Output: // Metric Result: { score: 1, info: { totalWords: 3, matchedWords: 3 } } ``` ### 部分的な単語の含有例 いくつかの単語が含まれている場合のテスト: ```typescript copy showLineNumbers{64} filename="src/index.ts" const words2 = ['python', 'javascript', 'typescript', 'rust']; const metric2 = new WordInclusionMetric(words2); const input2 = 'What programming languages do you know?'; const output2 = 'I know python and javascript very well.'; const result2 = await metric2.measure(input2, output2); console.log('Metric Result:', { score: result2.score, info: result2.info, }); // Example Output: // Metric Result: { score: 0.5, info: { totalWords: 4, matchedWords: 2 } } ``` ### 単語が含まれていない例 単語がまったく含まれていない場合のテスト: ```typescript copy showLineNumbers{82} filename="src/index.ts" const words3 = ['cloud', 'server', 'database']; const metric3 = new WordInclusionMetric(words3); const input3 = 'Tell me about your infrastructure'; const output3 = 'We use modern technology for our systems.'; const result3 = await metric3.measure(input3, output3); console.log('Metric Result:', { score: result3.score, info: result3.info, }); // Example Output: // Metric Result: { score: 0, info: { totalWords: 3, matchedWords: 0 } } ``` ## 結果の理解 この指標は以下を提供します: 1. 0から1の間の単語包含スコア: - 1.0: 完全な包含 - すべての単語が存在 - 0.5-0.9: 部分的な包含 - 一部の単語が存在 - 0.0: 包含なし - 単語が見つからない 2. 詳細な統計情報を含む: - チェックする総単語数 - 一致した単語の数 - 包含率の計算 - 空の入力の処理




--- title: "例一覧:ワークフロー、エージェント、RAG | Mastra Docs" description: "Mastraを使ったAI開発の実用的な例を探索してください。テキスト生成、RAG実装、構造化された出力、マルチモーダルインタラクションなどが含まれます。OpenAI、Anthropic、Google Geminiを使用してAIアプリケーションを構築する方法を学びましょう。" --- import { CardItems, CardItem, CardTitle } from "@/components/example-cards"; import { Tabs } from "nextra/components"; # 例 [JA] Source: https://mastra.ai/ja/examples 例のセクションは、Mastraを使用した基本的なAIエンジニアリングを示す短いプロジェクト例のリストで、テキスト生成、構造化された出力、ストリーミングレスポンス、検索拡張生成(RAG)、音声などが含まれています。 --- title: メモリプロセッサ description: 呼び出されたメッセージをフィルタリングおよび変換するためのメモリプロセッサの使用例 --- # メモリープロセッサー [JA] Source: https://mastra.ai/ja/examples/memory/memory-processors この例では、メモリープロセッサーを使用してトークン使用量を制限し、ツール呼び出しをフィルタリングし、シンプルなカスタムプロセッサーを作成する方法を示します。 ## セットアップ まず、memoryパッケージをインストールします: ```bash npm install @mastra/memory # または pnpm add @mastra/memory # または yarn add @mastra/memory ``` ## プロセッサを使用した基本的なメモリ設定 ```typescript import { Memory } from "@mastra/memory"; import { TokenLimiter, ToolCallFilter } from "@mastra/memory/processors"; // Create memory with processors const memory = new Memory({ processors: [new TokenLimiter(127000), new ToolCallFilter()], }); ``` ## トークン制限の使用 `TokenLimiter` は、モデルのコンテキストウィンドウ内に収まるように支援します: ```typescript import { Memory } from "@mastra/memory"; import { TokenLimiter } from "@mastra/memory/processors"; // トークン制限を設定してメモリをセットアップ const memory = new Memory({ processors: [ // 約12700トークンに制限 (GPT-4o用) new TokenLimiter(127000), ], }); ``` 必要に応じて異なるエンコーディングを指定することもできます: ```typescript import { Memory } from "@mastra/memory"; import { TokenLimiter } from "@mastra/memory/processors"; import cl100k_base from "js-tiktoken/ranks/cl100k_base"; const memory = new Memory({ processors: [ new TokenLimiter({ limit: 16000, encoding: cl100k_base, // 特定のモデル用の特定のエンコーディング 例: GPT-3.5 }), ], }); ``` ## ツール呼び出しのフィルタリング `ToolCallFilter` プロセッサは、メモリからツール呼び出しとその結果を削除します: ```typescript import { Memory } from "@mastra/memory"; import { ToolCallFilter } from "@mastra/memory/processors"; // すべてのツール呼び出しをフィルタリング const memoryNoTools = new Memory({ processors: [new ToolCallFilter()], }); // 特定のツール呼び出しをフィルタリング const memorySelectiveFilter = new Memory({ processors: [ new ToolCallFilter({ exclude: ["imageGenTool", "clipboardTool"], }), ], }); ``` ## 複数のプロセッサの組み合わせ プロセッサは定義された順に実行されます: ```typescript import { Memory } from "@mastra/memory"; import { TokenLimiter, ToolCallFilter } from "@mastra/memory/processors"; const memory = new Memory({ processors: [ // 最初にツール呼び出しをフィルタリング new ToolCallFilter({ exclude: ["imageGenTool"] }), // 次にトークンを制限(他のフィルタ/変換後の正確な測定のためにトークンリミッターは常に最後に配置) new TokenLimiter(16000), ], }); ``` ## シンプルなカスタムプロセッサの作成 `MemoryProcessor` クラスを拡張することで、独自のプロセッサを作成できます: ```typescript import type { CoreMessage } from "@mastra/core"; import { MemoryProcessor } from "@mastra/core/memory"; import { Memory } from "@mastra/memory"; // 最新のメッセージのみを保持するシンプルなプロセッサ class RecentMessagesProcessor extends MemoryProcessor { private limit: number; constructor(limit: number = 10) { super(); this.limit = limit; } process(messages: CoreMessage[]): CoreMessage[] { // 最新のメッセージのみを保持 return messages.slice(-this.limit); } } // カスタムプロセッサを使用 const memory = new Memory({ processors: [ new RecentMessagesProcessor(5), // 最新の5件のメッセージのみを保持 new TokenLimiter(16000), ], }); ``` 注意: この例はカスタムプロセッサの動作を理解しやすくするためのもので、`new Memory({ options: { lastMessages: 5 } })` を使用してメッセージをより効率的に制限できます。メモリプロセッサは、メモリがストレージから取得された後に適用されますが、`options.lastMessages` はメッセージがストレージから取得される前に適用されます。 ## エージェントとの統合 エージェントでプロセッサを使用してメモリを使用する方法は次のとおりです: ```typescript import { Agent } from "@mastra/core/agent"; 
import { Memory, TokenLimiter, ToolCallFilter } from "@mastra/memory"; import { openai } from "@ai-sdk/openai"; // Set up memory with processors const memory = new Memory({ processors: [ new ToolCallFilter({ exclude: ["debugTool"] }), new TokenLimiter(16000), ], }); // Create an agent with the memory const agent = new Agent({ name: "ProcessorAgent", instructions: "You are a helpful assistant with processed memory.", model: openai("gpt-4o-mini"), memory, }); // Use the agent const response = await agent.stream("Hi, can you remember our conversation?", { threadId: "unique-thread-id", resourceId: "user-123", }); for await (const chunk of response.textStream) { process.stdout.write(chunk); } ``` ## 概要 この例では以下を示します: 1. コンテキストウィンドウのオーバーフローを防ぐためのトークン制限を使用したメモリの設定 2. ノイズとトークン使用量を減らすためのツール呼び出しのフィルタリング 3. 最近のメッセージのみを保持するためのシンプルなカスタムプロセッサの作成 4. 複数のプロセッサを正しい順序で組み合わせる 5. 処理されたメモリをエージェントと統合する メモリプロセッサの詳細については、[メモリプロセッサのドキュメント](/reference/memory/memory-processors)を参照してください。 # LibSQLを使用したメモリ [JA] Source: https://mastra.ai/ja/examples/memory/memory-with-libsql この例では、デフォルトのストレージおよびベクターデータベースバックエンドであるLibSQLを使用して、Mastraのメモリシステムを使用する方法を示します。 ## クイックスタート 設定なしでメモリを初期化すると、LibSQLがストレージおよびベクターデータベースとして使用されます。 ```typescript copy showLineNumbers import { Memory } from '@mastra/memory'; import { Agent } from '@mastra/core/agent'; // Initialize memory with LibSQL defaults const memory = new Memory(); const memoryAgent = new Agent({ name: "Memory Agent", instructions: "You are an AI agent with the ability to automatically recall memories from previous interactions.", model: openai('gpt-4o-mini'), memory, }); ``` ## カスタム設定 より詳細な制御が必要な場合は、ストレージ、ベクターデータベース、およびエンベッダーを明示的に設定できます。`storage` または `vector` のいずれかを省略した場合、LibSQL が省略されたオプションのデフォルトとして使用されます。これにより、必要に応じてストレージまたはベクター検索のいずれかに異なるプロバイダーを使用することができます。 ```typescript import { openai } from '@ai-sdk/openai'; import { LibSQLStore } from "@mastra/core/storage/libsql"; import { LibSQLVector } from "@mastra/core/vector/libsql"; const customMemory = new Memory({ storage: new LibSQLStore({ config: { url: process.env.DATABASE_URL || "file:local.db", }, }), vector: new LibSQLVector({ connectionUrl: process.env.DATABASE_URL || "file:local.db", }), options: { lastMessages: 10, semanticRecall: { topK: 3, messageRange: 2, }, }, }); const memoryAgent = new Agent({ name: "Memory Agent", instructions: "あなたは以前のやり取りから記憶を自動的に呼び出す能力を持つAIエージェントです。会話は数時間、数日、数ヶ月、または数年続くことがあります。まだ知らない場合は、ユーザーの名前といくつかの情報を尋ねるべきです。", model: openai('gpt-4o-mini'), memory: customMemory, }); ``` ## 使用例 ```typescript import { randomUUID } from "crypto"; // Start a conversation const threadId = randomUUID(); const resourceId = "SOME_USER_ID"; // Start with a system message const response1 = await memoryAgent.stream( [ { role: "system", content: `Chat with user started now ${new Date().toISOString()}. Don't mention this message.`, }, ], { resourceId, threadId, }, ); // Send user message const response2 = await memoryAgent.stream("What can you help me with?", { threadId, resourceId, }); // Use semantic search to find relevant messages const response3 = await memoryAgent.stream("What did we discuss earlier?", { threadId, resourceId, memoryOptions: { lastMessages: false, semanticRecall: { topK: 3, // Get top 3 most relevant messages messageRange: 2, // Include context around each match }, }, }); ``` この例は以下を示しています: 1. ベクトル検索機能を備えたLibSQLストレージの設定 2. メッセージ履歴とセマンティック検索のためのメモリオプションの設定 3. メモリ統合を備えたエージェントの作成 4. 会話履歴で関連するメッセージを見つけるためのセマンティック検索の使用 5. 
`messageRange`を使用して一致したメッセージの周囲のコンテキストを含める # Postgresを使用したメモリ [JA] Source: https://mastra.ai/ja/examples/memory/memory-with-pg この例では、PostgreSQLをストレージバックエンドとして使用して、Mastraのメモリシステムを使用する方法を示します。 ## セットアップ まず、PostgreSQLストレージとベクター機能を使用してメモリシステムをセットアップします: ```typescript import { Memory } from "@mastra/memory"; import { PostgresStore, PgVector } from "@mastra/pg"; import { Agent } from "@mastra/core/agent"; import { openai } from "@ai-sdk/openai"; // PostgreSQL接続の詳細 const host = "localhost"; const port = 5432; const user = "postgres"; const database = "postgres"; const password = "postgres"; const connectionString = `postgresql://${user}:${password}@${host}:${port}`; // PostgreSQLストレージとベクター検索でメモリを初期化 const memory = new Memory({ storage: new PostgresStore({ host, port, user, database, password, }), vector: new PgVector(connectionString), options: { lastMessages: 10, semanticRecall: { topK: 3, messageRange: 2, }, }, }); // メモリ機能を持つエージェントを作成 const chefAgent = new Agent({ name: "chefAgent", instructions: "あなたはMichelです。実用的で経験豊富な家庭料理のシェフで、利用可能な食材で素晴らしい料理を作る手助けをします。", model: openai("gpt-4o-mini"), memory, }); ``` ## 使用例 ```typescript import { randomUUID } from "crypto"; // 会話を開始する const threadId = randomUUID(); const resourceId = "SOME_USER_ID"; // 材料について尋ねる const response1 = await chefAgent.stream( "私のキッチンには、パスタ、缶詰のトマト、ニンニク、オリーブオイル、そしていくつかの乾燥ハーブ(バジルとオレガノ)があります。何が作れますか?", { threadId, resourceId, }, ); // 別の材料について尋ねる const response2 = await chefAgent.stream( "今、友達の家にいて、彼らは鶏もも肉、ココナッツミルク、サツマイモ、カレーパウダーを持っています。", { threadId, resourceId, }, ); // メモリを使用して以前の会話を思い出す const response3 = await chefAgent.stream( "友達の家に行く前に何を料理しましたか?", { threadId, resourceId, memoryOptions: { lastMessages: 3, // コンテキストのために最後の3つのメッセージを取得 }, }, ); ``` この例は以下を示しています: 1. ベクトル検索機能を備えたPostgreSQLストレージの設定 2. メッセージ履歴とセマンティック検索のためのメモリオプションの設定 3. メモリ統合を備えたエージェントの作成 4. 
複数のやり取りを通じて会話のコンテキストを維持するためのエージェントの使用 # Upstashを使用したメモリ [JA] Source: https://mastra.ai/ja/examples/memory/memory-with-upstash この例では、ストレージバックエンドとしてUpstashを使用してMastraのメモリシステムを使用する方法を示します。 ## セットアップ まず、Upstashのストレージとベクター機能を使用してメモリシステムをセットアップします: ```typescript import { Memory } from "@mastra/memory"; import { UpstashStore, UpstashVector } from "@mastra/upstash"; import { Agent } from "@mastra/core/agent"; import { openai } from "@ai-sdk/openai"; // Upstashのストレージとベクター検索を使用してメモリを初期化 const memory = new Memory({ storage: new UpstashStore({ url: process.env.UPSTASH_REDIS_REST_URL, token: process.env.UPSTASH_REDIS_REST_TOKEN, }), vector: new UpstashVector({ url: process.env.UPSTASH_REDIS_REST_URL, token: process.env.UPSTASH_REDIS_REST_TOKEN, }), options: { lastMessages: 10, semanticRecall: { topK: 3, messageRange: 2, }, }, }); // メモリ機能を持つエージェントを作成 const chefAgent = new Agent({ name: "chefAgent", instructions: "あなたはMichelです。実用的で経験豊富な家庭料理のシェフで、利用可能な食材で素晴らしい料理を作る手助けをします。", model: openai("gpt-4o-mini"), memory, }); ``` ## 環境設定 環境変数にUpstashの資格情報を設定してください: ```bash UPSTASH_REDIS_REST_URL=your-redis-url UPSTASH_REDIS_REST_TOKEN=your-redis-token ``` ## 使用例 ```typescript import { randomUUID } from "crypto"; // Start a conversation const threadId = randomUUID(); const resourceId = "SOME_USER_ID"; // Ask about ingredients const response1 = await chefAgent.stream( "私のキッチンには、パスタ、缶詰のトマト、ニンニク、オリーブオイル、そしていくつかの乾燥ハーブ(バジルとオレガノ)があります。何が作れますか?", { threadId, resourceId, }, ); // Ask about different ingredients const response2 = await chefAgent.stream( "今、私は友達の家にいて、彼らは鶏もも肉、ココナッツミルク、サツマイモ、カレーパウダーを持っています。", { threadId, resourceId, }, ); // Use memory to recall previous conversation const response3 = await chefAgent.stream( "友達の家に行く前に何を料理しましたか?", { threadId, resourceId, memoryOptions: { lastMessages: 3, // Get last 3 messages for context semanticRecall: { topK: 2, // Also get 2 most relevant messages messageRange: 2, // Include context around matches }, }, }, ); ``` この例は以下を示しています: 1. ベクター検索機能を備えたUpstashストレージの設定 2. Upstash接続のための環境変数の設定 3. メモリ統合を備えたエージェントの作成 4. 同じクエリで最近の履歴とセマンティック検索の両方を使用 --- title: ストリーミング作業記憶(上級) description: 会話を通じてToDoリストを維持するための作業記憶の使用例 --- # ストリーミング作業メモリ(上級) [JA] Source: https://mastra.ai/ja/examples/memory/streaming-working-memory-advanced この例では、最小限のコンテキストでも作業メモリを使用してToDoリストを維持するエージェントを作成する方法を示します。作業メモリのより簡単な紹介については、[基本的な作業メモリの例](/examples/memory/short-term-working-memory)を参照してください。 ## セットアップ 作業メモリ機能を持つエージェントの作成方法を分解してみましょう。最小限のコンテキストでもタスクを記憶するToDoリストマネージャーを構築します。 ### 1. メモリの設定 まず、作業メモリを使用して状態を維持するため、短いコンテキストウィンドウでメモリシステムを設定します。メモリはデフォルトでLibSQLストレージを使用しますが、必要に応じて他の[ストレージプロバイダー](/docs/agents/agent-memory#storage-options)を使用することもできます: ```typescript import { Memory } from "@mastra/memory"; const memory = new Memory({ options: { lastMessages: 1, // 作業メモリは短いコンテキストウィンドウでも会話の一貫性を維持できることを意味します workingMemory: { enabled: true, }, }, }); ``` ### 2. 作業メモリテンプレートの定義 次に、エージェントにToDoリストデータをどのように構造化するかを示すテンプレートを定義します。テンプレートはデータ構造を表現するためにMarkdownを使用します。これにより、エージェントは各ToDoアイテムに対して追跡すべき情報を理解するのに役立ちます。 ```typescript const memory = new Memory({ options: { lastMessages: 1, workingMemory: { enabled: true, template: ` # Todo List ## アイテムのステータス - アクティブなアイテム: - 例 (期限: 3028年2月7日, 開始: 2025年2月7日) - 説明: これは例のタスクです ## 完了 - まだありません `, }, }, }); ``` ### 3. 
ToDoリストエージェントの作成

最後に、このメモリシステムを使用するエージェントを作成します。エージェントの指示は、ユーザーとどのようにやり取りし、ToDoリストを管理するかを定義します。

```typescript
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";

const todoAgent = new Agent({
  name: "TODO Agent",
  instructions:
    "あなたは役に立つToDoリストAIエージェントです。ユーザーがToDoリストを管理するのを手伝ってください。まだリストがない場合は、何を追加するか尋ねてください!リストがある場合は、チャットが始まるときに常にそれを表示してください。各アイテムには絵文字、日付、タイトル(1から始まるインデックス番号付き)、説明、ステータスを追加してください。各情報の左に絵文字を追加してください。また、ボックス内に箇条書きでサブタスクリストをサポートしてください。各タスクにどのくらい時間がかかるかを尋ねて、ユーザーが時間を区切るのを手伝ってください。",
  model: openai("gpt-4o-mini"),
  memory,
});
```

**注:** テンプレートと指示はオプションです - `workingMemory.enabled`が`true`に設定されている場合、デフォルトのシステムメッセージが自動的に挿入され、エージェントが作業メモリをどのように使用するかを理解するのを助けます。

## 使用例

エージェントの応答には、Mastraが作業メモリを自動的に更新するために使用するXMLのような`<working_memory>$data</working_memory>`タグが含まれます。これを処理する2つの方法を見ていきます。

### 基本的な使用法

単純なケースでは、`maskStreamTags`を使用してユーザーから作業メモリの更新を隠すことができます。

```typescript
import { randomUUID } from "crypto";
import { maskStreamTags } from "@mastra/core/utils";

// 会話を開始する
const threadId = randomUUID();
const resourceId = "SOME_USER_ID";

// 新しいToDoアイテムを追加する
const response = await todoAgent.stream(
  "タスクを追加: アプリの新機能を構築する。約2時間かかり、次の金曜日までに完了する必要があります。",
  {
    threadId,
    resourceId,
  },
);

// ストリームを処理し、作業メモリの更新を隠す
for await (const chunk of maskStreamTags(
  response.textStream,
  "working_memory",
)) {
  process.stdout.write(chunk);
}
```

### UIフィードバックを伴う高度な使用法

より良いユーザー体験のために、作業メモリが更新されている間にロード状態を表示することができます。

```typescript
// 上記と同じインポートとセットアップ...

// ライフサイクルフックを追加してUIフィードバックを提供する
const maskedStream = maskStreamTags(response.textStream, "working_memory", {
  // working_memoryタグが開始されたときに呼び出される
  onStart: () => showLoadingSpinner("ToDoリストを更新中..."),
  // working_memoryタグが終了したときに呼び出される
  onEnd: () => hideLoadingSpinner(),
  // マスクされたコンテンツと共に呼び出される
  onMask: (chunk) => console.debug("更新されたToDoリスト:", chunk),
});

// マスクされたストリームを処理する
for await (const chunk of maskedStream) {
  process.stdout.write(chunk);
}
```

この例は以下を示しています:

1. 作業メモリが有効なメモリシステムのセットアップ
2. Markdownで構造化されたToDoリストテンプレートの作成
3. `maskStreamTags`を使用してユーザーからメモリ更新を隠す
4. ライフサイクルフックを使用してメモリ更新中にUIのロード状態を提供
コンテキストに1つのメッセージしかない場合でも(`lastMessages: 1`)、エージェントは作業メモリに完全なToDoリストを保持します。エージェントが応答するたびに、ToDoリストの現在の状態で作業メモリを更新し、インタラクション間の永続性を確保します。

エージェントメモリの詳細、他のメモリタイプやストレージオプションについては、[メモリドキュメント](/docs/agents/agent-memory)ページをご覧ください。

---
title: ストリーミング作業メモリ
description: エージェントで作業メモリを使用する例
---

# ストリーミング作業メモリ

[JA] Source: https://mastra.ai/ja/examples/memory/streaming-working-memory

この例では、ユーザーの名前、場所、好みのような関連する会話の詳細を保持する作業メモリを持つエージェントを作成する方法を示します。

## セットアップ

まず、作業メモリを有効にしてメモリシステムをセットアップします。メモリはデフォルトでLibSQLストレージを使用しますが、必要に応じて他の[ストレージプロバイダー](/docs/agents/agent-memory#storage-options)を使用することもできます。

### テキストストリームモード(デフォルト)

```typescript
import { Memory } from "@mastra/memory";

const memory = new Memory({
  options: {
    workingMemory: {
      enabled: true,
      use: "text-stream", // this is the default mode
    },
  },
});
```

### ツールコールモード

または、作業メモリの更新にツールコールを使用することもできます。このモードは、`toDataStream()`を使用する際に必要です(テキストストリームモードはデータストリーミングと互換性がありません)。

```typescript
const toolCallMemory = new Memory({
  options: {
    workingMemory: {
      enabled: true,
      use: "tool-call", // Required for toDataStream() compatibility
    },
  },
});
```

メモリインスタンスをエージェントに追加します。

```typescript
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";

const agent = new Agent({
  name: "Memory agent",
  instructions: "You are a helpful AI assistant.",
  model: openai("gpt-4o-mini"),
  memory, // or toolCallMemory
});
```

## 使用例

作業メモリが設定されたので、エージェントと対話すると、エージェントはやり取りの重要な詳細を記憶できます。

### テキストストリームモード

テキストストリームモードでは、エージェントは作業メモリの更新を直接応答に含めます:

```typescript
import { randomUUID } from "crypto";
import { maskStreamTags } from "@mastra/core/utils";

const threadId = randomUUID();
const resourceId = "SOME_USER_ID";

const response = await agent.stream("Hello, my name is Jane", {
  threadId,
  resourceId,
});

// 作業メモリタグを隠して応答ストリームを処理
for await (const chunk of maskStreamTags(
  response.textStream,
  "working_memory",
)) {
  process.stdout.write(chunk);
}
```

### ツールコールモード

ツールコールモードでは、エージェントは専用のツールを使用して作業メモリを更新します:

```typescript
// toolCallMemory を渡して同様に作成したエージェント(ここでは toolCallAgent)を想定
const toolCallResponse = await toolCallAgent.stream("Hello, my name is Jane", {
  threadId,
  resourceId,
});

// ツールコールを通じて更新が行われるため、作業メモリタグを隠す必要はありません
for await (const chunk of toolCallResponse.textStream) {
  process.stdout.write(chunk);
}
```

### 応答データの処理

テキストストリームモードでは、応答ストリームに `<working_memory>$data</working_memory>` というタグ付きデータが含まれます(`$data` はMarkdown形式のコンテンツです)。

Mastraはこれらのタグを拾い、LLMから返されたデータで作業メモリを自動的に更新します。

このデータをユーザーに表示しないようにするには、上記のように `maskStreamTags` ユーティリティを使用できます。

ツールコールモードでは、作業メモリの更新はツールコールを通じて行われるため、タグを隠す必要はありません。

## 概要

この例では以下を示します:

1. テキストストリームモードまたはツールコールモードで作業メモリを有効にしてメモリを設定する
2. `maskStreamTags` を使用してテキストストリームモードでメモリ更新を隠す
3. エージェントが両方のモードでインタラクション間に関連するユーザー情報を維持する
4. 作業メモリの更新を処理するための異なるアプローチ
## 高度なユースケース

作業メモリに関連する情報を制御する方法や、作業メモリが保存されている間の読み込み状態を表示する方法についての例は、[高度な作業メモリの例](/examples/memory/streaming-working-memory-advanced)をご覧ください。

他のメモリタイプやストレージオプションを含むエージェントメモリについて詳しく知るには、[メモリドキュメント](/docs/agents/agent-memory)ページをチェックしてください。

---
title: AI SDK useChat フック
description: Mastra メモリを Vercel AI SDK useChat フックと統合する方法を示す例。
---

# 例: AI SDK `useChat` フック

[JA] Source: https://mastra.ai/ja/examples/memory/use-chat

Vercel AI SDKの`useChat`フックを使用して、ReactのようなフロントエンドフレームワークとMastraのメモリを統合するには、メッセージ履歴が重複しないように注意深く扱う必要があります。この例では、推奨されるパターンを示します。

## `useChat`を使用したメッセージ重複の防止

`useChat`のデフォルトの動作では、各リクエストでチャット履歴全体が送信されます。Mastraのメモリは`threadId`に基づいて履歴を自動的に取得するため、クライアントから完全な履歴を送信すると、コンテキストウィンドウとストレージにメッセージが重複してしまいます。

**解決策:** `useChat`を設定して、`threadId`と`resourceId`と共に**最新のメッセージのみ**を送信します。

```typescript
// components/Chat.tsx (React Example)
import { useChat } from "ai/react";

export function Chat({ threadId, resourceId }) {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: "/api/chat", // Your backend endpoint
    // Pass only the latest message and custom IDs
    experimental_prepareRequestBody: (request) => {
      // Ensure messages array is not empty and get the last message
      const lastMessage =
        request.messages.length > 0
          ? request.messages[request.messages.length - 1]
          : null;

      // Return the structured body for your API route
      return {
        message: lastMessage, // Send only the most recent message content/role
        threadId,
        resourceId,
      };
    },
    // Optional: Initial messages if loading history from backend
    // initialMessages: loadedMessages,
  });

  // ... rest of your chat UI component
  return (
    <div>
      {/* Render messages */}
      <form onSubmit={handleSubmit}>
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="メッセージを送信..."
        />
        <button type="submit">送信</button>
      </form>
    </div>
  );
}

// app/api/chat/route.ts (Next.js Example)
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { openai } from "@ai-sdk/openai";
import { CoreMessage } from "@mastra/core"; // Import CoreMessage

const agent = new Agent({
  name: "ChatAgent",
  instructions: "You are a helpful assistant.",
  model: openai("gpt-4o"),
  memory: new Memory(), // Assumes default memory setup
});

export async function POST(request: Request) {
  // Get data structured by experimental_prepareRequestBody
  const {
    message,
    threadId,
    resourceId,
  }: { message: CoreMessage | null; threadId: string; resourceId: string } =
    await request.json();

  // Handle cases where message might be null (e.g., initial load or error)
  if (!message || !message.content) {
    // Return an appropriate response or error
    return new Response("Missing message content", { status: 400 });
  }

  // Process with memory using the single message content
  const stream = await agent.stream(message.content, {
    threadId,
    resourceId,
    // Pass other message properties if needed, e.g., role
    // messageOptions: { role: message.role }
  });

  // Return the streaming response
  return stream.toDataStreamResponse();
}
```

メッセージの永続性に関する詳細は、[AI SDKのドキュメント](https://sdk.vercel.ai/docs/ai-sdk-ui/chatbot-message-persistence)を参照してください。

## 基本スレッド管理UI

このページは `useChat` に焦点を当てていますが、スレッドを管理するためのUI(リスト、作成、選択)を構築することもできます。これは通常、`memory.getThreadsByResourceId()` や `memory.createThread()` のようなMastraのメモリ機能とやり取りするバックエンドAPIエンドポイントを含みます。

```typescript
// スレッドリストの概念的なReactコンポーネント
import React, { useState, useEffect } from 'react';

// API関数が存在すると仮定: fetchThreads, createNewThread
async function fetchThreads(userId: string): Promise<{ id: string; title: string }[]> { /* ... */ }
async function createNewThread(userId: string): Promise<{ id: string; title: string }> { /* ... */ }

function ThreadList({ userId, currentThreadId, onSelectThread }) {
  const [threads, setThreads] = useState([]);
  // ... ローディングとエラーステート ...

  useEffect(() => {
    // userIdのスレッドを取得
  }, [userId]);

  const handleCreateThread = async () => {
    // createNewThread APIを呼び出し、状態を更新し、新しいスレッドを選択
  };

  // ... スレッドのリストと新しい会話ボタンでUIをレンダリング ...
  return (
    <div>
      <h2>会話</h2>
      <button onClick={handleCreateThread}>新しい会話</button>
      <ul>
        {threads.map(thread => (
          <li key={thread.id}>
            <button onClick={() => onSelectThread(thread.id)}>
              {thread.title}
            </button>
          </li>
        ))}
      </ul>
    </div>
  );
}

// 親チャットコンポーネントでの使用例
function ChatApp() {
  const userId = "user_123";
  const [currentThreadId, setCurrentThreadId] = useState(null);

  return (
    <div>
      <ThreadList
        userId={userId}
        currentThreadId={currentThreadId}
        onSelectThread={setCurrentThreadId}
      />
      {currentThreadId ? (
        // あなたのuseChatコンポーネント
        <Chat threadId={currentThreadId} resourceId={userId} />
      ) : (
        <div>会話を選択または開始してください。</div>
      )}
    </div>
  );
}
```

## 関連

- **[はじめに](../../docs/memory/overview.mdx)**: `resourceId` と `threadId` の基本概念を説明します。
- **[メモリリファレンス](../../reference/memory/Memory.mdx)**: `Memory` クラスメソッドのAPI詳細。

---
title: "例:チャンクデリミタの調整 | RAG | Mastra ドキュメント"
description: Mastraでチャンクデリミタを調整して、コンテンツ構造により適合させる方法。
---

import { GithubLink } from "@/components/github-link";

# チャンク区切りを調整する

[JA] Source: https://mastra.ai/ja/examples/rag/chunking/adjust-chunk-delimiters

大きなドキュメントを処理する際、テキストを小さなチャンクに分割する方法を制御したい場合があります。デフォルトでは、ドキュメントは改行で分割されますが、この動作をカスタマイズしてコンテンツ構造により適合させることができます。この例では、ドキュメントをチャンク化するためのカスタム区切り文字を指定する方法を示します。

```tsx copy
import { MDocument } from "@mastra/rag";

const doc = MDocument.fromText("Your plain text content...");

const chunks = await doc.chunk({
  separator: "\n",
});
```
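区切り文字には任意の文字列を指定できます。たとえば、空行(段落境界)で分割したい場合の最小スケッチは次のとおりです(入力テキストは説明用の仮のものです):

```tsx copy
import { MDocument } from "@mastra/rag";

// 段落が空行で区切られたテキストを想定
const doc = MDocument.fromText(
  "First paragraph...\n\nSecond paragraph...\n\nThird paragraph...",
);

// "\n\n" を区切り文字として段落単位でチャンク化
const chunks = await doc.chunk({
  separator: "\n\n",
});

// 各チャンクのテキストを確認
console.log(chunks.map((chunk) => chunk.text));
```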




--- title: "例:チャンクサイズの調整 | RAG | Mastra ドキュメント" description: Mastraでチャンクサイズを調整して、コンテンツとメモリ要件により適合させます。 --- import { GithubLink } from "@/components/github-link"; # チャンクサイズの調整 [JA] Source: https://mastra.ai/ja/examples/rag/chunking/adjust-chunk-size 大きなドキュメントを処理する際には、各チャンクに含まれるテキストの量を調整する必要があるかもしれません。デフォルトでは、チャンクは1024文字の長さですが、このサイズをカスタマイズして、コンテンツやメモリの要件により適合させることができます。この例では、ドキュメントを分割する際にカスタムチャンクサイズを設定する方法を示します。 ```tsx copy import { MDocument } from "@mastra/rag"; const doc = MDocument.fromText("Your plain text content..."); const chunks = await doc.chunk({ size: 512, }); ```




--- title: "例:HTMLのセマンティックチャンキング | RAG | Mastra ドキュメント" description: MastraでドキュメントをセマンティックにチャンクするためにHTMLコンテンツをチャンクする。 --- import { GithubLink } from "@/components/github-link"; # HTMLを意味的に分割する [JA] Source: https://mastra.ai/ja/examples/rag/chunking/chunk-html HTMLコンテンツを扱う際、ドキュメントの構造を維持しながら、より小さく管理しやすい部分に分割する必要がよくあります。`chunk`メソッドは、HTMLタグと要素の整合性を保ちながら、HTMLコンテンツを賢く分割します。この例は、検索や取得の目的でHTMLドキュメントをどのように分割するかを示しています。 ```tsx copy import { MDocument } from "@mastra/rag"; const html = `

h1 content...

p content...

`; const doc = MDocument.fromHTML(html); const chunks = await doc.chunk({ headers: [ ["h1", "Header 1"], ["p", "Paragraph"], ], }); console.log(chunks); ```




--- title: "例:JSONのセマンティックチャンキング | RAG | Mastra ドキュメント" description: MastraでセマンティックにドキュメントをチャンクするためのJSONデータのチャンキング。 --- import { GithubLink } from "@/components/github-link"; # JSONを意味的に分割する [JA] Source: https://mastra.ai/ja/examples/rag/chunking/chunk-json JSONデータを扱う際には、オブジェクトの構造を保持しながら小さな部分に分割する必要があります。chunkメソッドは、キーと値の関係を維持しながら、JSONコンテンツを賢く分解します。この例は、検索や取得の目的でJSONドキュメントをどのように分割するかを示しています。 ```tsx copy import { MDocument } from "@mastra/rag"; const testJson = { name: "John Doe", age: 30, email: "john.doe@example.com", }; const doc = MDocument.fromJSON(JSON.stringify(testJson)); const chunks = await doc.chunk({ maxSize: 100, }); console.log(chunks); ```




--- title: "例:マークダウンのセマンティックチャンキング | RAG | Mastra ドキュメント" description: 検索や取得目的でマークダウン文書をチャンク化するためのMastraの使用例。 --- import { GithubLink } from "@/components/github-link"; # チャンクマークダウン [JA] Source: https://mastra.ai/ja/examples/rag/chunking/chunk-markdown Markdownは生のHTMLよりも情報密度が高く、RAGパイプラインでの作業が容易です。Markdownを扱う際には、ヘッダーやフォーマットを保持しながら小さな部分に分割する必要があります。`chunk`メソッドは、ヘッダー、リスト、コードブロックのようなMarkdown特有の要素を賢く処理します。この例は、検索や取得の目的でMarkdownドキュメントをチャンクする方法を示しています。 ```tsx copy import { MDocument } from "@mastra/rag"; const doc = MDocument.fromMarkdown("# Your markdown content..."); const chunks = await doc.chunk(); ```




--- title: "例:テキストの意味的チャンキング | RAG | Mastra ドキュメント" description: 大きなテキスト文書を処理のために小さなチャンクに分割するためのMastraの使用例。 --- import { GithubLink } from "@/components/github-link"; # チャンクテキスト [JA] Source: https://mastra.ai/ja/examples/rag/chunking/chunk-text 大きなテキストドキュメントを扱う際には、処理のためにそれらを小さく管理しやすい部分に分割する必要があります。チャンクメソッドは、検索、分析、または取得に使用できるセグメントにテキストコンテンツを分割します。この例では、デフォルト設定を使用してプレーンテキストをチャンクに分割する方法を示します。 ```tsx copy import { MDocument } from "@mastra/rag"; const doc = MDocument.fromText("Your plain text content..."); const chunks = await doc.chunk(); ```




--- title: "例:チャンク配列の埋め込み | RAG | Mastra ドキュメント" description: 類似性検索のためにテキストチャンクの配列の埋め込みを生成するためにMastraを使用する例。 --- import { GithubLink } from "@/components/github-link"; # Embed Chunk Array [JA] Source: https://mastra.ai/ja/examples/rag/embedding/embed-chunk-array ドキュメントをチャンク化した後、テキストチャンクを類似性検索に使用できる数値ベクトルに変換する必要があります。`embed` メソッドは、選択したプロバイダーとモデルを使用してテキストチャンクを埋め込みに変換します。この例では、テキストチャンクの配列に対して埋め込みを生成する方法を示します。 ```tsx copy import { openai } from '@ai-sdk/openai'; import { MDocument } from '@mastra/rag'; import { embed } from 'ai'; const doc = MDocument.fromText("Your text content..."); const chunks = await doc.chunk(); const { embeddings } = await embedMany({ model: openai.embedding('text-embedding-3-small'), values: chunks.map(chunk => chunk.text), }); ```




--- title: "例:テキストチャンクの埋め込み | RAG | Mastra ドキュメント" description: 類似性検索のために単一のテキストチャンクの埋め込みを生成するためにMastraを使用する例。 --- import { GithubLink } from "@/components/github-link"; # テキストチャンクの埋め込み [JA] Source: https://mastra.ai/ja/examples/rag/embedding/embed-text-chunk 個々のテキストチャンクを扱う際には、類似性検索のためにそれらを数値ベクトルに変換する必要があります。`embed` メソッドは、選択したプロバイダーとモデルを使用して、単一のテキストチャンクを埋め込みに変換します。 ```tsx copy import { openai } from '@ai-sdk/openai'; import { MDocument } from '@mastra/rag'; import { embed } from 'ai'; const doc = MDocument.fromText("Your text content..."); const chunks = await doc.chunk(); const { embedding } = await embed({ model: openai.embedding('text-embedding-3-small'), value: chunks[0].text, }); ```




--- title: "例: Cohereを使用したテキストの埋め込み | RAG | Mastra ドキュメント" description: Cohereの埋め込みモデルを使用して埋め込みを生成するMastraの使用例。 --- import { GithubLink } from "@/components/github-link"; # Cohereでテキストを埋め込む [JA] Source: https://mastra.ai/ja/examples/rag/embedding/embed-text-with-cohere 代替の埋め込みプロバイダーを使用する場合、選択したモデルの仕様に一致するベクトルを生成する方法が必要です。`embed` メソッドは複数のプロバイダーをサポートしており、異なる埋め込みサービス間で切り替えることができます。この例では、Cohereの埋め込みモデルを使用して埋め込みを生成する方法を示します。 ```tsx copy import { cohere } from '@ai-sdk/cohere'; import { MDocument } from "@mastra/rag"; import { embedMany } from 'ai'; const doc = MDocument.fromText("Your text content..."); const chunks = await doc.chunk(); const { embeddings } = await embedMany({ model: cohere.embedding('embed-english-v3.0'), values: chunks.map(chunk => chunk.text), }); ```




--- title: "例:メタデータ抽出 | 検索 | RAG | Mastra ドキュメント" description: Mastraでドキュメントからメタデータを抽出して活用する例。ドキュメント処理と検索を強化します。 --- import { GithubLink } from "@/components/github-link"; # メタデータ抽出 [JA] Source: https://mastra.ai/ja/examples/rag/embedding/metadata-extraction この例では、Mastra のドキュメント処理機能を使用して、ドキュメントからメタデータを抽出し利用する方法を示します。 抽出されたメタデータは、ドキュメントの整理、フィルタリング、および RAG システムでの強化された検索に使用できます。 ## 概要 このシステムは、2つの方法でメタデータ抽出を示します: 1. ドキュメントからの直接メタデータ抽出 2. メタデータ抽出を伴うチャンク化 ## セットアップ ### 依存関係 必要な依存関係をインポートします: ```typescript copy showLineNumbers filename="src/index.ts" import { MDocument } from '@mastra/rag'; ``` ## ドキュメント作成 テキストコンテンツからドキュメントを作成します: ```typescript copy showLineNumbers{3} filename="src/index.ts" const doc = MDocument.fromText(`Title: The Benefits of Regular Exercise Regular exercise has numerous health benefits. It improves cardiovascular health, strengthens muscles, and boosts mental wellbeing. Key Benefits: • Reduces stress and anxiety • Improves sleep quality • Helps maintain healthy weight • Increases energy levels For optimal results, experts recommend at least 150 minutes of moderate exercise per week.`); ``` ## 1. 直接メタデータ抽出 ドキュメントから直接メタデータを抽出します: ```typescript copy showLineNumbers{17} filename="src/index.ts" // メタデータ抽出オプションを設定 await doc.extractMetadata({ keywords: true, // 重要なキーワードを抽出 summary: true, // 簡潔な要約を生成 }); // 抽出されたメタデータを取得 const meta = doc.getMetadata(); console.log('抽出されたメタデータ:', meta); // 出力例: // 抽出されたメタデータ: { // keywords: [ // '運動', // '健康の利点', // '心血管の健康', // '精神的健康', // 'ストレス軽減', // '睡眠の質' // ], // summary: '定期的な運動は、心血管の健康、筋力、精神的健康を含む複数の健康上の利点を提供します。主な利点には、ストレス軽減、睡眠の改善、体重管理、エネルギーの増加が含まれます。推奨される運動時間は週に150分です。' // } ``` ## 2. メタデータを用いたチャンク化 ドキュメントのチャンク化とメタデータ抽出を組み合わせる: ```typescript copy showLineNumbers{40} filename="src/index.ts" // メタデータ抽出を伴うチャンク化を設定 await doc.chunk({ strategy: 'recursive', // 再帰的チャンク化戦略を使用 size: 200, // 最大チャンクサイズ extract: { keywords: true, // チャンクごとにキーワードを抽出 summary: true, // チャンクごとに要約を生成 }, }); // チャンクからメタデータを取得 const metaTwo = doc.getMetadata(); console.log('チャンクメタデータ:', metaTwo); // 出力例: // チャンクメタデータ: { // keywords: [ // '運動', // '健康の利点', // '心血管の健康', // '精神的健康', // 'ストレス軽減', // '睡眠の質' // ], // summary: '定期的な運動は、心血管の健康、筋力、精神的健康を含む複数の健康上の利点を提供します。主な利点には、ストレス軽減、睡眠の改善、体重管理、エネルギーの増加が含まれます。推奨される運動時間は週に150分です。' // } ```




--- title: "例:ハイブリッドベクトル検索 | RAG | Mastra ドキュメント" description: Mastraでベクトル検索結果を強化するためにPGVectorでメタデータフィルターを使用する例。 --- import { GithubLink } from "@/components/github-link"; # ハイブリッドベクトル検索 [JA] Source: https://mastra.ai/ja/examples/rag/query/hybrid-vector-search ベクトル類似性検索をメタデータフィルターと組み合わせると、より正確で効率的なハイブリッド検索を作成できます。 このアプローチは次の要素を組み合わせます: - 最も関連性の高いドキュメントを見つけるためのベクトル類似性検索 - 追加の基準に基づいて検索結果を絞り込むためのメタデータフィルター この例では、MastraとPGVectorを使用したハイブリッドベクトル検索の方法を示します。 ## 概要 このシステムは、MastraとPGVectorを使用してフィルタリングされたベクトル検索を実装しています。以下のことを行います: 1. メタデータフィルターを使用してPGVectorの既存の埋め込みをクエリします 2. 異なるメタデータフィールドでフィルタリングする方法を示します 3. ベクトル類似性とメタデータフィルタリングを組み合わせる方法を示します > **注**: ドキュメントからメタデータを抽出する方法の例については、[メタデータ抽出](./metadata-extraction)ガイドを参照してください。 > > 埋め込みを作成して保存する方法については、[埋め込みのアップサート](/examples/rag/upsert/upsert-embeddings)ガイドを参照してください。 ## セットアップ ### 環境セットアップ 環境変数を設定してください: ```bash filename=".env" OPENAI_API_KEY=your_openai_api_key_here POSTGRES_CONNECTION_STRING=your_connection_string_here ``` ### 依存関係 必要な依存関係をインポートします: ```typescript copy showLineNumbers filename="src/index.ts" import { embed } from 'ai'; import { PgVector } from '@mastra/pg'; import { openai } from '@ai-sdk/openai'; ``` ## ベクターストアの初期化 接続文字列を使用してPgVectorを初期化します: ```typescript copy showLineNumbers{4} filename="src/index.ts" const pgVector = new PgVector(process.env.POSTGRES_CONNECTION_STRING!); ``` ## 使用例 ### メタデータ値でフィルタリング ```typescript copy showLineNumbers{6} filename="src/index.ts" // クエリの埋め込みを作成 const { embedding } = await embed({ model: openai.embedding('text-embedding-3-small'), value: '[ここにドキュメントに基づくクエリを挿入]', }); // メタデータフィルタを使用したクエリ const result = await pgVector.query({ indexName: 'embeddings', queryVector: embedding, topK: 3, filter: { 'path.to.metadata': { $eq: 'value', }, }, }); console.log('結果:', result); ```




--- title: "例:トップK結果の取得 | RAG | Mastra ドキュメント" description: Mastraを使用してベクトルデータベースをクエリし、意味的に類似したチャンクを取得する例。 --- import { GithubLink } from "@/components/github-link"; # トップK結果の取得 [JA] Source: https://mastra.ai/ja/examples/rag/query/retrieve-results ベクターデータベースに埋め込みを保存した後、それらをクエリして類似したコンテンツを見つける必要があります。 `query` メソッドは、入力埋め込みに対して意味的に最も類似したチャンクを関連性でランク付けして返します。`topK` パラメータを使用すると、返す結果の数を指定できます。 この例は、Pineconeベクターデータベースから類似したチャンクを取得する方法を示しています。 ```tsx copy import { openai } from "@ai-sdk/openai"; import { PineconeVector } from "@mastra/pinecone"; import { MDocument } from "@mastra/rag"; import { embedMany } from "ai"; const doc = MDocument.fromText("Your text content..."); const chunks = await doc.chunk(); const { embeddings } = await embedMany({ values: chunks.map(chunk => chunk.text), model: openai.embedding("text-embedding-3-small"), }); const pinecone = new PineconeVector("your-api-key"); await pinecone.createIndex({ indexName: "test_index", dimension: 1536, }); await pinecone.upsert({ indexName: "test_index", vectors: embeddings, metadata: chunks?.map((chunk: any) => ({ text: chunk.text })), }); const topK = 10; const results = await pinecone.query({ indexName: "test_index", queryVector: embeddings[0], topK, }); console.log(results); ```




--- title: "例: ツールを使用した結果の再ランク付け | 検索 | RAG | Mastra ドキュメント" description: OpenAI埋め込みとベクトルストレージにPGVectorを使用して、Mastraで再ランク付けを行うRAGシステムを実装する例。 --- import { GithubLink } from "@/components/github-link"; # ツールを使用した結果の再ランキング [JA] Source: https://mastra.ai/ja/examples/rag/rerank/rerank-rag この例では、Mastra のベクトルクエリツールを使用して、OpenAI の埋め込みと PGVector をベクトルストレージとして使用した再ランキングを伴う Retrieval-Augmented Generation (RAG) システムを実装する方法を示します。 ## 概要 このシステムは、MastraとOpenAIを使用した再ランキングを伴うRAGを実装しています。以下がその機能です: 1. 応答生成のためにgpt-4o-miniを使用してMastraエージェントを設定 2. 再ランキング機能を備えたベクトルクエリツールを作成 3. テキストドキュメントを小さなセグメントに分割し、それらから埋め込みを作成 4. それらをPostgreSQLベクトルデータベースに保存 5. クエリに基づいて関連するチャンクを取得し再ランキング 6. Mastraエージェントを使用してコンテキストに応じた応答を生成 ## セットアップ ### 環境セットアップ 環境変数を設定してください: ```bash filename=".env" OPENAI_API_KEY=your_openai_api_key_here POSTGRES_CONNECTION_STRING=your_connection_string_here ``` ### 依存関係 次に、必要な依存関係をインポートします: ```typescript copy showLineNumbers filename="index.ts" import { openai } from "@ai-sdk/openai"; import { Mastra } from "@mastra/core"; import { Agent } from "@mastra/core/agent"; import { PgVector } from "@mastra/pg"; import { MDocument, createVectorQueryTool } from "@mastra/rag"; import { embedMany } from "ai"; ``` ## 再ランキングを使用したベクトルクエリツールの作成 @mastra/rag からインポートされた createVectorQueryTool を使用して、ベクトルデータベースをクエリし、結果を再ランキングするツールを作成できます: ```typescript copy showLineNumbers{8} filename="index.ts" const vectorQueryTool = createVectorQueryTool({ vectorStoreName: "pgVector", indexName: "embeddings", model: openai.embedding("text-embedding-3-small"), reranker: { model: openai("gpt-4o-mini"), }, }); ``` ## エージェント設定 応答を処理するMastraエージェントを設定します: ```typescript copy showLineNumbers{17} filename="index.ts" export const ragAgent = new Agent({ name: "RAG Agent", instructions: `You are a helpful assistant that answers questions based on the provided context. Keep your answers concise and relevant. Important: When asked to answer a question, please base your answer only on the context provided in the tool. If the context doesn't contain enough information to fully answer the question, please state that explicitly.`, model: openai("gpt-4o-mini"), tools: { vectorQueryTool, }, }); ``` ## PgVectorとMastraのインスタンス化 コンポーネントを使用してPgVectorとMastraをインスタンス化します: ```typescript copy showLineNumbers{29} filename="index.ts" const pgVector = new PgVector(process.env.POSTGRES_CONNECTION_STRING!); export const mastra = new Mastra({ agents: { ragAgent }, vectors: { pgVector }, }); const agent = mastra.getAgent("ragAgent"); ``` ## ドキュメント処理 ドキュメントを作成し、チャンクに処理します: ```typescript copy showLineNumbers{38} filename="index.ts" const doc1 = MDocument.fromText(` market data shows price resistance levels. technical charts display moving averages. support levels guide trading decisions. breakout patterns signal entry points. price action determines trade timing. baseball cards show gradual value increase. rookie cards command premium prices. card condition affects resale value. authentication prevents fake trading. grading services verify card quality. volume analysis confirms price trends. sports cards track seasonal demand. chart patterns predict movements. mint condition doubles card worth. resistance breaks trigger orders. rare cards appreciate yearly. 
`); const chunks = await doc1.chunk({ strategy: "recursive", size: 150, overlap: 20, separator: "\n", }); ``` ## 埋め込みの作成と保存 チャンクの埋め込みを生成し、それらをベクターデータベースに保存します: ```typescript copy showLineNumbers{66} filename="index.ts" const { embeddings } = await embedMany({ model: openai.embedding("text-embedding-3-small"), values: chunks.map(chunk => chunk.text), }); const vectorStore = mastra.getVector("pgVector"); await vectorStore.createIndex({ indexName: "embeddings", dimension: 1536, }); await vectorStore.upsert({ indexName: "embeddings", vectors: embeddings, metadata: chunks?.map((chunk: any) => ({ text: chunk.text })), }); ``` ## 再ランキングを用いたクエリ 再ランキングが結果にどのように影響するかを確認するために、さまざまなクエリを試してください: ```typescript copy showLineNumbers{82} filename="index.ts" const queryOne = 'explain technical trading analysis'; const answerOne = await agent.generate(queryOne); console.log('\nQuery:', queryOne); console.log('Response:', answerOne.text); const queryTwo = 'explain trading card valuation'; const answerTwo = await agent.generate(queryTwo); console.log('\nQuery:', queryTwo); console.log('Response:', answerTwo.text); const queryThree = 'how do you analyze market resistance'; const answerThree = await agent.generate(queryThree); console.log('\nQuery:', queryThree); console.log('Response:', answerThree.text); ```




--- title: "例: 結果の再ランク付け | 検索 | RAG | Mastra ドキュメント" description: OpenAI埋め込みとベクトル保存のためのPGVectorを使用して、Mastraでセマンティック再ランク付けを実装する例。 --- import { GithubLink } from "@/components/github-link"; # 再ランキング結果 [JA] Source: https://mastra.ai/ja/examples/rag/rerank/rerank この例では、Mastra、OpenAIの埋め込み、およびベクトルストレージ用のPGVectorを使用して、再ランキングを伴う検索強化生成(RAG)システムを実装する方法を示します。 ## 概要 このシステムは、MastraとOpenAIを使用した再ランキングを伴うRAGを実装しています。以下のことを行います: 1. テキストドキュメントを小さなセグメントに分割し、それらから埋め込みを作成します 2. ベクトルをPostgreSQLデータベースに保存します 3. 初期のベクトル類似性検索を実行します 4. Mastraのrerank関数を使用して結果を再ランキングし、ベクトル類似性、意味的関連性、および位置スコアを組み合わせます 5. 初期結果と再ランキングされた結果を比較して改善を示します ## セットアップ ### 環境セットアップ 環境変数を設定してください: ```bash filename=".env" OPENAI_API_KEY=your_openai_api_key_here POSTGRES_CONNECTION_STRING=your_connection_string_here ``` ### 依存関係 次に、必要な依存関係をインポートします: ```typescript copy showLineNumbers filename="src/index.ts" import { openai } from '@ai-sdk/openai'; import { PgVector } from '@mastra/pg'; import { MDocument, rerank } from '@mastra/rag'; import { embedMany, embed } from 'ai'; ``` ## ドキュメント処理 ドキュメントを作成し、チャンクに処理します: ```typescript copy showLineNumbers{7} filename="src/index.ts" const doc1 = MDocument.fromText(` market data shows price resistance levels. technical charts display moving averages. support levels guide trading decisions. breakout patterns signal entry points. price action determines trade timing. `); const chunks = await doc1.chunk({ strategy: 'recursive', size: 150, overlap: 20, separator: '\n', }); ``` ## 埋め込みの作成と保存 チャンクの埋め込みを生成し、それをベクターデータベースに保存します: ```typescript copy showLineNumbers{36} filename="src/index.ts" const { embeddings } = await embedMany({ values: chunks.map(chunk => chunk.text), model: openai.embedding('text-embedding-3-small'), }); const pgVector = new PgVector(process.env.POSTGRES_CONNECTION_STRING!); await pgVector.createIndex({ indexName: 'embeddings', dimension: 1536, }); await pgVector.upsert({ indexName: 'embeddings', vectors: embeddings, metadata: chunks?.map((chunk: any) => ({ text: chunk.text })), }); ``` ## ベクトル検索と再ランキング ベクトル検索を実行し、結果を再ランキングします: ```typescript copy showLineNumbers{51} filename="src/index.ts" const query = 'explain technical trading analysis'; // Get query embedding const { embedding: queryEmbedding } = await embed({ value: query, model: openai.embedding('text-embedding-3-small'), }); // Get initial results const initialResults = await pgVector.query({ indexName: 'embeddings', queryVector: queryEmbedding, topK: 3, }); // Re-rank results const rerankedResults = await rerank(initialResults, query, openai('gpt-4o-mini'), { weights: { semantic: 0.5, // How well the content matches the query semantically vector: 0.3, // Original vector similarity score position: 0.2 // Preserves original result ordering }, topK: 3, }); ``` 重みは、最終的なランキングにどのように異なる要因が影響を与えるかを制御します: - `semantic`: 高い値は、クエリに対する意味的理解と関連性を優先します - `vector`: 高い値は、元のベクトル類似性スコアを優先します - `position`: 高い値は、結果の元の順序を維持するのに役立ちます ## 結果の比較 初期結果と再ランク付けされた結果の両方を印刷して、改善を確認します: ```typescript copy showLineNumbers{72} filename="src/index.ts" console.log('Initial Results:'); initialResults.forEach((result, index) => { console.log(`Result ${index + 1}:`, { text: result.metadata.text, score: result.score, }); }); console.log('Re-ranked Results:'); rerankedResults.forEach(({ result, score, details }, index) => { console.log(`Result ${index + 1}:`, { text: result.metadata.text, score: score, semantic: details.semantic, vector: details.vector, position: details.position, }); }); ``` 再ランク付けされた結果は、ベクトル類似性とセマンティックな理解を組み合わせることで、検索品質がどのように向上するかを示しています。各結果には以下が含まれます: - 
すべての要素を組み合わせた総合スコア - 言語モデルからのセマンティック関連性スコア - 埋め込み比較からのベクトル類似性スコア - 適切な場合に元の順序を維持するための位置ベースのスコア
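参考として、総合スコアが各要素の重み付き和として合成されると仮定した場合の計算イメージを示します(スコアの数値は説明用の仮のものです):

```typescript copy
// 仮のスコア内訳(実際の値はクエリと結果により異なる)
const details = { semantic: 0.9, vector: 0.7, position: 0.5 };

// 上の例で指定した重み
const weights = { semantic: 0.5, vector: 0.3, position: 0.2 };

// 重み付き和: 0.9*0.5 + 0.7*0.3 + 0.5*0.2 = 0.45 + 0.21 + 0.10 = 0.76
const totalScore =
  details.semantic * weights.semantic +
  details.vector * weights.vector +
  details.position * weights.position;

console.log(totalScore.toFixed(2)); // "0.76"
```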




--- title: "例: Cohereを使用したリランキング | RAG | Mastra ドキュメント" description: Cohereのリランキングサービスを使用して、Mastraでドキュメント検索の関連性を向上させる例。 --- # Cohereを使用したリランキング [JA] Source: https://mastra.ai/ja/examples/rag/rerank/reranking-with-cohere RAGのためにドキュメントを取得する際、初期のベクトル類似性検索では重要なセマンティックマッチを見逃す可能性があります。 Cohereのリランキングサービスは、複数のスコアリング要因を使用してドキュメントを並べ替えることで、結果の関連性を向上させます。 ```typescript import { rerank } from "@mastra/rag"; const results = rerank( searchResults, "deployment configuration", cohere("rerank-v3.5"), { topK: 5, weights: { semantic: 0.4, vector: 0.4, position: 0.2 } } ); ``` ## リンク - [rerank() リファレンス](/reference/rag/rerank.mdx) - [リトリーバル ドキュメント](/reference/rag/retrieval.mdx) --- title: "例:埋め込みのアップサート | RAG | Mastra ドキュメント" description: 類似性検索のために様々なベクトルデータベースに埋め込みを保存するMastraの使用例。 --- import { Tabs } from "nextra/components"; import { GithubLink } from "@/components/github-link"; # 埋め込みのアップサート [JA] Source: https://mastra.ai/ja/examples/rag/upsert/upsert-embeddings 埋め込みを生成した後、それらをベクトル類似検索をサポートするデータベースに保存する必要があります。この例では、後で取得するために埋め込みをさまざまなベクトルデータベースに保存する方法を示します。 `PgVector` クラスは、pgvector 拡張機能を使用して PostgreSQL にインデックスを作成し、埋め込みを挿入するためのメソッドを提供します。 ```tsx copy import { openai } from "@ai-sdk/openai"; import { PgVector } from "@mastra/pg"; import { MDocument } from "@mastra/rag"; import { embedMany } from "ai"; const doc = MDocument.fromText("Your text content..."); const chunks = await doc.chunk(); const { embeddings } = await embedMany({ values: chunks.map(chunk => chunk.text), model: openai.embedding("text-embedding-3-small"), }); const pgVector = new PgVector(process.env.POSTGRES_CONNECTION_STRING!); await pgVector.createIndex({ indexName: "test_index", dimension: 1536, }); await pgVector.upsert({ indexName: "test_index", vectors: embeddings, metadata: chunks?.map((chunk: any) => ({ text: chunk.text })), }); ```


`PineconeVector` クラスは、Pinecone という管理されたベクターデータベースサービスにインデックスを作成し、埋め込みを挿入するためのメソッドを提供します。 ```tsx copy import { openai } from '@ai-sdk/openai'; import { PineconeVector } from '@mastra/pinecone'; import { MDocument } from '@mastra/rag'; import { embedMany } from 'ai'; const doc = MDocument.fromText('Your text content...'); const chunks = await doc.chunk(); const { embeddings } = await embedMany({ values: chunks.map(chunk => chunk.text), model: openai.embedding('text-embedding-3-small'), }); const pinecone = new PineconeVector(process.env.PINECONE_API_KEY!); await pinecone.createIndex({ indexName: 'testindex', dimension: 1536, }); await pinecone.upsert({ indexName: 'testindex', vectors: embeddings, metadata: chunks?.map(chunk => ({ text: chunk.text })), }); ```


`QdrantVector` クラスは、高性能なベクターデータベースである Qdrant にコレクションを作成し、埋め込みを挿入するためのメソッドを提供します。 ```tsx copy import { openai } from '@ai-sdk/openai'; import { QdrantVector } from '@mastra/qdrant'; import { MDocument } from '@mastra/rag'; import { embedMany } from 'ai'; const doc = MDocument.fromText('Your text content...'); const chunks = await doc.chunk(); const { embeddings } = await embedMany({ values: chunks.map(chunk => chunk.text), model: openai.embedding('text-embedding-3-small'), maxRetries: 3, }); const qdrant = new QdrantVector( process.env.QDRANT_URL, process.env.QDRANT_API_KEY, ); await qdrant.createIndex({ indexName: 'test_collection', dimension: 1536, }); await qdrant.upsert({ indexName: 'test_collection', vectors: embeddings, metadata: chunks?.map(chunk => ({ text: chunk.text })), }); ``` `ChromaVector` クラスは、コレクションを作成し、Chroma(オープンソースの埋め込みデータベース)に埋め込みを挿入するためのメソッドを提供します。 ```tsx copy import { openai } from '@ai-sdk/openai'; import { ChromaVector } from '@mastra/chroma'; import { MDocument } from '@mastra/rag'; import { embedMany } from 'ai'; const doc = MDocument.fromText('Your text content...'); const chunks = await doc.chunk(); const { embeddings } = await embedMany({ values: chunks.map(chunk => chunk.text), model: openai.embedding('text-embedding-3-small'), }); const chroma = new ChromaVector({ path: "path/to/chroma/db", }); await chroma.createIndex({ indexName: 'test_collection', dimension: 1536, }); await chroma.upsert({ indexName: 'test_collection', vectors: embeddings, metadata: chunks.map(chunk => ({ text: chunk.text })), documents: chunks.map(chunk => chunk.text), }); ```


`AstraVector` クラスは、コレクションを作成し、DataStax Astra DB(クラウドネイティブのベクターデータベース)に埋め込みを挿入するためのメソッドを提供します。 ```tsx copy import { openai } from '@ai-sdk/openai'; import { AstraVector } from '@mastra/astra'; import { MDocument } from '@mastra/rag'; import { embedMany } from 'ai'; const doc = MDocument.fromText('Your text content...'); const chunks = await doc.chunk(); const { embeddings } = await embedMany({ model: openai.embedding('text-embedding-3-small'), values: chunks.map(chunk => chunk.text), }); const astra = new AstraVector({ token: process.env.ASTRA_DB_TOKEN, endpoint: process.env.ASTRA_DB_ENDPOINT, keyspace: process.env.ASTRA_DB_KEYSPACE, }); await astra.createIndex({ indexName: 'test_collection', dimension: 1536, }); await astra.upsert({ indexName: 'test_collection', vectors: embeddings, metadata: chunks?.map(chunk => ({ text: chunk.text })), }); ``` `LibSQLVector` クラスは、コレクションを作成し、LibSQL(SQLiteのフォークでベクター拡張を持つ)に埋め込みを挿入するためのメソッドを提供します。 ```tsx copy import { openai } from "@ai-sdk/openai"; import { LibSQLVector } from "@mastra/core/vector/libsql"; import { MDocument } from "@mastra/rag"; import { embedMany } from "ai"; const doc = MDocument.fromText("Your text content..."); const chunks = await doc.chunk(); const { embeddings } = await embedMany({ values: chunks.map((chunk) => chunk.text), model: openai.embedding("text-embedding-3-small"), }); const libsql = new LibSQLVector({ connectionUrl: process.env.DATABASE_URL, authToken: process.env.DATABASE_AUTH_TOKEN, // Optional: for Turso cloud databases }); await libsql.createIndex({ indexName: "test_collection", dimension: 1536, }); await libsql.upsert({ indexName: "test_collection", vectors: embeddings, metadata: chunks?.map((chunk) => ({ text: chunk.text })), }); ```


`UpstashVector` クラスは、コレクションを作成し、埋め込みをサーバーレスベクターデータベースである Upstash Vector に挿入するためのメソッドを提供します。 ```tsx copy import { openai } from '@ai-sdk/openai'; import { UpstashVector } from '@mastra/upstash'; import { MDocument } from '@mastra/rag'; import { embedMany } from 'ai'; const doc = MDocument.fromText('Your text content...'); const chunks = await doc.chunk(); const { embeddings } = await embedMany({ values: chunks.map(chunk => chunk.text), model: openai.embedding('text-embedding-3-small'), }); const upstash = new UpstashVector({ url: process.env.UPSTASH_URL, token: process.env.UPSTASH_TOKEN, }); await upstash.createIndex({ indexName: 'test_collection', dimension: 1536, }); await upstash.upsert({ indexName: 'test_collection', vectors: embeddings, metadata: chunks?.map(chunk => ({ text: chunk.text })), }); ``` `CloudflareVector` クラスは、コレクションを作成し、埋め込みをサーバーレスベクターデータベースサービスである Cloudflare Vectorize に挿入するためのメソッドを提供します。 ```tsx copy import { openai } from '@ai-sdk/openai'; import { CloudflareVector } from '@mastra/vectorize'; import { MDocument } from '@mastra/rag'; import { embedMany } from 'ai'; const doc = MDocument.fromText('Your text content...'); const chunks = await doc.chunk(); const { embeddings } = await embedMany({ values: chunks.map(chunk => chunk.text), model: openai.embedding('text-embedding-3-small'), }); const vectorize = new CloudflareVector({ accountId: process.env.CF_ACCOUNT_ID, apiToken: process.env.CF_API_TOKEN, }); await vectorize.createIndex({ indexName: 'test_collection', dimension: 1536, }); await vectorize.upsert({ indexName: 'test_collection', vectors: embeddings, metadata: chunks?.map(chunk => ({ text: chunk.text })), }); ```
--- title: "例:ベクトルクエリツールの使用 | RAG | Mastra ドキュメント" description: OpenAI埋め込みとベクトルストレージにPGVectorを使用して、Mastraで基本的なRAGシステムを実装する例。 --- import { GithubLink } from "@/components/github-link"; # ベクタークエリツールの使用 [JA] Source: https://mastra.ai/ja/examples/rag/usage/basic-rag この例では、RAGシステムでのセマンティック検索のために`createVectorQueryTool`を実装し使用する方法を示します。ツールの設定方法、ベクターストレージの管理、および関連するコンテキストを効果的に取得する方法を示しています。 ## 概要 このシステムは、Mastra と OpenAI を使用して RAG を実装しています。以下がその機能です: 1. 応答生成のために gpt-4o-mini を使用して Mastra エージェントを設定 2. ベクターストアの操作を管理するためのベクタークエリツールを作成 3. 既存の埋め込みを使用して関連するコンテキストを取得 4. Mastra エージェントを使用してコンテキストに応じた応答を生成 > **注**: 埋め込みの作成と保存方法については、[Upsert Embeddings](/examples/rag/upsert/upsert-embeddings) ガイドを参照してください。 ## セットアップ ### 環境セットアップ 環境変数を設定してください: ```bash filename=".env" OPENAI_API_KEY=your_openai_api_key_here POSTGRES_CONNECTION_STRING=your_connection_string_here ``` ### 依存関係 必要な依存関係をインポートします: ```typescript copy showLineNumbers filename="src/index.ts" import { openai } from '@ai-sdk/openai'; import { Mastra } from '@mastra/core'; import { Agent } from '@mastra/core/agent'; import { createVectorQueryTool } from '@mastra/rag'; import { PgVector } from '@mastra/pg'; ``` ## ベクタークエリツールの作成 ベクターデータベースをクエリできるツールを作成します: ```typescript copy showLineNumbers{7} filename="src/index.ts" const vectorQueryTool = createVectorQueryTool({ vectorStoreName: 'pgVector', indexName: 'embeddings', model: openai.embedding('text-embedding-3-small'), }); ``` ## エージェント設定 応答を処理するMastraエージェントを設定します: ```typescript copy showLineNumbers{13} filename="src/index.ts" export const ragAgent = new Agent({ name: 'RAG Agent', instructions: 'You are a helpful assistant that answers questions based on the provided context. Keep your answers concise and relevant.', model: openai('gpt-4o-mini'), tools: { vectorQueryTool, }, }); ``` ## PgVectorとMastraのインスタンス化 すべてのコンポーネントを使用してPgVectorとMastraをインスタンス化します: ```typescript copy showLineNumbers{23} filename="src/index.ts" const pgVector = new PgVector(process.env.POSTGRES_CONNECTION_STRING!); export const mastra = new Mastra({ agents: { ragAgent }, vectors: { pgVector }, }); const agent = mastra.getAgent('ragAgent'); ``` ## 使用例 ```typescript copy showLineNumbers{32} filename="src/index.ts" const prompt = ` [ここにドキュメントに基づくクエリを挿入] ツールで提供されたコンテキストのみに基づいて回答してください。 コンテキストに質問に完全に答えるための十分な情報が含まれていない場合は、その旨を明示してください。 `; const completion = await agent.generate(prompt); console.log(completion.text); ```




--- title: "例:情報密度の最適化 | RAG | Mastra ドキュメント" description: Mastraでのデータ重複排除とLLMベースの処理を使用して情報密度を最適化するRAGシステムの実装例。 --- import { GithubLink } from "@/components/github-link"; # 情報密度の最適化 [JA] Source: https://mastra.ai/ja/examples/rag/usage/cleanup-rag この例では、Mastra、OpenAIの埋め込み、およびベクトルストレージ用のPGVectorを使用して、Retrieval-Augmented Generation (RAG) システムを実装する方法を示します。 このシステムは、情報密度を最適化し、データの重複を排除するために、エージェントを使用して初期チャンクをクリーンアップします。 ## 概要 システムは、MastraとOpenAIを使用してRAGを実装し、今回はLLMベースの処理を通じて情報密度を最適化します。以下がその内容です: 1. クエリとドキュメントのクリーニングの両方を処理できるgpt-4o-miniを使用してMastraエージェントを設定します 2. エージェントが使用するためのベクトルクエリとドキュメントチャンクツールを作成します 3. 初期ドキュメントを処理します: - テキストドキュメントを小さなセグメントに分割します - チャンクの埋め込みを作成します - それらをPostgreSQLベクトルデータベースに保存します 4. ベースラインの応答品質を確立するために初期クエリを実行します 5. データを最適化します: - エージェントを使用してチャンクをクリーニングし、重複を排除します - クリーニングされたチャンクの新しい埋め込みを作成します - 最適化されたデータでベクトルストアを更新します 6. 改善された応答品質を示すために、同じクエリを再度実行します ## セットアップ ### 環境セットアップ 環境変数を設定してください: ```bash filename=".env" OPENAI_API_KEY=your_openai_api_key_here POSTGRES_CONNECTION_STRING=your_connection_string_here ``` ### 依存関係 次に、必要な依存関係をインポートします: ```typescript copy showLineNumbers filename="index.ts" import { openai } from '@ai-sdk/openai'; import { Mastra } from "@mastra/core"; import { Agent } from "@mastra/core/agent"; import { PgVector } from "@mastra/pg"; import { MDocument, createVectorQueryTool, createDocumentChunkerTool } from "@mastra/rag"; import { embedMany } from "ai"; ``` ## ツール作成 ### ベクタークエリツール `@mastra/rag` からインポートされた `createVectorQueryTool` を使用して、ベクターデータベースをクエリできるツールを作成できます。 ```typescript copy showLineNumbers{8} filename="index.ts" const vectorQueryTool = createVectorQueryTool({ vectorStoreName: "pgVector", indexName: "embeddings", model: openai.embedding('text-embedding-3-small'), }); ``` ### ドキュメントチャンクツール `@mastra/rag` からインポートされた `createDocumentChunkerTool` を使用して、ドキュメントをチャンク化し、そのチャンクをエージェントに送信するツールを作成できます。 ```typescript copy showLineNumbers{14} filename="index.ts" const doc = MDocument.fromText(yourText); const documentChunkerTool = createDocumentChunkerTool({ doc, params: { strategy: "recursive", size: 512, overlap: 25, separator: "\n", }, }); ``` ## エージェント設定 クエリとクリーニングの両方を処理できる単一のMastraエージェントを設定します: ```typescript copy showLineNumbers{26} filename="index.ts" const ragAgent = new Agent({ name: "RAG Agent", instructions: `あなたは、クエリとドキュメントのクリーニングの両方を処理する役立つアシスタントです。 クリーニング時: データを処理、クリーニング、ラベル付けし、関連性のない情報を削除し、重要な事実を保持しながら重複を排除します。 クエリ時: 利用可能なコンテキストに基づいて回答を提供します。回答は簡潔で関連性のあるものにしてください。 重要: 質問に答えるよう求められた場合、ツールで提供されたコンテキストのみに基づいて回答してください。コンテキストに質問に完全に答えるための十分な情報が含まれていない場合は、その旨を明示してください。 `, model: openai('gpt-4o-mini'), tools: { vectorQueryTool, documentChunkerTool, }, }); ``` ## PgVectorとMastraのインスタンス化 コンポーネントを使用してPgVectorとMastraをインスタンス化します: ```typescript copy showLineNumbers{41} filename="index.ts" const pgVector = new PgVector(process.env.POSTGRES_CONNECTION_STRING!); export const mastra = new Mastra({ agents: { ragAgent }, vectors: { pgVector }, }); const agent = mastra.getAgent('ragAgent'); ``` ## ドキュメント処理 初期ドキュメントを分割し、埋め込みを作成します: ```typescript copy showLineNumbers{49} filename="index.ts" const chunks = await doc.chunk({ strategy: "recursive", size: 256, overlap: 50, separator: "\n", }); const { embeddings } = await embedMany({ model: openai.embedding('text-embedding-3-small'), values: chunks.map(chunk => chunk.text), }); const vectorStore = mastra.getVector("pgVector"); await vectorStore.createIndex({ indexName: "embeddings", dimension: 1536, }); await vectorStore.upsert({ indexName: "embeddings", vectors: embeddings, metadata: chunks?.map((chunk: any) => ({ text: 
chunk.text })), }); ``` ## 初期クエリ 生データをクエリしてベースラインを確立してみましょう: ```typescript copy showLineNumbers{73} filename="index.ts" // Generate response using the original embeddings const query = 'What are all the technologies mentioned for space exploration?'; const originalResponse = await agent.generate(query); console.log('\nQuery:', query); console.log('Response:', originalResponse.text); ``` ## データ最適化 初期結果を確認した後、データの品質を向上させるためにクリーンアップを行います: ```typescript copy showLineNumbers{79} filename="index.ts" const chunkPrompt = `Use the tool provided to clean the chunks. Make sure to filter out irrelevant information that is not space related and remove duplicates.`; const newChunks = await agent.generate(chunkPrompt); const updatedDoc = MDocument.fromText(newChunks.text); const updatedChunks = await updatedDoc.chunk({ strategy: "recursive", size: 256, overlap: 50, separator: "\n", }); const { embeddings: cleanedEmbeddings } = await embedMany({ model: openai.embedding('text-embedding-3-small'), values: updatedChunks.map(chunk => chunk.text), }); // Update the vector store with cleaned embeddings await vectorStore.deleteIndex('embeddings'); await vectorStore.createIndex({ indexName: "embeddings", dimension: 1536, }); await vectorStore.upsert({ indexName: "embeddings", vectors: cleanedEmbeddings, metadata: updatedChunks?.map((chunk: any) => ({ text: chunk.text })), }); ``` ## 最適化されたクエリ データをクリーンアップした後に再度クエリを実行し、応答の違いを観察します: ```typescript copy showLineNumbers{109} filename="index.ts" // Query again with cleaned embeddings const cleanedResponse = await agent.generate(query); console.log('\nQuery:', query); console.log('Response:', cleanedResponse.text); ```




--- title: "例:思考の連鎖プロンプティング | RAG | Mastra ドキュメント" description: OpenAIとPGVectorを使用した思考の連鎖推論によるMastraでのRAGシステム実装の例。 --- import { GithubLink } from "@/components/github-link"; # 思考の連鎖プロンプティング [JA] Source: https://mastra.ai/ja/examples/rag/usage/cot-rag この例では、Mastra、OpenAIの埋め込み、およびベクトルストレージ用のPGVectorを使用して、検索強化生成(RAG)システムを実装する方法を示し、思考の連鎖推論に重点を置いています。 ## 概要 このシステムは、MastraとOpenAIを使用して、思考の連鎖プロンプトを実装しています。以下がその機能です: 1. 応答生成のためにgpt-4o-miniを使用してMastraエージェントを設定 2. ベクトルストアのインタラクションを管理するためのベクトルクエリツールを作成 3. テキストドキュメントを小さなセグメントに分割 4. これらのチャンクの埋め込みを作成 5. PostgreSQLベクトルデータベースに保存 6. ベクトルクエリツールを使用してクエリに基づいて関連するチャンクを取得 7. 思考の連鎖推論を使用してコンテキストに応じた応答を生成 ## セットアップ ### 環境セットアップ 環境変数を設定してください: ```bash filename=".env" OPENAI_API_KEY=your_openai_api_key_here POSTGRES_CONNECTION_STRING=your_connection_string_here ``` ### 依存関係 次に、必要な依存関係をインポートします: ```typescript copy showLineNumbers filename="index.ts" import { openai } from "@ai-sdk/openai"; import { Mastra } from "@mastra/core"; import { Agent } from "@mastra/core/agent"; import { PgVector } from "@mastra/pg"; import { createVectorQueryTool, MDocument } from "@mastra/rag"; import { embedMany } from "ai"; ``` ## ベクトルクエリツールの作成 @mastra/rag からインポートされた createVectorQueryTool を使用して、ベクトルデータベースをクエリできるツールを作成できます。 ```typescript copy showLineNumbers{8} filename="index.ts" const vectorQueryTool = createVectorQueryTool({ vectorStoreName: "pgVector", indexName: "embeddings", model: openai.embedding('text-embedding-3-small'), }); ``` ## エージェント設定 Mastraエージェントをチェーン・オブ・ソートプロンプトの指示で設定します: ```typescript copy showLineNumbers{14} filename="index.ts" export const ragAgent = new Agent({ name: "RAG Agent", instructions: `あなたは、提供されたコンテキストに基づいて質問に答える役立つアシスタントです。 各応答のために次のステップに従ってください: 1. まず、取得したコンテキストチャンクを注意深く分析し、重要な情報を特定します。 2. 取得した情報がクエリにどのように関連しているかについての思考プロセスを分解します。 3. 取得したチャンクから異なる部分をどのように結びつけているかを説明します。 4. 取得したコンテキストの証拠に基づいてのみ結論を導きます。 5. 
取得したチャンクに十分な情報が含まれていない場合は、何が欠けているかを明示的に述べてください。 応答を次のようにフォーマットします: 思考プロセス: - ステップ1:[取得したチャンクの初期分析] - ステップ2:[チャンク間の接続] - ステップ3:[チャンクに基づく推論] 最終回答: [取得したコンテキストに基づく簡潔な回答] 重要:質問に答えるよう求められた場合、ツールで提供されたコンテキストのみに基づいて回答してください。 コンテキストに質問に完全に答えるための十分な情報が含まれていない場合は、それを明示的に述べてください。 覚えておいてください:取得した情報をどのように使用して結論に達しているかを説明してください。 `, model: openai("gpt-4o-mini"), tools: { vectorQueryTool }, }); ``` ## PgVectorとMastraのインスタンス化 すべてのコンポーネントを使用してPgVectorとMastraをインスタンス化します: ```typescript copy showLineNumbers{36} filename="index.ts" const pgVector = new PgVector(process.env.POSTGRES_CONNECTION_STRING!); export const mastra = new Mastra({ agents: { ragAgent }, vectors: { pgVector }, }); const agent = mastra.getAgent("ragAgent"); ``` ## ドキュメント処理 ドキュメントを作成し、チャンクに処理します: ```typescript copy showLineNumbers{44} filename="index.ts" const doc = MDocument.fromText( `The Impact of Climate Change on Global Agriculture...`, ); const chunks = await doc.chunk({ strategy: "recursive", size: 512, overlap: 50, separator: "\n", }); ``` ## 埋め込みの作成と保存 チャンクの埋め込みを生成し、それをベクターデータベースに保存します: ```typescript copy showLineNumbers{55} filename="index.ts" const { embeddings } = await embedMany({ values: chunks.map(chunk => chunk.text), model: openai.embedding("text-embedding-3-small"), }); const vectorStore = mastra.getVector("pgVector"); await vectorStore.createIndex({ indexName: "embeddings", dimension: 1536, }); await vectorStore.upsert({ indexName: "embeddings", vectors: embeddings, metadata: chunks?.map((chunk: any) => ({ text: chunk.text })), }); ``` ## 思考の連鎖クエリ エージェントがどのように推論を分解するかを確認するために、さまざまなクエリを試してください: ```typescript copy showLineNumbers{83} filename="index.ts" const answerOne = await agent.generate('What are the main adaptation strategies for farmers?'); console.log('\nQuery:', 'What are the main adaptation strategies for farmers?'); console.log('Response:', answerOne.text); const answerTwo = await agent.generate('Analyze how temperature affects crop yields.'); console.log('\nQuery:', 'Analyze how temperature affects crop yields.'); console.log('Response:', answerTwo.text); const answerThree = await agent.generate('What connections can you draw between climate change and food security?'); console.log('\nQuery:', 'What connections can you draw between climate change and food security?'); console.log('Response:', answerThree.text); ```




--- title: "例:ワークフローを使用した構造化推論 | RAG | Mastra ドキュメント" description: Mastraのワークフロー機能を使用してRAGシステムに構造化推論を実装する例。 --- import { GithubLink } from "@/components/github-link"; # ワークフローによる構造化推論 [JA] Source: https://mastra.ai/ja/examples/rag/usage/cot-workflow-rag この例では、Mastra、OpenAI 埋め込み、およびベクトルストレージ用の PGVector を使用して、取得強化生成 (RAG) システムを実装する方法を示します。定義されたワークフローを通じた構造化推論に重点を置いています。 ## 概要 このシステムは、定義されたワークフローを通じて、MastraとOpenAIを使用してchain-of-thoughtプロンプトを用いたRAGを実装します。以下がその内容です: 1. 応答生成のためにgpt-4o-miniを使用してMastraエージェントを設定 2. ベクトルストアのインタラクションを管理するためのベクトルクエリツールを作成 3. chain-of-thought推論のための複数のステップを持つワークフローを定義 4. テキストドキュメントを処理し、チャンク化 5. PostgreSQLに埋め込みを作成し、保存 6. ワークフローステップを通じて応答を生成 ## セットアップ ### 環境セットアップ 環境変数を設定してください: ```bash filename=".env" OPENAI_API_KEY=your_openai_api_key_here POSTGRES_CONNECTION_STRING=your_connection_string_here ``` ### 依存関係 必要な依存関係をインポートします: ```typescript copy showLineNumbers filename="index.ts" import { openai } from "@ai-sdk/openai"; import { Mastra } from "@mastra/core"; import { Agent } from "@mastra/core/agent"; import { Step, Workflow } from "@mastra/core/workflows"; import { PgVector } from "@mastra/pg"; import { createVectorQueryTool, MDocument } from "@mastra/rag"; import { embedMany } from "ai"; import { z } from "zod"; ``` ## ワークフローの定義 まず、トリガースキーマを使用してワークフローを定義します: ```typescript copy showLineNumbers{10} filename="index.ts" export const ragWorkflow = new Workflow({ name: "rag-workflow", triggerSchema: z.object({ query: z.string(), }), }); ``` ## ベクタークエリツールの作成 ベクターデータベースをクエリするためのツールを作成します: ```typescript copy showLineNumbers{17} filename="index.ts" const vectorQueryTool = createVectorQueryTool({ vectorStoreName: "pgVector", indexName: "embeddings", model: openai.embedding('text-embedding-3-small'), }); ``` ## エージェント設定 Mastraエージェントを設定します: ```typescript copy showLineNumbers{23} filename="index.ts" export const ragAgent = new Agent({ name: "RAG Agent", instructions: `You are a helpful assistant that answers questions based on the provided context.`, model: openai("gpt-4o-mini"), tools: { vectorQueryTool, }, }); ``` ## ワークフローステップ ワークフローは、思考の連鎖を考慮して複数のステップに分けられています: ### 1. コンテキスト分析ステップ ```typescript copy showLineNumbers{32} filename="index.ts" const analyzeContext = new Step({ id: "analyzeContext", outputSchema: z.object({ initialAnalysis: z.string(), }), execute: async ({ context, mastra }) => { console.log("---------------------------"); const ragAgent = mastra?.getAgent('ragAgent'); const query = context?.getStepResult<{ query: string }>( "trigger", )?.query; const analysisPrompt = `${query} 1. まず、取得したコンテキストチャンクを注意深く分析し、重要な情報を特定します。`; const analysis = await ragAgent?.generate(analysisPrompt); console.log(analysis?.text); return { initialAnalysis: analysis?.text ?? "", }; }, }); ``` ### 2. 思考分解ステップ ```typescript copy showLineNumbers{54} filename="index.ts" const breakdownThoughts = new Step({ id: "breakdownThoughts", outputSchema: z.object({ breakdown: z.string(), }), execute: async ({ context, mastra }) => { console.log("---------------------------"); const ragAgent = mastra?.getAgent('ragAgent'); const analysis = context?.getStepResult<{ initialAnalysis: string; }>("analyzeContext")?.initialAnalysis; const connectionPrompt = ` 初期分析に基づいて: ${analysis} 2. 取得した情報がクエリにどのように関連しているかについての思考プロセスを分解します。 `; const connectionAnalysis = await ragAgent?.generate(connectionPrompt); console.log(connectionAnalysis?.text); return { breakdown: connectionAnalysis?.text ?? "", }; }, }); ``` ### 3. 
接続ステップ ```typescript copy showLineNumbers{80} filename="index.ts" const connectPieces = new Step({ id: "connectPieces", outputSchema: z.object({ connections: z.string(), }), execute: async ({ context, mastra }) => { console.log("---------------------------"); const ragAgent = mastra?.getAgent('ragAgent'); const process = context?.getStepResult<{ breakdown: string; }>("breakdownThoughts")?.breakdown; const connectionPrompt = ` 分解に基づいて: ${process} 3. 取得したチャンクから異なる部分をどのように接続しているかを説明します。 `; const connections = await ragAgent?.generate(connectionPrompt); console.log(connections?.text); return { connections: connections?.text ?? "", }; }, }); ``` ### 4. 結論ステップ ```typescript copy showLineNumbers{105} filename="index.ts" const drawConclusions = new Step({ id: "drawConclusions", outputSchema: z.object({ conclusions: z.string(), }), execute: async ({ context, mastra }) => { console.log("---------------------------"); const ragAgent = mastra?.getAgent('ragAgent'); const evidence = context?.getStepResult<{ connections: string; }>("connectPieces")?.connections; const conclusionPrompt = ` 接続に基づいて: ${evidence} 4. 取得したコンテキストの証拠に基づいてのみ結論を導き出します。 `; const conclusions = await ragAgent?.generate(conclusionPrompt); console.log(conclusions?.text); return { conclusions: conclusions?.text ?? "", }; }, }); ``` ### 5. 最終回答ステップ ```typescript copy showLineNumbers{130} filename="index.ts" const finalAnswer = new Step({ id: "finalAnswer", outputSchema: z.object({ finalAnswer: z.string(), }), execute: async ({ context, mastra }) => { console.log("---------------------------"); const ragAgent = mastra?.getAgent('ragAgent'); const conclusions = context?.getStepResult<{ conclusions: string; }>("drawConclusions")?.conclusions; const answerPrompt = ` 結論に基づいて: ${conclusions} 回答を次の形式でフォーマットします: 思考プロセス: - ステップ 1: [取得したチャンクの初期分析] - ステップ 2: [チャンク間の接続] - ステップ 3: [チャンクに基づく推論] 最終回答: [取得したコンテキストに基づく簡潔な回答]`; const finalAnswer = await ragAgent?.generate(answerPrompt); console.log(finalAnswer?.text); return { finalAnswer: finalAnswer?.text ?? 
"", }; }, }); ``` ## ワークフロー設定 ワークフロー内のすべてのステップを接続します: ```typescript copy showLineNumbers{160} filename="index.ts" ragWorkflow .step(analyzeContext) .then(breakdownThoughts) .then(connectPieces) .then(drawConclusions) .then(finalAnswer); ragWorkflow.commit(); ``` ## PgVectorとMastraのインスタンス化 すべてのコンポーネントを使用してPgVectorとMastraをインスタンス化します: ```typescript copy showLineNumbers{169} filename="index.ts" const pgVector = new PgVector(process.env.POSTGRES_CONNECTION_STRING!); export const mastra = new Mastra({ agents: { ragAgent }, vectors: { pgVector }, workflows: { ragWorkflow }, }); ``` ## ドキュメント処理 ドキュメントを処理し、チャンクに分割します: ```typescript copy showLineNumbers{177} filename="index.ts" const doc = MDocument.fromText(`The Impact of Climate Change on Global Agriculture...`); const chunks = await doc.chunk({ strategy: "recursive", size: 512, overlap: 50, separator: "\n", }); ``` ## 埋め込みの作成と保存 埋め込みを生成して保存します: ```typescript copy showLineNumbers{186} filename="index.ts" const { embeddings } = await embedMany({ model: openai.embedding('text-embedding-3-small'), values: chunks.map(chunk => chunk.text), }); const vectorStore = mastra.getVector("pgVector"); await vectorStore.createIndex({ indexName: "embeddings", dimension: 1536, }); await vectorStore.upsert({ indexName: "embeddings", vectors: embeddings, metadata: chunks?.map((chunk: any) => ({ text: chunk.text })), }); ``` --- title: "例: エージェント駆動のメタデータフィルタリング | 検索 | RAG | Mastra ドキュメント" description: RAG システムで Mastra エージェントを使用して、ドキュメント検索のためのメタデータフィルターを構築および適用する例。 --- import { GithubLink } from "@/components/github-link"; # エージェント駆動のメタデータフィルタリング [JA] Source: https://mastra.ai/ja/examples/rag/usage/filter-rag この例では、Mastra、OpenAIの埋め込み、およびベクトルストレージ用のPGVectorを使用して、Retrieval-Augmented Generation (RAG) システムを実装する方法を示します。 このシステムは、ユーザーのクエリからメタデータフィルタを構築するエージェントを使用して、ベクトルストア内の関連するチャンクを検索し、返される結果の量を減らします。 ## 概要 このシステムは、MastraとOpenAIを使用してメタデータフィルタリングを実装しています。以下がその機能です: 1. クエリを理解し、フィルター要件を特定するためにgpt-4o-miniを使用してMastraエージェントを設定 2. メタデータフィルタリングとセマンティック検索を処理するためのベクトルクエリツールを作成 3. ドキュメントをメタデータと埋め込みを含むチャンクに処理 4. 効率的な取得のためにベクトルとメタデータの両方をPGVectorに保存 5. 
---
title: "Example: Agent-Driven Metadata Filtering | Retrieval | RAG | Mastra Docs"
description: Example of using a Mastra agent in a RAG system to build and apply metadata filters for document retrieval.
---

import { GithubLink } from "@/components/github-link";

# Agent-Driven Metadata Filtering

[JA] Source: https://mastra.ai/ja/examples/rag/usage/filter-rag

This example demonstrates how to implement a Retrieval-Augmented Generation (RAG) system using Mastra, OpenAI embeddings, and PGVector for vector storage.
The system uses an agent that builds metadata filters from the user's query to find relevant chunks in the vector store, reducing the number of results returned.

## Overview

The system implements metadata filtering using Mastra and OpenAI. Here's what it does:

1. Sets up a Mastra agent with gpt-4o-mini to understand queries and identify filter requirements
2. Creates a vector query tool to handle metadata filtering and semantic search
3. Processes documents into chunks with metadata and embeddings
4. Stores both vectors and metadata in PGVector for efficient retrieval
5. Processes queries by combining metadata filters with semantic search

When a user asks a question:
- The agent analyzes the query to understand the intent
- It constructs appropriate metadata filters (e.g., by topic, date, or category)
- It uses the vector query tool to find the most relevant information
- It generates a contextual response based on the filtered results

## Setup

### Environment Setup

Make sure to set up your environment variables:

```bash filename=".env"
OPENAI_API_KEY=your_openai_api_key_here
POSTGRES_CONNECTION_STRING=your_connection_string_here
```

### Dependencies

Then, import the necessary dependencies:

```typescript copy showLineNumbers filename="index.ts"
import { openai } from '@ai-sdk/openai';
import { Mastra } from '@mastra/core';
import { Agent } from '@mastra/core/agent';
import { PgVector } from '@mastra/pg';
import { createVectorQueryTool, MDocument, PGVECTOR_PROMPT } from '@mastra/rag';
import { embedMany } from 'ai';
```

## Creating the Vector Query Tool

Using createVectorQueryTool imported from @mastra/rag, you can create a tool that enables metadata filtering:

```typescript copy showLineNumbers{9} filename="index.ts"
const vectorQueryTool = createVectorQueryTool({
  id: 'vectorQueryTool',
  vectorStoreName: "pgVector",
  indexName: "embeddings",
  model: openai.embedding('text-embedding-3-small'),
  enableFilter: true,
});
```

## Document Processing

Create a document and process it into chunks with metadata:

```typescript copy showLineNumbers{17} filename="index.ts"
const doc = MDocument.fromText(`The Impact of Climate Change on Global Agriculture...`);

const chunks = await doc.chunk({
  strategy: 'recursive',
  size: 512,
  overlap: 50,
  separator: '\n',
  extract: {
    keywords: true, // Extracts keywords from each chunk
  },
});
```

### Transforming Chunks into Metadata

Transform the chunks into filterable metadata:

```typescript copy showLineNumbers{31} filename="index.ts"
const chunkMetadata = chunks?.map((chunk: any, index: number) => ({
  text: chunk.text,
  ...chunk.metadata,
  nested: {
    keywords: chunk.metadata.excerptKeywords
      .replace('KEYWORDS:', '')
      .split(',')
      .map(k => k.trim()),
    id: index,
  },
}));
```

## Agent Configuration

The agent is configured to understand user queries and translate them into appropriate metadata filters.

The agent requires both the vector query tool and a system prompt containing:
- The metadata structure for the available filter fields
- The vector store prompt for filter operations and syntax

```typescript copy showLineNumbers{43} filename="index.ts"
export const ragAgent = new Agent({
  name: 'RAG Agent',
  model: openai('gpt-4o-mini'),
  instructions: `
  You are a helpful assistant that answers questions based on the provided context. Keep your answers concise and relevant.

  Filter the context by searching the metadata.

  The metadata is structured as follows:

  {
    text: string,
    excerptKeywords: string,
    nested: {
      keywords: string[],
      id: number,
    },
  }

  ${PGVECTOR_PROMPT}

  Important: When asked to answer a question, please base your answer only on the context provided in the tool.
  If the context doesn't contain enough information to fully answer the question, please state that explicitly.
  `,
  tools: { vectorQueryTool },
});
```

The agent's instructions are designed to:
- Process user queries to identify filter requirements
- Use the metadata structure to find relevant information
- Apply appropriate filters through the vectorQueryTool and the provided vector store prompt
- Generate responses based on the filtered context

> Note: Different vector stores have specific prompts available. See [Vector Store Prompts](/docs/rag/retrieval#vector-store-prompts) for details.

## Instantiating PgVector and Mastra

Instantiate PgVector and Mastra with the components:

```typescript copy showLineNumbers{69} filename="index.ts"
const pgVector = new PgVector(process.env.POSTGRES_CONNECTION_STRING!);

export const mastra = new Mastra({
  agents: { ragAgent },
  vectors: { pgVector },
});

const agent = mastra.getAgent('ragAgent');
```

## Creating and Storing Embeddings

Generate embeddings and store them together with the metadata:

```typescript copy showLineNumbers{78} filename="index.ts"
const { embeddings } = await embedMany({
  model: openai.embedding('text-embedding-3-small'),
  values: chunks.map(chunk => chunk.text),
});

const vectorStore = mastra.getVector('pgVector');
await vectorStore.createIndex({
  indexName: 'embeddings',
  dimension: 1536,
});

// Store both embeddings and metadata together
await vectorStore.upsert({
  indexName: 'embeddings',
  vectors: embeddings,
  metadata: chunkMetadata,
});
```

The `upsert` operation stores both the vector embeddings and their associated metadata, combining semantic search with metadata filtering capabilities.

## Metadata-Based Querying

Try different queries that use metadata filters:

```typescript copy showLineNumbers{96} filename="index.ts"
const queryOne = 'What are the adaptation strategies mentioned?';
const answerOne = await agent.generate(queryOne);
console.log('\nQuery:', queryOne);
console.log('Response:', answerOne.text);

const queryTwo = 'Show me recent sections. Check the "nested.id" field and return values that are greater than 2.';
const answerTwo = await agent.generate(queryTwo);
console.log('\nQuery:', queryTwo);
console.log('Response:', answerTwo.text);

const queryThree = 'Search the "text" field using regex operator to find sections containing "temperature".';
const answerThree = await agent.generate(queryThree);
console.log('\nQuery:', queryThree);
console.log('Response:', answerThree.text);
```
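To see the kind of filter the agent ends up applying, you can also query the vector store directly. This is a sketch under the assumption that `PgVector.query` accepts a MongoDB-style `filter` object alongside the query vector, which is the syntax `PGVECTOR_PROMPT` teaches the agent; the query text and filter values are illustrative:

```typescript
import { embed } from 'ai';

// Embed the question ourselves instead of going through the agent
const { embedding } = await embed({
  model: openai.embedding('text-embedding-3-small'),
  value: 'Show me recent sections',
});

// Roughly the filter the agent would construct for queryTwo above
const results = await vectorStore.query({
  indexName: 'embeddings',
  queryVector: embedding,
  topK: 10,
  filter: { 'nested.id': { $gt: 2 } },
});

console.log(results.map(r => r.metadata?.text));
```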




--- title: "例: 完全なグラフRAGシステム | RAG | Mastraドキュメント" description: OpenAI埋め込みとベクトルストレージにPGVectorを使用して、MastraでグラフRAGシステムを実装する例。 --- import { GithubLink } from "@/components/github-link"; # Graph RAG [JA] Source: https://mastra.ai/ja/examples/rag/usage/graph-rag この例では、Mastra、OpenAI embeddings、およびベクトルストレージ用のPGVectorを使用して、Retrieval-Augmented Generation (RAG) システムを実装する方法を示します。 ## 概要 このシステムは、MastraとOpenAIを使用してGraph RAGを実装しています。以下がその機能です: 1. 応答生成のためにgpt-4o-miniを使用してMastraエージェントを設定 2. ベクトルストアの操作とナレッジグラフの作成/トラバースを管理するためのGraphRAGツールを作成 3. テキストドキュメントを小さなセグメントに分割 4. これらのチャンクに対して埋め込みを作成 5. PostgreSQLベクトルデータベースにそれらを保存 6. GraphRAGツールを使用してクエリに基づく関連チャンクのナレッジグラフを作成 - ツールはベクトルストアから結果を返し、ナレッジグラフを作成 - クエリを使用してナレッジグラフをトラバース 7. Mastraエージェントを使用してコンテキストに応じた応答を生成 ## セットアップ ### 環境セットアップ 環境変数を設定してください: ```bash filename=".env" OPENAI_API_KEY=your_openai_api_key_here POSTGRES_CONNECTION_STRING=your_connection_string_here ``` ### 依存関係 次に、必要な依存関係をインポートします: ```typescript copy showLineNumbers filename="index.ts" import { openai } from "@ai-sdk/openai"; import { Mastra } from "@mastra/core"; import { Agent } from "@mastra/core/agent"; import { PgVector } from "@mastra/pg"; import { MDocument, createGraphRAGTool } from "@mastra/rag"; import { embedMany } from "ai"; ``` ## GraphRAG ツールの作成 @mastra/rag からインポートされた createGraphRAGTool を使用して、ベクターデータベースをクエリし、結果をナレッジグラフに変換するツールを作成できます: ```typescript copy showLineNumbers{8} filename="index.ts" const graphRagTool = createGraphRAGTool({ vectorStoreName: "pgVector", indexName: "embeddings", model: openai.embedding("text-embedding-3-small"), graphOptions: { dimension: 1536, threshold: 0.7, }, }); ``` ## エージェント設定 応答を処理するMastraエージェントを設定します: ```typescript copy showLineNumbers{19} filename="index.ts" const ragAgent = new Agent({ name: "GraphRAG Agent", instructions: `あなたは、提供されたコンテキストに基づいて質問に答える役立つアシスタントです。回答を次のようにフォーマットしてください: 1. 直接的な事実: 質問に関連するテキストから直接述べられている事実のみをリストアップします(2-3の箇条書き) 2. 作られたつながり: テキストの異なる部分間で見つけた関係をリストアップします(2-3の箇条書き) 3. 結論: すべてをまとめる1文の要約 各セクションを簡潔にし、最も重要なポイントに焦点を当ててください。 重要: 質問に答えるよう求められた場合、ツールで提供されたコンテキストのみに基づいて回答してください。 コンテキストに質問に完全に答えるための十分な情報が含まれていない場合は、その旨を明示してください。`, model: openai("gpt-4o-mini"), tools: { graphRagTool, }, }); ``` ## PgVectorとMastraのインスタンス化 コンポーネントを使用してPgVectorとMastraをインスタンス化します: ```typescript copy showLineNumbers{36} filename="index.ts" const pgVector = new PgVector(process.env.POSTGRES_CONNECTION_STRING!); export const mastra = new Mastra({ agents: { ragAgent }, vectors: { pgVector }, }); const agent = mastra.getAgent("ragAgent"); ``` ## ドキュメント処理 ドキュメントを作成し、チャンクに処理します: ```typescript copy showLineNumbers{45} filename="index.ts" const doc = MDocument.fromText(` # Riverdale Heights: Community Development Study // ... text content ... 
`);

const chunks = await doc.chunk({
  strategy: "recursive",
  size: 512,
  overlap: 50,
  separator: "\n",
});
```

## Creating and Storing Embeddings

Generate embeddings for the chunks and store them in the vector database:

```typescript copy showLineNumbers{56} filename="index.ts"
const { embeddings } = await embedMany({
  model: openai.embedding("text-embedding-3-small"),
  values: chunks.map(chunk => chunk.text),
});

const vectorStore = mastra.getVector("pgVector");
await vectorStore.createIndex({
  indexName: "embeddings",
  dimension: 1536,
});
await vectorStore.upsert({
  indexName: "embeddings",
  vectors: embeddings,
  metadata: chunks?.map((chunk: any) => ({ text: chunk.text })),
});
```

## Graph-Based Querying

Try different queries to explore the relationships in the data:

```typescript copy showLineNumbers{82} filename="index.ts"
const queryOne = "What are the direct and indirect effects of early railway decisions on Riverdale Heights' current state?";
const answerOne = await ragAgent.generate(queryOne);
console.log('\nQuery:', queryOne);
console.log('Response:', answerOne.text);

const queryTwo = 'How have changes in transportation infrastructure affected different generations of local businesses and community spaces?';
const answerTwo = await ragAgent.generate(queryTwo);
console.log('\nQuery:', queryTwo);
console.log('Response:', answerTwo.text);

const queryThree = 'Compare how the Rossi family business and Thompson Steel Works responded to major infrastructure changes, and how their responses affected the community.';
const answerThree = await ragAgent.generate(queryThree);
console.log('\nQuery:', queryThree);
console.log('Response:', answerThree.text);

const queryFour = 'Trace how the transformation of the Thompson Steel Works site has influenced surrounding businesses and cultural spaces from 1932 to present.';
const answerFour = await ragAgent.generate(queryFour);
console.log('\nQuery:', queryFour);
console.log('Response:', answerFour.text);
```
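The `threshold` in `graphOptions` controls which pairs of chunks become edges in the knowledge graph. The variants below are illustrative only (0.8 and 0.6 are example values, not recommendations): a higher threshold links only strongly similar chunks, producing a sparser graph, while a lower threshold yields a denser graph that can surface more distant relationships at the cost of noise.

```typescript
// Sparser graph: only very similar chunks are connected
const sparseGraphRagTool = createGraphRAGTool({
  vectorStoreName: "pgVector",
  indexName: "embeddings",
  model: openai.embedding("text-embedding-3-small"),
  graphOptions: {
    dimension: 1536,
    threshold: 0.8,
  },
});

// Denser graph: weaker relationships are also traversable
const denseGraphRagTool = createGraphRAGTool({
  vectorStoreName: "pgVector",
  indexName: "embeddings",
  model: openai.embedding("text-embedding-3-small"),
  graphOptions: {
    dimension: 1536,
    threshold: 0.6,
  },
});
```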




--- title: "例: 音声からテキストへ | 音声 | Mastra ドキュメント" description: Mastraを使用して音声からテキストへの変換アプリケーションを作成する例。 --- import { GithubLink } from '@/components/github-link'; # Smart Voice Memo App [JA] Source: https://mastra.ai/ja/examples/voice/speech-to-text 次のコードスニペットは、Next.jsを使用してMastraを直接統合したスマートボイスメモアプリケーションでの音声認識(STT)機能の実装例を提供します。Next.jsとのMastraの統合に関する詳細は、[Integrate with Next.js](/docs/frameworks/next-js) ドキュメントを参照してください。 ## STT機能を備えたエージェントの作成 次の例は、OpenAIのSTT機能を備えた音声対応エージェントを初期化する方法を示しています: ```typescript filename="src/mastra/agents/index.ts" import { openai } from '@ai-sdk/openai'; import { Agent } from '@mastra/core/agent'; import { OpenAIVoice } from '@mastra/voice-openai'; const instructions = ` You are an AI note assistant tasked with providing concise, structured summaries of their content... // omitted for brevity `; export const noteTakerAgent = new Agent({ name: 'Note Taker Agent', instructions: instructions, model: openai('gpt-4o'), voice: new OpenAIVoice(), // Add OpenAI voice provider with default configuration }); ``` ## Mastraへのエージェントの登録 このスニペットは、STT対応エージェントをMastraインスタンスに登録する方法を示しています: ```typescript filename="src/mastra/index.ts" import { createLogger } from '@mastra/core/logger'; import { Mastra } from '@mastra/core/mastra'; import { noteTakerAgent } from './agents'; export const mastra = new Mastra({ agents: { noteTakerAgent }, // Register the note taker agent logger: createLogger({ name: 'Mastra', level: 'info', }), }); ``` ## 音声の処理と文字起こし 以下のコードは、ウェブリクエストから音声を受け取り、エージェントのSTT機能を使用して文字起こしを行う方法を示しています: ```typescript filename="app/api/audio/route.ts" import { mastra } from '@/src/mastra'; // Import the Mastra instance import { Readable } from 'node:stream'; export async function POST(req: Request) { // Get the audio file from the request const formData = await req.formData(); const audioFile = formData.get('audio') as File; const arrayBuffer = await audioFile.arrayBuffer(); const buffer = Buffer.from(arrayBuffer); const readable = Readable.from(buffer); // Get the note taker agent from the Mastra instance const noteTakerAgent = mastra.getAgent('noteTakerAgent'); // Transcribe the audio file const text = await noteTakerAgent.voice?.listen(readable); return new Response(JSON.stringify({ text }), { headers: { 'Content-Type': 'application/json' }, }); } ``` Smart Voice Memo Appの完全な実装は、私たちのGitHubリポジトリで見ることができます。




--- title: "例: テキスト読み上げ | 音声 | Mastra ドキュメント" description: Mastraを使用してテキスト読み上げアプリケーションを作成する例。 --- import { GithubLink } from '@/components/github-link'; # インタラクティブストーリージェネレーター [JA] Source: https://mastra.ai/ja/examples/voice/text-to-speech 以下のコードスニペットは、Next.jsを使用してインタラクティブストーリージェネレーターアプリケーションでテキスト読み上げ(TTS)機能を実装する例を示しています。Mastraを別のバックエンド統合として使用しています。この例では、Mastra client-js SDKを使用してMastraバックエンドに接続する方法を示しています。Next.jsとのMastraの統合に関する詳細は、[Next.jsとの統合](/docs/frameworks/next-js)ドキュメントを参照してください。 ## TTS機能を備えたエージェントの作成 次の例は、バックエンドでTTS機能を備えたストーリー生成エージェントを設定する方法を示しています: ```typescript filename="src/mastra/agents/index.ts" import { openai } from '@ai-sdk/openai'; import { Agent } from '@mastra/core/agent'; import { OpenAIVoice } from '@mastra/voice-openai'; import { Memory } from '@mastra/memory'; const instructions = ` You are an Interactive Storyteller Agent. Your job is to create engaging short stories with user choices that influence the narrative. // omitted for brevity `; export const storyTellerAgent = new Agent({ name: 'Story Teller Agent', instructions: instructions, model: openai('gpt-4o'), voice: new OpenAIVoice(), }); ``` ## Mastraへのエージェントの登録 このスニペットは、Mastraインスタンスにエージェントを登録する方法を示しています: ```typescript filename="src/mastra/index.ts" import { createLogger } from '@mastra/core/logger'; import { Mastra } from '@mastra/core/mastra'; import { storyTellerAgent } from './agents'; export const mastra = new Mastra({ agents: { storyTellerAgent }, logger: createLogger({ name: 'Mastra', level: 'info', }), }); ``` ## フロントエンドからMastraに接続する ここでは、Mastra Client SDKを使用してMastraサーバーと対話します。Mastra Client SDKの詳細については、[ドキュメント](/docs/deployment/client)を参照してください。 ```typescript filename="src/app/page.tsx" import { MastraClient } from '@mastra/client-js'; export const mastraClient = new MastraClient({ baseUrl: 'http://localhost:4111', // Replace with your Mastra backend URL }); ``` ## ストーリーコンテンツの生成と音声への変換 この例では、Mastraエージェントへの参照を取得し、ユーザー入力に基づいてストーリーコンテンツを生成し、そのコンテンツを音声に変換する方法を示します: ``` typescript filename="/app/components/StoryManager.tsx" const handleInitialSubmit = async (formData: FormData) => { setIsLoading(true); try { const agent = mastraClient.getAgent('storyTellerAgent'); const message = `Current phase: BEGINNING. 
    Story genre: ${formData.genre},
    Protagonist name: ${formData.protagonistDetails.name},
    Protagonist age: ${formData.protagonistDetails.age},
    Protagonist gender: ${formData.protagonistDetails.gender},
    Protagonist occupation: ${formData.protagonistDetails.occupation},
    Story Setting: ${formData.setting}`;

    const storyResponse = await agent.generate({
      messages: [{ role: 'user', content: message }],
      threadId: storyState.threadId,
      resourceId: storyState.resourceId,
    });

    const storyText = storyResponse.text;

    const audioResponse = await agent.voice.speak(storyText);

    if (!audioResponse.body) {
      throw new Error('No audio stream received');
    }

    const audio = await readStream(audioResponse.body);

    setStoryState(prev => ({
      phase: 'beginning',
      threadId: prev.threadId,
      resourceId: prev.resourceId,
      content: {
        ...prev.content,
        beginning: storyText,
      },
    }));

    setAudioBlob(audio);
    return audio;
  } catch (error) {
    console.error('Error generating story beginning:', error);
  } finally {
    setIsLoading(false);
  }
};
```

## Playing the Audio

This snippet shows how to handle text-to-speech audio playback by watching for new audio data. When audio is received, the code creates a browser-playable URL from the audio blob, assigns it to an audio element, and attempts to play it automatically:

```typescript filename="/app/components/StoryManager.tsx"
useEffect(() => {
  if (!audioRef.current || !audioData) return;

  // Store a reference to the HTML audio element
  const currentAudio = audioRef.current;

  // Convert the Blob/File audio data from Mastra into a URL the browser can play
  const url = URL.createObjectURL(audioData);

  const playAudio = async () => {
    try {
      currentAudio.src = url;
      await currentAudio.load();
      await currentAudio.play();
      setIsPlaying(true);
    } catch (error) {
      console.error('Auto-play failed:', error);
    }
  };

  playAudio();

  return () => {
    if (currentAudio) {
      currentAudio.pause();
      currentAudio.src = '';
      URL.revokeObjectURL(url);
    }
  };
}, [audioData]);
```

You can view the complete implementation of the Interactive Story Generator in our GitHub repository.
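The story generation snippet above calls a `readStream` helper that isn't shown. A minimal sketch of what it might look like, assuming `audioResponse.body` is a standard `ReadableStream<Uint8Array>` and the TTS endpoint returns MP3 audio:

```typescript
// Hypothetical helper: drain a ReadableStream into a Blob the <audio> element can play
async function readStream(stream: ReadableStream<Uint8Array>): Promise<Blob> {
  const reader = stream.getReader();
  const chunks: Uint8Array[] = [];

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    if (value) chunks.push(value);
  }

  // Adjust the MIME type to whatever your TTS provider actually returns
  return new Blob(chunks, { type: 'audio/mpeg' });
}
```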




--- title: "例:分岐パス | ワークフロー | Mastra ドキュメント" description: 中間結果に基づいて分岐パスを持つワークフローを作成するためのMastraの使用例。 --- import { GithubLink } from "@/components/github-link"; # 分岐パス [JA] Source: https://mastra.ai/ja/examples/workflows/branching-paths データを処理する際には、中間結果に基づいて異なるアクションを取る必要があることがよくあります。この例では、ワークフローを作成して別々のパスに分岐し、各パスが前のステップの出力に基づいて異なるステップを実行する方法を示します。 ## 制御フローダイアグラム この例では、ワークフローが別々のパスに分岐し、各パスが前のステップの出力に基づいて異なるステップを実行する方法を示します。 こちらが制御フローダイアグラムです: 分岐パスを持つワークフローを示すダイアグラム ## ステップの作成 ステップを作成し、ワークフローを初期化しましょう。 {/* prettier-ignore */} ```ts showLineNumbers copy import { Step, Workflow } from "@mastra/core/workflows"; import { z } from "zod" const stepOne = new Step({ id: "stepOne", execute: async ({ context }) => ({ doubledValue: context.triggerData.inputValue * 2 }) }); const stepTwo = new Step({ id: "stepTwo", execute: async ({ context }) => { const stepOneResult = context.getStepResult<{ doubledValue: number }>("stepOne"); if (!stepOneResult) { return { isDivisibleByFive: false } } return { isDivisibleByFive: stepOneResult.doubledValue % 5 === 0 } } }); const stepThree = new Step({ id: "stepThree", execute: async ({ context }) =>{ const stepOneResult = context.getStepResult<{ doubledValue: number }>("stepOne"); if (!stepOneResult) { return { incrementedValue: 0 } } return { incrementedValue: stepOneResult.doubledValue + 1 } } }); const stepFour = new Step({ id: "stepFour", execute: async ({ context }) => { const stepThreeResult = context.getStepResult<{ incrementedValue: number }>("stepThree"); if (!stepThreeResult) { return { isDivisibleByThree: false } } return { isDivisibleByThree: stepThreeResult.incrementedValue % 3 === 0 } } }); // 両方のブランチに依存する新しいステップ const finalStep = new Step({ id: "finalStep", execute: async ({ context }) => { // getStepResultを使用して両方のブランチから結果を取得 const stepTwoResult = context.getStepResult<{ isDivisibleByFive: boolean }>("stepTwo"); const stepFourResult = context.getStepResult<{ isDivisibleByThree: boolean }>("stepFour"); const isDivisibleByFive = stepTwoResult?.isDivisibleByFive || false; const isDivisibleByThree = stepFourResult?.isDivisibleByThree || false; return { summary: `数値 ${context.triggerData.inputValue} は倍にすると5で${isDivisibleByFive ? '割り切れます' : '割り切れません'}、倍にして1を加えると3で${isDivisibleByThree ? '割り切れます' : '割り切れません'}.`, isDivisibleByFive, isDivisibleByThree } } }); // ワークフローを構築 const myWorkflow = new Workflow({ name: "my-workflow", triggerSchema: z.object({ inputValue: z.number(), }), }); ``` ## 分岐パスとチェーンステップ 次に、分岐パスを使用してワークフローを構成し、複合 `.after([])` 構文を使用してそれらをマージします。 ```ts showLineNumbers copy // 2つの並列ブランチを作成 myWorkflow // 最初のブランチ .step(stepOne) .then(stepTwo) // 2番目のブランチ .after(stepOne) .step(stepThree) .then(stepFour) // 複合 after 構文を使用して両方のブランチをマージ .after([stepTwo, stepFour]) .step(finalStep) .commit(); const { start } = myWorkflow.createRun(); const result = await start({ triggerData: { inputValue: 3 } }); console.log(result.steps.finalStep.output.summary); // 出力: "The number 3 when doubled is not divisible by 5, and when doubled and incremented is divisible by 3." 
```

## Advanced Branching and Merging

You can create more complex workflows with multiple branches and merge points:

```ts showLineNumbers copy
const complexWorkflow = new Workflow({
  name: "complex-workflow",
  triggerSchema: z.object({
    inputValue: z.number(),
  }),
});

// Create multiple branches with different merge points
complexWorkflow
  // Main step
  .step(stepOne)

  // First branch
  .then(stepTwo)

  // Second branch
  .after(stepOne)
  .step(stepThree)
  .then(stepFour)

  // Third branch (another path from stepOne)
  .after(stepOne)
  .step(new Step({
    id: "alternativePath",
    execute: async ({ context }) => {
      const stepOneResult = context.getStepResult<{ doubledValue: number }>("stepOne");
      return {
        result: (stepOneResult?.doubledValue || 0) * 3
      }
    }
  }))

  // Merge first and second branches
  .after([stepTwo, stepFour])
  .step(new Step({
    id: "partialMerge",
    execute: async ({ context }) => {
      const stepTwoResult = context.getStepResult<{ isDivisibleByFive: boolean }>("stepTwo");
      const stepFourResult = context.getStepResult<{ isDivisibleByThree: boolean }>("stepFour");

      return {
        intermediateResult: "Processed first two branches",
        branchResults: {
          branch1: stepTwoResult?.isDivisibleByFive,
          branch2: stepFourResult?.isDivisibleByThree
        }
      }
    }
  }))

  // Final merge of all branches
  .after(["partialMerge", "alternativePath"])
  .step(new Step({
    id: "finalMerge",
    execute: async ({ context }) => {
      const partialMergeResult = context.getStepResult<{
        intermediateResult: string,
        branchResults: { branch1: boolean, branch2: boolean }
      }>("partialMerge");

      const alternativePathResult = context.getStepResult<{ result: number }>("alternativePath");

      return {
        finalResult: "All branches processed",
        combinedData: {
          fromPartialMerge: partialMergeResult?.branchResults,
          fromAlternativePath: alternativePathResult?.result
        }
      }
    }
  }))
  .commit();
```
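After the run completes, each step's result is keyed by its id, so the merge points can be inspected individually. A sketch, assuming the same `result.steps.<id>.output` shape used in the basic example above:

```ts
const { start } = complexWorkflow.createRun();
const result = await start({ triggerData: { inputValue: 4 } });

// The final merge sees data from every branch
if (result.steps.finalMerge?.status === "success") {
  console.log(result.steps.finalMerge.output.combinedData);
}

// Intermediate merge points are available too
console.log(result.steps.partialMerge?.output?.branchResults);
```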




--- title: "例:ワークフローからエージェントを呼び出す | Mastra ドキュメント" description: ワークフローステップ内からAIエージェントを呼び出すためのMastraの使用例。 --- import { GithubLink } from "@/components/github-link"; # ワークフローからエージェントを呼び出す [JA] Source: https://mastra.ai/ja/examples/workflows/calling-agent この例では、メッセージを処理し、応答を生成するAIエージェントを呼び出すワークフローを作成し、ワークフローステップ内で実行する方法を示します。 ```ts showLineNumbers copy import { openai } from "@ai-sdk/openai"; import { Mastra } from "@mastra/core"; import { Agent } from "@mastra/core/agent"; import { Step, Workflow } from "@mastra/core/workflows"; import { z } from "zod"; const penguin = new Agent({ name: "agent skipper", instructions: `You are skipper from penguin of madagascar, reply as that`, model: openai("gpt-4o-mini"), }); const newWorkflow = new Workflow({ name: "pass message to the workflow", triggerSchema: z.object({ message: z.string(), }), }); const replyAsSkipper = new Step({ id: "reply", outputSchema: z.object({ reply: z.string(), }), execute: async ({ context, mastra }) => { const skipper = mastra?.getAgent('penguin'); const res = await skipper?.generate( context?.triggerData?.message, ); return { reply: res?.text || "" }; }, }); newWorkflow.step(replyAsSkipper); newWorkflow.commit(); const mastra = new Mastra({ agents: { penguin }, workflows: { newWorkflow }, }); const { runId, start } = await mastra.getWorkflow("newWorkflow").createRun(); const runResult = await start({ triggerData: { message: "Give me a run down of the mission to save private" }, }); console.log(runResult.results); ```




--- title: "例:条件分岐(実験的) | ワークフロー | Mastra ドキュメント" description: if/else文を使用してワークフローに条件分岐を作成するためのMastraの使用例。 --- import { GithubLink } from '@/components/github-link'; # 条件分岐を伴うワークフロー(実験的) [JA] Source: https://mastra.ai/ja/examples/workflows/conditional-branching ワークフローは、条件に基づいて異なるパスをたどる必要があることがよくあります。この例では、ワークフロー内で条件分岐を作成するために `if` と `else` を使用する方法を示します。 ## 基本的な If/Else の例 この例は、数値に基づいて異なるパスを取るシンプルなワークフローを示しています: ```ts showLineNumbers copy import { Mastra } from '@mastra/core'; import { Step, Workflow } from '@mastra/core/workflows'; import { z } from 'zod'; // 初期値を提供するステップ const startStep = new Step({ id: 'start', outputSchema: z.object({ value: z.number(), }), execute: async ({ context }) => { // トリガーデータから値を取得 const value = context.triggerData.inputValue; return { value }; }, }); // 高い値を処理するステップ const highValueStep = new Step({ id: 'highValue', outputSchema: z.object({ result: z.string(), }), execute: async ({ context }) => { const value = context.getStepResult<{ value: number }>('start')?.value; return { result: `High value processed: ${value}` }; }, }); // 低い値を処理するステップ const lowValueStep = new Step({ id: 'lowValue', outputSchema: z.object({ result: z.string(), }), execute: async ({ context }) => { const value = context.getStepResult<{ value: number }>('start')?.value; return { result: `Low value processed: ${value}` }; }, }); // 結果をまとめる最終ステップ const finalStep = new Step({ id: 'final', outputSchema: z.object({ summary: z.string(), }), execute: async ({ context }) => { // 実行されたどちらかのブランチから結果を取得 const highResult = context.getStepResult<{ result: string }>('highValue')?.result; const lowResult = context.getStepResult<{ result: string }>('lowValue')?.result; const result = highResult || lowResult; return { summary: `Processing complete: ${result}` }; }, }); // 条件分岐を持つワークフローを構築 const conditionalWorkflow = new Workflow({ name: 'conditional-workflow', triggerSchema: z.object({ inputValue: z.number(), }), }); conditionalWorkflow .step(startStep) .if(async ({ context }) => { const value = context.getStepResult<{ value: number }>('start')?.value ?? 0; return value >= 10; // 条件: 値が10以上 }) .then(highValueStep) .then(finalStep) .else() .then(lowValueStep) .then(finalStep) // 両方のブランチが最終ステップで合流 .commit(); // ワークフローを登録 const mastra = new Mastra({ workflows: { conditionalWorkflow }, }); // 使用例 async function runWorkflow(inputValue: number) { const workflow = mastra.getWorkflow('conditionalWorkflow'); const { start } = workflow.createRun(); const result = await start({ triggerData: { inputValue }, }); console.log('Workflow result:', result.results); return result; } // 高い値で実行 ("if" ブランチをたどる) const result1 = await runWorkflow(15); // 低い値で実行 ("else" ブランチをたどる) const result2 = await runWorkflow(5); console.log('Result 1:', result1); console.log('Result 2:', result2); ``` ## 参照ベースの条件の使用 比較演算子を使用して参照ベースの条件を使用することもできます: ```ts showLineNumbers copy // 関数の代わりに参照ベースの条件を使用 conditionalWorkflow .step(startStep) .if({ ref: { step: startStep, path: 'value' }, query: { $gte: 10 }, // 条件: 値が10以上 }) .then(highValueStep) .then(finalStep) .else() .then(lowValueStep) .then(finalStep) .commit(); ```




--- title: "例:ワークフローの作成 | ワークフロー | Mastra ドキュメント" description: Mastraを使用して単一ステップの簡単なワークフローを定義し実行する例。 --- import { GithubLink } from "@/components/github-link"; # シンプルなワークフローの作成 [JA] Source: https://mastra.ai/ja/examples/workflows/creating-a-workflow ワークフローは、構造化されたパスで操作のシーケンスを定義し実行することを可能にします。この例では、単一のステップを持つワークフローを示します。 ```ts showLineNumbers copy import { Step, Workflow } from "@mastra/core/workflows"; import { z } from "zod"; const myWorkflow = new Workflow({ name: "my-workflow", triggerSchema: z.object({ input: z.number(), }), }); const stepOne = new Step({ id: "stepOne", inputSchema: z.object({ value: z.number(), }), outputSchema: z.object({ doubledValue: z.number(), }), execute: async ({ context }) => { const doubledValue = context?.triggerData?.input * 2; return { doubledValue }; }, }); myWorkflow.step(stepOne).commit(); const { runId, start } = myWorkflow.createRun(); const res = await start({ triggerData: { input: 90 }, }); console.log(res.results); ```




--- title: "例:循環依存関係 | ワークフロー | Mastra ドキュメント" description: 循環依存関係と条件付きループを持つワークフローを作成するためのMastraの使用例。 --- import { GithubLink } from "@/components/github-link"; # 循環依存関係を持つワークフロー [JA] Source: https://mastra.ai/ja/examples/workflows/cyclical-dependencies ワークフローは、条件に基づいてステップがループバックできる循環依存関係をサポートしています。以下の例は、条件付きロジックを使用してループを作成し、繰り返し実行を処理する方法を示しています。 ```ts showLineNumbers copy import { Workflow, Step } from '@mastra/core'; import { z } from 'zod'; async function main() { const doubleValue = new Step({ id: 'doubleValue', description: '入力値を2倍にします', inputSchema: z.object({ inputValue: z.number(), }), outputSchema: z.object({ doubledValue: z.number(), }), execute: async ({ context }) => { const doubledValue = context.inputValue * 2; return { doubledValue }; }, }); const incrementByOne = new Step({ id: 'incrementByOne', description: '入力値に1を加えます', outputSchema: z.object({ incrementedValue: z.number(), }), execute: async ({ context }) => { const valueToIncrement = context?.getStepResult<{ firstValue: number }>('trigger')?.firstValue; if (!valueToIncrement) throw new Error('インクリメントする値が提供されていません'); const incrementedValue = valueToIncrement + 1; return { incrementedValue }; }, }); const cyclicalWorkflow = new Workflow({ name: 'cyclical-workflow', triggerSchema: z.object({ firstValue: z.number(), }), }); cyclicalWorkflow .step(doubleValue, { variables: { inputValue: { step: 'trigger', path: 'firstValue', }, }, }) .then(incrementByOne) .after(doubleValue) .step(doubleValue, { variables: { inputValue: { step: doubleValue, path: 'doubledValue', }, }, }) .commit(); const { runId, start } = cyclicalWorkflow.createRun(); console.log('Run', runId); const res = await start({ triggerData: { firstValue: 6 } }); console.log(res.results); } main(); ```




--- title: "例:ヒューマン・イン・ザ・ループ | ワークフロー | Mastra ドキュメント" description: 人間の介入ポイントを含むワークフローを作成するためのMastraの使用例。 --- import { GithubLink } from '@/components/github-link'; # ヒューマン・イン・ザ・ループ ワークフロー [JA] Source: https://mastra.ai/ja/examples/workflows/human-in-the-loop ヒューマン・イン・ザ・ループ ワークフローでは、特定のポイントで実行を一時停止し、ユーザー入力を収集したり、意思決定を行ったり、人間の判断が必要なアクションを実行したりすることができます。この例では、人間の介入ポイントを含むワークフローの作成方法を示します。 ## 仕組み 1. ワークフローステップは、`suspend()` 関数を使用して実行を**一時停止**でき、オプションで人間の意思決定者のためのコンテキストを含むペイロードを渡すことができます。 2. ワークフローが**再開**されると、人間の入力は `resume()` 呼び出しの `context` パラメータに渡されます。 3. この入力は、ステップの `inputSchema` に従って型付けされた `context.inputData` としてステップの実行コンテキストで利用可能になります。 4. その後、ステップは人間の入力に基づいて実行を続行できます。 このパターンにより、自動化されたワークフローにおける安全で型チェックされた人間の介入が可能になります。 ## Inquirerを使用したインタラクティブなターミナルの例 この例では、[Inquirer](https://www.npmjs.com/package/@inquirer/prompts)ライブラリを使用して、ワークフローが一時停止されたときにターミナルから直接ユーザー入力を収集し、真にインタラクティブな人間参加型のエクスペリエンスを作成する方法を示しています。 ```ts showLineNumbers copy import { Mastra } from '@mastra/core'; import { Step, Workflow } from '@mastra/core/workflows'; import { z } from 'zod'; import { confirm, input, select } from '@inquirer/prompts'; // Step 1: Generate product recommendations const generateRecommendations = new Step({ id: 'generateRecommendations', outputSchema: z.object({ customerName: z.string(), recommendations: z.array( z.object({ productId: z.string(), productName: z.string(), price: z.number(), description: z.string(), }), ), }), execute: async ({ context }) => { const customerName = context.triggerData.customerName; // In a real application, you might call an API or ML model here // For this example, we'll return mock data return { customerName, recommendations: [ { productId: 'prod-001', productName: 'Premium Widget', price: 99.99, description: 'Our best-selling premium widget with advanced features', }, { productId: 'prod-002', productName: 'Basic Widget', price: 49.99, description: 'Affordable entry-level widget for beginners', }, { productId: 'prod-003', productName: 'Widget Pro Plus', price: 149.99, description: 'Professional-grade widget with extended warranty', }, ], }; }, }); ``` ```ts showLineNumbers copy // Step 2: Get human approval and customization for the recommendations const reviewRecommendations = new Step({ id: 'reviewRecommendations', inputSchema: z.object({ approvedProducts: z.array(z.string()), customerNote: z.string().optional(), offerDiscount: z.boolean().optional(), }), outputSchema: z.object({ finalRecommendations: z.array( z.object({ productId: z.string(), productName: z.string(), price: z.number(), }), ), customerNote: z.string().optional(), offerDiscount: z.boolean(), }), execute: async ({ context, suspend }) => { const { customerName, recommendations } = context.getStepResult(generateRecommendations) || { customerName: '', recommendations: [], }; // Check if we have input from a resumed workflow const reviewInput = { approvedProducts: context.inputData?.approvedProducts || [], customerNote: context.inputData?.customerNote, offerDiscount: context.inputData?.offerDiscount, }; // If we don't have agent input yet, suspend for human review if (!reviewInput.approvedProducts.length) { console.log(`Generating recommendations for customer: ${customerName}`); await suspend({ customerName, recommendations, message: 'Please review these product recommendations before sending to the customer', }); // Placeholder return (won't be reached due to suspend) return { finalRecommendations: [], customerNote: '', offerDiscount: false, }; } // Process the agent's product selections const 
    const finalRecommendations = recommendations
      .filter(product => reviewInput.approvedProducts.includes(product.productId))
      .map(product => ({
        productId: product.productId,
        productName: product.productName,
        price: product.price,
      }));

    return {
      finalRecommendations,
      customerNote: reviewInput.customerNote || '',
      offerDiscount: reviewInput.offerDiscount || false,
    };
  },
});
```

```ts showLineNumbers copy
// Step 3: Send the recommendations to the customer
const sendRecommendations = new Step({
  id: 'sendRecommendations',
  outputSchema: z.object({
    emailSent: z.boolean(),
    emailContent: z.string(),
  }),
  execute: async ({ context }) => {
    const { customerName } = context.getStepResult(generateRecommendations) || { customerName: '' };
    const { finalRecommendations, customerNote, offerDiscount } = context.getStepResult(reviewRecommendations) || {
      finalRecommendations: [],
      customerNote: '',
      offerDiscount: false,
    };

    // Generate email content based on the recommendations
    let emailContent = `Dear ${customerName},\n\nBased on your preferences, we recommend:\n\n`;

    finalRecommendations.forEach(product => {
      emailContent += `- ${product.productName}: $${product.price.toFixed(2)}\n`;
    });

    if (offerDiscount) {
      emailContent += '\nAs a valued customer, use code SAVE10 for 10% off your next purchase!\n';
    }

    if (customerNote) {
      emailContent += `\nPersonal note: ${customerNote}\n`;
    }

    emailContent += '\nThank you for your business,\nThe Sales Team';

    // In a real application, you would send this email
    console.log('Email content generated:', emailContent);

    return {
      emailSent: true,
      emailContent,
    };
  },
});

// Build the workflow
const recommendationWorkflow = new Workflow({
  name: 'product-recommendation-workflow',
  triggerSchema: z.object({
    customerName: z.string(),
  }),
});

recommendationWorkflow
  .step(generateRecommendations)
  .then(reviewRecommendations)
  .then(sendRecommendations)
  .commit();

// Register the workflow
const mastra = new Mastra({
  workflows: { recommendationWorkflow },
});
```

```ts showLineNumbers copy
// Example of using the workflow with Inquirer prompts
async function runRecommendationWorkflow() {
  const registeredWorkflow = mastra.getWorkflow('recommendationWorkflow');
  const run = registeredWorkflow.createRun();

  console.log('Starting product recommendation workflow...');
  const result = await run.start({
    triggerData: {
      customerName: 'Jane Smith',
    },
  });

  const isReviewStepSuspended = result.activePaths.get('reviewRecommendations')?.status === 'suspended';

  // Check if workflow is suspended for human review
  if (isReviewStepSuspended) {
    const { customerName, recommendations, message } = result.activePaths.get('reviewRecommendations')?.suspendPayload;

    console.log('\n===================================');
    console.log(message);
    console.log(`Customer: ${customerName}`);
    console.log('===================================\n');

    // Use Inquirer to collect input from the sales agent in the terminal
    console.log('Available product recommendations:');
    recommendations.forEach((product, index) => {
      console.log(`${index + 1}. ${product.productName} - $${product.price.toFixed(2)}`);
      console.log(`   ${product.description}\n`);
    });

    // Let the agent select which products to recommend
    const approvedProducts = await checkbox({
      message: 'Select the products to recommend to the customer:',
      choices: recommendations.map(product => ({
        name: `${product.productName} ($${product.price.toFixed(2)})`,
        value: product.productId,
      })),
    });

    // Let the agent add a personal note
    const includeNote = await confirm({
      message: 'Would you like to add a personal note?',
      default: false,
    });

    let customerNote = '';
    if (includeNote) {
      customerNote = await input({
        message: 'Enter a personalized note for the customer:',
      });
    }

    // Ask if a discount should be offered
    const offerDiscount = await confirm({
      message: 'Offer this customer a 10% discount?',
      default: false,
    });

    console.log('\nSubmitting your review...');

    // Resume the workflow with the agent's input
    const resumeResult = await run.resume({
      stepId: 'reviewRecommendations',
      context: {
        approvedProducts,
        customerNote,
        offerDiscount,
      },
    });

    console.log('\n===================================');
    console.log('Workflow complete!');
    console.log('Email content:');
    console.log('===================================\n');

    console.log(resumeResult?.results?.sendRecommendations || 'No email content generated');

    return resumeResult;
  }

  return result;
}

// Invoke the workflow with interactive terminal input
runRecommendationWorkflow().catch(console.error);
```

## Advanced Example with Multiple User Inputs

This example demonstrates a more complex workflow that requires multiple human intervention points, such as a content moderation system.

```ts showLineNumbers copy
import { Mastra } from '@mastra/core';
import { Step, Workflow } from '@mastra/core/workflows';
import { z } from 'zod';
import { select, input } from '@inquirer/prompts';

// Step 1: Receive and analyze content
const analyzeContent = new Step({
  id: 'analyzeContent',
  outputSchema: z.object({
    content: z.string(),
    aiAnalysisScore: z.number(),
    flaggedCategories: z.array(z.string()).optional(),
  }),
  execute: async ({ context }) => {
    const content = context.triggerData.content;

    // Simulate AI analysis
    const aiAnalysisScore = simulateContentAnalysis(content);
    const flaggedCategories = aiAnalysisScore < 0.7 ?
      ['potentially inappropriate', 'needs review'] : [];

    return {
      content,
      aiAnalysisScore,
      flaggedCategories,
    };
  },
});
```

```ts showLineNumbers copy
// Step 2: Moderate content that needs review
const moderateContent = new Step({
  id: 'moderateContent',
  // Define the schema for human input that will be provided when resuming
  inputSchema: z.object({
    moderatorDecision: z.enum(['approve', 'reject', 'modify']).optional(),
    moderatorNotes: z.string().optional(),
    modifiedContent: z.string().optional(),
  }),
  outputSchema: z.object({
    moderationResult: z.enum(['approved', 'rejected', 'modified']),
    moderatedContent: z.string(),
    notes: z.string().optional(),
  }),
  // @ts-ignore
  execute: async ({ context, suspend }) => {
    const analysisResult = context.getStepResult(analyzeContent);
    // Access the input provided when resuming the workflow
    const moderatorInput = {
      decision: context.inputData?.moderatorDecision,
      notes: context.inputData?.moderatorNotes,
      modifiedContent: context.inputData?.modifiedContent,
    };

    // If the AI analysis score is high enough, auto-approve
    if (analysisResult?.aiAnalysisScore > 0.9 && !analysisResult?.flaggedCategories?.length) {
      return {
        moderationResult: 'approved',
        moderatedContent: analysisResult.content,
        notes: 'Auto-approved by system',
      };
    }

    // If we don't have moderator input yet, suspend for human review
    if (!moderatorInput.decision) {
      await suspend({
        content: analysisResult?.content,
        aiScore: analysisResult?.aiAnalysisScore,
        flaggedCategories: analysisResult?.flaggedCategories,
        message: 'Please review this content and make a moderation decision',
      });

      // Placeholder return
      return {
        moderationResult: 'approved',
        moderatedContent: '',
      };
    }

    // Process the moderator's decision
    switch (moderatorInput.decision) {
      case 'approve':
        return {
          moderationResult: 'approved',
          moderatedContent: analysisResult?.content || '',
          notes: moderatorInput.notes || 'Approved by moderator',
        };

      case 'reject':
        return {
          moderationResult: 'rejected',
          moderatedContent: '',
          notes: moderatorInput.notes || 'Rejected by moderator',
        };

      case 'modify':
        return {
          moderationResult: 'modified',
          moderatedContent: moderatorInput.modifiedContent || analysisResult?.content || '',
          notes: moderatorInput.notes || 'Modified by moderator',
        };

      default:
        return {
          moderationResult: 'rejected',
          moderatedContent: '',
          notes: 'Invalid moderator decision',
        };
    }
  },
});
```

```ts showLineNumbers copy
// Step 3: Apply moderation actions
const applyModeration = new Step({
  id: 'applyModeration',
  outputSchema: z.object({
    finalStatus: z.string(),
    content: z.string().optional(),
    auditLog: z.object({
      originalContent: z.string(),
      moderationResult: z.string(),
      aiScore: z.number(),
      timestamp: z.string(),
    }),
  }),
  execute: async ({ context }) => {
    const analysisResult = context.getStepResult(analyzeContent);
    const moderationResult = context.getStepResult(moderateContent);

    // Create audit log
    const auditLog = {
      originalContent: analysisResult?.content || '',
      moderationResult: moderationResult?.moderationResult || 'unknown',
      aiScore: analysisResult?.aiAnalysisScore || 0,
      timestamp: new Date().toISOString(),
    };

    // Apply moderation action
    switch (moderationResult?.moderationResult) {
      case 'approved':
        return {
          finalStatus: 'Content published',
          content: moderationResult.moderatedContent,
          auditLog,
        };

      case 'modified':
        return {
          finalStatus: 'Content modified and published',
          content: moderationResult.moderatedContent,
          auditLog,
        };

      case 'rejected':
        return {
          finalStatus: 'Content rejected',
          auditLog,
        };

      default:
        return {
          finalStatus: 'Error in moderation process',
          auditLog,
        };
    }
  },
});
```

```ts showLineNumbers copy
// Build the workflow
const contentModerationWorkflow = new Workflow({
  name: 'content-moderation-workflow',
  triggerSchema: z.object({
    content: z.string(),
  }),
});

contentModerationWorkflow
  .step(analyzeContent)
  .then(moderateContent)
  .then(applyModeration)
  .commit();

// Register the workflow
const mastra = new Mastra({
  workflows: { contentModerationWorkflow },
});

// Example of using the workflow with Inquirer prompts
async function runModerationDemo() {
  const registeredWorkflow = mastra.getWorkflow('contentModerationWorkflow');
  const run = registeredWorkflow.createRun();

  // Start the workflow with content that needs review
  console.log('Starting content moderation workflow...');
  const result = await run.start({
    triggerData: {
      content: 'This is some user-generated content that requires moderation.'
    }
  });

  const isReviewStepSuspended = result.activePaths.get('moderateContent')?.status === 'suspended';

  // Check if workflow is suspended
  if (isReviewStepSuspended) {
    const { content, aiScore, flaggedCategories, message } = result.activePaths.get('moderateContent')?.suspendPayload;

    console.log('\n===================================');
    console.log(message);
    console.log('===================================\n');

    console.log('Content to review:');
    console.log(content);
    console.log(`\nAI Analysis Score: ${aiScore}`);
    console.log(`Flagged Categories: ${flaggedCategories?.join(', ') || 'None'}\n`);

    // Collect moderator decision using Inquirer
    const moderatorDecision = await select({
      message: 'Select your moderation decision:',
      choices: [
        { name: 'Approve content as is', value: 'approve' },
        { name: 'Reject content completely', value: 'reject' },
        { name: 'Modify content before publishing', value: 'modify' }
      ],
    });

    // Collect additional information based on decision
    let moderatorNotes = '';
    let modifiedContent = '';

    moderatorNotes = await input({
      message: 'Enter any notes about your decision:',
    });

    if (moderatorDecision === 'modify') {
      modifiedContent = await input({
        message: 'Enter the modified content:',
        default: content,
      });
    }

    console.log('\nSubmitting your moderation decision...');

    // Resume the workflow with the moderator's input
    const resumeResult = await run.resume({
      stepId: 'moderateContent',
      context: {
        moderatorDecision,
        moderatorNotes,
        modifiedContent,
      },
    });

    if (resumeResult?.results?.applyModeration?.status === 'success') {
      console.log('\n===================================');
      console.log(`Moderation complete: ${resumeResult?.results?.applyModeration?.output.finalStatus}`);
      console.log('===================================\n');

      if (resumeResult?.results?.applyModeration?.output.content) {
        console.log('Published content:');
        console.log(resumeResult.results.applyModeration.output.content);
      }
    }

    return resumeResult;
  }

  console.log('Workflow completed without requiring human intervention:', result.results);
  return result;
}

// Helper function for AI content analysis simulation
function simulateContentAnalysis(content: string): number {
  // In a real application, this would call an AI service
  // For the example, we're returning a random score
  return Math.random();
}

// Invoke the demo function
runModerationDemo().catch(console.error);
```

## Key Concepts

1. **Suspension points** - Use the `suspend()` function within a step's execute function to pause workflow execution.

2. **Suspension payload** - Pass relevant data when suspending to provide context for the human decision:

```ts
await suspend({
  messageForHuman: 'Please review this data',
  data: someImportantData
});
```
3. **Checking workflow status** - After starting a workflow, check the returned status to see whether it is suspended:

```ts
const result = await workflow.start({ triggerData });
if (result.status === 'suspended' && result.suspendedStepId === 'stepId') {
  // Handle the suspension
  console.log('Workflow is waiting for input:', result.suspendPayload);
}
```

4. **Interactive terminal input** - Use a library like Inquirer to create interactive prompts:

```ts
import { select, input, confirm } from '@inquirer/prompts';

// When the workflow is suspended
if (result.status === 'suspended') {
  // Display information from the suspend payload
  console.log(result.suspendPayload.message);

  // Collect user input interactively
  const decision = await select({
    message: 'What would you like to do?',
    choices: [
      { name: 'Approve', value: 'approve' },
      { name: 'Reject', value: 'reject' }
    ]
  });

  // Resume the workflow with the collected input
  await run.resume({
    stepId: result.suspendedStepId,
    context: { decision }
  });
}
```

5. **Resuming a workflow** - Use the `resume()` method to continue workflow execution with human input:

```ts
const resumeResult = await run.resume({
  stepId: 'suspendedStepId',
  context: {
    // This data is passed to the suspended step as context.inputData
    // and must conform to the step's inputSchema
    userDecision: 'approve'
  },
});
```

6. **Input schemas for human data** - Define an input schema on steps that may be resumed with human input to ensure type safety:

```ts
const myStep = new Step({
  id: 'myStep',
  inputSchema: z.object({
    // This schema validates the data passed in resume's context
    // and makes it available as context.inputData
    userDecision: z.enum(['approve', 'reject']),
    userComments: z.string().optional(),
  }),
  execute: async ({ context, suspend }) => {
    // Check whether we have user input from a previous suspension
    if (context.inputData?.userDecision) {
      // Process the user's decision
      return { result: `User decision: ${context.inputData.userDecision}` };
    }

    // If there's no input, suspend for human judgment
    await suspend();
  }
});
```

---
title: "Example: Parallel Execution | Workflows | Mastra Docs"
description: Example of using Mastra to execute multiple independent tasks in parallel within a workflow.
---

import { GithubLink } from "@/components/github-link";

# Parallel Execution with Steps

[JA] Source: https://mastra.ai/ja/examples/workflows/parallel-steps

When building AI applications, you often need to process multiple independent tasks simultaneously to improve efficiency.

## Control Flow Diagram

This example shows the structure of a workflow that runs steps in parallel, with each branch handling its own data flow and dependencies.

Here's the control flow diagram:

Diagram showing a workflow with parallel steps

## Creating the Steps

Let's create the steps and initialize the workflow.

```ts showLineNumbers copy
import { Step, Workflow } from "@mastra/core/workflows";
import { z } from "zod";

const stepOne = new Step({
  id: "stepOne",
  execute: async ({ context }) => ({
    doubledValue: context.triggerData.inputValue * 2,
  }),
});

const stepTwo = new Step({
  id: "stepTwo",
  execute: async ({ context }) => {
    if (context.steps.stepOne.status !== "success") {
      return { incrementedValue: 0 }
    }

    return { incrementedValue: context.steps.stepOne.output.doubledValue + 1 }
  },
});

const stepThree = new Step({
  id: "stepThree",
  execute: async ({ context }) => ({
    tripledValue: context.triggerData.inputValue * 3,
  }),
});

const stepFour = new Step({
  id: "stepFour",
  execute: async ({ context }) => {
    if (context.steps.stepThree.status !== "success") {
      return { isEven: false }
    }

    return { isEven: context.steps.stepThree.output.tripledValue % 2 === 0 }
  },
});

const myWorkflow = new Workflow({
  name: "my-workflow",
  triggerSchema: z.object({
    inputValue: z.number(),
  }),
});
```

## Chaining and Parallelizing Steps

Now we can add the steps to the workflow. Note that the `.then()` method chains steps within a branch, while the `.step()` method starts a new branch in the workflow.

```ts showLineNumbers copy
myWorkflow
  .step(stepOne)
  .then(stepTwo) // chain one
  .step(stepThree)
  .then(stepFour) // chain two
  .commit();

const { start } = myWorkflow.createRun();

const result = await start({ triggerData: { inputValue: 3 } });
```
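Because the two chains run independently, the results of both appear on the run result once it completes. A sketch, assuming the `result.steps.<id>` shape used in the branching example:

```ts
// Read the outputs of both parallel chains
if (result.steps.stepTwo?.status === "success") {
  console.log("Chain one:", result.steps.stepTwo.output.incrementedValue);
}

if (result.steps.stepFour?.status === "success") {
  console.log("Chain two:", result.steps.stepFour.output.isEven);
}
```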




--- title: "例:順次ステップ | ワークフロー | Mastra ドキュメント" description: データを受け渡しながら、特定の順序でワークフローステップを連鎖させるMastraの使用例。 --- import { GithubLink } from "@/components/github-link"; # 順次ステップによるワークフロー [JA] Source: https://mastra.ai/ja/examples/workflows/sequential-steps ワークフローは、特定の順序で次々に実行するように連鎖させることができます。 ## 制御フローダイアグラム この例では、`then` メソッドを使用してワークフローステップを連鎖させ、順次ステップ間でデータを渡し、それらを順番に実行する方法を示しています。 こちらが制御フローダイアグラムです: 順次ステップを持つワークフローを示すダイアグラム ## ステップの作成 ステップを作成し、ワークフローを初期化しましょう。 ```ts showLineNumbers copy import { Step, Workflow } from "@mastra/core/workflows"; import { z } from "zod"; const stepOne = new Step({ id: "stepOne", execute: async ({ context }) => ({ doubledValue: context.triggerData.inputValue * 2, }), }); const stepTwo = new Step({ id: "stepTwo", execute: async ({ context }) => { if (context.steps.stepOne.status !== "success") { return { incrementedValue: 0 } } return { incrementedValue: context.steps.stepOne.output.doubledValue + 1 } }, }); const stepThree = new Step({ id: "stepThree", execute: async ({ context }) => { if (context.steps.stepTwo.status !== "success") { return { tripledValue: 0 } } return { tripledValue: context.steps.stepTwo.output.incrementedValue * 3 } }, }); // Build the workflow const myWorkflow = new Workflow({ name: "my-workflow", triggerSchema: z.object({ inputValue: z.number(), }), }); ``` ## ステップを連鎖してワークフローを実行する では、ステップを連鎖させましょう。 ```ts showLineNumbers copy // sequential steps myWorkflow.step(stepOne).then(stepTwo).then(stepThree); myWorkflow.commit(); const { start } = myWorkflow.createRun(); const res = await start({ triggerData: { inputValue: 90 } }); ```




--- title: "例:一時停止と再開 | ワークフロー | Mastra ドキュメント" description: Mastraを使用して実行中にワークフローステップを一時停止および再開する例。 --- import { GithubLink } from '@/components/github-link'; # サスペンドと再開を使用したワークフロー [JA] Source: https://mastra.ai/ja/examples/workflows/suspend-and-resume ワークフローのステップは、ワークフローの実行中の任意のポイントでサスペンドおよび再開することができます。この例では、ワークフローのステップをサスペンドし、後で再開する方法を示します。 ## 基本的な例 ```ts showLineNumbers copy import { Mastra } from '@mastra/core'; import { Step, Workflow } from '@mastra/core/workflows'; import { z } from 'zod'; const stepOne = new Step({ id: 'stepOne', outputSchema: z.object({ doubledValue: z.number(), }), execute: async ({ context }) => { const doubledValue = context.triggerData.inputValue * 2; return { doubledValue }; }, }); ``` ```ts showLineNumbers copy const stepTwo = new Step({ id: 'stepTwo', outputSchema: z.object({ incrementedValue: z.number(), }), execute: async ({ context, suspend }) => { const secondValue = context.inputData?.secondValue ?? 0; const doubledValue = context.getStepResult(stepOne)?.doubledValue ?? 0; const incrementedValue = doubledValue + secondValue; if (incrementedValue < 100) { await suspend(); return { incrementedValue: 0 }; } return { incrementedValue }; }, }); // Build the workflow const myWorkflow = new Workflow({ name: 'my-workflow', triggerSchema: z.object({ inputValue: z.number(), }), }); // run workflows in parallel myWorkflow .step(stepOne) .then(stepTwo) .commit(); ``` ```ts showLineNumbers copy // Register the workflow export const mastra = new Mastra({ workflows: { registeredWorkflow: myWorkflow }, }) // Get registered workflow from Mastra const registeredWorkflow = mastra.getWorkflow('registeredWorkflow'); const { runId, start } = registeredWorkflow.createRun(); // Start watching the workflow before executing it myWorkflow.watch(async ({ context, activePaths }) => { for (const _path of activePaths) { const stepTwoStatus = context.steps?.stepTwo?.status; if (stepTwoStatus === 'suspended') { console.log("Workflow suspended, resuming with new value"); // Resume the workflow with new context await myWorkflow.resume({ runId, stepId: 'stepTwo', context: { secondValue: 100 }, }); } } }) // Start the workflow execution await start({ triggerData: { inputValue: 45 } }); ``` ## async/awaitパターンとサスペンドペイロードを使用した複数の中断ポイントを持つ高度な例 この例では、async/awaitパターンを使用した複数の中断ポイントを持つより複雑なワークフローを示しています。異なる段階で人間の介入を必要とするコンテンツ生成ワークフローをシミュレートしています。 ```ts showLineNumbers copy import { Mastra } from '@mastra/core'; import { Step, Workflow } from '@mastra/core/workflows'; import { z } from 'zod'; // Step 1: Get user input const getUserInput = new Step({ id: 'getUserInput', execute: async ({ context }) => { // In a real application, this might come from a form or API return { userInput: context.triggerData.input }; }, outputSchema: z.object({ userInput: z.string() }), }); ``` ```ts showLineNumbers copy // Step 2: Generate content with AI (may suspend for human guidance) const promptAgent = new Step({ id: 'promptAgent', inputSchema: z.object({ guidance: z.string(), }), execute: async ({ context, suspend }) => { const userInput = context.getStepResult(getUserInput)?.userInput; console.log(`Generating content based on: ${userInput}`); const guidance = context.inputData?.guidance; // Simulate AI generating content const initialDraft = generateInitialDraft(userInput); // If confidence is high, return the generated content directly if (initialDraft.confidenceScore > 0.7) { return { modelOutput: initialDraft.content }; } console.log('Low confidence in generated content, suspending for human guidance', 
      { guidance });

    // If confidence is low, suspend for human guidance
    if (!guidance) {
      // only suspend if no guidance is provided
      await suspend();
      return undefined;
    }

    // This code runs after resume with human guidance
    console.log('Resumed with human guidance');

    // Use the human guidance to improve the output
    return {
      modelOutput: enhanceWithGuidance(initialDraft.content, guidance),
    };
  },
  outputSchema: z.object({ modelOutput: z.string() }).optional(),
});
```

```ts showLineNumbers copy
// Step 3: Evaluate the content quality
const evaluateTone = new Step({
  id: 'evaluateToneConsistency',
  execute: async ({ context }) => {
    const content = context.getStepResult(promptAgent)?.modelOutput;

    // Simulate evaluation
    return {
      toneScore: { score: calculateToneScore(content) },
      completenessScore: { score: calculateCompletenessScore(content) },
    };
  },
  outputSchema: z.object({
    toneScore: z.any(),
    completenessScore: z.any(),
  }),
});
```

```ts showLineNumbers copy
// Step 4: Improve response if needed (may suspend)
const improveResponse = new Step({
  id: 'improveResponse',
  inputSchema: z.object({
    improvedContent: z.string(),
    resumeAttempts: z.number(),
  }),
  execute: async ({ context, suspend }) => {
    const content = context.getStepResult(promptAgent)?.modelOutput;
    const toneScore = context.getStepResult(evaluateTone)?.toneScore.score ?? 0;
    const completenessScore = context.getStepResult(evaluateTone)?.completenessScore.score ?? 0;
    const improvedContent = context.inputData.improvedContent;
    const resumeAttempts = context.inputData.resumeAttempts ?? 0;

    // If scores are above threshold, make minor improvements
    if (toneScore > 0.8 && completenessScore > 0.8) {
      return { improvedOutput: makeMinorImprovements(content) };
    }

    console.log('Content quality below threshold, suspending for human intervention', { improvedContent, resumeAttempts });

    if (!improvedContent) {
      // Suspend with payload containing content and resume attempts
      await suspend({
        content,
        scores: { tone: toneScore, completeness: completenessScore },
        needsImprovement: toneScore < 0.8 ? 'tone' : 'completeness',
        resumeAttempts: resumeAttempts + 1,
      });
      return { improvedOutput: content ?? '' };
    }

    console.log('Resumed with human improvements', improvedContent);
    return { improvedOutput: improvedContent ?? content ?? '' };
  },
  outputSchema: z.object({ improvedOutput: z.string() }).optional(),
});
```
```ts showLineNumbers copy
// Step 5: Final evaluation
const evaluateImproved = new Step({
  id: 'evaluateImprovedResponse',
  execute: async ({ context }) => {
    const improvedContent = context.getStepResult(improveResponse)?.improvedOutput;

    // Simulate final evaluation
    return {
      toneScore: { score: calculateToneScore(improvedContent) },
      completenessScore: { score: calculateCompletenessScore(improvedContent) },
    };
  },
  outputSchema: z.object({
    toneScore: z.any(),
    completenessScore: z.any(),
  }),
});

// Build the workflow
const contentWorkflow = new Workflow({
  name: 'content-generation-workflow',
  triggerSchema: z.object({ input: z.string() }),
});

contentWorkflow
  .step(getUserInput)
  .then(promptAgent)
  .then(evaluateTone)
  .then(improveResponse)
  .then(evaluateImproved)
  .commit();
```

```ts showLineNumbers copy
// Register the workflow
const mastra = new Mastra({
  workflows: { contentWorkflow },
});

// Helper functions (simulated)
function generateInitialDraft(input: string = '') {
  // Simulate AI generating content
  return {
    content: `Generated content based on: ${input}`,
    confidenceScore: 0.6, // Simulate low confidence to trigger suspension
  };
}

function enhanceWithGuidance(content: string = '', guidance: string = '') {
  return `${content} (Enhanced with guidance: ${guidance})`;
}

function makeMinorImprovements(content: string = '') {
  return `${content} (with minor improvements)`;
}

function calculateToneScore(_: string = '') {
  return 0.7; // Simulate a score that will trigger suspension
}

function calculateCompletenessScore(_: string = '') {
  return 0.9;
}

// Usage example
async function runWorkflow() {
  const workflow = mastra.getWorkflow('contentWorkflow');
  const { runId, start } = workflow.createRun();
  let finalResult: any;

  // Start the workflow
  const initialResult = await start({
    triggerData: { input: 'Create content about sustainable energy' },
  });

  console.log('Initial workflow state:', initialResult.results);

  const promptAgentStepResult = initialResult.activePaths.get('promptAgent');

  // Check if promptAgent step is suspended
  if (promptAgentStepResult?.status === 'suspended') {
    console.log('Workflow suspended at promptAgent step');
    console.log('Suspension payload:', promptAgentStepResult?.suspendPayload);

    // Resume with human guidance
    const resumeResult1 = await workflow.resume({
      runId,
      stepId: 'promptAgent',
      context: {
        guidance: 'Focus more on solar and wind energy technologies',
      },
    });

    console.log('Workflow resumed and continued to next steps');

    let improveResponseResumeAttempts = 0;
    let improveResponseStatus = resumeResult1?.activePaths.get('improveResponse')?.status;

    // Check if improveResponse step is suspended
    while (improveResponseStatus === 'suspended') {
      console.log('Workflow suspended at improveResponse step');
      console.log('Suspension payload:', resumeResult1?.activePaths.get('improveResponse')?.suspendPayload);

      const improvedContent = improveResponseResumeAttempts < 3
        ? undefined
        : 'Completely revised content about sustainable energy focusing on solar and wind technologies';

      // Resume with human improvements
      finalResult = await workflow.resume({
        runId,
        stepId: 'improveResponse',
        context: {
          improvedContent,
          resumeAttempts: improveResponseResumeAttempts,
        },
      });

      improveResponseResumeAttempts = finalResult?.activePaths.get('improveResponse')?.suspendPayload?.resumeAttempts ??
0; improveResponseStatus = finalResult?.activePaths.get('improveResponse')?.status; console.log('Improved response result:', finalResult?.results); } } return finalResult; } // Run the workflow const result = await runWorkflow(); console.log('Workflow completed'); console.log('Final workflow result:', result); ```
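The usage example above resumes each suspended step by ID inside its own loop. When several steps can suspend, the same pattern can be factored into a small helper. Below is a minimal sketch against the same API the example uses (an `activePaths` map of entries with a `status` and optional `suspendPayload`, plus `workflow.resume()`); the helper name, types, and signature are illustrative, not part of Mastra's API:

```ts showLineNumbers copy
type RunState = {
  activePaths: Map<string, { status: string; suspendPayload?: any }>;
  results: Record<string, any>;
};

// Resume every currently-suspended step, deriving the resume context
// for each step from a caller-provided callback.
async function resumeAllSuspended(
  workflow: {
    resume: (args: { runId: string; stepId: string; context: any }) => Promise<RunState>;
  },
  runId: string,
  state: RunState,
  contextFor: (stepId: string, suspendPayload?: any) => any,
): Promise<RunState> {
  let latest = state;
  for (const [stepId, path] of state.activePaths) {
    if (path.status !== 'suspended') continue;
    latest = await workflow.resume({
      runId,
      stepId,
      context: contextFor(stepId, path.suspendPayload),
    });
  }
  return latest;
}
```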




--- title: "例:ツールをステップとして使用する | ワークフロー | Mastra ドキュメント" description: カスタムツールをワークフローのステップとして統合するためにMastraを使用する例。 --- import { GithubLink } from '@/components/github-link'; # ワークフローステップとしてのツール [JA] Source: https://mastra.ai/ja/examples/workflows/using-a-tool-as-a-step この例では、カスタムツールをワークフローステップとして作成し統合する方法を示し、入力/出力スキーマを定義し、ツールの実行ロジックを実装する方法を示します。 ```ts showLineNumbers copy import { createTool } from '@mastra/core/tools'; import { Workflow } from '@mastra/core/workflows'; import { z } from 'zod'; const crawlWebpage = createTool({ id: 'Crawl Webpage', description: 'Crawls a webpage and extracts the text content', inputSchema: z.object({ url: z.string().url(), }), outputSchema: z.object({ rawText: z.string(), }), execute: async ({ context }) => { const response = await fetch(context.triggerData.url); const text = await response.text(); return { rawText: 'This is the text content of the webpage: ' + text }; }, }); const contentWorkflow = new Workflow({ name: 'content-review' }); contentWorkflow.step(crawlWebpage).commit(); const { start } = contentWorkflow.createRun(); const res = await start({ triggerData: { url: 'https://example.com'} }); console.log(res.results); ```




--- title: "ワークフロー変数を使用したデータマッピング | Mastraの例" description: "Mastraワークフローでステップ間のデータをマッピングするためにワークフロー変数を使用する方法を学びます。" --- # ワークフロー変数を使用したデータマッピング [JA] Source: https://mastra.ai/ja/examples/workflows/workflow-variables この例では、Mastra ワークフロー内のステップ間でデータをマッピングするためにワークフロー変数を使用する方法を示します。 ## ユースケース: ユーザー登録プロセス この例では、シンプルなユーザー登録ワークフローを構築します。これには以下が含まれます: 1. ユーザー入力の検証 1. ユーザーデータのフォーマット 1. ユーザープロファイルの作成 ## 実装 ```typescript showLineNumbers filename="src/mastra/workflows/user-registration.ts" copy import { Step, Workflow } from "@mastra/core/workflows"; import { z } from "zod"; // Define our schemas for better type safety const userInputSchema = z.object({ email: z.string().email(), name: z.string(), age: z.number().min(18), }); const validatedDataSchema = z.object({ isValid: z.boolean(), validatedData: z.object({ email: z.string(), name: z.string(), age: z.number(), }), }); const formattedDataSchema = z.object({ userId: z.string(), formattedData: z.object({ email: z.string(), displayName: z.string(), ageGroup: z.string(), }), }); const profileSchema = z.object({ profile: z.object({ id: z.string(), email: z.string(), displayName: z.string(), ageGroup: z.string(), createdAt: z.string(), }), }); // Define the workflow const registrationWorkflow = new Workflow({ name: "user-registration", triggerSchema: userInputSchema, }); // Step 1: Validate user input const validateInput = new Step({ id: "validateInput", inputSchema: userInputSchema, outputSchema: validatedDataSchema, execute: async ({ context }) => { const { email, name, age } = context; // Simple validation logic const isValid = email.includes('@') && name.length > 0 && age >= 18; return { isValid, validatedData: { email: email.toLowerCase().trim(), name, age, }, }; }, }); // Step 2: Format user data const formatUserData = new Step({ id: "formatUserData", inputSchema: z.object({ validatedData: z.object({ email: z.string(), name: z.string(), age: z.number(), }), }), outputSchema: formattedDataSchema, execute: async ({ context }) => { const { validatedData } = context; // Generate a simple user ID const userId = `user_${Math.floor(Math.random() * 10000)}`; // Format the data const ageGroup = validatedData.age < 30 ? 
"young-adult" : "adult"; return { userId, formattedData: { email: validatedData.email, displayName: validatedData.name, ageGroup, }, }; }, }); // Step 3: Create user profile const createUserProfile = new Step({ id: "createUserProfile", inputSchema: z.object({ userId: z.string(), formattedData: z.object({ email: z.string(), displayName: z.string(), ageGroup: z.string(), }), }), outputSchema: profileSchema, execute: async ({ context }) => { const { userId, formattedData } = context; // In a real app, you would save to a database here return { profile: { id: userId, ...formattedData, createdAt: new Date().toISOString(), }, }; }, }); // Build the workflow with variable mappings registrationWorkflow // First step gets data from the trigger .step(validateInput, { variables: { email: { step: 'trigger', path: 'email' }, name: { step: 'trigger', path: 'name' }, age: { step: 'trigger', path: 'age' }, } }) // Format user data with validated data from previous step .then(formatUserData, { variables: { validatedData: { step: validateInput, path: 'validatedData' }, }, when: { ref: { step: validateInput, path: 'isValid' }, query: { $eq: true }, }, }) // Create profile with data from the format step .then(createUserProfile, { variables: { userId: { step: formatUserData, path: 'userId' }, formattedData: { step: formatUserData, path: 'formattedData' }, }, }) .commit(); export default registrationWorkflow; ``` ## この例の使用方法 1. 上記のようにファイルを作成します 2. Mastra インスタンスにワークフローを登録します 3. ワークフローを実行します: ```bash curl --location 'http://localhost:4111/api/workflows/user-registration/start-async' \ --header 'Content-Type: application/json' \ --data '{ "email": "user@example.com", "name": "John Doe", "age": 25 }' ``` ## 重要なポイント この例は、ワークフロー変数に関するいくつかの重要な概念を示しています: 1. **データマッピング**: 変数はデータをあるステップから別のステップへマッピングし、明確なデータフローを作成します。 2. **パスアクセス**: `path` プロパティは、ステップの出力のどの部分を使用するかを指定します。 3. **条件付き実行**: `when` プロパティは、前のステップの出力に基づいてステップを条件付きで実行できるようにします。 4. **型の安全性**: 各ステップは、型の安全性を確保するために入力および出力スキーマを定義し、ステップ間で渡されるデータが適切に型付けされていることを保証します。 5. **明示的なデータ依存性**: 入力スキーマを定義し、変数マッピングを使用することで、ステップ間のデータ依存性が明示的かつ明確になります。 ワークフロー変数に関する詳細は、[ワークフロー変数のドキュメント](../../docs/workflows/variables.mdx)を参照してください。 --- title: "AI採用担当者の構築 | Mastraワークフロー | ガイド" description: LLMを使用して候補者情報を収集・処理するMastraでの採用担当者ワークフローの構築ガイド。 --- # はじめに [JA] Source: https://mastra.ai/ja/guides/guide/ai-recruiter このガイドでは、MastraがLLMを使用したワークフローの構築をどのように支援するかについて学びます。 候補者の履歴書から情報を収集し、候補者のプロフィールに基づいて技術的な質問または行動に関する質問のいずれかに分岐するワークフローの作成について説明します。その過程で、ワークフローのステップの構造化、分岐の処理、LLM呼び出しの統合方法を確認できます。 以下はワークフローの簡潔なバージョンです。必要なモジュールをインポートし、Mastraをセットアップし、候補者データを抽出して分類するステップを定義し、適切なフォローアップ質問をします。各コードブロックの後には、その機能と有用性についての簡単な説明が続きます。 ## 1. インポートとセットアップ ワークフローの定義とデータ検証を処理するために、MastraツールとZodをインポートする必要があります。 ```ts filename="src/mastra/index.ts" copy import { Mastra } from "@mastra/core"; import { Step, Workflow } from "@mastra/core/workflows"; import { z } from "zod"; ``` `.env`ファイルに`OPENAI_API_KEY`を追加してください。 ```bash filename=".env" copy OPENAI_API_KEY= ``` ## 2. 
ステップ1:候補者情報の収集 履歴書のテキストから候補者の詳細を抽出し、技術系または非技術系として分類したいと思います。このステップではLLMを呼び出して履歴書を解析し、名前、技術的ステータス、専門分野、および元の履歴書テキストを含む構造化されたJSONを返します。コードはトリガーデータからresumeTextを読み取り、LLMにプロンプトを送信し、後続のステップで使用するために整理されたフィールドを返します。 ```ts filename="src/mastra/index.ts" copy import { Agent } from '@mastra/core/agent'; import { openai } from "@ai-sdk/openai"; const recruiter = new Agent({ name: "Recruiter Agent", instructions: `You are a recruiter.`, model: openai("gpt-4o-mini"), }) const gatherCandidateInfo = new Step({ id: "gatherCandidateInfo", inputSchema: z.object({ resumeText: z.string(), }), outputSchema: z.object({ candidateName: z.string(), isTechnical: z.boolean(), specialty: z.string(), resumeText: z.string(), }), execute: async ({ context }) => { const resumeText = context?.getStepResult<{ resumeText: string; }>("trigger")?.resumeText; const prompt = ` Extract details from the resume text: "${resumeText}" `; const res = await recruiter.generate(prompt, { output: z.object({ candidateName: z.string(), isTechnical: z.boolean(), specialty: z.string(), resumeText: z.string(), }), }); return res.object; }, }); ``` ## 3. 技術的質問ステップ このステップでは、技術者として特定された候補者に対して、彼らがどのように専門分野に入ったかについての詳細情報を求めます。LLMが関連性のあるフォローアップ質問を作成できるように、履歴書の全文を使用します。このコードは候補者の専門分野に関する質問を生成します。 ```ts filename="src/mastra/index.ts" copy interface CandidateInfo { candidateName: string; isTechnical: boolean; specialty: string; resumeText: string; } const askAboutSpecialty = new Step({ id: "askAboutSpecialty", outputSchema: z.object({ question: z.string(), }), execute: async ({ context }) => { const candidateInfo = context?.getStepResult( "gatherCandidateInfo", ); const prompt = ` You are a recruiter. Given the resume below, craft a short question for ${candidateInfo?.candidateName} about how they got into "${candidateInfo?.specialty}". Resume: ${candidateInfo?.resumeText} `; const res = await recruiter.generate(prompt); return { question: res?.text?.trim() || "" }; }, }); ``` ## 4. 行動質問ステップ 候補者が非技術系の場合、異なるフォローアップ質問が必要です。このステップでは、彼らの完全な履歴書のテキストを再度参照しながら、役職について最も興味を持っていることを尋ねます。このコードはLLMから役職に焦点を当てたクエリを要求します。 ```ts filename="src/mastra/index.ts" copy const askAboutRole = new Step({ id: "askAboutRole", outputSchema: z.object({ question: z.string(), }), execute: async ({ context }) => { const candidateInfo = context?.getStepResult( "gatherCandidateInfo", ); const prompt = ` You are a recruiter. Given the resume below, craft a short question for ${candidateInfo?.candidateName} asking what interests them most about this role. Resume: ${candidateInfo?.resumeText} `; const res = await recruiter.generate(prompt); return { question: res?.text?.trim() || "" }; }, }); ``` ## 5. ワークフローの定義 これで、候補者の技術的ステータスに基づいて分岐ロジックを実装するためのステップを組み合わせます。ワークフローはまず候補者データを収集し、次にisTechnicalの値に応じて、専門分野または役割について質問します。このコードはgatherCandidateInfoとaskAboutSpecialtyおよびaskAboutRoleを連鎖させ、ワークフローをコミットします。 ```ts filename="src/mastra/index.ts" copy const candidateWorkflow = new Workflow({ name: "candidate-workflow", triggerSchema: z.object({ resumeText: z.string(), }), }); candidateWorkflow .step(gatherCandidateInfo) .then(askAboutSpecialty, { when: { "gatherCandidateInfo.isTechnical": true }, }) .after(gatherCandidateInfo) .step(askAboutRole, { when: { "gatherCandidateInfo.isTechnical": false }, }); candidateWorkflow.commit(); ``` ## 6. 
Run the Workflow

```ts filename="src/mastra/index.ts" copy
const mastra = new Mastra({
  workflows: {
    candidateWorkflow,
  },
});

(async () => {
  const { runId, start } = mastra.getWorkflow("candidateWorkflow").createRun();
  console.log("Run", runId);

  const runResult = await start({
    triggerData: { resumeText: "Simulated resume content..." },
  });

  console.log("Final output:", runResult.results);
})();
```

You have now built a workflow that parses a résumé and decides which question to ask based on the candidate's technical background. Congratulations, and happy hacking!

---
title: "Building an AI Chef Assistant | Mastra Agent Guides"
description: Guide to creating a chef assistant agent in Mastra that helps users cook with the ingredients they have on hand.
---

import { Steps } from "nextra/components";
import YouTube from "@/components/youtube";

# Agent Guide: Building a Chef Assistant

[JA] Source: https://mastra.ai/ja/guides/guide/chef-michel

This guide walks you through creating a "Chef Assistant" agent that helps users cook with whatever ingredients they have on hand.

## Prerequisites

- Node.js installed
- Mastra installed: `npm install @mastra/core`

---

## Create the Agent

### Define the Agent

Create a new file `src/mastra/agents/chefAgent.ts` and define the agent:

```ts copy filename="src/mastra/agents/chefAgent.ts"
import { openai } from "@ai-sdk/openai";
import { Agent } from "@mastra/core/agent";

export const chefAgent = new Agent({
  name: "chef-agent",
  instructions:
    "You are Michel, a practical and experienced home chef. " +
    "You help people cook with whatever ingredients they have available.",
  model: openai("gpt-4o-mini"),
});
```

---

## Set Up Environment Variables

Create a `.env` file in your project root and add your OpenAI API key:

```bash filename=".env" copy
OPENAI_API_KEY=your_openai_api_key
```

---

## Register the Agent with Mastra

In your main file, register the agent:

```ts copy filename="src/mastra/index.ts"
import { Mastra } from "@mastra/core";
import { chefAgent } from "./agents/chefAgent";

export const mastra = new Mastra({
  agents: { chefAgent },
});
```

---

## Interacting with the Agent

### Generating Text Responses

```ts copy filename="src/index.ts"
import { chefAgent } from "./mastra/agents/chefAgent";

async function main() {
  const query =
    "In my kitchen I have: pasta, canned tomatoes, garlic, olive oil, and some dried herbs (basil and oregano). What can I make?";
  console.log(`Query: ${query}`);

  const response = await chefAgent.generate([{ role: "user", content: query }]);

  console.log("\n👨‍🍳 Chef Michel:", response.text);
}

main();
```

Run the script:

```bash copy
npx bun src/index.ts
```

Output:

```
Query: In my kitchen I have: pasta, canned tomatoes, garlic, olive oil, and some dried herbs (basil and oregano). What can I make?

👨‍🍳 Chef Michel: You can make a delicious pasta al pomodoro! Here's how...
```

---

### Streaming Responses

```ts copy filename="src/index.ts"
import { chefAgent } from "./mastra/agents/chefAgent";

async function main() {
  const query =
    "Now I'm over at my friend's house, and they have: chicken thighs, coconut milk, sweet potatoes, and some curry powder.";
  console.log(`Query: ${query}`);

  const stream = await chefAgent.stream([{ role: "user", content: query }]);

  console.log("\n Chef Michel: ");

  for await (const chunk of stream.textStream) {
    process.stdout.write(chunk);
  }

  console.log("\n\n✅ Recipe complete!");
}

main();
```

Output:

```
Query: Now I'm over at my friend's house, and they have: chicken thighs, coconut milk, sweet potatoes, and some curry powder.

👨‍🍳 Chef Michel: Great! You can make a comforting chicken curry...

✅ Recipe complete!
``` --- ### 構造化データを含むレシピの生成 ```ts copy filename="src/index.ts" import { z } from "zod"; async function main() { const query = "I want to make lasagna, can you generate a lasagna recipe for me?"; console.log(`Query: ${query}`); // Define the Zod schema const schema = z.object({ ingredients: z.array( z.object({ name: z.string(), amount: z.string(), }), ), steps: z.array(z.string()), }); const response = await chefAgent.generate( [{ role: "user", content: query }], { output: schema }, ); console.log("\n👨‍🍳 Chef Michel:", response.object); } main(); ``` 出力: ``` Query: I want to make lasagna, can you generate a lasagna recipe for me? 👨‍🍳 Chef Michel: { ingredients: [ { name: "Lasagna noodles", amount: "12 sheets" }, { name: "Ground beef", amount: "1 pound" }, // ... ], steps: [ "Preheat oven to 375°F (190°C).", "Cook the lasagna noodles according to package instructions.", // ... ] } ``` --- ## エージェントサーバーの実行 ### `mastra dev`の使用 `mastra dev`コマンドを使用してエージェントをサービスとして実行できます: ```bash copy mastra dev ``` これにより、登録されたエージェントと対話するためのエンドポイントを公開するサーバーが起動します。 ### シェフアシスタントAPIへのアクセス デフォルトでは、`mastra dev`は`http://localhost:4111`で実行されます。シェフアシスタントエージェントは以下のURLで利用可能です: ``` POST http://localhost:4111/api/agents/chefAgent/generate ``` ### `curl`を使用したエージェントとの対話 コマンドラインから`curl`を使用してエージェントと対話できます: ```bash copy curl -X POST http://localhost:4111/api/agents/chefAgent/generate \ -H "Content-Type: application/json" \ -d '{ "messages": [ { "role": "user", "content": "I have eggs, flour, and milk. What can I make?" } ] }' ``` **サンプルレスポンス:** ```json { "text": "You can make delicious pancakes! Here's a simple recipe..." } ``` --- title: "研究論文アシスタントの構築 | Mastra RAG ガイド" description: RAGを使用して学術論文を分析し、質問に答えるAI研究アシスタントを作成するためのガイド。 --- import { Steps } from "nextra/components"; # RAGを使用した研究論文アシスタントの構築 [JA] Source: https://mastra.ai/ja/guides/guide/research-assistant このガイドでは、Retrieval Augmented Generation(RAG)を使用して学術論文を分析し、その内容に関する特定の質問に答えることができるAI研究アシスタントを作成します。 例として、Transformerの基礎となる論文[Attention Is All You Need](https://arxiv.org/html/1706.03762)を使用します。 ## RAGコンポーネントの理解 RAGがどのように機能し、各コンポーネントをどのように実装するかを理解しましょう: 1. ナレッジストア/インデックス - テキストをベクトル表現に変換する - コンテンツの数値表現を作成する - 実装:OpenAIのtext-embedding-3-smallを使用して埋め込みを作成し、PgVectorに保存します 2. リトリーバー - 類似性検索を通じて関連コンテンツを見つける - クエリの埋め込みを保存されたベクトルと照合する - 実装:PgVectorを使用して、保存された埋め込みに対して類似性検索を実行します 3. ジェネレーター - 取得したコンテンツをLLMで処理する - 文脈に基づいた回答を作成する - 実装:GPT-4o-miniを使用して、取得したコンテンツに基づいて回答を生成します 私たちの実装は以下のようになります: 1. Transformerの論文を埋め込みに処理する 2. 迅速な検索のためにPgVectorに保存する 3. 類似性検索を使用して関連セクションを見つける 4. 
取得した文脈を使用して正確な回答を生成する ## プロジェクト構造 ``` research-assistant/ ├── src/ │ ├── mastra/ │ │ ├── agents/ │ │ │ └── researchAgent.ts │ │ └── index.ts │ ├── index.ts │ └── store.ts ├── package.json └── .env ``` ### プロジェクトの初期化と依存関係のインストール まず、プロジェクト用の新しいディレクトリを作成し、そこに移動します: ```bash mkdir research-assistant cd research-assistant ``` 新しいNode.jsプロジェクトを初期化し、必要な依存関係をインストールします: ```bash npm init -y npm install @mastra/core @mastra/rag @mastra/pg @ai-sdk/openai ai zod ``` APIアクセスとデータベース接続のための環境変数を設定します: ```bash filename=".env" copy OPENAI_API_KEY=your_openai_api_key POSTGRES_CONNECTION_STRING=your_connection_string ``` プロジェクトに必要なファイルを作成します: ```bash copy mkdir -p src/mastra/agents touch src/mastra/agents/researchAgent.ts touch src/mastra/index.ts src/store.ts src/index.ts ``` ### リサーチアシスタントエージェントの作成 次に、RAG対応のリサーチアシスタントを作成します。このエージェントは以下を使用します: * [Vector Query Tool](/reference/tools/vector-query-tool):ベクトルストア上でセマンティック検索を実行し、論文から関連コンテンツを見つけるためのツール * GPT-4o-mini:クエリを理解し、応答を生成するため * カスタム指示:論文の分析方法、検索されたコンテンツの効果的な使用方法、制限の認識方法についてエージェントを導くもの ```ts copy showLineNumbers filename="src/mastra/agents/researchAgent.ts" import { Agent } from '@mastra/core/agent'; import { openai } from '@ai-sdk/openai'; import { createVectorQueryTool } from '@mastra/rag'; // Create a tool for semantic search over our paper embeddings const vectorQueryTool = createVectorQueryTool({ vectorStoreName: 'pgVector', indexName: 'papers', model: openai.embedding('text-embedding-3-small'), }); export const researchAgent = new Agent({ name: 'Research Assistant', instructions: `You are a helpful research assistant that analyzes academic papers and technical documents. Use the provided vector query tool to find relevant information from your knowledge base, and provide accurate, well-supported answers based on the retrieved content. Focus on the specific content available in the tool and acknowledge if you cannot find sufficient information to answer a question. Base your responses only on the content provided, not on general knowledge.`, model: openai('gpt-4o-mini'), tools: { vectorQueryTool, }, }); ``` ### Mastraインスタンスとベクトルストアの設定 ```ts copy showLineNumbers filename="src/mastra/index.ts" import { Mastra } from '@mastra/core'; import { PgVector } from '@mastra/pg'; import { researchAgent } from './agents/researchAgent'; // Initialize Mastra instance const pgVector = new PgVector(process.env.POSTGRES_CONNECTION_STRING!); export const mastra = new Mastra({ agents: { researchAgent }, vectors: { pgVector }, }); ``` ### 論文の読み込みと処理 このステップでは、初期ドキュメント処理を行います。以下の手順で進めます: 1. 論文をURLから取得する 2. ドキュメントオブジェクトに変換する 3. より良い処理のために、小さく管理しやすいチャンクに分割する ```ts copy showLineNumbers filename="src/store.ts" import { openai } from "@ai-sdk/openai"; import { MDocument } from '@mastra/rag'; import { embedMany } from 'ai'; import { mastra } from "./mastra"; // Load the paper const paperUrl = "https://arxiv.org/html/1706.03762"; const response = await fetch(paperUrl); const paperText = await response.text(); // Create document and chunk it const doc = MDocument.fromText(paperText); const chunks = await doc.chunk({ strategy: 'recursive', size: 512, overlap: 50, separator: '\n', }); console.log("Number of chunks:", chunks.length); // Number of chunks: 893 ``` ### 埋め込みの作成と保存 最後に、RAG用にコンテンツを準備します: 1. テキストの各チャンクの埋め込みを生成する 2. 埋め込みを保持するベクトルストアインデックスを作成する 3. 
埋め込みとメタデータ(元のテキストとソース情報)をベクトルデータベースに保存する > **注意**:このメタデータは、ベクトルストアが関連する一致を見つけたときに実際のコンテンツを返すために重要です。 これにより、エージェントは効率的に情報を検索して取得できるようになります。 ```ts copy showLineNumbers{23} filename="src/store.ts" // Generate embeddings const { embeddings } = await embedMany({ model: openai.embedding('text-embedding-3-small'), values: chunks.map(chunk => chunk.text), }); // Get the vector store instance from Mastra const vectorStore = mastra.getVector('pgVector'); // Create an index for our paper chunks await vectorStore.createIndex({ indexName: 'papers', dimension: 1536, }); // Store embeddings await vectorStore.upsert({ indexName: 'papers', vectors: embeddings, metadata: chunks.map(chunk => ({ text: chunk.text, source: 'transformer-paper' })), }); ``` これにより以下が実行されます: 1. URLから論文を読み込む 2. 管理しやすいチャンクに分割する 3. 各チャンクの埋め込みを生成する 4. 埋め込みとテキストの両方をベクトルデータベースに保存する スクリプトを実行して埋め込みを保存するには: ```bash npx bun src/store.ts ``` ### アシスタントをテストする 異なるタイプのクエリで研究アシスタントをテストしてみましょう: ```ts filename="src/index.ts" showLineNumbers copy import { mastra } from "./mastra"; const agent = mastra.getAgent('researchAgent'); // Basic query about concepts const query1 = "What problems does sequence modeling face with neural networks?"; const response1 = await agent.generate(query1); console.log("\nQuery:", query1); console.log("Response:", response1.text); ``` スクリプトを実行します: ```bash copy npx bun src/index.ts ``` 以下のような出力が表示されるはずです: ``` Query: What problems does sequence modeling face with neural networks? Response: Sequence modeling with neural networks faces several key challenges: 1. Vanishing and exploding gradients during training, especially with long sequences 2. Difficulty handling long-term dependencies in the input 3. Limited computational efficiency due to sequential processing 4. Challenges in parallelizing computations, resulting in longer training times ``` 別の質問を試してみましょう: ```ts filename="src/index.ts" showLineNumbers{10} copy // Query about specific findings const query2 = "What improvements were achieved in translation quality?"; const response2 = await agent.generate(query2); console.log("\nQuery:", query2); console.log("Response:", response2.text); ``` 出力: ``` Query: What improvements were achieved in translation quality? Response: The model showed significant improvements in translation quality, achieving more than 2.0 BLEU points improvement over previously reported models on the WMT 2014 English-to-German translation task, while also reducing training costs. ``` ### アプリケーションを提供する Mastraサーバーを起動して、研究アシスタントをAPI経由で公開します: ```bash mastra dev ``` 研究アシスタントは以下のURLで利用可能になります: ``` http://localhost:4111/api/agents/researchAgent/generate ``` curlでテストします: ```bash curl -X POST http://localhost:4111/api/agents/researchAgent/generate \ -H "Content-Type: application/json" \ -d '{ "messages": [ { "role": "user", "content": "What were the main findings about model parallelization?" 
} ] }' ``` ## 高度なRAGの例 より高度なRAG技術のために、これらの例を探索してください: - [フィルターRAG](/examples/rag/usage/filter-rag) メタデータを使用して結果をフィルタリングする - [クリーンアップRAG](/examples/rag/usage/cleanup-rag) 情報密度を最適化する - [思考連鎖RAG](/examples/rag/usage/cot-rag) ワークフローを使用した複雑な推論クエリのため - [リランクRAG](/examples/rag/usage/rerank-rag) 結果の関連性を向上させる --- title: "AI株式エージェントの構築 | Mastraエージェント | ガイド" description: 指定された銘柄の前日終値を取得するシンプルな株式エージェントをMastraで作成するガイド。 --- import { Steps } from "nextra/components"; import YouTube from "@/components/youtube"; # 株価エージェント [JA] Source: https://mastra.ai/ja/guides/guide/stock-agent 私たちは、指定されたシンボルの前日の終値を取得するシンプルなエージェントを作成します。この例では、ツールを作成し、それをエージェントに追加して、株価を取得するためにエージェントを使用する方法を示します。 ## プロジェクト構造 ``` stock-price-agent/ ├── src/ │ ├── agents/ │ │ └── stockAgent.ts │ ├── tools/ │ │ └── stockPrices.ts │ └── index.ts ├── package.json └── .env ``` --- --- title: '概要' description: 'Mastraを使った構築ガイド' --- # ガイド [JA] Source: https://mastra.ai/ja/guides 例では迅速な実装を示し、ドキュメントでは特定の機能を説明していますが、これらのガイドはやや長めで、Mastraのコア概念を実証するように設計されています: ## [AI リクルーター](/guides/guide/ai-recruiter) 候補者の履歴書を処理し、面接を実施するワークフローを作成し、Mastraワークフローにおける分岐ロジックとLLM統合を実証します。 ## [シェフアシスタント](/guides/guide/chef-michel) 利用可能な食材で料理を作るのを手伝うAIシェフエージェントを構築し、カスタムツールを使用してインタラクティブなエージェントを作成する方法を紹介します。 ## [研究論文アシスタント](/guides/guide/research-assistant) 検索拡張生成(RAG)を使用して学術論文を分析するAI研究アシスタントを開発し、文書処理と質問応答を実証します。 ## [株価エージェント](/guides/guide/stock-agent) 株価を取得するシンプルなエージェントを実装し、ツールの作成とMastraエージェントへの統合の基本を説明します。 --- title: "リファレンス: createTool() | ツール | エージェント | Mastra ドキュメント" description: MastraのcreateTool関数に関するドキュメントで、エージェントとワークフローのためのカスタムツールを作成します。 --- # `createTool()` [JA] Source: https://mastra.ai/ja/reference/agents/createTool `createTool()` 関数は、エージェントやワークフローによって実行される型付きツールを作成します。ツールには、組み込みのスキーマ検証、実行コンテキスト、およびMastraエコシステムとの統合が含まれています。 ## 概要 ツールはMastraの基本的な構成要素であり、エージェントが外部システムと対話し、計算を実行し、データにアクセスすることを可能にします。各ツールには以下のものがあります: - 一意の識別子 - AIがツールをいつどのように使用するかを理解するのに役立つ説明 - 検証のためのオプションの入力および出力スキーマ - ツールのロジックを実装する実行関数 ## 使用例 ```ts filename="src/tools/stock-tools.ts" showLineNumbers copy import { createTool } from "@mastra/core/tools"; import { z } from "zod"; // Helper function to fetch stock data const getStockPrice = async (symbol: string) => { const response = await fetch( `https://mastra-stock-data.vercel.app/api/stock-data?symbol=${symbol}` ); const data = await response.json(); return data.prices["4. close"]; }; // Create a tool to get stock prices export const stockPriceTool = createTool({ id: "getStockPrice", description: "指定されたティッカーシンボルの現在の株価を取得します", inputSchema: z.object({ symbol: z.string().describe("株式ティッカーシンボル(例: AAPL, MSFT)") }), outputSchema: z.object({ symbol: z.string(), price: z.number(), currency: z.string(), timestamp: z.string() }), execute: async ({ context }) => { const price = await getStockPrice(context.symbol); return { symbol: context.symbol, price: parseFloat(price), currency: "USD", timestamp: new Date().toISOString() }; } }); // Create a tool that uses the thread context export const threadInfoTool = createTool({ id: "getThreadInfo", description: "現在の会話スレッドに関する情報を返します", inputSchema: z.object({ includeResource: z.boolean().optional().default(false) }), execute: async ({ context, threadId, resourceId }) => { return { threadId, resourceId: context.includeResource ? 
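// only expose the resourceId when the caller opts in via includeResource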
resourceId : undefined, timestamp: new Date().toISOString() }; } }); ``` ## APIリファレンス ### パラメータ `createTool()`は、以下のプロパティを持つ単一のオブジェクトを受け取ります: Promise", required: false, description: "ツールのロジックを実装する非同期関数。実行コンテキストとオプションの設定を受け取ります。", properties: [ { type: "ToolExecutionContext", parameters: [ { name: "context", type: "object", description: "inputSchemaに一致する検証済みの入力データ" }, { name: "threadId", type: "string", isOptional: true, description: "会話スレッドの識別子(利用可能な場合)" }, { name: "resourceId", type: "string", isOptional: true, description: "ツールと対話するユーザーまたはリソースの識別子" }, { name: "mastra", type: "Mastra", isOptional: true, description: "Mastraインスタンスへの参照(利用可能な場合)" }, ] }, { type: "ToolOptions", parameters: [ { name: "toolCallId", type: "string", description: "ツール呼び出しのID。例えば、ストリームデータと共にツール呼び出し関連情報を送信する際に使用できます。" }, { name: "messages", type: "CoreMessage[]", description: "ツール呼び出しを含む応答を開始するために言語モデルに送信されたメッセージ。メッセージにはシステムプロンプトやツール呼び出しを含むアシスタントの応答は含まれません。" }, { name: "abortSignal", type: "AbortSignal", isOptional: true, description: "全体の操作を中止すべきことを示すオプションの中止シグナル。" }, ] } ] }, { name: "inputSchema", type: "ZodSchema", required: false, description: "ツールの入力パラメータを定義し、検証するZodスキーマ。提供されない場合、ツールは任意の入力を受け入れます。" }, { name: "outputSchema", type: "ZodSchema", required: false, description: "ツールの出力を定義し、検証するZodスキーマ。ツールが期待される形式でデータを返すことを保証するのに役立ちます。" }, ]} /> ### 戻り値 ", description: "エージェント、ワークフローで使用したり、直接実行したりできるツールインスタンス。", properties: [ { type: "Tool", parameters: [ { name: "id", type: "string", description: "ツールのユニークな識別子" }, { name: "description", type: "string", description: "ツールの機能の説明" }, { name: "inputSchema", type: "ZodSchema | undefined", description: "入力を検証するためのスキーマ" }, { name: "outputSchema", type: "ZodSchema | undefined", description: "出力を検証するためのスキーマ" }, { name: "execute", type: "Function", description: "ツールの実行関数" } ] } ] } ]} /> ## 型の安全性 `createTool()` 関数は、TypeScript のジェネリクスを通じて完全な型の安全性を提供します: - 入力タイプは `inputSchema` から推論されます - 出力タイプは `outputSchema` から推論されます - 実行コンテキストは、入力スキーマに基づいて適切に型付けされます これにより、アプリケーション全体でツールが型安全であることが保証されます。 ## ベストプラクティス 1. **説明的なID**: `getWeatherForecast` や `searchDatabase` のような明確でアクション指向のIDを使用する 2. **詳細な説明**: ツールの使用時期と方法を説明する包括的な説明を提供する 3. **入力検証**: Zodスキーマを使用して入力を検証し、役立つエラーメッセージを提供する 4. **エラーハンドリング**: 実行関数で適切なエラーハンドリングを実装する 5. **冪等性**: 可能であれば、ツールを冪等にする(同じ入力が常に同じ出力を生成する) 6. 
**パフォーマンス**: ツールを軽量で迅速に実行できるように保つ --- title: "リファレンス: Agent.generate() | Agents | Mastra ドキュメント" description: "Mastra エージェントの `.generate()` メソッドに関するドキュメントで、テキストまたは構造化された応答を生成します。" --- # Agent.generate() [JA] Source: https://mastra.ai/ja/reference/agents/generate `generate()` メソッドは、エージェントと対話してテキストまたは構造化された応答を生成するために使用されます。このメソッドは、`messages` とオプションの `options` オブジェクトをパラメータとして受け取ります。 ## パラメータ ### `messages` `messages` パラメータは以下のいずれかです: - 単一の文字列 - 文字列の配列 - `role` と `content` プロパティを持つメッセージオブジェクトの配列 メッセージオブジェクトの構造: ```typescript interface Message { role: 'system' | 'user' | 'assistant'; content: string; } ``` ### `options` (オプション) 出力構造、メモリ管理、ツール使用、テレメトリなどの設定を含めることができるオプションのオブジェクト。 | never", isOptional: true, description: "各実行ステップ後に呼び出されるコールバック関数。ステップの詳細をJSON文字列として受け取ります。構造化出力には利用できません。", }, { name: "resourceId", type: "string", isOptional: true, description: "エージェントと対話するユーザーまたはリソースの識別子。threadIdが提供されている場合は必ず提供する必要があります。", }, { name: "telemetry", type: "TelemetrySettings", isOptional: true, description: "生成中のテレメトリ収集のための設定。詳細は以下のTelemetrySettingsセクションを参照してください。", }, { name: "temperature", type: "number", isOptional: true, description: "モデルの出力のランダム性を制御します。高い値(例: 0.8)は出力をよりランダムにし、低い値(例: 0.2)はより集中し決定的にします。", }, { name: "threadId", type: "string", isOptional: true, description: "会話スレッドの識別子。複数の対話にわたってコンテキストを維持することを可能にします。resourceIdが提供されている場合は必ず提供する必要があります。", }, { name: "toolChoice", type: "'auto' | 'none' | 'required' | { type: 'tool'; toolName: string }", isOptional: true, defaultValue: "'auto'", description: "生成中にエージェントがツールを使用する方法を制御します。", }, { name: "toolsets", type: "ToolsetsInput", isOptional: true, description: "生成中にエージェントが利用できる追加のツールセット。", }, ]} /> #### MemoryConfig メモリ管理のための設定オプション: #### TelemetrySettings 生成中のテレメトリ収集の設定: ", isOptional: true, description: "テレメトリデータに含める追加情報。AttributeValueは文字列、数値、ブール値、これらの型の配列、またはnullであることができます。", }, { name: "tracer", type: "Tracer", isOptional: true, description: "テレメトリデータに使用するカスタムOpenTelemetryトレーサーインスタンス。詳細はOpenTelemetryのドキュメントを参照してください。", } ]} /> ## 戻り値 `generate()`メソッドの戻り値は、提供されたオプション、特に`output`オプションによって異なります。 ### 戻り値のプロパティテーブル ", isOptional: true, description: "生成プロセス中に行われたツール呼び出し。テキストモードとオブジェクトモードの両方に存在します。", } ]} /> #### ToolCall構造 ## 関連メソッド リアルタイムストリーミング応答については、[`stream()`](./stream.mdx) メソッドのドキュメントを参照してください。 --- title: "リファレンス: getAgent() | エージェント設定 | エージェント | Mastra ドキュメント" description: getAgent の API リファレンス。 --- # `getAgent()` [JA] Source: https://mastra.ai/ja/reference/agents/getAgent 指定された構成に基づいてエージェントを取得する ```ts showLineNumbers copy async function getAgent({ connectionId, agent, apis, logger, }: { connectionId: string; agent: Record; apis: Record; logger: any; }): Promise<(props: { prompt: string }) => Promise> { return async (props: { prompt: string }) => { return { message: "Hello, world!" 
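// placeholder response; a real implementation would call the configured agent here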
}; }; } ``` ## API シグネチャ ### パラメーター ", description: "エージェントの設定オブジェクト。", }, { name: "apis", type: "Record", description: "API名とそれぞれのAPIオブジェクトのマップ。", }, ]} /> ### 戻り値 --- title: "リファレンス: Agent.stream() | ストリーミング | エージェント | Mastra ドキュメント" description: Mastraエージェントの`.stream()`メソッドに関するドキュメント。リアルタイムでのレスポンスのストリーミングを可能にします。 --- # `stream()` [JA] Source: https://mastra.ai/ja/reference/agents/stream `stream()` メソッドは、エージェントからの応答をリアルタイムでストリーミングすることを可能にします。このメソッドは、`generate()` と同様に、`messages` とオプションの `options` オブジェクトをパラメータとして受け取ります。 ## パラメータ ### `messages` `messages`パラメータは以下のいずれかです: - 単一の文字列 - 文字列の配列 - `role`と`content`プロパティを持つメッセージオブジェクトの配列 メッセージオブジェクトの構造: ```typescript interface Message { role: 'system' | 'user' | 'assistant'; content: string; } ``` ### `options` (オプション) 出力構造、メモリ管理、ツールの使用、テレメトリなどの設定を含むオプションのオブジェクトです。 | never", isOptional: true, description: "ストリーミング中の各ステップ後に呼び出されるコールバック関数。構造化出力では利用できません。", }, { name: "output", type: "Zod schema | JsonSchema7", isOptional: true, description: "出力の予想される構造を定義します。JSONスキーマオブジェクトまたはZodスキーマを指定できます。", }, { name: "resourceId", type: "string", isOptional: true, description: "エージェントと対話するユーザーまたはリソースの識別子。threadIdが提供される場合は必須です。", }, { name: "telemetry", type: "TelemetrySettings", isOptional: true, description: "ストリーミング中のテレメトリ収集の設定。詳細は以下のTelemetrySettingsセクションを参照してください。", }, { name: "temperature", type: "number", isOptional: true, description: "モデルの出力のランダム性を制御します。高い値(例:0.8)は出力をよりランダムにし、低い値(例:0.2)はより焦点を絞った決定論的な出力にします。", }, { name: "threadId", type: "string", isOptional: true, description: "会話スレッドの識別子。複数のやり取りにわたってコンテキストを維持できます。resourceIdが提供される場合は必須です。", }, { name: "toolChoice", type: "'auto' | 'none' | 'required' | { type: 'tool'; toolName: string }", isOptional: true, defaultValue: "'auto'", description: "ストリーミング中にエージェントがツールをどのように使用するかを制御します。", }, { name: "toolsets", type: "ToolsetsInput", isOptional: true, description: "このストリーム中にエージェントが利用できるようにする追加のツールセット。", } ]} /> #### MemoryConfig メモリ管理の設定オプション: #### TelemetrySettings ストリーミング中のテレメトリ収集の設定: ", isOptional: true, description: "テレメトリデータに含める追加情報。AttributeValueは文字列、数値、ブール値、これらの型の配列、またはnullにすることができます。", }, { name: "tracer", type: "Tracer", isOptional: true, description: "テレメトリデータに使用するカスタムOpenTelemetryトレーサーインスタンス。詳細はOpenTelemetryのドキュメントを参照してください。", } ]} /> ## 戻り値 `stream()` メソッドの戻り値は、提供されたオプション、特に `output` オプションによって異なります。 ### 戻り値のプロパティテーブル ", isOptional: true, description: "テキストチャンクのストリーム。output が 'text' の場合(スキーマが提供されていない)または `experimental_output` を使用する場合に存在します。", }, { name: "objectStream", type: "AsyncIterable", isOptional: true, description: "構造化データのストリーム。スキーマを持つ `output` オプションを使用する場合にのみ存在します。", }, { name: "partialObjectStream", type: "AsyncIterable", isOptional: true, description: "構造化データのストリーム。`experimental_output` オプションを使用する場合にのみ存在します。", }, { name: "object", type: "Promise", isOptional: true, description: "最終的な構造化出力に解決されるプロミス。`output` または `experimental_output` オプションを使用する場合に存在します。", } ]} /> ## 例 ### 基本的なテキストストリーミング ```typescript const stream = await myAgent.stream([ { role: "user", content: "Tell me a story." 
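// each message follows the { role, content } shape described above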
} ]); for await (const chunk of stream.textStream) { process.stdout.write(chunk); } ``` ### スレッドコンテキストを使用した構造化出力ストリーミング ```typescript const schema = { type: 'object', properties: { summary: { type: 'string' }, nextSteps: { type: 'array', items: { type: 'string' } } }, required: ['summary', 'nextSteps'] }; const response = await myAgent.stream( "What should we do next?", { output: schema, threadId: "project-123", onFinish: text => console.log("Finished:", text) } ); for await (const chunk of response.textStream) { console.log(chunk); } const result = await response.object; console.log("Final structured result:", result); ``` Agentの`stream()`とLLMの`stream()`の主な違いは、Agentは`threadId`を通じて会話のコンテキストを維持し、ツールにアクセスでき、エージェントのメモリシステムと統合できることです。 --- title: "mastra build" description: "Mastraプロジェクトを本番環境にデプロイするためにビルドします" --- `mastra build` コマンドは、Mastraプロジェクトを本番環境対応のHonoサーバーにバンドルします。Honoは、型安全なルーティングとミドルウェアサポートを提供する軽量なWebフレームワークであり、MastraエージェントをHTTPエンドポイントとしてデプロイするのに理想的です。 ## 使用法 [JA] Source: https://mastra.ai/ja/reference/cli/build ```bash mastra build [options] ``` ## オプション - `--dir `: Mastraプロジェクトを含むディレクトリ(デフォルト:現在のディレクトリ) ## その機能 1. Mastra エントリーファイルを見つけます(`src/mastra/index.ts` または `src/mastra/index.js`) 2. `.mastra` 出力ディレクトリを作成します 3. Rollup を使用してコードをバンドルします: - 最適なバンドルサイズのためのツリーシェイキング - Node.js 環境のターゲティング - デバッグ用のソースマップ生成 ## 例 ```bash # 現在のディレクトリからビルド mastra build # 特定のディレクトリからビルド mastra build --dir ./my-mastra-project ``` ## 出力 このコマンドは`.mastra`ディレクトリに本番用バンドルを生成します。これには以下が含まれます: - あなたのMastraエージェントをエンドポイントとして公開するHonoベースのHTTPサーバー - 本番環境向けに最適化されたJavaScriptファイル - デバッグ用のソースマップ - 必要な依存関係 この出力は以下に適しています: - クラウドサーバー(EC2、Digital Ocean)へのデプロイ - コンテナ化された環境での実行 - コンテナオーケストレーションシステムでの使用 ## デプロイヤー デプロイヤーを使用すると、ビルド出力が対象プラットフォーム向けに自動的に準備されます。例: - [Vercel デプロイヤー](/reference/deployers/vercel) - [Netlify デプロイヤー](/reference/deployers/netlify) - [Cloudflare デプロイヤー](/reference/deployers/cloudflare) --- title: "`mastra dev` リファレンス | ローカル開発 | Mastra CLI" description: エージェント、ツール、ワークフローの開発サーバーを起動するmastra devコマンドのドキュメント。 --- # `mastra dev` リファレンス [JA] Source: https://mastra.ai/ja/reference/cli/dev `mastra dev` コマンドは、エージェント、ツール、ワークフローのRESTルートを公開する開発サーバーを起動します。 ## パラメータ ## ルート `mastra dev`でサーバーを起動すると、デフォルトで以下のRESTルートが公開されます: ### システムルート - **GET `/api`**: API状態を取得します。 ### エージェントルート エージェントは`src/mastra/agents`からエクスポートされることが想定されています。 - **GET `/api/agents`**: Mastraフォルダに登録されているエージェントを一覧表示します。 - **GET `/api/agents/:agentId`**: IDでエージェントを取得します。 - **GET `/api/agents/:agentId/evals/ci`**: エージェントIDでCI評価を取得します。 - **GET `/api/agents/:agentId/evals/live`**: エージェントIDでライブ評価を取得します。 - **POST `/api/agents/:agentId/generate`**: 指定されたエージェントにテキストベースのプロンプトを送信し、エージェントの応答を返します。 - **POST `/api/agents/:agentId/stream`**: エージェントからの応答をストリームします。 - **POST `/api/agents/:agentId/instructions`**: エージェントの指示を更新します。 - **POST `/api/agents/:agentId/instructions/enhance`**: 指示から改善されたシステムプロンプトを生成します。 - **GET `/api/agents/:agentId/speakers`**: エージェントで利用可能なスピーカーを取得します。 - **POST `/api/agents/:agentId/speak`**: エージェントの音声プロバイダーを使用してテキストを音声に変換します。 - **POST `/api/agents/:agentId/listen`**: エージェントの音声プロバイダーを使用して音声をテキストに変換します。 - **POST `/api/agents/:agentId/tools/:toolId/execute`**: エージェントを通じてツールを実行します。 ### ツールルート ツールは`src/mastra/tools`(または設定されたツールディレクトリ)からエクスポートされることが想定されています。 - **GET `/api/tools`**: すべてのツールを取得します。 - **GET `/api/tools/:toolId`**: IDでツールを取得します。 - **POST `/api/tools/:toolId/execute`**: 特定のツールを名前で呼び出し、リクエストボディに入力データを渡します。 ### ワークフローのルート ワークフローは`src/mastra/workflows`(または設定されたワークフローディレクトリ)からエクスポートされることが想定されています。 - **GET `/api/workflows`**: すべてのワークフローを取得します。 - **GET 
`/api/workflows/:workflowId`**: IDでワークフローを取得します。 - **POST `/api/workflows/:workflowName/start`**: 指定されたワークフローを開始します。 - **POST `/api/workflows/:workflowName/:instanceId/event`**: 既存のワークフローインスタンスにイベントまたはトリガー信号を送信します。 - **GET `/api/workflows/:workflowName/:instanceId/status`**: 実行中のワークフローインスタンスのステータス情報を返します。 - **POST `/api/workflows/:workflowId/resume`**: 一時停止されたワークフローステップを再開します。 - **POST `/api/workflows/:workflowId/resume-async`**: 一時停止されたワークフローステップを非同期で再開します。 - **POST `/api/workflows/:workflowId/createRun`**: 新しいワークフロー実行を作成します。 - **POST `/api/workflows/:workflowId/start-async`**: ワークフローを非同期で実行/開始します。 - **GET `/api/workflows/:workflowId/watch`**: ワークフローの遷移をリアルタイムで監視します。 ### メモリルート - **GET `/api/memory/status`**: メモリステータスを取得します。 - **GET `/api/memory/threads`**: すべてのスレッドを取得します。 - **GET `/api/memory/threads/:threadId`**: IDでスレッドを取得します。 - **GET `/api/memory/threads/:threadId/messages`**: スレッドのメッセージを取得します。 - **POST `/api/memory/threads`**: 新しいスレッドを作成します。 - **PATCH `/api/memory/threads/:threadId`**: スレッドを更新します。 - **DELETE `/api/memory/threads/:threadId`**: スレッドを削除します。 - **POST `/api/memory/save-messages`**: メッセージを保存します。 ### テレメトリルート - **GET `/api/telemetry`**: すべてのトレースを取得します。 ### ログルート - **GET `/api/logs`**: すべてのログを取得します。 - **GET `/api/logs/transports`**: すべてのログトランスポートのリスト。 - **GET `/api/logs/:runId`**: 実行IDでログを取得します。 ### ベクトルルート - **POST `/api/vector/:vectorName/upsert`**: インデックスにベクトルをアップサートします。 - **POST `/api/vector/:vectorName/create-index`**: 新しいベクトルインデックスを作成します。 - **POST `/api/vector/:vectorName/query`**: インデックスからベクトルをクエリします。 - **GET `/api/vector/:vectorName/indexes`**: ベクトルストアのすべてのインデックスをリストします。 - **GET `/api/vector/:vectorName/indexes/:indexName`**: 特定のインデックスに関する詳細を取得します。 - **DELETE `/api/vector/:vectorName/indexes/:indexName`**: 特定のインデックスを削除します。 ### OpenAPI仕様 - **GET `/openapi.json`**: プロジェクトのルートに対して自動生成されたOpenAPI仕様を返します。 - **GET `/swagger-ui`**: API文書化のためのSwagger UIにアクセスします。 ## 追加情報 ポートはデフォルトで4111に設定されています。 使用するプロバイダー(例:`OPENAI_API_KEY`、`ANTHROPIC_API_KEY`など)の環境変数が`.env.development`または`.env`ファイルに設定されていることを確認してください。 ### リクエスト例 `mastra dev`を実行した後にエージェントをテストするには: ```bash curl -X POST http://localhost:4111/api/agents/myAgent/generate \ -H "Content-Type: application/json" \ -d '{ "messages": [ { "role": "user", "content": "Hello, how can you assist me today?" } ] }' ``` --- title: "`mastra init` リファレンス | プロジェクト作成 | Mastra CLI" description: インタラクティブなセットアップオプションで新しいMastraプロジェクトを作成する「mastra init」コマンドのドキュメント。 --- # `mastra init` リファレンス [JA] Source: https://mastra.ai/ja/reference/cli/init ## `mastra init` これは新しいMastraプロジェクトを作成します。3つの異なる方法で実行できます: 1. **インタラクティブモード(推奨)** フラグなしで実行すると、インタラクティブプロンプトが表示され、以下の手順が案内されます: - Mastraファイルのディレクトリを選択 - インストールするコンポーネント(エージェント、ツール、ワークフロー)を選択 - デフォルトのLLMプロバイダー(OpenAI、Anthropic、またはGroq)を選択 - サンプルコードを含めるかどうかを決定 2. **デフォルト設定でのクイックスタート** ```bash mastra init --default ``` これにより、以下の設定でプロジェクトがセットアップされます: - ソースディレクトリ:`src/` - すべてのコンポーネント:エージェント、ツール、ワークフロー - OpenAIをデフォルトプロバイダーとして設定 - サンプルコードなし 3. 
**カスタムセットアップ** ```bash mastra init --dir src/mastra --components agents,tools --llm openai --example ``` オプション: - `-d, --dir`:Mastraファイルのディレクトリ(デフォルトはsrc/mastra) - `-c, --components`:コンマ区切りのコンポーネントリスト(agents, tools, workflows) - `-l, --llm`:デフォルトのモデルプロバイダー(openai, anthropic, または groq) - `-k, --llm-api-key`:選択したLLMプロバイダーのAPIキー(.envファイルに追加されます) - `-e, --example`:サンプルコードを含める - `-ne, --no-example`:サンプルコードをスキップ # Agents API [JA] Source: https://mastra.ai/ja/reference/client-js/agents Agents APIは、Mastra AIエージェントと対話するためのメソッドを提供し、応答の生成、インタラクションのストリーミング、エージェントツールの管理を含みます。 ## すべてのエージェントを取得する 利用可能なすべてのエージェントのリストを取得します: ```typescript const agents = await client.getAgents(); ``` ## 特定のエージェントを操作する 特定のエージェントのインスタンスを取得します: ```typescript const agent = client.getAgent("agent-id"); ``` ## エージェントメソッド ### エージェントの詳細を取得 エージェントの詳細情報を取得します: ```typescript const details = await agent.details(); ``` ### 応答を生成 エージェントからの応答を生成します: ```typescript const response = await agent.generate({ messages: [ { role: "user", content: "こんにちは、お元気ですか?", }, ], threadId: "thread-1", // オプション: 会話コンテキストのスレッドID resourceid: "resource-1", // オプション: リソースID output: {}, // オプション: 出力設定 }); ``` ### 応答をストリーム リアルタイムの対話のためにエージェントからの応答をストリームします: ```typescript const response = await agent.stream({ messages: [ { role: "user", content: "物語を話して", }, ], }); // processDataStreamユーティリティでデータストリームを処理 response.processDataStream({ onTextPart: (text) => { process.stdout.write(text); }, onFilePart: (file) => { console.log(file); }, onDataPart: (data) => { console.log(data); }, onErrorPart: (error) => { console.error(error); }, }); // 応答ボディから直接読み取ることもできます const reader = response.body.getReader(); while (true) { const { done, value } = await reader.read(); if (done) break; console.log(new TextDecoder().decode(value)); } ``` ### エージェントツールを取得 エージェントが利用可能な特定のツールに関する情報を取得します: ```typescript const tool = await agent.getTool("tool-id"); ``` ### エージェント評価を取得 エージェントの評価結果を取得します: ```typescript // CI評価を取得 const evals = await agent.evals(); // ライブ評価を取得 const liveEvals = await agent.liveEvals(); ``` # エラーハンドリング [JA] Source: https://mastra.ai/ja/reference/client-js/error-handling Mastra Client SDK には、組み込みのリトライメカニズムとエラーハンドリング機能が含まれています。 ## エラーハンドリング すべてのAPIメソッドは、キャッチして処理できるエラーをスローする可能性があります: ```typescript try { const agent = client.getAgent("agent-id"); const response = await agent.generate({ messages: [{ role: "user", content: "Hello" }], }); } catch (error) { console.error("An error occurred:", error.message); } ``` ## リトライメカニズム クライアントは指数バックオフを使用して失敗したリクエストを自動的にリトライします: ```typescript const client = new MastraClient({ baseUrl: "http://localhost:4111", retries: 3, // リトライ試行回数 backoffMs: 300, // 初期バックオフ時間 maxBackoffMs: 5000, // 最大バックオフ時間 }); ``` ### リトライの仕組み 1. 最初の試行が失敗 → 300ms 待機 2. 2回目の試行が失敗 → 600ms 待機 3. 3回目の試行が失敗 → 1200ms 待機 4. 
最終試行が失敗 → エラーをスロー # Logs API [JA] Source: https://mastra.ai/ja/reference/client-js/logs Logs APIは、Mastraのシステムログとデバッグ情報にアクセスし、クエリを実行するためのメソッドを提供します。 ## ログの取得 オプションのフィルタリングを使用してシステムログを取得します: ```typescript const logs = await client.getLogs({ transportId: "transport-1", }); ``` ## 特定の実行のログを取得する 特定の実行のログを取得します: ```typescript const runLogs = await client.getLogForRun({ runId: "run-1", transportId: "transport-1", }); ``` # メモリーAPI [JA] Source: https://mastra.ai/ja/reference/client-js/memory メモリーAPIは、Mastraでの会話スレッドとメッセージ履歴を管理するためのメソッドを提供します。 ## メモリースレッド操作 ### すべてのスレッドを取得する 特定のリソースのすべてのメモリースレッドを取得します: ```typescript const threads = await client.getMemoryThreads({ resourceId: "resource-1", agentId: "agent-1" }); ``` ### 新しいスレッドを作成する 新しいメモリースレッドを作成します: ```typescript const thread = await client.createMemoryThread({ title: "New Conversation", metadata: { category: "support" }, resourceid: "resource-1", agentId: "agent-1" }); ``` ### 特定のスレッドの操作 特定のメモリースレッドのインスタンスを取得します: ```typescript const thread = client.getMemoryThread("thread-id", "agent-id"); ``` ## スレッドメソッド ### スレッドの詳細を取得する 特定のスレッドに関する詳細を取得します: ```typescript const details = await thread.get(); ``` ### スレッドを更新する スレッドのプロパティを更新します: ```typescript const updated = await thread.update({ title: "Updated Title", metadata: { status: "resolved" }, resourceid: "resource-1", }); ``` ### スレッドを削除する スレッドとそのメッセージを削除します: ```typescript await thread.delete(); ``` ## メッセージ操作 ### メッセージを保存 メッセージをメモリに保存します: ```typescript const savedMessages = await client.saveMessageToMemory({ messages: [ { role: "user", content: "Hello!", id: "1", threadId: "thread-1", createdAt: new Date(), type: "text", }, ], agentId: "agent-1" }); ``` ### メモリステータスを取得 メモリシステムのステータスを確認します: ```typescript const status = await client.getMemoryStatus("agent-id"); ``` # Telemetry API [JA] Source: https://mastra.ai/ja/reference/client-js/telemetry Telemetry APIは、Mastraアプリケーションからトレースを取得し、分析するためのメソッドを提供します。これにより、アプリケーションの動作とパフォーマンスを監視し、デバッグすることができます。 ## トレースの取得 オプションのフィルタリングとページネーションを使用してトレースを取得します: ```typescript const telemetry = await client.getTelemetry({ name: "trace-name", // オプション: トレース名でフィルタリング scope: "scope-name", // オプション: スコープでフィルタリング page: 1, // オプション: ページネーションのページ番号 perPage: 10, // オプション: 1ページあたりのアイテム数 attribute: { // オプション: カスタム属性でフィルタリング key: "value", }, }); ``` # ツールAPI [JA] Source: https://mastra.ai/ja/reference/client-js/tools ツールAPIは、Mastraプラットフォームで利用可能なツールを操作および実行するためのメソッドを提供します。 ## すべてのツールを取得する 利用可能なすべてのツールのリストを取得します: ```typescript const tools = await client.getTools(); ``` ## 特定のツールの操作 特定のツールのインスタンスを取得する: ```typescript const tool = client.getTool("tool-id"); ``` ## ツールメソッド ### ツールの詳細を取得 ツールに関する詳細情報を取得します: ```typescript const details = await tool.details(); ``` ### ツールを実行 特定の引数でツールを実行します: ```typescript const result = await tool.execute({ args: { param1: "value1", param2: "value2", }, threadId: "thread-1", // オプション: スレッドコンテキスト resourceid: "resource-1", // オプション: リソース識別子 }); ``` # Vectors API [JA] Source: https://mastra.ai/ja/reference/client-js/vectors Vectors APIは、Mastraでのセマンティック検索や類似性マッチングのためのベクトル埋め込みを操作するメソッドを提供します。 ## ベクトルの操作 ベクトルストアのインスタンスを取得する: ```typescript const vector = client.getVector("vector-name"); ``` ## ベクトルメソッド ### ベクトルインデックスの詳細を取得 特定のベクトルインデックスに関する情報を取得します: ```typescript const details = await vector.details("index-name"); ``` ### ベクトルインデックスの作成 新しいベクトルインデックスを作成します: ```typescript const result = await vector.createIndex({ indexName: "new-index", dimension: 128, metric: "cosine", // 'cosine', 'euclidean', または 'dotproduct' }); ``` ### ベクトルのアップサート インデックスにベクトルを追加または更新します: 
```typescript
const ids = await vector.upsert({
  indexName: "my-index",
  vectors: [
    [0.1, 0.2, 0.3], // first vector
    [0.4, 0.5, 0.6], // second vector
  ],
  metadata: [{ label: "first" }, { label: "second" }],
  ids: ["id1", "id2"], // optional: custom IDs
});
```

### Query Vectors

Search for similar vectors:

```typescript
const results = await vector.query({
  indexName: "my-index",
  queryVector: [0.1, 0.2, 0.3],
  topK: 10,
  filter: { label: "first" }, // optional: metadata filter
  includeVector: true, // optional: include vectors in the results
});
```

### Get All Indexes

List all available indexes:

```typescript
const indexes = await vector.getIndexes();
```

### Delete an Index

Delete a vector index:

```typescript
const result = await vector.delete("index-name");
```

# Workflows API

[JA] Source: https://mastra.ai/ja/reference/client-js/workflows

The Workflows API provides methods for working with and executing Mastra's automated workflows.

## Get All Workflows

Retrieve a list of all available workflows:

```typescript
const workflows = await client.getWorkflows();
```

## Working with a Specific Workflow

Get an instance of a specific workflow:

```typescript
const workflow = client.getWorkflow("workflow-id");
```

## Workflow Methods

### Get Workflow Details

Retrieve detailed information about a workflow:

```typescript
const details = await workflow.details();
```

### Start a Workflow Run Asynchronously

Start a workflow run with trigger data and wait for the run result:

```typescript
const { runId } = workflow.createRun();

const result = await workflow.startAsync({
  runId,
  triggerData: {
    param1: "value1",
    param2: "value2",
  },
});
```

### Resume a Workflow Run Asynchronously

Resume a suspended workflow step and wait for the full run result:

```typescript
const { runId } = workflow.createRun({ runId: prevRunId });

const result = await workflow.resumeAsync({
  runId,
  stepId: "step-id",
  contextData: { key: "value" },
});
```

### Watch a Workflow

Watch workflow transitions:

```typescript
try {
  // Get workflow instance
  const workflow = client.getWorkflow("workflow-id");

  // Create a workflow run
  const { runId } = workflow.createRun();

  // Watch the workflow run
  workflow.watch({ runId }, (record) => {
    // Every new record is the latest transition state of the workflow run
    console.log({
      activePaths: record.activePaths,
      results: record.results,
      timestamp: record.timestamp,
      runId: record.runId,
    });
  });

  // Start the workflow run
  workflow.start({
    runId,
    triggerData: {
      city: "New York",
    },
  });
} catch (e) {
  console.error(e);
}
```

### Resume a Workflow

Resume a workflow run and watch the step transitions:

```typescript
try {
  // To resume a workflow run when a step is suspended
  const { runId } = workflow.createRun({ runId: prevRunId });

  // Watch the run
  workflow.watch({ runId }, (record) => {
    // Every new record is the latest transition state of the workflow run
    console.log({
      activePaths: record.activePaths,
      results: record.results,
      timestamp: record.timestamp,
      runId: record.runId,
    });
  });

  // Resume the run
  workflow.resume({
    runId,
    stepId: "step-id",
    contextData: { key: "value" },
  });
} catch (e) {
  console.error(e);
}
```

### Workflow Run Results

A workflow run result provides:

| Field | Type | Description |
|-------|------|-------------|
| `activePaths` | `Record` | Currently active paths in the workflow with their execution status |
| `results` | `CoreWorkflowRunResult['results']` | Results from the workflow execution |
| `timestamp` | `number` | Unix timestamp of when this transition occurred |
| `runId` | `string` | Unique identifier for this workflow run instance |

---
title: "Mastra Core"
description: Documentation for the Mastra class, the core entry point for managing agents, workflows, and server endpoints.
---

# The Mastra Class

[JA] Source: https://mastra.ai/ja/reference/core/mastra-class

The Mastra class is the main entry point for your application. It manages agents, workflows, and server endpoints.

## Constructor Options

", description: "登録するカスタムツール。キーがツール名、値がツール関数であるキーと値のペアとして構成されます。", isOptional: true, defaultValue: "{}", }, { name: "storage", type: "MastraStorage", description: "データを永続化するためのストレージエンジンインスタンス", isOptional: true, }, { name: "vectors", type: "Record", description:
"セマンティック検索やベクトルベースのツールに使用されるベクトルストアインスタンス(例:Pinecone、PgVector、Qdrant)", isOptional: true, }, { name: "logger", type: "Logger", description: "createLogger()で作成されたロガーインスタンス", isOptional: true, defaultValue: "INFOレベルのコンソールロガー", }, { name: "workflows", type: "Record", description: "登録するワークフロー。キーがワークフロー名、値がワークフローインスタンスであるキーと値のペアとして構成されます。", isOptional: true, defaultValue: "{}", }, { name: "serverMiddleware", type: "Array<{ handler: (c: any, next: () => Promise) => Promise; path?: string; }>", description: "APIルートに適用されるサーバーミドルウェア関数。各ミドルウェアはパスパターンを指定できます(デフォルトは'/api/*')。", isOptional: true, defaultValue: "[]", }, ]} /> ## 初期化 Mastraクラスは通常、`src/mastra/index.ts`ファイルで初期化されます: ```typescript copy filename=src/mastra/index.ts import { Mastra } from "@mastra/core"; import { createLogger } from "@mastra/core/logger"; // Basic initialization export const mastra = new Mastra({}); // Full initialization with all options export const mastra = new Mastra({ agents: {}, workflows: [], integrations: [], logger: createLogger({ name: "My Project", level: "info", }), storage: {}, tools: {}, vectors: {}, }); ``` `Mastra`クラスは、トップレベルのレジストリと考えることができます。ツールをMastraに登録すると、登録されたエージェントやワークフローがそれらを使用できるようになります。統合機能をMastraに登録すると、エージェント、ワークフロー、ツールがそれらを使用できるようになります。 ## メソッド ", description: "登録されているすべてのエージェントをキーと値のオブジェクトとして返します。", example: 'const agents = mastra.getAgents();', }, { name: "getWorkflow(id, { serialized })", type: "Workflow", description: "IDによってワークフローインスタンスを返します。serializedオプション(デフォルト:false)は名前のみを含む簡略化された表現を返します。", example: 'const workflow = mastra.getWorkflow("myWorkflow");', }, { name: "getWorkflows({ serialized })", type: "Record", description: "登録されているすべてのワークフローを返します。serializedオプション(デフォルト:false)は簡略化された表現を返します。", example: 'const workflows = mastra.getWorkflows();', }, { name: "getVector(name)", type: "MastraVector", description: "名前によってベクトルストアインスタンスを返します。見つからない場合はエラーをスローします。", example: 'const vectorStore = mastra.getVector("myVectorStore");', }, { name: "getVectors()", type: "Record", description: "登録されているすべてのベクトルストアをキーと値のオブジェクトとして返します。", example: 'const vectorStores = mastra.getVectors();', }, { name: "getDeployer()", type: "MastraDeployer | undefined", description: "設定されているデプロイヤーインスタンスがあれば、それを返します。", example: 'const deployer = mastra.getDeployer();', }, { name: "getStorage()", type: "MastraStorage | undefined", description: "設定されているストレージインスタンスを返します。", example: 'const storage = mastra.getStorage();', }, { name: "getMemory()", type: "MastraMemory | undefined", description: "設定されているメモリインスタンスを返します。注意:これは非推奨であり、メモリは直接エージェントに追加すべきです。", example: 'const memory = mastra.getMemory();', }, { name: "getServerMiddleware()", type: "Array<{ handler: Function; path: string; }>", description: "設定されているサーバーミドルウェア関数を返します。", example: 'const middleware = mastra.getServerMiddleware();', }, { name: "setStorage(storage)", type: "void", description: "Mastraインスタンスのストレージインスタンスを設定します。", example: 'mastra.setStorage(new DefaultStorage());', }, { name: "setLogger({ logger })", type: "void", description: "すべてのコンポーネント(エージェント、ワークフローなど)のロガーを設定します。", example: 'mastra.setLogger({ logger: createLogger({ name: "MyLogger" }) });', }, { name: "setTelemetry(telemetry)", type: "void", description: "すべてのコンポーネントのテレメトリ設定を設定します。", example: 'mastra.setTelemetry({ export: { type: "console" } });', }, { name: "getLogger()", type: "Logger", description: "設定されているロガーインスタンスを取得します。", example: 'const logger = mastra.getLogger();', }, { name: "getTelemetry()", type: "Telemetry | undefined", description: "設定されているテレメトリインスタンスを取得します。", example: 'const telemetry = 
mastra.getTelemetry();', }, { name: "getLogsByRunId({ runId, transportId })", type: "Promise", description: "特定の実行IDとトランスポートIDのログを取得します。", example: 'const logs = await mastra.getLogsByRunId({ runId: "123", transportId: "456" });', }, { name: "getLogs(transportId)", type: "Promise", description: "特定のトランスポートIDのすべてのログを取得します。", example: 'const logs = await mastra.getLogs("transportId");', }, ]} /> ## エラー処理 Mastraクラスのメソッドは、キャッチ可能な型付きエラーをスローします: ```typescript copy try { const tool = mastra.getTool("nonexistentTool"); } catch (error) { if (error instanceof Error) { console.log(error.message); // "Tool with name nonexistentTool not found" } } ``` --- title: "Cloudflare Deployer" description: "CloudflareDeployerクラスのドキュメント。MastraアプリケーションをCloudflare Workersにデプロイします。" --- # CloudflareDeployer [JA] Source: https://mastra.ai/ja/reference/deployer/cloudflare CloudflareDeployerは、MastraアプリケーションをCloudflare Workersにデプロイし、設定、環境変数、ルート管理を処理します。抽象Deployerクラスを拡張して、Cloudflare固有のデプロイ機能を提供します。 ## 使用例 ```typescript import { Mastra } from '@mastra/core'; import { CloudflareDeployer } from '@mastra/deployer-cloudflare'; const mastra = new Mastra({ deployer: new CloudflareDeployer({ scope: 'your-account-id', projectName: 'your-project-name', routes: [ { pattern: 'example.com/*', zone_name: 'example.com', custom_domain: true, }, ], workerNamespace: 'your-namespace', auth: { apiToken: 'your-api-token', apiEmail: 'your-email', }, }), // ... other Mastra configuration options }); ``` ## パラメータ ### コンストラクタパラメータ ", description: "ワーカー設定に含める環境変数。", isOptional: true, }, { name: "auth", type: "object", description: "Cloudflare認証の詳細。", isOptional: false, }, ]} /> ### authオブジェクト ### CFRouteオブジェクト ### 環境変数 CloudflareDeployerは複数のソースから環境変数を処理します: 1. **環境ファイル**:`.env.production`と`.env`ファイルからの変数。 2. **設定**:`env`パラメータを通じて渡される変数。 ## Mastraプロジェクトのビルド Cloudflareデプロイメント用にMastraプロジェクトをビルドするには: ```bash npx mastra build ビルドプロセスは`.mastra/output`ディレクトリに以下の出力構造を生成します: ``` .mastra/output/ ├── index.mjs # メインワーカーのエントリーポイント ├── wrangler.json # Cloudflare Workerの設定 └── assets/ # 静的アセットと依存関係 ``` ### Wranglerの設定 CloudflareDeployerは以下の設定で`wrangler.json`設定ファイルを自動的に生成します: ```json { "name": "your-project-name", "main": "./output/index.mjs", "compatibility_date": "2024-12-02", "compatibility_flags": ["nodejs_compat"], "observability": { "logs": { "enabled": true } }, "vars": { // .envファイルと設定からの環境変数 }, "routes": [ // 指定された場合のルート設定 ] } ``` ### ルート設定 ルートは、URLパターンとドメインに基づいてトラフィックをワーカーに転送するように設定できます: ```typescript const routes = [ { pattern: 'api.example.com/*', zone_name: 'example.com', custom_domain: true, }, { pattern: 'example.com/api/*', zone_name: 'example.com', }, ]; ``` ## デプロイオプション ビルド後、Mastraアプリケーションの`.mastra/output`をCloudflare Workersに以下のいずれかの方法でデプロイできます: 1. **Wrangler CLI**: Cloudflareの公式CLIツールを使用して直接デプロイ - CLIをインストール: `npm install -g wrangler` - 出力ディレクトリに移動: `cd .mastra/output` - Cloudflareアカウントにログイン: `wrangler login` - プレビュー環境にデプロイ: `wrangler deploy` - 本番環境へのデプロイ: `wrangler deploy --env production` 2. 
**Cloudflareダッシュボード**: Cloudflareダッシュボードを通じてビルド出力を手動でアップロード > 出力ディレクトリ`.mastra/output`で`wrangler dev`を実行して、Mastraアプリケーションをローカルでテストすることもできます。 ## プラットフォームドキュメント - [Cloudflare Workers](https://developers.cloudflare.com/workers/) --- title: "Mastra デプロイヤー" description: Mastraアプリケーションのパッケージングとデプロイを処理する、Deployer抽象クラスのドキュメント。 --- # Deployer [JA] Source: https://mastra.ai/ja/reference/deployer/deployer Deployerは、コードのパッケージング、環境ファイルの管理、Honoフレームワークを使用したアプリケーションの提供によって、Mastraアプリケーションのデプロイメントを処理します。具体的な実装では、特定のデプロイメントターゲットに対するdeployメソッドを定義する必要があります。 ## 使用例 ```typescript import { Deployer } from "@mastra/deployer"; // Create a custom deployer by extending the abstract Deployer class class CustomDeployer extends Deployer { constructor() { super({ name: 'custom-deployer' }); } // Implement the abstract deploy method async deploy(outputDirectory: string): Promise { // Prepare the output directory await this.prepare(outputDirectory); // Bundle the application await this._bundle('server.ts', 'mastra.ts', outputDirectory); // Custom deployment logic // ... } } ``` ## パラメータ ### コンストラクタパラメータ ### deployパラメータ ## メソッド Promise", description: "デプロイ中に使用される環境ファイルのリストを返します。デフォルトでは、'.env.production' と '.env' ファイルを探します。", }, { name: "deploy", type: "(outputDirectory: string) => Promise", description: "サブクラスによって実装されなければならない抽象メソッドです。指定された出力ディレクトリへのデプロイプロセスを処理します。", }, ]} /> ## Bundlerから継承されたメソッド Deployerクラスは、Bundlerクラスから以下の主要なメソッドを継承しています: Promise", description: "出力ディレクトリをクリーンアップし、必要なサブディレクトリを作成して準備します。", }, { name: "writeInstrumentationFile", type: "(outputDirectory: string) => Promise", description: "テレメトリ目的のために出力ディレクトリに計測ファイルを書き込みます。", }, { name: "writePackageJson", type: "(outputDirectory: string, dependencies: Map) => Promise", description: "指定された依存関係を含むpackage.jsonファイルを出力ディレクトリに生成します。", }, { name: "_bundle", type: "(serverFile: string, mastraEntryFile: string, outputDirectory: string, bundleLocation?: string) => Promise", description: "指定されたサーバーファイルとMastraエントリーファイルを使用してアプリケーションをバンドルします。", }, ]} /> ## コアコンセプト ### デプロイメントライフサイクル Deployerの抽象クラスは構造化されたデプロイメントライフサイクルを実装しています: 1. **初期化**:デプロイヤーは名前で初期化され、依存関係管理のためのDepsインスタンスを作成します。 2. **環境設定**:`getEnvFiles`メソッドは、デプロイメント中に使用される環境ファイル(.env.production、.env)を識別します。 3. **準備**:`prepare`メソッド(Bundlerから継承)は出力ディレクトリをクリーンアップし、必要なサブディレクトリを作成します。 4. **バンドル**:`_bundle`メソッド(Bundlerから継承)はアプリケーションコードとその依存関係をパッケージ化します。 5. **デプロイメント**:抽象的な`deploy`メソッドはサブクラスによって実装され、実際のデプロイメントプロセスを処理します。 ### 環境ファイル管理 Deployerクラスには、`getEnvFiles`メソッドを通じた環境ファイル管理の組み込みサポートが含まれています。このメソッドは: - 事前定義された順序(.env.production、.env)で環境ファイルを探します - FileServiceを使用して最初に存在するファイルを見つけます - 見つかった環境ファイルの配列を返します - 環境ファイルが見つからない場合は空の配列を返します ```typescript getEnvFiles(): Promise { const possibleFiles = ['.env.production', '.env.local', '.env']; try { const fileService = new FileService(); const envFile = fileService.getFirstExistingFile(possibleFiles); return Promise.resolve([envFile]); } catch {} return Promise.resolve([]); } ``` ### バンドルとデプロイメントの関係 DeployerクラスはBundlerクラスを拡張し、バンドルとデプロイメントの間に明確な関係を確立します: 1. **バンドルは前提条件**:バンドルはデプロイメントの前提条件ステップであり、アプリケーションコードがデプロイ可能な形式にパッケージ化されます。 2. **共有インフラストラクチャ**:バンドルとデプロイメントは、依存関係管理やファイルシステム操作などの共通インフラストラクチャを共有します。 3. **特殊なデプロイメントロジック**:バンドルがコードパッケージングに焦点を当てる一方、デプロイメントはバンドルされたコードをデプロイするための環境固有のロジックを追加します。 4. 
---
title: "Netlify デプロイヤー"
description: "NetlifyDeployer クラスのドキュメントで、Mastra アプリケーションを Netlify Functions にデプロイします。"
---

# NetlifyDeployer

[JA] Source: https://mastra.ai/ja/reference/deployer/netlify

NetlifyDeployerは、MastraアプリケーションをNetlify Functionsにデプロイし、サイトの作成、設定、およびデプロイプロセスを処理します。抽象Deployerクラスを拡張して、Netlify固有のデプロイ機能を提供します。

## 使用例

```typescript
import { Mastra } from '@mastra/core';
import { NetlifyDeployer } from '@mastra/deployer-netlify';

const mastra = new Mastra({
  deployer: new NetlifyDeployer({
    scope: 'your-team-slug',
    projectName: 'your-project-name',
    token: 'your-netlify-token'
  }),
  // ... other Mastra configuration options
});
```

## パラメーター

### コンストラクターパラメーター

### 環境変数

NetlifyDeployerは複数のソースから環境変数を処理します:

1. **環境ファイル**: `.env.production`および`.env`ファイルからの変数。
2. **設定**: Mastra設定を通じて渡される変数。
3. **Netlifyダッシュボード**: Netlifyのウェブインターフェースを通じて管理することもできます。

## Mastraプロジェクトのビルド

Netlifyデプロイメント用にMastraプロジェクトをビルドするには:

```bash
npx mastra build
```

ビルドプロセスは、`.mastra/output`ディレクトリに以下の出力構造を生成します:

```
.mastra/output/
├── netlify/
│   └── functions/
│       └── api/
│           └── index.mjs    # アプリケーションのエントリーポイント
└── netlify.toml             # Netlify設定
```

### Netlify設定

NetlifyDeployerは、`.mastra/output`に以下の設定で`netlify.toml`設定ファイルを自動生成します:

```toml
[functions]
node_bundler = "esbuild"
directory = "netlify/functions"

[[redirects]]
force = true
from = "/*"
status = 200
to = "/.netlify/functions/api/:splat"
```

## デプロイオプション

ビルド後、Mastraアプリケーションの`.mastra/output`を以下のいずれかの方法でNetlifyにデプロイできます:

1. **Netlify CLI**: Netlifyの公式CLIツールを使用して直接デプロイ
   - CLIをインストール: `npm install -g netlify-cli`
   - 出力ディレクトリに移動: `cd .mastra/output`
   - 関数ディレクトリを指定してデプロイ: `netlify deploy --dir . --functions ./netlify/functions`
   - 本番環境へのデプロイには`--prod`フラグを追加: `netlify deploy --prod --dir . --functions ./netlify/functions`
2. **Netlifyダッシュボード**: Gitリポジトリを接続するか、Netlifyダッシュボードにビルド出力をドラッグ&ドロップ
3. **Netlify Dev**: Netlifyの開発環境でMastraアプリケーションをローカルで実行

> 出力ディレクトリ`.mastra/output`で`netlify dev`を実行して、Mastraアプリケーションをローカルでテストすることもできます。

## プラットフォームドキュメント

- [Netlify](https://docs.netlify.com/)

---
title: "Vercel デプロイヤー"
description: "Mastraアプリケーションを Vercel にデプロイする VercelDeployer クラスのドキュメント。"
---

# VercelDeployer

[JA] Source: https://mastra.ai/ja/reference/deployer/vercel

VercelDeployerは、MastraアプリケーションをVercelにデプロイし、設定、環境変数の同期、およびデプロイプロセスを処理します。抽象Deployerクラスを拡張して、Vercel固有のデプロイ機能を提供します。

## 使用例

```typescript
import { Mastra } from '@mastra/core';
import { VercelDeployer } from '@mastra/deployer-vercel';

const mastra = new Mastra({
  deployer: new VercelDeployer({
    teamSlug: 'your-team-slug',
    projectName: 'your-project-name',
    token: 'your-vercel-token'
  }),
  // ... other Mastra configuration options
});
```

## パラメータ

### コンストラクタパラメータ

### 環境変数

VercelDeployerは複数のソースからの環境変数を処理します:

1. **環境ファイル**:`.env.production`と`.env`ファイルからの変数。
2. **設定**:Mastra設定を通じて渡される変数。
3. **Vercelダッシュボード**:変数はVercelのウェブインターフェースを通じても管理できます。
デプロイヤーは、ローカル開発環境とVercelの環境変数システム間で環境変数を自動的に同期し、すべてのデプロイメント環境(本番、プレビュー、開発)間での一貫性を確保します。

## Mastraプロジェクトのビルド

Vercelデプロイメント用にMastraプロジェクトをビルドするには:

```bash
npx mastra build
```

ビルドプロセスは`.mastra/output`ディレクトリに以下の出力構造を生成します:

```
.mastra/output/
├── vercel.json   # Vercel設定
└── index.mjs     # アプリケーションのエントリーポイント
```

### Vercel設定

VercelDeployerは自動的に`.mastra/output`に以下の設定を含む`vercel.json`設定ファイルを生成します:

```json
{
  "version": 2,
  "installCommand": "npm install --omit=dev",
  "builds": [
    {
      "src": "index.mjs",
      "use": "@vercel/node",
      "config": { "includeFiles": ["**"] }
    }
  ],
  "routes": [
    {
      "src": "/(.*)",
      "dest": "index.mjs"
    }
  ]
}
```

## デプロイオプション

ビルド後、Mastraアプリケーションの`.mastra/output`を以下のいずれかの方法でVercelにデプロイできます:

1. **Vercel CLI**:Vercelの公式CLIツールを使用して直接デプロイ
   - CLIをインストール:`npm install -g vercel`
   - 出力ディレクトリに移動:`cd .mastra/output`
   - プレビュー環境にデプロイ:`vercel`
   - 本番環境へのデプロイ:`vercel --prod`
2. **Vercelダッシュボード**:Gitリポジトリを接続するか、Vercelダッシュボードを通じてビルド出力をドラッグ&ドロップ

> 出力ディレクトリ`.mastra/output`で`vercel dev`を実行して、Mastraアプリケーションをローカルでテストすることもできます。

## プラットフォームドキュメント

- [Vercel](https://vercel.com/docs)

---
title: "リファレンス: 回答の関連性 | メトリクス | 評価 | Mastra ドキュメント"
description: Mastraにおける回答関連性メトリクスのドキュメント。LLM出力がどれだけ適切に入力クエリに対応しているかを評価します。
---

# AnswerRelevancyMetric

[JA] Source: https://mastra.ai/ja/reference/evals/answer-relevancy

`AnswerRelevancyMetric`クラスは、LLMの出力が入力クエリにどれだけ適切に回答または対応しているかを評価します。これは判定ベースのシステムを使用して関連性を判断し、詳細なスコアリングと根拠を提供します。

## 基本的な使用方法

```typescript
import { openai } from "@ai-sdk/openai";
import { AnswerRelevancyMetric } from "@mastra/evals/llm";

// Configure the model for evaluation
const model = openai("gpt-4o-mini");

const metric = new AnswerRelevancyMetric(model, {
  uncertaintyWeight: 0.3,
  scale: 1,
});

const result = await metric.measure(
  "What is the capital of France?",
  "Paris is the capital of France.",
);

console.log(result.score); // Score from 0-1
console.log(result.info.reason); // Explanation of the score
```

## コンストラクタパラメータ

### AnswerRelevancyMetricOptions

## measure() パラメータ

## 戻り値

## 採点の詳細

このメトリックは、完全性、正確性、詳細レベルを考慮し、クエリと回答の一致度を通じて関連性を評価します。

### 採点プロセス

1. ステートメント分析:
   - 文脈を保持しながら出力を意味のあるステートメントに分解します
   - 各ステートメントをクエリの要件に対して評価します
2. 各ステートメントの関連性を評価:
   - 「yes」:直接一致に対して完全な重み
   - 「unsure」:おおよその一致に対して部分的な重み(デフォルト:0.3)
   - 「no」:無関係なコンテンツに対してゼロの重み

最終スコア:`((direct + uncertainty * partial) / total_statements) * scale`

### スコアの解釈

(0からスケールまで、デフォルトは0-1)

- 1.0:完璧な関連性 - 完全かつ正確
- 0.7-0.9:高い関連性 - 軽微なギャップまたは不正確さ
- 0.4-0.6:中程度の関連性 - 重大なギャップ
- 0.1-0.3:低い関連性 - 大きな問題
- 0.0:関連性なし - 不正確またはトピックから外れている
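上記の最終スコアの式を、仮の判定結果で具体化した最小スケッチです(判定値は説明用の仮のもの):

```typescript
// 各ステートメントの判定から最終スコアを計算するスケッチ
const verdicts = ["yes", "unsure", "no", "yes"]; // 仮の判定結果
const uncertaintyWeight = 0.3; // 「unsure」に与える部分的な重み
const scale = 1;

const direct = verdicts.filter((v) => v === "yes").length;
const partial = verdicts.filter((v) => v === "unsure").length;
const score = ((direct + uncertaintyWeight * partial) / verdicts.length) * scale;

console.log(score); // (2 + 0.3 * 1) / 4 = 0.575
```

## カスタム設定の例

```typescript
import { openai } from "@ai-sdk/openai";
import { AnswerRelevancyMetric } from "@mastra/evals/llm";

// Configure the model for evaluation
const model = openai("gpt-4o-mini");

const metric = new AnswerRelevancyMetric(
  model,
  {
    uncertaintyWeight: 0.5, // Higher weight for uncertain verdicts
    scale: 5, // Use 0-5 scale instead of 0-1
  },
);

const result = await metric.measure(
  "What are the benefits of exercise?",
  "Regular exercise improves cardiovascular health, builds strength, and boosts mental wellbeing.",
);

// Example output:
// {
//   score: 4.5,
//   info: {
//     reason: "The score is 4.5 out of 5 because the response directly addresses the query
//     with specific, accurate benefits of exercise. It covers multiple aspects
//     (cardiovascular, muscular, and mental health) in a clear and concise manner.
//     The answer is highly relevant and provides appropriate detail without
//     including unnecessary information."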
// } // } ``` ## 関連 - [プロンプト整合性メトリック](./prompt-alignment) - [コンテキスト精度メトリック](./context-precision) - [忠実性メトリック](./faithfulness) --- title: "リファレンス: バイアス | 出力メトリクス | 評価 | Mastra ドキュメント" description: Mastraのバイアスメトリクスに関するドキュメント。LLMの出力における性別、政治的、人種/民族的、または地理的バイアスなど、さまざまな形態のバイアスを評価します。 --- # BiasMetric [JA] Source: https://mastra.ai/ja/reference/evals/bias `BiasMetric`クラスは、LLMの出力に性別、政治的、人種/民族的、または地理的バイアスなど、さまざまな形式のバイアスが含まれているかどうかを評価します。このメトリックは、モデルの微調整や最適化を適用した後に発生する可能性のある意図しないバイアスを検出するのに特に役立ちます。 ## 基本的な使用方法 ```typescript import { openai } from "@ai-sdk/openai"; import { BiasMetric } from "@mastra/evals/llm"; // Configure the model for evaluation const model = openai("gpt-4o-mini"); const metric = new BiasMetric(model, { scale: 1 }); const result = await metric.measure( "Describe leadership qualities in the workplace", "Strong leaders are typically men who can make tough decisions..." ); console.log(result.score); // Score from 0-1, higher means more biased ``` ## コンストラクタパラメータ ### BiasMetricOptions ## measure() パラメータ ## 戻り値 ## バイアスのカテゴリー このメトリクスはいくつかのタイプのバイアスを評価します: 1. **ジェンダーバイアス**:性別に基づく差別またはステレオタイプ 2. **政治的バイアス**:政治的イデオロギーや信条に対する偏見 3. **人種/民族バイアス**:人種、民族、または出身国に基づく差別 4. **地理的バイアス**:場所や地域のステレオタイプに基づく偏見 ## 採点の詳細 このメトリクスは、以下に基づく意見分析によってバイアスを評価します: - 意見の特定と抽出 - 差別的な言葉の存在 - ステレオタイプや一般化の使用 - 視点提示のバランス - 偏向的または偏見のある用語 ### 採点プロセス 1. テキストから意見を抽出: - 主観的な発言を特定 - 事実に基づく主張を除外 - 引用された意見を含む 2. 各意見を評価: - 差別的な言葉をチェック - ステレオタイプと一般化を評価 - 視点のバランスを分析 最終スコア:`(biased_opinions / total_opinions) * scale` ### スコアの解釈 (0から基準値まで、デフォルトは0-1) - 1.0:完全なバイアス - すべての意見にバイアスが含まれる - 0.7-0.9:顕著なバイアス - 大多数の意見にバイアスが見られる - 0.4-0.6:中程度のバイアス - バイアスのある意見と中立的な意見が混在 - 0.1-0.3:最小限のバイアス - ほとんどの意見がバランスの取れた視点を示す - 0.0:検出可能なバイアスなし - 意見はバランスが取れており中立的 ## バイアスの異なるタイプの例 ```typescript import { openai } from "@ai-sdk/openai"; import { BiasMetric } from "@mastra/evals/llm"; // Configure the model for evaluation const model = openai("gpt-4o-mini"); const metric = new BiasMetric(model); // Testing for different types of bias const results = await Promise.all([ metric.measure( "Describe voting patterns", "These radical right-wing voters consistently vote against their interests..." ), metric.measure( "Describe workplace dynamics", "Modern offices have diverse teams working together based on merit..." ) ]); // Example outputs: // Political bias example: { score: 1.0 } // Unbiased example: { score: 0.0 } ``` ## 関連項目 - [有害性メトリック](./toxicity) - [忠実性メトリック](./faithfulness) - [幻覚メトリック](./hallucination) - [コンテキスト関連性メトリック](./context-relevancy) --- title: "リファレンス: 完全性 | メトリクス | 評価 | Mastra ドキュメント" description: Mastraの完全性メトリクスに関するドキュメント。LLM出力が入力に含まれる重要な要素をどれだけ網羅しているかを評価します。 --- # CompletenessMetric [JA] Source: https://mastra.ai/ja/reference/evals/completeness `CompletenessMetric`クラスは、LLMの出力が入力に含まれる重要な要素をどれだけ網羅しているかを評価します。名詞、動詞、トピック、用語を分析してカバレッジを判断し、詳細な完全性スコアを提供します。 ## 基本的な使用法 ```typescript import { CompletenessMetric } from "@mastra/evals/nlp"; const metric = new CompletenessMetric(); const result = await metric.measure( "Explain how photosynthesis works in plants using sunlight, water, and carbon dioxide.", "Plants use sunlight to convert water and carbon dioxide into glucose through photosynthesis." 
);

console.log(result.score); // 0-1の範囲のカバレッジスコア
console.log(result.info); // 要素カバレッジに関する詳細なメトリクスを含むオブジェクト
```

## measure() パラメーター

## 戻り値

## 要素抽出の詳細

このメトリックは、いくつかのタイプの要素を抽出して分析します:

- 名詞:主要なオブジェクト、概念、エンティティ
- 動詞:アクション及び状態(不定詞形に変換)
- トピック:主要な主題とテーマ
- 用語:個々の重要な単語

抽出プロセスには以下が含まれます:

- テキストの正規化(発音区別符号の削除、小文字への変換)
- キャメルケース単語の分割
- 単語境界の処理
- 短い単語(3文字以下)の特別な処理
- 要素の重複排除

## スコアリングの詳細

このメトリックは、言語要素のカバレッジ分析を通じて完全性を評価します。

### スコアリングプロセス

1. 主要な要素を抽出します:
   - 名詞と固有名詞
   - アクション動詞
   - トピック固有の用語
   - 正規化された単語形
2. 入力要素のカバレッジを計算します:
   - 短い用語(≤3文字)の正確な一致
   - 長い用語の実質的な重複(>60%)

最終スコア: `(covered_elements / total_input_elements) * scale`

### スコアの解釈

(0からスケール、デフォルト0-1)

- 1.0: 完全なカバレッジ - すべての入力要素を含む
- 0.7-0.9: 高いカバレッジ - ほとんどの主要要素を含む
- 0.4-0.6: 部分的なカバレッジ - いくつかの主要要素を含む
- 0.1-0.3: 低いカバレッジ - ほとんどの主要要素が欠けている
- 0.0: カバレッジなし - 出力にすべての入力要素が欠けている

## 分析付きの例

```typescript
import { CompletenessMetric } from "@mastra/evals/nlp";

const metric = new CompletenessMetric();

const result = await metric.measure(
  "The quick brown fox jumps over the lazy dog",
  "A brown fox jumped over a dog"
);

// 例の出力:
// {
//   score: 0.75,
//   info: {
//     inputElements: ["quick", "brown", "fox", "jump", "lazy", "dog"],
//     outputElements: ["brown", "fox", "jump", "dog"],
//     missingElements: ["quick", "lazy"],
//     elementCounts: { input: 6, output: 4 }
//   }
// }
```

## 関連項目

- [回答関連性メトリック](./answer-relevancy)
- [コンテンツ類似性メトリック](./content-similarity)
- [テキスト差異メトリック](./textual-difference)
- [キーワードカバレッジメトリック](./keyword-coverage)

---
title: "リファレンス: コンテンツ類似性 | Evals | Mastra Docs"
description: Mastraのコンテンツ類似性メトリックに関するドキュメント。文字列間のテキスト類似性を測定し、マッチングスコアを提供します。
---

# ContentSimilarityMetric

[JA] Source: https://mastra.ai/ja/reference/evals/content-similarity

`ContentSimilarityMetric` クラスは、2つの文字列間のテキスト類似性を測定し、それらがどれほど一致しているかを示すスコアを提供します。大文字と小文字の区別や空白の処理に関する設定可能なオプションをサポートしています。

## 基本的な使用法

```typescript
import { ContentSimilarityMetric } from "@mastra/evals/nlp";

const metric = new ContentSimilarityMetric({
  ignoreCase: true,
  ignoreWhitespace: true
});

const result = await metric.measure(
  "Hello, world!",
  "hello world"
);

console.log(result.score); // 0から1までの類似度スコア
console.log(result.info); // 詳細な類似度メトリクス
```

## コンストラクタのパラメータ

### ContentSimilarityOptions

## measure() パラメーター

## 戻り値

## スコアリングの詳細

このメトリックは、文字レベルのマッチングと設定可能なテキスト正規化を通じてテキストの類似性を評価します。

### スコアリングプロセス

1. テキストを正規化します:
   - 大文字小文字の正規化(ignoreCase: true の場合)
   - 空白の正規化(ignoreWhitespace: true の場合)
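2. 処理された文字列を文字列類似性アルゴリズムで比較します:
   - 文字シーケンスを分析
   - 単語の境界を整列
   - 相対的な位置を考慮
   - 長さの違いを考慮

最終スコア: `similarity_value * scale`

文字列類似度の考え方を示すものとして、バイグラムベースのDice係数による最小スケッチを以下に示します(実際のメトリックが使用するアルゴリズムとは細部が異なる可能性があります):

```typescript
// 2文字ずつの部分文字列(バイグラム)の出現回数を数える
function bigrams(text: string): Map<string, number> {
  const counts = new Map<string, number>();
  for (let i = 0; i < text.length - 1; i++) {
    const gram = text.slice(i, i + 2);
    counts.set(gram, (counts.get(gram) ?? 0) + 1);
  }
  return counts;
}

// 共有バイグラム数に基づくDice係数(0-1)
function diceSimilarity(a: string, b: string): number {
  const gramsA = bigrams(a);
  const gramsB = bigrams(b);
  let overlap = 0;
  for (const [gram, count] of gramsA) {
    overlap += Math.min(count, gramsB.get(gram) ?? 0);
  }
  const total = Math.max(0, a.length - 1) + Math.max(0, b.length - 1);
  return total > 0 ? (2 * overlap) / total : 1;
}

console.log(diceSimilarity("hello world", "hello world!")); // 約0.95
```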
### スコアの解釈

(0 から scale、デフォルト 0-1)

- 1.0: 完全一致 - 同一のテキスト
- 0.7-0.9: 高い類似性 - ほとんど一致する内容
- 0.4-0.6: 中程度の類似性 - 部分的な一致
- 0.1-0.3: 低い類似性 - 一部の一致パターン
- 0.0: 類似性なし - 完全に異なるテキスト

## 異なるオプションの例

```typescript
import { ContentSimilarityMetric } from "@mastra/evals/nlp";

// 大文字と小文字を区別する比較
const caseSensitiveMetric = new ContentSimilarityMetric({
  ignoreCase: false,
  ignoreWhitespace: true
});

const result1 = await caseSensitiveMetric.measure(
  "Hello World",
  "hello world"
); // 大文字と小文字の違いによりスコアが低くなる

// 出力例:
// {
//   score: 0.75,
//   info: { similarity: 0.75 }
// }

// 厳密な空白の比較
const strictWhitespaceMetric = new ContentSimilarityMetric({
  ignoreCase: true,
  ignoreWhitespace: false
});

const result2 = await strictWhitespaceMetric.measure(
  "Hello World",
  "Hello  World" // 2つ目の文字列には余分な空白が含まれる
); // 空白の違いによりスコアが低くなる

// 出力例:
// {
//   score: 0.85,
//   info: { similarity: 0.85 }
// }
```

## 関連

- [Completeness Metric](./completeness)
- [Textual Difference Metric](./textual-difference)
- [Answer Relevancy Metric](./answer-relevancy)
- [Keyword Coverage Metric](./keyword-coverage)

---
title: "リファレンス: コンテキスト位置 | メトリクス | Evals | Mastra ドキュメント"
description: Mastraにおけるコンテキスト位置メトリクスのドキュメントで、クエリと出力に対する関連性に基づいてコンテキストノードの順序を評価します。
---

# ContextPositionMetric

[JA] Source: https://mastra.ai/ja/reference/evals/context-position

`ContextPositionMetric` クラスは、クエリおよび出力に対する関連性に基づいてコンテキストノードがどのように順序付けされているかを評価します。位置加重スコアリングを使用して、最も関連性の高いコンテキスト部分がシーケンスの早い段階に現れることの重要性を強調します。

## 基本的な使用法

```typescript
import { openai } from "@ai-sdk/openai";
import { ContextPositionMetric } from "@mastra/evals/llm";

// Configure the model for evaluation
const model = openai("gpt-4o-mini");

const metric = new ContextPositionMetric(model, {
  context: [
    "光合成は、植物が太陽光からエネルギーを作り出すために使用する生物学的プロセスです。",
    "光合成の過程で酸素が副産物として生成されます。",
    "植物は成長するために土壌から水と栄養素を必要とします。",
  ],
});

const result = await metric.measure(
  "光合成とは何ですか?",
  "光合成は、植物が太陽光をエネルギーに変換するプロセスです。",
);

console.log(result.score); // 0-1の位置スコア
console.log(result.info.reason); // スコアの説明
```

## コンストラクタのパラメータ

### ContextPositionMetricOptions

## measure() パラメータ

## 戻り値

## スコアリングの詳細

この指標は、バイナリ関連性評価と位置ベースの重み付けを通じてコンテキストの位置を評価します。

### スコアリングプロセス

1. コンテキストの関連性を評価:
   - 各部分にバイナリ判定(はい/いいえ)を割り当てる
   - シーケンス内の位置を記録する
   - 関連性の理由を文書化する
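2. 位置の重みを適用:
   - 早い位置ほど重みが大きい(重み = 1/(位置 + 1))
   - 関連する部分の重みを合計する
   - 最大可能スコアで正規化する

最終スコア: `(weighted_sum / max_possible_sum) * scale`

位置重み付きスコアの計算を、仮の判定値で具体化した最小スケッチです(「すべての部分が関連する場合の合計」を最大値とみなす点はこのスケッチ上の仮定です):

```typescript
const verdicts = [true, true, false, false]; // 各コンテキスト部分の関連性判定(仮の値)
const weight = (position: number) => 1 / (position + 1); // 早い位置ほど大きい重み

// 関連すると判定された部分の重みを合計する
const weightedSum = verdicts.reduce(
  (sum, isRelevant, position) => sum + (isRelevant ? weight(position) : 0),
  0,
);

// すべての部分が関連すると仮定した場合の合計で正規化する
const maxPossibleSum = verdicts.reduce((sum, _, position) => sum + weight(position), 0);

const score = (weightedSum / maxPossibleSum) * 1; // scale = 1 の場合
console.log(score.toFixed(2)); // 1.5 / 2.08 ≈ 0.72
```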
### スコアの解釈

(0 から scale、デフォルト 0-1)

- 1.0: 最適 - 最も関連性の高いコンテキストが最初
- 0.7-0.9: 良好 - 関連性の高いコンテキストが主に早い段階
- 0.4-0.6: 混在 - 関連性の高いコンテキストが散在
- 0.1-0.3: 不十分 - 関連性の高いコンテキストが主に後半
- 0.0: 順序が悪い - 関連性の高いコンテキストが最後または欠落

## 分析付きの例

```typescript
import { openai } from "@ai-sdk/openai";
import { ContextPositionMetric } from "@mastra/evals/llm";

// Configure the model for evaluation
const model = openai("gpt-4o-mini");

const metric = new ContextPositionMetric(model, {
  context: [
    "バランスの取れた食事は健康に重要です。",
    "運動は心臓を強化し、血液循環を改善します。",
    "定期的な身体活動はストレスと不安を軽減します。",
    "運動器具は高価な場合があります。",
  ],
});

const result = await metric.measure(
  "運動の利点は何ですか?",
  "定期的な運動は心血管の健康と精神的な健康を改善します。",
);

// Example output:
// {
//   score: 0.5,
//   info: {
//     reason: "スコアが0.5である理由は、2番目と3番目のコンテキストが運動の利点に非常に関連しているが、
//     シーケンスの最初に最適に配置されていないためです。最初と最後のコンテキストはクエリに関連しておらず、
//     位置重み付きスコアリングに影響を与えます。"
//   }
// }
```

## 関連

- [Context Precision Metric](./context-precision)
- [Answer Relevancy Metric](./answer-relevancy)
- [Completeness Metric](./completeness)
- [Context Relevancy Metric](./context-relevancy)

---
title: "リファレンス: コンテキスト精度 | メトリクス | Evals | Mastra ドキュメント"
description: 期待される出力を生成するために取得されたコンテキストノードの関連性と精度を評価する、Mastra のコンテキスト精度メトリクスに関するドキュメント。
---

# ContextPrecisionMetric

[JA] Source: https://mastra.ai/ja/reference/evals/context-precision

`ContextPrecisionMetric` クラスは、期待される出力を生成するために取得されたコンテキストノードがどれほど関連性があり正確であるかを評価します。各コンテキスト部分の貢献を分析するために、判定ベースのシステムを使用し、位置に基づいて重み付けされたスコアを提供します。

## 基本的な使用法

```typescript
import { openai } from "@ai-sdk/openai";
import { ContextPrecisionMetric } from "@mastra/evals/llm";

// Configure the model for evaluation
const model = openai("gpt-4o-mini");

const metric = new ContextPrecisionMetric(model, {
  context: [
    "光合成は、植物が太陽光からエネルギーを作り出すために使用する生物学的プロセスです。",
    "植物は成長するために土壌から水と栄養素を必要とします。",
    "光合成の過程で副産物として酸素が生成されます。",
  ],
});

const result = await metric.measure(
  "光合成とは何ですか?",
  "光合成は、植物が太陽光をエネルギーに変換するプロセスです。",
);

console.log(result.score); // 0-1の精度スコア
console.log(result.info.reason); // スコアの説明
```

## コンストラクタのパラメータ

### ContextPrecisionMetricOptions

## measure() パラメータ

## 戻り値

## スコアリングの詳細

このメトリックは、バイナリ関連性評価と平均適合率 (MAP) スコアリングを通じてコンテキストの精度を評価します。
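以下のスコアリングプロセスで用いられる平均適合率(MAP)の計算を、仮の判定値で具体化した最小スケッチです:

```typescript
// 各コンテキストのバイナリ関連性判定(仮の値)
const verdicts = [1, 0, 1, 0];

let relevantSoFar = 0;
let precisionSum = 0;
verdicts.forEach((verdict, index) => {
  if (verdict === 1) {
    relevantSoFar += 1;
    precisionSum += relevantSoFar / (index + 1); // その位置までの適合率
  }
});

const score = relevantSoFar > 0 ? precisionSum / relevantSoFar : 0; // scale = 1 の場合
console.log(score.toFixed(2)); // (1/1 + 2/3) / 2 ≈ 0.83
```

### スコアリングプロセス

1. バイナリ関連性スコアを割り当てます:
   - 関連するコンテキスト: 1
   - 関連しないコンテキスト: 0
2. 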
平均適合率を計算します: - 各位置での適合率を計算 - 早い位置をより重視 - 設定されたスケールに正規化 最終スコア: `Mean Average Precision * scale` ### スコアの解釈 (0 からスケールまで、デフォルトは 0-1) - 1.0: すべての関連するコンテキストが最適な順序で - 0.7-0.9: 主に関連するコンテキストが良好な順序で - 0.4-0.6: 関連性が混在または順序が最適でない - 0.1-0.3: 関連性が限られているか順序が悪い - 0.0: 関連するコンテキストがない ## 分析付きの例 ```typescript import { openai } from "@ai-sdk/openai"; import { ContextPrecisionMetric } from "@mastra/evals/llm"; // Configure the model for evaluation const model = openai("gpt-4o-mini"); const metric = new ContextPrecisionMetric(model, { context: [ "運動は心臓を強化し、血液循環を改善します。", "バランスの取れた食事は健康に重要です。", "定期的な身体活動はストレスと不安を軽減します。", "運動器具は高価な場合があります。", ], }); const result = await metric.measure( "運動の利点は何ですか?", "定期的な運動は心血管の健康と精神的な健康を改善します。", ); // Example output: // { // score: 0.75, // info: { // reason: "スコアが0.75である理由は、最初と3番目のコンテキストが出力で言及された利点に非常に関連しているためです。 // 一方、2番目と4番目のコンテキストは運動の利点に直接関連していません。関連するコンテキストは // シーケンスの最初と中間にうまく配置されています。" // } // } ``` ## 関連 - [回答の関連性メトリック](./answer-relevancy) - [コンテキスト位置メトリック](./context-position) - [完全性メトリック](./completeness) - [コンテキスト関連性メトリック](./context-relevancy) --- title: "リファレンス: コンテキストの関連性 | Evals | Mastra Docs" description: RAGパイプラインで取得されたコンテキストの関連性を評価するコンテキスト関連性メトリックのドキュメント。 --- # ContextRelevancyMetric [JA] Source: https://mastra.ai/ja/reference/evals/context-relevancy `ContextRelevancyMetric` クラスは、取得されたコンテキストが入力クエリにどれほど関連しているかを測定することによって、RAG (Retrieval-Augmented Generation) パイプラインのリトリーバーの品質を評価します。これは、まずコンテキストからステートメントを抽出し、それからそれらの入力に対する関連性を評価する、LLMベースの評価システムを使用します。 ## 基本的な使用法 ```typescript import { openai } from "@ai-sdk/openai"; import { ContextRelevancyMetric } from "@mastra/evals/llm"; // Configure the model for evaluation const model = openai("gpt-4o-mini"); const metric = new ContextRelevancyMetric(model, { context: [ "すべてのデータは保存時および転送時に暗号化されます", "二要素認証は必須です", "プラットフォームは複数の言語をサポートしています", "私たちのオフィスはサンフランシスコにあります" ] }); const result = await metric.measure( "私たちの製品のセキュリティ機能は何ですか?", "私たちの製品は暗号化を使用し、2FAを要求します。", ); console.log(result.score); // Score from 0-1 console.log(result.info.reason); // Explanation of the relevancy assessment ``` ## コンストラクタのパラメータ ### ContextRelevancyMetricOptions ## measure() パラメーター ## 戻り値 ## スコアリングの詳細 このメトリックは、バイナリ関連性分類を通じて、取得されたコンテキストがクエリとどの程度一致しているかを評価します。 ### スコアリングプロセス 1. コンテキストから文を抽出: - コンテキストを意味のある単位に分解 - セマンティックな関係を保持 2. 
文の関連性を評価: - 各文をクエリに対して評価 - 関連する文をカウント - 関連性の比率を計算 最終スコア: `(relevant_statements / total_statements) * scale` ### スコアの解釈 (0からスケールまで、デフォルトは0-1) - 1.0: 完全な関連性 - 取得されたすべてのコンテキストが関連している - 0.7-0.9: 高い関連性 - ほとんどのコンテキストが関連しており、無関係な部分は少ない - 0.4-0.6: 中程度の関連性 - 関連するコンテキストと無関係なコンテキストが混在 - 0.1-0.3: 低い関連性 - ほとんどが無関係なコンテキスト - 0.0: 関連性なし - 完全に無関係なコンテキスト ## カスタム設定の例 ```typescript import { openai } from "@ai-sdk/openai"; import { ContextRelevancyMetric } from "@mastra/evals/llm"; // モデルを評価用に設定 const model = openai("gpt-4o-mini"); const metric = new ContextRelevancyMetric(model, { scale: 100, // 0-1の代わりに0-100のスケールを使用 context: [ "ベーシックプランは月額$10です", "プロプランには月額$30の高度な機能が含まれています", "エンタープライズプランはカスタム価格です", "当社は2020年に設立されました", "世界中にオフィスがあります" ] }); const result = await metric.measure( "私たちの価格プランは何ですか?", "ベーシック、プロ、エンタープライズプランを提供しています。", ); // 出力例: // { // score: 60, // info: { // reason: "5つのステートメントのうち3つが価格プランに関連しています。会社の設立やオフィスの場所に関するステートメントは価格の問い合わせには関連していません。" // } // } ``` ## 関連 - [コンテクストリコールメトリック](./contextual-recall) - [コンテクスト精度メトリック](./context-precision) - [コンテクスト位置メトリック](./context-position) --- title: "リファレンス:文脈的再現性 | メトリクス | 評価 | Mastra ドキュメント" description: 文脈的再現性メトリックのドキュメント。これはLLMの応答が関連する文脈をどれだけ完全に取り入れているかを評価します。 --- # ContextualRecallMetric [JA] Source: https://mastra.ai/ja/reference/evals/contextual-recall `ContextualRecallMetric` クラスは、LLM の応答が提供されたコンテキストからすべての関連情報をどれだけ効果的に取り入れているかを評価します。これは、参照ドキュメントからの重要な情報が応答にうまく含まれているかどうかを測定し、精度ではなく完全性に焦点を当てています。 ## 基本的な使用法 ```typescript import { openai } from "@ai-sdk/openai"; import { ContextualRecallMetric } from "@mastra/evals/llm"; // Configure the model for evaluation const model = openai("gpt-4o-mini"); const metric = new ContextualRecallMetric(model, { context: [ "Product features: cloud synchronization capability", "Offline mode available for all users", "Supports multiple devices simultaneously", "End-to-end encryption for all data" ] }); const result = await metric.measure( "What are the key features of the product?", "The product includes cloud sync, offline mode, and multi-device support.", ); console.log(result.score); // Score from 0-1 ``` ## コンストラクタのパラメータ ### ContextualRecallMetricOptions ## measure() パラメーター ## 戻り値 ## スコアリングの詳細 このメトリックは、応答内容を関連するコンテキスト項目と比較することでリコールを評価します。 ### スコアリングプロセス 1. 情報のリコールを評価: - コンテキスト内の関連項目を特定 - 正しくリコールされた情報を追跡 - リコールの完全性を測定 2. 
リコールスコアを計算:
   - 正しくリコールされた項目をカウント
   - 総関連項目と比較
   - カバレッジ比率を計算

最終スコア: `(correctly_recalled_items / total_relevant_items) * scale`

### スコアの解釈

(0からスケールまで、デフォルトは0-1)

- 1.0: 完全なリコール - すべての関連情報が含まれている
- 0.7-0.9: 高いリコール - ほとんどの関連情報が含まれている
- 0.4-0.6: 中程度のリコール - 一部の関連情報が欠けている
- 0.1-0.3: 低いリコール - 重要な情報が欠けている
- 0.0: リコールなし - 関連情報が含まれていない

## カスタム設定の例

```typescript
import { openai } from "@ai-sdk/openai";
import { ContextualRecallMetric } from "@mastra/evals/llm";

// モデルを評価用に設定
const model = openai("gpt-4o-mini");

const metric = new ContextualRecallMetric(
  model,
  {
    scale: 100, // 0-1の代わりに0-100のスケールを使用
    context: [
      "すべてのデータは保存時および転送時に暗号化されます",
      "二要素認証(2FA)は必須です",
      "定期的なセキュリティ監査が実施されます",
      "インシデント対応チームが24/7で利用可能です"
    ]
  }
);

const result = await metric.measure(
  "会社のセキュリティ対策を要約してください",
  "会社はデータ保護のために暗号化を実施し、すべてのユーザーに2FAを要求しています。",
);

// 出力例:
// {
//   score: 50, // セキュリティ対策の半分しか言及されていません
//   info: {
//     reason: "スコアが50である理由は、応答でセキュリティ対策の半分しか言及されていないためです。応答には定期的なセキュリティ監査とインシデント対応チームの情報が欠けています。"
//   }
// }
```

## 関連

- [コンテキスト関連性メトリック](./context-relevancy)
- [完全性メトリック](./completeness)
- [要約メトリック](./summarization)

---
title: "リファレンス: Faithfulness | メトリクス | Evals | Mastra ドキュメント"
description: 提供されたコンテキストと比較して、LLM出力の事実の正確性を評価するMastraのFaithfulnessメトリクスのドキュメント。
---

# FaithfulnessMetric リファレンス

[JA] Source: https://mastra.ai/ja/reference/evals/faithfulness

Mastraの`FaithfulnessMetric`は、提供されたコンテキストと比較してLLMの出力がどれほど事実に基づいているかを評価します。出力から主張を抽出し、それらをコンテキストと照合することで、RAGパイプラインの応答の信頼性を測定するために不可欠です。

## 基本的な使用法

```typescript
import { openai } from "@ai-sdk/openai";
import { FaithfulnessMetric } from "@mastra/evals/llm";

// Configure the model for evaluation
const model = openai("gpt-4o-mini");

const metric = new FaithfulnessMetric(model, {
  context: [
    "The company was established in 1995.",
    "Currently employs around 450-550 people.",
  ],
});

const result = await metric.measure(
  "Tell me about the company.",
  "The company was founded in 1995 and has 500 employees.",
);

console.log(result.score); // 1.0
console.log(result.info.reason); // "All claims are supported by the context."
```

## コンストラクターパラメータ

### FaithfulnessMetricOptions

## measure() パラメータ

## 戻り値

## 採点の詳細

このメトリクスは、提供されたコンテキストに対する主張の検証を通じて忠実性を評価します。

### 採点プロセス

1. 主張とコンテキストを分析します:
   - すべての主張(事実的および推測的)を抽出
   - 各主張をコンテキストに対して検証
   - 以下の3つの判定のいずれかを割り当て:
     - "yes" - 主張がコンテキストによって裏付けられている
     - "no" - 主張がコンテキストと矛盾している
     - "unsure" - 主張が検証不可能
2. 
忠実性スコアを計算します: - 裏付けられた主張をカウント - 総主張数で割る - 設定された範囲にスケーリング 最終スコア:`(supported_claims / total_claims) * scale` ### スコアの解釈 (0から設定値まで、デフォルトは0-1) - 1.0:すべての主張がコンテキストによって裏付けられている - 0.7-0.9:ほとんどの主張が裏付けられており、検証不可能なものはわずか - 0.4-0.6:裏付けと矛盾が混在している - 0.1-0.3:裏付けが限られており、多くの矛盾がある - 0.0:裏付けられた主張がない ## 高度な例 ```typescript import { openai } from "@ai-sdk/openai"; import { FaithfulnessMetric } from "@mastra/evals/llm"; // 評価のためのモデルを設定 const model = openai("gpt-4o-mini"); const metric = new FaithfulnessMetric(model, { context: [ "その会社は2020年に100人の従業員がいました。", "現在の従業員数は約500人です。", ], }); // 混合された主張タイプの例 const result = await metric.measure( "会社の成長はどのようなものですか?", "その会社は2020年に100人の従業員から現在500人に成長し、来年には1000人に拡大するかもしれません。", ); // 出力例: // { // score: 0.67, // info: { // reason: "スコアが0.67である理由は、2つの主張がコンテキストによってサポートされているためです // (2020年の初期従業員数100人と現在の500人の数)、 // 将来の拡大の主張はコンテキストに対して検証できないため不確実とされています。" // } // } ``` ### 関連 - [回答の関連性メトリック](./answer-relevancy) - [幻覚メトリック](./hallucination) - [コンテキストの関連性メトリック](./context-relevancy) --- title: "リファレンス: 幻覚 | メトリクス | Evals | Mastra ドキュメント" description: 提供されたコンテキストと矛盾する点を特定することで、LLM出力の事実の正確性を評価するMastraの幻覚メトリクスに関するドキュメント。 --- # HallucinationMetric [JA] Source: https://mastra.ai/ja/reference/evals/hallucination `HallucinationMetric`は、LLMが提供されたコンテキストに対して事実に基づいた正確な情報を生成しているかどうかを評価します。このメトリックは、コンテキストと出力の間の直接的な矛盾を特定することによって幻覚を測定します。 ## 基本的な使用法 ```typescript import { openai } from "@ai-sdk/openai"; import { HallucinationMetric } from "@mastra/evals/llm"; // Configure the model for evaluation const model = openai("gpt-4o-mini"); const metric = new HallucinationMetric(model, { context: [ "Teslaは2003年にMartin EberhardとMarc Tarpenningによってカリフォルニア州サンカルロスで設立されました。", ], }); const result = await metric.measure( "Teslaの設立について教えてください。", "Teslaは2004年にElon Muskによってカリフォルニアで設立されました。", ); console.log(result.score); // Score from 0-1 console.log(result.info.reason); // Explanation of the score // Example output: // { // score: 0.67, // info: { // reason: "スコアが0.67である理由は、コンテキストからの3つの文のうち2つ(設立年と設立者)が出力によって矛盾していたためであり、 // 場所の文は矛盾していなかったためです。" // } // } ``` ## コンストラクタのパラメータ ### HallucinationMetricOptions ## measure() パラメータ ## 戻り値 ## スコアリングの詳細 このメトリックは、矛盾検出とサポートされていない主張の分析を通じて幻覚を評価します。 ### スコアリングプロセス 1. 事実の内容を分析します: - コンテキストからステートメントを抽出 - 数値と日付を特定 - ステートメントの関係をマッピング 2. 出力を幻覚として分析します: - コンテキストのステートメントと比較 - 直接の矛盾を幻覚としてマーク - サポートされていない主張を幻覚として特定 - 数値の正確性を評価 - 近似のコンテキストを考慮 3. 
幻覚スコアを計算します:
   - 幻覚ステートメント(矛盾とサポートされていない主張)をカウント
   - 総ステートメント数で割る
   - 設定された範囲にスケール

最終スコア: `(幻覚ステートメント / 総ステートメント) * スケール`

### 重要な考慮事項

- コンテキストに存在しない主張は幻覚として扱われます
- 主観的な主張は、明示的にサポートされていない限り幻覚です
- コンテキスト内の事実についての推測的な言語(「かもしれない」、「おそらく」)は許可されます
- コンテキスト外の事実についての推測的な言語は幻覚として扱われます
- 空の出力は幻覚ゼロとなります
- 数値評価は以下を考慮します:
  - スケールに適した精度
  - コンテキストの近似
  - 明示的な精度の指標

### スコアの解釈

(0からスケールまで、デフォルトは0-1)

- 1.0: 完全な幻覚 - すべてのコンテキストステートメントと矛盾
- 0.75: 高い幻覚 - 75%のコンテキストステートメントと矛盾
- 0.5: 中程度の幻覚 - 半分のコンテキストステートメントと矛盾
- 0.25: 低い幻覚 - 25%のコンテキストステートメントと矛盾
- 0.0: 幻覚なし - 出力がすべてのコンテキストステートメントと一致

**注:** スコアは幻覚の度合いを表します - 低いスコアは提供されたコンテキストとの事実の整合性が良いことを示します

## 分析付きの例

```typescript
import { openai } from "@ai-sdk/openai";
import { HallucinationMetric } from "@mastra/evals/llm";

// モデルを評価用に設定
const model = openai("gpt-4o-mini");

const metric = new HallucinationMetric(model, {
  context: [
    "OpenAIは2015年12月にSam Altman、Greg Brockman、その他の人々によって設立されました。",
    "会社は10億ドルの投資コミットメントで開始されました。",
    "Elon Muskは初期の支持者でしたが、2018年に取締役会を去りました。",
  ],
});

const result = await metric.measure(
  "OpenAIに関する重要な詳細は何ですか?",
  "OpenAIは2015年にElon MuskとSam Altmanによって20億ドルの投資で設立されました。",
);

// 出力例:
// {
//   score: 0.33,
//   info: {
//     reason: "スコアが0.33である理由は、コンテキストからの3つのステートメントのうち1つが矛盾していたためです
//     (投資額が10億ドルではなく20億ドルと述べられていた)。設立年は正しく、
//     創設者の説明は不完全でしたが、厳密には矛盾していませんでした。"
//   }
// }
```

## 関連

- [Faithfulness Metric](./faithfulness)
- [Answer Relevancy Metric](./answer-relevancy)
- [Context Precision Metric](./context-precision)
- [Context Relevancy Metric](./context-relevancy)

---
title: "リファレンス: キーワードカバレッジ | メトリクス | 評価 | Mastra ドキュメント"
description: Mastraのキーワードカバレッジメトリクスに関するドキュメント。LLMの出力が入力からの重要なキーワードをどれだけカバーしているかを評価します。
---

# KeywordCoverageMetric

[JA] Source: https://mastra.ai/ja/reference/evals/keyword-coverage

`KeywordCoverageMetric`クラスは、LLMの出力が入力からの重要なキーワードをどれだけカバーしているかを評価します。一般的な単語やストップワードを無視しながら、キーワードの存在と一致を分析します。

## 基本的な使用方法

```typescript
import { KeywordCoverageMetric } from "@mastra/evals/nlp";

const metric = new KeywordCoverageMetric();

const result = await metric.measure(
  "What are the key features of Python programming language?",
  "Python is a high-level programming language known for its simple syntax and extensive libraries."
);

console.log(result.score); // 0~1のカバレッジスコア
console.log(result.info); // キーワードカバレッジに関する詳細な指標を含むオブジェクト
```

## measure() パラメーター

## 戻り値

## スコアリングの詳細

このメトリクスは、以下の機能を使用してキーワードのカバレッジを評価します:

- 一般的な単語やストップワードのフィルタリング(例:「the」、「a」、「and」)
- 大文字小文字を区別しないマッチング
- 単語の形式のバリエーション処理
- 専門用語や複合語の特別な処理

### スコアリングプロセス

1. 入力と出力からキーワードを処理します:
   - 一般的な単語やストップワードをフィルタリング
   - 大文字小文字と単語の形式を正規化
   - 特殊用語や複合語を処理
2. 
キーワードのカバレッジを計算します:
   - テキスト間でキーワードをマッチング
   - 成功したマッチをカウント
   - カバレッジ比率を計算

最終スコア:`(matched_keywords / total_keywords) * scale`

### スコアの解釈

(0からスケール、デフォルトは0-1)

- 1.0:完璧なキーワードカバレッジ
- 0.7-0.9:ほとんどのキーワードが存在する良好なカバレッジ
- 0.4-0.6:一部のキーワードが欠けている中程度のカバレッジ
- 0.1-0.3:多くのキーワードが欠けている貧弱なカバレッジ
- 0.0:キーワードの一致なし

## 分析を含む例

```typescript
import { KeywordCoverageMetric } from "@mastra/evals/nlp";

const metric = new KeywordCoverageMetric();

// Perfect coverage example
const result1 = await metric.measure(
  "The quick brown fox jumps over the lazy dog",
  "A quick brown fox jumped over a lazy dog"
);
// {
//   score: 1.0,
//   info: {
//     matchedKeywords: 6,
//     totalKeywords: 6
//   }
// }

// Partial coverage example
const result2 = await metric.measure(
  "Python features include easy syntax, dynamic typing, and extensive libraries",
  "Python has simple syntax and many libraries"
);
// {
//   score: 0.67,
//   info: {
//     matchedKeywords: 4,
//     totalKeywords: 6
//   }
// }

// Technical terms example
const result3 = await metric.measure(
  "Discuss React.js component lifecycle and state management",
  "React components have lifecycle methods and manage state"
);
// {
//   score: 1.0,
//   info: {
//     matchedKeywords: 4,
//     totalKeywords: 4
//   }
// }
```

## 特殊なケース

このメトリクスはいくつかの特殊なケースを処理します:

- 空の入力/出力:両方が空の場合は1.0のスコアを返し、片方だけが空の場合は0.0を返します
- 単一の単語:単一のキーワードとして扱われます
- 専門用語:複合的な専門用語(例:「React.js」、「machine learning」)を保持します
- 大文字小文字の違い:「JavaScript」は「javascript」とマッチします
- 一般的な単語:意味のあるキーワードに焦点を当てるため、スコアリングでは無視されます

## 関連項目

- [完全性メトリック](./completeness)
- [コンテンツ類似性メトリック](./content-similarity)
- [回答関連性メトリック](./answer-relevancy)
- [テキスト差異メトリック](./textual-difference)
- [コンテキスト関連性メトリック](./context-relevancy)

---
title: "リファレンス: プロンプトアライメント | メトリクス | 評価 | Mastra ドキュメント"
description: Mastraのプロンプトアライメントメトリクスに関するドキュメント。LLMの出力が与えられたプロンプト指示にどれだけ忠実に従っているかを評価します。
---

# PromptAlignmentMetric

[JA] Source: https://mastra.ai/ja/reference/evals/prompt-alignment

`PromptAlignmentMetric`クラスは、LLMの出力が与えられたプロンプト指示のセットにどれだけ厳密に従っているかを評価します。これは、各指示が正確に従われているかを検証するジャッジベースのシステムを使用し、逸脱がある場合には詳細な理由を提供します。

## 基本的な使用方法

```typescript
import { openai } from "@ai-sdk/openai";
import { PromptAlignmentMetric } from "@mastra/evals/llm";

// Configure the model for evaluation
const model = openai("gpt-4o-mini");

const instructions = [
  "Start sentences with capital letters",
  "End each sentence with a period",
  "Use present tense",
];

const metric = new PromptAlignmentMetric(model, {
  instructions,
  scale: 1,
});

const result = await metric.measure(
  "describe the weather",
  "The sun is shining. Clouds float in the sky. A gentle breeze blows.",
);

console.log(result.score); // Alignment score from 0-1
console.log(result.info.reason); // Explanation of the score
```

## コンストラクタパラメータ

### PromptAlignmentOptions

## measure() パラメータ

## 戻り値

## スコアリングの詳細

このメトリックは以下を通じて指示の整合性を評価します:

- 各指示の適用可能性の評価
- 適用可能な指示に対する厳格な遵守評価
- すべての判定に対する詳細な根拠
- 適用可能な指示に基づく比例スコアリング

### 指示の判定

各指示は以下の3つの判定のいずれかを受けます:

- "yes":指示が適用可能であり、完全に従っている
- "no":指示が適用可能だが、従っていないか部分的にしか従っていない
- "n/a":指示が与えられたコンテキストに適用できない
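この判定からスコアを求める計算("n/a" を母数から除外する点が特徴)を、仮の判定値で具体化した最小スケッチです。詳細は以下のスコアリングプロセスを参照してください:

```typescript
// 各指示への判定(仮の値)
const verdicts = ["yes", "no", "n/a", "yes"];

const applicable = verdicts.filter((v) => v !== "n/a"); // "n/a" は計算から除外
const followed = applicable.filter((v) => v === "yes").length;

const scale = 1;
const score = applicable.length > 0 ? (followed / applicable.length) * scale : 0;
console.log(score.toFixed(2)); // 2 / 3 ≈ 0.67
```

### スコアリングプロセス

1. 指示の適用可能性を評価:
   - 各指示がコンテキストに適用されるかどうかを判断
   - 関連性のない指示を "n/a" としてマーク
   - ドメイン固有の要件を考慮
2. 適用可能な指示の遵守を評価:
   - 各適用可能な指示を独立して評価
   - "yes" の判定には完全な遵守が必要
   - すべての判定に対する具体的な理由を文書化
3. 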
整合性スコアを計算: - 従った指示("yes" の判定)をカウント - 適用可能な指示の総数("n/a" を除く)で割る - 設定された範囲にスケーリング 最終スコア:`(followed_instructions / applicable_instructions) * scale` ### 重要な考慮事項 - 空の出力: - すべての書式指示は適用可能と見なされる - 要件を満たすことができないため "no" とマークされる - ドメイン固有の指示: - 照会されたドメインに関するものであれば常に適用可能 - 従っていない場合は "n/a" ではなく "no" とマークされる - "n/a" の判定: - 完全に異なるドメインに対してのみ使用 - 最終スコア計算に影響しない ### スコアの解釈 (0からスケールまで、デフォルトは0-1) - 1.0:適用可能なすべての指示に完璧に従っている - 0.7-0.9:適用可能な指示のほとんどに従っている - 0.4-0.6:適用可能な指示への遵守が混在している - 0.1-0.3:適用可能な指示への遵守が限定的 - 0.0:適用可能な指示に全く従っていない ## 分析を含む例 ```typescript import { openai } from "@ai-sdk/openai"; import { PromptAlignmentMetric } from "@mastra/evals/llm"; // Configure the model for evaluation const model = openai("gpt-4o-mini"); const metric = new PromptAlignmentMetric(model, { instructions: [ "Use bullet points for each item", "Include exactly three examples", "End each point with a semicolon" ], scale: 1 }); const result = await metric.measure( "List three fruits", "• Apple is red and sweet; • Banana is yellow and curved; • Orange is citrus and round." ); // Example output: // { // score: 1.0, // info: { // reason: "The score is 1.0 because all instructions were followed exactly: // bullet points were used, exactly three examples were provided, and // each point ends with a semicolon." // } // } const result2 = await metric.measure( "List three fruits", "1. Apple 2. Banana 3. Orange and Grape" ); // Example output: // { // score: 0.33, // info: { // reason: "The score is 0.33 because: numbered lists were used instead of bullet points, // no semicolons were used, and four fruits were listed instead of exactly three." // } // } ``` ## 関連項目 - [回答関連性メトリック](./answer-relevancy) - [キーワードカバレッジメトリック](./keyword-coverage) --- title: "リファレンス: 要約 | メトリクス | Evals | Mastra ドキュメント" description: Mastraにおける要約メトリクスのドキュメントで、コンテンツと事実の正確性に関するLLM生成要約の品質を評価します。 --- # SummarizationMetric [JA] Source: https://mastra.ai/ja/reference/evals/summarization `SummarizationMetric`は、LLMの要約が元のテキストの内容をどれだけうまく捉え、事実の正確性を維持しているかを評価します。これは、アライメント(事実の正確性)とカバレッジ(重要な情報の含有)の2つの側面を組み合わせ、両方の品質が良い要約に必要であることを保証するために最小スコアを使用します。 ## 基本的な使用法 ```typescript import { openai } from "@ai-sdk/openai"; import { SummarizationMetric } from "@mastra/evals/llm"; // Configure the model for evaluation const model = openai("gpt-4o-mini"); const metric = new SummarizationMetric(model); const result = await metric.measure( "The company was founded in 1995 by John Smith. It started with 10 employees and grew to 500 by 2020. The company is based in Seattle.", "Founded in 1995 by John Smith, the company grew from 10 to 500 employees by 2020.", ); console.log(result.score); // Score from 0-1 console.log(result.info); // Object containing detailed metrics about the summary ``` ## コンストラクタのパラメータ ### SummarizationMetricOptions ## measure() パラメーター ## 戻り値 ## スコアリングの詳細 このメトリックは、2つの重要なコンポーネントを通じて要約を評価します: 1. **アライメントスコア**: 事実の正確性を測定 - 要約から主張を抽出 - 各主張を元のテキストと照合 - 「はい」、「いいえ」、または「不明」の判定を行う 2. **カバレッジスコア**: 重要な情報の包含を測定 - 元のテキストから重要な質問を生成 - 要約がこれらの質問に答えているか確認 - 情報の包含を確認し、包括性を評価 ### スコアリングプロセス 1. アライメントスコアを計算: - 要約から主張を抽出 - ソーステキストと照合 - 計算: `supported_claims / total_claims` 2. 
カバレッジスコアを決定: - ソースから質問を生成 - 要約での回答を確認 - 完全性を評価 - 計算: `answerable_questions / total_questions` 最終スコア: `min(alignment_score, coverage_score) * scale` ### スコアの解釈 (0からscaleまで、デフォルトは0-1) - 1.0: 完璧な要約 - 完全に事実であり、すべての重要な情報をカバー - 0.7-0.9: 軽微な省略やわずかな不正確さがある強力な要約 - 0.4-0.6: 重大なギャップや不正確さがある中程度の品質 - 0.1-0.3: 主要な省略や事実誤認がある低品質の要約 - 0.0: 無効な要約 - 完全に不正確または重要な情報が欠落 ## 分析付きの例 ```typescript import { openai } from "@ai-sdk/openai"; import { SummarizationMetric } from "@mastra/evals/llm"; // モデルを評価用に設定 const model = openai("gpt-4o-mini"); const metric = new SummarizationMetric(model); const result = await metric.measure( "電気自動車会社Teslaは2003年にMartin EberhardとMarc Tarpenningによって設立されました。Elon Muskは2004年に最大の投資家として参加し、2008年にCEOになりました。会社の最初の車であるRoadsterは2008年に発売されました。", "Teslaは2003年にElon Muskによって設立され、2008年にRoadsterで電気自動車業界を革新しました。", ); // 出力例: // { // score: 0.5, // info: { // reason: "スコアが0.5である理由は、カバレッジが良好(0.75)である一方で、 // 設立年、最初の車種、発売日を言及しているが、 // 会社の設立をElon Muskに誤って帰属しているため、 // アライメントスコアが低い(0.5)からです。 // 最終スコアはこれら二つのスコアの最小値を取り、 // 事実の正確性とカバレッジの両方が良い要約に必要であることを保証します。" // alignmentScore: 0.5, // coverageScore: 0.75, // } // } ``` ## 関連 - [Faithfulness Metric](./faithfulness) - [Completeness Metric](./completeness) - [Contextual Recall Metric](./contextual-recall) - [Hallucination Metric](./hallucination) --- title: "リファレンス: テキスト差分 | Evals | Mastra ドキュメント" description: Mastraにおけるテキスト差分メトリックのドキュメントで、シーケンスマッチングを使用して文字列間のテキスト差分を測定します。 --- # TextualDifferenceMetric [JA] Source: https://mastra.ai/ja/reference/evals/textual-difference `TextualDifferenceMetric` クラスは、シーケンスマッチングを使用して2つの文字列間のテキストの違いを測定します。これは、あるテキストを別のテキストに変換するために必要な操作の数を含む、変更に関する詳細な情報を提供します。 ## 基本的な使用法 ```typescript import { TextualDifferenceMetric } from "@mastra/evals/nlp"; const metric = new TextualDifferenceMetric(); const result = await metric.measure( "The quick brown fox", "The fast brown fox" ); console.log(result.score); // 0-1の類似度比率 console.log(result.info); // 詳細な変更メトリクス ``` ## measure() パラメーター ## 戻り値 ## スコアリングの詳細 このメトリックは、いくつかの測定を計算します: - **類似度比**: テキスト間のシーケンスマッチングに基づく (0-1) - **変更**: 一致しない操作の必要数 - **長さの差**: テキストの長さの正規化された差 - **信頼度**: 長さの差に反比例 ### スコアリングプロセス 1. テキストの違いを分析します: - 入力と出力の間でシーケンスマッチングを行う - 必要な変更操作の数を数える - 長さの差を測定する 2. メトリックを計算します: - 類似度比を計算する - 信頼度スコアを決定する - 重み付けされたスコアに結合する 最終スコア: `(similarity_ratio * confidence) * scale` ### スコアの解釈 (0からスケールまで、デフォルトは0-1) - 1.0: 同一のテキスト - 違いなし - 0.7-0.9: 小さな違い - 少しの変更が必要 - 0.4-0.6: 中程度の違い - かなりの変更 - 0.1-0.3: 大きな違い - 大幅な変更 - 0.0: 完全に異なるテキスト ## 分析付きの例 ```typescript import { TextualDifferenceMetric } from "@mastra/evals/nlp"; const metric = new TextualDifferenceMetric(); const result = await metric.measure( "Hello world! How are you?", "Hello there! How is it going?" 
); // Example output: // { // score: 0.65, // info: { // confidence: 0.95, // ratio: 0.65, // changes: 2, // lengthDiff: 0.05 // } // } ``` ## 関連 - [コンテンツ類似性メトリック](./content-similarity) - [完全性メトリック](./completeness) - [キーワードカバレッジメトリック](./keyword-coverage) --- title: "リファレンス: トーンの一貫性 | メトリクス | Evals | Mastra ドキュメント" description: Mastraにおけるトーンの一貫性メトリクスのドキュメントで、テキストの感情的トーンと感情の一貫性を評価します。 --- # ToneConsistencyMetric [JA] Source: https://mastra.ai/ja/reference/evals/tone-consistency `ToneConsistencyMetric` クラスは、テキストの感情的なトーンと感情の一貫性を評価します。これは、入力/出力ペア間のトーンを比較するモードと、単一のテキスト内のトーンの安定性を分析するモードの2つのモードで動作できます。 ## 基本的な使用法 ```typescript import { ToneConsistencyMetric } from "@mastra/evals/nlp"; const metric = new ToneConsistencyMetric(); // 入力と出力のトーンを比較 const result1 = await metric.measure( "I love this amazing product!", "This product is wonderful and fantastic!" ); // 単一のテキストでのトーンの安定性を分析 const result2 = await metric.measure( "The service is excellent. The staff is friendly. The atmosphere is perfect.", "" // 単一テキスト分析のための空文字列 ); console.log(result1.score); // 0-1のトーン一貫性スコア console.log(result2.score); // 0-1のトーン安定性スコア ``` ## measure() パラメーター ## 戻り値 ### info オブジェクト (トーン比較) ### info オブジェクト (トーン安定性) ## スコアリングの詳細 この指標は、トーンパターン分析とモード固有のスコアリングを通じて感情の一貫性を評価します。 ### スコアリングプロセス 1. トーンパターンを分析します: - 感情の特徴を抽出 - 感情スコアを計算 - トーンの変動を測定 2. モード固有のスコアを計算します: **トーンの一貫性**(入力と出力): - テキスト間の感情を比較 - 感情の差を計算 - スコア = 1 - (感情の差 / 最大差) **トーンの安定性**(単一入力): - 文全体の感情を分析 - 感情の分散を計算 - スコア = 1 - (感情の分散 / 最大分散) 最終スコア: `mode_specific_score * scale` ### スコアの解釈 (0からスケールまで、デフォルトは0-1) - 1.0: 完璧なトーンの一貫性/安定性 - 0.7-0.9: 軽微な変動を伴う強い一貫性 - 0.4-0.6: 顕著な変化を伴う中程度の一貫性 - 0.1-0.3: 大きなトーンの変化を伴う低い一貫性 - 0.0: 一貫性なし - 完全に異なるトーン ## 両方のモードの例 ```typescript import { ToneConsistencyMetric } from "@mastra/evals/nlp"; const metric = new ToneConsistencyMetric(); // トーンの一貫性モード const consistencyResult = await metric.measure( "この製品は素晴らしくて驚くべきものです!", "製品は優れていて素晴らしいです!" ); // 例の出力: // { // score: 0.95, // info: { // responseSentiment: 0.8, // referenceSentiment: 0.75, // difference: 0.05 // } // } // トーンの安定性モード const stabilityResult = await metric.measure( "素晴らしいサービス!フレンドリーなスタッフ。完璧な雰囲気。", "" ); // 例の出力: // { // score: 0.9, // info: { // avgSentiment: 0.6, // sentimentVariance: 0.1 // } // } ``` ## 関連 - [コンテンツ類似性メトリック](./content-similarity) - [毒性メトリック](./toxicity) --- title: "リファレンス: 有害性 | メトリクス | Evals | Mastra ドキュメント" description: Mastraにおける有害性メトリクスのドキュメントで、人種差別的、偏見的、または有害な要素を含むLLM出力を評価します。 --- # ToxicityMetric [JA] Source: https://mastra.ai/ja/reference/evals/toxicity `ToxicityMetric` クラスは、LLM の出力に人種差別的、偏見的、または有害な要素が含まれているかどうかを評価します。これは、個人攻撃、嘲笑、ヘイトスピーチ、軽蔑的な発言、脅迫を含むさまざまな形態の有害性を分析するために、判定ベースのシステムを使用します。 ## 基本的な使用法 ```typescript import { openai } from "@ai-sdk/openai"; import { ToxicityMetric } from "@mastra/evals/llm"; // Configure the model for evaluation const model = openai("gpt-4o-mini"); const metric = new ToxicityMetric(model, { scale: 1, // Default scale is 0-1 }); const result = await metric.measure( "How is Sarah as a person?", "Sarah is a dedicated team member who always delivers quality work.", ); console.log(result.score); // Score from 0-1 (0 = not toxic, 1 = toxic) console.log(result.info.reason); // Explanation of the toxicity assessment ``` ## コンストラクタのパラメータ ### ToxicityMetricOptions ## measure() パラメーター ## 戻り値 ## スコアリングの詳細 このメトリックは、複数の側面から毒性を評価します: - 個人攻撃 - 嘲笑や皮肉 - ヘイトスピーチ - 軽蔑的な発言 - 脅迫や威圧 ### スコアリングプロセス 1. 有害な要素を分析します: - 個人攻撃と嘲笑を特定 - ヘイトスピーチと脅迫を検出 - 軽蔑的な発言を評価 - 深刻度レベルを評価 2. 
毒性スコアを計算します:
   - 検出された要素を重み付け
   - 深刻度の評価を組み合わせ
   - スケールに正規化

最終スコア: `(toxicity_weighted_sum / max_toxicity) * scale`

### スコアの解釈

(0からスケールまで、デフォルトは0-1)

- 0.8-1.0: 深刻な毒性
- 0.4-0.7: 中程度の毒性
- 0.1-0.3: 軽度の毒性
- 0.0: 有害な要素は検出されませんでした

## カスタム設定の例

```typescript
import { openai } from "@ai-sdk/openai";
import { ToxicityMetric } from "@mastra/evals/llm";

const model = openai("gpt-4o-mini");

const metric = new ToxicityMetric(model, {
  scale: 10, // 0-1の代わりに0-10のスケールを使用
});

const result = await metric.measure(
  "新しいチームメンバーについてどう思いますか?",
  "新しいチームメンバーは有望ですが、基本的なスキルの大幅な改善が必要です。",
);
```

## 関連

- [トーンの一貫性メトリック](./tone-consistency)
- [バイアスメトリック](./bias)

---
title: "API リファレンス"
description: "Mastra API リファレンス"
---

import { ReferenceCards } from "@/components/reference-cards";

# リファレンス

[JA] Source: https://mastra.ai/ja/reference

リファレンスセクションでは、パラメータ、タイプ、使用例を含むMastraのAPIのドキュメントを提供しています。

<ReferenceCards />

# Memory クラスリファレンス

[JA] Source: https://mastra.ai/ja/reference/memory/Memory

`Memory` クラスは、Mastra における会話履歴とスレッドベースのメッセージストレージを管理するための堅牢なシステムを提供します。これにより、会話の永続的な保存、セマンティック検索機能、および効率的なメッセージの取得が可能になります。デフォルトでは、LibSQL をストレージとベクトル検索に使用し、FastEmbed を埋め込みに使用します。

## 基本的な使用法

```typescript copy showLineNumbers
import { Memory } from "@mastra/memory";
import { Agent } from "@mastra/core/agent";

const agent = new Agent({
  memory: new Memory(),
  ...otherOptions,
});
```

## カスタム設定

```typescript copy showLineNumbers
import { Memory } from "@mastra/memory";
import { LibSQLStore } from "@mastra/core/storage/libsql";
import { LibSQLVector } from "@mastra/core/vector/libsql";
import { Agent } from "@mastra/core/agent";

const memory = new Memory({
  // オプションのストレージ設定 - デフォルトでlibsqlが使用されます
  storage: new LibSQLStore({
    url: "file:memory.db",
  }),

  // セマンティック検索のためのオプションのベクターデータベース - デフォルトでlibsqlが使用されます
  vector: new LibSQLVector({
    url: "file:vector.db",
  }),

  // メモリ設定オプション
  options: {
    // 含める最近のメッセージの数
    lastMessages: 20,

    // セマンティック検索設定
    semanticRecall: {
      topK: 3, // 取得する類似メッセージの数
      messageRange: {
        // 各結果の周囲に含めるメッセージ
        before: 2,
        after: 1,
      },
    },

    // 作業メモリ設定
    workingMemory: {
      enabled: true,
      template: `
# User
- First Name:
- Last Name:
`,
    },
  },
});

const agent = new Agent({
  memory,
  ...otherOptions,
});
```

### 作業メモリ

作業メモリ機能により、エージェントは会話を通じて持続的な情報を保持できます。有効にすると、Memoryクラスはテキストストリームタグまたはツールコールを通じて作業メモリの更新を自動的に管理します。

作業メモリの更新を処理するための2つのモードがあります:

1. **text-stream** (デフォルト): エージェントはMarkdownを含むXMLタグを使用して作業メモリの更新を直接応答に含めます (`# User \n ## Preferences...`)。これらのタグは自動的に処理され、表示される出力から削除されます。
2. 
**tool-call**: エージェントは専用のツールを使用して作業メモリを更新します。このモードは、`toDataStream()`を使用する場合に使用する必要があります。text-streamモードはデータストリーミングと互換性がありません。さらに、このモードはメモリ更新に対するより明示的な制御を提供し、テキストタグの管理よりもツールの使用が得意なエージェントと連携する場合に好まれるかもしれません。 設定例: ```typescript copy showLineNumbers const memory = new Memory({ options: { workingMemory: { enabled: true, template: "# User\n- **First Name**:\n- **Last Name**:", use: "tool-call", // または 'text-stream' }, }, }); ``` テンプレートが提供されていない場合、Memoryクラスはユーザーの詳細、好み、目標、その他のコンテキスト情報をMarkdown形式で含むデフォルトテンプレートを使用します。詳細な使用例とベストプラクティスについては、[作業メモリガイド](/docs/memory/working-memory.mdx#designing-effective-templates)を参照してください。 ### embedder デフォルトでは、Memoryは`bge-small-en-v1.5`モデルを使用したFastEmbedを使用します。これはパフォーマンスとモデルサイズ(約130MB)のバランスが良好です。異なるモデルやプロバイダーを使用したい場合のみ、embedderを指定する必要があります。 ローカル埋め込みがサポートされていない環境では、APIベースのembedderを使用できます: ```typescript {6} import { Memory } from "@mastra/memory"; import { openai } from "@ai-sdk/openai"; const agent = new Agent({ memory: new Memory({ embedder: openai.embedding("text-embedding-3-small"), // ネットワークリクエストを追加 }), }); ``` Mastraは、OpenAI、Google、Mistral、Cohereからのオプションを含む、多くの埋め込みモデルを[Vercel AI SDK](https://sdk.vercel.ai/docs/ai-sdk-core/embeddings)を通じてサポートしています。 ## パラメータ ### options ### 関連 - [メモリの始め方](/docs/memory/overview.mdx) - [セマンティックリコール](/docs/memory/semantic-recall.mdx) - [ワーキングメモリ](/docs/memory/working-memory.mdx) - [メモリプロセッサ](/docs/memory/memory-processors.mdx) - [createThread](/reference/memory/createThread.mdx) - [query](/reference/memory/query.mdx) - [getThreadById](/reference/memory/getThreadById.mdx) - [getThreadsByResourceId](/reference/memory/getThreadsByResourceId.mdx) # createThread [JA] Source: https://mastra.ai/ja/reference/memory/createThread メモリシステム内で新しい会話スレッドを作成します。各スレッドは個別の会話またはコンテキストを表し、複数のメッセージを含むことができます。 ## 使用例 ```typescript import { Memory } from "@mastra/memory"; const memory = new Memory({ /* config */ }); const thread = await memory.createThread({ resourceId: "user-123", title: "Support Conversation", metadata: { category: "support", priority: "high" } }); ``` ## パラメーター ", description: "スレッドに関連付けるオプションのメタデータ", isOptional: true, }, ]} /> ## 戻り値 ", description: "スレッドに関連付けられた追加のメタデータ", }, ]} /> ### 関連項目 - [Memory クラスリファレンス](/reference/memory/Memory.mdx) - [Memory の始め方](/docs/memory/overview.mdx)(スレッドの概念をカバー) - [getThreadById](/reference/memory/getThreadById.mdx) - [getThreadsByResourceId](/reference/memory/getThreadsByResourceId.mdx) - [query](/reference/memory/query.mdx) # getThreadById リファレンス [JA] Source: https://mastra.ai/ja/reference/memory/getThreadById `getThreadById` 関数は、ストレージからIDによって特定のスレッドを取得します。 ## 使用例 ```typescript import { Memory } from "@mastra/core/memory"; const memory = new Memory(config); const thread = await memory.getThreadById({ threadId: "thread-123" }); ``` ## パラメーター ## 戻り値 ### 関連 - [Memory クラスリファレンス](/reference/memory/Memory.mdx) - [メモリの概要](/docs/memory/overview.mdx) (スレッドの概念をカバー) - [createThread](/reference/memory/createThread.mdx) - [getThreadsByResourceId](/reference/memory/getThreadsByResourceId.mdx) # getThreadsByResourceId リファレンス [JA] Source: https://mastra.ai/ja/reference/memory/getThreadsByResourceId `getThreadsByResourceId` 関数は、特定のリソースIDに関連付けられたすべてのスレッドをストレージから取得します。 ## 使用例 ```typescript import { Memory } from "@mastra/core/memory"; const memory = new Memory(config); const threads = await memory.getThreadsByResourceId({ resourceId: "resource-123", }); ``` ## パラメーター ## 戻り値 ### 関連 - [Memory クラスリファレンス](/reference/memory/Memory.mdx) - [Memory の始め方](/docs/memory/overview.mdx) (スレッド/リソースの概念をカバー) - 
[createThread](/reference/memory/createThread.mdx)
- [getThreadById](/reference/memory/getThreadById.mdx)

# query

[JA] Source: https://mastra.ai/ja/reference/memory/query

特定のスレッドからメッセージを取得し、ページネーションとフィルタリングオプションをサポートします。

## 使用例

```typescript
import { Memory } from "@mastra/memory";

const memory = new Memory({
  /* config */
});

// 最後の50件のメッセージを取得
const { messages, uiMessages } = await memory.query({
  threadId: "thread-123",
  selectBy: {
    last: 50,
  },
});

// 特定のメッセージ周辺のコンテキストを含むメッセージを取得
const { messages: contextMessages } = await memory.query({
  threadId: "thread-123",
  selectBy: {
    include: [
      {
        id: "msg-123", // このメッセージのみを取得(コンテキストなし)
      },
      {
        id: "msg-456", // カスタムコンテキスト付きでこのメッセージを取得
        withPreviousMessages: 3, // 前の3件のメッセージ
        withNextMessages: 1, // 次の1件のメッセージ
      },
    ],
  },
});

// メッセージ内のセマンティック検索
const { messages: searchResults } = await memory.query({
  threadId: "thread-123",
  selectBy: {
    vectorSearchString: "デプロイメントについて何が議論されましたか?",
  },
  threadConfig: {
    historySearch: true,
  },
});
```

## パラメーター

### selectBy

### include

## 戻り値

## 追加の注意事項

`query` 関数は2つの異なるメッセージ形式を返します:

- `messages`: 内部で使用されるコアメッセージ形式
- `uiMessages`: ツールの呼び出しと結果の適切なスレッド化を含む、UI表示に適したフォーマット済みメッセージ

### 関連

- [Memory クラスリファレンス](/reference/memory/Memory.mdx)
- [Memory の始め方](/docs/memory/overview.mdx)
- [セマンティックリコール](/docs/memory/semantic-recall.mdx)
- [createThread](/reference/memory/createThread.mdx)

---
title: 'AgentNetwork(実験的)'
description: 'AgentNetworkクラスのリファレンスドキュメント'
---

# AgentNetwork (実験的)

[JA] Source: https://mastra.ai/ja/reference/networks/agent-network

> **注:** AgentNetwork機能は実験的であり、将来のリリースで変更される可能性があります。

`AgentNetwork`クラスは、複雑なタスクを解決するために協力できる専門化されたエージェントのネットワークを作成する方法を提供します。実行パスを明示的に制御する必要があるWorkflowsとは異なり、AgentNetworkはLLMベースのルーターを使用して、次に呼び出すエージェントを動的に決定します。

## 主要な概念

- **LLMベースのルーティング**: AgentNetworkは、エージェントを最適に活用する方法をLLMを用いて決定します
- **エージェントの協力**: 複数の専門エージェントが協力して複雑なタスクを解決できます
- **動的な意思決定**: ルーターはタスクの要件に基づいてどのエージェントを呼び出すかを決定します

## 使用法

```typescript
import { AgentNetwork } from '@mastra/core/network';
import { Agent } from '@mastra/core/agent';
import { openai } from '@ai-sdk/openai';

// Create specialized agents
const webSearchAgent = new Agent({
  name: 'Web Search Agent',
  instructions: 'You search the web for information.',
  model: openai('gpt-4o'),
  tools: { /* web search tools */ },
});

const dataAnalysisAgent = new Agent({
  name: 'Data Analysis Agent',
  instructions: 'You analyze data and provide insights.',
  model: openai('gpt-4o'),
  tools: { /* data analysis tools */ },
});

// Create the network
const researchNetwork = new AgentNetwork({
  name: 'Research Network',
  instructions: 'Coordinate specialized agents to research topics thoroughly.',
  model: openai('gpt-4o'),
  agents: [webSearchAgent, dataAnalysisAgent],
});

// Use the network
const result = await researchNetwork.generate('Research the impact of climate change on agriculture');
console.log(result.text);
```

## コンストラクタ

```typescript
constructor(config: AgentNetworkConfig)
```

### パラメータ

- `config`: AgentNetworkの設定オブジェクト
  - `name`: ネットワークの名前
  - `instructions`: ルーティングエージェントのための指示
  - `model`: ルーティングに使用する言語モデル
  - `agents`: ネットワーク内の専門エージェントの配列

## メソッド

### generate()

エージェントネットワークを使用して応答を生成します。このメソッドは、コードベースの他の部分との一貫性を保つため、廃止された`run()`メソッドに代わるものです。

```typescript
async generate(
  messages: string | string[] | CoreMessage[],
  args?: AgentGenerateOptions
): Promise
```

### stream()

エージェントネットワークを使用して応答をストリームします。

```typescript
async stream(
  messages: string | string[] | CoreMessage[],
  args?: AgentStreamOptions
): Promise
```

### getRoutingAgent()

ネットワークで使用されるルーティングエージェントを返します。

```typescript
getRoutingAgent(): Agent
```

### getAgents()

ネットワーク内の専門エージェントの配列を返します。
```typescript getAgents(): Agent[] ``` ### getAgentHistory() 特定のエージェントのインタラクション履歴を返します。 ```typescript getAgentHistory(agentId: string): Array<{ input: string; output: string; timestamp: string; }> ``` ### getAgentInteractionHistory() ネットワーク内で発生したすべてのエージェントインタラクションの履歴を返します。 ```typescript getAgentInteractionHistory(): Record< string, Array<{ input: string; output: string; timestamp: string; }> > ``` ### getAgentInteractionSummary() エージェントインタラクションの時系列順にフォーマットされた概要を返します。 ```typescript getAgentInteractionSummary(): string ``` ## AgentNetworkとWorkflowsの使い分け - **AgentNetworkを使用する場合:** タスクの要件に基づいて動的なルーティングを行い、AIがエージェントの最適な使用方法を見つけることを望むとき。 - **Workflowsを使用する場合:** エージェント呼び出しの事前に決められたシーケンスと条件ロジックを用いて、実行パスを明示的に制御する必要があるとき。 ## 内部ツール AgentNetworkは、ルーティングエージェントが専門のエージェントを呼び出すことを可能にする特別な`transmit`ツールを使用します。このツールは以下を処理します: - 単一エージェントの呼び出し - 複数の並列エージェントの呼び出し - エージェント間のコンテキスト共有 ## 制限事項 - AgentNetworkアプローチは、同じタスクに対してよく設計されたWorkflowよりも多くのトークンを使用する可能性があります - ルーティングの決定がLLMによって行われるため、デバッグがより困難になることがあります - パフォーマンスは、ルーティング指示の品質や専門エージェントの能力に基づいて変動する可能性があります --- title: "リファレンス: createLogger() | Mastra Observability ドキュメント" description: 指定された設定に基づいてロガーをインスタンス化する createLogger 関数のドキュメント。 --- # createLogger() [JA] Source: https://mastra.ai/ja/reference/observability/create-logger `createLogger()` 関数は、指定された設定に基づいてロガーをインスタンス化するために使用されます。タイプとそのタイプに関連する追加のパラメータを指定することで、コンソールベース、ファイルベース、または Upstash Redis ベースのロガーを作成できます。 ### 使用法 #### コンソールロガー(開発) ```typescript showLineNumbers copy const consoleLogger = createLogger({ name: "Mastra", level: "debug" }); consoleLogger.info("App started"); ``` #### ファイルトランスポート(構造化ログ) ```typescript showLineNumbers copy import { FileTransport } from "@mastra/loggers/file"; const fileLogger = createLogger({ name: "Mastra", transports: { file: new FileTransport({ path: "test-dir/test.log" }) }, level: "warn", }); fileLogger.warn("Low disk space", { destinationPath: "system", type: "WORKFLOW", }); ``` #### Upstash ロガー(リモートログドレイン) ```typescript showLineNumbers copy import { UpstashTransport } from "@mastra/loggers/upstash"; const logger = createLogger({ name: "Mastra", transports: { upstash: new UpstashTransport({ listName: "production-logs", upstashUrl: process.env.UPSTASH_URL!, upstashToken: process.env.UPSTASH_TOKEN!, }), }, level: "info", }); logger.info({ message: "User signed in", destinationPath: "auth", type: "AGENT", runId: "run_123", }); ``` ### パラメータ --- title: "リファレンス: Logger インスタンス | Mastra Observability ドキュメント" description: 様々な重大度レベルでイベントを記録するためのメソッドを提供する Logger インスタンスのドキュメント。 --- # ロガーインスタンス [JA] Source: https://mastra.ai/ja/reference/observability/logger ロガーインスタンスは `createLogger()` によって作成され、さまざまな重大度レベルでイベントを記録するためのメソッドを提供します。ロガーの種類に応じて、メッセージはコンソール、ファイル、または外部サービスに書き込まれることがあります。 ## 例 ```typescript showLineNumbers copy // Using a console logger const logger = createLogger({ name: 'Mastra', level: 'info' }); logger.debug('Debug message'); // Won't be logged because level is INFO logger.info({ message: 'User action occurred', destinationPath: 'user-actions', type: 'AGENT' }); // Logged logger.error('An error occurred'); // Logged as ERROR ``` ## メソッド void | Promise', description: 'DEBUGレベルのログを書き込みます。レベルがDEBUG以下の場合のみ記録されます。', }, { name: 'info', type: '(message: BaseLogMessage | string, ...args: any[]) => void | Promise', description: 'INFOレベルのログを書き込みます。レベルがINFO以下の場合のみ記録されます。', }, { name: 'warn', type: '(message: BaseLogMessage | string, ...args: any[]) => void | Promise', description: 'WARNレベルのログを書き込みます。レベルがWARN以下の場合のみ記録されます。', }, { name: 'error', type: '(message: BaseLogMessage | string, 
...args: any[]) => void | Promise', description: 'ERRORレベルのログを書き込みます。レベルがERROR以下の場合のみ記録されます。', }, { name: 'cleanup', type: '() => Promise', isOptional: true, description: 'ロガーが保持するリソースをクリーンアップします(例:Upstashのネットワーク接続)。すべてのロガーがこれを実装しているわけではありません。', }, ]} />

**注意:** 一部のロガーは`BaseLogMessage`オブジェクト(`message`、`destinationPath`、`type`フィールドを含む)を必要とします。例えば、`File`および`Upstash`ロガーは構造化されたメッセージを必要とします。

---
title: "リファレンス: OtelConfig | Mastra Observability ドキュメント"
description: OpenTelemetry の計装、トレース、およびエクスポートの動作を設定する OtelConfig オブジェクトのドキュメント。
---

# `OtelConfig`

[JA] Source: https://mastra.ai/ja/reference/observability/otel-config

`OtelConfig` オブジェクトは、アプリケーション内で OpenTelemetry の計装、トレース、およびエクスポートの動作を設定するために使用されます。そのプロパティを調整することで、テレメトリデータ(トレースなど)がどのように収集、サンプリング、およびエクスポートされるかを制御できます。

Mastra 内で `OtelConfig` を使用するには、Mastra を初期化する際に `telemetry` キーの値として渡します。これにより、Mastra はトレースと計装のためにカスタムの OpenTelemetry 設定を使用するように構成されます。

```typescript showLineNumbers copy
import { Mastra } from '@mastra/core';

const otelConfig: OtelConfig = {
  serviceName: 'my-awesome-service',
  enabled: true,
  sampling: {
    type: 'ratio',
    probability: 0.5,
  },
  export: {
    type: 'otlp',
    endpoint: 'https://otel-collector.example.com/v1/traces',
    headers: {
      Authorization: 'Bearer YOUR_TOKEN_HERE',
    },
  },
};

// `telemetry` キーの値として渡します
export const mastra = new Mastra({
  telemetry: otelConfig,
});
```

### プロパティ

', isOptional: true, description: 'OTLP リクエストと共に送信する追加のヘッダー。認証やルーティングに役立ちます。', }, ], }, ]} />

---
title: "リファレンス: Braintrust | 観測性 | Mastra ドキュメント"
description: BraintrustをMastraと統合するためのドキュメント。BraintrustはLLMアプリケーションの評価と監視のためのプラットフォームです。
---

# Braintrust

[JA] Source: https://mastra.ai/ja/reference/observability/providers/braintrust

Braintrustは、LLMアプリケーションの評価と監視のためのプラットフォームです。

## 設定

BraintrustをMastraで使用するには、次の環境変数を設定してください:

```env
OTEL_EXPORTER_OTLP_ENDPOINT=https://api.braintrust.dev/otel
OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <your-api-key>, x-bt-parent=project_id:<your-project-id>"
```

## 実装

MastraをBraintrustで使用するための設定方法は次のとおりです:

```typescript
import { Mastra } from "@mastra/core";

export const mastra = new Mastra({
  // ... other config
  telemetry: {
    serviceName: "your-service-name",
    enabled: true,
    export: {
      type: "otlp",
    },
  },
});
```

## ダッシュボード

[braintrust.dev](https://www.braintrust.dev/)でBraintrustダッシュボードにアクセス

---
title: "リファレンス: Dash0 統合 | Mastra オブザーバビリティ ドキュメント"
description: MastraとDash0、OpenTelemetryネイティブのオブザーバビリティソリューションとの統合に関するドキュメント。
---

# Dash0

[JA] Source: https://mastra.ai/ja/reference/observability/providers/dash0

Dash0は、PersesやPrometheusのような他のCNCFプロジェクトとの統合だけでなく、フルスタックの監視機能を提供するOpenTelemetryネイティブの可観測性ソリューションです。

## 設定

Dash0をMastraで使用するには、これらの環境変数を設定してください:

```env
OTEL_EXPORTER_OTLP_ENDPOINT=https://ingress.<region>.dash0.com
OTEL_EXPORTER_OTLP_HEADERS=Authorization=Bearer <your-auth-token>, Dash0-Dataset=<dataset-name>
```

## 実装

MastraをDash0で使用するための設定方法は次のとおりです:

```typescript
import { Mastra } from "@mastra/core";

export const mastra = new Mastra({
  // ... other config
  telemetry: {
    serviceName: "your-service-name",
    enabled: true,
    export: {
      type: "otlp",
    },
  },
});
```
## ダッシュボード

[Dash0](https://www.dash0.com/) のダッシュボードにアクセスして、[Dash0 Integration Hub](https://www.dash0.com/hub/integrations) でより多くの [Distributed Tracing](https://www.dash0.com/distributed-tracing) 統合を行う方法を見つけてください。

---
title: "リファレンス: プロバイダーリスト | オブザーバビリティ | Mastra ドキュメント"
description: Dash0、SigNoz、Braintrust、Langfuse など、Mastra がサポートするオブザーバビリティプロバイダーの概要。
---

# オブザーバビリティプロバイダー

[JA] Source: https://mastra.ai/ja/reference/observability/providers

オブザーバビリティプロバイダーには以下が含まれます:

- [Braintrust](./providers/braintrust.mdx)
- [Dash0](./providers/dash0.mdx)
- [Laminar](./providers/laminar.mdx)
- [Langfuse](./providers/langfuse.mdx)
- [Langsmith](./providers/langsmith.mdx)
- [New Relic](./providers/new-relic.mdx)
- [SigNoz](./providers/signoz.mdx)
- [Traceloop](./providers/traceloop.mdx)

---
title: "リファレンス: Laminar 統合 | Mastra 観測性ドキュメント"
description: LLMアプリケーション向けの専門的なオブザーバビリティプラットフォームであるLaminarをMastraと統合するためのドキュメント。
---

# Laminar

[JA] Source: https://mastra.ai/ja/reference/observability/providers/laminar

Laminarは、LLMアプリケーション向けの専門的なオブザーバビリティプラットフォームです。

## 設定

LaminarをMastraと一緒に使用するには、これらの環境変数を設定してください:

```env
OTEL_EXPORTER_OTLP_ENDPOINT=https://api.lmnr.ai:8443
OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer your_api_key, x-laminar-team-id=your_team_id"
```

## 実装

こちらは、MastraをLaminarで使用するための設定方法です:

```typescript
import { Mastra } from "@mastra/core";

export const mastra = new Mastra({
  // ... other config
  telemetry: {
    serviceName: "your-service-name",
    enabled: true,
    export: {
      type: "otlp",
      protocol: "grpc",
    },
  },
});
```

## ダッシュボード

Laminar ダッシュボードにアクセスするには、[https://lmnr.ai/](https://lmnr.ai/) をご覧ください。

---
title: "リファレンス: Langfuse 統合 | Mastra オブザーバビリティ ドキュメント"
description: LLM アプリケーション向けのオープンソースオブザーバビリティプラットフォームである Langfuse を Mastra と統合するためのドキュメント。
---

# Langfuse

[JA] Source: https://mastra.ai/ja/reference/observability/providers/langfuse

Langfuseは、LLMアプリケーション向けに特別に設計されたオープンソースのオブザーバビリティプラットフォームです。

> **注**: 現在、AI関連の呼び出しのみが詳細なテレメトリーデータを含みます。他の操作はトレースを作成しますが、情報は限られています。

## 設定

LangfuseをMastraと一緒に使用するには、次の環境変数を設定する必要があります:

```env
LANGFUSE_PUBLIC_KEY=your_public_key
LANGFUSE_SECRET_KEY=your_secret_key
LANGFUSE_BASEURL=https://cloud.langfuse.com # オプション - デフォルトはcloud.langfuse.com
```

**重要**: テレメトリーエクスポート設定を構成する際、Langfuseの統合が正しく機能するためには、`serviceName`を`"ai"`に設定する必要があります。

## 実装

こちらは、MastraをLangfuseで使用するための設定方法です:

```typescript
import { Mastra } from "@mastra/core";
import { LangfuseExporter } from "langfuse-vercel";

export const mastra = new Mastra({
  // ... other config
  telemetry: {
    serviceName: "ai", // this must be set to "ai" so that the LangfuseExporter thinks it's an AI SDK trace
    enabled: true,
    export: {
      type: "custom",
      exporter: new LangfuseExporter({
        publicKey: process.env.LANGFUSE_PUBLIC_KEY,
        secretKey: process.env.LANGFUSE_SECRET_KEY,
        baseUrl: process.env.LANGFUSE_BASEURL,
      }),
    },
  },
});
```
other config telemetry: { serviceName: "ai", // this must be set to "ai" so that the LangfuseExporter thinks it's an AI SDK trace enabled: true, export: { type: "custom", exporter: new LangfuseExporter({ publicKey: process.env.LANGFUSE_PUBLIC_KEY, secretKey: process.env.LANGFUSE_SECRET_KEY, baseUrl: process.env.LANGFUSE_BASEURL, }), }, }, }); ``` ## ダッシュボード 設定が完了すると、[cloud.langfuse.com](https://cloud.langfuse.com) のLangfuseダッシュボードでトレースと分析を表示できます。 --- title: "リファレンス: LangSmith 統合 | Mastra オブザーバビリティ ドキュメント" description: LLMアプリケーションのデバッグ、テスト、評価、監視のためのプラットフォームであるMastraとLangSmithを統合するためのドキュメント。 --- # LangSmith [JA] Source: https://mastra.ai/ja/reference/observability/providers/langsmith LangSmithは、LLMアプリケーションのデバッグ、テスト、評価、監視のためのLangChainのプラットフォームです。 > **注**: 現在、この統合はアプリケーション内のAI関連の呼び出しのみをトレースします。他の種類の操作はテレメトリーデータにキャプチャされません。 ## 設定 LangSmithをMastraで使用するには、次の環境変数を設定する必要があります: ```env LANGSMITH_TRACING=true LANGSMITH_ENDPOINT=https://api.smith.langchain.com LANGSMITH_API_KEY=your-api-key LANGSMITH_PROJECT=your-project-name ``` ## 実装 LangSmithを使用するようにMastraを設定する方法は次のとおりです: ```typescript import { Mastra } from "@mastra/core"; import { AISDKExporter } from "langsmith/vercel"; export const mastra = new Mastra({ // ... other config telemetry: { serviceName: "your-service-name", enabled: true, export: { type: "custom", exporter: new AISDKExporter(), }, }, }); ``` ## ダッシュボード LangSmith ダッシュボードでトレースと分析にアクセスするには、[smith.langchain.com](https://smith.langchain.com) をご覧ください。 > **注意**: ワークフローを実行しても、新しいプロジェクトにデータが表示されない場合があります。すべてのプロジェクトを表示するには、Name 列で並べ替えを行い、プロジェクトを選択してから、Root Runs の代わりに LLM Calls でフィルタリングする必要があります。 --- title: "リファレンス: LangWatch 統合 | Mastra オブザーバビリティ ドキュメント" description: LLMアプリケーション向けの専門的なオブザーバビリティプラットフォームであるMastraとLangWatchの統合に関するドキュメント。 --- # LangWatch [JA] Source: https://mastra.ai/ja/reference/observability/providers/langwatch LangWatchは、LLMアプリケーション向けの専門的なオブザーバビリティプラットフォームです。 ## 設定 LangWatchをMastraで使用するには、次の環境変数を設定してください: ```env LANGWATCH_API_KEY=your_api_key LANGWATCH_PROJECT_ID=your_project_id ``` ## 実装 MastraをLangWatchで使用するための設定方法は次のとおりです: ```typescript import { Mastra } from "@mastra/core"; import { LangWatchExporter } from "langwatch"; export const mastra = new Mastra({ // ... other config telemetry: { serviceName: "your-service-name", enabled: true, export: { type: "custom", exporter: new LangWatchExporter({ apiKey: process.env.LANGWATCH_API_KEY, projectId: process.env.LANGWATCH_PROJECT_ID, }), }, }, }); ``` ## ダッシュボード [app.langwatch.ai](https://app.langwatch.ai)でLangWatchダッシュボードにアクセスしてください --- title: "リファレンス: New Relic 統合 | Mastra オブザーバビリティ ドキュメント" description: New Relic と Mastra の統合に関するドキュメント。Mastra は、OpenTelemetry をサポートするフルスタック監視のための包括的なオブザーバビリティ プラットフォームです。 --- # New Relic [JA] Source: https://mastra.ai/ja/reference/observability/providers/new-relic New Relicは、フルスタックモニタリングのためにOpenTelemetry (OTLP) をサポートする包括的なオブザーバビリティプラットフォームです。 ## 設定 OTLPを介してMastraでNew Relicを使用するには、これらの環境変数を設定してください: ```env OTEL_EXPORTER_OTLP_ENDPOINT=https://otlp.nr-data.net:4317 OTEL_EXPORTER_OTLP_HEADERS="api-key=your_license_key" ``` ## 実装 MastraをNew Relicで使用するための設定方法は次のとおりです: ```typescript import { Mastra } from "@mastra/core"; export const mastra = new Mastra({ // ... 
other config telemetry: { serviceName: "your-service-name", enabled: true, export: { type: "otlp", }, }, }); ``` ## ダッシュボード [one.newrelic.com](https://one.newrelic.com) で New Relic One ダッシュボードにテレメトリーデータを表示します --- title: "リファレンス: SigNoz 統合 | Mastra オブザーバビリティ ドキュメント" description: SigNozをMastraと統合するためのドキュメント。Mastraは、OpenTelemetryを通じてフルスタック監視を提供するオープンソースのAPMおよびオブザーバビリティプラットフォームです。 --- # SigNoz [JA] Source: https://mastra.ai/ja/reference/observability/providers/signoz SigNozは、OpenTelemetryを通じてフルスタックの監視機能を提供するオープンソースのAPMおよびオブザーバビリティプラットフォームです。 ## 設定 SigNozをMastraと一緒に使用するには、これらの環境変数を設定してください: ```env OTEL_EXPORTER_OTLP_ENDPOINT=https://ingest.{region}.signoz.cloud:443 OTEL_EXPORTER_OTLP_HEADERS=signoz-ingestion-key=your_signoz_token ``` ## 実装 MastraをSigNozで使用するための設定方法は次のとおりです: ```typescript import { Mastra } from "@mastra/core"; export const mastra = new Mastra({ // ... other config telemetry: { serviceName: "your-service-name", enabled: true, export: { type: "otlp", }, }, }); ``` ## ダッシュボード あなたのSigNozダッシュボードにアクセスするには、[signoz.io](https://signoz.io/)をご覧ください。 --- title: "リファレンス: Traceloop 統合 | Mastra 観測性ドキュメント" description: Traceloop を Mastra と統合するためのドキュメント。Mastra は LLM アプリケーション向けの OpenTelemetry ネイティブの観測性プラットフォームです。 --- # Traceloop [JA] Source: https://mastra.ai/ja/reference/observability/providers/traceloop Traceloopは、LLMアプリケーション向けに特別に設計されたOpenTelemetryネイティブのオブザーバビリティプラットフォームです。 ## 設定 TraceloopをMastraと一緒に使用するには、次の環境変数を設定してください: ```env OTEL_EXPORTER_OTLP_ENDPOINT=https://api.traceloop.com OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer your_api_key, x-traceloop-destination-id=your_destination_id" ``` ## 実装 MastraをTraceloopで使用するための設定方法は次のとおりです: ```typescript import { Mastra } from "@mastra/core"; export const mastra = new Mastra({ // ... other config telemetry: { serviceName: "your-service-name", enabled: true, export: { type: "otlp", }, }, }); ``` ## ダッシュボード [app.traceloop.com](https://app.traceloop.com) で Traceloop ダッシュボードにアクセスして、トレースと分析を確認してください。 --- title: "リファレンス: Astra Vector Store | ベクターデータベース | RAG | Mastra ドキュメント" description: DataStax Astra DBを使用したベクター検索を提供するMastraのAstraVectorクラスのドキュメント。 --- # Astra Vector Store [JA] Source: https://mastra.ai/ja/reference/rag/astra AstraVector クラスは、Apache Cassandra 上に構築されたクラウドネイティブでサーバーレスのデータベースである [DataStax Astra DB](https://www.datastax.com/products/datastax-astra) を使用したベクター検索を提供します。 エンタープライズグレードのスケーラビリティと高可用性を備えたベクター検索機能を提供します。 ## コンストラクタオプション ## メソッド ### createIndex() ### upsert() []", isOptional: true, description: "各ベクトルのメタデータ", }, { name: "ids", type: "string[]", isOptional: true, description: "オプションのベクトルID(提供されない場合は自動生成されます)", }, ]} /> ### query() ", isOptional: true, description: "クエリのメタデータフィルター", }, { name: "includeVector", type: "boolean", isOptional: true, defaultValue: "false", description: "結果にベクトルを含めるかどうか", }, ]} /> ### listIndexes() 文字列としてインデックス名の配列を返します。 ### describeIndex() 返される内容: ```typescript copy interface IndexStats { dimension: number; count: number; metric: "cosine" | "euclidean" | "dotproduct"; } ``` ### deleteIndex() ### updateIndexById() ", isOptional: true, description: "新しいメタデータの値", }, ], }, ]} /> ### deleteIndexById() ## レスポンスタイプ クエリ結果はこの形式で返されます: ```typescript copy interface QueryResult { id: string; score: number; metadata: Record; vector?: number[]; // Only included if includeVector is true } ``` ## エラーハンドリング ストアはキャッチ可能な型付きエラーをスローします: ```typescript copy try { await store.query({ indexName: "index_name", queryVector: queryVector, }); } catch (error) { if (error instanceof VectorStoreError) { console.log(error.code); // 
'connection_failed' | 'invalid_dimension' | etc console.log(error.details); // 追加のエラーコンテキスト } } ``` ## 環境変数 必要な環境変数: - `ASTRA_DB_TOKEN`: あなたのAstra DB APIトークン - `ASTRA_DB_ENDPOINT`: あなたのAstra DB APIエンドポイント ## 関連 - [メタデータフィルター](./metadata-filters) --- title: "リファレンス: Chroma Vector Store | ベクターデータベース | RAG | Mastra ドキュメント" description: MastraのChromaVectorクラスのドキュメントで、ChromaDBを使用したベクター検索を提供します。 --- # Chroma Vector Store [JA] Source: https://mastra.ai/ja/reference/rag/chroma ChromaVector クラスは、オープンソースの埋め込みデータベースである [ChromaDB](https://www.trychroma.com/) を使用したベクター検索を提供します。 メタデータフィルタリングとハイブリッド検索機能を備えた効率的なベクター検索を提供します。 ## コンストラクタオプション ### auth ## メソッド ### createIndex() ### upsert() []", isOptional: true, description: "各ベクトルのメタデータ", }, { name: "ids", type: "string[]", isOptional: true, description: "オプションのベクトルID(提供されない場合は自動生成)", }, { name: "documents", type: "string[]", isOptional: true, description: "Chroma固有: ベクトルに関連付けられた元のテキストドキュメント", }, ]} /> ### query() ", isOptional: true, description: "クエリのメタデータフィルター", }, { name: "includeVector", type: "boolean", isOptional: true, defaultValue: "false", description: "結果にベクトルを含めるかどうか", }, { name: "documentFilter", type: "Record", isOptional: true, description: "Chroma固有: ドキュメント内容に適用するフィルター", }, ]} /> ### listIndexes() 文字列としてインデックス名の配列を返します。 ### describeIndex() 返される内容: ```typescript copy interface IndexStats { dimension: number; count: number; metric: "cosine" | "euclidean" | "dotproduct"; } ``` ### deleteIndex() ### updateIndexById() `update` オブジェクトには以下を含めることができます: ", isOptional: true, description: "既存のメタデータを置き換える新しいメタデータ", }, ]} /> ### deleteIndexById() ## 応答タイプ クエリ結果は次の形式で返されます: ```typescript copy interface QueryResult { id: string; score: number; metadata: Record; document?: string; // Chroma-specific: Original document if it was stored vector?: number[]; // Only included if includeVector is true } ``` ## エラーハンドリング ストアはキャッチ可能な型付きエラーをスローします: ```typescript copy try { await store.query({ indexName: "index_name", queryVector: queryVector, }); } catch (error) { if (error instanceof VectorStoreError) { console.log(error.code); // 'connection_failed' | 'invalid_dimension' | etc console.log(error.details); // 追加のエラーコンテキスト } } ``` ## 関連 - [メタデータフィルター](./metadata-filters) --- title: "リファレンス: .chunk() | ドキュメント処理 | RAG | Mastra ドキュメント" description: Mastraのchunk関数のドキュメントで、さまざまな戦略を使用してドキュメントを小さなセグメントに分割します。 --- # リファレンス: .chunk() [JA] Source: https://mastra.ai/ja/reference/rag/chunk `.chunk()` 関数は、さまざまな戦略とオプションを使用してドキュメントを小さなセグメントに分割します。 ## 例 ```typescript import { MDocument } from '@mastra/rag'; const doc = MDocument.fromMarkdown(` # Introduction This is a sample document that we want to split into chunks. ## Section 1 Here is the first section with some content. ## Section 2 Here is another section with different content. 
`); // Basic chunking with defaults const chunks = await doc.chunk(); // Markdown-specific chunking with header extraction const chunksWithMetadata = await doc.chunk({ strategy: 'markdown', headers: [['#', 'title'], ['##', 'section']], extract: { summary: true, // Extract summaries with default settings keywords: true // Extract keywords with default settings } }); ``` ## パラメータ ## 戦略固有のオプション 戦略固有のオプションは、戦略パラメータと共にトップレベルのパラメータとして渡されます。例えば: ```typescript showLineNumbers copy // HTML戦略の例 const chunks = await doc.chunk({ strategy: 'html', headers: [['h1', 'title'], ['h2', 'subtitle']], // HTML固有のオプション sections: [['div.content', 'main']], // HTML固有のオプション size: 500 // 一般的なオプション }); // Markdown戦略の例 const chunks = await doc.chunk({ strategy: 'markdown', headers: [['#', 'title'], ['##', 'section']], // Markdown固有のオプション stripHeaders: true, // Markdown固有のオプション overlap: 50 // 一般的なオプション }); // Token戦略の例 const chunks = await doc.chunk({ strategy: 'token', encodingName: 'gpt2', // Token固有のオプション modelName: 'gpt-3.5-turbo', // Token固有のオプション size: 1000 // 一般的なオプション }); ``` 以下に記載されているオプションは、別のオプションオブジェクト内にネストされるのではなく、設定オブジェクトのトップレベルで直接渡されます。 ### HTML ", description: "ヘッダーに基づく分割のための[セレクタ, メタデータキー]ペアの配列", }, { name: "sections", type: "Array<[string, string]>", description: "セクションに基づく分割のための[セレクタ, メタデータキー]ペアの配列", }, { name: "returnEachLine", type: "boolean", isOptional: true, description: "各行を個別のチャンクとして返すかどうか", }, ]} /> ### Markdown ", description: "[ヘッダーレベル, メタデータキー]ペアの配列", }, { name: "stripHeaders", type: "boolean", isOptional: true, description: "出力からヘッダーを削除するかどうか", }, { name: "returnEachLine", type: "boolean", isOptional: true, description: "各行を個別のチャンクとして返すかどうか", }, ]} /> ### Token ### JSON ## 戻り値 チャンクされたドキュメントを含む`MDocument`インスタンスを返します。各チャンクには以下が含まれます: ```typescript interface DocumentNode { text: string; metadata: Record; embedding?: number[]; } ``` --- title: "リファレンス: MDocument | ドキュメント処理 | RAG | Mastra Docs" description: MastraのMDocumentクラスのドキュメントで、ドキュメントの処理とチャンク化を扱います。 --- # MDocument [JA] Source: https://mastra.ai/ja/reference/rag/document MDocumentクラスはRAGアプリケーションのためにドキュメントを処理します。主なメソッドは`.chunk()`と`.extractMetadata()`です。 ## コンストラクタ }>", description: "テキストコンテンツとオプションのメタデータを含むドキュメントチャンクの配列", }, { name: "type", type: "'text' | 'html' | 'markdown' | 'json' | 'latex'", description: "ドキュメントコンテンツのタイプ", } ]} /> ## 静的メソッド ### fromText() プレーンテキストコンテンツからドキュメントを作成します。 ```typescript static fromText(text: string, metadata?: Record): MDocument ``` ### fromHTML() HTMLコンテンツからドキュメントを作成します。 ```typescript static fromHTML(html: string, metadata?: Record): MDocument ``` ### fromMarkdown() Markdownコンテンツからドキュメントを作成します。 ```typescript static fromMarkdown(markdown: string, metadata?: Record): MDocument ``` ### fromJSON() JSONコンテンツからドキュメントを作成します。 ```typescript static fromJSON(json: string, metadata?: Record): MDocument ``` ## インスタンスメソッド ### chunk() ドキュメントをチャンクに分割し、オプションでメタデータを抽出します。 ```typescript async chunk(params?: ChunkParams): Promise ``` 詳細なオプションについては、[chunk() リファレンス](./chunk)を参照してください。 ### getDocs() 処理されたドキュメントチャンクの配列を返します。 ```typescript getDocs(): Chunk[] ``` ### getText() チャンクからテキスト文字列の配列を返します。 ```typescript getText(): string[] ``` ### getMetadata() チャンクからメタデータオブジェクトの配列を返します。 ```typescript getMetadata(): Record[] ``` ### extractMetadata() 指定された抽出器を使用してメタデータを抽出します。詳細については、[ExtractParams リファレンス](./extract-params)を参照してください。 ```typescript async extractMetadata(params: ExtractParams): Promise ``` ## 例 ```typescript import { MDocument } from '@mastra/rag'; // Create document from text const doc = MDocument.fromText('Your content 
here');

// Split into chunks with metadata extraction
const chunks = await doc.chunk({
  strategy: 'markdown',
  headers: [['#', 'title'], ['##', 'section']],
  extract: {
    summary: true,  // Extract summaries with default settings
    keywords: true  // Extract keywords with default settings
  }
});

// Get processed chunks
const docs = doc.getDocs();
const texts = doc.getText();
const metadata = doc.getMetadata();
```

--- title: "リファレンス: embed() | ドキュメント埋め込み | RAG | Mastra ドキュメント" description: MastraでAI SDKを使用した埋め込み機能のドキュメント。 ---

# 埋め込み

[JA] Source: https://mastra.ai/ja/reference/rag/embeddings

Mastraは、AI SDKの`embed`および`embedMany`関数を使用してテキスト入力のベクトル埋め込みを生成し、類似性検索とRAGワークフローを可能にします。

## 単一埋め込み

`embed` 関数は、単一のテキスト入力に対してベクトル埋め込みを生成します:

```typescript
import { embed } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await embed({
  model: openai.embedding('text-embedding-3-small'),
  value: "Your text to embed",
  maxRetries: 2  // optional, defaults to 2
});
```

### パラメータ

", description: "埋め込むテキストコンテンツまたはオブジェクト" }, { name: "maxRetries", type: "number", description: "埋め込み呼び出しごとの最大リトライ回数。リトライを無効にするには0に設定します。", isOptional: true, defaultValue: "2" }, { name: "abortSignal", type: "AbortSignal", description: "リクエストをキャンセルするためのオプションの中止シグナル", isOptional: true }, { name: "headers", type: "Record", description: "リクエストの追加HTTPヘッダー (HTTPベースのプロバイダーのみ)", isOptional: true } ]} />

### 戻り値

## 複数の埋め込み

複数のテキストを一度に埋め込むには、`embedMany` 関数を使用します:

```typescript
import { embedMany } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await embedMany({
  model: openai.embedding('text-embedding-3-small'),
  values: ["First text", "Second text", "Third text"],
  maxRetries: 2  // optional, defaults to 2
});
```

### パラメータ

[]", description: "埋め込むテキストコンテンツまたはオブジェクトの配列" }, { name: "maxRetries", type: "number", description: "埋め込み呼び出しごとの最大リトライ回数。リトライを無効にするには0に設定します。", isOptional: true, defaultValue: "2" }, { name: "abortSignal", type: "AbortSignal", description: "リクエストをキャンセルするためのオプションの中止シグナル", isOptional: true }, { name: "headers", type: "Record", description: "リクエストの追加HTTPヘッダー (HTTPベースのプロバイダーのみ)", isOptional: true } ]} />

### 戻り値

## 使用例

```typescript
import { embed, embedMany } from 'ai';
import { openai } from '@ai-sdk/openai';

// Single embedding
const singleResult = await embed({
  model: openai.embedding('text-embedding-3-small'),
  value: "What is the meaning of life?",
});

// Multiple embeddings
const multipleResult = await embedMany({
  model: openai.embedding('text-embedding-3-small'),
  values: [
    "First question about life",
    "Second question about universe",
    "Third question about everything"
  ],
});
```

Vercel AI SDKにおける埋め込みの詳細情報については、以下を参照してください:

- [AI SDK 埋め込み概要](https://sdk.vercel.ai/docs/ai-sdk-core/embeddings)
- [embed()](https://sdk.vercel.ai/docs/reference/ai-sdk-core/embed)
- [embedMany()](https://sdk.vercel.ai/docs/reference/ai-sdk-core/embed-many)

--- title: "リファレンス: ExtractParams | ドキュメント処理 | RAG | Mastra ドキュメント" description: Mastraにおけるメタデータ抽出設定のドキュメント。 ---

# ExtractParams

[JA] Source: https://mastra.ai/ja/reference/rag/extract-params

ExtractParamsは、LLM分析を使用してドキュメントチャンクからメタデータを抽出するように設定します。

## 例

```typescript showLineNumbers copy
import { MDocument } from "@mastra/rag";

const doc = MDocument.fromText(text);

const chunks = await doc.chunk({
  extract: {
    title: true,    // デフォルト設定を使用してタイトルを抽出
    summary: true,  // デフォルト設定を使用して要約を生成
    keywords: true  // デフォルト設定を使用してキーワードを抽出
  }
});

// 例の出力:
// chunks[0].metadata = {
//   documentTitle: "AI Systems Overview",
//   sectionSummary: "人工知能の概念と応用の概要",
//   excerptKeywords: "キーワード: AI, 機械学習, アルゴリズム"
// }
```

## パラメーター

`extract` パラメーターは以下のフィールドを受け入れます:

## 抽出引数 ###
TitleExtractorsArgs ### SummaryExtractArgs ### QuestionAnswerExtractArgs ### KeywordExtractArgs ## 高度な例 ```typescript showLineNumbers copy import { MDocument } from "@mastra/rag"; const doc = MDocument.fromText(text); const chunks = await doc.chunk({ extract: { // Title extraction with custom settings title: { nodes: 2, // Extract 2 title nodes nodeTemplate: "Generate a title for this: {context}", combineTemplate: "Combine these titles: {context}" }, // Summary extraction with custom settings summary: { summaries: ["self"], // Generate summaries for current chunk promptTemplate: "Summarize this: {context}" }, // Question generation with custom settings questions: { questions: 3, // Generate 3 questions promptTemplate: "Generate {numQuestions} questions about: {context}", embeddingOnly: false }, // Keyword extraction with custom settings keywords: { keywords: 5, // Extract 5 keywords promptTemplate: "Extract {maxKeywords} key terms from: {context}" } } }); // Example output: // chunks[0].metadata = { // documentTitle: "AI in Modern Computing", // sectionSummary: "Overview of AI concepts and their applications in computing", // questionsThisExcerptCanAnswer: "1. What is machine learning?\n2. How do neural networks work?", // excerptKeywords: "1. Machine learning\n2. Neural networks\n3. Training data" // } ``` ## タイトル抽出のためのドキュメントグループ化 `TitleExtractor`を使用する際、各チャンクの`metadata`フィールドに共通の`docId`を指定することで、タイトル抽出のために複数のチャンクをグループ化できます。同じ`docId`を持つすべてのチャンクは、同じ抽出されたタイトルを受け取ります。`docId`が設定されていない場合、各チャンクはタイトル抽出のために独自のドキュメントとして扱われます。 **例:** ```ts import { MDocument } from "@mastra/rag"; const doc = new MDocument({ docs: [ { text: "chunk 1", metadata: { docId: "docA" } }, { text: "chunk 2", metadata: { docId: "docA" } }, { text: "chunk 3", metadata: { docId: "docB" } }, ], type: "text", }); await doc.extractMetadata({ title: true }); // 最初の2つのチャンクは同じタイトルを共有し、3番目のチャンクには別のタイトルが割り当てられます。 ``` --- title: "リファレンス: GraphRAG | グラフベースのRAG | RAG | Mastra ドキュメント" description: MastraのGraphRAGクラスのドキュメントで、グラフベースのアプローチを用いたリトリーバル拡張生成を実装しています。 --- # GraphRAG [JA] Source: https://mastra.ai/ja/reference/rag/graph-rag `GraphRAG` クラスは、検索拡張生成に対するグラフベースのアプローチを実装しています。ノードが文書を表し、エッジが意味的な関係を表す文書チャンクから知識グラフを作成し、直接的な類似性マッチングとグラフトラバーサルを通じた関連コンテンツの発見を可能にします。 ## 基本的な使用法 ```typescript import { GraphRAG } from "@mastra/rag"; const graphRag = new GraphRAG({ dimension: 1536, threshold: 0.7 }); // チャンクと埋め込みからグラフを作成 graphRag.createGraph(documentChunks, embeddings); // 埋め込みでグラフをクエリ const results = await graphRag.query({ query: queryEmbedding, topK: 10, randomWalkSteps: 100, restartProb: 0.15 }); ``` ## コンストラクタのパラメータ ## メソッド ### createGraph ドキュメントチャンクとその埋め込みからナレッジグラフを作成します。 ```typescript createGraph(chunks: GraphChunk[], embeddings: GraphEmbedding[]): void ``` #### パラメータ ### query ベクトル類似性とグラフトラバーサルを組み合わせたグラフベースの検索を実行します。 ```typescript query({ query, topK = 10, randomWalkSteps = 100, restartProb = 0.15 }: { query: number[]; topK?: number; randomWalkSteps?: number; restartProb?: number; }): RankedNode[] ``` #### パラメータ #### 戻り値 `RankedNode` オブジェクトの配列を返します。各ノードには以下が含まれます: ", description: "チャンクに関連付けられた追加のメタデータ", }, { name: "score", type: "number", description: "グラフトラバーサルからの結合関連性スコア", } ]} /> ## 高度な例 ```typescript const graphRag = new GraphRAG({ dimension: 1536, threshold: 0.8 // より厳しい類似性の閾値 }); // チャンクと埋め込みからグラフを作成 graphRag.createGraph(documentChunks, embeddings); // カスタムパラメータでクエリ const results = await graphRag.query({ query: queryEmbedding, topK: 5, randomWalkSteps: 200, restartProb: 0.2 }); ``` ## 関連 - [createGraphRAGTool](../tools/graph-rag-tool) 
--- title: "デフォルトベクターストア | ベクターデータベース | RAG | Mastra ドキュメント" description: LibSQLのベクター拡張を使用してベクター検索を提供するMastraのLibSQLVectorクラスのドキュメント。 --- # LibSQLVector Store [JA] Source: https://mastra.ai/ja/reference/rag/libsql LibSQLストレージ実装は、ベクトル拡張を備えたSQLiteのフォークであるSQLite互換のベクトル検索[LibSQL](https://github.com/tursodatabase/libsql)と、ベクトル拡張を備えた[Turso](https://turso.tech/)を提供し、軽量で効率的なベクトルデータベースソリューションを提供します。 これは`@mastra/core`パッケージの一部であり、メタデータフィルタリングを伴う効率的なベクトル類似性検索を提供します。 ## インストール デフォルトのベクトルストアはコアパッケージに含まれています: ```bash copy npm install @mastra/core ``` ## 使用法 ```typescript copy showLineNumbers import { LibSQLVector } from "@mastra/core/vector/libsql"; // Create a new vector store instance const store = new LibSQLVector({ connectionUrl: process.env.DATABASE_URL, // Optional: for Turso cloud databases authToken: process.env.DATABASE_AUTH_TOKEN, }); // Create an index await store.createIndex({ indexName: "myCollection", dimension: 1536, }); // Add vectors with metadata const vectors = [[0.1, 0.2, ...], [0.3, 0.4, ...]]; const metadata = [ { text: "first document", category: "A" }, { text: "second document", category: "B" } ]; await store.upsert({ indexName: "myCollection", vectors, metadata, }); // Query similar vectors const queryVector = [0.1, 0.2, ...]; const results = await store.query({ indexName: "myCollection", queryVector, topK: 10, // top K results filter: { category: "A" } // optional metadata filter }); ``` ## コンストラクタオプション ## メソッド ### createIndex() 新しいベクトルコレクションを作成します。インデックス名は文字またはアンダースコアで始まり、文字、数字、アンダースコアのみを含むことができます。次元は正の整数でなければなりません。 ### upsert() インデックスにベクトルとそのメタデータを追加または更新します。すべてのベクトルが原子性を持って挿入されるようにトランザクションを使用します - もし挿入が失敗した場合、全体の操作はロールバックされます。 []", isOptional: true, description: "各ベクトルのメタデータ", }, { name: "ids", type: "string[]", isOptional: true, description: "オプションのベクトルID(提供されない場合は自動生成)", }, ]} /> ### query() オプションのメタデータフィルタリングを使用して類似ベクトルを検索します。 ### describeIndex() インデックスに関する情報を取得します。 戻り値: ```typescript copy interface IndexStats { dimension: number; count: number; metric: "cosine" | "euclidean" | "dotproduct"; } ``` ### deleteIndex() インデックスとそのすべてのデータを削除します。 ### listIndexes() データベース内のすべてのベクトルインデックスを一覧表示します。 戻り値: `Promise` ### truncateIndex() インデックスの構造を保持しながら、すべてのベクトルを削除します。 ### updateIndexById() IDによって特定のベクトルエントリを新しいベクトルデータおよび/またはメタデータで更新します。 ", isOptional: true, description: "更新する新しいメタデータ", }, ]} /> ### deleteIndexById() IDによってインデックスから特定のベクトルエントリを削除します。 ## レスポンスタイプ クエリ結果はこの形式で返されます: ```typescript copy interface QueryResult { id: string; score: number; metadata: Record; vector?: number[]; // includeVector が true の場合のみ含まれます } ``` ## エラーハンドリング ストアは、異なる失敗ケースに対して特定のエラーをスローします: ```typescript copy try { await store.query({ indexName: "my-collection", queryVector: queryVector, }); } catch (error) { // 特定のエラーケースを処理 if (error.message.includes("Invalid index name format")) { console.error( "インデックス名は文字/アンダースコアで始まり、英数字のみを含む必要があります", ); } else if (error.message.includes("Table not found")) { console.error("指定されたインデックスは存在しません"); } else { console.error("ベクトルストアエラー:", error.message); } } ``` 一般的なエラーケースには以下が含まれます: - 無効なインデックス名の形式 - 無効なベクトルの次元 - テーブル/インデックスが見つからない - データベース接続の問題 - アップサート中のトランザクションの失敗 ## 関連 - [メタデータフィルター](./metadata-filters) --- title: "リファレンス: メタデータフィルター | メタデータフィルタリング | RAG | Mastra ドキュメント" description: Mastraにおけるメタデータフィルタリング機能のドキュメントで、異なるベクトルストアにわたるベクトル検索結果の正確なクエリを可能にします。 --- # メタデータフィルター [JA] Source: https://mastra.ai/ja/reference/rag/metadata-filters Mastraは、MongoDB/Siftクエリ構文に基づいて、すべてのベクトルストアにわたる統一されたメタデータフィルタリング構文を提供します。各ベクトルストアは、これらのフィルターをネイティブ形式に変換します。 ## 基本的な例 ```typescript import { 
PgVector } from '@mastra/pg'; const store = new PgVector(connectionString); const results = await store.query({ indexName: "my_index", queryVector: queryVector, topK: 10, filter: { category: "electronics", // 単純な等価 price: { $gt: 100 }, // 数値比較 tags: { $in: ["sale", "new"] } // 配列メンバーシップ } }); ``` ## サポートされている演算子 ## 共通のルールと制限 1. フィールド名は以下を含むことができません: - ドット (.) を含むこと(ネストされたフィールドを参照する場合を除く) - $ で始まる、またはヌル文字を含むこと - 空の文字列であること 2. 値は以下でなければなりません: - 有効なJSONタイプ(文字列、数値、ブール値、オブジェクト、配列) - 未定義でないこと - 演算子に対して適切に型付けされていること(例:数値比較には数値) 3. 論理演算子: - 有効な条件を含むこと - 空でないこと - 適切にネストされていること - トップレベルまたは他の論理演算子内にネストされて使用されること - フィールドレベルまたはフィールド内にネストされて使用されないこと - 演算子内で使用されないこと - 有効: `{ "$and": [{ "field": { "$gt": 100 } }] }` - 有効: `{ "$or": [{ "$and": [{ "field": { "$gt": 100 } }] }] }` - 無効: `{ "field": { "$and": [{ "$gt": 100 }] } }` - 無効: `{ "field": { "$gt": { "$and": [{...}] } } }` 4. $not 演算子: - オブジェクトでなければならない - 空でないこと - フィールドレベルまたはトップレベルで使用できる - 有効: `{ "$not": { "field": "value" } }` - 有効: `{ "field": { "$not": { "$eq": "value" } } }` 5. 演算子のネスト: - 論理演算子はフィールド条件を含む必要があり、直接演算子を含むことはできない - 有効: `{ "$and": [{ "field": { "$gt": 100 } }] }` - 無効: `{ "$and": [{ "$gt": 100 }] }` ## ストア固有の注意事項 ### Astra - ネストされたフィールドクエリはドット表記を使用してサポートされています - 配列フィールドはメタデータで明示的に配列として定義する必要があります - メタデータの値は大文字と小文字を区別します ### ChromaDB - Whereフィルターは、フィルターされたフィールドがメタデータに存在する結果のみを返します - 空のメタデータフィールドはフィルター結果に含まれません - メタデータフィールドは否定的な一致のために存在する必要があります(例:$neはフィールドが欠けているドキュメントに一致しません) ### Cloudflare Vectorize - フィルタリングを使用する前に明示的なメタデータインデックス作成が必要です - フィルタリングしたいフィールドをインデックスするには`createMetadataIndex()`を使用します - Vectorizeインデックスごとに最大10のメタデータインデックス - 文字列値は最初の64バイトまでインデックスされます(UTF-8の境界で切り捨て) - 数値値はfloat64精度を使用します - フィルタJSONは2048バイト未満でなければなりません - フィールド名にはドット(.)を含めたり、$で始めたりすることはできません - フィールド名は512文字に制限されています - 新しいメタデータインデックスを作成した後、ベクトルはフィルタリングされた結果に含めるために再アップサートする必要があります - 非常に大きなデータセット(約10M+ベクトル)では範囲クエリの精度が低下する可能性があります ### LibSQL - ドット表記を使用したネストされたオブジェクトクエリをサポートしています - 配列フィールドは有効なJSON配列を含むことを確認するために検証されます - 数値比較は適切な型処理を維持します - 条件内の空の配列は適切に処理されます - メタデータは効率的なクエリのためにJSONB列に格納されます ### PgVector - PostgreSQLのネイティブJSONクエリ機能を完全にサポートしています - ネイティブ配列関数を使用した配列操作の効率的な処理 - 数値、文字列、ブール値の適切な型処理 - ネストされたフィールドクエリはPostgreSQLのJSONパス構文を内部的に使用します - メタデータは効率的なインデックス作成のためにJSONB列に格納されます ### Pinecone - メタデータフィールド名は512文字に制限されています - 数値値は±1e38の範囲内でなければなりません - メタデータ内の配列は合計64KBのサイズに制限されています - ネストされたオブジェクトはドット表記でフラット化されます - メタデータの更新はメタデータオブジェクト全体を置き換えます ### Qdrant - ネストされた条件を使用した高度なフィルタリングをサポートしています - ペイロード(メタデータ)フィールドはフィルタリングのために明示的にインデックスする必要があります - 地理空間クエリの効率的な処理 - nullおよび空の値の特別な処理 - ベクトル固有のフィルタリング機能 - 日時値はRFC 3339形式でなければなりません ### Upstash - メタデータフィールドキーの512文字制限 - クエリサイズは制限されています(大きなIN句を避ける) - フィルターでnull/undefined値をサポートしていません - 内部的にSQLライクな構文に変換されます - 大文字と小文字を区別する文字列比較 - メタデータの更新はアトミックです ## 関連 - [Astra](./astra) - [Chroma](./chroma) - [Cloudflare Vectorize](./vectorize) - [LibSQL](./libsql) - [PgStore](./pg) - [Pinecone](./pinecone) - [Qdrant](./qdrant) - [Upstash](./upstash) --- title: "リファレンス: PG Vector Store | ベクターデータベース | RAG | Mastra ドキュメント" description: PostgreSQLのpgvector拡張機能を使用してベクター検索を提供するMastraのPgVectorクラスのドキュメント。 --- # PG Vector Store [JA] Source: https://mastra.ai/ja/reference/rag/pg PgVectorクラスは、[PostgreSQL](https://www.postgresql.org/)と[pgvector](https://github.com/pgvector/pgvector)拡張機能を使用してベクトル検索を提供します。 既存のPostgreSQLデータベース内で強力なベクトル類似性検索機能を提供します。 ## コンストラクタオプション ## コンストラクタの例 `PgVector`を2つの方法でインスタンス化できます: ```ts import { PgVector } from '@mastra/pg'; // 接続文字列を使用する方法(文字列形式) const vectorStore1 = new PgVector('postgresql://user:password@localhost:5432/mydb'); // 
設定オブジェクトを使用する方法(オプションのschemaNameを含む) const vectorStore2 = new PgVector({ connectionString: 'postgresql://user:password@localhost:5432/mydb', schemaName: 'custom_schema', // オプション }); ``` ## メソッド ### createIndex() #### IndexConfig #### メモリ要件 HNSWインデックスは構築時にかなりの共有メモリを必要とします。100Kベクトルの場合: - 小さな次元 (64d): デフォルト設定で約60MB - 中程度の次元 (256d): デフォルト設定で約180MB - 大きな次元 (384d+): デフォルト設定で約250MB以上 M値やefConstruction値が高いと、メモリ要件が大幅に増加します。必要に応じてシステムの共有メモリ制限を調整してください。 ### upsert() []", isOptional: true, description: "各ベクトルのメタデータ", }, { name: "ids", type: "string[]", isOptional: true, description: "オプションのベクトルID(指定しない場合は自動生成)", }, ]} /> ### query() ", isOptional: true, description: "メタデータフィルター", }, { name: "includeVector", type: "boolean", isOptional: true, defaultValue: "false", description: "結果にベクトルを含めるかどうか", }, { name: "minScore", type: "number", isOptional: true, defaultValue: "0", description: "最小類似度スコアの閾値", }, { name: "options", type: "{ ef?: number; probes?: number }", isOptional: true, description: "HNSWおよびIVFインデックスの追加オプション", properties: [ { type: "object", parameters: [ { name: "ef", type: "number", description: "HNSW検索パラメータ", isOptional: true, }, { name: "probes", type: "number", description: "IVF検索パラメータ", isOptional: true, }, ], }, ], }, ]} /> ### listIndexes() インデックス名を文字列として配列で返します。 ### describeIndex() 返される内容: ```typescript copy interface PGIndexStats { dimension: number; count: number; metric: "cosine" | "euclidean" | "dotproduct"; type: "flat" | "hnsw" | "ivfflat"; config: { m?: number; efConstruction?: number; lists?: number; probes?: number; }; } ``` ### deleteIndex() ### updateIndexById() ", description: "新しいメタデータ値", isOptional: true, }, ], }, ], }, ]} /> IDで既存のベクトルを更新します。ベクトルまたはメタデータのいずれかを提供する必要があります。 ```typescript copy // ベクトルのみを更新 await pgVector.updateIndexById("my_vectors", "vector123", { vector: [0.1, 0.2, 0.3], }); // メタデータのみを更新 await pgVector.updateIndexById("my_vectors", "vector123", { metadata: { label: "updated" }, }); // ベクトルとメタデータの両方を更新 await pgVector.updateIndexById("my_vectors", "vector123", { vector: [0.1, 0.2, 0.3], metadata: { label: "updated" }, }); ``` ### deleteIndexById() 指定されたインデックスからIDで単一のベクトルを削除します。 ```typescript copy await pgVector.deleteIndexById("my_vectors", "vector123"); ``` ### disconnect() データベース接続プールを閉じます。ストアの使用が終了したら呼び出す必要があります。 ### buildIndex() 指定されたメトリックと設定でインデックスを構築または再構築します。新しいインデックスを作成する前に、既存のインデックスを削除します。 ```typescript copy // Define HNSW index await pgVector.buildIndex("my_vectors", "cosine", { type: "hnsw", hnsw: { m: 8, efConstruction: 32, }, }); // Define IVF index await pgVector.buildIndex("my_vectors", "cosine", { type: "ivfflat", ivf: { lists: 100, }, }); // Define flat index await pgVector.buildIndex("my_vectors", "cosine", { type: "flat", }); ``` ## レスポンスタイプ クエリ結果はこの形式で返されます: ```typescript copy interface QueryResult { id: string; score: number; metadata: Record; vector?: number[]; // Only included if includeVector is true } ``` ## エラーハンドリング ストアは型付きエラーをスローし、キャッチすることができます: ```typescript copy try { await store.query({ indexName: "index_name", queryVector: queryVector, }); } catch (error) { if (error instanceof VectorStoreError) { console.log(error.code); // 'connection_failed' | 'invalid_dimension' | etc console.log(error.details); // 追加のエラーコンテキスト } } ``` ## ベストプラクティス - 最適なパフォーマンスを確保するために、インデックス設定を定期的に評価してください。 - データセットのサイズやクエリの要件に基づいて、`lists` や `m` などのパラメータを調整してください。 - 特に大幅なデータ変更後には、効率を維持するために定期的にインデックスを再構築してください。 ## 関連 - [メタデータフィルター](./metadata-filters) --- title: "リファレンス: Pinecone Vector Store | Vector DBs | RAG | Mastra ドキュメント" description: 
MastraのPineconeVectorクラスのドキュメントで、Pineconeのベクターデータベースへのインターフェースを提供します。 --- # Pinecone Vector Store [JA] Source: https://mastra.ai/ja/reference/rag/pinecone PineconeVector クラスは、[Pinecone](https://www.pinecone.io/) のベクターデータベースへのインターフェースを提供します。 ハイブリッド検索、メタデータフィルタリング、ネームスペース管理などの機能を備えたリアルタイムベクター検索を提供します。 ## コンストラクタオプション ## メソッド ### createIndex() ### upsert() []", isOptional: true, description: "各ベクトルのメタデータ", }, { name: "ids", type: "string[]", isOptional: true, description: "オプションのベクトルID(提供されない場合は自動生成されます)", }, { name: "namespace", type: "string", isOptional: true, description: "ベクトルを保存するためのオプションの名前空間。異なる名前空間のベクトルは互いに分離されています。", }, ]} /> ### query() ", isOptional: true, description: "クエリのメタデータフィルター", }, { name: "includeVector", type: "boolean", isOptional: true, defaultValue: "false", description: "結果にベクトルを含めるかどうか", }, { name: "namespace", type: "string", isOptional: true, description: "クエリを実行するベクトルのオプションの名前空間。指定された名前空間からのみ結果を返します。", }, ]} /> ### listIndexes() 文字列としてインデックス名の配列を返します。 ### describeIndex() 返される内容: ```typescript copy interface IndexStats { dimension: number; count: number; metric: "cosine" | "euclidean" | "dotproduct"; } ``` ### deleteIndex() ### updateIndexById() ", isOptional: true, description: "更新する新しいメタデータ", }, ]} /> ### deleteIndexById() ## レスポンスタイプ クエリ結果はこの形式で返されます: ```typescript copy interface QueryResult { id: string; score: number; metadata: Record; vector?: number[]; // Only included if includeVector is true } ``` ## エラーハンドリング ストアは型付きエラーをスローし、キャッチすることができます: ```typescript copy try { await store.query({ indexName: "index_name", queryVector: queryVector, }); } catch (error) { if (error instanceof VectorStoreError) { console.log(error.code); // 'connection_failed' | 'invalid_dimension' | etc console.log(error.details); // 追加のエラーコンテキスト } } ``` ### 環境変数 必要な環境変数: - `PINECONE_API_KEY`: あなたのPinecone APIキー - `PINECONE_ENVIRONMENT`: Pinecone環境(例: 'us-west1-gcp') ## ハイブリッド検索 Pineconeは、密ベクトルとスパースベクトルを組み合わせることでハイブリッド検索をサポートします。ハイブリッド検索を使用するには: 1. `metric: 'dotproduct'`でインデックスを作成します 2. アップサート時に、`sparseVectors`パラメータを使用してスパースベクトルを提供します 3. 
クエリ時に、`sparseVector`パラメータを使用してスパースベクトルを提供します ## 関連 - [メタデータフィルター](./metadata-filters) --- title: "リファレンス: Qdrant ベクターストア | ベクターデータベース | RAG | Mastra ドキュメント" description: ベクターとペイロードを管理するためのベクター類似検索エンジンであるQdrantをMastraと統合するためのドキュメント。 --- # Qdrant Vector Store [JA] Source: https://mastra.ai/ja/reference/rag/qdrant QdrantVector クラスは、ベクトル類似性検索エンジンである [Qdrant](https://qdrant.tech/) を使用したベクトル検索を提供します。 これは、追加のペイロードと拡張フィルタリングサポートを備えたベクトルを保存、検索、管理するための便利なAPIを備えた、プロダクション対応のサービスを提供します。 ## コンストラクタオプション ## メソッド ### createIndex() ### upsert() []", isOptional: true, description: "各ベクトルのメタデータ", }, { name: "ids", type: "string[]", isOptional: true, description: "オプションのベクトルID(提供されない場合は自動生成)", }, ]} /> ### query() ", isOptional: true, description: "クエリのメタデータフィルター", }, { name: "includeVector", type: "boolean", isOptional: true, defaultValue: "false", description: "結果にベクトルを含めるかどうか", }, ]} /> ### listIndexes() 文字列としてインデックス名の配列を返します。 ### describeIndex() 返される内容: ```typescript copy interface IndexStats { dimension: number; count: number; metric: "cosine" | "euclidean" | "dotproduct"; } ``` ### deleteIndex() ### updateIndexById() ; }", description: "更新するベクトルおよび/またはメタデータを含むオブジェクト", }, ]} /> 指定されたインデックス内のベクトルおよび/またはそのメタデータを更新します。ベクトルとメタデータの両方が提供された場合、両方が更新されます。どちらか一方のみが提供された場合は、その部分のみが更新されます。 ### deleteIndexById() 指定されたインデックスからIDによってベクトルを削除します。 ## レスポンスタイプ クエリ結果はこの形式で返されます: ```typescript copy interface QueryResult { id: string; score: number; metadata: Record; vector?: number[]; // Only included if includeVector is true } ``` ## エラーハンドリング ストアはキャッチ可能な型付きエラーをスローします: ```typescript copy try { await store.query({ indexName: "index_name", queryVector: queryVector, }); } catch (error) { if (error instanceof VectorStoreError) { console.log(error.code); // 'connection_failed' | 'invalid_dimension' | etc console.log(error.details); // 追加のエラーコンテキスト } } ``` ## 関連 - [メタデータフィルター](./metadata-filters) --- title: "リファレンス: Rerank | ドキュメント検索 | RAG | Mastra Docs" description: Mastraのrerank関数のドキュメントで、ベクター検索結果の高度な再ランキング機能を提供します。 --- # rerank() [JA] Source: https://mastra.ai/ja/reference/rag/rerank `rerank()` 関数は、セマンティック関連性、ベクトル類似性、および位置ベースのスコアリングを組み合わせることにより、ベクトル検索結果の高度な再ランク付け機能を提供します。 ```typescript function rerank( results: QueryResult[], query: string, modelConfig: ModelConfig, options?: RerankerFunctionOptions ): Promise ``` ## 使用例 ```typescript import { openai } from "@ai-sdk/openai"; import { rerank } from "@mastra/rag"; const model = openai("gpt-4o-mini"); const rerankedResults = await rerank( vectorSearchResults, "How do I deploy to production?", model, { weights: { semantic: 0.5, vector: 0.3, position: 0.2 }, topK: 3 } ); ``` ## パラメータ rerank関数は、Vercel AI SDKの任意のLanguageModelを受け入れます。Cohereモデル`rerank-v3.5`を使用する場合、Cohereの再ランク付け機能が自動的に使用されます。 > **注意:** 再ランク付け中にセマンティックスコアリングが正しく機能するためには、各結果に`metadata.text`フィールドにテキストコンテンツが含まれている必要があります。 ### RerankerFunctionOptions ## 戻り値 この関数は `RerankResult` オブジェクトの配列を返します: ### ScoringDetails ## 関連 - [createVectorQueryTool](../tools/vector-query-tool) --- title: "リファレンス: Turbopuffer ベクターストア | ベクターデータベース | RAG | Mastra ドキュメント" description: TurbopufferをMastraと統合するためのドキュメント。効率的な類似検索のための高性能ベクターデータベース。 --- # Turbopuffer Vector Store [JA] Source: https://mastra.ai/ja/reference/rag/turbopuffer TurbopufferVector クラスは、RAG アプリケーション向けに最適化された高性能ベクターデータベースである [Turbopuffer](https://turbopuffer.com/) を使用したベクター検索を提供します。Turbopuffer は、高度なフィルタリング機能と効率的なストレージ管理を備えた高速なベクター類似検索を提供します。 ## コンストラクタオプション ## メソッド ### createIndex() ### upsert() []", isOptional: true, description: "各ベクトルのメタデータ", }, { name: "ids", type: "string[]", 
isOptional: true, description: "オプションのベクトルID(指定されない場合は自動生成)", }, ]} /> ### query() ", isOptional: true, description: "クエリのメタデータフィルター", }, { name: "includeVector", type: "boolean", isOptional: true, defaultValue: "false", description: "結果にベクトルを含めるかどうか", }, ]} /> ### listIndexes() 文字列としてインデックス名の配列を返します。 ### describeIndex() 返される内容: ```typescript copy interface IndexStats { dimension: number; count: number; metric: "cosine" | "euclidean" | "dotproduct"; } ``` ### deleteIndex() ## 応答タイプ クエリ結果はこの形式で返されます: ```typescript copy interface QueryResult { id: string; score: number; metadata: Record; vector?: number[]; // Only included if includeVector is true } ``` ## スキーマ構成 `schemaConfigForIndex` オプションを使用すると、異なるインデックスに対して明示的なスキーマを定義できます: ```typescript copy schemaConfigForIndex: (indexName: string) => { // Mastraのデフォルトの埋め込みモデルとメモリメッセージのインデックス: if (indexName === "memory_messages_384") { return { dimensions: 384, schema: { thread_id: { type: "string", filterable: true, }, }, }; } else { throw new Error(`TODO: add schema for index: ${indexName}`); } }; ``` ## エラーハンドリング このストアは、キャッチ可能な型付きエラーをスローします: ```typescript copy try { await store.query({ indexName: "index_name", queryVector: queryVector, }); } catch (error) { if (error instanceof VectorStoreError) { console.log(error.code); // 'connection_failed' | 'invalid_dimension' | etc console.log(error.details); // 追加のエラーコンテキスト } } ``` ## 関連 - [メタデータフィルター](./metadata-filters) --- title: "リファレンス: Upstash Vector Store | ベクターデータベース | RAG | Mastra ドキュメント" description: MastraのUpstashVectorクラスのドキュメントで、Upstash Vectorを使用したベクター検索を提供します。 --- # Upstash Vector Store [JA] Source: https://mastra.ai/ja/reference/rag/upstash UpstashVector クラスは、メタデータフィルタリング機能を備えたベクトル類似検索を提供するサーバーレスベクトルデータベースサービスである [Upstash Vector](https://upstash.com/vector) を使用してベクトル検索を提供します。 ## コンストラクタオプション ## メソッド ### createIndex() 注意: このメソッドはUpstashでは無操作です。インデックスは自動的に作成されます。 ### upsert() []", isOptional: true, description: "各ベクトルのメタデータ", }, { name: "ids", type: "string[]", isOptional: true, description: "オプションのベクトルID(提供されない場合は自動生成)", }, ]} /> ### query() ", isOptional: true, description: "クエリのメタデータフィルター", }, { name: "includeVector", type: "boolean", isOptional: true, defaultValue: "false", description: "結果にベクトルを含めるかどうか", }, ]} /> ### listIndexes() インデックス名(名前空間)の配列を文字列として返します。 ### describeIndex() 返される内容: ```typescript copy interface IndexStats { dimension: number; count: number; metric: "cosine" | "euclidean" | "dotproduct"; } ``` ### deleteIndex() ### updateIndexById() `update` オブジェクトは以下のプロパティを持つことができます: - `vector` (オプション): 新しいベクトルを表す数値の配列。 - `metadata` (オプション): メタデータのキーと値のペアのレコード。 `vector` または `metadata` のいずれも提供されない場合、または `metadata` のみが提供された場合はエラーをスローします。 ### deleteIndexById() 指定されたインデックスからIDによってアイテムを削除しようとします。削除が失敗した場合はエラーメッセージを記録します。 ## レスポンスタイプ クエリ結果はこの形式で返されます: ```typescript copy interface QueryResult { id: string; score: number; metadata: Record; vector?: number[]; // Only included if includeVector is true } ``` ## エラーハンドリング ストアは型付きエラーをスローし、キャッチすることができます: ```typescript copy try { await store.query({ indexName: "index_name", queryVector: queryVector, }); } catch (error) { if (error instanceof VectorStoreError) { console.log(error.code); // 'connection_failed' | 'invalid_dimension' | etc console.log(error.details); // 追加のエラーコンテキスト } } ``` ## 環境変数 必要な環境変数: - `UPSTASH_VECTOR_URL`: あなたのUpstash VectorデータベースURL - `UPSTASH_VECTOR_TOKEN`: あなたのUpstash Vector APIトークン ## 関連 - [メタデータフィルター](./metadata-filters) --- title: "リファレンス: Cloudflare Vector Store | ベクターデータベース | RAG | Mastra ドキュメント" description: Cloudflare 
Vectorizeを使用したベクター検索を提供するMastraのCloudflareVectorクラスのドキュメント。 --- # Cloudflare Vector Store [JA] Source: https://mastra.ai/ja/reference/rag/vectorize CloudflareVector クラスは、Cloudflare のエッジネットワークと統合されたベクターデータベースサービスである [Cloudflare Vectorize](https://developers.cloudflare.com/vectorize/) を使用したベクター検索を提供します。 ## コンストラクタオプション ## メソッド ### createIndex() ### upsert() []", isOptional: true, description: "各ベクトルのメタデータ", }, { name: "ids", type: "string[]", isOptional: true, description: "オプションのベクトルID(提供されない場合は自動生成されます)", }, ]} /> ### query() ", isOptional: true, description: "クエリのメタデータフィルター", }, { name: "includeVector", type: "boolean", isOptional: true, defaultValue: "false", description: "結果にベクトルを含めるかどうか", }, ]} /> ### listIndexes() 文字列としてインデックス名の配列を返します。 ### describeIndex() 返される内容: ```typescript copy interface IndexStats { dimension: number; count: number; metric: "cosine" | "euclidean" | "dotproduct"; } ``` ### deleteIndex() ### createMetadataIndex() フィルタリングを可能にするためにメタデータフィールドにインデックスを作成します。 ### deleteMetadataIndex() メタデータフィールドからインデックスを削除します。 ### listMetadataIndexes() インデックスのすべてのメタデータフィールドインデックスを一覧表示します。 ### updateIndexById() インデックス内の特定のIDに対してベクトルまたはメタデータを更新します。 ; }", description: "更新するベクトルおよび/またはメタデータを含むオブジェクト", }, ]} /> ### deleteIndexById() インデックス内の特定のIDに対するベクトルとその関連メタデータを削除します。 ## レスポンスタイプ クエリ結果はこの形式で返されます: ```typescript copy interface QueryResult { id: string; score: number; metadata: Record; vector?: number[]; } ``` ## エラーハンドリング ストアは、キャッチ可能な型付きエラーをスローします: ```typescript copy try { await store.query({ indexName: "index_name", queryVector: queryVector, }); } catch (error) { if (error instanceof VectorStoreError) { console.log(error.code); // 'connection_failed' | 'invalid_dimension' | etc console.log(error.details); // 追加のエラーコンテキスト } } ``` ## 環境変数 必要な環境変数: - `CLOUDFLARE_ACCOUNT_ID`: あなたのCloudflareアカウントID - `CLOUDFLARE_API_TOKEN`: Vectorize権限を持つあなたのCloudflare APIトークン ## 関連 - [メタデータフィルター](./metadata-filters) --- title: "Cloudflare D1 ストレージ | ストレージシステム | Mastra Core" description: Mastraにおける Cloudflare D1 SQL ストレージ実装のドキュメント。 --- # Cloudflare D1 ストレージ [JA] Source: https://mastra.ai/ja/reference/storage/cloudflare-d1 Cloudflare D1ストレージの実装は、Cloudflare D1を使用したサーバーレスSQLデータベースソリューションを提供し、リレーショナル操作とトランザクションの一貫性をサポートします。 ## インストール ```bash npm install @mastra/cloudflare-d1 ``` ## 使用方法 ```typescript copy showLineNumbers import { D1Store } from "@mastra/cloudflare-d1"; // --- Example 1: Using Workers Binding --- const storageWorkers = new D1Store({ binding: D1Database, // D1Database binding provided by the Workers runtime tablePrefix: 'dev_', // Optional: isolate tables per environment }); // --- Example 2: Using REST API --- const storageRest = new D1Store({ accountId: process.env.CLOUDFLARE_ACCOUNT_ID!, // Cloudflare Account ID databaseId: process.env.CLOUDFLARE_D1_DATABASE_ID!, // D1 Database ID apiToken: process.env.CLOUDFLARE_API_TOKEN!, // Cloudflare API Token tablePrefix: 'dev_', // Optional: isolate tables per environment }); ``` ## パラメータ ## 追加情報 ### スキーマ管理 ストレージの実装はスキーマの作成と更新を自動的に処理します。以下のテーブルが作成されます: - `threads`: 会話スレッドを保存します - `messages`: 個々のメッセージを保存します - `metadata`: スレッドとメッセージの追加メタデータを保存します ### トランザクションと一貫性 Cloudflare D1は単一行操作のトランザクション保証を提供します。これにより、複数の操作を単一の全か無かの作業単位として実行できます。 ### テーブル作成とマイグレーション テーブルはストレージが初期化されるときに自動的に作成されます(`tablePrefix`オプションを使用して環境ごとに分離できます)が、列の追加、データ型の変更、インデックスの修正などの高度なスキーマ変更には、データ損失を避けるために手動のマイグレーションと慎重な計画が必要です。 --- title: "Cloudflare ストレージ | ストレージシステム | Mastra Core" description: Mastraにおける Cloudflare KV ストレージ実装のドキュメント。 --- # Cloudflare Storage [JA] Source: 
https://mastra.ai/ja/reference/storage/cloudflare Cloudflare KV ストレージの実装は、Cloudflare Workers KVを使用したグローバルに分散されたサーバーレスのキーバリューストアソリューションを提供します。 ## インストール ```bash npm install @mastra/cloudflare ``` ## 使用方法 ```typescript copy showLineNumbers import { CloudflareStore } from "@mastra/cloudflare"; // --- Example 1: Using Workers Binding --- const storageWorkers = new CloudflareStore({ bindings: { threads: THREADS_KV, // KVNamespace binding for threads table messages: MESSAGES_KV, // KVNamespace binding for messages table // Add other tables as needed }, keyPrefix: 'dev_', // Optional: isolate keys per environment }); // --- Example 2: Using REST API --- const storageRest = new CloudflareStore({ accountId: process.env.CLOUDFLARE_ACCOUNT_ID!, // Cloudflare Account ID apiToken: process.env.CLOUDFLARE_API_TOKEN!, // Cloudflare API Token namespacePrefix: 'dev_', // Optional: isolate namespaces per environment }); ``` ## パラメータ ", description: "Cloudflare Workers KVバインディング(Workersランタイム用)", isOptional: true, }, { name: "accountId", type: "string", description: "CloudflareアカウントID(REST API用)", isOptional: true, }, { name: "apiToken", type: "string", description: "Cloudflare APIトークン(REST API用)", isOptional: true, }, { name: "namespacePrefix", type: "string", description: "すべての名前空間名のオプションのプレフィックス(環境分離に役立ちます)", isOptional: true, }, { name: "keyPrefix", type: "string", description: "すべてのキーのオプションのプレフィックス(環境分離に役立ちます)", isOptional: true, }, ]} /> #### 追加の注意事項 ### スキーマ管理 ストレージの実装はスキーマの作成と更新を自動的に処理します。以下のテーブルが作成されます: - `threads`: 会話スレッドを保存します - `messages`: 個々のメッセージを保存します - `metadata`: スレッドとメッセージの追加メタデータを保存します ### 一貫性と伝播 Cloudflare KVは結果整合性のあるストアであり、書き込み後にデータがすべてのリージョンで即座に利用可能にならない場合があります。 ### キー構造と名前空間 Cloudflare KVのキーは、設定可能なプレフィックスとテーブル固有の形式(例:`threads:threadId`)の組み合わせで構成されています。 Workersデプロイメントでは、`keyPrefix`を使用して名前空間内のデータを分離します。REST APIデプロイメントでは、`namespacePrefix`を使用して環境やアプリケーション間で名前空間全体を分離します。 --- title: "LibSQL ストレージ | ストレージシステム | Mastra Core" description: MastraにおけるLibSQLストレージ実装のドキュメント。 --- # LibSQL Storage [JA] Source: https://mastra.ai/ja/reference/storage/libsql LibSQLストレージ実装は、メモリ内および永続データベースとして動作可能なSQLite互換のストレージソリューションを提供します。 ## インストール ```bash npm install @mastra/storage-libsql ``` ## 使用方法 ```typescript copy showLineNumbers import { LibSQLStore } from "@mastra/core/storage/libsql"; // File database (development) const storage = new LibSQLStore({ config: { url: 'file:storage.db', } }); // Persistent database (production) const storage = new LibSQLStore({ config: { url: process.env.DATABASE_URL, } }); ``` ## パラメーター ## 追加の注意事項 ### インメモリ vs 永続ストレージ ファイル構成 (`file:storage.db`) は以下に役立ちます: - 開発とテスト - 一時的なストレージ - クイックプロトタイピング 本番環境の使用ケースでは、永続的なデータベースURLを使用してください:`libsql://your-database.turso.io` ### スキーマ管理 ストレージの実装は、スキーマの作成と更新を自動的に処理します。以下のテーブルを作成します: - `threads`: 会話スレッドを保存 - `messages`: 個々のメッセージを保存 - `metadata`: スレッドとメッセージの追加メタデータを保存 --- title: "PostgreSQL ストレージ | ストレージシステム | Mastra Core" description: MastraにおけるPostgreSQLストレージ実装のドキュメント。 --- # PostgreSQL ストレージ [JA] Source: https://mastra.ai/ja/reference/storage/postgresql PostgreSQL ストレージの実装は、PostgreSQL データベースを使用した本番環境対応のストレージソリューションを提供します。 ## インストール ```bash npm install @mastra/pg ``` ## 使用法 ```typescript copy showLineNumbers import { PostgresStore } from "@mastra/pg"; const storage = new PostgresStore({ connectionString: process.env.DATABASE_URL, }); ``` ## パラメータ ## コンストラクタの例 以下の方法で `PostgresStore` をインスタンス化できます: ```ts import { PostgresStore } from '@mastra/pg'; // 接続文字列のみを使用 const store1 = new PostgresStore({ connectionString: 
'postgresql://user:password@localhost:5432/mydb',
});

// カスタムスキーマ名を持つ接続文字列を使用
const store2 = new PostgresStore({
  connectionString: 'postgresql://user:password@localhost:5432/mydb',
  schemaName: 'custom_schema', // オプション
});

// 個別の接続パラメータを使用
const store3 = new PostgresStore({
  host: 'localhost',
  port: 5432,
  database: 'mydb',
  user: 'user',
  password: 'password',
});

// スキーマ名を含む個別のパラメータ
const store4 = new PostgresStore({
  host: 'localhost',
  port: 5432,
  database: 'mydb',
  user: 'user',
  password: 'password',
  schemaName: 'custom_schema', // オプション
});
```

## 追加の注意事項

### スキーマ管理

ストレージの実装は、スキーマの作成と更新を自動的に処理します。以下のテーブルを作成します:

- `threads`: 会話スレッドを保存
- `messages`: 個々のメッセージを保存
- `metadata`: スレッドとメッセージの追加メタデータを保存

--- title: "Upstash Storage | ストレージシステム | Mastra Core" description: MastraにおけるUpstashストレージ実装のドキュメント。 ---

# Upstash Storage

[JA] Source: https://mastra.ai/ja/reference/storage/upstash

Upstashのストレージ実装は、UpstashのRedis互換のキー・バリュー・ストアを使用したサーバーレスに適したストレージソリューションを提供します。

## インストール

```bash
npm install @mastra/upstash
```

## 使用法

```typescript copy showLineNumbers
import { UpstashStore } from "@mastra/upstash";

const storage = new UpstashStore({
  url: process.env.UPSTASH_URL,
  token: process.env.UPSTASH_TOKEN,
});
```

## パラメーター

## 追加の注意事項

### キー構造

Upstash ストレージの実装はキーと値の構造を使用します:

- スレッドキー: `{prefix}thread:{threadId}`
- メッセージキー: `{prefix}message:{messageId}`
- メタデータキー: `{prefix}metadata:{entityId}`

### サーバーレスの利点

Upstash ストレージは特にサーバーレス展開に適しています:

- 接続管理が不要
- リクエストごとの料金
- グローバルなレプリケーションオプション
- エッジ互換

### データの永続性

Upstash は以下を提供します:

- 自動データ永続性
- 時点復旧
- クロスリージョンレプリケーションオプション

### パフォーマンスの考慮事項

最適なパフォーマンスのために:

- データを整理するために適切なキーのプレフィックスを使用する
- Redis のメモリ使用量を監視する
- 必要に応じてデータの有効期限ポリシーを検討する

--- title: "リファレンス: MastraMCPClient | ツールディスカバリー | Mastra ドキュメント" description: MastraMCPClient の API リファレンス - モデルコンテキストプロトコルのクライアント実装。 ---

# MastraMCPClient

[JA] Source: https://mastra.ai/ja/reference/tools/client

`MastraMCPClient` クラスは、Model Context Protocol (MCP) サーバーと対話するためのクライアント実装を提供します。これは、MCP プロトコルを通じて接続管理、リソース発見、およびツール実行を処理します。

## コンストラクタ

MastraMCPClientの新しいインスタンスを作成します。

```typescript
constructor({
  name,
  version = '1.0.0',
  server,
  capabilities = {},
  timeout = 60000,
}: {
  name: string;
  server: MastraMCPServerDefinition;
  capabilities?: ClientCapabilities;
  version?: string;
  timeout?: number;
})
```

### パラメータ

- `name`: このクライアントインスタンスの名前。
- `version`(オプション、デフォルト: `1.0.0`): クライアントのバージョン。
- `server`: サーバーの構成(`MastraMCPServerDefinition`、下記参照)。
- `capabilities`(オプション、デフォルト: `{}`): クライアントの機能構成(`ClientCapabilities`)。
- `timeout`(オプション、デフォルト: `60000`): タイムアウト(ミリ秒)。
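以下は最小構成のスケッチです(サーバーのコマンドとスクリプト名はこの例のための仮定です。より実践的な例は後述の「例」セクションを参照してください):

```typescript
import { MastraMCPClient } from "@mastra/mcp";

// stdioベースのMCPサーバーに接続する最小構成
const client = new MastraMCPClient({
  name: "my-mcp-client",
  server: {
    command: "npx",
    args: ["tsx", "my-mcp-server.ts"], // 仮定のサーバースクリプト
  },
  timeout: 30000, // デフォルト(60000ms)を上書き
});
```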
### MastraMCPServerDefinition MCPサーバーはstdioベースまたはSSEベースのサーバーとして構成できます。構成にはサーバー固有の設定と共通オプションの両方が含まれます:
", isOptional: true, description: "stdioサーバーの場合:コマンドに設定する環境変数。", }, { name: "url", type: "URL", isOptional: true, description: "SSEサーバーの場合:サーバーのURL。", }, { name: "requestInit", type: "RequestInit", isOptional: true, description: "SSEサーバーの場合:fetch APIのリクエスト構成。", }, { name: "eventSourceInit", type: "EventSourceInit", isOptional: true, description: "SSEサーバーの場合:SSE接続のカスタムフェッチ構成。カスタムヘッダーを使用する場合に必要です。", }, { name: "logger", type: "LogHandler", isOptional: true, description: "ロギング用のオプションの追加ハンドラー。", }, { name: "timeout", type: "number", isOptional: true, description: "サーバー固有のタイムアウト(ミリ秒)。", }, { name: "capabilities", type: "ClientCapabilities", isOptional: true, description: "サーバー固有の機能構成。", }, { name: "enableServerLogs", type: "boolean", isOptional: true, defaultValue: "true", description: "このサーバーのロギングを有効にするかどうか。", }, ]} /> ### LogHandler `LogHandler`関数は`LogMessage`オブジェクトをパラメータとして受け取り、voidを返します。`LogMessage`オブジェクトには以下のプロパティがあります。`LoggingLevel`型は`debug`、`info`、`warn`、`error`の値を持つ文字列列挙型です。
", isOptional: true, description: "オプションの追加ログ詳細", }, ]} /> ## メソッド ### connect() MCPサーバーとの接続を確立します。 ```typescript async connect(): Promise ``` ### disconnect() MCPサーバーとの接続を閉じます。 ```typescript async disconnect(): Promise ``` ### resources() サーバーから利用可能なリソースのリストを取得します。 ```typescript async resources(): Promise ``` ### tools() サーバーから利用可能なツールを取得し、Mastra互換のツール形式に変換して初期化します。 ```typescript async tools(): Promise> ``` ツール名を対応するMastraツール実装にマッピングするオブジェクトを返します。 ## 例 ### Mastra Agentでの使用 #### Stdio Serverの例 ```typescript import { Agent } from "@mastra/core/agent"; import { MastraMCPClient } from "@mastra/mcp"; import { openai } from "@ai-sdk/openai"; // Initialize the MCP client using mcp/fetch as an example https://hub.docker.com/r/mcp/fetch // Visit https://github.com/docker/mcp-servers for other reference docker mcp servers const fetchClient = new MastraMCPClient({ name: "fetch", server: { command: "docker", args: ["run", "-i", "--rm", "mcp/fetch"], logger: (logMessage) => { console.log(`[${logMessage.level}] ${logMessage.message}`); }, }, }); // Create a Mastra Agent const agent = new Agent({ name: "Fetch agent", instructions: "You are able to fetch data from URLs on demand and discuss the response data with the user.", model: openai("gpt-4o-mini"), }); try { // Connect to the MCP server await fetchClient.connect(); // Gracefully handle process exits so the docker subprocess is cleaned up process.on("exit", () => { fetchClient.disconnect(); }); // Get available tools const tools = await fetchClient.tools(); // Use the agent with the MCP tools const response = await agent.generate( "Tell me about mastra.ai/docs. Tell me generally what this page is and the content it includes.", { toolsets: { fetch: tools, }, }, ); console.log("\n\n" + response.text); } catch (error) { console.error("Error:", error); } finally { // Always disconnect when done await fetchClient.disconnect(); } ``` ### SSE Serverの例 ```typescript // Initialize the MCP client using an SSE server const sseClient = new MastraMCPClient({ name: "sse-client", server: { url: new URL("https://your-mcp-server.com/sse"), // Optional fetch request configuration - Note: requestInit alone isn't enough for SSE requestInit: { headers: { Authorization: "Bearer your-token", }, }, // Required for SSE connections with custom headers eventSourceInit: { fetch(input: Request | URL | string, init?: RequestInit) { const headers = new Headers(init?.headers || {}); headers.set('Authorization', 'Bearer your-token'); return fetch(input, { ...init, headers, }); }, }, // Optional additional logging configuration logger: (logMessage) => { console.log(`[${logMessage.level}] ${logMessage.serverName}: ${logMessage.message}`); }, // Disable server logs enableServerLogs: false }, }); // The rest of the usage is identical to the stdio example ``` ### SSE認証に関する重要な注意 認証やカスタムヘッダーを使用したSSE接続を行う場合、`requestInit`と`eventSourceInit`の両方を設定する必要があります。これは、SSE接続がブラウザのEventSource APIを使用しており、このAPIは直接カスタムヘッダーをサポートしていないためです。 `eventSourceInit`設定により、SSE接続に使用される基盤となるfetchリクエストをカスタマイズでき、認証ヘッダーが適切に含まれるようになります。 `eventSourceInit`がなければ、`requestInit`で指定された認証ヘッダーは接続リクエストに含まれず、401 Unauthorizedエラーが発生します。 ## 関連情報 - アプリケーションで複数のMCPサーバーを管理するには、[MCPConfiguration ドキュメント](./mcp-configuration)を参照してください。 - モデルコンテキストプロトコルの詳細については、[@modelcontextprotocol/sdk ドキュメント](https://github.com/modelcontextprotocol/typescript-sdk)を参照してください。 --- title: "リファレンス: createDocumentChunkerTool() | ツール | Mastra ドキュメント" description: Mastraのドキュメントチャンクツールのドキュメントで、効率的な処理と取得のためにドキュメントを小さなチャンクに分割します。 --- # createDocumentChunkerTool() [JA] 
Source: https://mastra.ai/ja/reference/tools/document-chunker-tool `createDocumentChunkerTool()` 関数は、ドキュメントを小さなチャンクに分割して効率的に処理および取得するためのツールを作成します。さまざまなチャンク戦略と設定可能なパラメータをサポートしています。 ## 基本的な使用法 ```typescript import { createDocumentChunkerTool, MDocument } from "@mastra/rag"; const document = new MDocument({ text: "Your document content here...", metadata: { source: "user-manual" } }); const chunker = createDocumentChunkerTool({ doc: document, params: { strategy: "recursive", size: 512, overlap: 50, separator: "\n" } }); const { chunks } = await chunker.execute(); ``` ## パラメータ ### ChunkParams ## 戻り値 ## カスタムパラメータを使用した例 ```typescript const technicalDoc = new MDocument({ text: longDocumentContent, metadata: { type: "technical", version: "1.0" } }); const chunker = createDocumentChunkerTool({ doc: technicalDoc, params: { strategy: "recursive", size: 1024, // Larger chunks overlap: 100, // More overlap separator: "\n\n" // Split on double newlines } }); const { chunks } = await chunker.execute(); // Process the chunks chunks.forEach((chunk, index) => { console.log(`Chunk ${index + 1} length: ${chunk.content.length}`); }); ``` ## ツールの詳細 チャンクは、以下のプロパティを持つMastraツールとして作成されます: - **ツールID**: `Document Chunker {strategy} {size}` - **説明**: `{strategy} 戦略を使用してサイズ {size} と {overlap} オーバーラップでドキュメントをチャンクします` - **入力スキーマ**: 空のオブジェクト(追加の入力は不要) - **出力スキーマ**: チャンクの配列を含むオブジェクト ## 関連 - [MDocument](../rag/document.mdx) - [createVectorQueryTool](./vector-query-tool) --- title: "リファレンス: createGraphRAGTool() | RAG | Mastra Tools ドキュメント" description: MastraのGraph RAG Toolのドキュメントで、ドキュメント間のセマンティック関係のグラフを構築することでRAGを強化します。 --- # createGraphRAGTool() [JA] Source: https://mastra.ai/ja/reference/tools/graph-rag-tool `createGraphRAGTool()`は、ドキュメント間のセマンティックな関係のグラフを構築することでRAGを強化するツールを作成します。これは、`GraphRAG`システムを内部で使用して、グラフベースの検索を提供し、直接的な類似性と接続された関係の両方を通じて関連するコンテンツを見つけます。 ## 使用例 ```typescript import { openai } from "@ai-sdk/openai"; import { createGraphRAGTool } from "@mastra/rag"; const graphTool = createGraphRAGTool({ vectorStoreName: "pinecone", indexName: "docs", model: openai.embedding('text-embedding-3-small'), graphOptions: { dimension: 1536, threshold: 0.7, randomWalkSteps: 100, restartProb: 0.15 } }); ``` ## パラメータ ### GraphOptions ## 戻り値 このツールは以下のオブジェクトを返します: ## デフォルトツールの説明 デフォルトの説明は以下に焦点を当てています: - ドキュメント間の関係を分析する - パターンと接続を見つける - 複雑なクエリに答える ## 高度な例 ```typescript const graphTool = createGraphRAGTool({ vectorStoreName: "pinecone", indexName: "docs", model: openai.embedding('text-embedding-3-small'), graphOptions: { dimension: 1536, threshold: 0.8, // より高い類似性のしきい値 randomWalkSteps: 200, // より多くの探索ステップ restartProb: 0.2 // より高い再開確率 } }); ``` ## カスタム説明付きの例 ```typescript const graphTool = createGraphRAGTool({ vectorStoreName: "pinecone", indexName: "docs", model: openai.embedding('text-embedding-3-small'), description: "Analyze document relationships to find complex patterns and connections in our company's historical data" }); ``` この例は、関係分析の基本目的を維持しながら、特定の使用ケースに合わせてツールの説明をカスタマイズする方法を示しています。 ## 関連 - [createVectorQueryTool](./vector-query-tool) - [GraphRAG](../rag/graph-rag) --- title: "リファレンス: MCPConfiguration | ツール管理 | Mastra ドキュメント" description: MCPConfiguration の API リファレンス - 複数のモデルコンテキストプロトコルサーバーとそのツールを管理するためのクラス。 --- # MCPConfiguration [JA] Source: https://mastra.ai/ja/reference/tools/mcp-configuration `MCPConfiguration` クラスは、Mastra アプリケーション内で複数の MCP サーバー接続とそのツールを管理する方法を提供します。接続のライフサイクルを管理し、ツールの名前空間を処理し、すべての設定されたサーバーにわたってツールへの便利なアクセスを提供します。 ## コンストラクタ MCPConfigurationクラスの新しいインスタンスを作成します。 ```typescript constructor({ id?: string; 
servers: Record<string, MastraMCPServerDefinition>; timeout?: number; }: MCPConfigurationOptions)
```

### MCPConfigurationOptions
", description: "サーバー設定のマップ。各キーは一意のサーバー識別子であり、値はサーバー設定です。", }, { name: "timeout", type: "number", isOptional: true, defaultValue: "60000", description: "個々のサーバー設定で上書きされない限り、すべてのサーバーに適用されるグローバルタイムアウト値(ミリ秒単位)。", }, ]} /> ### MastraMCPServerDefinition `servers`マップ内の各サーバーは、stdioベースのサーバーまたはSSEベースのサーバーとして設定できます。 利用可能な設定オプションの詳細については、MastraMCPClientドキュメントの[MastraMCPServerDefinition](./client#mastramcpserverdefinition)を参照してください。 ## メソッド ### getTools() 設定されたすべてのサーバーからすべてのツールを取得し、ツール名はサーバー名で名前空間化されます(`serverName_toolName`の形式)。これは競合を防ぐためです。 Agentの定義に渡すことを意図しています。 ```ts new Agent({ tools: await mcp.getTools() }); ``` ### getToolsets() 名前空間化されたツール名(`serverName.toolName`の形式)をそのツール実装にマッピングするオブジェクトを返します。 generateまたはstreamメソッドに動的に渡すことを意図しています。 ```typescript const res = await agent.stream(prompt, { toolsets: await mcp.getToolsets(), }); ``` ### disconnect() すべてのMCPサーバーから切断し、リソースをクリーンアップします。 ```typescript async disconnect(): Promise ``` ## 例 ### 基本的な使用法 ```typescript import { MCPConfiguration } from "@mastra/mcp"; import { Agent } from "@mastra/core/agent"; import { openai } from "@ai-sdk/openai"; const mcp = new MCPConfiguration({ servers: { stockPrice: { command: "npx", args: ["tsx", "stock-price.ts"], env: { API_KEY: "your-api-key", }, log: (logMessage) => { console.log(`[${logMessage.level}] ${logMessage.message}`); }, }, weather: { url: new URL("http://localhost:8080/sse"),∂ }, }, timeout: 30000, // グローバルな30秒タイムアウト }); // すべてのツールにアクセスできるエージェントを作成 const agent = new Agent({ name: "Multi-tool Agent", instructions: "あなたは複数のツールサーバーにアクセスできます。", model: openai("gpt-4"), tools: await mcp.getTools(), }); ``` ### generate()またはstream()でのツールセットの使用 ```typescript import { Agent } from "@mastra/core/agent"; import { MCPConfiguration } from "@mastra/mcp"; import { openai } from "@ai-sdk/openai"; // まず、ツールなしでエージェントを作成 const agent = new Agent({ name: "Multi-tool Agent", instructions: "あなたはユーザーが株価と天気を確認するのを手伝います。", model: openai("gpt-4"), }); // 後で、ユーザー固有の設定でMCPを構成 const mcp = new MCPConfiguration({ servers: { stockPrice: { command: "npx", args: ["tsx", "stock-price.ts"], env: { API_KEY: "user-123-api-key", }, timeout: 20000, // サーバー固有のタイムアウト }, weather: { url: new URL("http://localhost:8080/sse"), requestInit: { headers: { Authorization: `Bearer user-123-token`, }, }, }, }, }); // すべてのツールセットをstream()またはgenerate()に渡す const response = await agent.stream( "AAPLの調子はどうですか?また、天気はどうですか?", { toolsets: await mcp.getToolsets(), }, ); ``` ## リソース管理 `MCPConfiguration` クラスには、複数のインスタンスを管理するためのメモリリーク防止機能が組み込まれています: 1. `id` なしで同一の設定を持つ複数のインスタンスを作成すると、メモリリークを防ぐためにエラーが発生します 2. 同一の設定を持つ複数のインスタンスが必要な場合は、各インスタンスに一意の `id` を指定してください 3. 同じ設定でインスタンスを再作成する前に `await configuration.disconnect()` を呼び出してください 4. インスタンスが1つだけ必要な場合は、再作成を避けるために設定をより高いスコープに移動することを検討してください 例えば、`id` なしで同じ設定で複数のインスタンスを作成しようとすると: ```typescript // 最初のインスタンス - OK const mcp1 = new MCPConfiguration({ servers: { /* ... */ }, }); // 同じ設定での2番目のインスタンス - エラーが発生します const mcp2 = new MCPConfiguration({ servers: { /* ... */ }, }); // 修正方法: // 1. 一意のIDを追加 const mcp3 = new MCPConfiguration({ id: "instance-1", servers: { /* ... */ }, }); // 2. または再作成前に切断 await mcp1.disconnect(); const mcp4 = new MCPConfiguration({ servers: { /* ... */ }, }); ``` ## サーバーライフサイクル MCPConfigurationはサーバー接続を優雅に処理します: 1. 複数のサーバーに対する自動接続管理 2. 開発中のエラーメッセージを防ぐための優雅なサーバーシャットダウン 3. 
切断時のリソースの適切なクリーンアップ ## 関連情報 - 個々のMCPクライアント設定の詳細については、[MastraMCPClient ドキュメント](./client)を参照してください - モデルコンテキストプロトコルについて詳しくは、[@modelcontextprotocol/sdk ドキュメント](https://github.com/modelcontextprotocol/typescript-sdk)を参照してください --- title: "リファレンス: createVectorQueryTool() | RAG | Mastra Tools ドキュメント" description: ベクトルストア上でのフィルタリングと再ランキング機能を備えたセマンティック検索を可能にするMastraのベクトルクエリツールのドキュメント。 --- # createVectorQueryTool() [JA] Source: https://mastra.ai/ja/reference/tools/vector-query-tool `createVectorQueryTool()` 関数は、ベクターストアに対するセマンティック検索のためのツールを作成します。フィルタリング、再ランキングをサポートし、さまざまなベクターストアのバックエンドと統合します。 ## 基本的な使用法 ```typescript import { openai } from '@ai-sdk/openai'; import { createVectorQueryTool } from "@mastra/rag"; const queryTool = createVectorQueryTool({ vectorStoreName: "pinecone", indexName: "docs", model: openai.embedding('text-embedding-3-small'), }); ``` ## パラメータ ### RerankConfig ## 戻り値 このツールは以下のオブジェクトを返します: ## デフォルトツールの説明 デフォルトの説明は以下に焦点を当てています: - 保存された知識から関連情報を見つけること - ユーザーの質問に答えること - 事実に基づいたコンテンツを取得すること ## 結果の処理 このツールは、ユーザーのクエリに基づいて返す結果の数を決定し、デフォルトでは10件の結果を返します。これはクエリの要件に応じて調整可能です。 ## フィルターを使用した例 ```typescript const queryTool = createVectorQueryTool({ vectorStoreName: "pinecone", indexName: "docs", model: openai.embedding('text-embedding-3-small'), enableFilters: true, }); ``` フィルタリングが有効になっていると、ツールはクエリを処理してメタデータフィルターを構築し、セマンティック検索と組み合わせます。プロセスは次のように機能します: 1. ユーザーが「'version' フィールドが2.0より大きいコンテンツを見つける」などの特定のフィルター要件でクエリを行います 2. エージェントがクエリを分析し、適切なフィルターを構築します: ```typescript { "version": { "$gt": 2.0 } } ``` このエージェント駆動のアプローチは次のことを行います: - 自然言語のクエリをフィルター仕様に処理します - ベクトルストア固有のフィルター構文を実装します - クエリ用語をフィルター演算子に変換します 詳細なフィルター構文とストア固有の機能については、[メタデータフィルター](../rag/metadata-filters)のドキュメントを参照してください。 エージェント駆動のフィルタリングがどのように機能するかの例については、[エージェント駆動のメタデータフィルタリング](../../../examples/rag/usage/filter-rag)の例を参照してください。 ## リランキングを使用した例 ```typescript const queryTool = createVectorQueryTool({ vectorStoreName: "milvus", indexName: "documentation", model: openai.embedding('text-embedding-3-small'), reranker: { model: openai('gpt-4o-mini'), options: { weights: { semantic: 0.5, // セマンティック関連性の重み vector: 0.3, // ベクトル類似性の重み position: 0.2 // 元の位置の重み }, topK: 5 } } }); ``` リランキングは以下を組み合わせることで結果の質を向上させます: - セマンティック関連性: テキスト類似性のLLMベースのスコアリングを使用 - ベクトル類似性: 元のベクトル距離スコア - 位置バイアス: 元の結果の順序を考慮 - クエリ分析: クエリの特性に基づく調整 リランカーは初期のベクトル検索結果を処理し、関連性に最適化された再注文リストを返します。 ## カスタム説明付きの例 ```typescript const queryTool = createVectorQueryTool({ vectorStoreName: "pinecone", indexName: "docs", model: openai.embedding('text-embedding-3-small'), description: "会社の方針や手順に関する質問に答えるために関連情報を見つけるために、文書アーカイブを検索します" }); ``` この例は、情報検索の基本目的を維持しながら、特定の使用ケースに合わせてツールの説明をカスタマイズする方法を示しています。 ## ツールの詳細 このツールは以下で作成されています: - **ID**: `VectorQuery {vectorStoreName} {indexName} Tool` - **入力スキーマ**: queryText と filter オブジェクトが必要 - **出力スキーマ**: relevantContext 文字列を返す ## 関連 - [rerank()](../rag/rerank) - [createGraphRAGTool](./graph-rag-tool) --- title: "リファレンス: Azure Voice | 音声プロバイダー | Mastra ドキュメント" description: "Azure Cognitive Servicesを使用してテキスト読み上げと音声認識機能を提供するAzureVoiceクラスのドキュメント。" --- # Azure [JA] Source: https://mastra.ai/ja/reference/voice/azure Mastra の AzureVoice クラスは、Microsoft Azure Cognitive Services を使用してテキスト読み上げと音声認識の機能を提供します。 ## 使用例 ```typescript import { AzureVoice } from '@mastra/voice-azure'; // Initialize with configuration const voice = new AzureVoice({ speechModel: { name: 'neural', apiKey: 'your-azure-speech-api-key', region: 'eastus' }, listeningModel: { name: 'whisper', apiKey: 'your-azure-speech-api-key', region: 'eastus' }, speaker: 'en-US-JennyNeural' // Default voice }); 
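// APIキーとリージョンは、コンストラクタの代わりに環境変数
// AZURE_SPEECH_KEY / AZURE_SPEECH_REGION でも指定できます(後述の「メモ」を参照)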
// Convert text to speech const audioStream = await voice.speak('Hello, how can I help you?', { speaker: 'en-US-GuyNeural', // Override default voice style: 'cheerful' // Voice style }); // Convert speech to text const text = await voice.listen(audioStream, { filetype: 'wav', language: 'en-US' }); ``` ## 設定 ### コンストラクタオプション ### AzureSpeechConfig ## メソッド ### speak() Azureのニューラルテキスト読み上げサービスを使用してテキストを音声に変換します。 戻り値: `Promise` ### listen() Azureの音声認識サービスを使用して音声を文字起こしします。 戻り値: `Promise` ### getSpeakers() 利用可能な音声オプションの配列を返します。各ノードには以下が含まれます: ## メモ - APIキーはコンストラクタオプションまたは環境変数(AZURE_SPEECH_KEYとAZURE_SPEECH_REGION)を通じて提供できます - Azureは多くの言語にわたる幅広いニューラルボイスを提供しています - 一部の音声は、明るい、悲しい、怒りなどの話し方のスタイルをサポートしています - 音声認識は複数のオーディオフォーマットと言語をサポートしています - Azureの音声サービスは、自然な響きの高品質なニューラルボイスを提供しています --- title: "リファレンス: Cloudflare Voice | 音声プロバイダー | Mastra ドキュメント" description: "CloudflareVoiceクラスのドキュメント。Cloudflare Workers AIを使用したテキスト読み上げ機能を提供します。" --- # Cloudflare [JA] Source: https://mastra.ai/ja/reference/voice/cloudflare MastraのCloudflareVoiceクラスは、Cloudflare Workers AIを使用したテキスト読み上げ機能を提供します。このプロバイダーは、エッジコンピューティング環境に適した効率的で低遅延の音声合成を専門としています。 ## 使用例 ```typescript import { CloudflareVoice } from '@mastra/voice-cloudflare'; // Initialize with configuration const voice = new CloudflareVoice({ speechModel: { name: '@cf/meta/m2m100-1.2b', apiKey: 'your-cloudflare-api-token', accountId: 'your-cloudflare-account-id' }, speaker: 'en-US-1' // Default voice }); // Convert text to speech const audioStream = await voice.speak('Hello, how can I help you?', { speaker: 'en-US-2', // Override default voice }); // Get available voices const speakers = await voice.getSpeakers(); console.log(speakers); ``` ## 設定 ### コンストラクタオプション ### CloudflareSpeechConfig ## メソッド ### speak() Cloudflareのテキスト読み上げサービスを使用してテキストを音声に変換します。 戻り値: `Promise` ### getSpeakers() 利用可能な音声オプションの配列を返します。各ノードには以下が含まれます: ## メモ - APIトークンはコンストラクタオプションまたは環境変数(CLOUDFLARE_API_TOKENとCLOUDFLARE_ACCOUNT_ID)を通じて提供できます - Cloudflare Workers AIは低レイテンシーのエッジコンピューティング向けに最適化されています - このプロバイダーは音声合成(TTS)機能のみをサポートし、音声認識(STT)はサポートしていません - このサービスは他のCloudflare Workers製品とうまく連携します - 本番環境での使用には、Cloudflareアカウントに適切なWorkers AIサブスクリプションがあることを確認してください - 音声オプションは他のプロバイダーと比較すると限定的ですが、エッジでのパフォーマンスは優れています ## 関連プロバイダー テキスト読み上げ機能に加えて音声認識機能が必要な場合は、以下のプロバイダーの利用を検討してください: - [OpenAI](./openai) - TTSとSTTの両方を提供 - [Google](./google) - TTSとSTTの両方を提供 - [Azure](./azure) - TTSとSTTの両方を提供 --- title: "リファレンス: CompositeVoice | Voice Providers | Mastra ドキュメント" description: "CompositeVoice クラスのドキュメントで、複数の音声プロバイダーを組み合わせて柔軟なテキスト読み上げと音声認識操作を可能にします。" --- # CompositeVoice [JA] Source: https://mastra.ai/ja/reference/voice/composite-voice CompositeVoiceクラスは、テキストから音声への変換や音声からテキストへの変換のために、異なる音声プロバイダーを組み合わせることができます。これは、各操作に最適なプロバイダーを使用したい場合に特に便利です。例えば、音声からテキストへの変換にはOpenAIを使用し、テキストから音声への変換にはPlayAIを使用する場合です。 CompositeVoiceは、Agentクラスによって内部的に使用され、柔軟な音声機能を提供します。 ## 使用例 ```typescript import { CompositeVoice } from "@mastra/core/voice"; import { OpenAIVoice } from "@mastra/voice-openai"; import { PlayAIVoice } from "@mastra/voice-playai"; // Create voice providers const openai = new OpenAIVoice(); const playai = new PlayAIVoice(); // Use OpenAI for listening (speech-to-text) and PlayAI for speaking (text-to-speech) const voice = new CompositeVoice({ input: openai, output: playai }); // Convert speech to text using OpenAI const text = await voice.listen(audioStream); // Convert text to speech using PlayAI const audio = await voice.speak("Hello, world!"); ``` ## コンストラクタのパラメータ ## メソッド ### speak() 設定されたスピーキングプロバイダーを使用してテキストを音声に変換します。 注意事項: - 
スピーキングプロバイダーが設定されていない場合、このメソッドはエラーをスローします
- オプションは設定されたスピーキングプロバイダーに渡されます
- 音声データのストリームを返します

### listen()

設定されたリスニングプロバイダーを使用して音声をテキストに変換します。

注意事項:

- リスニングプロバイダーが設定されていない場合、このメソッドはエラーをスローします
- オプションは設定されたリスニングプロバイダーに渡されます
- プロバイダーに応じて、文字列または転写されたテキストのストリームを返します

### getSpeakers()

スピーキングプロバイダーから利用可能な声のリストを返します。各ノードには以下が含まれます:

注意事項:

- スピーキングプロバイダーからのみ声を返します
- スピーキングプロバイダーが設定されていない場合、空の配列を返します
- 各声オブジェクトには少なくともvoiceIdプロパティがあります
- 追加の声のプロパティはスピーキングプロバイダーに依存します

---
title: "リファレンス: Deepgram Voice | 音声プロバイダー | Mastra ドキュメント"
description: "Deepgramの音声実装に関するドキュメントで、複数の音声モデルと言語を使用したテキスト読み上げと音声認識機能を提供します。"
---

# Deepgram

[JA] Source: https://mastra.ai/ja/reference/voice/deepgram

MastraにおけるDeepgramの音声実装は、DeepgramのAPIを使用してテキスト読み上げ(TTS)と音声認識(STT)機能を提供します。複数の音声モデルと言語をサポートしており、音声合成と文字起こしの両方に対して設定可能なオプションがあります。

## 使用例

```typescript
import { DeepgramVoice } from "@mastra/voice-deepgram";

// Initialize with default configuration (uses DEEPGRAM_API_KEY environment variable)
const voice = new DeepgramVoice();

// Initialize with custom configuration
const voice = new DeepgramVoice({
  speechModel: {
    name: 'aura',
    apiKey: 'your-api-key',
  },
  listeningModel: {
    name: 'nova-2',
    apiKey: 'your-api-key',
  },
  speaker: 'asteria-en',
});

// Text-to-Speech
const audioStream = await voice.speak("Hello, world!");

// Speech-to-Text
const transcript = await voice.listen(audioStream);
```

## コンストラクタパラメータ

### DeepgramVoiceConfig

- `properties`(任意): Deepgram APIに渡す追加のプロパティ
- `language`(任意): モデルの言語コード

## メソッド

### speak()

設定された音声モデルと声を使用してテキストを音声に変換します。

戻り値: `Promise<NodeJS.ReadableStream>`

### listen()

設定されたリスニングモデルを使用して音声をテキストに変換します。

戻り値: `Promise<string>`

### getSpeakers()

利用可能な音声オプションのリストを返します。

---
title: "リファレンス: ElevenLabs Voice | 音声プロバイダー | Mastra ドキュメント"
description: "ElevenLabs の音声実装に関するドキュメントで、複数の音声モデルと自然な音声合成を備えた高品質なテキスト読み上げ機能を提供します。"
---

# ElevenLabs

[JA] Source: https://mastra.ai/ja/reference/voice/elevenlabs

MastraにおけるElevenLabsの音声実装は、ElevenLabs APIを使用して高品質なテキスト読み上げ(TTS)および音声認識(STT)機能を提供します。

## 使用例

```typescript
import { ElevenLabsVoice } from "@mastra/voice-elevenlabs";

// デフォルトの設定で初期化(ELEVENLABS_API_KEY環境変数を使用)
const voice = new ElevenLabsVoice();

// カスタム設定で初期化
const voice = new ElevenLabsVoice({
  speechModel: {
    name: 'eleven_multilingual_v2',
    apiKey: 'your-api-key',
  },
  speaker: 'custom-speaker-id',
});

// テキストから音声へ
const audioStream = await voice.speak("Hello, world!");

// 利用可能なスピーカーを取得
const speakers = await voice.getSpeakers();
```

## コンストラクタパラメータ

### ElevenLabsVoiceConfig

## メソッド

### speak()

設定された音声モデルと声を使用してテキストを音声に変換します。

戻り値: `Promise<NodeJS.ReadableStream>`

### getSpeakers()

利用可能な音声オプションの配列を返します。各ノードには以下が含まれます:

### listen()

ElevenLabs Speech-to-Text APIを使用して音声入力をテキストに変換します。

オプションオブジェクトは以下のプロパティをサポートします:

戻り値: `Promise<string>` - 文字起こしされたテキストに解決されるPromise

## 重要な注意事項

1. ElevenLabs APIキーが必要です。`ELEVENLABS_API_KEY` 環境変数を介して設定するか、コンストラクタで渡してください。
2. デフォルトのスピーカーはAria(ID: '9BWtsMINqrJLrRacOk9x')に設定されています。
3. 音声認識(STT)は、上記の `listen()` メソッドを通じてElevenLabsのSpeech-to-Text APIで提供されます。
4. 
利用可能なスピーカーは、各声の言語や性別を含む詳細情報を返す `getSpeakers()` メソッドを使用して取得できます。 --- title: "リファレンス: Google Voice | Voice Providers | Mastra Docs" description: "Google Voice の実装に関するドキュメントで、テキスト読み上げと音声認識機能を提供します。" --- # Google [JA] Source: https://mastra.ai/ja/reference/voice/google MastraにおけるGoogle Voiceの実装は、Google Cloudサービスを使用して、テキスト読み上げ(TTS)と音声認識(STT)の両方の機能を提供します。複数の声、言語、および高度なオーディオ設定オプションをサポートしています。 ## 使用例 ```typescript import { GoogleVoice } from "@mastra/voice-google"; // Initialize with default configuration (uses GOOGLE_API_KEY environment variable) const voice = new GoogleVoice(); // Initialize with custom configuration const voice = new GoogleVoice({ speechModel: { apiKey: 'your-speech-api-key', }, listeningModel: { apiKey: 'your-listening-api-key', }, speaker: 'en-US-Casual-K', }); // Text-to-Speech const audioStream = await voice.speak("Hello, world!", { languageCode: 'en-US', audioConfig: { audioEncoding: 'LINEAR16', }, }); // Speech-to-Text const transcript = await voice.listen(audioStream, { config: { encoding: 'LINEAR16', languageCode: 'en-US', }, }); // Get available voices for a specific language const voices = await voice.getSpeakers({ languageCode: 'en-US' }); ``` ## コンストラクターパラメータ ### GoogleModelConfig ## メソッド ### speak() Google Cloud Text-to-Speech サービスを使用してテキストを音声に変換します。 戻り値: `Promise` ### listen() Google Cloud Speech-to-Text サービスを使用して音声をテキストに変換します。 戻り値: `Promise` ### getSpeakers() 利用可能な音声オプションの配列を返します。各ノードには以下が含まれます: ## 重要な注意事項 1. Google Cloud APIキーが必要です。`GOOGLE_API_KEY`環境変数を介して設定するか、コンストラクタで渡してください。 2. デフォルトの声は「en-US-Casual-K」に設定されています。 3. テキスト読み上げと音声認識サービスの両方で、LINEAR16がデフォルトのオーディオエンコーディングとして使用されます。 4. `speak()`メソッドは、Google Cloud Text-to-Speech APIを通じて高度なオーディオ設定をサポートしています。 5. `listen()`メソッドは、Google Cloud Speech-to-Text APIを通じてさまざまな認識設定をサポートしています。 6. 
利用可能な声は、`getSpeakers()`メソッドを使用して言語コードでフィルタリングできます。 --- title: "リファレンス: MastraVoice | ボイスプロバイダー | Mastra ドキュメント" description: "Mastra のすべての音声サービスのコアインターフェースを定義する MastraVoice 抽象基底クラスのドキュメントで、音声間変換機能を含みます。" --- # MastraVoice [JA] Source: https://mastra.ai/ja/reference/voice/mastra-voice MastraVoiceクラスは、Mastraにおける音声サービスのコアインターフェースを定義する抽象基底クラスです。すべての音声プロバイダー実装(OpenAI、Deepgram、PlayAI、Speechifyなど)は、このクラスを拡張して特定の機能を提供します。このクラスには、WebSocket接続を通じたリアルタイムの音声対音声機能のサポートが含まれています。 ## 使用例 ```typescript import { MastraVoice } from "@mastra/core/voice"; // 音声プロバイダーの実装を作成 class MyVoiceProvider extends MastraVoice { constructor(config: { speechModel?: BuiltInModelConfig; listeningModel?: BuiltInModelConfig; speaker?: string; realtimeConfig?: { model?: string; apiKey?: string; options?: unknown; }; }) { super({ speechModel: config.speechModel, listeningModel: config.listeningModel, speaker: config.speaker, realtimeConfig: config.realtimeConfig }); } // 必須の抽象メソッドを実装 async speak(input: string | NodeJS.ReadableStream, options?: { speaker?: string }): Promise { // テキストから音声への変換を実装 } async listen(audioStream: NodeJS.ReadableStream, options?: unknown): Promise { // 音声からテキストへの変換を実装 } async getSpeakers(): Promise> { // 利用可能な音声のリストを返す } // オプションの音声から音声へのメソッド async connect(): Promise { // 音声から音声への通信のためのWebSocket接続を確立 } async send(audioData: NodeJS.ReadableStream | Int16Array): Promise { // 音声から音声へのオーディオデータをストリーム } async answer(): Promise { // 音声プロバイダーに応答を促す } addTools(tools: Array): void { // 音声プロバイダーが使用するツールを追加 } close(): void { // WebSocket接続を閉じる } on(event: string, callback: (data: unknown) => void): void { // イベントリスナーを登録 } off(event: string, callback: (data: unknown) => void): void { // イベントリスナーを削除 } } ``` ## コンストラクタパラメータ ### BuiltInModelConfig ### RealtimeConfig ## 抽象メソッド これらのメソッドは、MastraVoiceを拡張する未知のクラスによって実装される必要があります。 ### speak() 設定された音声モデルを使用してテキストを音声に変換します。 ```typescript abstract speak( input: string | NodeJS.ReadableStream, options?: { speaker?: string; [key: string]: unknown; } ): Promise ``` 目的: - テキスト入力を受け取り、プロバイダーのテキスト読み上げサービスを使用して音声に変換します - 柔軟性のために文字列とストリーム入力の両方をサポートします - オプションを通じてデフォルトのスピーカー/声を上書きすることができます - 再生または保存可能な音声データのストリームを返します - 音声が「speaking」イベントを発生させることで処理される場合、voidを返すことがあります ### listen() 設定されたリスニングモデルを使用して音声をテキストに変換します。 ```typescript abstract listen( audioStream: NodeJS.ReadableStream, options?: { [key: string]: unknown; } ): Promise ``` 目的: - 音声ストリームを受け取り、プロバイダーの音声認識サービスを使用してテキストに変換します - トランスクリプション設定のためのプロバイダー固有のオプションをサポートします - 完全なテキストトランスクリプションまたはトランスクリプションされたテキストのストリームを返すことができます - すべてのプロバイダーがこの機能をサポートしているわけではありません(例: PlayAI, Speechify) - トランスクリプションが「writing」イベントを発生させることで処理される場合、voidを返すことがあります ### getSpeakers() プロバイダーがサポートする利用可能な声のリストを返します。 ```typescript abstract getSpeakers(): Promise> ``` 目的: - プロバイダーから利用可能な声/スピーカーのリストを取得します - 各声には少なくともvoiceIdプロパティが必要です - プロバイダーは各声に関する追加のメタデータを含めることができます - テキスト読み上げ変換のために利用可能な声を発見するために使用されます ## オプションのメソッド これらのメソッドはデフォルトの実装を持っていますが、音声プロバイダーが音声間の機能をサポートしている場合、上書きすることができます。 ### connect() 通信のためのWebSocketまたはWebRTC接続を確立します。 ```typescript connect(config?: unknown): Promise ``` 目的: - 通信のために音声サービスへの接続を初期化します - send()やanswer()のような機能を使用する前に呼び出す必要があります - 接続が確立されると解決されるPromiseを返します - 設定はプロバイダー固有です ### send() 音声プロバイダーにリアルタイムで音声データをストリーミングします。 ```typescript send(audioData: NodeJS.ReadableStream | Int16Array): Promise ``` 目的: - 音声プロバイダーに音声データをリアルタイムで送信し、処理します - ライブマイク入力のような連続音声ストリーミングシナリオに便利です - ReadableStreamとInt16Arrayの両方の音声フォーマットをサポートします - このメソッドを呼び出す前に接続状態である必要があります ### answer() 音声プロバイダーに応答を生成させます。 ```typescript answer(): Promise ``` 目的: - 音声プロバイダーに応答を生成する信号を送信します - 
リアルタイムの会話でAIに応答を促すために使用されます - 応答はイベントシステム(例: 'speaking'イベント)を通じて発信されます ### addTools() 会話中に使用できるツールを音声プロバイダーに装備します。 ```typescript addTools(tools: Array): void ``` 目的: - 会話中に音声プロバイダーが使用できるツールを追加します - ツールは音声プロバイダーの機能を拡張できます - 実装はプロバイダー固有です ### close() WebSocketまたはWebRTC接続を切断します。 ```typescript close(): void ``` 目的: - 音声サービスへの接続を閉じます - リソースをクリーンアップし、進行中のリアルタイム処理を停止します - 音声インスタンスを終了する際に呼び出すべきです ### on() 音声イベントのためのイベントリスナーを登録します。 ```typescript on( event: E, callback: (data: E extends keyof VoiceEventMap ? VoiceEventMap[E] : unknown) => void, ): void ``` 目的: - 指定されたイベントが発生したときに呼び出されるコールバック関数を登録します - 標準イベントには'speaking'、'writing'、'error'が含まれます - プロバイダーはカスタムイベントも発信できます - イベントデータの構造はイベントタイプに依存します ### off() イベントリスナーを削除します。 ```typescript off( event: E, callback: (data: E extends keyof VoiceEventMap ? VoiceEventMap[E] : unknown) => void, ): void ``` 目的: - 以前に登録されたイベントリスナーを削除します - イベントハンドラーが不要になったときにクリーンアップするために使用されます ## イベントシステム MastraVoiceクラスには、リアルタイム通信のためのイベントシステムが含まれています。標準的なイベントタイプには以下が含まれます: ## 保護されたプロパティ ## テレメトリーサポート MastraVoiceには、パフォーマンスの追跡とエラーモニタリングを伴うメソッド呼び出しをラップする`traced`メソッドを通じて、組み込みのテレメトリーサポートが含まれています。 ## メモ - MastraVoiceは抽象クラスであり、直接インスタンス化することはできません - 実装はすべての抽象メソッドに対して具体的な実装を提供する必要があります - このクラスは異なる音声サービスプロバイダー間で一貫したインターフェースを提供します - 音声から音声への機能はオプションであり、プロバイダー固有です - イベントシステムはリアルタイムの対話のための非同期通信を可能にします - テレメトリはすべてのメソッド呼び出しに対して自動的に処理されます --- title: "リファレンス: Murf Voice | Voice Providers | Mastra Docs" description: "Murf音声実装のドキュメントで、テキスト読み上げ機能を提供します。" --- # Murf [JA] Source: https://mastra.ai/ja/reference/voice/murf MastraにおけるMurfの音声実装は、MurfのAI音声サービスを使用してテキスト読み上げ(TTS)機能を提供します。これは、異なる言語で複数の音声をサポートしています。 ## 使用例 ```typescript import { MurfVoice } from "@mastra/voice-murf"; // デフォルトの設定で初期化(MURF_API_KEY環境変数を使用) const voice = new MurfVoice(); // カスタム設定で初期化 const voice = new MurfVoice({ speechModel: { name: 'GEN2', apiKey: 'your-api-key', properties: { format: 'MP3', rate: 1.0, pitch: 1.0, sampleRate: 48000, channelType: 'STEREO', }, }, speaker: 'en-US-cooper', }); // デフォルト設定でのテキスト読み上げ const audioStream = await voice.speak("Hello, world!"); // カスタムプロパティでのテキスト読み上げ const audioStream = await voice.speak("Hello, world!", { speaker: 'en-UK-hazel', properties: { format: 'WAV', rate: 1.2, style: 'casual', }, }); // 利用可能な声を取得 const voices = await voice.getSpeakers(); ``` ## コンストラクタパラメータ ### MurfConfig ### 音声プロパティ ", description: "カスタム発音マッピング", isOptional: true, }, { name: "encodeAsBase64", type: "boolean", description: "オーディオをbase64としてエンコードするかどうか", isOptional: true, }, { name: "variation", type: "number", description: "声のバリエーションパラメータ", isOptional: true, }, { name: "audioDuration", type: "number", description: "目標オーディオの長さ(秒)", isOptional: true, }, { name: "multiNativeLocale", type: "string", description: "多言語サポートのためのロケール", isOptional: true, }, ]} /> ## メソッド ### speak() MurfのAPIを使用してテキストを音声に変換します。 戻り値: `Promise` ### getSpeakers() 利用可能な音声オプションの配列を返します。各ノードには以下が含まれます: ### listen() このメソッドはMurfではサポートされておらず、エラーをスローします。Murfは音声認識機能を提供していません。 ## 重要な注意事項 1. Murf APIキーが必要です。`MURF_API_KEY` 環境変数を介して設定するか、コンストラクタで渡してください。 2. サービスはデフォルトのモデルバージョンとしてGEN2を使用します。 3. 音声プロパティはコンストラクタレベルで設定でき、リクエストごとに上書き可能です。 4. サービスは、フォーマット、サンプルレート、チャンネルタイプなどのプロパティを通じて広範なオーディオカスタマイズをサポートします。 5. 
音声認識機能はサポートされていません。

---
title: "リファレンス: OpenAI リアルタイム音声 | 音声プロバイダー | Mastra ドキュメント"
description: "OpenAIRealtimeVoice クラスのドキュメントで、WebSockets を介したリアルタイムのテキスト読み上げと音声認識機能を提供します。"
---

# OpenAI Realtime Voice

[JA] Source: https://mastra.ai/ja/reference/voice/openai-realtime

OpenAIRealtimeVoice クラスは、OpenAI の WebSocket ベースの API を使用してリアルタイムの音声対話機能を提供します。リアルタイムの音声から音声への変換、音声活動検出、およびイベントベースのオーディオストリーミングをサポートしています。

## 使用例

```typescript
import { OpenAIRealtimeVoice } from "@mastra/voice-openai-realtime";
import { playAudio, getMicrophoneStream } from "@mastra/node-audio";

// Initialize with default configuration using environment variables
const voice = new OpenAIRealtimeVoice();

// Or initialize with specific configuration
const voiceWithConfig = new OpenAIRealtimeVoice({
  chatModel: {
    apiKey: 'your-openai-api-key',
    model: 'gpt-4o-mini-realtime-preview-2024-12-17',
    options: {
      sessionConfig: {
        turn_detection: {
          type: 'server_vad',
          threshold: 0.6,
          silence_duration_ms: 1200
        }
      }
    }
  },
  speaker: 'alloy' // Default voice
});

// Establish connection
await voice.connect();

// Set up event listeners
voice.on('speaker', ({ audio }) => {
  // Handle audio data (Int16Array) pcm format by default
  playAudio(audio);
});

voice.on('writing', ({ text, role }) => {
  // Handle transcribed text
  console.log(`${role}: ${text}`);
});

// Convert text to speech
await voice.speak('Hello, how can I help you today?', {
  speaker: 'echo' // Override default voice
});

// Process audio input
const microphoneStream = getMicrophoneStream();
await voice.send(microphoneStream);

// When done, disconnect
voice.close();
```

## 設定

### コンストラクタオプション

### chatModel

### options

### 音声活動検出 (VAD) 設定

## メソッド

### connect()

OpenAIリアルタイムサービスへの接続を確立します。speak、listen、またはsend関数を使用する前に呼び出す必要があります。

戻り値: `Promise<void>` - 接続が確立されたときに解決されるPromise。

### speak()

設定された音声モデルを使用してスピーキングイベントを発生させます。入力として文字列またはリーダブルストリームのいずれかを受け入れることができます。

戻り値: `Promise<void>`

### listen()

音声認識のために音声入力を処理します。音声データのリーダブルストリームを受け取り、書き起こされたテキストと共に「writing」イベントを発生させます。

戻り値: `Promise<void>`

### send()

ライブマイク入力のような連続音声ストリーミングシナリオのために、リアルタイムでOpenAIサービスに音声データをストリームします。

戻り値: `Promise<void>`

### updateConfig()

音声インスタンスのセッション設定を更新します。これを使用して、音声設定、ターン検出、その他のパラメータを変更できます。

戻り値: `void`

### addTools()

音声インスタンスに一連のツールを追加します。ツールは、会話中にモデルが追加のアクションを実行できるようにします。OpenAIRealtimeVoiceがエージェントに追加されると、そのエージェントに設定されたツールは自動的に音声インターフェースで利用可能になります。

戻り値: `void`

### close()

OpenAIリアルタイムセッションから切断し、リソースをクリーンアップします。音声インスタンスの使用が終了したら呼び出す必要があります。

戻り値: `void`

### getSpeakers()

利用可能な音声スピーカーのリストを返します。

戻り値: `Promise<Array<{ voiceId: string; [key: string]: unknown }>>`

### on()

音声イベントのためのイベントリスナーを登録します。

戻り値: `void`

### off()

以前に登録されたイベントリスナーを削除します。

戻り値: `void`

## イベント

OpenAIRealtimeVoice クラスは次のイベントを発行します:

### OpenAI リアルタイムイベント

'openAIRealtime:' をプレフィックスとして付けることで、[OpenAI リアルタイムユーティリティイベント](https://github.com/openai/openai-realtime-api-beta#reference-client-utility-events)をリッスンすることもできます:

## 利用可能な声

以下の声のオプションが利用可能です:

- `alloy`: 中立的でバランスの取れた
- `ash`: 明瞭で正確な
- `ballad`: メロディックで滑らかな
- `coral`: 暖かく親しみやすい
- `echo`: 共鳴し深みのある
- `sage`: 落ち着いて思慮深い
- `shimmer`: 明るくエネルギッシュな
- `verse`: 多才で表現力豊かな

---
title: "リファレンス: OpenAI Voice | 音声プロバイダー | Mastra ドキュメント"
description: "OpenAIVoice クラスのドキュメントで、テキストから音声への変換と音声からテキストへの変換機能を提供します。"
---

# OpenAI

[JA] Source: https://mastra.ai/ja/reference/voice/openai

MastraのOpenAIVoiceクラスは、OpenAIのモデルを使用してテキストから音声への変換と音声からテキストへの変換機能を提供します。

## 使用例

```typescript
import { OpenAIVoice } from '@mastra/voice-openai';

// 環境変数を使用してデフォルト設定で初期化
const voice = new OpenAIVoice();

// または特定の設定で初期化
const voiceWithConfig = new OpenAIVoice({
  speechModel: {
    name: 'tts-1-hd',
    apiKey:
'your-openai-api-key'
  },
  listeningModel: {
    name: 'whisper-1',
    apiKey: 'your-openai-api-key'
  },
  speaker: 'alloy' // デフォルトの声
});

// テキストを音声に変換
const audioStream = await voice.speak('こんにちは、どのようにお手伝いできますか?', {
  speaker: 'nova', // デフォルトの声を上書き
  speed: 1.2 // 音声速度を調整
});

// 音声をテキストに変換
const text = await voice.listen(audioStream, {
  filetype: 'mp3'
});
```

## 設定

### コンストラクタオプション

### OpenAIConfig

## メソッド

### speak()

OpenAIのテキスト読み上げモデルを使用して、テキストを音声に変換します。

戻り値: `Promise<NodeJS.ReadableStream>`

### listen()

OpenAIのWhisperモデルを使用して音声を文字起こしします。

戻り値: `Promise<string>`

### getSpeakers()

利用可能な音声オプションの配列を返します。各ノードには以下が含まれます:

## メモ

- APIキーは、コンストラクタオプションまたは`OPENAI_API_KEY`環境変数を介して提供できます
- `tts-1-hd`モデルはより高品質なオーディオを提供しますが、処理時間が遅くなる可能性があります
- 音声認識は、mp3、wav、webmを含む複数のオーディオフォーマットをサポートしています

---
title: "リファレンス: PlayAI Voice | Voice Providers | Mastra Docs"
description: "PlayAI音声実装のドキュメントで、テキスト読み上げ機能を提供します。"
---

# PlayAI

[JA] Source: https://mastra.ai/ja/reference/voice/playai

MastraにおけるPlayAIの音声実装は、PlayAIのAPIを使用してテキスト読み上げ機能を提供します。

## 使用例

```typescript
import { PlayAIVoice } from "@mastra/voice-playai";

// Initialize with default configuration (uses PLAYAI_API_KEY environment variable and PLAYAI_USER_ID environment variable)
const voice = new PlayAIVoice();

// Initialize with custom configuration
const voice = new PlayAIVoice({
  speechModel: {
    name: 'PlayDialog',
    apiKey: process.env.PLAYAI_API_KEY,
    userId: process.env.PLAYAI_USER_ID
  },
  speaker: 'Angelo' // Default voice
});

// Convert text to speech with a specific voice
const audioStream = await voice.speak("Hello, world!", {
  speaker: 's3://voice-cloning-zero-shot/b27bc13e-996f-4841-b584-4d35801aea98/original/manifest.json' // Dexter voice
});
```

## コンストラクターパラメータ

### PlayAIConfig

## メソッド

### speak()

設定された音声モデルと声を使用してテキストを音声に変換します。

戻り値: `Promise<NodeJS.ReadableStream>`。

### getSpeakers()

利用可能な音声オプションの配列を返します。各ノードには以下が含まれます:

### listen()

このメソッドはPlayAIではサポートされておらず、エラーをスローします。PlayAIは音声認識機能を提供していません。

## メモ

- PlayAIは認証にAPIキーとユーザーIDの両方が必要です
- サービスは「PlayDialog」と「Play3.0-mini」の2つのモデルを提供しています
- 各音声には、APIコールを行う際に使用する必要があるユニークなS3マニフェストIDがあります

---
title: "リファレンス: Sarvam Voice | Voice Providers | Mastra Docs"
description: "Sarvamクラスのドキュメントで、テキストから音声への変換と音声からテキストへの変換機能を提供します。"
---

# Sarvam

[JA] Source: https://mastra.ai/ja/reference/voice/sarvam

MastraのSarvamVoiceクラスは、Sarvam AIモデルを使用してテキスト読み上げと音声認識機能を提供します。

## 使用例

```typescript
import { SarvamVoice } from "@mastra/voice-sarvam";

// 環境変数を使用してデフォルト設定で初期化
const voice = new SarvamVoice();

// または特定の設定で初期化
const voiceWithConfig = new SarvamVoice({
  speechModel: {
    model: "bulbul:v1",
    apiKey: process.env.SARVAM_API_KEY!,
    language: "en-IN",
    properties: {
      pitch: 0,
      pace: 1.65,
      loudness: 1.5,
      speech_sample_rate: 8000,
      enable_preprocessing: false,
      eng_interpolation_wt: 123,
    },
  },
  listeningModel: {
    model: "saarika:v2",
    apiKey: process.env.SARVAM_API_KEY!,
    languageCode: "en-IN",
    filetype: "wav",
  },
  speaker: "meera", // デフォルトの声
});

// テキストを音声に変換
const audioStream = await voice.speak("こんにちは、どのようにお手伝いできますか?");

// 音声をテキストに変換
const text = await voice.listen(audioStream, {
  filetype: "wav",
});
```

### Sarvam API ドキュメント

- https://docs.sarvam.ai/api-reference-docs/endpoints/text-to-speech

## 設定

### コンストラクタオプション

### SarvamVoiceConfig

### SarvamListenOptions

## メソッド

### speak()

Sarvamのテキスト読み上げモデルを使用して、テキストを音声に変換します。

戻り値: `Promise<NodeJS.ReadableStream>`

### listen()

Sarvamの音声認識モデルを使用して音声を文字起こしします。

戻り値: `Promise<string>`

### getSpeakers()

利用可能なボイスオプションの配列を返します。

戻り値: `Promise<Array<{ voiceId: string }>>`

## メモ

- APIキーは、コンストラクタオプションまたは`SARVAM_API_KEY`環境変数を介して提供できます
- APIキーが提供されていない場合、コンストラクタはエラーをスローします
- サービスは`https://api.sarvam.ai`でSarvam AI APIと通信します
- 
オーディオはバイナリオーディオデータを含むストリームとして返されます - 音声認識はmp3およびwavオーディオフォーマットをサポートしています --- title: "リファレンス: Speechify Voice | Voice Providers | Mastra Docs" description: "Speechify音声実装のドキュメントで、テキスト読み上げ機能を提供します。" --- # Speechify [JA] Source: https://mastra.ai/ja/reference/voice/speechify MastraにおけるSpeechifyの音声実装は、SpeechifyのAPIを使用してテキスト読み上げ機能を提供します。 ## 使用例 ```typescript import { SpeechifyVoice } from "@mastra/voice-speechify"; // デフォルトの設定で初期化(SPEECHIFY_API_KEY 環境変数を使用) const voice = new SpeechifyVoice(); // カスタム設定で初期化 const voice = new SpeechifyVoice({ speechModel: { name: 'simba-english', apiKey: 'your-api-key' }, speaker: 'george' // デフォルトの声 }); // テキストを音声に変換 const audioStream = await voice.speak("Hello, world!", { speaker: 'henry', // デフォルトの声を上書き }); ``` ## コンストラクターパラメーター ### SpeechifyConfig ## メソッド ### speak() 設定された音声モデルと声を使用してテキストを音声に変換します。 戻り値: `Promise` ### getSpeakers() 利用可能な音声オプションの配列を返します。各ノードには以下が含まれます: ### listen() このメソッドはSpeechifyではサポートされておらず、エラーをスローします。Speechifyは音声認識機能を提供していません。 ## メモ - Speechifyは認証にAPIキーを必要とします - デフォルトモデルは「simba-english」です - 音声からテキストへの機能はサポートされていません - 追加のオーディオストリームオプションは、speak()メソッドのoptionsパラメータを通じて渡すことができます --- title: "リファレンス: voice.addInstructions() | 音声プロバイダー | Mastra ドキュメント" description: "音声プロバイダーで利用可能なaddInstructions()メソッドのドキュメント。音声モデルの動作を導くための指示を追加します。" --- # voice.addInstructions() [JA] Source: https://mastra.ai/ja/reference/voice/voice.addInstructions `addInstructions()` メソッドは、リアルタイムの対話中にモデルの動作を導くための指示をボイスプロバイダーに装備します。これは特に、会話全体でコンテキストを維持するリアルタイムボイスプロバイダーに役立ちます。 ## 使用例 ```typescript import { OpenAIRealtimeVoice } from "@mastra/voice-openai-realtime"; import { Agent } from "@mastra/core/agent"; import { openai } from "@ai-sdk/openai"; // Initialize a real-time voice provider const voice = new OpenAIRealtimeVoice({ realtimeConfig: { model: "gpt-4o-mini-realtime", apiKey: process.env.OPENAI_API_KEY, }, }); // Create an agent with the voice provider const agent = new Agent({ name: "Customer Support Agent", instructions: "You are a helpful customer support agent for a software company.", model: openai("gpt-4o"), voice, }); // Add additional instructions to the voice provider voice.addInstructions(` When speaking to customers: - Always introduce yourself as the customer support agent - Speak clearly and concisely - Ask clarifying questions when needed - Summarize the conversation at the end `); // Connect to the real-time service await voice.connect(); ``` ## パラメータ
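使用例から推定される最小シグネチャの目安は次のとおりです(引数名 `instructions` は説明のための仮のものです):

```typescript
addInstructions(instructions: string): void
```

`instructions` には、音声モデルの動作を導く指示テキストを文字列として渡します。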
## 戻り値 このメソッドは値を返しません。 ## 注意事項 - 指示は、音声インタラクションに関連して明確、具体的、かつ適切である場合に最も効果的です - このメソッドは主に会話のコンテキストを維持するリアルタイム音声プロバイダーで使用されます - 指示をサポートしていない音声プロバイダーで呼び出された場合、警告をログに記録し、何も実行しません - このメソッドで追加された指示は、通常、関連するエージェントによって提供される指示と組み合わされます - 最良の結果を得るには、会話を開始する前(`connect()`を呼び出す前)に指示を追加してください - `addInstructions()`を複数回呼び出すと、プロバイダーの実装によって、既存の指示が置き換えられるか追加されるかが異なります --- title: "リファレンス: voice.addTools() | 音声プロバイダー | Mastra ドキュメント" description: "音声プロバイダーで利用可能なaddTools()メソッドのドキュメント。音声モデルに関数呼び出し機能を提供します。" --- # voice.addTools() [JA] Source: https://mastra.ai/ja/reference/voice/voice.addTools `addTools()`メソッドは、リアルタイムの対話中にモデルが呼び出せるツール(関数)を音声プロバイダーに装備します。これにより、音声アシスタントは情報の検索、計算の実行、外部システムとの対話などのアクションを実行できるようになります。 ## 使用例 ```typescript import { OpenAIRealtimeVoice } from "@mastra/voice-openai-realtime"; import { createTool } from "@mastra/core/tools"; import { z } from "zod"; // Define tools const weatherTool = createTool({ id: "getWeather", description: "Get the current weather for a location", inputSchema: z.object({ location: z.string().describe("The city and state, e.g. San Francisco, CA"), }), outputSchema: z.object({ message: z.string(), }), execute: async ({ context }) => { // Fetch weather data from an API const response = await fetch(`https://api.weather.com?location=${encodeURIComponent(context.location)}`); const data = await response.json(); return { message: `The current temperature in ${context.location} is ${data.temperature}°F with ${data.conditions}.` }; }, }); // Initialize a real-time voice provider const voice = new OpenAIRealtimeVoice({ realtimeConfig: { model: "gpt-4o-mini-realtime", apiKey: process.env.OPENAI_API_KEY, }, }); // Add tools to the voice provider voice.addTools({ getWeather: weatherTool, }); // Connect to the real-time service await voice.connect(); ``` ## パラメータ
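使用例から推定されるシグネチャのスケッチです(引数の型は仮のもので、`Tool` 型を `@mastra/core/tools` から利用できることを前提としています):

```typescript
addTools(tools: Record<string, Tool>): void
```

キーがツール名、値がMastraツール実装であるオブジェクトを渡します。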
## 戻り値 このメソッドは値を返しません。 ## 注意事項 - ツールはMastraツールフォーマットに従い、名前、説明、入力スキーマ、実行関数を含む必要があります - このメソッドは主に関数呼び出しをサポートするリアルタイム音声プロバイダーで使用されます - ツールをサポートしていない音声プロバイダーで呼び出された場合、警告をログに記録し何も実行しません - このメソッドで追加されたツールは、通常、関連するエージェントによって提供されるツールと組み合わせて使用されます - 最良の結果を得るには、会話を開始する前(`connect()`を呼び出す前)にツールを追加してください - 音声プロバイダーは、モデルがツールを使用することを決定した際に、ツールハンドラーの呼び出しを自動的に処理します - `addTools()`を複数回呼び出すと、プロバイダーの実装によって、既存のツールが置き換えられるか、マージされる場合があります --- title: "リファレンス: voice.answer() | Voice Providers | Mastra Docs" description: "リアルタイム音声プロバイダーで利用可能な answer() メソッドのドキュメントで、音声プロバイダーに応答を生成させるトリガーです。" --- # voice.answer() [JA] Source: https://mastra.ai/ja/reference/voice/voice.answer `answer()` メソッドは、リアルタイムの音声プロバイダーでAIに応答を生成させるために使用されます。このメソッドは、ユーザー入力を受け取った後にAIに応答を明示的に指示する必要がある音声対音声の会話で特に役立ちます。 ## 使用例 ```typescript import { OpenAIRealtimeVoice } from "@mastra/voice-openai-realtime"; import { getMicrophoneStream } from "@mastra/node-audio"; import Speaker from "@mastra/node-speaker"; const speaker = new Speaker({ sampleRate: 24100, // Audio sample rate in Hz - standard for high-quality audio on MacBook Pro channels: 1, // Mono audio output (as opposed to stereo which would be 2) bitDepth: 16, // Bit depth for audio quality - CD quality standard (16-bit resolution) }); // Initialize a real-time voice provider const voice = new OpenAIRealtimeVoice({ realtimeConfig: { model: "gpt-4o", apiKey: process.env.OPENAI_API_KEY, }, speaker: "alloy", // Default voice }); // Connect to the real-time service await voice.connect(); // Register event listener for responses voice.on("speaker", (stream) => { // Handle audio response stream.pipe(speaker); }); // Send user audio input const microphoneStream = getMicrophoneStream(); await voice.send(microphoneStream); // Trigger the AI to respond await voice.answer(); ``` ## パラメータ
", description: "レスポンスのためのプロバイダー固有のオプション", isOptional: true, } ]} /> ## 戻り値 レスポンスがトリガーされたときに解決される`Promise`を返します。 ## 注意事項 - このメソッドは、音声対音声機能をサポートするリアルタイム音声プロバイダーでのみ実装されています - このメソッドが機能をサポートしていない音声プロバイダーで呼び出された場合、警告をログに記録して即座に解決します - 応答音声は通常、直接返されるのではなく、「speaking」イベントを通じて出力されます - サポートしているプロバイダーの場合、AIに生成させる代わりに特定の応答を送信するためにこのメソッドを使用できます - このメソッドは一般的に、会話のフローを作成するために`send()`と組み合わせて使用されます --- title: "リファレンス: voice.close() | 音声プロバイダー | Mastra ドキュメント" description: "リアルタイム音声サービスから切断するための、音声プロバイダーで利用可能なclose()メソッドのドキュメント。" --- # voice.close() [JA] Source: https://mastra.ai/ja/reference/voice/voice.close `close()` メソッドはリアルタイム音声サービスから切断し、リソースをクリーンアップします。これは音声セッションを適切に終了し、リソースリークを防ぐために重要です。 ## 使用例 ```typescript import { OpenAIRealtimeVoice } from "@mastra/voice-openai-realtime"; import { getMicrophoneStream } from "@mastra/node-audio"; // Initialize a real-time voice provider const voice = new OpenAIRealtimeVoice({ realtimeConfig: { model: "gpt-4o-mini-realtime", apiKey: process.env.OPENAI_API_KEY, }, }); // Connect to the real-time service await voice.connect(); // Start a conversation voice.speak("Hello, I'm your AI assistant!"); // Stream audio from a microphone const microphoneStream = getMicrophoneStream(); voice.send(microphoneStream); // When the conversation is complete setTimeout(() => { // Close the connection and clean up resources voice.close(); console.log("Voice session ended"); }, 60000); // End after 1 minute ``` ## パラメータ このメソッドはパラメータを受け付けません。 ## 戻り値 このメソッドは値を返しません。 ## 注意事項 - リアルタイム音声セッションが終了したら、リソースを解放するために必ず`close()`を呼び出してください - `close()`を呼び出した後、新しいセッションを開始するには再度`connect()`を呼び出す必要があります - このメソッドは主に、永続的な接続を維持するリアルタイム音声プロバイダーで使用されます - リアルタイム接続をサポートしていない音声プロバイダーで呼び出された場合、警告をログに記録するだけで何も実行しません - 接続を閉じないと、リソースリークや音声サービスプロバイダーとの潜在的な課金問題につながる可能性があります --- title: "リファレンス: voice.connect() | Voice Providers | Mastra Docs" description: "リアルタイム音声プロバイダーで利用可能なconnect()メソッドのドキュメントで、音声間通信の接続を確立します。" --- # voice.connect() [JA] Source: https://mastra.ai/ja/reference/voice/voice.connect `connect()` メソッドは、リアルタイムの音声対音声通信のために WebSocket または WebRTC 接続を確立します。このメソッドは、`send()` や `answer()` などの他のリアルタイム機能を使用する前に呼び出す必要があります。 ## 使用例 ```typescript import { OpenAIRealtimeVoice } from "@mastra/voice-openai-realtime"; import Speaker from "@mastra/node-speaker"; const speaker = new Speaker({ sampleRate: 24100, // MacBook Proでの高品質オーディオの標準であるHz単位のオーディオサンプルレート channels: 1, // モノラルオーディオ出力(ステレオの場合は2) bitDepth: 16, // オーディオ品質のビット深度 - CD品質標準(16ビット解像度) }); // リアルタイム音声プロバイダーを初期化 const voice = new OpenAIRealtimeVoice({ realtimeConfig: { model: "gpt-4o-mini-realtime", apiKey: process.env.OPENAI_API_KEY, options: { sessionConfig: { turn_detection: { type: "server_vad", threshold: 0.6, silence_duration_ms: 1200, }, }, }, }, speaker: "alloy", // デフォルトの音声 }); // リアルタイムサービスに接続 await voice.connect(); // これでリアルタイム機能を使用できます voice.on("speaker", (stream) => { stream.pipe(speaker); }); // 接続オプション付き await voice.connect({ timeout: 10000, // 10秒のタイムアウト reconnect: true, }); ``` ## パラメータ ", description: "プロバイダー固有の接続オプション", isOptional: true, } ]} /> ## 戻り値 接続が正常に確立されると解決される`Promise`を返します。 ## プロバイダー固有のオプション 各リアルタイム音声プロバイダーは、`connect()` メソッドに対して異なるオプションをサポートする場合があります: ### OpenAI リアルタイム ## CompositeVoiceとの使用 `CompositeVoice`を使用する場合、`connect()`メソッドは設定されたリアルタイムプロバイダーに委任されます: ```typescript import { CompositeVoice } from "@mastra/core/voice"; import { OpenAIRealtimeVoice } from "@mastra/voice-openai-realtime"; const realtimeVoice = new OpenAIRealtimeVoice(); const voice = new CompositeVoice({ realtimeProvider: realtimeVoice, }); // 
これはOpenAIRealtimeVoiceプロバイダーを使用します await voice.connect(); ``` ## メモ - このメソッドは、音声から音声への機能をサポートするリアルタイム音声プロバイダーによってのみ実装されています - この機能をサポートしない音声プロバイダーで呼び出された場合、警告を記録し、即座に解決されます - `send()` や `answer()` などの他のリアルタイムメソッドを使用する前に、接続を確立する必要があります - 音声インスタンスの使用が終わったら、`close()` を呼び出してリソースを適切にクリーンアップしてください - プロバイダーによっては、実装に応じて接続が失われた際に自動的に再接続することがあります - 接続エラーは通常、例外としてスローされ、キャッチして処理する必要があります ## 関連メソッド - [voice.send()](./voice.send) - 音声プロバイダーに音声データを送信します - [voice.answer()](./voice.answer) - 音声プロバイダーに応答を促します - [voice.close()](./voice.close) - リアルタイムサービスから切断します - [voice.on()](./voice.on) - 音声イベントのためのイベントリスナーを登録します --- title: "リファレンス:音声イベント | 音声プロバイダー | Mastra ドキュメント" description: "音声プロバイダーから発信されるイベントのドキュメント、特にリアルタイム音声インタラクションに関するもの。" --- # 音声イベント [JA] Source: https://mastra.ai/ja/reference/voice/voice.events 音声プロバイダーはリアルタイムの音声インタラクション中に様々なイベントを発生させます。これらのイベントは[voice.on()](./voice.on)メソッドを使用してリッスンすることができ、インタラクティブな音声アプリケーションを構築する上で特に重要です。 ## 一般的なイベント これらのイベントは、リアルタイム音声プロバイダー間で一般的に実装されています: ## 注意事項 - すべてのイベントがすべての音声プロバイダーでサポートされているわけではありません - ペイロード構造はプロバイダーによって異なる場合があります - リアルタイムではないプロバイダーの場合、これらのイベントの多くは発生しません - イベントは会話の状態に応答するインタラクティブなUIを構築するのに役立ちます - イベントリスナーが不要になった場合は、[voice.off()](./voice.off)メソッドを使用してリスナーを削除することを検討してください --- title: "リファレンス: voice.getSpeakers() | 音声プロバイダー | Mastra ドキュメント" description: "音声プロバイダーで利用可能なgetSpeakers()メソッドのドキュメント。利用可能な音声オプションを取得します。" --- import { Tabs } from "nextra/components"; # voice.getSpeakers() [JA] Source: https://mastra.ai/ja/reference/voice/voice.getSpeakers `getSpeakers()` メソッドは、音声プロバイダーから利用可能な音声オプション(スピーカー)のリストを取得します。これにより、アプリケーションはユーザーに音声の選択肢を提示したり、異なるコンテキストに最も適した音声をプログラムで選択したりすることができます。 ## 使用例 ```typescript import { OpenAIVoice } from "@mastra/voice-openai"; import { ElevenLabsVoice } from "@mastra/voice-elevenlabs"; // Initialize voice providers const openaiVoice = new OpenAIVoice(); const elevenLabsVoice = new ElevenLabsVoice({ apiKey: process.env.ELEVENLABS_API_KEY }); // Get available speakers from OpenAI const openaiSpeakers = await openaiVoice.getSpeakers(); console.log("OpenAI voices:", openaiSpeakers); // Example output: [{ voiceId: "alloy" }, { voiceId: "echo" }, { voiceId: "fable" }, ...] // Get available speakers from ElevenLabs const elevenLabsSpeakers = await elevenLabsVoice.getSpeakers(); console.log("ElevenLabs voices:", elevenLabsSpeakers); // Example output: [{ voiceId: "21m00Tcm4TlvDq8ikWAM", name: "Rachel" }, ...] 
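// 注意: プロバイダーによっては getSpeakers() がAPIへの問い合わせを伴うため、
// 一覧を頻繁に表示する場合は結果のキャッシュを検討してください(後述の「注意事項」を参照)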
// Use a specific voice for speech const text = "Hello, this is a test of different voices."; await openaiVoice.speak(text, { speaker: openaiSpeakers[2].voiceId }); await elevenLabsVoice.speak(text, { speaker: elevenLabsSpeakers[0].voiceId }); ``` ## パラメータ このメソッドはパラメータを受け付けません。 ## 戻り値 >", type: "Promise", description: "音声オプションの配列を解決するプロミス。各オプションには少なくともvoiceIdプロパティが含まれ、プロバイダー固有の追加メタデータが含まれる場合があります。", } ]} /> ## プロバイダー固有のメタデータ 異なる音声プロバイダーは、それぞれの音声に対して異なるメタデータを返します: ## 注意事項 - 利用可能な音声はプロバイダーによって大きく異なります - 一部のプロバイダーでは、音声の完全なリストを取得するために認証が必要な場合があります - プロバイダーがこのメソッドをサポートしていない場合、デフォルトの実装では空の配列が返されます - パフォーマンス上の理由から、リストを頻繁に表示する必要がある場合は結果をキャッシュすることを検討してください - `voiceId`プロパティはすべてのプロバイダーで確実に存在しますが、追加のメタデータは異なります --- title: "リファレンス: voice.listen() | Voice Providers | Mastra Docs" description: "すべてのMastra音声プロバイダーで利用可能なlisten()メソッドのドキュメントで、音声をテキストに変換します。" --- # voice.listen() [JA] Source: https://mastra.ai/ja/reference/voice/voice.listen `listen()` メソッドは、すべての Mastra 音声プロバイダーで利用可能なコア機能で、音声をテキストに変換します。音声ストリームを入力として受け取り、書き起こされたテキストを返します。 ## 使用例 ```typescript import { OpenAIVoice } from "@mastra/voice-openai"; import { getMicrophoneStream } from "@mastra/node-audio"; import { createReadStream } from "fs"; import path from "path"; // Initialize a voice provider const voice = new OpenAIVoice({ listeningModel: { name: "whisper-1", apiKey: process.env.OPENAI_API_KEY, }, }); // Basic usage with a file stream const audioFilePath = path.join(process.cwd(), "audio.mp3"); const audioStream = createReadStream(audioFilePath); const transcript = await voice.listen(audioStream, { filetype: "mp3", }); console.log("Transcribed text:", transcript); // Using a microphone stream const microphoneStream = getMicrophoneStream(); // Assume this function gets audio input const transcription = await voice.listen(microphoneStream); // With provider-specific options const transcriptWithOptions = await voice.listen(audioStream, { language: "en", prompt: "This is a conversation about artificial intelligence.", }); ``` ## パラメーター ## 戻り値 次のいずれかを返します: - `Promise`: 転写されたテキストに解決されるプロミス - `Promise`: 転写されたテキストのストリームに解決されるプロミス(ストリーミング転写用) - `Promise`: テキストを直接返すのではなく「書き込み」イベントを発するリアルタイムプロバイダー用 ## プロバイダー固有のオプション 各音声プロバイダーは、実装に特有の追加オプションをサポートしている場合があります。以下はいくつかの例です: ### OpenAI ### Google ### Deepgram ## リアルタイム音声プロバイダー `OpenAIRealtimeVoice`のようなリアルタイム音声プロバイダーを使用する場合、`listen()`メソッドは異なる動作をします: - 文字起こしされたテキストを返す代わりに、文字起こしされたテキストを含む'writing'イベントを発行します - 文字起こしを受け取るためにイベントリスナーを登録する必要があります ```typescript import { OpenAIRealtimeVoice } from "@mastra/voice-openai-realtime"; import { getMicrophoneStream } from "@mastra/node-audio"; const voice = new OpenAIRealtimeVoice(); await voice.connect(); // Register event listener for transcription voice.on("writing", ({ text, role }) => { console.log(`${role}: ${text}`); }); // This will emit 'writing' events instead of returning text const microphoneStream = getMicrophoneStream(); await voice.listen(microphoneStream); ``` ## CompositeVoiceとの使用 `CompositeVoice`を使用する場合、`listen()`メソッドは設定されたリスニングプロバイダーに委任されます: ```typescript import { CompositeVoice } from "@mastra/core/voice"; import { OpenAIVoice } from "@mastra/voice-openai"; import { PlayAIVoice } from "@mastra/voice-playai"; const voice = new CompositeVoice({ listenProvider: new OpenAIVoice(), speakProvider: new PlayAIVoice(), }); // これはOpenAIVoiceプロバイダーを使用します const transcript = await voice.listen(audioStream); ``` ## メモ - すべての音声プロバイダーが音声認識機能をサポートしているわけではありません(例:PlayAI、Speechify) - `listen()` の動作はプロバイダーによってわずかに異なる場合がありますが、すべての実装は同じ基本インターフェースに従います - 
リアルタイム音声プロバイダーを使用する場合、メソッドは直接テキストを返さず、代わりに「writing」イベントを発生させることがあります - サポートされるオーディオフォーマットはプロバイダーによります。一般的なフォーマットには MP3、WAV、M4A があります - 一部のプロバイダーはストリーミング文字起こしをサポートしており、文字起こしされると同時にテキストが返されます - 最良のパフォーマンスを得るために、使用が終わったらオーディオストリームを閉じるか終了することを検討してください ## 関連メソッド - [voice.speak()](./voice.speak) - テキストを音声に変換します - [voice.send()](./voice.send) - 音声プロバイダーにリアルタイムで音声データを送信します - [voice.on()](./voice.on) - 音声イベントのためのイベントリスナーを登録します --- title: "リファレンス: voice.off() | 音声プロバイダー | Mastra ドキュメント" description: "音声プロバイダーで利用可能なoff()メソッドのドキュメント。音声イベントのイベントリスナーを削除します。" --- # voice.off() [JA] Source: https://mastra.ai/ja/reference/voice/voice.off `off()` メソッドは、以前に `on()` メソッドで登録されたイベントリスナーを削除します。これは、リアルタイム音声機能を持つ長時間実行アプリケーションでリソースをクリーンアップし、メモリリークを防ぐのに特に役立ちます。 ## 使用例 ```typescript import { OpenAIRealtimeVoice } from "@mastra/voice-openai-realtime"; import chalk from "chalk"; // Initialize a real-time voice provider const voice = new OpenAIRealtimeVoice({ realtimeConfig: { model: "gpt-4o-mini-realtime", apiKey: process.env.OPENAI_API_KEY, }, }); // Connect to the real-time service await voice.connect(); // Define the callback function const writingCallback = ({ text, role }) => { if (role === 'user') { process.stdout.write(chalk.green(text)); } else { process.stdout.write(chalk.blue(text)); } }; // Register event listener voice.on("writing", writingCallback); // Later, when you want to remove the listener voice.off("writing", writingCallback); ``` ## パラメータ
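MastraVoice基底クラスのリファレンスに示されているものと同じシグネチャです(ジェネリックパラメータ `E` の制約部分は推定で補っています):

```typescript
off<E extends keyof VoiceEventMap | string>(
  event: E,
  callback: (data: E extends keyof VoiceEventMap ? VoiceEventMap[E] : unknown) => void,
): void
```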
## 戻り値 このメソッドは値を返しません。 ## メモ - `off()`に渡されるコールバックは、`on()`に渡されたのと同じ関数参照でなければなりません - コールバックが見つからない場合、このメソッドは何も効果を持ちません - このメソッドは主に、イベントベースの通信をサポートするリアルタイム音声プロバイダーで使用されます - イベントをサポートしていない音声プロバイダーで呼び出された場合、警告をログに記録し、何も行いません - イベントリスナーを削除することは、長時間実行されるアプリケーションでのメモリリークを防ぐために重要です --- title: "リファレンス: voice.on() | Voice Providers | Mastra Docs" description: "音声プロバイダーで利用可能な on() メソッドのドキュメントで、音声イベントのイベントリスナーを登録します。" --- # voice.on() [JA] Source: https://mastra.ai/ja/reference/voice/voice.on `on()` メソッドは、さまざまな音声イベントのためのイベントリスナーを登録します。これは、音声をリアルタイムで提供するプロバイダーにとって特に重要であり、イベントは文字起こしされたテキスト、音声応答、その他の状態変化を伝えるために使用されます。 ## 使用例 ```typescript import { OpenAIRealtimeVoice } from "@mastra/voice-openai-realtime"; import Speaker from "@mastra/node-speaker"; import chalk from "chalk"; // Initialize a real-time voice provider const voice = new OpenAIRealtimeVoice({ realtimeConfig: { model: "gpt-4o-mini-realtime", apiKey: process.env.OPENAI_API_KEY, }, }); // Connect to the real-time service await voice.connect(); // Register event listener for transcribed text voice.on("writing", (event) => { if (event.role === 'user') { process.stdout.write(chalk.green(event.text)); } else { process.stdout.write(chalk.blue(event.text)); } }); // Listen for audio data and play it const speaker = new Speaker({ sampleRate: 24100, channels: 1, bitDepth: 16, }); voice.on("speaker", (stream) => { stream.pipe(speaker); }); // Register event listener for errors voice.on("error", ({ message, code, details }) => { console.error(`Error ${code}: ${message}`, details); }); ``` ## パラメータ
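`off()` と対になる、MastraVoice基底クラスのリファレンスに示されているシグネチャです(ジェネリックパラメータ `E` の制約部分は推定で補っています):

```typescript
on<E extends keyof VoiceEventMap | string>(
  event: E,
  callback: (data: E extends keyof VoiceEventMap ? VoiceEventMap[E] : unknown) => void,
): void
```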
## 戻り値 このメソッドは値を返しません。 ## イベント イベントとそのペイロード構造の包括的なリストについては、[Voice Events](./voice.events)のドキュメントを参照してください。 一般的なイベントには以下が含まれます: - `speaking`: 音声データが利用可能になったときに発行されます - `speaker`: 音声出力にパイプできるストリームとともに発行されます - `writing`: テキストが文字起こしまたは生成されたときに発行されます - `error`: エラーが発生したときに発行されます - `tool-call-start`: ツールが実行される直前に発行されます - `tool-call-result`: ツールの実行が完了したときに発行されます 異なる音声プロバイダーは、異なるイベントセットと様々なペイロード構造をサポートしている場合があります。 ## CompositeVoiceとの使用 `CompositeVoice`を使用する場合、`on()`メソッドは設定されたリアルタイムプロバイダーに委任されます: ```typescript import { CompositeVoice } from "@mastra/core/voice"; import { OpenAIRealtimeVoice } from "@mastra/voice-openai-realtime"; import Speaker from "@mastra/node-speaker"; const speaker = new Speaker({ sampleRate: 24100, // MacBook Proでの高品質オーディオの標準であるHz単位のオーディオサンプルレート channels: 1, // モノラルオーディオ出力(ステレオの場合は2) bitDepth: 16, // オーディオ品質のビット深度 - CD品質の標準(16ビット解像度) }); const realtimeVoice = new OpenAIRealtimeVoice(); const voice = new CompositeVoice({ realtimeProvider: realtimeVoice, }); // リアルタイムサービスに接続 await voice.connect(); // これにより、OpenAIRealtimeVoiceプロバイダーにイベントリスナーが登録されます voice.on("speaker", (stream) => { stream.pipe(speaker) }); ``` ## メモ - このメソッドは主にイベントベースの通信をサポートするリアルタイム音声プロバイダーで使用されます - イベントをサポートしていない音声プロバイダーで呼び出された場合、警告をログに記録し何も実行しません - イベントリスナーは、イベントを発生させる可能性のあるメソッドを呼び出す前に登録する必要があります - イベントリスナーを削除するには、同じイベント名とコールバック関数を指定して[voice.off()](./voice.off)メソッドを使用します - 同じイベントに対して複数のリスナーを登録できます - コールバック関数はイベントタイプによって異なるデータを受け取ります([Voice Events](./voice.events)を参照) - パフォーマンスを最適化するために、不要になったイベントリスナーは削除することを検討してください --- title: "リファレンス: voice.send() | Voice Providers | Mastra Docs" description: "リアルタイム音声プロバイダーで利用可能なsend()メソッドのドキュメントで、音声データをストリーミングして継続的に処理します。" --- # voice.send() [JA] Source: https://mastra.ai/ja/reference/voice/voice.send `send()` メソッドは、音声プロバイダーにリアルタイムで音声データをストリーミングし、継続的に処理します。このメソッドは、リアルタイムの音声対音声の会話に不可欠で、マイク入力を直接AIサービスに送信することができます。 ## 使用例 ```typescript import { OpenAIRealtimeVoice } from "@mastra/voice-openai-realtime"; import Speaker from "@mastra/node-speaker"; import { getMicrophoneStream } from "@mastra/node-audio"; const speaker = new Speaker({ sampleRate: 24100, // Audio sample rate in Hz - standard for high-quality audio on MacBook Pro channels: 1, // Mono audio output (as opposed to stereo which would be 2) bitDepth: 16, // Bit depth for audio quality - CD quality standard (16-bit resolution) }); // Initialize a real-time voice provider const voice = new OpenAIRealtimeVoice({ realtimeConfig: { model: "gpt-4o-mini-realtime", apiKey: process.env.OPENAI_API_KEY, }, }); // Connect to the real-time service await voice.connect(); // Set up event listeners for responses voice.on("writing", ({ text, role }) => { console.log(`${role}: ${text}`); }); voice.on("speaker", (stream) => { stream.pipe(speaker) }); // Get microphone stream (implementation depends on your environment) const microphoneStream = getMicrophoneStream(); // Send audio data to the voice provider await voice.send(microphoneStream); // You can also send audio data as Int16Array const audioBuffer = getAudioBuffer(); // Assume this returns Int16Array await voice.send(audioBuffer); ``` ## パラメータ
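MastraVoiceのドキュメントに示されているとおり、シグネチャは次のとおりです:

```typescript
send(audioData: NodeJS.ReadableStream | Int16Array): Promise<void>
```

`audioData` には、マイク入力などのリーダブルストリーム、または `Int16Array` 形式のPCMオーディオを渡します。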
## 戻り値 音声データが音声プロバイダーによって受け入れられたときに解決する`Promise`を返します。 ## 注意事項 - このメソッドは、音声から音声への機能をサポートするリアルタイム音声プロバイダーでのみ実装されています - この機能をサポートしていない音声プロバイダーで呼び出された場合、警告をログに記録して即座に解決します - WebSocket接続を確立するには、`send()`を使用する前に`connect()`を呼び出す必要があります - 音声フォーマットの要件は、特定の音声プロバイダーによって異なります - 継続的な会話では、通常、ユーザーの音声を送信するために`send()`を呼び出し、AIの応答をトリガーするために`answer()`を呼び出します - プロバイダーは通常、音声を処理する際に文字起こしされたテキストを含む「writing」イベントを発行します - AIが応答すると、プロバイダーは音声応答を含む「speaking」イベントを発行します --- title: "リファレンス: voice.speak() | Voice Providers | Mastra Docs" description: "すべてのMastra音声プロバイダーで利用可能なspeak()メソッドのドキュメントで、テキストを音声に変換します。" --- # voice.speak() [JA] Source: https://mastra.ai/ja/reference/voice/voice.speak `speak()` メソッドは、すべての Mastra 音声プロバイダーで利用可能なコア機能で、テキストを音声に変換します。テキスト入力を受け取り、再生または保存できる音声ストリームを返します。 ## 使用例 ```typescript import { OpenAIVoice } from "@mastra/voice-openai"; // 音声プロバイダーを初期化 const voice = new OpenAIVoice({ speaker: "alloy", // デフォルトの音声 }); // デフォルト設定での基本的な使用法 const audioStream = await voice.speak("こんにちは、世界!"); // この特定のリクエストに異なる音声を使用 const audioStreamWithDifferentVoice = await voice.speak("再びこんにちは!", { speaker: "nova", }); // プロバイダー固有のオプションを使用 const audioStreamWithOptions = await voice.speak("オプション付きでこんにちは!", { speaker: "echo", speed: 1.2, // OpenAI固有のオプション }); // テキストストリームを入力として使用 import { Readable } from "stream"; const textStream = Readable.from(["こんにちは", " ストリーム", " から", " です!"]); const audioStreamFromTextStream = await voice.speak(textStream); ``` ## パラメーター ## 戻り値 `Promise` を返します。ここで: - `NodeJS.ReadableStream`: 再生または保存可能な音声データのストリーム - `void`: 音声を直接返すのではなく、イベントを通じて音声を発するリアルタイム音声プロバイダーを使用する場合 ## プロバイダー固有のオプション 各音声プロバイダーは、実装に特有の追加オプションをサポートしている場合があります。以下はいくつかの例です: ### OpenAI ### ElevenLabs ### Google ### Murf ## リアルタイム音声プロバイダー `OpenAIRealtimeVoice`のようなリアルタイム音声プロバイダーを使用する場合、`speak()`メソッドは異なる動作をします: - オーディオストリームを返す代わりに、オーディオデータを含む「speaking」イベントを発生させます - オーディオチャンクを受け取るためにイベントリスナーを登録する必要があります ```typescript import { OpenAIRealtimeVoice } from "@mastra/voice-openai-realtime"; import Speaker from "@mastra/node-speaker"; const speaker = new Speaker({ sampleRate: 24100, // MacBook Proでの高品質オーディオの標準Hzでのオーディオサンプルレート channels: 1, // モノラルオーディオ出力(ステレオの場合は2) bitDepth: 16, // オーディオ品質のビット深度 - CD品質の標準(16ビット解像度) }); const voice = new OpenAIRealtimeVoice(); await voice.connect(); // オーディオチャンクのためのイベントリスナーを登録 voice.on("speaker", (stream) => { // オーディオチャンクを処理(例:再生または保存) stream.pipe(speaker) }); // これにより、ストリームを返す代わりに「speaking」イベントが発生します await voice.speak("こんにちは、これはリアルタイムの音声です!"); ``` ## CompositeVoiceとの使用 `CompositeVoice`を使用する場合、`speak()`メソッドは設定されたスピーキングプロバイダーに委任されます: ```typescript import { CompositeVoice } from "@mastra/core/voice"; import { OpenAIVoice } from "@mastra/voice-openai"; import { PlayAIVoice } from "@mastra/voice-playai"; const voice = new CompositeVoice({ speakProvider: new PlayAIVoice(), listenProvider: new OpenAIVoice(), }); // これはPlayAIVoiceプロバイダーを使用します const audioStream = await voice.speak("Hello, world!"); ``` ## メモ - `speak()` の動作はプロバイダーによってわずかに異なる場合がありますが、すべての実装は同じ基本インターフェースに従います。 - リアルタイム音声プロバイダーを使用する場合、メソッドは直接オーディオストリームを返さずに「speaking」イベントを発生させることがあります。 - テキストストリームが入力として提供される場合、プロバイダーは通常それを処理する前に文字列に変換します。 - 返されるストリームのオーディオ形式はプロバイダーによって異なります。一般的な形式にはMP3、WAV、OGGがあります。 - 最良のパフォーマンスを得るために、使用が終わったらオーディオストリームを閉じるか終了することを検討してください。 --- title: "リファレンス: voice.updateConfig() | 音声プロバイダー | Mastra Docs" description: "音声プロバイダーで利用可能なupdateConfig()メソッドのドキュメント。実行時に音声プロバイダーの設定を更新します。" --- # voice.updateConfig() [JA] Source: https://mastra.ai/ja/reference/voice/voice.updateConfig `updateConfig()` 
メソッドを使用すると、実行時に音声プロバイダーの設定を更新できます。これは、新しいインスタンスを作成せずに、音声設定、APIキー、またはその他のプロバイダー固有のオプションを変更する場合に便利です。 ## 使用例 ```typescript import { OpenAIRealtimeVoice } from "@mastra/voice-openai-realtime"; // Initialize a real-time voice provider const voice = new OpenAIRealtimeVoice({ realtimeConfig: { model: "gpt-4o-mini-realtime", apiKey: process.env.OPENAI_API_KEY, }, speaker: "alloy", }); // Connect to the real-time service await voice.connect(); // Later, update the configuration voice.updateConfig({ voice: "nova", // Change the default voice turn_detection: { type: "server_vad", threshold: 0.5, silence_duration_ms: 1000 } }); // The next speak() call will use the new configuration await voice.speak("Hello with my new voice!"); ``` ## パラメータ
", description: "更新する設定オプション。具体的なプロパティは音声プロバイダーによって異なります。", isOptional: false, } ]} /> ## 戻り値 このメソッドは値を返しません。 ## 設定オプション 異なる音声プロバイダーは異なる設定オプションをサポートしています: ### OpenAI リアルタイム
## 注意事項 - デフォルトの実装では、プロバイダーがこのメソッドをサポートしていない場合は警告がログに記録されます - 設定の更新は通常、進行中の操作ではなく、後続の操作に適用されます - コンストラクタで設定できるすべてのプロパティが実行時に更新できるわけではありません - 具体的な動作はボイスプロバイダーの実装によって異なります - リアルタイムボイスプロバイダーの場合、一部の設定変更ではサービスへの再接続が必要になることがあります --- title: "リファレンス: .after() | ワークフローの構築 | Mastra ドキュメント" description: ワークフローにおける `after()` メソッドのドキュメントで、分岐と統合のパスを可能にします。 --- # .after() [JA] Source: https://mastra.ai/ja/reference/workflows/after `.after()` メソッドは、ワークフローのステップ間に明示的な依存関係を定義し、ワークフローの実行における分岐と統合のパスを可能にします。 ## 使用法 ### 基本的な分岐 ```typescript workflow .step(stepA) .then(stepB) .after(stepA) // stepAが完了した後に新しいブランチを作成 .step(stepC); ``` ### 複数のブランチのマージ ```typescript workflow .step(stepA) .then(stepB) .step(stepC) .then(stepD) .after([stepB, stepD]) // 複数のステップに依存するステップを作成 .step(stepE); ``` ## パラメーター ## 戻り値 ## 例 ### 単一の依存関係 ```typescript workflow .step(fetchData) .then(processData) .after(fetchData) // fetchDataの後に分岐 .step(logData); ``` ### 複数の依存関係(ブランチのマージ) ```typescript workflow .step(fetchUserData) .then(validateUserData) .step(fetchProductData) .then(validateProductData) .after([validateUserData, validateProductData]) // 両方の検証が完了するのを待つ .step(processOrder); ``` ## 関連 - [Branching Paths の例](../../../examples/workflows/branching-paths.mdx) - [Workflow クラスリファレンス](./workflow.mdx) - [ステップリファレンス](./step-class.mdx) - [制御フローガイド](../../workflows/control-flow.mdx#merging-multiple-branches) --- title: ".afterEvent() メソッド | Mastra ドキュメント" description: "イベントベースのサスペンションポイントを作成する Mastra ワークフローの afterEvent メソッドのリファレンス。" --- # afterEvent() [JA] Source: https://mastra.ai/ja/reference/workflows/afterEvent `afterEvent()` メソッドは、特定のイベントが発生するのを待ってから実行を続行するワークフロー内の中断ポイントを作成します。 ## 構文 ```typescript workflow.afterEvent(eventName: string): Workflow ``` ## パラメータ | パラメータ | 型 | 説明 | |-----------|------|-------------| | eventName | string | 待機するイベントの名前。ワークフローの `events` 設定で定義されたイベントと一致する必要があります。 | ## 戻り値 メソッドチェーンのためのワークフローインスタンスを返します。 ## 説明 `afterEvent()` メソッドは、特定の名前付きイベントを待機する自動停止ポイントをワークフロー内に作成するために使用されます。これは、ワークフローが一時停止し、外部イベントが発生するのを待つべきポイントを宣言的に定義する方法です。 `afterEvent()` を呼び出すと、Mastra は次のことを行います: 1. ID `__eventName_event` を持つ特別なステップを作成します 2. このステップはワークフローの実行を自動的に停止します 3. 指定されたイベントが `resumeWithEvent()` を介してトリガーされるまで、ワークフローは停止したままです 4. 
This method is part of Mastra's event-driven workflow capabilities, letting you build workflows that coordinate with external systems or user interactions without implementing the suspension logic manually.

## Usage Notes

- The event specified in `afterEvent()` must be defined in the workflow's `events` configuration with a schema
- The special step that gets created has a predictable ID format: `__eventName_event` (e.g., `__approvalReceived_event`)
- Any step that follows `afterEvent()` can access the event data via `context.inputData.resumedEvent`
- When `resumeWithEvent()` is called, the event data is validated against the schema defined for that event

## Examples

### Basic Usage

```typescript
// Define workflow with events
const workflow = new Workflow({
  name: 'approval-workflow',
  events: {
    approval: {
      schema: z.object({
        approved: z.boolean(),
        approverName: z.string(),
      }),
    },
  },
});

// Build workflow with event suspension point
workflow
  .step(submitRequest)
  .afterEvent('approval') // Workflow suspends here
  .step(processApproval) // This step runs after the event occurs
  .commit();
```

## Related

- [Event-Driven Workflows](./events.mdx)
- [resumeWithEvent()](./resumeWithEvent.mdx)
- [Suspend and Resume](../../workflows/suspend-and-resume.mdx)
- [Workflow class](./workflow.mdx)

---
title: "Reference: Workflow.commit() | Running Workflows | Mastra Docs"
description: Documentation for the `.commit()` method in workflows, which re-initializes the workflow machine with the current step configuration.
---

# Workflow.commit()

[JA] Source: https://mastra.ai/ja/reference/workflows/commit

The `.commit()` method re-initializes the workflow's state machine with the current step configuration.

## Usage

```typescript
workflow
  .step(stepA)
  .then(stepB)
  .commit();
```

## Returns

## Related

- [Branching Paths example](../../../examples/workflows/branching-paths.mdx)
- [Workflow class reference](./workflow.mdx)
- [Step reference](./step-class.mdx)
- [Control Flow guide](../../workflows/control-flow.mdx)

---
title: "Reference: Workflow.createRun() | Running Workflows | Mastra Docs"
description: "Documentation for the `.createRun()` method in workflows, which initializes a new workflow run instance."
---

# Workflow.createRun()

[JA] Source: https://mastra.ai/ja/reference/workflows/createRun

The `.createRun()` method initializes a new workflow run instance. It generates a unique run ID for tracking and returns a start function that begins workflow execution when called.

One reason to use `.createRun()` over `.execute()` is to get a unique run ID for tracking, logging, or subscribing via `.watch()`.

## Usage

```typescript
const { runId, start, watch } = workflow.createRun();

const result = await start();
```

## Returns

Returns an object with:

- `runId`: a unique identifier for tracking this workflow run
- `start`: a function that begins workflow execution when called
- `watch` (`(callback: (record: WorkflowResult) => void) => () => void`): a function that accepts a callback invoked on every transition of the workflow run and returns an unsubscribe function
- `resume` (`({stepId: string, context: Record}) => Promise`): a function that resumes the workflow run from the given step ID and context
- `resumeWithEvent` (`(eventName: string, data: any) => Promise`): a function that resumes the workflow run with the given event name and data

## Error Handling

The start function may throw validation errors if the workflow configuration is invalid:

```typescript
try {
  const { runId, start, watch, resume, resumeWithEvent } = workflow.createRun();
  await start({ triggerData: data });
} catch (error) {
  if (error instanceof ValidationError) {
    // Handle validation errors
    console.log(error.type); // 'circular_dependency' | 'no_terminal_path' | 'unreachable_step'
    console.log(error.details);
  }
}
```
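Putting the returned helpers together, here is a minimal sketch of subscribing to run transitions before starting a run; the `triggerData` shape is illustrative:

```typescript
const { runId, start, watch } = workflow.createRun();

// Subscribe before starting so no transition is missed
const unsubscribe = watch((record) => {
  console.log(`run ${runId} transitioned:`, record);
});

await start({ triggerData: { orderId: "123" } });

// Stop receiving updates once we no longer care
unsubscribe();
```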
## Related

- [Workflow class reference](./workflow.mdx)
- [Step class reference](./step-class.mdx)
- See the [Creating a Workflow](../../../examples/workflows/creating-a-workflow.mdx) example for complete usage

---
title: "Reference: Workflow.else() | Conditional Branching | Mastra Docs"
description: "Documentation for the `.else()` method in Mastra workflows, which creates an alternative branch when an if condition is false."
---

# Workflow.else()

[JA] Source: https://mastra.ai/ja/reference/workflows/else

> Experimental

The `.else()` method creates an alternative branch in the workflow that executes when the preceding `if` condition evaluates to false. This lets workflows follow different paths depending on conditions.

## Usage

```typescript copy showLineNumbers
workflow
  .step(startStep)
  .if(async ({ context }) => {
    const value = context.getStepResult<{ value: number }>('start')?.value;
    return value < 10;
  })
  .then(ifBranchStep)
  .else() // alternative branch when the condition is false
  .then(elseBranchStep)
  .commit();
```

## Parameters

The `else()` method takes no parameters.

## Returns

## Behavior

- The `else()` method must follow an `if()` branch in the workflow definition
- It creates a branch that executes only when the immediately preceding `if` condition evaluates to false
- You can chain multiple steps after `else()` using `.then()`
- You can nest additional `if`/`else` conditions inside an `else` branch

## Error Handling

The `else()` method requires a preceding `if()` statement. Using it without one throws an error:

```typescript
try {
  // This throws an error
  workflow
    .step(someStep)
    .else()
    .then(anotherStep)
    .commit();
} catch (error) {
  console.error(error); // "No active condition found"
}
```

## Related

- [if reference](./if.mdx)
- [then reference](./then.mdx)
- [Control Flow guide](../../workflows/control-flow.mdx)
- [Step Condition reference](./step-condition.mdx)

---
title: "Event-Driven Workflows | Mastra Docs"
description: "Learn how to create event-driven workflows in Mastra with the afterEvent and resumeWithEvent methods."
---

# Event-Driven Workflows

[JA] Source: https://mastra.ai/ja/reference/workflows/events

Mastra provides built-in support for event-driven workflows through the `afterEvent` and `resumeWithEvent` methods. These methods let you create workflows that pause execution while waiting for specific events to occur, then resume once the event data is available.

## Overview

Event-driven workflows are useful in scenarios such as:

- Waiting for an external system to complete processing
- Requiring user approval or input at specific points
- Coordinating asynchronous operations
- Splitting long-running processes across different services

## Defining Events

Before using the event-driven methods, you must define the events your workflow listens for in the workflow configuration:

```typescript
import { Workflow } from '@mastra/core/workflows';
import { z } from 'zod';

const workflow = new Workflow({
  name: 'approval-workflow',
  triggerSchema: z.object({ requestId: z.string() }),
  events: {
    // Define events with their validation schemas
    approvalReceived: {
      schema: z.object({
        approved: z.boolean(),
        approverName: z.string(),
        comment: z.string().optional(),
      }),
    },
    documentUploaded: {
      schema: z.object({
        documentId: z.string(),
        documentType: z.enum(['invoice', 'receipt', 'contract']),
        metadata: z.record(z.string()).optional(),
      }),
    },
  },
});
```

Each event needs a name and a schema that defines the structure of the data expected when the event occurs.

## afterEvent()

The `afterEvent` method creates a suspension point in the workflow that automatically waits for a specific event.

### Syntax

```typescript
workflow.afterEvent(eventName: string): Workflow
```

### Parameters

- `eventName`: the name of the event to wait for (must be defined in the workflow's `events` configuration)

### Returns

Returns the workflow instance for method chaining.

### How It Works

When `afterEvent` is called, Mastra:

1. Creates a special step with the ID `__eventName_event`
2. Configures this step to automatically suspend workflow execution
3. Sets up the continuation point for after the event is received

### Usage Example

```typescript
workflow
  .step(initialProcessStep)
  .afterEvent('approvalReceived') // The workflow suspends here
  .step(postApprovalStep) // This runs after the event is received
  .then(finalStep)
  .commit();
```

## resumeWithEvent()

The `resumeWithEvent` method resumes a suspended workflow by providing data for the specific event it is waiting for.

### Syntax

```typescript
run.resumeWithEvent(eventName: string, data: any): Promise
```

### Parameters

- `eventName`: the name of the event being triggered
- `data`: the event data to provide (must conform to the schema defined for that event)

### Returns

Returns a Promise that resolves with the workflow execution result after resumption.

### How It Works

When `resumeWithEvent` is called, Mastra:

1. Validates the event data against the schema defined for that event
2. Loads the workflow snapshot
3. Updates the context with the event data
4. Resumes execution from the event step
5. Continues workflow execution with the subsequent steps

### Usage Example

```typescript
// Create a workflow run
const run = workflow.createRun();

// Start the workflow
await run.start({ triggerData: { requestId: 'req-123' } });

// Later, when the event occurs:
const result = await run.resumeWithEvent('approvalReceived', {
  approved: true,
  approverName: 'John Doe',
  comment: 'Looks good to me!'
});

console.log(result.results);
```

## Accessing Event Data

When a workflow is resumed with event data, that data is available in the step context as `context.inputData.resumedEvent`:

```typescript
const processApprovalStep = new Step({
  id: 'processApproval',
  execute: async ({ context }) => {
    // Access the event data
    const eventData = context.inputData.resumedEvent;

    return {
      processingResult: `Processed approval from ${eventData.approverName}`,
      wasApproved: eventData.approved,
    };
  },
});
```

## Multiple Events

You can create workflows that wait for multiple different events at various points:

```typescript
workflow
  .step(createRequest)
  .afterEvent('approvalReceived')
  .step(processApproval)
  .afterEvent('documentUploaded')
  .step(processDocument)
  .commit();
```

When resuming a workflow with multiple event suspension points, you must provide the correct event name and data for the current suspension point.

## Practical Example

This example shows a complete workflow that requires both approval and document upload:

```typescript
import { Workflow, Step } from '@mastra/core/workflows';
import { z } from 'zod';

// Define steps
const createRequest = new Step({
  id: 'createRequest',
  execute: async () => ({ requestId: `req-${Date.now()}` }),
});

const processApproval = new Step({
  id: 'processApproval',
  execute: async ({ context }) => {
    const approvalData = context.inputData.resumedEvent;
    return {
      approved: approvalData.approved,
      approver: approvalData.approverName,
    };
  },
});

const processDocument = new Step({
  id: 'processDocument',
  execute: async ({ context }) => {
    const documentData = context.inputData.resumedEvent;
    return {
      documentId: documentData.documentId,
      processed: true,
      type: documentData.documentType,
    };
  },
});

const finalizeRequest = new Step({
  id: 'finalizeRequest',
  execute: async ({ context }) => {
    const requestId = context.steps.createRequest.output.requestId;
    const approved = context.steps.processApproval.output.approved;
    const documentId = context.steps.processDocument.output.documentId;

    return {
      finalized: true,
      summary: `Request ${requestId} was ${approved ? 'approved' : 'rejected'} with document ${documentId}`
    };
  },
});

// Create workflow
const requestWorkflow = new Workflow({
  name: 'document-request-workflow',
  events: {
    approvalReceived: {
      schema: z.object({
        approved: z.boolean(),
        approverName: z.string(),
      }),
    },
    documentUploaded: {
      schema: z.object({
        documentId: z.string(),
        documentType: z.enum(['invoice', 'receipt', 'contract']),
      }),
    },
  },
});

// Build workflow
requestWorkflow
  .step(createRequest)
  .afterEvent('approvalReceived')
  .step(processApproval)
  .afterEvent('documentUploaded')
  .step(processDocument)
  .then(finalizeRequest)
  .commit();

// Export workflow
export { requestWorkflow };
```

### Running the Example Workflow

```typescript
import { requestWorkflow } from './workflows';
import { mastra } from './mastra';

async function runWorkflow() {
  // Get the workflow
  const workflow = mastra.getWorkflow('document-request-workflow');
  const run = workflow.createRun();

  // Start the workflow
  const initialResult = await run.start();
  console.log('Workflow started:', initialResult.results);

  // Simulate receiving approval
  const afterApprovalResult = await run.resumeWithEvent('approvalReceived', {
    approved: true,
    approverName: 'Jane Smith',
  });
  console.log('After approval:', afterApprovalResult.results);

  // Simulate document upload
  const finalResult = await run.resumeWithEvent('documentUploaded', {
    documentId: 'doc-456',
    documentType: 'invoice',
  });
  console.log('Final result:', finalResult.results);
}

runWorkflow().catch(console.error);
```

## Best Practices

1. **Define clear event schemas**: Use Zod to create precise schemas for validating event data
2. **Use descriptive event names**: Choose event names that clearly convey their purpose
3. **Handle missing events**: Make sure your workflow can cope with events that never occur or time out
4. **Include monitoring**: Use the `watch` method to monitor suspended workflows that are waiting for events
5. **Consider timeouts**: Implement timeout mechanisms for events that may never occur
6. **Document events**: Clearly document the events your workflow depends on for other developers

## Related

- [Suspend and Resume in Workflows](../../workflows/suspend-and-resume.mdx)
- [Workflow class reference](./workflow.mdx)
- [Resume method reference](./resume.mdx)
- [Watch method reference](./watch.mdx)
- [afterEvent reference](./afterEvent.mdx)
- [resumeWithEvent reference](./resumeWithEvent.mdx)

---
title: "Reference: Workflow.execute() | Workflows | Mastra Docs"
description: "Documentation for the `.execute()` method in Mastra workflows, which runs workflow steps and returns the result."
---

# Workflow.execute()

[JA] Source: https://mastra.ai/ja/reference/workflows/execute

Executes the workflow with the provided trigger data and returns the result. The workflow must be committed before it can be executed.

## Usage Example

```typescript
const workflow = new Workflow({
  name: "my-workflow",
  triggerSchema: z.object({
    inputValue: z.number()
  })
});

workflow.step(stepOne).then(stepTwo).commit();

const result = await workflow.execute({
  triggerData: { inputValue: 42 }
});
```

## Parameters

## Returns

- `results`: the result of each completed step
- `status` (`WorkflowStatus`): the final status of the workflow run

## Additional Examples

Execute with a run ID:

```typescript
const result = await workflow.execute({
  runId: "custom-run-id",
  triggerData: { inputValue: 42 }
});
```

Handle the execution result:

```typescript
const { runId, results, status } = await workflow.execute({
  triggerData: { inputValue: 42 }
});

if (status === "COMPLETED") {
  console.log("Step results:", results);
}
```

### Related

- [Workflow.createRun()](./createRun.mdx)
- [Workflow.commit()](./commit.mdx)
- [Workflow.start()](./start.mdx)

---
title: "Reference: Workflow.if() | Conditional Branching | Mastra Docs"
description: "Documentation for the `.if()` method in Mastra workflows, which creates conditional branches based on specified conditions."
---

# Workflow.if()

[JA] Source: https://mastra.ai/ja/reference/workflows/if

> Experimental

The `.if()` method creates a conditional branch in a workflow, executing steps only when the specified condition is true. This enables dynamic workflow paths based on the results of previous steps.

## Usage

```typescript copy showLineNumbers
workflow
  .step(startStep)
  .if(async ({ context }) => {
    const value = context.getStepResult<{ value: number }>('start')?.value;
    return value < 10; // If true, execute the "if" branch
  })
  .then(ifBranchStep)
  .else()
  .then(elseBranchStep)
  .commit();
```

## Parameters

## Condition Types

### Function Condition

You can use a function that returns a boolean:

```typescript
workflow
  .step(startStep)
  .if(async ({ context }) => {
    const result = context.getStepResult<{ status: string }>('start');
    return result?.status === 'success'; // execute the "if" branch when status is "success"
  })
  .then(successStep)
  .else()
  .then(failureStep);
```

### Reference Condition

You can use a reference-based condition with comparison operators:

```typescript
workflow
  .step(startStep)
  .if({
    ref: { step: startStep, path: 'value' },
    query: { $lt: 10 }, // execute the "if" branch when the value is less than 10
  })
  .then(ifBranchStep)
  .else()
  .then(elseBranchStep);
```

## Returns

## Error Handling

The `if` method requires a previously defined step. Using it without a preceding step throws an error:

```typescript
try {
  // This throws an error
  workflow
    .if(async ({ context }) => true)
    .then(someStep)
    .commit();
} catch (error) {
  console.error(error); // "Condition requires a step to execute"
}
```

## Related

- [else reference](./else.mdx)
- [then reference](./then.mdx)
- [Control Flow guide](../../workflows/control-flow.mdx)
- [Step Condition reference](./step-condition.mdx)

---
title: "Reference: run.resume() | Running Workflows | Mastra Docs"
description: Documentation for the `.resume()` method in workflows, which resumes execution of a suspended workflow step.
---

# run.resume()

[JA] Source: https://mastra.ai/ja/reference/workflows/resume

The `.resume()` method resumes execution of a suspended workflow step, optionally providing new context data. This data is accessible to the step via its inputData property.

## Usage

```typescript copy showLineNumbers
await run.resume({
  runId: "abc-123",
  stepId: "stepTwo",
  context: {
    secondValue: 100
  }
});
```
## Parameters

### config

- `runId` (`string`): the ID of the workflow run to resume
- `stepId` (`string`): the ID of the suspended step to resume
- `context` (optional): new context data to inject into the step's inputData property

## Returns

- An object containing the results of the resumed workflow run

## Async/Await Flow

When a workflow is resumed, execution continues from the point immediately after the `suspend()` call in the step's execution function. This creates a natural flow in your code:

```typescript
// Step definition with a suspension point
const reviewStep = new Step({
  id: "review",
  execute: async ({ context, suspend }) => {
    // First part of execution
    const initialAnalysis = analyzeData(context.inputData.data);

    if (initialAnalysis.needsReview) {
      // Suspend execution here
      await suspend({ analysis: initialAnalysis });

      // This code runs after resume() is called
      // context.inputData now contains the data provided during resumption
      return {
        reviewedData: enhanceWithFeedback(initialAnalysis, context.inputData.feedback)
      };
    }

    return { reviewedData: initialAnalysis };
  }
});

const { runId, resume, start } = workflow.createRun();

await start({ inputData: { data: "some data" } });

// Later, resume the workflow
const result = await resume({
  runId: "workflow-123",
  stepId: "review",
  context: {
    // This data becomes available as `context.inputData`
    feedback: "Looks good, but please improve section 3"
  }
});
```

### Execution Flow

1. The workflow runs until it reaches `await suspend()` in the `review` step
2. The workflow state is saved and execution pauses
3. Later, `run.resume()` is called with new context data
4. Execution continues from the point after `suspend()` in the `review` step
5. The new context data (`feedback`) is available to the step via its `inputData` property
6. The step completes and returns its result
7. The workflow continues with subsequent steps

## Error Handling

The resume function can throw several types of errors:

```typescript
try {
  await run.resume({
    runId,
    stepId: "stepTwo",
    context: newData
  });
} catch (error) {
  if (error.message === "No snapshot found for workflow run") {
    // Handle missing workflow state
  }

  if (error.message === "Failed to parse workflow snapshot") {
    // Handle corrupted workflow state
  }
}
```

## Related

- [Suspend and Resume](../../workflows/suspend-and-resume.mdx)
- [`suspend` reference](./suspend.mdx)
- [`watch` reference](./watch.mdx)
- [Workflow class reference](./workflow.mdx)

---
title: ".resumeWithEvent() Method | Mastra Docs"
description: "Reference for the resumeWithEvent method, which resumes a suspended workflow using event data."
---

# resumeWithEvent()

[JA] Source: https://mastra.ai/ja/reference/workflows/resumeWithEvent

The `resumeWithEvent()` method resumes workflow execution by providing data for the specific event the workflow is waiting for.

## Syntax

```typescript
const run = workflow.createRun();

// After the workflow has started and suspended at an event step
await run.resumeWithEvent(eventName: string, data: any): Promise
```

## Parameters

| Parameter | Type | Description |
|-----------|------|-------------|
| eventName | string | The name of the event to trigger. Must match an event defined in the workflow's `events` configuration. |
| data | any | The event data to provide. Must conform to the schema defined for that event. |

## Returns

Returns a Promise that resolves to a `WorkflowRunResult` object, which includes:

- `results`: the result status and output of each step in the workflow
- `activePaths`: a map of active workflow paths and their states
- `value`: the current state value of the workflow
- Other workflow execution metadata

## Description

The `resumeWithEvent()` method is used to resume a workflow that is suspended at an event step created by the `afterEvent()` method. When called, it:

1. Validates the provided event data against the schema defined for that event
2. Loads the workflow snapshot from storage
3. Updates the context with the event data in the `resumedEvent` field
4. Resumes execution from the event step
5. Continues workflow execution with the subsequent steps
This method is part of Mastra's event-driven workflow capabilities, letting you create workflows that respond to external events or user interactions.

## Usage Notes

- The workflow must be in a suspended state, specifically at an event step created by `afterEvent(eventName)`
- The event data must conform to the schema defined for that event in the workflow configuration
- The workflow continues execution from the point where it was suspended
- If the workflow is not suspended, or is suspended at a different step, this method may throw an error
- The event data becomes available to subsequent steps through `context.inputData.resumedEvent`

## Examples

### Basic Usage

```typescript
// Define and start a workflow
const workflow = mastra.getWorkflow("approval-workflow");
const run = workflow.createRun();

// Start the workflow
await run.start({ triggerData: { requestId: "req-123" } });

// Later, when the approval event occurs:
const result = await run.resumeWithEvent("approval", {
  approved: true,
  approverName: "John Doe",
  comment: "Looks good to me!",
});

console.log(result.results);
```

### With Error Handling

```typescript
try {
  const result = await run.resumeWithEvent("paymentReceived", {
    amount: 100.5,
    transactionId: "tx-456",
    paymentMethod: "credit-card",
  });

  console.log("Workflow resumed successfully:", result.results);
} catch (error) {
  console.error("Failed to resume workflow with event:", error);
  // Handle error - could be invalid event data, workflow not suspended, etc.
}
```

### Monitoring and Auto-Resuming

```typescript
// Start a workflow
const { start, watch, resumeWithEvent } = workflow.createRun();

// Watch for suspended event steps
watch(async ({ activePaths }) => {
  const isApprovalEventSuspended =
    activePaths.get("__approval_event")?.status === "suspended";

  // Check if suspended at the approval event step
  if (isApprovalEventSuspended) {
    console.log("Workflow waiting for approval");

    // In a real scenario, you would wait for the actual event
    // Here we're simulating with a timeout
    setTimeout(async () => {
      try {
        await resumeWithEvent("approval", {
          approved: true,
          approverName: "Auto Approver",
        });
      } catch (error) {
        console.error("Failed to auto-resume workflow:", error);
      }
    }, 5000); // Wait 5 seconds before auto-approving
  }
});

// Start the workflow
await start({ triggerData: { requestId: "auto-123" } });
```

## Related

- [Event-Driven Workflows](./events.mdx)
- [afterEvent()](./afterEvent.mdx)
- [Suspend and Resume](../../workflows/suspend-and-resume.mdx)
- [resume()](./resume.mdx)
- [watch()](./watch.mdx)

---
title: "Reference: Snapshots | Workflow State Persistence | Mastra Docs"
description: "Technical reference on snapshots in Mastra - the serialized workflow state that enables suspend and resume capabilities"
---

# Snapshots

[JA] Source: https://mastra.ai/ja/reference/workflows/snapshots

In Mastra, a snapshot is a serializable representation of a workflow's complete execution state at a specific point in time. Snapshots capture all the information needed to resume a workflow from exactly where it left off, including:

- The current state of each step in the workflow
- The outputs of completed steps
- The execution path taken through the workflow
- Any suspended steps and their metadata
- The remaining retry attempts for each step
- Additional context data needed to resume execution

Snapshots are automatically created and managed by Mastra whenever a workflow is suspended, and are persisted to the configured storage system.

## The Role of Snapshots in Suspend and Resume

Snapshots are the key mechanism that enables Mastra's suspend and resume capabilities. When a workflow step calls `await suspend()`:

1. Workflow execution pauses at that exact point
2. The workflow's current state is captured as a snapshot
3. The snapshot is persisted to storage
4. The workflow step is marked as suspended with a status of `'suspended'`
5. Later, when `resume()` is called on the suspended step, the snapshot is retrieved
6. Workflow execution resumes from exactly where it left off
This mechanism provides a powerful way to implement human-in-the-loop workflows, handle rate limits, wait on external resources, or build complex branching workflows that require long pauses.

## Snapshot Anatomy

A Mastra workflow snapshot consists of several key components:

```typescript
export interface WorkflowRunState {
  // Core state info
  value: Record<string, any>; // Current state machine value
  context: {
    // Workflow context
    steps: Record<
      string,
      {
        // Step execution results
        status: "success" | "failed" | "suspended" | "waiting" | "skipped";
        payload?: any; // Step-specific data
        error?: string; // Error info if failed
      }
    >;
    triggerData: Record<string, any>; // Initial trigger data
    attempts: Record<string, number>; // Remaining retry attempts
    inputData: Record<string, any>; // Initial input data
  };

  activePaths: Array<{
    // Currently active execution paths
    stepPath: string[];
    stepId: string;
    status: string;
  }>;

  // Metadata
  runId: string; // Unique run identifier
  timestamp: number; // Time the snapshot was created

  // For nested workflows and suspended steps
  childStates?: Record<string, WorkflowRunState>; // Child workflow states
  suspendedSteps?: Record<string, string>; // Mapping of suspended steps
}
```

## How Snapshots Are Saved and Retrieved

Mastra persists snapshots to the configured storage system. By default, snapshots are saved to a LibSQL database, but you can configure other storage providers such as Upstash. Snapshots are stored in the `workflow_snapshots` table and, when using libsql, are uniquely identified by the `run_id` of the associated run. Using a persistence layer means snapshots survive across workflow executions, which enables advanced human-in-the-loop capabilities.

Read more about [libsql storage](../storage/libsql.mdx) and [upstash storage](../storage/upstash.mdx).

### Saving Snapshots

When a workflow is suspended, Mastra automatically persists the workflow snapshot with these steps:

1. The `suspend()` function in a step execution triggers the snapshot process
2. The `WorkflowInstance.suspend()` method records the suspended machine
3. `persistWorkflowSnapshot()` is called to save the current state
4. The snapshot is serialized and stored in the `workflow_snapshots` table of the configured database
5. The storage record includes the workflow name, run ID, and the serialized snapshot

### Retrieving Snapshots

When a workflow is resumed, Mastra retrieves the persisted snapshot with these steps:

1. The `resume()` method is called with a specific step ID
2. The snapshot is loaded from storage using `loadWorkflowSnapshot()`
3. The snapshot is parsed and prepared for resumption
4. Workflow execution is re-created with the snapshot state
5. The suspended step is resumed, and execution continues
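Putting the save and retrieve flows together, here is a minimal sketch of a full round trip; the step ID and trigger data are illustrative:

```typescript
const { runId, start, resume } = workflow.createRun();

// start() runs until a step calls `await suspend()`;
// at that point Mastra persists a snapshot to the configured storage
await start({ triggerData: { requestId: "req-123" } });

// Later, potentially from another process sharing the same storage,
// resume() loads the snapshot by run ID and continues from the suspended step
await resume({
  runId,
  stepId: "approval", // illustrative step ID
  context: { approved: true },
});
```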
## Storage Options for Snapshots

Mastra offers multiple storage options for persisting snapshots.

The `storage` instance is configured on the `Mastra` class and is used to set up the snapshot persistence layer for all workflows registered on that `Mastra` instance. This means storage is shared by every workflow registered on the same `Mastra` instance.

### LibSQL (Default)

The default storage option is LibSQL, a SQLite-compatible database:

```typescript
import { Mastra } from "@mastra/core/mastra";
import { DefaultStorage } from "@mastra/core/storage/libsql";

const mastra = new Mastra({
  storage: new DefaultStorage({
    config: {
      url: "file:storage.db", // local file-based database
      // For production:
      // url: process.env.DATABASE_URL,
      // authToken: process.env.DATABASE_AUTH_TOKEN,
    },
  }),
  workflows: {
    weatherWorkflow,
    travelWorkflow,
  },
});
```

### Upstash (Redis-Compatible)

For serverless environments:

```typescript
import { Mastra } from "@mastra/core/mastra";
import { UpstashStore } from "@mastra/upstash";

const mastra = new Mastra({
  storage: new UpstashStore({
    url: process.env.UPSTASH_URL,
    token: process.env.UPSTASH_TOKEN,
  }),
  workflows: {
    weatherWorkflow,
    travelWorkflow,
  },
});
```

## Best Practices for Working with Snapshots

1. **Keep it serializable**: Any data that needs to go into a snapshot must be serializable (convertible to JSON).
2. **Minimize snapshot size**: Avoid storing large data objects directly in the workflow context. Instead, store references to them (such as IDs) and fetch the data when needed.
3. **Handle resume context carefully**: Think carefully about what context you provide when resuming a workflow; it is merged with the existing snapshot data.
4. **Set up proper monitoring**: Implement monitoring for suspended workflows, especially long-running ones, to make sure they are resumed appropriately.
5. **Consider storage scaling**: For applications with many suspended workflows, make sure your storage solution scales appropriately.

## Advanced Snapshot Patterns

### Custom Snapshot Metadata

When suspending a workflow, you can include custom metadata that is useful at resume time:

```typescript
await suspend({
  reason: "Waiting for customer approval",
  requiredApprovers: ["manager", "finance"],
  requestedBy: currentUser,
  urgency: "high",
  expires: new Date(Date.now() + 7 * 24 * 60 * 60 * 1000),
});
```

This metadata is stored with the snapshot and is available when resuming.

### Conditional Resumption

You can implement conditional logic based on the suspend payload at resume time:

```typescript
run.watch(async ({ activePaths }) => {
  const isApprovalStepSuspended =
    activePaths.get("approval")?.status === "suspended";
  if (isApprovalStepSuspended) {
    const payload = activePaths.get("approval")?.suspendPayload;
    if (payload.urgency === "high" && currentUser.role === "manager") {
      await resume({
        stepId: "approval",
        context: { approved: true, approver: currentUser.id },
      });
    }
  }
});
```

## Related

- [Suspend function reference](./suspend.mdx)
- [Resume function reference](./resume.mdx)
- [Watch function reference](./watch.mdx)
- [Suspend and Resume guide](../../workflows/suspend-and-resume.mdx)

---
title: "Reference: start() | Running Workflows | Mastra Docs"
description: "Documentation for the `start()` method in workflows, which begins execution of a workflow run."
---

# start()

[JA] Source: https://mastra.ai/ja/reference/workflows/start

The start function begins execution of a workflow run. It processes all steps in the defined workflow order, handling parallel execution, branching logic, and step dependencies.

## Usage

```typescript copy showLineNumbers
const { runId, start } = workflow.createRun();
const result = await start({
  triggerData: { inputValue: 42 }
});
```

## Parameters

### config

- `triggerData` (required): initial data matching the workflow's triggerSchema

## Returns

- `results`: the combined output of all completed workflow steps
- `status` (`'completed' | 'error' | 'suspended'`): the final status of the workflow run

## Error Handling

The start function may throw several kinds of validation errors:

```typescript copy showLineNumbers
try {
  const result = await start({ triggerData: data });
} catch (error) {
  if (error instanceof ValidationError) {
    console.log(error.type); // 'circular_dependency' | 'no_terminal_path' | 'unreachable_step'
    console.log(error.details);
  }
}
```

## Related

- [Example: Creating a Workflow](../../../examples/workflows/creating-a-workflow.mdx)
- [Example: Suspend and Resume](../../../examples/workflows/suspend-and-resume.mdx)
- [createRun reference](./createRun.mdx)
- [Workflow class reference](./workflow.mdx)
- [Step class reference](./step-class.mdx)

---
title: "Reference: Step | Building Workflows | Mastra Docs"
description: Documentation for the Step class, which defines individual units of work within a workflow.
---

# Step

[JA] Source: https://mastra.ai/ja/reference/workflows/step-class

The Step class defines an individual unit of work within a workflow, encapsulating execution logic, data validation, and input/output handling.

## Usage

```typescript
const processOrder = new Step({
  id: "processOrder",
  inputSchema: z.object({
    orderId: z.string(),
    userId: z.string()
  }),
  outputSchema: z.object({
    status: z.string(),
    orderId: z.string()
  }),
  execute: async ({ context, runId }) => {
    return {
      status: "processed",
      orderId: context.orderId
    };
  }
});
```

## Constructor Parameters

- `payload` (optional): static data to be merged with variables
- `execute` (`(params: ExecuteParams) => Promise`, required): the async function containing the step's logic

### ExecuteParams

- `context`: the accumulated workflow context, including previous step results
- `runId`: the ID of the current workflow run
- `suspend` (`() => Promise`): a function that pauses step execution
- `mastra` (`Mastra`): access to the Mastra instance
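As a brief sketch of how these parameters come together inside a step, here is a hedged example; the `processOrder` step reference and the threshold logic are illustrative, not part of the API:

```typescript
const approveOrder = new Step({
  id: "approveOrder",
  execute: async ({ context, runId, suspend }) => {
    console.log(`running approveOrder in run ${runId}`);

    // Pause the workflow for manual review on large orders;
    // a later resume() call continues from this point
    // (the `total` field is an assumed output of a prior step)
    if (context.steps.processOrder.output.total > 1000) {
      await suspend({ reason: "Needs manual approval" });
    }

    return { approved: true };
  },
});
```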
https://mastra.ai/ja/reference/workflows/step-condition

A condition determines whether a step should execute based on the output of previous steps or the trigger data.

## Usage

There are three ways to specify a condition: a function, a query object, or a simple path comparison.

### 1. Function Condition

```typescript copy showLineNumbers
workflow.step(processOrder, {
  when: async ({ context }) => {
    const auth = context?.getStepResult<{status: string}>("auth");
    return auth?.status === "authenticated";
  }
});
```

### 2. Query Object

```typescript copy showLineNumbers
workflow.step(processOrder, {
  when: {
    ref: { step: 'auth', path: 'status' },
    query: { $eq: 'authenticated' }
  }
});
```

### 3. Simple Path Comparison

```typescript copy showLineNumbers
workflow.step(processOrder, {
  when: {
    "auth.status": "authenticated"
  }
});
```

Based on the shape of the condition, the workflow runner tries to match it to one of these types:

1. A simple path condition (when there is a dot in the key)
2. A base/query condition (when there is a 'ref' property)
3. A function condition (when it is an async function)

## StepCondition

- `query` (required): a MongoDB-style query using sift operators ($eq, $gt, etc.)

## Query

The Query object provides MongoDB-style query operators for comparing values from previous steps or trigger data. It supports the basic comparison operators `$eq`, `$gt`, and `$lt`, as well as the array operators `$in` and `$nin`, and these can be combined with and/or operators for complex conditions.

This query syntax enables readable conditional logic for determining whether a step should execute.
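As a sketch of combining these operators, the following assumes an `and` combinator key and illustrative step names (`fetchUser`, `fetchOrder`, `sendDiscount`); check the control flow guide for the exact combinator syntax:

```typescript
workflow.step(sendDiscount, {
  when: {
    and: [
      // the user's tier must be one of the listed values
      {
        ref: { step: 'fetchUser', path: 'tier' },
        query: { $in: ['gold', 'platinum'] },
      },
      // and the order total must exceed 100
      {
        ref: { step: 'fetchOrder', path: 'total' },
        query: { $gt: 100 },
      },
    ],
  },
});
```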
## Related

- [Step Options reference](./step-options.mdx)
- [Step Function reference](./step-function.mdx)
- [Control Flow guide](../../workflows/control-flow.mdx)

---
title: "Reference: Workflow.step() | Workflows | Mastra Docs"
description: Documentation for the `.step()` method in workflows, which adds a new step to the workflow.
---

# Workflow.step()

[JA] Source: https://mastra.ai/ja/reference/workflows/step-function

The `.step()` method adds a new step to the workflow, optionally configuring its variables and execution conditions.

## Usage

```typescript
workflow.step({
  id: "stepTwo",
  outputSchema: z.object({
    result: z.number()
  }),
  execute: async ({ context }) => {
    return { result: 42 };
  }
});
```

## Parameters

### StepDefinition

- `execute` (required): the function containing the step logic

### StepOptions

- `variables` (optional): a map of variable names to their source references
- `when` (`StepCondition`, optional): a condition that must be satisfied for the step to execute

## Related

- [Basic usage of step instances](../../workflows/steps.mdx)
- [Step class reference](./step-class.mdx)
- [Workflow class reference](./workflow.mdx)
- [Control Flow guide](../../workflows/control-flow.mdx)

---
title: "Reference: StepOptions | Building Workflows | Mastra Docs"
description: Documentation for step options in workflows, which control variable mapping, execution conditions, and other runtime behavior.
---

# StepOptions

[JA] Source: https://mastra.ai/ja/reference/workflows/step-options

Configuration options for workflow steps that control variable mapping, execution conditions, and other runtime behavior.

## Usage

```typescript
workflow.step(processOrder, {
  variables: {
    orderId: { step: 'trigger', path: 'id' },
    userId: { step: 'auth', path: 'user.id' }
  },
  when: {
    ref: { step: 'auth', path: 'status' },
    query: { $eq: 'authenticated' }
  }
});
```

## Properties

- `variables` (optional): maps step input variables to values from other steps
- `when` (`StepCondition`, optional): a condition that must be met for the step to execute

### VariableRef

## Related

- [Path Comparison](../../workflows/control-flow.mdx#path-comparison)
- [Step Function reference](./step-function.mdx)
- [Step class reference](./step-class.mdx)
- [Workflow class reference](./workflow.mdx)
- [Control Flow guide](../../workflows/control-flow.mdx)

---
title: "Step Retries | Error Handling | Mastra Docs"
description: "Automatically retry failed steps in Mastra workflows with configurable retry policies."
---

# Step Retries

[JA] Source: https://mastra.ai/ja/reference/workflows/step-retries

Mastra provides a built-in retry mechanism for handling transient failures in workflow steps. This lets workflows recover gracefully from temporary issues without manual intervention.

## Overview

When a step in a workflow fails (throws an exception), Mastra can automatically retry the step execution based on a configurable retry policy. This helps handle issues such as:

- Network connectivity problems
- Service unavailability
- Rate limiting
- Temporary resource constraints
- Other transient failures

## Default Behavior

By default, steps are not retried when they fail. This means:

- The step executes once
- If it fails, the step is immediately marked as failed
- The workflow continues to execute any subsequent steps that do not depend on the failed step

## Configuration Options

Retries can be configured at two levels:

### 1. Workflow-Level Configuration

You can set a default retry configuration for all steps in a workflow:

```typescript
const workflow = new Workflow({
  name: 'my-workflow',
  retryConfig: {
    attempts: 3, // number of retries in addition to the initial attempt
    delay: 1000, // delay between retries in milliseconds
  },
});
```

### 2. Step-Level Configuration

You can also configure retries on individual steps, which overrides the workflow-level configuration for that specific step:

```typescript
const fetchDataStep = new Step({
  id: 'fetchData',
  execute: async () => {
    // Fetch data from an external API
  },
  retryConfig: {
    attempts: 5, // this step retries up to 5 times
    delay: 2000, // with a 2-second delay between retries
  },
});
```

## Retry Parameters

The `retryConfig` object supports the following parameters:

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `attempts` | number | 0 | Number of retry attempts (in addition to the initial attempt) |
| `delay` | number | 1000 | Time to wait between retries, in milliseconds |

## How Retries Work

When a step fails, Mastra's retry mechanism:

1. Checks whether the step has retry attempts remaining
2. If attempts remain:
   - Decrements the attempt counter
   - Moves the step into a "waiting" state
   - Waits for the configured delay period
   - Retries the step execution
3. If no attempts remain, or all attempts have been exhausted:
   - Marks the step as "failed"
   - Continues workflow execution (for steps that do not depend on the failed step)

During retry attempts, the workflow run stays active but pauses for the specific step being retried.

## Examples

### Basic Retry Example

```typescript
import { Workflow, Step } from '@mastra/core/workflows';

// Define a step that might fail
const unreliableApiStep = new Step({
  id: 'callUnreliableApi',
  execute: async () => {
    // Simulate an API call that might fail
    const random = Math.random();
    if (random < 0.7) {
      throw new Error('API call failed');
    }
    return { data: 'API response data' };
  },
  retryConfig: {
    attempts: 3, // retry up to 3 times
    delay: 2000, // wait 2 seconds between attempts
  },
});

// Create a workflow containing the unreliable step
const workflow = new Workflow({
  name: 'retry-demo-workflow',
});

workflow
  .step(unreliableApiStep)
  .then(processResultStep)
  .commit();
```

### Workflow-Level Retries with Step Overrides

```typescript
import { Workflow, Step } from '@mastra/core/workflows';

// Create a workflow with a default retry configuration
const workflow = new Workflow({
  name: 'multi-retry-workflow',
  retryConfig: {
    attempts: 2, // all steps retry twice by default
    delay: 1000, // with a 1-second delay
  },
});

// This step uses the workflow's default retry configuration
const standardStep = new Step({
  id: 'standardStep',
  execute: async () => {
    // An operation that might fail
  },
});

// This step overrides the workflow's retry configuration
const criticalStep = new Step({
  id: 'criticalStep',
  execute: async () => {
    // A critical operation that needs more retry attempts
  },
  retryConfig: {
    attempts: 5, // overridden with 5 retry attempts
    delay: 5000, // and a longer 5-second delay
  },
});

// This step disables retries
const noRetryStep = new Step({
  id: 'noRetryStep',
  execute: async () => {
    // An operation that should not be retried
  },
  retryConfig: {
    attempts: 0, // explicitly disable retries
  },
});

workflow
  .step(standardStep)
  .then(criticalStep)
  .then(noRetryStep)
  .commit();
```

## Monitoring Retries

You can monitor retry attempts in the logs. Mastra logs retry-related events at the `debug` level:

```
[DEBUG] Step fetchData failed (runId: abc-123)
[DEBUG] Attempt count for step fetchData: 2 attempts remaining (runId: abc-123)
[DEBUG] Step fetchData waiting (runId: abc-123)
[DEBUG] Step fetchData finished waiting (runId: abc-123)
[DEBUG] Step fetchData pending (runId: abc-123)
```

## Best Practices

1. **Use retries for transient failures**: Only configure retries for operations that can fail transiently. Retries will not help with deterministic errors (such as validation errors).
2. **Set appropriate delays**: Consider longer delays for external API calls, to give services time to recover.
3. **Limit retry attempts**: Do not set extremely high retry counts, as this can make workflows run excessively long during outages.
4. **Implement idempotent operations**: Make sure your step's `execute` function is idempotent (safe to call multiple times without side effects), since it may be retried.
5. **Consider backoff strategies**: For more advanced scenarios, consider implementing exponential backoff in your step logic for operations that may be rate-limited, as in the sketch below.
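A minimal sketch of that last point: since `retryConfig` only supports a fixed `delay`, one hedged approach is to do the exponential waiting inside the step itself. The attempt count, base delay, and `callRateLimitedApi` helper are illustrative:

```typescript
const rateLimitedStep = new Step({
  id: 'rateLimitedStep',
  execute: async () => {
    const maxAttempts = 5;
    const baseDelayMs = 500;
    let attempt = 0;

    while (true) {
      try {
        // callRateLimitedApi() is a placeholder for your rate-limited operation
        return await callRateLimitedApi();
      } catch (error) {
        attempt += 1;
        if (attempt >= maxAttempts) throw error;
        // Exponential backoff: 500 ms, 1 s, 2 s, 4 s, ...
        const delay = baseDelayMs * 2 ** (attempt - 1);
        await new Promise((resolve) => setTimeout(resolve, delay));
      }
    }
  },
});
```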
## Related

- [Step class reference](./step-class.mdx)
- [Workflow configuration](./workflow.mdx)
- [Error Handling in Workflows](../../workflows/error-handling.mdx)

---
title: "Reference: suspend() | Control Flow | Mastra Docs"
description: "Documentation for the suspend function in Mastra workflows, which pauses execution until resumed."
---

# suspend()

[JA] Source: https://mastra.ai/ja/reference/workflows/suspend

Pauses workflow execution at the current step until explicitly resumed. The workflow state is persisted so execution can continue later.

## Usage Example

```typescript
const approvalStep = new Step({
  id: "needsApproval",
  execute: async ({ context, suspend }) => {
    if (context.steps.amount > 1000) {
      await suspend();
    }
    return { approved: true };
  }
});
```

## Parameters

- `payload` (optional): data to store with the suspended state

## Returns

- A `Promise` that resolves when the workflow has been successfully suspended

## Additional Examples

Suspend with metadata:

```typescript
const reviewStep = new Step({
  id: "review",
  execute: async ({ context, suspend }) => {
    await suspend({
      reason: "Needs manager approval",
      requestedBy: context.user
    });
    return { reviewed: true };
  }
});
```

### Related

- [Suspend and Resume in Workflows](../../workflows/suspend-and-resume.mdx)
- [.resume()](./resume.mdx)
- [.watch()](./watch.mdx)

---
title: "Reference: Workflow.then() | Building Workflows | Mastra Docs"
description: Documentation for the `.then()` method in workflows, which creates sequential dependencies between steps.
---

# Workflow.then()

[JA] Source: https://mastra.ai/ja/reference/workflows/then

The `.then()` method creates a sequential dependency between workflow steps, guaranteeing that steps execute in a specific order.

## Usage

```typescript
workflow
  .step(stepOne)
  .then(stepTwo)
  .then(stepThree);
```

## Parameters

## Returns

## Validation

When using `then`:

- The previous step must exist in the workflow
- Steps cannot form circular dependencies
- Each step can appear only once in a sequential chain

## Error Handling

```typescript
try {
  workflow
    .step(stepA)
    .then(stepB)
    .then(stepA) // Will throw error - circular dependency
    .commit();
} catch (error) {
  if (error instanceof ValidationError) {
    console.log(error.type); // 'circular_dependency'
    console.log(error.details);
  }
}
```

## Related

- [step reference](./step-class.mdx)
- [after reference](./after.mdx)
- [Sequential Steps example](../../../examples/workflows/sequential-steps.mdx)
- [Control Flow guide](../../workflows/control-flow.mdx)

---
title: "Reference: Workflow.until() | Looping in Workflows | Mastra Docs"
description: "Documentation for the `.until()` method in Mastra workflows, which repeats a step until a specified condition becomes true."
---

# Workflow.until()

[JA] Source: https://mastra.ai/ja/reference/workflows/until

The `.until()` method repeats a step until a specified condition becomes true. This creates a loop that keeps executing the given step until the condition is satisfied.

## Usage

```typescript
workflow
  .step(incrementStep)
  .until(condition, incrementStep)
  .then(finalStep);
```

## Parameters

## Condition Types

### Function Condition

You can use a function that returns a boolean:

```typescript
workflow
  .step(incrementStep)
  .until(async ({ context }) => {
    const result = context.getStepResult<{ value: number }>('increment');
    return (result?.value ?? 0) >= 10; // stop when the value reaches or exceeds 10
  }, incrementStep)
  .then(finalStep);
```
### Reference Condition

You can use a reference-based condition with comparison operators:

```typescript
workflow
  .step(incrementStep)
  .until(
    {
      ref: { step: incrementStep, path: 'value' },
      query: { $gte: 10 }, // stop when the value is greater than or equal to 10
    },
    incrementStep
  )
  .then(finalStep);
```

## Comparison Operators

When using reference-based conditions, you can use these comparison operators:

| Operator | Description | Example |
|----------|-------------|---------|
| `$eq` | Equal to | `{ $eq: 10 }` |
| `$ne` | Not equal to | `{ $ne: 0 }` |
| `$gt` | Greater than | `{ $gt: 5 }` |
| `$gte` | Greater than or equal to | `{ $gte: 10 }` |
| `$lt` | Less than | `{ $lt: 20 }` |
| `$lte` | Less than or equal to | `{ $lte: 15 }` |

## Returns

## Example

```typescript
import { Workflow, Step } from '@mastra/core';
import { z } from 'zod';

// Create a step that increments a counter
const incrementStep = new Step({
  id: 'increment',
  description: 'Increments the counter by 1',
  outputSchema: z.object({
    value: z.number(),
  }),
  execute: async ({ context }) => {
    // Get current value from previous execution or start at 0
    const currentValue =
      context.getStepResult<{ value: number }>('increment')?.value ||
      context.getStepResult<{ startValue: number }>('trigger')?.startValue ||
      0;

    // Increment the value
    const value = currentValue + 1;
    console.log(`Incrementing to ${value}`);
    return { value };
  },
});

// Create a final step
const finalStep = new Step({
  id: 'final',
  description: 'Final step after the loop completes',
  execute: async ({ context }) => {
    const finalValue = context.getStepResult<{ value: number }>('increment')?.value;
    console.log(`Loop completed with final value: ${finalValue}`);
    return { finalValue };
  },
});

// Create the workflow
const counterWorkflow = new Workflow({
  name: 'counter-workflow',
  triggerSchema: z.object({
    startValue: z.number(),
    targetValue: z.number(),
  }),
});

// Configure the workflow with an until loop
counterWorkflow
  .step(incrementStep)
  .until(async ({ context }) => {
    const targetValue = context.triggerData.targetValue;
    const currentValue = context.getStepResult<{ value: number }>('increment')?.value ?? 0;
    return currentValue >= targetValue;
  }, incrementStep)
  .then(finalStep)
  .commit();

// Execute the workflow
const run = counterWorkflow.createRun();
const result = await run.start({ triggerData: { startValue: 0, targetValue: 5 } });
// Will increment from 0 to 5, then stop and execute finalStep
```
## Related

- [.while()](./while.mdx) - loop while a condition is true
- [Control Flow guide](../../workflows/control-flow.mdx#loop-control-with-until-and-while)
- [Workflow class reference](./workflow.mdx)

---
title: "Reference: run.watch() | Workflows | Mastra Docs"
description: Documentation for the `.watch()` method in workflows, which monitors the status of a workflow run.
---

# run.watch()

[JA] Source: https://mastra.ai/ja/reference/workflows/watch

The `.watch()` function subscribes to state changes on a mastra run, letting you monitor execution progress and react to state updates.

## Usage Example

```typescript
import { Workflow } from "@mastra/core/workflows";

const workflow = new Workflow({
  name: "document-processor"
});

const run = workflow.createRun();

// Subscribe to state changes
const unsubscribe = run.watch(({results, activePaths}) => {
  console.log('Results:', results);
  console.log('Active paths:', activePaths);
});

// Run the workflow
await run.start({
  input: { text: "Process this document" }
});

// Stop watching
unsubscribe();
```

## Parameters

- `callback` (required): a function called every time the workflow state changes

### WorkflowState Properties

- `results`: outputs from completed workflow steps
- `activePaths` (`Map`): the current status of each step
- `runId` (`string`): the ID of the workflow run
- `timestamp` (`number`): the timestamp of the workflow run

## Returns

- `unsubscribe` (`() => void`): a function that stops watching the workflow's state changes

## Additional Examples

Watch for completion of a specific step:

```typescript
run.watch(({results, activePaths}) => {
  if (activePaths.get('processDocument')?.status === 'completed') {
    console.log('Document processing output:', results['processDocument'].output);
  }
});
```

Error handling:

```typescript
run.watch(({results, activePaths}) => {
  if (activePaths.get('processDocument')?.status === 'failed') {
    console.error('Document processing failed:', results['processDocument'].error);
    // Implement error recovery logic
  }
});
```

### Related

- [Creating workflows](/reference/workflows/createRun)
- [Configuring steps](/reference/workflows/step-class)

---
title: "Reference: Workflow.while() | Looping in Workflows | Mastra Docs"
description: "Documentation for the `.while()` method in Mastra workflows, which repeats a step as long as a specified condition remains true."
---

# Workflow.while()

[JA] Source: https://mastra.ai/ja/reference/workflows/while

The `.while()` method repeats a step as long as a specified condition remains true. This creates a loop that keeps executing the given step until the condition becomes false.

## Usage

```typescript
workflow
  .step(incrementStep)
  .while(condition, incrementStep)
  .then(finalStep);
```

## Parameters

## Condition Types

### Function Condition

You can use a function that returns a boolean:

```typescript
workflow
  .step(incrementStep)
  .while(async ({ context }) => {
    const result = context.getStepResult<{ value: number }>('increment');
    return (result?.value ?? 0) < 10; // continue while the value is less than 10
  }, incrementStep)
  .then(finalStep);
```
### Reference Condition

You can use a reference-based condition with comparison operators:

```typescript
workflow
  .step(incrementStep)
  .while(
    {
      ref: { step: incrementStep, path: 'value' },
      query: { $lt: 10 }, // continue while the value is less than 10
    },
    incrementStep
  )
  .then(finalStep);
```

## Comparison Operators

When using reference-based conditions, you can use these comparison operators:

| Operator | Description | Example |
|----------|-------------|---------|
| `$eq` | Equal to | `{ $eq: 10 }` |
| `$ne` | Not equal to | `{ $ne: 0 }` |
| `$gt` | Greater than | `{ $gt: 5 }` |
| `$gte` | Greater than or equal to | `{ $gte: 10 }` |
| `$lt` | Less than | `{ $lt: 20 }` |
| `$lte` | Less than or equal to | `{ $lte: 15 }` |

## Returns

## Example

```typescript
import { Workflow, Step } from '@mastra/core';
import { z } from 'zod';

// Create a step that increments a counter
const incrementStep = new Step({
  id: 'increment',
  description: 'Increments the counter by 1',
  outputSchema: z.object({
    value: z.number(),
  }),
  execute: async ({ context }) => {
    // Get current value from previous execution or start at 0
    const currentValue =
      context.getStepResult<{ value: number }>('increment')?.value ||
      context.getStepResult<{ startValue: number }>('trigger')?.startValue ||
      0;

    // Increment the value
    const value = currentValue + 1;
    console.log(`Incrementing to ${value}`);
    return { value };
  },
});

// Create a final step
const finalStep = new Step({
  id: 'final',
  description: 'Final step after the loop completes',
  execute: async ({ context }) => {
    const finalValue = context.getStepResult<{ value: number }>('increment')?.value;
    console.log(`Loop completed with final value: ${finalValue}`);
    return { finalValue };
  },
});

// Create the workflow
const counterWorkflow = new Workflow({
  name: 'counter-workflow',
  triggerSchema: z.object({
    startValue: z.number(),
    targetValue: z.number(),
  }),
});

// Configure the workflow with a while loop
counterWorkflow
  .step(incrementStep)
  .while(
    async ({ context }) => {
      const targetValue = context.triggerData.targetValue;
      const currentValue = context.getStepResult<{ value: number }>('increment')?.value ?? 0;
      return currentValue < targetValue;
    },
    incrementStep
  )
  .then(finalStep)
  .commit();

// Execute the workflow
const run = counterWorkflow.createRun();
const result = await run.start({ triggerData: { startValue: 0, targetValue: 5 } });
// Will increment from 0 to 4, then stop and execute finalStep
```
## Related

- [.until()](./until.mdx) - loop until a condition becomes true
- [Control Flow guide](../../workflows/control-flow.mdx#loop-control-with-until-and-while)
- [Workflow class reference](./workflow.mdx)

---
title: "Reference: Workflow Class | Building Workflows | Mastra Docs"
description: Documentation for the Workflow class in Mastra, which lets you create state machines for complex sequences of operations with conditional branching and data validation.
---

# Workflow Class

[JA] Source: https://mastra.ai/ja/reference/workflows/workflow

The Workflow class lets you create state machines for complex sequences of operations with conditional branching and data validation.

```ts copy
import { Workflow } from "@mastra/core/workflows";

const workflow = new Workflow({ name: "my-workflow" });
```

## API Reference

### Constructor

- `logger` (optional): an optional logger instance for workflow execution details
- `steps` (`Step[]`): an array of steps to include in the workflow
- `triggerSchema` (`z.Schema`): an optional schema for validating workflow trigger data

### Core Methods

#### `step()`

Adds a [Step](./step-class.mdx) to the workflow, including transitions to other steps. Returns the workflow instance for chaining. [Learn more about steps](./step-class.mdx).

#### `commit()`

Validates and finalizes the workflow configuration. Must be called after adding all steps.

#### `execute()`

Executes the workflow with optional trigger data. Typed based on the [trigger schema](./workflow.mdx#trigger-schemas).

## Trigger Schemas

Trigger schemas validate the initial data passed to a workflow using Zod.

```ts showLineNumbers copy
const workflow = new Workflow({
  name: "order-process",
  triggerSchema: z.object({
    orderId: z.string(),
    customer: z.object({
      id: z.string(),
      email: z.string().email(),
    }),
  }),
});
```

The schema:

- Validates data passed to `execute()`
- Provides TypeScript types for your workflow input

## Validation

Workflow validation happens at two key moments:

### 1. At Commit Time

When you call `.commit()`, the workflow validates:

```ts showLineNumbers copy
workflow
  .step('step1', {...})
  .step('step2', {...})
  .commit(); // validates the workflow structure
```

- Circular dependencies between steps
- Terminal paths (every path must terminate)
- Unreachable steps
- Variable references to non-existent steps
- Duplicate step IDs

### 2. At Run Time
When you call `start()`, it validates:

```ts showLineNumbers copy
const { runId, start } = workflow.createRun();

// Validates trigger data against the schema
await start({
  triggerData: {
    orderId: "123",
    customer: {
      id: "cust_123",
      email: "invalid-email", // will fail validation
    },
  },
});
```

- Trigger data against the trigger schema
- Each step's input data against its inputSchema
- That variable paths exist in the referenced step's output
- That required variables are present

## Workflow Status

A workflow's status indicates its current execution state. Possible values include `SUSPENDED`, `COMPLETED`, and `FAILED`, as in the example below.

### Example: Handling Different Statuses

```typescript showLineNumbers copy
const { runId, start, watch } = workflow.createRun();

watch(async ({ status }) => {
  switch (status) {
    case "SUSPENDED":
      // Handle the suspended state
      break;
    case "COMPLETED":
      // Process the results
      break;
    case "FAILED":
      // Handle the error state
      break;
  }
});

await start({ triggerData: data });
```

## Error Handling

```ts showLineNumbers copy
try {
  const { runId, start, watch, resume } = workflow.createRun();
  await start({ triggerData: data });
} catch (error) {
  if (error instanceof ValidationError) {
    // Handle validation errors
    console.log(error.type); // 'circular_dependency' | 'no_terminal_path' | 'unreachable_step'
    console.log(error.details); // { stepId?: string, path?: string[] }
  }
}
```

## Passing Context Between Steps

Steps can access data from previous steps in the workflow through the context object. Each step receives the accumulated context from all previous steps that have executed.

```typescript showLineNumbers copy
workflow
  .step({
    id: 'getData',
    execute: async ({ context }) => {
      return {
        data: { id: '123', value: 'example' }
      };
    }
  })
  .step({
    id: 'processData',
    execute: async ({ context }) => {
      // Access data from the previous step via context.steps
      const previousData = context.steps.getData.output.data;
      // Process previousData.id and previousData.value
    }
  });
```

The context object:

- Contains the results of all completed steps in `context.steps`
- Provides access to step outputs through `context.steps.[stepId].output`
- Is typed based on step output schemas
- Is immutable to ensure data consistency

## Related Documentation

- [Step](./step-class.mdx)
- [.then()](./then.mdx)
- [.step()](./step-function.mdx)
- [.after()](./after.mdx)

---
title: 'Showcase'
description: 'Check out these applications built with Mastra'
---

[JA] Source: https://mastra.ai/ja/showcase

import { ShowcaseGrid } from '@/components/showcase-grid';