# voice.on()

The `on()` method registers event listeners for various voice events. This is particularly important for real-time voice providers, where events are used to communicate transcribed text, audio responses, and other state changes.

## Usage Example

```typescript
import { OpenAIRealtimeVoice } from "@mastra/voice-openai-realtime";
import Speaker from "@mastra/node-speaker";
import chalk from "chalk";

// Initialize a real-time voice provider
const voice = new OpenAIRealtimeVoice({
  realtimeConfig: {
    model: "gpt-5.1-realtime",
    apiKey: process.env.OPENAI_API_KEY,
  },
});

// Connect to the real-time service
await voice.connect();

// Register event listener for transcribed text
voice.on("writing", (event) => {
  if (event.role === "user") {
    process.stdout.write(chalk.green(event.text));
  } else {
    process.stdout.write(chalk.blue(event.text));
  }
});

// Listen for audio data and play it
const speaker = new Speaker({
  sampleRate: 24100,
  channels: 1,
  bitDepth: 16,
});

voice.on("speaker", (stream) => {
  stream.pipe(speaker);
});

// Register event listener for errors
voice.on("error", ({ message, code, details }) => {
  console.error(`Error ${code}: ${message}`, details);
});
```

## Parameters

### event: string

Name of the event to listen for. See the [Voice Events](./voice.events) documentation for a list of available events.

### callback: function

Callback function that will be called when the event occurs. The callback signature depends on the specific event.

## Return Value

This method does not return a value.

## Events

For a comprehensive list of events and their payload structures, see the [Voice Events](https://mastra.ai/reference/voice/voice.events/llms.txt) documentation. Common events include:

- `speaking`: Emitted when audio data is available
- `speaker`: Emitted with a stream that can be piped to audio output
- `writing`: Emitted when text is transcribed or generated
- `error`: Emitted when an error occurs
- `tool-call-start`: Emitted when a tool is about to be executed
- `tool-call-result`: Emitted when a tool execution is complete

Different voice providers may support different sets of events with varying payload structures.
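For example, the tool events fire immediately before and after a tool runs. Because their payload shape is provider-specific, the minimal sketch below simply logs whatever payload arrives rather than assuming particular fields; check the Voice Events reference for the exact structure your provider emits.

```typescript
import { OpenAIRealtimeVoice } from "@mastra/voice-openai-realtime";

const voice = new OpenAIRealtimeVoice();
await voice.connect();

// Log the tool lifecycle without assuming specific payload fields;
// the payload structure depends on the voice provider in use.
voice.on("tool-call-start", (payload) => {
  console.log("Tool call starting:", payload);
});

voice.on("tool-call-result", (payload) => {
  console.log("Tool call finished:", payload);
});
```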
## Using with CompositeVoice

When using `CompositeVoice`, the `on()` method delegates to the configured real-time provider:

```typescript
import { CompositeVoice } from "@mastra/core/voice";
import { OpenAIRealtimeVoice } from "@mastra/voice-openai-realtime";
import Speaker from "@mastra/node-speaker";

const speaker = new Speaker({
  sampleRate: 24100, // Audio sample rate in Hz - standard for high-quality audio on MacBook Pro
  channels: 1, // Mono audio output (as opposed to stereo which would be 2)
  bitDepth: 16, // Bit depth for audio quality - CD quality standard (16-bit resolution)
});

const realtimeVoice = new OpenAIRealtimeVoice();
const voice = new CompositeVoice({
  realtime: realtimeVoice,
});

// Connect to the real-time service
await voice.connect();

// This will register the event listener with the OpenAIRealtimeVoice provider
voice.on("speaker", (stream) => {
  stream.pipe(speaker);
});
```

## Notes

- This method is primarily used with real-time voice providers that support event-based communication
- If called on a voice provider that doesn't support events, it will log a warning and do nothing
- Event listeners should be registered before calling methods that might emit events
- To remove an event listener, use the [voice.off()](https://mastra.ai/reference/voice/voice.off/llms.txt) method with the same event name and callback function (see the sketch after this list)
- Multiple listeners can be registered for the same event
- The callback function will receive different data depending on the event type (see [Voice Events](https://mastra.ai/reference/voice/voice.events/llms.txt))
- For best performance, consider removing event listeners when they are no longer needed
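A minimal sketch of that cleanup pattern: keep a reference to the callback when registering it so the identical function can later be passed to `voice.off()`. The `writing` payload shape here follows the usage example above; other events carry different data.

```typescript
import { OpenAIRealtimeVoice } from "@mastra/voice-openai-realtime";

const voice = new OpenAIRealtimeVoice();
await voice.connect();

// Keep a named reference so the same function can be removed later
const onWriting = (event: { role: string; text: string }) => {
  process.stdout.write(event.text);
};

voice.on("writing", onWriting);

// ...later, when transcription output is no longer needed,
// remove the listener with the same event name and callback
voice.off("writing", onWriting);
```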