Text-to-Speech (TTS)
Text-to-Speech (TTS) in Mastra offers a unified API for synthesizing spoken audio from text using various providers. By incorporating TTS into your applications, you can enhance user experience with natural voice interactions, improve accessibility for users with visual impairments, and create more engaging multimodal interfaces.
TTS is a core component of any voice application. Combined with Speech-to-Text (STT), it forms the foundation of voice interaction systems. Newer models also support Speech-to-Speech (STS), which enables real-time conversational interactions but comes at a significantly higher cost.
Configuration
To use TTS in Mastra, you need to provide a speechModel when initializing the voice provider. This includes parameters such as:
- name: The specific TTS model to use.
- apiKey: Your API key for authentication.
- Provider-specific options: Additional options that may be required or supported by the specific voice provider.
The speaker option allows you to select different voices for speech synthesis. Each provider offers a range of voices with distinct characteristics, including voice diversity, quality, personality, and multilingual support.
Note: All of these parameters are optional. If they are omitted, the voice provider's default settings are used; the defaults vary from provider to provider.
import { OpenAIVoice } from "@mastra/voice-openai";

const voice = new OpenAIVoice({
  speechModel: {
    name: "tts-1-hd",
    apiKey: process.env.OPENAI_API_KEY,
  },
  speaker: "alloy",
});

// If using default settings, the configuration can be simplified to:
const voice = new OpenAIVoice();
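If you want to choose a speaker programmatically, Mastra voice providers expose a getSpeakers() method that lists the voices they support. A minimal sketch, assuming each returned entry includes a voiceId field that can be passed as the speaker option:

import { OpenAIVoice } from "@mastra/voice-openai";

const voice = new OpenAIVoice();

// List the voices this provider supports.
// Each entry is assumed to carry a voiceId usable as the speaker option.
const speakers = await voice.getSpeakers();
console.log(speakers.map((speaker) => speaker.voiceId));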
Available Providers
Mastra supports a wide range of Text-to-Speech providers, each with their own unique capabilities and voice options. You can choose the provider that best suits your application’s needs:
- OpenAI - High-quality voices with natural intonation and expression
- Azure - Microsoft’s speech service with a wide range of voices and languages
- ElevenLabs - Ultra-realistic voices with emotion and fine-grained control
- PlayAI - Specialized in natural-sounding voices with various styles
- Google - Google’s speech synthesis with multilingual support
- Cloudflare - Edge-optimized speech synthesis for low-latency applications
- Deepgram - AI-powered speech technology with high accuracy
- Speechify - Text-to-speech optimized for readability and accessibility
- Sarvam - Specialized in Indic languages and accents
- Murf - Studio-quality voice overs with customizable parameters
Each provider is implemented as a separate package that you can install as needed:
pnpm add @mastra/voice-openai # Example for OpenAI
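Because the providers share a common voice interface, switching providers is largely a matter of swapping the package and constructor. The snippet below is an illustrative sketch only: it assumes the ElevenLabs package exports an ElevenLabsVoice class that accepts the same speechModel and speaker shape as the OpenAI example above, and the model and voice names are placeholders; check the provider's reference page for the exact options it supports.

import { ElevenLabsVoice } from "@mastra/voice-elevenlabs";

// Hypothetical configuration, assuming the provider follows the same
// speechModel/speaker shape as the OpenAI example above.
const voice = new ElevenLabsVoice({
  speechModel: {
    name: "eleven_multilingual_v2", // example model name
    apiKey: process.env.ELEVENLABS_API_KEY,
  },
  speaker: "your-voice-id", // replace with a voice ID from the provider
});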
Using the Speak Method
The primary method for TTS is the speak() method, which converts text to speech. It accepts options that allow you to specify the speaker and other provider-specific settings. Here's how to use it:
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";
import { OpenAIVoice } from "@mastra/voice-openai";

const voice = new OpenAIVoice();

const agent = new Agent({
  name: "Voice Agent",
  instructions: "You are a voice assistant that can help users with their tasks.",
  model: openai("gpt-4o"),
  voice,
});

const { text } = await agent.generate("What color is the sky?");

// Convert the text to speech as an audio stream
const readableStream = await voice.speak(text, {
  speaker: "default", // Optional: specify a speaker
  properties: {
    speed: 1.0, // Optional: adjust speech speed
    pitch: "default", // Optional: specify pitch if supported
  },
});
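The stream returned by speak() can be piped wherever you need the audio. A minimal sketch, assuming a Node.js environment where the provider returns a standard readable stream (the file name and output format are examples; the actual format depends on the provider):

import { createWriteStream } from "fs";

// Write the synthesized audio to a file.
const fileStream = createWriteStream("./output.mp3");
readableStream.pipe(fileStream);

fileStream.on("finish", () => {
  console.log("Audio written to ./output.mp3");
});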
Check out the Adding Voice to Agents documentation to learn how to use TTS in an agent.