Latency optimization
Learn how to optimize text-to-speech latency.
This guide covers the core principles for improving text-to-speech latency.
While there are many individual techniques, we’ll group them into four principles.
Four principles
Enterprise customers benefit from increased concurrency limits and priority access to our rendering queue. Contact sales to learn more about our enterprise plans.
Use Flash models
Flash models deliver ~75ms inference speeds, making them ideal for real-time applications. The trade-off is a slight reduction in audio quality compared to Multilingual v2.
The ~75ms figure refers to model inference time only. Actual end-to-end latency will vary with factors such as your geographic location and the endpoint type used.
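As a rough sketch of what opting into a Flash model looks like, the request below uses Python's requests library and assumes the eleven_flash_v2_5 model ID and the /v1/text-to-speech/{voice_id} endpoint from the API reference; verify both against the current documentation before relying on them.

```python
import requests

API_KEY = "your-api-key"    # placeholder, replace with your own key
VOICE_ID = "your-voice-id"  # placeholder voice ID

# Request speech from a Flash model (assumed model ID) for lower inference latency.
response = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": API_KEY, "Content-Type": "application/json"},
    json={
        "model_id": "eleven_flash_v2_5",  # Flash model assumed here; swap in your chosen model
        "text": "Hello from a low-latency Flash model.",
    },
)
response.raise_for_status()

with open("output.mp3", "wb") as f:
    f.write(response.content)
```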
Leverage streaming
There are three types of text-to-speech endpoints available in our API Reference:
- Regular endpoint: Returns a complete audio file in a single response.
- Streaming endpoint: Returns audio chunks progressively using Server-sent events.
- Websockets endpoint: Enables bidirectional streaming for real-time audio generation.
Streaming
Streaming endpoints return audio progressively as it is generated, reducing time-to-first-byte. They are recommended when the full input text is available up front.
Streaming is supported for the Text to Speech API, Voice Changer API and Audio Isolation API.
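As a minimal sketch of consuming the streaming endpoint, the example below reads the response in chunks with Python's requests library; the /stream path, model ID, and header names are assumptions drawn from the API reference and worth double-checking.

```python
import requests

API_KEY = "your-api-key"    # placeholder
VOICE_ID = "your-voice-id"  # placeholder

# Consume the streaming endpoint chunk by chunk so playback (or forwarding)
# can begin before the full file has been generated.
with requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}/stream",
    headers={"xi-api-key": API_KEY, "Content-Type": "application/json"},
    json={"model_id": "eleven_flash_v2_5", "text": "Streaming reduces time-to-first-byte."},
    stream=True,
) as response:
    response.raise_for_status()
    with open("output.mp3", "wb") as f:
        for chunk in response.iter_content(chunk_size=4096):
            if chunk:
                f.write(chunk)  # in a real app, feed chunks to an audio player instead
```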
Websockets
The text-to-speech websocket endpoint supports bidirectional streaming, making it ideal for applications with real-time text input (e.g. LLM outputs).
Setting auto_mode to true automatically handles generation triggers, removing the need to manually manage chunk strategies.
If auto_mode is disabled, the model will wait for enough text to match the chunk schedule before starting to generate audio. For instance, if you set a chunk schedule of 125 characters but only 50 arrive, the model stalls until additional characters come in, potentially increasing latency.
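The sketch below shows one way this might look with the third-party websockets library; the stream-input path, the auto_mode query parameter, and the message fields (xi_api_key, text, audio, isFinal) are assumptions based on the websocket guide and should be checked against the current API reference.

```python
import asyncio
import base64
import json

import websockets  # third-party: pip install websockets

API_KEY = "your-api-key"    # placeholder
VOICE_ID = "your-voice-id"  # placeholder

# auto_mode=true (assumed query parameter) lets the service decide when to start
# generating, instead of waiting for a chunk schedule to fill up.
URI = (
    f"wss://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}/stream-input"
    "?model_id=eleven_flash_v2_5&auto_mode=true"
)

async def speak(text_pieces):
    async with websockets.connect(URI) as ws:
        # First message opens the stream and carries the API key.
        await ws.send(json.dumps({"text": " ", "xi_api_key": API_KEY}))

        # Send text as it becomes available, e.g. tokens from an LLM.
        for piece in text_pieces:
            await ws.send(json.dumps({"text": piece}))

        # An empty string signals that no more text is coming.
        await ws.send(json.dumps({"text": ""}))

        audio = bytearray()
        async for message in ws:
            data = json.loads(message)
            if data.get("audio"):
                audio.extend(base64.b64decode(data["audio"]))
            if data.get("isFinal"):
                break
        return bytes(audio)

audio = asyncio.run(speak(["Hello ", "from a ", "real-time stream."]))
```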
For implementation details, see the text-to-speech websocket guide.
Consider geographic proximity
Because our models are served in the US, your geographic location will affect the network latency you experience.
For example, when using Flash models with Websockets, time to first byte (TTFB) increases the farther you are from our US-based servers.
We are actively working on deploying our models in the EU and Asia. These deployments will bring latencies closer to those experienced by US customers.
Choose appropriate voices
We have observed that in some cases, voice selection can impact latency. Here’s the order from fastest to slowest:
- Default voices (formerly premade), Synthetic voices, and Instant Voice Clones (IVC)
- Professional Voice Clones (PVC)
Higher audio quality output formats can increase latency. Be sure to balance your latency requirements with audio fidelity needs.
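If you need to trade a little fidelity for speed, the output format can be selected per request. The snippet below assumes the output_format query parameter and the mp3_22050_32 value from the API reference; both are worth confirming for your plan and region.

```python
import requests

API_KEY = "your-api-key"    # placeholder
VOICE_ID = "your-voice-id"  # placeholder

# Request a lower-bitrate format to shave encoding and transfer time.
response = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    params={"output_format": "mp3_22050_32"},  # assumed value; see the API reference
    headers={"xi-api-key": API_KEY, "Content-Type": "application/json"},
    json={"model_id": "eleven_flash_v2_5", "text": "Lower bitrate, lower latency."},
)
response.raise_for_status()
```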