
Our new, fastest model generates speech with ≈400 ms latency, over twice as fast as our V1 models. It does so without compromising on quality, which stays on par with Multilingual V2.
For VoIP use cases, we now also support μ-law 8 kHz output, which brings an even greater speed boost. See our API documentation to learn more.
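As a rough sketch of what a Turbo v2 request with μ-law 8 kHz output might look like, the snippet below builds the pieces of a streaming text-to-speech call. The endpoint path, `output_format` value, `model_id`, and header names are assumptions based on the public API at the time of writing; check the current API documentation before relying on them.

```python
import json

# Hypothetical endpoint path; verify against the current API docs.
API_URL = "https://api.elevenlabs.io/v1/text-to-speech/{voice_id}/stream"

def build_tts_request(voice_id: str, text: str, api_key: str):
    """Assemble URL, query params, headers, and body for a Turbo v2
    streaming request with 8 kHz mu-law output (VoIP-friendly).
    No network call is made here."""
    url = API_URL.format(voice_id=voice_id)
    # "ulaw_8000" is the assumed identifier for mu-law 8 kHz output.
    params = {"output_format": "ulaw_8000"}
    headers = {"xi-api-key": api_key, "Content-Type": "application/json"}
    body = json.dumps({"text": text, "model_id": "eleven_turbo_v2"})
    return url, params, headers, body

url, params, headers, body = build_tts_request("VOICE_ID", "Hello!", "API_KEY")
```

The returned pieces can then be passed to any HTTP client (e.g. `requests.post(url, params=params, headers=headers, data=body, stream=True)`) and the response audio fed directly into a VoIP pipeline.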
We're also working on adding multilingual support to Turbo v2.
If you need help with integration, or would like to discuss scaling and support, feel free to contact our sales team.
