
Integrating ElevenLabs Text to Speech cut setup time by 10x for developers building with voice
Stream has introduced Vision Agents - an open-source framework that enables developers to build low-latency, multimodal AI experiences combining real-time video, audio, and conversation. The framework integrates ElevenLabs Text to Speech to power expressive, responsive voices that enable seamless interaction between users and AI systems.

Vision Agents gives AI the ability to see, hear, and respond in real time. Built on Stream’s video and audio SDKs, the framework provides a low-latency foundation for developers to prototype and deploy multimodal agent experiences.
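For orientation, here is a minimal sketch of what assembling such an agent might look like in Python. The module paths, class names, and parameters below (`vision_agents`, `getstream.Edge`, `elevenlabs.TTS`, and so on) are illustrative assumptions modeled on the framework's plugin style, not a confirmed API; consult the Vision Agents repository for the actual interface.

```python
# Hypothetical sketch of wiring up a Vision Agents agent with an
# ElevenLabs voice. All names below are illustrative assumptions.
from vision_agents.core import Agent, User                       # assumed module layout
from vision_agents.plugins import getstream, openai, elevenlabs  # assumed plugin names

agent = Agent(
    edge=getstream.Edge(),            # Stream's low-latency video/audio edge
    agent_user=User(name="Demo AI"),
    instructions="Watch the video feed and answer questions about it.",
    llm=openai.LLM(),                 # any supported multimodal LLM backend
    tts=elevenlabs.TTS(),             # ElevenLabs as the agent's voice
)
```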
When evaluating Text to Speech providers, Stream selected ElevenLabs for its market-leading quality and ease of integration - ElevenLabs now serves as the primary voice option for Stream’s users.
“ElevenLabs made it easy for us to quickly bring powerful text-to-speech capabilities to our SDK, allowing Agents to respond in real time with expressive voices to user questions or as feedback to what it’s seeing.” - Neevash Ramdial, Director of Marketing, Stream
Stream integrated ElevenLabs across its codebase in just a few days, enabling developers to add lifelike voice output to their vision agents with minimal configuration.
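Under the hood, generating the speech itself takes only a few lines against the official `elevenlabs` Python SDK. A minimal sketch, assuming SDK v1+ and a placeholder API key and voice ID:

```python
from elevenlabs.client import ElevenLabs

client = ElevenLabs(api_key="YOUR_API_KEY")  # or set the ELEVENLABS_API_KEY env var

# convert() returns the synthesized audio as a stream of byte chunks.
audio = client.text_to_speech.convert(
    voice_id="YOUR_VOICE_ID",           # placeholder; any voice from your voice library
    model_id="eleven_multilingual_v2",  # a general-purpose multilingual model
    text="I can see a user waving at the camera.",
    output_format="mp3_44100_128",
)

with open("reply.mp3", "wb") as f:
    for chunk in audio:
        f.write(chunk)
```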
Stream’s Vision Agents demonstrate how ElevenLabs models are expanding what’s possible in multimodal AI. By combining visual understanding with Text to Speech, developers can create agents that not only see, but also speak and listen with near-human fluency.
Looking to build with Text to Speech? Get in touch here.
