Server-side streaming
Overview
The ElevenLabs Realtime Speech to Text API enables you to transcribe audio streams in real time with ultra-low latency using the Scribe v2 Realtime model. Whether you're building voice assistants, transcription services, or any application requiring live speech recognition, this WebSocket-based API delivers partial transcripts as you speak and committed transcripts when speech segments are complete.
Scribe v2 Realtime can be used on the server side to transcribe audio in real time, either from a URL, a file, or your own audio stream.
The server-side implementation differs from the client-side one in a few ways:
- Uses an ElevenLabs API key instead of a single-use token.
- Supports streaming from a URL directly, without the need to manually chunk the audio.
For streaming audio directly from the microphone, see the Client-side streaming guide.
Quickstart
Create an API key
Create an API key in the dashboard, which you'll use to securely access the API.
Store the key as a managed secret and pass it to the SDKs either as an environment variable via a .env file, or directly in your app's configuration, depending on your preference.
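As a minimal sketch, you can read the key from the environment in Python before opening a connection. This assumes the variable is named `ELEVENLABS_API_KEY`; check the SDK documentation for the exact name it expects.

```python
import os

# Assumes the key was exported as ELEVENLABS_API_KEY (for example from
# a .env file loaded by your process manager or python-dotenv).
api_key = os.environ.get("ELEVENLABS_API_KEY")
if not api_key:
    print("ELEVENLABS_API_KEY is not set; authentication will fail.")
```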
Configure the SDK
The SDK provides two ways to transcribe audio in real time: streaming from a URL, or manually chunking the audio from either a file or your own audio stream.
For a full list of parameters and options the API supports, please refer to the API reference.
Stream from URL
Manual audio chunking
This example shows how to stream an audio file from a URL using the official SDK.
The ffmpeg tool is required when streaming from a URL. Visit the ffmpeg website for installation instructions.
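Before running the example, you can confirm that ffmpeg is installed and on your PATH with a quick standard-library check:

```python
import shutil

def ffmpeg_available() -> bool:
    """Return True if the ffmpeg binary can be found on PATH."""
    return shutil.which("ffmpeg") is not None

if not ffmpeg_available():
    print("ffmpeg not found -- install it before streaming from a URL.")
```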
Create a new file named example.py or example.mts, depending on your language of choice, and add the following code:
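As a rough sketch of what the SDK does under the hood, the snippet below uses ffmpeg to decode a remote file to raw PCM and reads it in fixed-size chunks. The sample rate, chunk duration, and the `connection.send_audio` call in the usage comment are assumptions for illustration; consult the API reference for the actual connection method and supported audio formats.

```python
import subprocess
from typing import Iterator

SAMPLE_RATE = 16_000  # assumption: check the API reference for supported rates
CHUNK_MS = 250        # send roughly 250 ms of audio per message
BYTES_PER_CHUNK = SAMPLE_RATE * 2 * CHUNK_MS // 1000  # 16-bit mono PCM

def ffmpeg_command(url: str) -> list[str]:
    """Build an ffmpeg command that decodes `url` to raw PCM on stdout."""
    return [
        "ffmpeg", "-i", url,
        "-f", "s16le",            # signed 16-bit little-endian PCM
        "-ar", str(SAMPLE_RATE),  # resample to the target rate
        "-ac", "1",               # downmix to mono
        "-loglevel", "quiet",
        "pipe:1",                 # write decoded audio to stdout
    ]

def pcm_chunks(url: str) -> Iterator[bytes]:
    """Yield fixed-size PCM chunks decoded from the URL by ffmpeg."""
    proc = subprocess.Popen(ffmpeg_command(url), stdout=subprocess.PIPE)
    assert proc.stdout is not None
    while chunk := proc.stdout.read(BYTES_PER_CHUNK):
        yield chunk
    proc.wait()

# Usage (sketch): forward each chunk to the realtime connection opened
# with the official SDK -- see the API reference for the exact method.
# for chunk in pcm_chunks("https://example.com/audio.mp3"):
#     connection.send_audio(chunk)  # hypothetical method name
```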
Next steps
Learn how to handle transcripts and commit strategies in the Transcripts and commit strategies section, and review the list of events and error types that can be received from the Realtime Speech to Text API in the Event reference section.