Generate audio in real-time

Learn how to generate audio in real-time via a WebSocket connection.

WebSocket streaming is a method of sending and receiving data over a single, long-lived connection. This method is useful for real-time applications where you need to stream audio data as it becomes available.

If you want to quickly test the latency (time to first byte) of a WebSocket connection to the ElevenLabs text-to-speech API, you can install elevenlabs-latency via npm and follow its instructions.

WebSockets can be used with the Text to Speech and Conversational AI products. This guide will demonstrate how to use them with the Text to Speech API.

Requirements

  • An ElevenLabs account with an API key (here’s how to find your API key).
  • Python or Node.js (or another JavaScript runtime) installed on your machine

Setup

Install required dependencies:

pip install python-dotenv
pip install websockets

Next, create a .env file in your project directory and add your API key:

.env
ELEVENLABS_API_KEY=your_elevenlabs_api_key_here

Initiate the WebSocket connection

After choosing a voice from the Voice Library and the text to speech model you wish to use, initiate a WebSocket connection to the text to speech API.

import os
from dotenv import load_dotenv
import websockets

# Load the API key from the .env file
load_dotenv()
ELEVENLABS_API_KEY = os.getenv("ELEVENLABS_API_KEY")

voice_id = 'Xb7hH8MSUJpSbSDYk0k2'

# For use cases where latency is important, we recommend using the 'eleven_flash_v2_5' model.
model_id = 'eleven_flash_v2_5'

async def text_to_speech_ws_streaming(voice_id, model_id):
    uri = f"wss://api.elevenlabs.io/v1/text-to-speech/{voice_id}/stream-input?model_id={model_id}"

    async with websockets.connect(uri) as websocket:
        ...

Send the input text

Once the WebSocket connection is open, first send a message that configures the voice settings, then send the text to the API.

import json

async def text_to_speech_ws_streaming(voice_id, model_id):
    uri = f"wss://api.elevenlabs.io/v1/text-to-speech/{voice_id}/stream-input?model_id={model_id}"

    async with websockets.connect(uri) as websocket:
        # The first message configures the voice settings and generation config
        await websocket.send(json.dumps({
            "text": " ",
            "voice_settings": {"stability": 0.5, "similarity_boost": 0.8, "use_speaker_boost": False},
            "generation_config": {
                "chunk_length_schedule": [120, 160, 250, 290]
            },
            "xi_api_key": ELEVENLABS_API_KEY,
        }))

        text = "The twilight sun cast its warm golden hues upon the vast rolling fields, saturating the landscape with an ethereal glow. Silently, the meandering brook continued its ceaseless journey, whispering secrets only the trees seemed privy to."
        await websocket.send(json.dumps({"text": text}))

        # Send an empty string to indicate the end of the text sequence, which will close the WebSocket connection
        await websocket.send(json.dumps({"text": ""}))

Save the audio to file

Read the incoming message from the WebSocket connection and write the audio chunks to a local file.

import asyncio
import base64
import json
import os

async def write_to_local(audio_stream):
    """Write the audio encoded in base64 string to a local mp3 file."""

    # Create the output directory if it does not already exist
    os.makedirs('./output', exist_ok=True)

    with open('./output/test.mp3', "wb") as f:
        async for chunk in audio_stream:
            if chunk:
                f.write(chunk)

async def listen(websocket):
    """Listen to the websocket for audio data and stream it."""

    while True:
        try:
            message = await websocket.recv()
            data = json.loads(message)
            if data.get("audio"):
                yield base64.b64decode(data["audio"])
            elif data.get('isFinal'):
                break
        except websockets.exceptions.ConnectionClosed:
            print("Connection closed")
            break

async def text_to_speech_ws_streaming(voice_id, model_id):
    uri = f"wss://api.elevenlabs.io/v1/text-to-speech/{voice_id}/stream-input?model_id={model_id}"

    async with websockets.connect(uri) as websocket:
        ...
        # Add a listen task to submit the audio chunks to the write_to_local function
        listen_task = asyncio.create_task(write_to_local(listen(websocket)))

        await listen_task

asyncio.run(text_to_speech_ws_streaming(voice_id, model_id))

Run the script

You can run the script by executing the following command in your terminal. An mp3 audio file will be saved in the output directory.

python text-to-speech-websocket.py

Advanced configuration

WebSocket streaming supports several advanced settings that let you fine-tune real-time audio generation.

Buffering

When generating real-time audio, two important concepts should be taken into account: Time To First Byte (TTFB) and buffering. To produce high-quality audio and to infer the correct context, the model requires a certain minimum amount of input text. The more text sent over a WebSocket connection at once, the better the audio quality. If this threshold is not met, the model adds the incoming text to a buffer and generates audio once the buffer is full.

In terms of latency, TTFB is the time it takes for the first byte of audio to reach the client, and it determines how responsive the audio feels. You may therefore want to control the buffer size to balance quality against latency.

To manage this, you can use the chunk_length_schedule parameter when either initializing the WebSocket connection or when sending text. This parameter is an array of integers that represent the number of characters that will be sent to the model before generating audio. For example, if you set chunk_length_schedule to [120, 160, 250, 290], the model will generate audio after 120, 160, 250, and 290 characters have been sent, respectively.

Here’s an example of how this works with the default settings for chunk_length_schedule. Audio is only generated after the second message is sent to the server: the first message is below the threshold of 120 characters, while the second message brings the total number of characters above it. The third message is on its own above the next threshold of 160 characters, so audio for it is generated and returned to the client immediately.
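A minimal sketch of this sequence, assuming the default schedule of [120, 160, 250, 290]. The message contents here are hypothetical; only their approximate character counts matter.

# Hypothetical messages: only the character counts are significant
first = "A short opening phrase. "
second = "A follow-up sentence that is long enough to push the running total past the first threshold of 120 characters."
third = "A third message that is by itself longer than the next threshold of 160 characters, so the server does not wait for any further text before generating and returning audio for it."

await websocket.send(json.dumps({"text": first}))   # below 120 characters: buffered, no audio yet
await websocket.send(json.dumps({"text": second}))  # total now exceeds 120: audio is generated
await websocket.send(json.dumps({"text": third}))   # exceeds 160 on its own: audio is generated immediately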

You can specify a custom value for chunk_length_schedule when initializing the WebSocket connection or when sending text.

await websocket.send(json.dumps({
    "text": text,
    "generation_config": {
        # Generate audio after 50, 120, 160, and 290 characters have been sent
        "chunk_length_schedule": [50, 120, 160, 290]
    },
    "xi_api_key": ELEVENLABS_API_KEY,
}))

If you want to force the immediate return of the audio, you can use flush: true to clear out the buffer and force the generation of any buffered text. This can be useful, for example, when you have reached the end of a document and want to generate audio for the final section.

This can be specified on a per-message basis by setting flush: true in the message.

await websocket.send(json.dumps({"text": "Generate this audio immediately.", "flush": True}))

In addition, closing the WebSocket will automatically force the generation of any buffered text.

Voice settings

When initializing the WebSocket connection, you can specify the voice settings for all subsequent generations. This allows you to control the speed, stability, and other voice characteristics of the generated audio.

await websocket.send(json.dumps({
    "text": text,
    "voice_settings": {"stability": 0.5, "similarity_boost": 0.8, "use_speaker_boost": False},
}))

This can be overridden on a per-message basis by specifying a different voice_settings in the message.
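For example, a later message might temporarily raise stability for a passage that should sound more measured. This is a sketch; the values shown are illustrative.

await websocket.send(json.dumps({
    "text": "This sentence is generated with different voice settings.",
    # Overrides the connection-level settings for this message only
    "voice_settings": {"stability": 0.8, "similarity_boost": 0.5},
}))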

Pronunciation dictionaries

You can use pronunciation dictionaries to control the pronunciation of specific words or phrases. This can be useful for ensuring that certain words are pronounced correctly or for adding emphasis to certain words or phrases.

Unlike voice_settings and generation_config, pronunciation dictionaries must be specified in the “Initialize Connection” message. See the API Reference for more information.
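As a sketch, the dictionary locators can be included in the first message roughly as follows. The IDs below are placeholders for a dictionary you have already created; check the API Reference for the exact shape of this field.

await websocket.send(json.dumps({
    "text": " ",
    "voice_settings": {"stability": 0.5, "similarity_boost": 0.8},
    "xi_api_key": ELEVENLABS_API_KEY,
    # Placeholder IDs: replace with the IDs of your own pronunciation dictionary
    "pronunciation_dictionary_locators": [
        {"pronunciation_dictionary_id": "<dictionary_id>", "version_id": "<version_id>"}
    ],
}))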

Best practice

  • We suggest using the default setting for chunk_length_schedule in generation_config.
  • When developing a real-time conversational AI application, we advise using flush: true along with the text at the end of a conversation turn to ensure timely audio generation, as shown in the sketch after this list.
  • If the default setting doesn’t provide optimal latency for your use case, you can modify the chunk_length_schedule. However, be mindful that reducing latency through this adjustment may come at the expense of quality.
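For example, a conversational application might stream each turn of an agent's reply chunk by chunk and set flush on the final chunk. This is a minimal sketch; reply_chunks is a hypothetical list of text fragments for one turn.

# Hypothetical text fragments making up one conversation turn
reply_chunks = ["Sure, ", "I can help with that. ", "What would you like to do next?"]

for i, chunk in enumerate(reply_chunks):
    # Flush on the last chunk so any buffered text is generated without waiting for a threshold
    is_last = i == len(reply_chunks) - 1
    await websocket.send(json.dumps({"text": chunk, "flush": is_last}))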

Tips

  • The WebSocket connection will automatically close after 20 seconds of inactivity. To keep the connection open, you can send a single space character " ". Please note that this string must include a space, as sending a fully empty string, "", will close the WebSocket. A keep-alive sketch is shown after this list.
  • Send an empty string to close the WebSocket connection after sending the last text message.
  • You can use alignment to get the word-level timestamps for each word in the text. This can be useful for aligning the audio with the text in a video or for other applications that require precise timing. See the API Reference for more information.
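For instance, a background task can keep an idle connection open by periodically sending a single space. This is a minimal sketch, assuming a 15-second interval to stay comfortably under the 20-second timeout.

import asyncio
import json

async def keep_alive(websocket, interval: float = 15.0):
    """Send a single space every `interval` seconds to keep the connection open."""
    while True:
        await asyncio.sleep(interval)
        # A single space keeps the connection alive; an empty string "" would close it
        await websocket.send(json.dumps({"text": " "}))

You can run this alongside the listener with asyncio.create_task(keep_alive(websocket)) and cancel the task once the final message has been sent.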