Stitching multiple requests

Learn how to maintain voice prosody over multiple chunks/generations.

When converting a large body of text into audio, you may encounter abrupt changes in prosody from one chunk to the next. This is particularly noticeable when the text spans multiple paragraphs or sections. To maintain consistent prosody across chunks, you can use the Request Stitching feature.

This feature allows you to provide context on what has already been generated and what will be generated in the future, helping to maintain a consistent voice and prosody throughout the entire text.
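Concretely, each follow-up request passes the IDs of the requests that came before it via the previous_request_ids parameter, and those earlier generations are used as context. Here is a rough sketch of a second, conditioned request using the Python SDK; the API key and request ID below are placeholders, and the full walkthrough follows later on this page.

from elevenlabs.client import ElevenLabs

elevenlabs = ElevenLabs(api_key="...")

# Second chunk, conditioned on the ID returned for the first chunk.
# "previous-request-id" stands in for a real ID taken from the first response's headers.
with elevenlabs.text_to_speech.with_raw_response.convert(
    text="Second paragraph of the article.",
    voice_id="T7QGPtToiqH4S8VlIkMJ",
    model_id="eleven_multilingual_v2",
    previous_request_ids=["previous-request-id"],
) as response:
    audio = b"".join(response.data)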

Here’s an example without Request Stitching:

And the same example with Request Stitching:

How to use Request Stitching

Request Stitching is easiest when using the ElevenLabs SDKs.

1. Create an API key

Create an API key in your ElevenLabs dashboard, which you'll use to securely access the API.

Store the key as a managed secret and pass it to the SDK either as an environment variable via an .env file or directly in your app's configuration, depending on your preference (a sketch of the direct approach follows the .env example below).

.env
ELEVENLABS_API_KEY=<your_api_key_here>
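If you'd rather configure the key directly in your application instead of using a .env file, you can pass it straight to the client constructor. A minimal sketch; hard-coding a key like this is only sensible for quick local experiments:

from elevenlabs.client import ElevenLabs

# Pass the key directly instead of reading it from the environment.
elevenlabs = ElevenLabs(api_key="<your_api_key_here>")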

2. Install the SDK

We’ll also use the dotenv library to load our API key from an environment variable.

pip install elevenlabs
pip install python-dotenv
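Optionally, confirm both packages installed correctly before moving on; a quick check using only the standard library:

from importlib.metadata import version

# Print the installed versions of both packages.
print("elevenlabs:", version("elevenlabs"))
print("python-dotenv:", version("python-dotenv"))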

3. Stitch multiple requests together

Create a new file named example.py and add the following code:

import os
from io import BytesIO
from elevenlabs.client import ElevenLabs
from elevenlabs import play
from dotenv import load_dotenv

load_dotenv()

ELEVENLABS_API_KEY = os.getenv("ELEVENLABS_API_KEY")

elevenlabs = ElevenLabs(
    api_key=ELEVENLABS_API_KEY,
)

paragraphs = [
    "The advent of technology has transformed countless sectors, with education ",
    "standing out as one of the most significantly impacted fields.",
    "In recent years, educational technology, or EdTech, has revolutionized the way ",
    "teachers deliver instruction and students absorb information.",
    "From interactive whiteboards to individual tablets loaded with educational software, ",
    "technology has opened up new avenues for learning that were previously unimaginable.",
    "One of the primary benefits of technology in education is the accessibility it provides.",
]

request_ids = []
audio_buffers = []

for paragraph in paragraphs:
    # convert() normally returns an audio stream; with_raw_response is used here
    # so we can also read the headers from the response.
    with elevenlabs.text_to_speech.with_raw_response.convert(
        text=paragraph,
        voice_id="T7QGPtToiqH4S8VlIkMJ",
        model_id="eleven_multilingual_v2",
        previous_request_ids=request_ids,
    ) as response:
        # Remember this request's ID so later chunks can be conditioned on it.
        request_ids.append(response._response.headers.get("request-id"))

        # response._response.headers also contains useful information like 'character-cost',
        # which shows the cost of the generation in characters.

        # Read the full audio stream for this chunk and buffer it.
        audio_data = b''.join(chunk for chunk in response.data)
        audio_buffers.append(BytesIO(audio_data))

# Concatenate the buffered chunks into a single audio stream.
combined_stream = BytesIO(b''.join(buffer.getvalue() for buffer in audio_buffers))

play(combined_stream)
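If you want to keep the stitched audio rather than just play it, you can write the combined buffer to disk. A small follow-up sketch that reuses combined_stream from the example above, assuming the default MP3 output format:

# Save the stitched audio to a single MP3 file.
with open("stitched_output.mp3", "wb") as f:
    f.write(combined_stream.getvalue())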

4. Execute the code

python example.py

You should hear the combined stitched audio play.

FAQ

When can I use the request ID of a previous request for conditioning?
To condition on a previous request, that request needs to have processed completely. When streaming, this means the audio has to be read completely from the response body.
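In practice this just means draining the response before reusing its request ID, which the loop in the example above already does; a condensed sketch reusing the elevenlabs client from that example:

with elevenlabs.text_to_speech.with_raw_response.convert(
    text="...",
    voice_id="T7QGPtToiqH4S8VlIkMJ",
    model_id="eleven_multilingual_v2",
) as response:
    audio = b"".join(response.data)  # read the stream to the end
    request_id = response._response.headers.get("request-id")  # now safe to condition on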

How big is the difference between stitched and unstitched audio?
The difference depends on the model, voice, and voice settings used.

How old can the request IDs be?
The request IDs should be no older than two hours.
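For long-running pipelines it can help to track when each request ID was created and drop stale ones before the next call. A hypothetical helper; the two-hour limit comes from the answer above:

import time

MAX_AGE_SECONDS = 2 * 60 * 60  # request IDs older than ~2 hours should be discarded
tracked = []  # list of (request_id, created_at) tuples, appended after each request

def fresh_request_ids(tracked, now=None):
    # Return only the request IDs still recent enough to condition on.
    now = time.time() if now is None else now
    return [rid for rid, created_at in tracked if now - created_at < MAX_AGE_SECONDS]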

Is Request Stitching available to everyone?
Yes, unless you are an enterprise user with increased privacy requirements.