Multi-Context WebSocket
The Multi-Context Text-to-Speech WebSockets API allows for generating audio from text input while managing multiple independent audio generation streams (contexts) over a single WebSocket connection. This is useful for scenarios requiring concurrent or interleaved audio generations, such as dynamic conversational AI applications.
Each context, identified by a context ID, maintains its own state. You can send text to specific contexts, flush them, or close them independently. A close_socket message can be used to terminate the entire connection gracefully.
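For illustration, here is a minimal sketch of driving two contexts over one connection with Python's third-party websockets package. The endpoint path (multi-stream-input), the message fields (text, context_id, flush, close_context, close_socket), and the response field names ("audio", "contextId") follow the behavior described above but should be verified against the API reference; the voice ID and API key are placeholders.

```python
import asyncio
import json

import websockets  # third-party: pip install websockets

API_KEY = "<your-api-key>"   # placeholder
VOICE_ID = "<voice_id>"      # placeholder path parameter
URI = f"wss://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}/multi-stream-input"

async def main() -> None:
    # additional_headers requires websockets >= 14 (older versions use extra_headers)
    async with websockets.connect(URI, additional_headers={"xi-api-key": API_KEY}) as ws:
        # Send text to two independent contexts; each maintains its own state.
        await ws.send(json.dumps({"text": "Hello from context one. ", "context_id": "ctx_1"}))
        await ws.send(json.dumps({"text": "Hello from context two. ", "context_id": "ctx_2"}))

        # Flush ctx_1 so any buffered text is generated immediately.
        await ws.send(json.dumps({"context_id": "ctx_1", "flush": True}))

        # Close ctx_2 independently; ctx_1 and the connection stay open.
        await ws.send(json.dumps({"context_id": "ctx_2", "close_context": True}))

        # Terminate the entire connection gracefully.
        await ws.send(json.dumps({"close_socket": True}))

        # Drain messages until the server closes the socket. The response field
        # names ("audio", "contextId") are assumptions to check against the docs.
        async for message in ws:
            data = json.loads(message)
            print(data.get("contextId"), "audio chunk" if data.get("audio") else data)

asyncio.run(main())
```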
For best practices on using this API, see the multi-context WebSocket guide.
Handshake
Headers
xi-api-key: Your API key, used to authenticate the connection.
Path parameters
voice_id: The ID of the voice to use for audio generation.
Query parameters
language_code: The ISO 639-1 language code used to enforce a language (applies only to specific models).
inactivity_timeout: Timeout in seconds before an inactive context is closed; can be up to 180 seconds.
auto_mode: Reduces latency by disabling the chunk schedule and buffers. Recommended when sending full sentences or phrases.
apply_text_normalization: Controls text normalization with three modes: 'auto', 'on', and 'off'. When set to 'auto', the system automatically decides whether to apply text normalization (e.g., spelling out numbers). With 'on', text normalization is always applied; with 'off', it is skipped. Cannot be turned on for the 'eleven_turbo_v2_5' or 'eleven_flash_v2_5' models. Defaults to 'auto'.
seed: If specified, the system will make a best-effort attempt to sample deterministically. Must be an integer between 0 and 4294967295.
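Taken together, the path and query parameters compose into the handshake URL. Below is a minimal sketch, assuming the multi-stream-input endpoint path; the voice ID, model ID, and parameter values are illustrative.

```python
from urllib.parse import urlencode

VOICE_ID = "<voice_id>"  # placeholder path parameter

# Illustrative values; model_id is an assumption for this sketch.
params = {
    "model_id": "eleven_flash_v2_5",
    "language_code": "en",               # ISO 639-1
    "inactivity_timeout": 60,            # seconds, up to 180
    "auto_mode": "true",
    "apply_text_normalization": "auto",
    "seed": 12345,                       # integer in [0, 4294967295]
}

uri = (
    f"wss://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}/multi-stream-input"
    f"?{urlencode(params)}"
)
print(uri)
```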