WebSocket

Create real-time, interactive voice conversations with AI agents

This documentation is for developers integrating directly with the ElevenLabs WebSocket API. For convenience, consider using the official SDKs provided by ElevenLabs.

The ElevenLabs Conversational AI WebSocket API enables real-time, interactive voice conversations with AI agents. By establishing a WebSocket connection, you can send audio input and receive audio responses in real-time, creating life-like conversational experiences.

Endpoint: wss://api.elevenlabs.io/v1/convai/conversation?agent_id={agent_id}

Authentication

Using Agent ID

For public agents, you can directly use the agent_id in the WebSocket URL without additional authentication:

```
wss://api.elevenlabs.io/v1/convai/conversation?agent_id=<your-agent-id>
```

Using a signed URL

For private agents or conversations requiring authorization, obtain a signed URL from your server, which securely communicates with the ElevenLabs API using your API key.

Example using cURL

Request:

```shell
curl -X GET "https://api.elevenlabs.io/v1/convai/conversation/get_signed_url?agent_id=<your-agent-id>" \
  -H "xi-api-key: <your-api-key>"
```

Response:

```json
{
  "signed_url": "wss://api.elevenlabs.io/v1/convai/conversation?agent_id=<your-agent-id>&token=<token>"
}
```

Never expose your ElevenLabs API key on the client side.
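One way to keep the key server-side is a small API route that exchanges it for a signed URL. Below is a minimal sketch as a hypothetical Next.js route handler; the `app/api/signed-url/route.ts` path and the `ELEVENLABS_AGENT_ID` / `ELEVENLABS_API_KEY` environment variable names are assumptions for illustration, not ElevenLabs conventions:

```typescript
// app/api/signed-url/route.ts (hypothetical path)
const SIGNED_URL_ENDPOINT =
  "https://api.elevenlabs.io/v1/convai/conversation/get_signed_url";

// Build the signed-URL request for a given agent id.
export function buildSignedUrlRequest(agentId: string): string {
  return `${SIGNED_URL_ENDPOINT}?agent_id=${encodeURIComponent(agentId)}`;
}

// Server-side handler: the API key never leaves the server.
export async function GET(): Promise<Response> {
  const res = await fetch(buildSignedUrlRequest(process.env.ELEVENLABS_AGENT_ID!), {
    headers: { "xi-api-key": process.env.ELEVENLABS_API_KEY! },
  });
  const { signed_url } = await res.json();
  return Response.json({ signedUrl: signed_url });
}
```

The client then fetches `/api/signed-url` and opens the WebSocket against the returned `signedUrl`.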

WebSocket events

Client to server events

The following events can be sent from the client to the server:

contextual_update: Sends non-interrupting contextual information to update the conversation state. This allows you to provide additional context without disrupting the ongoing conversation flow.

```json
{
  "type": "contextual_update",
  "text": "User clicked on pricing page"
}
```

Use cases:

  • Updating user status or preferences
  • Providing environmental context
  • Adding background information
  • Tracking user interface interactions

Key points:

  • Does not interrupt current conversation flow
  • Updates are incorporated as tool calls in conversation history
  • Helps maintain context without breaking the natural dialogue

Contextual updates are processed asynchronously and do not require a direct response from the server.
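A contextual update can be sent as a plain JSON message over the open conversation socket. The sketch below uses illustrative helper names (`buildContextualUpdate`, `sendContextualUpdate`, `SocketLike`) that are not part of any ElevenLabs SDK:

```typescript
// Shape of the contextual_update client event.
type ContextualUpdate = {
  type: "contextual_update";
  text: string;
};

export function buildContextualUpdate(text: string): ContextualUpdate {
  return { type: "contextual_update", text };
}

// Structural type so this works with both a browser WebSocket and Node's `ws`.
interface SocketLike {
  readyState: number;
  send(data: string): void;
}

const OPEN = 1; // WebSocket.OPEN

// Fire-and-forget: the server does not send a direct response.
export function sendContextualUpdate(socket: SocketLike, text: string): void {
  if (socket.readyState !== OPEN) return; // drop silently when not connected
  socket.send(JSON.stringify(buildContextualUpdate(text)));
}

// Usage: sendContextualUpdate(ws, "User clicked on pricing page");
```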

Next.js implementation example

This example demonstrates how to implement a WebSocket-based conversational AI client in Next.js using the ElevenLabs WebSocket API.

While this example uses the voice-stream package for microphone input handling, you can implement your own solution for capturing and encoding audio. The focus here is on demonstrating the WebSocket connection and event handling with the ElevenLabs API.

Step 1: Install required dependencies

First, install the necessary packages:

```shell
npm install voice-stream
```

The voice-stream package handles microphone access and audio streaming, automatically encoding the audio in base64 format as required by the ElevenLabs API.

This example uses Tailwind CSS for styling. To add Tailwind to your Next.js project:

```shell
npm install -D tailwindcss postcss autoprefixer
npx tailwindcss init -p
```

Then follow the official Tailwind CSS setup guide for Next.js.

Alternatively, you can replace the className attributes with your own CSS styles.

Step 2: Create WebSocket types

Define the types for WebSocket events:

app/types/websocket.ts

```typescript
type BaseEvent = {
  type: string;
};

type UserTranscriptEvent = BaseEvent & {
  type: "user_transcript";
  user_transcription_event: {
    user_transcript: string;
  };
};

type AgentResponseEvent = BaseEvent & {
  type: "agent_response";
  agent_response_event: {
    agent_response: string;
  };
};

type AudioResponseEvent = BaseEvent & {
  type: "audio";
  audio_event: {
    audio_base_64: string;
    event_id: number;
  };
};

type InterruptionEvent = BaseEvent & {
  type: "interruption";
  interruption_event: {
    reason: string;
  };
};

type PingEvent = BaseEvent & {
  type: "ping";
  ping_event: {
    event_id: number;
    ping_ms?: number;
  };
};

export type ElevenLabsWebSocketEvent =
  | UserTranscriptEvent
  | AgentResponseEvent
  | AudioResponseEvent
  | InterruptionEvent
  | PingEvent;
```
Step 3: Create WebSocket hook

Create a custom hook to manage the WebSocket connection:

app/hooks/useAgentConversation.ts

```typescript
'use client';

import { useCallback, useEffect, useRef, useState } from 'react';
import { useVoiceStream } from 'voice-stream';
import type { ElevenLabsWebSocketEvent } from '../types/websocket';

const sendMessage = (websocket: WebSocket, request: object) => {
  if (websocket.readyState !== WebSocket.OPEN) {
    return;
  }
  websocket.send(JSON.stringify(request));
};

export const useAgentConversation = () => {
  const websocketRef = useRef<WebSocket | null>(null);
  const [isConnected, setIsConnected] = useState<boolean>(false);

  const { startStreaming, stopStreaming } = useVoiceStream({
    onAudioChunked: (audioData) => {
      if (!websocketRef.current) return;
      sendMessage(websocketRef.current, {
        user_audio_chunk: audioData,
      });
    },
  });

  const startConversation = useCallback(async () => {
    if (isConnected) return;

    // For public agents, pass the agent_id directly; for private agents,
    // use a signed URL obtained from your server instead.
    const websocket = new WebSocket(
      "wss://api.elevenlabs.io/v1/convai/conversation?agent_id=<your-agent-id>"
    );

    websocket.onopen = async () => {
      setIsConnected(true);
      sendMessage(websocket, {
        type: "conversation_initiation_client_data",
      });
      await startStreaming();
    };

    websocket.onmessage = async (event) => {
      const data = JSON.parse(event.data) as ElevenLabsWebSocketEvent;

      // Handle ping events to keep the connection alive
      if (data.type === "ping") {
        setTimeout(() => {
          sendMessage(websocket, {
            type: "pong",
            event_id: data.ping_event.event_id,
          });
        }, data.ping_event.ping_ms);
      }

      if (data.type === "user_transcript") {
        const { user_transcription_event } = data;
        console.log("User transcript", user_transcription_event.user_transcript);
      }

      if (data.type === "agent_response") {
        const { agent_response_event } = data;
        console.log("Agent response", agent_response_event.agent_response);
      }

      if (data.type === "interruption") {
        // Handle interruption
      }

      if (data.type === "audio") {
        const { audio_event } = data;
        // Implement your own audio playback system here
        // Note: You'll need to handle audio queuing to prevent overlapping
        // as the WebSocket sends audio events in chunks
      }
    };

    websocketRef.current = websocket;

    websocket.onclose = async () => {
      websocketRef.current = null;
      setIsConnected(false);
      stopStreaming();
    };
  }, [startStreaming, isConnected, stopStreaming]);

  const stopConversation = useCallback(async () => {
    if (!websocketRef.current) return;
    websocketRef.current.close();
  }, []);

  useEffect(() => {
    return () => {
      if (websocketRef.current) {
        websocketRef.current.close();
      }
    };
  }, []);

  return {
    startConversation,
    stopConversation,
    isConnected,
  };
};
```
Step 4: Create the conversation component

Create a component to use the WebSocket hook:

app/components/Conversation.tsx

```tsx
'use client';

import { useCallback } from 'react';
import { useAgentConversation } from '../hooks/useAgentConversation';

export function Conversation() {
  const { startConversation, stopConversation, isConnected } = useAgentConversation();

  const handleStart = useCallback(async () => {
    try {
      await navigator.mediaDevices.getUserMedia({ audio: true });
      await startConversation();
    } catch (error) {
      console.error('Failed to start conversation:', error);
    }
  }, [startConversation]);

  return (
    <div className="flex flex-col items-center gap-4">
      <div className="flex gap-2">
        <button
          onClick={handleStart}
          disabled={isConnected}
          className="px-4 py-2 bg-blue-500 text-white rounded disabled:bg-gray-300"
        >
          Start Conversation
        </button>
        <button
          onClick={stopConversation}
          disabled={!isConnected}
          className="px-4 py-2 bg-red-500 text-white rounded disabled:bg-gray-300"
        >
          Stop Conversation
        </button>
      </div>
      <div className="flex flex-col items-center">
        <p>Status: {isConnected ? 'Connected' : 'Disconnected'}</p>
      </div>
    </div>
  );
}
```

Next steps

  1. Audio Playback: Implement your own audio playback system using the Web Audio API or a library. Remember to handle audio queuing to prevent overlapping, as the WebSocket sends audio events in chunks.
  2. Error Handling: Add retry logic and error recovery mechanisms.
  3. UI Feedback: Add visual indicators for voice activity and connection status.
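The audio-playback step can be sketched with the Web Audio API. This assumes the agent's output is 16 kHz, 16-bit mono PCM (a common default — verify your agent's configured output format, as other formats need different decoding); `pcm16Base64ToFloat32` and `AudioQueue` are illustrative names, not part of any ElevenLabs SDK:

```typescript
// Decode a base64-encoded 16-bit little-endian PCM chunk into [-1, 1] floats.
export function pcm16Base64ToFloat32(b64: string): Float32Array {
  const bytes = Uint8Array.from(atob(b64), (c) => c.charCodeAt(0));
  const samples = new Int16Array(bytes.buffer, bytes.byteOffset, bytes.byteLength / 2);
  const out = new Float32Array(samples.length);
  for (let i = 0; i < samples.length; i++) out[i] = samples[i] / 32768;
  return out;
}

export class AudioQueue {
  private ctx: any = null; // browser AudioContext, created lazily on first chunk
  private nextStartTime = 0;

  enqueue(audioBase64: string): void {
    if (!this.ctx) this.ctx = new (globalThis as any).AudioContext({ sampleRate: 16000 });
    const pcm = pcm16Base64ToFloat32(audioBase64);
    const buffer = this.ctx.createBuffer(1, pcm.length, 16000);
    buffer.copyToChannel(pcm, 0);
    const source = this.ctx.createBufferSource();
    source.buffer = buffer;
    source.connect(this.ctx.destination);
    // Schedule each chunk to start exactly when the previous one ends,
    // so consecutive audio events never overlap.
    const startAt = Math.max(this.ctx.currentTime, this.nextStartTime);
    source.start(startAt);
    this.nextStartTime = startAt + buffer.duration;
  }
}

// Usage inside the hook's onmessage handler (audioQueue is an AudioQueue instance):
//   if (data.type === "audio") audioQueue.enqueue(data.audio_event.audio_base_64);
```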

Latency management

To ensure smooth conversations, implement these strategies:

  • Adaptive Buffering: Adjust audio buffering based on network conditions.
  • Jitter Buffer: Implement a jitter buffer to smooth out variations in packet arrival times.
  • Ping-Pong Monitoring: Use ping and pong events to measure round-trip time and adjust accordingly.
  • Optimized Chunking: Tune the audio chunk duration to balance latency and efficiency.

Security best practices

  • Rotate API keys regularly and use environment variables to store them.
  • Implement rate limiting to prevent abuse.
  • Clearly explain the intention when prompting users for microphone access.
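The jitter-buffer strategy can be sketched as a small estimator that sizes a playback buffer from observed chunk arrival times. `JitterEstimator` is illustrative (not part of any SDK), and the 1/16 smoothing gain follows RFC 3550's interarrival-jitter estimator:

```typescript
// Tracks how irregularly audio chunks arrive relative to their nominal spacing,
// and recommends a playback buffer large enough to absorb that variation.
export class JitterEstimator {
  private lastArrival: number | null = null;
  private jitter = 0; // smoothed absolute deviation, in ms

  // Feed each chunk's arrival time (ms) and the expected inter-chunk interval.
  update(arrivalMs: number, expectedIntervalMs: number): number {
    if (this.lastArrival !== null) {
      const delta = Math.abs(arrivalMs - this.lastArrival - expectedIntervalMs);
      this.jitter += (delta - this.jitter) / 16; // EWMA, gain 1/16 (RFC 3550)
    }
    this.lastArrival = arrivalMs;
    return this.jitter;
  }

  // Suggested buffer: a floor plus a safety margin of a few jitter intervals.
  recommendedBufferMs(minMs = 50): number {
    return Math.max(minMs, this.jitter * 4);
  }
}
```

In practice you would call `update` on every `audio` event and delay playback start by `recommendedBufferMs()` when the connection degrades.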

Additional resources