React SDK

ElevenAgents SDK: deploy customized, interactive voice agents in minutes.

Refer to the ElevenAgents overview for an explanation of how ElevenAgents works.

Installation

Install the package in your project with your package manager of choice:

npm install @elevenlabs/react
# or
yarn add @elevenlabs/react
# or
pnpm add @elevenlabs/react

Upgrading from an earlier version? Run npx skills add elevenlabs/packages to install the elevenlabs:sdk-migration skill for your AI coding agent, which automates import changes, ConversationProvider wrapping, and API updates.

@elevenlabs/react re-exports everything from @elevenlabs/client, so you don’t need to install both packages.

Usage

Here is a minimal working example that connects to an agent and lets the user start and end a voice conversation:

import {
  ConversationProvider,
  useConversationControls,
  useConversationStatus,
} from '@elevenlabs/react';

function App() {
  return (
    <ConversationProvider>
      <Agent />
    </ConversationProvider>
  );
}

function Agent() {
  const { startSession, endSession } = useConversationControls();
  const { status } = useConversationStatus();

  if (status === 'connected') {
    return <button onClick={endSession}>End</button>;
  }

  return (
    <button onClick={() => startSession({ agentId: 'agent_7101k5zvyjhmfg983brhmhkd98n6' })}>
      Start
    </button>
  );
}

The sections below explain each part in detail.

ConversationProvider

All conversation hooks must be used within a ConversationProvider. Wrap your app (or the relevant subtree) with this provider.

import { ConversationProvider } from '@elevenlabs/react';

function App() {
  return (
    <ConversationProvider>
      <YourComponents />
    </ConversationProvider>
  );
}

Provider props

The provider accepts the same options as useConversation — including callbacks, client tools, overrides, and server location — so you can configure them at the provider level rather than in each hook consumer.

<ConversationProvider
  onConnect={() => console.log('Connected')}
  onDisconnect={() => console.log('Disconnected')}
  onError={(error) => console.error('Error:', error)}
  clientTools={{
    displayMessage: (parameters: { text: string }) => {
      alert(parameters.text);
      return 'Message displayed';
    },
  }}
  serverLocation="eu-residency"
>
  <YourComponents />
</ConversationProvider>

Controlled mute state

The provider supports isMuted and onMutedChange props for controlled mute state management, allowing you to persist mute state externally (e.g. across sessions).

const [muted, setMuted] = useState(false);

<ConversationProvider isMuted={muted} onMutedChange={setMuted}>
  <YourComponents />
</ConversationProvider>;
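
The controlled pattern also makes persistence straightforward. As an illustrative sketch (the storage key and helper functions below are assumptions, not part of the SDK), the mute state can be mirrored to localStorage so it survives reloads:

```typescript
// Narrowed storage interface so the helpers are testable outside a browser;
// in the app, pass window.localStorage.
type MuteStorage = {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
};

// Read the persisted mute state ('muted' key is an arbitrary choice).
function readMuted(storage: MuteStorage): boolean {
  return storage.getItem('muted') === 'true';
}

// Persist the mute state as a string.
function writeMuted(storage: MuteStorage, muted: boolean): void {
  storage.setItem('muted', String(muted));
}

// In a component:
// const [muted, setMuted] = useState(() => readMuted(localStorage));
// <ConversationProvider
//   isMuted={muted}
//   onMutedChange={(m) => { writeMuted(localStorage, m); setMuted(m); }}
// >
```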

useConversation

A convenience React hook that combines all granular hooks into a single return value. Requires a ConversationProvider ancestor.

For better render performance, consider using the granular hooks instead. useConversation triggers a re-render on any state change, while the granular hooks only re-render when their specific slice of state changes.

Initialize conversation

import { useConversation } from '@elevenlabs/react';

function MyComponent() {
  const conversation = useConversation();
  // ...
}

Note that ElevenAgents requires microphone access for voice conversations. Consider requesting access in your app’s UI, after explaining why it is needed, before the conversation starts.

// call after explaining to the user why microphone access is needed
await navigator.mediaDevices.getUserMedia({ audio: true });

Options

The hook can be optionally initialized with options. These can also be passed at the ConversationProvider level.

const conversation = useConversation({
  /* options object */
});

Options include:

  • clientTools - object definition for client tools that can be invoked by the agent. See below for details.
  • overrides - object definition for conversation settings overrides. See below for details.
  • textOnly - whether the conversation should run in text-only mode. See below for details.
  • serverLocation - specify the server location ("us", "eu-residency", "in-residency", "global"). Defaults to "us".

Callbacks Overview

  • onConnect - handler called when the conversation connection is established.
  • onDisconnect - handler called when the conversation connection is ended.
  • onMessage - handler called when a new message is received. These can be tentative or final transcriptions of the user’s voice, replies produced by the LLM, or debug messages when the debug option is enabled.
  • onError - handler called when an error is encountered.
  • onAudio - handler called when audio data is received.
  • onModeChange - handler called when the conversation mode changes (speaking/listening).
  • onStatusChange - handler called when the connection status changes.
  • onCanSendFeedbackChange - handler called when the ability to send feedback changes.
  • onDebug - handler called when debug information is available.
  • onUnhandledClientToolCall - handler called when an unhandled client tool call is encountered.
  • onVadScore - handler called when voice activity detection score changes.
  • onAudioAlignment - handler called when audio alignment data is received, providing character-level timing information for agent speech.
  • onAgentChatResponsePart - handler called with streaming text chunks during text-only conversations. Provides start, delta, and stop events for real-time text streaming.

Client Tools

Client tools enable the agent to invoke client-side functionality. This can be used to trigger actions in the client, such as opening a modal or making an API call on behalf of the user.

The client tools definition is an object of functions, and it must match your configuration in the ElevenLabs UI, where you name and describe each tool and set up the parameters passed by the agent.

const conversation = useConversation({
  clientTools: {
    displayMessage: (parameters: { text: string }) => {
      alert(parameters.text);

      return 'Message displayed';
    },
  },
});

If the function returns a value, it is passed back to the agent as a response.

The tool must be explicitly set to block the conversation in the ElevenLabs UI for the agent to await and react to the response. Otherwise, the agent assumes success and continues the conversation.
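
As an illustrative sketch, a tool can also perform an async API call and return its result. The tool name (fetchOrderStatus) and the endpoint below are hypothetical; they must match a tool configured in the ElevenLabs UI, with blocking enabled if the agent should await the result:

```typescript
// Pure formatter for the tool response (kept separate so it is testable).
function formatOrderStatus(orderId: string, status: string): string {
  return `Order ${orderId} is ${status}`;
}

// Hypothetical tool handler: the name and endpoint are examples only.
const clientTools = {
  fetchOrderStatus: async (parameters: { orderId: string }) => {
    const res = await fetch(`/api/orders/${parameters.orderId}`);
    if (!res.ok) return 'Order lookup failed';
    const order: { status: string } = await res.json();
    // The returned string is sent back to the agent as the tool response.
    return formatOrderStatus(parameters.orderId, order.status);
  },
};

// Pass to the hook or provider:
// const conversation = useConversation({ clientTools });
```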

For a more React-idiomatic approach to registering client tools, see useConversationClientTool.

Conversation overrides

You may choose to override various settings of the conversation and set them dynamically based on other user interactions.

These settings are optional and can be used to customize the conversation experience. The following overrides are available:

const conversation = useConversation({
  overrides: {
    agent: {
      prompt: {
        prompt: 'My custom prompt',
      },
      firstMessage: 'My custom first message',
      language: 'en',
    },
    tts: {
      voiceId: 'custom voice id',
    },
    conversation: {
      textOnly: true,
    },
  },
});

Text only

If your agent is configured to run in text-only mode, i.e. it does not send or receive audio messages, you can use this flag to run a lighter version of the conversation. In that case, the user is not asked for microphone permissions and no audio context is created.

const conversation = useConversation({
  textOnly: true,
});

Controlled State

You can control certain aspects of the conversation state directly through the hook options:

const [micMuted, setMicMuted] = useState(false);

const conversation = useConversation({
  micMuted,
  // ... other options
});

// Update controlled state
setMicMuted(true); // This will automatically mute the microphone

Data residency

You can specify which ElevenLabs server region to connect to. For more information see the data residency guide.

const conversation = useConversation({
  serverLocation: 'eu-residency', // or "us", "in-residency", "global"
});

Methods

startSession

The startSession method establishes the connection and starts using the microphone to communicate with your agent. The method accepts an options object; one of signedUrl, conversationToken, or agentId is required.

The agent ID can be acquired through the ElevenLabs UI.

We also recommend passing in your own end-user IDs to map conversations to your users.

The connection type is automatically inferred based on the conversation mode. Voice conversations use WebRTC and text-only conversations use WebSocket by default. You can still explicitly specify connectionType if needed.
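
As a minimal sketch of forcing a transport explicitly (the SessionStarter type below only models the slice of the hook’s return value this sketch needs):

```typescript
// 'webrtc' and 'websocket' are the supported connection types.
type SessionStarter = {
  startSession: (opts: {
    agentId: string;
    connectionType: 'webrtc' | 'websocket';
  }) => Promise<string>;
};

// Start a voice session over WebSocket instead of the default WebRTC.
// `conversation` comes from useConversation().
async function startOverWebSocket(conversation: SessionStarter): Promise<string> {
  return conversation.startSession({
    agentId: 'agent_7101k5zvyjhmfg983brhmhkd98n6',
    connectionType: 'websocket',
  });
}
```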

const conversation = useConversation();

// For public agents, pass in the agent ID
const conversationId = await conversation.startSession({
  agentId: 'agent_7101k5zvyjhmfg983brhmhkd98n6',
  userId: 'user_9302xkm82nds93', // optional field
});

For public agents (i.e. agents that don’t have authentication enabled), only the agentId is required.

If the conversation requires authorization, use the REST API to generate a signed URL for a WebSocket connection or a conversation token for a WebRTC connection.

startSession returns a promise that resolves to a conversationId, a globally unique ID you can use to identify separate conversations.

// Node.js server

app.get("/signed-url", yourAuthMiddleware, async (req, res) => {
  const response = await fetch(
    `https://api.elevenlabs.io/v1/convai/conversation/get-signed-url?agent_id=${process.env.AGENT_ID}`,
    {
      headers: {
        // Requesting a signed url requires your ElevenLabs API key
        // Do NOT expose your API key to the client!
        "xi-api-key": process.env.ELEVENLABS_API_KEY,
      },
    }
  );

  if (!response.ok) {
    return res.status(500).send("Failed to get signed URL");
  }

  const body = await response.json();
  res.send(body.signed_url);
});
// Client

const response = await fetch("/signed-url", yourAuthHeaders);
const signedUrl = await response.text();

await conversation.startSession({
  signedUrl,
});

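
The WebRTC flow is analogous, except the server mints a conversation token instead of a signed URL. The sketch below is an assumption modeled on the signed-URL route; verify the token endpoint path and response shape against the current API reference:

```typescript
// Builds the token request URL (assumed endpoint; check the API reference).
function tokenUrl(agentId: string): string {
  return `https://api.elevenlabs.io/v1/convai/conversation/token?agent_id=${agentId}`;
}

// Server (Node.js), mirroring the signed-URL route above:
// app.get('/conversation-token', yourAuthMiddleware, async (req, res) => {
//   const response = await fetch(tokenUrl(process.env.AGENT_ID), {
//     headers: { 'xi-api-key': process.env.ELEVENLABS_API_KEY }, // never expose client-side
//   });
//   const body = await response.json();
//   res.send(body.token);
// });

// Client:
// const token = await (await fetch('/conversation-token', yourAuthHeaders)).text();
// await conversation.startSession({ conversationToken: token });
```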
endSession

A method to manually end the conversation. The method will disconnect and end the conversation.

await conversation.endSession();

setVolume

Sets the output volume of the conversation. Accepts an object with a volume field between 0 and 1.

await conversation.setVolume({ volume: 0.5 });

sendUserMessage

Sends a text message to the agent.

Can be used to let the user type in the message instead of using the microphone. Unlike sendContextualUpdate, this will be treated as a user message and will prompt the agent to take its turn in the conversation.

const { sendUserMessage, sendUserActivity } = useConversation();
const [value, setValue] = useState("");

return (
  <>
    <input
      value={value}
      onChange={e => {
        setValue(e.target.value);
        sendUserActivity();
      }}
    />
    <button
      onClick={() => {
        sendUserMessage(value);
        setValue("");
      }}
    >
      SEND
    </button>
  </>
);

sendContextualUpdate

Sends contextual information to the agent that won’t trigger a response.

const { sendContextualUpdate } = useConversation();

sendContextualUpdate(
  "User navigated to another page. Consider it for next response, but don't react to this contextual update."
);

sendFeedback

Provide feedback on the conversation quality. This helps improve the agent’s performance.

const { sendFeedback } = useConversation();

sendFeedback(true); // positive feedback
sendFeedback(false); // negative feedback

sendUserActivity

Notifies the agent about user activity to prevent interruptions. Useful when the user is actively using the app and the agent should pause speaking, e.g. while the user is typing in a chat.

The agent will pause speaking for ~2 seconds after receiving this signal.

const { sendUserActivity } = useConversation();

// Call this when the user is typing to prevent interruption
sendUserActivity();

changeInputDevice

Switch the audio input device during an active voice conversation. This method is only available for voice conversations.

// Change to a specific input device
conversation.changeInputDevice({
  sampleRate: 16000,
  format: 'pcm',
  preferHeadphonesForIosDevices: true,
  inputDeviceId: 'a1b2c3d4e5f6', // Optional: specific device ID
});

changeOutputDevice

Switch the audio output device during an active voice conversation. This method is only available for voice conversations.

// Change to a specific output device
conversation.changeOutputDevice({
  sampleRate: 16000,
  format: 'pcm',
  outputDeviceId: 'a1b2c3d4e5f6', // Optional: specific device ID
});

Device switching only works for voice conversations. If no specific deviceId is provided, the browser will use its default device selection. You can enumerate available devices using the MediaDevices.enumerateDevices() API.
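
As an illustrative sketch, devices can be enumerated and the first microphone selected. The helper and wiring below are assumptions, not SDK APIs; only changeInputDevice comes from the SDK:

```typescript
type DeviceInfo = { kind: string; deviceId: string; label: string };

// Pure helper: keep only audio input devices (microphones).
function audioInputs(devices: DeviceInfo[]): DeviceInfo[] {
  return devices.filter((d) => d.kind === 'audioinput');
}

// Switch the active conversation input to the first available microphone.
// `conversation` comes from useConversation().
async function switchToFirstMicrophone(conversation: {
  changeInputDevice: (opts: { sampleRate: number; format: string; inputDeviceId: string }) => void;
}): Promise<void> {
  // enumerateDevices() lists inputs and outputs alike.
  const devices = await navigator.mediaDevices.enumerateDevices();
  const mics = audioInputs(devices);
  if (mics.length === 0) return;
  conversation.changeInputDevice({
    sampleRate: 16000,
    format: 'pcm',
    inputDeviceId: mics[0].deviceId,
  });
}
```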

getId

Returns the current conversation ID.

const { getId } = useConversation();
const conversationId = getId();
console.log(conversationId); // e.g., "conv_9001k1zph3fkeh5s8xg9z90swaqa"

getInputVolume / getOutputVolume

Methods that return the current input/output volume levels (0-1 scale).

const { getInputVolume, getOutputVolume } = useConversation();
const inputLevel = getInputVolume();
const outputLevel = getOutputVolume();

getInputByteFrequencyData / getOutputByteFrequencyData

Methods that return Uint8Arrays containing the current input/output frequency data. See AnalyserNode.getByteFrequencyData for more information.

const { getInputByteFrequencyData, getOutputByteFrequencyData } = useConversation();
const inputFrequencyData = getInputByteFrequencyData();
const outputFrequencyData = getOutputByteFrequencyData();

These methods are only available for voice conversations. In WebRTC mode the audio is hardcoded to pcm_48000, so visualizations built from the returned data may show different patterns than over WebSocket connections.
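
As an illustrative sketch, the frequency data can drive a simple level meter. The averaging helper and animation loop below are assumptions, not SDK APIs; getOutputByteFrequencyData would come from the hook:

```typescript
// Pure helper: normalize the mean of the 0-255 frequency bins into 0-1.
function averageLevel(data: Uint8Array): number {
  if (data.length === 0) return 0;
  const sum = data.reduce((acc, v) => acc + v, 0);
  return sum / (data.length * 255);
}

// Drive a meter from requestAnimationFrame; returns a cleanup function
// suitable for a useEffect teardown.
function startMeter(
  getOutputByteFrequencyData: () => Uint8Array,
  onLevel: (level: number) => void
): () => void {
  let frame = 0;
  const tick = () => {
    onLevel(averageLevel(getOutputByteFrequencyData()));
    frame = requestAnimationFrame(tick);
  };
  frame = requestAnimationFrame(tick);
  return () => cancelAnimationFrame(frame);
}
```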

sendMCPToolApprovalResult

Sends approval result for MCP (Model Context Protocol) tool calls.

const { sendMCPToolApprovalResult } = useConversation();

// Approve a tool call
sendMCPToolApprovalResult('tc_8k2m4n6p8r0t', true);

// Reject a tool call
sendMCPToolApprovalResult('tc_8k2m4n6p8r0t', false);

Return values

In addition to the methods above, useConversation returns the following reactive state:

  • status - the current connection status ("disconnected", "connecting", "connected").
  • isSpeaking - whether the agent is currently speaking.
  • isListening - whether the agent is currently listening.
  • mode - the current conversation mode ("speaking" or "listening").
  • isMuted - whether the microphone is currently muted.
  • setMuted - function to mute/unmute the microphone.
  • canSendFeedback - whether feedback can be submitted for the current conversation.
  • message - the latest message from the conversation.

const { status, isSpeaking, isListening, isMuted, setMuted, canSendFeedback } = useConversation();

return (
  <div>
    <p>Status: {status}</p>
    <p>Agent is {isSpeaking ? 'speaking' : 'listening'}</p>
    <button onClick={() => setMuted(!isMuted)}>
      {isMuted ? 'Unmute' : 'Mute'}
    </button>
  </div>
);

Granular Hooks

For better render performance, use these hooks instead of useConversation. Each hook subscribes to only its specific slice of state, so components only re-render when the data they consume changes.

All granular hooks require a ConversationProvider ancestor.

useConversationControls

Returns action methods for controlling the conversation. This hook does not cause re-renders since it only provides stable function references.

import { useConversationControls } from '@elevenlabs/react';

function Controls() {
  const {
    startSession,
    endSession,
    sendUserMessage,
    sendContextualUpdate,
    sendUserActivity,
    setVolume,
    changeInputDevice,
    changeOutputDevice,
    sendMCPToolApprovalResult,
    getId,
    getInputVolume,
    getOutputVolume,
    getInputByteFrequencyData,
    getOutputByteFrequencyData,
  } = useConversationControls();

  return (
    <button onClick={() => startSession({ agentId: 'agent_7101k5zvyjhmfg983brhmhkd98n6' })}>
      Start
    </button>
  );
}

useConversationStatus

Returns the current connection status and optional status message.

import { useConversationStatus } from '@elevenlabs/react';

function StatusIndicator() {
  const { status, message } = useConversationStatus();

  return <p>Status: {status}</p>; // "disconnected" | "connecting" | "connected"
}

useConversationInput

Returns mute state and a setter for toggling the microphone.

import { useConversationInput } from '@elevenlabs/react';

function MuteToggle() {
  const { isMuted, setMuted } = useConversationInput();

  return <button onClick={() => setMuted(!isMuted)}>{isMuted ? 'Unmute' : 'Mute'}</button>;
}

useConversationMode

Returns speaking/listening state for the agent.

import { useConversationMode } from '@elevenlabs/react';

function ModeIndicator() {
  const { mode, isSpeaking, isListening } = useConversationMode();

  return <p>Agent is {isSpeaking ? 'speaking' : 'listening'}</p>;
}

useConversationFeedback

Returns feedback availability and a method to submit feedback.

import { useConversationFeedback } from '@elevenlabs/react';

function FeedbackButtons() {
  const { canSendFeedback, sendFeedback } = useConversationFeedback();

  if (!canSendFeedback) return null;

  return (
    <div>
      <button onClick={() => sendFeedback(true)}>Like</button>
      <button onClick={() => sendFeedback(false)}>Dislike</button>
    </div>
  );
}

useRawConversation

Returns the raw conversation instance. This is an escape hatch for advanced use cases where you need direct access to the underlying VoiceConversation or TextConversation object.

import { useRawConversation } from '@elevenlabs/react';

function Advanced() {
  const conversation = useRawConversation();
  // Access the raw conversation instance directly
}

useConversationClientTool

A hook for dynamically registering client tools from React components. Tools are automatically unregistered when the component unmounts.

This is useful when a tool’s handler needs access to component state or props that aren’t available at the provider level.

import { useConversationClientTool } from '@elevenlabs/react';
import { useState } from 'react';

function MapComponent() {
  const [location, setLocation] = useState({ lat: 0, lng: 0 });

  useConversationClientTool('getLocation', () => {
    return `${location.lat},${location.lng}`;
  });

  useConversationClientTool('setLocation', (params: { lat: number; lng: number }) => {
    setLocation(params);
    return 'Location updated';
  });

  return <Map center={location} />;
}

The hook always uses the latest closure value of the handler, so you don’t need to worry about stale state.