React Native SDK

Conversational AI SDK: deploy customized, interactive voice agents in minutes for React Native apps.

Refer to the Conversational AI overview for an explanation of how Conversational AI works.

Installation

Install the package and its dependencies in your React Native project.

```shell
npm install @elevenlabs/react-native @livekit/react-native @livekit/react-native-webrtc livekit-client
```

An example app using this SDK with Expo can be found here.

Requirements

  • React Native with LiveKit dependencies
  • Microphone permissions configured for your platform
  • Expo compatibility (development builds only)

This SDK was designed and built for use with the Expo framework. Due to its dependency on LiveKit’s WebRTC implementation, it requires development builds and cannot be used with Expo Go.
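
If you are working from a managed Expo project, a development build can typically be produced with the standard Expo tooling, for example:

```shell
# Generate the native iOS/Android projects and build a development client
npx expo prebuild
npx expo run:ios      # or: npx expo run:android
```

See the Expo documentation on development builds for the full workflow for your setup.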

Setup

Provider Setup

Wrap your app with the ElevenLabsProvider to enable conversational AI functionality.

```tsx
import { ElevenLabsProvider } from '@elevenlabs/react-native';
import React from 'react';

function App() {
  return (
    <ElevenLabsProvider>
      <YourAppComponents />
    </ElevenLabsProvider>
  );
}
```

Usage

useConversation

A React Native hook for managing connection and audio usage for ElevenLabs Conversational AI.

Initialize conversation

First, initialize the Conversation instance within a component that’s wrapped by ElevenLabsProvider.

```tsx
import { useConversation } from '@elevenlabs/react-native';
import React from 'react';

function ConversationComponent() {
  const conversation = useConversation();

  // Your component logic here
}
```

Note that Conversational AI requires microphone access. Consider explaining and requesting permissions in your app’s UI before the Conversation starts, especially on mobile platforms where permission management is crucial.

Options

The Conversation can be initialized with certain options:

```typescript
const conversation = useConversation({
  onConnect: () => console.log('Connected to conversation'),
  onDisconnect: () => console.log('Disconnected from conversation'),
  onMessage: (message) => console.log('Received message:', message),
  onError: (error) => console.error('Conversation error:', error),
  onModeChange: (mode) => console.log('Conversation mode changed:', mode),
  onStatusChange: (prop) => console.log('Conversation status changed:', prop.status),
  onCanSendFeedbackChange: (prop) =>
    console.log('Can send feedback changed:', prop.canSendFeedback),
  onUnhandledClientToolCall: (params) => console.log('Unhandled client tool call:', params),
});
```
  • onConnect - Handler called when the conversation WebRTC connection is established.
  • onDisconnect - Handler called when the conversation WebRTC connection is ended.
  • onMessage - Handler called when a new message is received. These can be tentative or final transcriptions of the user’s voice, replies produced by the LLM, or debug messages.
  • onError - Handler called when an error is encountered.
  • onModeChange - Handler called when the conversation mode changes. This is useful for indicating whether the agent is speaking or listening.
  • onStatusChange - Handler called when the conversation status changes.
  • onCanSendFeedbackChange - Handler called when the ability to send feedback changes.
  • onUnhandledClientToolCall - Handler called when an unhandled client tool call is encountered.
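
For instance, a transcript view can be driven from onMessage. The message shape below (text plus a source of 'user' or 'ai') is an assumption for illustration; check the types exported by the SDK before relying on it:

```typescript
// Hypothetical message shape -- verify against the SDK's exported types.
type ConversationMessage = {
  message: string;
  source: 'user' | 'ai';
};

// Format a message for display in a transcript list.
function formatTranscriptLine(msg: ConversationMessage): string {
  const speaker = msg.source === 'ai' ? 'Agent' : 'You';
  return `${speaker}: ${msg.message}`;
}

// Usage inside the hook options:
// onMessage: (message) => setTranscript((t) => [...t, formatTranscriptLine(message)]),
```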

Methods

startSession

The startSession method kicks off the WebRTC connection and starts using the microphone to communicate with the ElevenLabs Conversational AI agent. The method accepts a configuration object with the agentId being conditionally required based on whether the agent is public or private.

Public agents

For public agents (i.e. agents that don’t have authentication enabled), only the agentId is required. The Agent ID can be acquired through the ElevenLabs UI.

```typescript
const conversation = useConversation();

// For public agents, pass in the agent ID
const startConversation = async () => {
  await conversation.startSession({
    agentId: 'your-agent-id',
  });
};
```

Private agents

For private agents, you must pass in a conversationToken obtained from the ElevenLabs API. Generating this token requires an ElevenLabs API key.

The conversationToken is valid for 10 minutes.

```typescript
// Node.js server

app.get("/conversation-token", yourAuthMiddleware, async (req, res) => {
  const response = await fetch(
    `https://api.elevenlabs.io/v1/convai/conversation/token?agent_id=${process.env.AGENT_ID}`,
    {
      headers: {
        // Requesting a conversation token requires your ElevenLabs API key
        // Do NOT expose your API key to the client!
        'xi-api-key': process.env.ELEVENLABS_API_KEY,
      },
    }
  );

  if (!response.ok) {
    return res.status(500).send("Failed to get conversation token");
  }

  const body = await response.json();
  res.send(body.token);
});
```

Then, pass the token to the startSession method. Note that only the conversationToken is required for private agents.

```typescript
const conversation = useConversation();

const response = await fetch('/conversation-token', yourAuthHeaders);
const conversationToken = await response.text();

// For private agents, pass in the conversation token
const startConversation = async () => {
  await conversation.startSession({
    conversationToken,
  });
};
```
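
Because a conversationToken expires after 10 minutes, one option is to cache it briefly on the client and refetch shortly before expiry. The sketch below is not part of the SDK; the one-minute safety margin and the injected clock are arbitrary choices for illustration and testability:

```typescript
// Hypothetical client-side token cache. The 10-minute lifetime comes from the
// docs above; the 1-minute safety margin is an arbitrary choice.
const TOKEN_LIFETIME_MS = 10 * 60 * 1000;
const SAFETY_MARGIN_MS = 60 * 1000;

class TokenCache {
  private token: string | null = null;
  private fetchedAt = 0;

  constructor(
    private fetchToken: () => Promise<string>,
    private now: () => number = Date.now
  ) {}

  // Return the cached token, refetching when missing or close to expiry.
  async get(): Promise<string> {
    const age = this.now() - this.fetchedAt;
    if (this.token === null || age > TOKEN_LIFETIME_MS - SAFETY_MARGIN_MS) {
      this.token = await this.fetchToken();
      this.fetchedAt = this.now();
    }
    return this.token;
  }
}

// Usage (endpoint from the server example above):
// const cache = new TokenCache(() => fetch('/conversation-token', yourAuthHeaders).then((r) => r.text()));
// await conversation.startSession({ conversationToken: await cache.get() });
```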

You can optionally pass a user ID to identify the user in the conversation, such as your own customer identifier; it will be included in the conversation initiation data sent to the server.

```typescript
const startConversation = async () => {
  await conversation.startSession({
    agentId: 'your-agent-id',
    userId: 'your-user-id',
  });
};
```

endSession

A method to manually end the conversation. It disconnects from the agent and ends the session.

```typescript
const endConversation = async () => {
  await conversation.endSession();
};
```

sendUserMessage

Send a text message to the agent during an active conversation.

```typescript
const sendMessage = async () => {
  await conversation.sendUserMessage('Hello, how can you help me?');
};
```

sendContextualUpdate

Sends contextual information to the agent that won’t trigger a response.

```typescript
const sendContextualUpdate = async () => {
  await conversation.sendContextualUpdate(
    'User navigated to the profile page. Consider this for next response.'
  );
};
```

sendFeedback

Provide feedback on the conversation quality. This helps improve the agent’s performance.

```typescript
const provideFeedback = async (liked: boolean) => {
  await conversation.sendFeedback(liked);
};
```

sendUserActivity

Notifies the agent about user activity to prevent interruptions. Useful when the user is actively using the app and the agent should pause speaking, e.g. while the user is typing in a chat.

The agent will pause speaking for ~2 seconds after receiving this signal.

```typescript
const signalActivity = async () => {
  await conversation.sendUserActivity();
};
```
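
Since each signal pauses the agent for roughly two seconds, it is enough to send it about once per second while the user types rather than on every keystroke. The throttle below is a sketch, not part of the SDK; the one-second interval and the injected clock are illustrative choices:

```typescript
// Throttle a callback so it fires at most once per interval.
// Calls inside the window are dropped, which is fine here because
// typing produces a steady stream of events.
function throttle(fn: () => void, intervalMs: number, now: () => number = Date.now) {
  let last = -Infinity;
  return () => {
    if (now() - last >= intervalMs) {
      last = now();
      fn();
    }
  };
}

// Usage: call on every TextInput change; actually signals at most once per second.
// const onTyping = throttle(() => conversation.sendUserActivity(), 1000);
// <TextInput onChangeText={(text) => { onTyping(); setText(text); }} />
```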

Properties

status

A React state containing the current status of the conversation.

```typescript
const { status } = useConversation();
console.log(status); // "connected" or "disconnected"
```

isSpeaking

A React state containing information on whether the agent is currently speaking. This is useful for indicating agent status in your UI.

```typescript
const { isSpeaking } = useConversation();
console.log(isSpeaking); // boolean
```

canSendFeedback

A React state indicating whether feedback can be submitted for the current conversation.

```tsx
const { canSendFeedback } = useConversation();

// Use this to conditionally show feedback UI
{
  canSendFeedback && (
    <FeedbackButtons
      onLike={() => conversation.sendFeedback(true)}
      onDislike={() => conversation.sendFeedback(false)}
    />
  );
}
```

Example Implementation

Here’s a complete example of a React Native component using the ElevenLabs Conversational AI SDK:

```tsx
import { ElevenLabsProvider, useConversation } from '@elevenlabs/react-native';
import React, { useState } from 'react';
import { View, Text, TouchableOpacity, StyleSheet } from 'react-native';

function ConversationScreen() {
  const [isConnected, setIsConnected] = useState(false);

  const conversation = useConversation({
    onConnect: () => {
      console.log('Connected to conversation');
      setIsConnected(true);
    },
    onDisconnect: () => {
      console.log('Disconnected from conversation');
      setIsConnected(false);
    },
    onMessage: (message) => {
      console.log('Message received:', message);
    },
    onError: (error) => {
      console.error('Conversation error:', error);
    },
  });

  const startConversation = async () => {
    try {
      await conversation.startSession({
        agentId: 'your-agent-id',
      });
    } catch (error) {
      console.error('Failed to start conversation:', error);
    }
  };

  const endConversation = async () => {
    try {
      await conversation.endSession();
    } catch (error) {
      console.error('Failed to end conversation:', error);
    }
  };

  return (
    <View style={styles.container}>
      <Text style={styles.status}>Status: {conversation.status}</Text>

      <Text style={styles.speaking}>
        Agent is {conversation.isSpeaking ? 'speaking' : 'not speaking'}
      </Text>

      <TouchableOpacity
        style={[styles.button, isConnected && styles.buttonActive]}
        onPress={isConnected ? endConversation : startConversation}
      >
        <Text style={styles.buttonText}>
          {isConnected ? 'End Conversation' : 'Start Conversation'}
        </Text>
      </TouchableOpacity>

      {conversation.canSendFeedback && (
        <View style={styles.feedbackContainer}>
          <TouchableOpacity
            style={styles.feedbackButton}
            onPress={() => conversation.sendFeedback(true)}
          >
            <Text>👍</Text>
          </TouchableOpacity>
          <TouchableOpacity
            style={styles.feedbackButton}
            onPress={() => conversation.sendFeedback(false)}
          >
            <Text>👎</Text>
          </TouchableOpacity>
        </View>
      )}
    </View>
  );
}

function App() {
  return (
    <ElevenLabsProvider>
      <ConversationScreen />
    </ElevenLabsProvider>
  );
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    justifyContent: 'center',
    alignItems: 'center',
    padding: 20,
  },
  status: {
    fontSize: 16,
    marginBottom: 10,
  },
  speaking: {
    fontSize: 14,
    marginBottom: 20,
    color: '#666',
  },
  button: {
    backgroundColor: '#007AFF',
    paddingHorizontal: 20,
    paddingVertical: 10,
    borderRadius: 8,
    marginBottom: 20,
  },
  buttonActive: {
    backgroundColor: '#FF3B30',
  },
  buttonText: {
    color: 'white',
    fontSize: 16,
    fontWeight: '600',
  },
  feedbackContainer: {
    flexDirection: 'row',
    gap: 10,
  },
  feedbackButton: {
    backgroundColor: '#F2F2F7',
    padding: 10,
    borderRadius: 8,
  },
});

export default App;
```

Platform-Specific Considerations

iOS

Ensure microphone permissions are properly configured in your Info.plist:

```xml
<key>NSMicrophoneUsageDescription</key>
<string>This app needs microphone access to enable voice conversations with AI agents.</string>
```

Android

Add microphone permissions to your android/app/src/main/AndroidManifest.xml:

```xml
<uses-permission android:name="android.permission.RECORD_AUDIO" />
```

Consider requesting runtime permissions before starting a conversation:

```typescript
import { PermissionsAndroid, Platform } from 'react-native';

const requestMicrophonePermission = async () => {
  if (Platform.OS === 'android') {
    const granted = await PermissionsAndroid.request(PermissionsAndroid.PERMISSIONS.RECORD_AUDIO, {
      title: 'Microphone Permission',
      message: 'This app needs microphone access to enable voice conversations.',
      buttonNeutral: 'Ask Me Later',
      buttonNegative: 'Cancel',
      buttonPositive: 'OK',
    });
    return granted === PermissionsAndroid.RESULTS.GRANTED;
  }
  return true;
};
```