Cross-platform Voice Agents with Expo React Native

Build conversational AI agents that work across iOS, Android, and web using Expo React Native and the ElevenLabs Conversational AI SDK.

Introduction

In this tutorial you will learn how to build a voice agent that works across iOS, Android, and web using Expo React Native and the ElevenLabs Conversational AI SDK.

Prefer to jump straight to the code?

Find the example project on GitHub.

Requirements

  • An ElevenLabs account with an API key.
  • Node.js v18 or higher installed on your machine.

Setup

Create a new Expo project

Using create-expo-app, create a new blank Expo project:

$npx create-expo-app@latest --template blank-typescript
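
Once create-expo-app has finished (it will prompt you for a project name), change into the newly created directory, using whatever name you chose:

$cd your-project-name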

Enable microphone permissions

In the app.json file, add the following permissions:

app.json
{
  "expo": {
    "scheme": "elevenlabs",
    // ...
    "ios": {
      "infoPlist": {
        "NSMicrophoneUsageDescription": "This app uses the microphone to record audio."
      },
      "supportsTablet": true,
      "bundleIdentifier": "com.anonymous.elevenlabs-conversational-ai-expo-react-native"
    }
    // ...
  }
}

This will allow the React Native web view to prompt for microphone permissions when the conversation is started.
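
The snippet above covers iOS. On Android, microphone access maps to the RECORD_AUDIO permission; if the webview does not prompt automatically on your device, you can declare the permission explicitly. This is an optional addition, not part of the original example:

app.json

{
  "expo": {
    "android": {
      "permissions": ["android.permission.RECORD_AUDIO", "android.permission.MODIFY_AUDIO_SETTINGS"]
    }
  }
}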

Install dependencies

This approach relies on Expo DOM components to make the conversational AI agent work across platforms. There are a couple of dependencies you need to install to make this work.

$npx expo install @11labs/react
>npx expo install expo-dev-client # tunnel support
>npx expo install react-native-webview # DOM components support
>npx expo install react-dom react-native-web @expo/metro-runtime # RN web support
># Cool client tools
>npx expo install expo-battery
>npx expo install expo-brightness
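
The components below also use lucide-react-native for the microphone icon and expo-linear-gradient for the background, so if you are following along verbatim, install those as well (react-native-svg is lucide-react-native's peer dependency):

$npx expo install lucide-react-native react-native-svg expo-linear-gradient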

Expo DOM components

Expo offers a novel approach to working with modern web code directly in a native app via the 'use dom' directive. This means you can use our Conversational AI React SDK across all platforms with the same code.

Under the hood, Expo uses react-native-webview to render the web code in a native component. To allow the webview to access the microphone, make sure to start the Expo development server locally with npx expo start --tunnel so that the webview content is served over https.
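
To get a feel for the directive, here is a minimal, self-contained DOM component. This is an illustrative sketch, not part of the example project; the optional dom prop is where Expo accepts webview options:

'use dom';

// A minimal DOM component: the 'use dom' directive marks this file as web
// code. On iOS and Android it renders inside a react-native-webview; on web
// it renders directly as React DOM.
export default function HelloDOM({ name }: { name: string; dom?: import('expo/dom').DOMProps }) {
  return <p>Hello from the DOM, {name}!</p>;
}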

Create the conversational AI DOM component

Create a new file in the components folder: ./components/ConvAI.tsx and add the following code:

./components/ConvAI.tsx

'use dom';

import { useConversation } from '@11labs/react';
import { Mic } from 'lucide-react-native';
import { useCallback } from 'react';
import { View, Pressable, StyleSheet } from 'react-native';

import tools from '../utils/tools';

async function requestMicrophonePermission() {
  try {
    await navigator.mediaDevices.getUserMedia({ audio: true });
    return true;
  } catch (error) {
    console.log(error);
    console.error('Microphone permission denied');
    return false;
  }
}

export default function ConvAiDOMComponent({
  platform,
  get_battery_level,
  change_brightness,
  flash_screen,
}: {
  dom?: import('expo/dom').DOMProps;
  platform: string;
  get_battery_level: typeof tools.get_battery_level;
  change_brightness: typeof tools.change_brightness;
  flash_screen: typeof tools.flash_screen;
}) {
  const conversation = useConversation({
    onConnect: () => console.log('Connected'),
    onDisconnect: () => console.log('Disconnected'),
    onMessage: (message) => {
      console.log(message);
    },
    onError: (error) => console.error('Error:', error),
  });
  const startConversation = useCallback(async () => {
    try {
      // Request microphone permission
      const hasPermission = await requestMicrophonePermission();
      if (!hasPermission) {
        alert('No permission');
        return;
      }

      // Start the conversation with your agent
      await conversation.startSession({
        agentId: 'YOUR_AGENT_ID', // Replace with your agent ID
        dynamicVariables: {
          platform,
        },
        clientTools: {
          get_battery_level,
          change_brightness,
          flash_screen,
        },
      });
    } catch (error) {
      console.error('Failed to start conversation:', error);
    }
  }, [conversation]);

  const stopConversation = useCallback(async () => {
    await conversation.endSession();
  }, [conversation]);

  return (
    <Pressable
      style={[styles.callButton, conversation.status === 'connected' && styles.callButtonActive]}
      onPress={conversation.status === 'disconnected' ? startConversation : stopConversation}
    >
      <View
        style={[
          styles.buttonInner,
          conversation.status === 'connected' && styles.buttonInnerActive,
        ]}
      >
        <Mic size={32} color="#E2E8F0" strokeWidth={1.5} style={styles.buttonIcon} />
      </View>
    </Pressable>
  );
}

const styles = StyleSheet.create({
  callButton: {
    width: 120,
    height: 120,
    borderRadius: 60,
    backgroundColor: 'rgba(255, 255, 255, 0.1)',
    alignItems: 'center',
    justifyContent: 'center',
    marginBottom: 24,
  },
  callButtonActive: {
    backgroundColor: 'rgba(239, 68, 68, 0.2)',
  },
  buttonInner: {
    width: 80,
    height: 80,
    borderRadius: 40,
    backgroundColor: '#3B82F6',
    alignItems: 'center',
    justifyContent: 'center',
    shadowColor: '#3B82F6',
    shadowOffset: {
      width: 0,
      height: 0,
    },
    shadowOpacity: 0.5,
    shadowRadius: 20,
    elevation: 5,
  },
  buttonInnerActive: {
    backgroundColor: '#EF4444',
    shadowColor: '#EF4444',
  },
  buttonIcon: {
    transform: [{ translateY: 2 }],
  },
});
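
The component above connects to a public agent via its agentId. If you make your agent private, the SDK can instead connect using a signed URL minted by your own server, so your ElevenLabs API key never ships inside the app. A sketch, assuming a hypothetical /signed-url endpoint that you host yourself:

// Hypothetical alternative to the agentId connection above: fetch a signed
// URL from your own backend (which calls the ElevenLabs API with your API
// key server-side) and start the session with it.
const startPrivateConversation = useCallback(async () => {
  const res = await fetch('https://your-server.example/signed-url');
  const { signedUrl } = await res.json();
  await conversation.startSession({ signedUrl });
}, [conversation]);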

Native client tools

A big part of building conversational AI agents is allowing the agent to access and execute functionality dynamically. This can be done via client tools.

In order for DOM components to execute native actions, you can send type-safe native functions to them by passing asynchronous functions as top-level props to the DOM component.

Create a new file to hold your client tools: ./utils/tools.ts and add the following code:

./utils/tools.ts
import * as Battery from 'expo-battery';
import * as Brightness from 'expo-brightness';

// Returns the battery level as a decimal between 0 and 1,
// or an error message if the device does not support it.
const get_battery_level = async () => {
  const batteryLevel = await Battery.getBatteryLevelAsync();
  console.log('batteryLevel', batteryLevel);
  if (batteryLevel === -1) {
    return 'Error: Device does not support retrieving the battery level.';
  }
  return batteryLevel;
};

// Sets the system brightness to a value between 0 and 1.
const change_brightness = ({ brightness }: { brightness: number }) => {
  console.log('change_brightness', brightness);
  Brightness.setSystemBrightnessAsync(brightness);
  return brightness;
};

// Flashes the screen by briefly setting the brightness to full and back.
const flash_screen = () => {
  Brightness.setSystemBrightnessAsync(1);
  setTimeout(() => {
    Brightness.setSystemBrightnessAsync(0);
  }, 200);
  return 'Successfully flashed the screen.';
};

const tools = {
  get_battery_level,
  change_brightness,
  flash_screen,
};

export default tools;
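
One caveat: on Android, changing the system brightness requires the user to grant the system settings permission, which the example above does not request. If the brightness tools fail on your device, you could guard them with expo-brightness's permission API; a sketch, not part of the original example:

// Sketch: request brightness permissions before mutating system brightness.
const ensureBrightnessPermission = async () => {
  const { status } = await Brightness.requestPermissionsAsync();
  return status === 'granted';
};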

Dynamic variables

In addition to the client tools, we're also injecting the platform (web, iOS, Android) as a dynamic variable, both into the first message and into the system prompt. To do this, we pass the platform as a top-level prop to the DOM component, and then in our DOM component pass it to the startConversation configuration:

./components/ConvAI.tsx
// ...
export default function ConvAiDOMComponent({
  platform,
  get_battery_level,
  change_brightness,
  flash_screen,
}: {
  dom?: import('expo/dom').DOMProps;
  platform: string;
  get_battery_level: typeof tools.get_battery_level;
  change_brightness: typeof tools.change_brightness;
  flash_screen: typeof tools.flash_screen;
}) {
  const conversation = useConversation({
    onConnect: () => console.log('Connected'),
    onDisconnect: () => console.log('Disconnected'),
    onMessage: (message) => {
      console.log(message);
    },
    onError: (error) => console.error('Error:', error),
  });
  const startConversation = useCallback(async () => {
    try {
      // Request microphone permission
      const hasPermission = await requestMicrophonePermission();
      if (!hasPermission) {
        alert('No permission');
        return;
      }

      // Start the conversation with your agent
      await conversation.startSession({
        agentId: 'YOUR_AGENT_ID', // Replace with your agent ID
        dynamicVariables: {
          platform,
        },
        clientTools: {
          get_battery_level,
          change_brightness,
          flash_screen,
        },
      });
    } catch (error) {
      console.error('Failed to start conversation:', error);
    }
  }, [conversation]);
  // ...
}
// ...

Add the component to your app

Add the component to your app by adding the following code to your ./App.tsx file:

./App.tsx
import { LinearGradient } from 'expo-linear-gradient';
import { StatusBar } from 'expo-status-bar';
import { View, Text, StyleSheet, SafeAreaView, Platform } from 'react-native';

import ConvAiDOMComponent from './components/ConvAI';
import tools from './utils/tools';

export default function App() {
  return (
    <SafeAreaView style={styles.container}>
      <LinearGradient colors={['#0F172A', '#1E293B']} style={StyleSheet.absoluteFill} />

      <View style={styles.topContent}>
        <Text style={styles.description}>
          Cross-platform conversational AI agents with ElevenLabs and Expo React Native.
        </Text>

        <View style={styles.toolsList}>
          <Text style={styles.toolsTitle}>Available Client Tools:</Text>
          <View style={styles.toolItem}>
            <Text style={styles.toolText}>Get battery level</Text>
            <View style={styles.platformTags}>
              <Text style={styles.platformTag}>web</Text>
              <Text style={styles.platformTag}>ios</Text>
              <Text style={styles.platformTag}>android</Text>
            </View>
          </View>
          <View style={styles.toolItem}>
            <Text style={styles.toolText}>Change screen brightness</Text>
            <View style={styles.platformTags}>
              <Text style={styles.platformTag}>ios</Text>
              <Text style={styles.platformTag}>android</Text>
            </View>
          </View>
          <View style={styles.toolItem}>
            <Text style={styles.toolText}>Flash screen</Text>
            <View style={styles.platformTags}>
              <Text style={styles.platformTag}>ios</Text>
              <Text style={styles.platformTag}>android</Text>
            </View>
          </View>
        </View>
        <View style={styles.domComponentContainer}>
          <ConvAiDOMComponent
            dom={{ style: styles.domComponent }}
            platform={Platform.OS}
            get_battery_level={tools.get_battery_level}
            change_brightness={tools.change_brightness}
            flash_screen={tools.flash_screen}
          />
        </View>
      </View>
      <StatusBar style="light" />
    </SafeAreaView>
  );
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
  },
  topContent: {
    paddingTop: 40,
    paddingHorizontal: 24,
    alignItems: 'center',
  },
  description: {
    fontFamily: 'Inter-Regular',
    fontSize: 16,
    color: '#E2E8F0',
    textAlign: 'center',
    maxWidth: 300,
    lineHeight: 24,
    marginBottom: 24,
  },
  toolsList: {
    backgroundColor: 'rgba(255, 255, 255, 0.05)',
    borderRadius: 16,
    padding: 20,
    width: '100%',
    maxWidth: 400,
    marginBottom: 24,
  },
  toolsTitle: {
    fontFamily: 'Inter-Bold',
    fontSize: 18,
    color: '#E2E8F0',
    marginBottom: 16,
  },
  toolItem: {
    flexDirection: 'row',
    justifyContent: 'space-between',
    alignItems: 'center',
    paddingVertical: 12,
    borderBottomWidth: 1,
    borderBottomColor: 'rgba(255, 255, 255, 0.1)',
  },
  toolText: {
    fontFamily: 'Inter-Regular',
    fontSize: 14,
    color: '#E2E8F0',
  },
  platformTags: {
    flexDirection: 'row',
    gap: 8,
  },
  platformTag: {
    fontSize: 12,
    color: '#94A3B8',
    backgroundColor: 'rgba(148, 163, 184, 0.1)',
    paddingHorizontal: 8,
    paddingVertical: 4,
    borderRadius: 6,
    overflow: 'hidden',
    fontFamily: 'Inter-Regular',
  },
  domComponentContainer: {
    width: 120,
    height: 120,
    alignItems: 'center',
    justifyContent: 'center',
    marginBottom: 24,
  },
  domComponent: {
    width: 120,
    height: 120,
  },
});

Agent configuration

1. Sign in to ElevenLabs

Go to elevenlabs.io and sign in to your account.

2. Create a new agent

Navigate to Conversational AI > Agents and create a new agent from the blank template.

3. Set the first message

Set the first message and specify the dynamic variable for the platform.

Hi there, woah, so cool that I'm running on {{platform}}. What can I help you with?

4. Set the system prompt

Set the system prompt. You can also include dynamic variables here.

You are a helpful assistant running on {{platform}}. You have access to certain tools that allow you to check the user device battery level and change the display brightness. Use these tools if the user asks about them. Otherwise, just answer the question.

5. Set up the client tools

Set up the following client tools:

  • Name: get_battery_level
    • Description: Gets the device battery level as decimal point percentage.
    • Wait for response: true
    • Response timeout (seconds): 3
  • Name: change_brightness
    • Description: Changes the brightness of the device screen.
    • Wait for response: true
    • Response timeout (seconds): 3
    • Parameters:
      • Data Type: number
      • Identifier: brightness
      • Required: true
      • Value Type: LLM Prompt
      • Description: A number between 0 and 1, inclusive, representing the desired screen brightness.
  • Name: flash_screen
    • Description: Quickly flashes the screen on and off.
    • Wait for response: true
    • Response timeout (seconds): 3
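
Note that the tool names and the brightness parameter identifier configured here must exactly match the keys passed to clientTools (and the { brightness } argument) in the DOM component; otherwise the agent will not be able to invoke them.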

Run the app

Modifying the brightness is not supported in Expo Go, so you will need to prebuild the app and then run it on a native device.

  • Terminal 1:
    • Run npx expo prebuild --clean to generate the native project files.

$npx expo prebuild --clean

    • Run npx expo start --tunnel to start the Expo development server over https.

$npx expo start --tunnel

  • Terminal 2:
    • Run npx expo run:ios --device to build and run the app on your iOS device.

$npx expo run:ios --device
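
To run on Android instead, use npx expo run:android --device. To try the web version, press w in the terminal running the Expo development server.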