Cross-platform Voice Agents with Expo React Native
Build conversational AI agents that work across iOS, Android, and web using Expo React Native and the ElevenLabs Conversational AI SDK.
Introduction
In this tutorial you will learn how to build a voice agent that works across iOS, Android, and web using Expo React Native and the ElevenLabs Conversational AI SDK.
Prefer to jump straight to the code?
Find the example project on GitHub.
Requirements
- An ElevenLabs account with an API key.
- Node.js v18 or higher installed on your machine.
Setup
Create a new Expo project
Using `create-expo-app`, create a new blank Expo project:
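For example (the app name here is a placeholder, and the `blank-typescript` template is one option):

```bash
npx create-expo-app@latest elevenlabs-conversational-ai --template blank-typescript
cd elevenlabs-conversational-ai
```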
Enable microphone permissions
In the `app.json` file, add the following permissions:
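A sketch of the relevant entries (the iOS usage-description string is a placeholder; adapt it to your app):

```json
{
  "expo": {
    "ios": {
      "infoPlist": {
        "NSMicrophoneUsageDescription": "This app uses the microphone to run voice conversations with the AI agent."
      }
    },
    "android": {
      "permissions": [
        "android.permission.RECORD_AUDIO",
        "android.permission.MODIFY_AUDIO_SETTINGS"
      ]
    }
  }
}
```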
This will allow the React Native web view to prompt for microphone permissions when the conversation is started.
Install dependencies
This approach relies on Expo DOM components to make the conversational AI agent work across platforms. There are a couple of dependencies you need to install to make this work.
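The exact package set may differ from the example project, but a plausible install looks like this: the ElevenLabs React SDK, the webview and web-support packages that power DOM components, and the Expo modules used by the native client tools later in this guide.

```bash
npx expo install @elevenlabs/react
npx expo install react-native-webview react-dom react-native-web @expo/metro-runtime
npx expo install expo-battery expo-brightness
```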
Expo DOM components
Expo offers a novel approach to working with modern web code directly in a native app via the `use dom` directive. This means you can use our Conversational AI React SDK across all platforms with the same code.

Under the hood, Expo uses `react-native-webview` to render the web code in a native component. To allow the webview to access the microphone, make sure to start the Expo development server locally with `npx expo start --tunnel` so that the webview is served over HTTPS.
Create the conversational AI DOM component
Create a new file in the components folder, `./components/ConvAI.tsx`, and add the following code:
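Below is a minimal sketch of the component, assuming the SDK is installed as `@elevenlabs/react` and that its `useConversation` hook accepts `dynamicVariables` and `clientTools` in `startSession`; `YOUR_AGENT_ID` is a placeholder for the agent you create later in this guide. Note the `use dom` directive on the first line, which turns the file into an Expo DOM component, and the optional `dom` prop in the props type, which Expo requires for DOM components.

```tsx
'use dom';

import { useConversation } from '@elevenlabs/react';
import { useCallback } from 'react';
import { Pressable, StyleSheet, Text, View } from 'react-native';

import tools from '../utils/tools';

// Ask the browser inside the webview for microphone access.
async function requestMicrophonePermission() {
  try {
    await navigator.mediaDevices.getUserMedia({ audio: true });
    return true;
  } catch {
    console.error('Microphone permission denied');
    return false;
  }
}

export default function ConvAiDOMComponent({
  platform,
  get_battery_level,
  change_brightness,
  flash_screen,
}: {
  dom?: import('expo/dom').DOMProps;
  platform: string;
  get_battery_level: typeof tools.get_battery_level;
  change_brightness: typeof tools.change_brightness;
  flash_screen: typeof tools.flash_screen;
}) {
  const conversation = useConversation({
    onConnect: () => console.log('Connected'),
    onDisconnect: () => console.log('Disconnected'),
    onMessage: (message) => console.log(message),
    onError: (error) => console.error('Error:', error),
  });

  const startConversation = useCallback(async () => {
    if (!(await requestMicrophonePermission())) return;
    await conversation.startSession({
      agentId: 'YOUR_AGENT_ID', // placeholder: copy your agent ID from the dashboard
      dynamicVariables: { platform },
      clientTools: { get_battery_level, change_brightness, flash_screen },
    });
  }, [conversation, platform, get_battery_level, change_brightness, flash_screen]);

  const stopConversation = useCallback(async () => {
    await conversation.endSession();
  }, [conversation]);

  const connected = conversation.status === 'connected';

  return (
    <View style={styles.container}>
      <Pressable
        style={[styles.button, connected && styles.buttonActive]}
        onPress={connected ? stopConversation : startConversation}
      >
        <Text style={styles.label}>{connected ? 'Stop' : 'Talk'}</Text>
      </Pressable>
    </View>
  );
}

const styles = StyleSheet.create({
  container: { alignItems: 'center', justifyContent: 'center' },
  button: {
    width: 96,
    height: 96,
    borderRadius: 48,
    backgroundColor: '#111',
    alignItems: 'center',
    justifyContent: 'center',
  },
  buttonActive: { backgroundColor: '#e11' },
  label: { color: '#fff', fontWeight: 'bold' },
});
```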
Native client tools
A big part of building conversational AI agents is allowing the agent to access and execute functionality dynamically. This can be done via client tools.

For DOM components to execute native actions, you can send type-safe native functions to DOM components by passing asynchronous functions as top-level props.
Create a new file to hold your client tools, `./utils/tools.ts`, and add the following code:
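A sketch of the three tools, built on `expo-battery` and `expo-brightness`. One assumption to flag: this sketch uses the app-level `Brightness.setBrightnessAsync` (which works on both iOS and Android) rather than the Android-only `setSystemBrightnessAsync`.

```ts
import * as Battery from 'expo-battery';
import * as Brightness from 'expo-brightness';

// Returns the battery level as a number between 0 and 1.
const get_battery_level = async (): Promise<number | string> => {
  const batteryLevel = await Battery.getBatteryLevelAsync();
  if (batteryLevel === -1) {
    return 'Error: device does not support retrieving the battery level.';
  }
  return batteryLevel;
};

// The agent's tool call supplies the parameters as a single object.
const change_brightness = async ({ brightness }: { brightness: number }): Promise<number> => {
  await Brightness.setBrightnessAsync(brightness);
  return brightness;
};

// Flashes the screen by briefly setting brightness to maximum.
const flash_screen = async (): Promise<string> => {
  const previous = await Brightness.getBrightnessAsync();
  await Brightness.setBrightnessAsync(1);
  setTimeout(() => {
    Brightness.setBrightnessAsync(previous);
  }, 200);
  return 'Successfully flashed the screen.';
};

export default { get_battery_level, change_brightness, flash_screen };
```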
Dynamic variables
In addition to the client tools, we're also injecting the platform (web, iOS, Android) as a dynamic variable into both the first message and the prompt. To do this, we pass the platform as a top-level prop to the DOM component, and then in our DOM component pass it to the `startConversation` configuration:
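The relevant excerpt from the component sketch above; `dynamicVariables` is the assumed SDK configuration key:

```tsx
// Inside startConversation in ./components/ConvAI.tsx
await conversation.startSession({
  agentId: 'YOUR_AGENT_ID', // placeholder
  dynamicVariables: {
    // `platform` arrives as a top-level prop: "ios", "android", or "web"
    platform,
  },
  clientTools: { get_battery_level, change_brightness, flash_screen },
});
```

In the agent configuration, you can then reference the variable as `{{platform}}` in both the first message and the system prompt.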
Add the component to your app
Add the component to your app by adding the following code to your `./App.tsx` file:
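A sketch of `App.tsx`, assuming the prop names from the component sketch above; the `dom` prop sizes the underlying webview:

```tsx
import { StatusBar } from 'expo-status-bar';
import { Platform, StyleSheet, View } from 'react-native';

import ConvAiDOMComponent from './components/ConvAI';
import tools from './utils/tools';

export default function App() {
  return (
    <View style={styles.container}>
      <ConvAiDOMComponent
        dom={{ style: styles.domComponent }}
        platform={Platform.OS}
        get_battery_level={tools.get_battery_level}
        change_brightness={tools.change_brightness}
        flash_screen={tools.flash_screen}
      />
      <StatusBar style="auto" />
    </View>
  );
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    backgroundColor: '#fff',
    alignItems: 'center',
    justifyContent: 'center',
  },
  domComponent: { width: 120, height: 120 },
});
```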
Agent configuration
Create a new agent
Navigate to Conversational AI > Agents and create a new agent from the blank template.
Set up the client tools
Set up the following client tools:
- Name: `get_battery_level`
  - Description: Gets the device battery level as a decimal percentage.
  - Wait for response: `true`
  - Response timeout (seconds): 3
- Name: `change_brightness`
  - Description: Changes the brightness of the device screen.
  - Wait for response: `true`
  - Response timeout (seconds): 3
  - Parameters:
    - Data Type: `number`
    - Identifier: `brightness`
    - Required: `true`
    - Value Type: LLM Prompt
    - Description: A number between 0 and 1, inclusive, representing the desired screen brightness.
- Name: `flash_screen`
  - Description: Quickly flashes the screen on and off.
  - Wait for response: `true`
  - Response timeout (seconds): 3
Run the app
Modifying the brightness is not supported in Expo Go, so you will need to prebuild the app and then run it on a native device.
- Terminal 1:
  - Run `npx expo prebuild --clean` to regenerate the native project directories.
  - Run `npx expo start --tunnel` to start the Expo development server over HTTPS.
- Terminal 2:
  - Run `npx expo run:ios --device` to run the app on your iOS device.
  - Run `npx expo run:android --device` to run the app on your Android device.