Next.js
Learn how to create a web application that enables voice conversations with ElevenLabs AI agents
This tutorial will guide you through creating a web client that can interact with a Conversational AI agent. You’ll learn how to implement real-time voice conversations, allowing users to speak with an AI agent that can listen, understand, and respond naturally using voice synthesis.
What You’ll Need
- An ElevenLabs agent created following this guide
- `npm` installed on your local system
- We'll use TypeScript for this tutorial, but you can use JavaScript if you prefer.
Looking for a complete example? Check out our Next.js demo on GitHub
Setup
Create a new Next.js project
Open a terminal window and run the following command:
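The command itself is missing above; a typical invocation uses `create-next-app` (the project name `my-conversational-agent` is illustrative, and is assumed by the later steps):

```shell
npx create-next-app@latest my-conversational-agent
```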
It will ask you some questions about how to build your project. We’ll follow the default suggestions for this tutorial.
Navigate to project directory
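Assuming the project was named `my-conversational-agent`, change into the new directory:

```shell
cd my-conversational-agent
```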
Install the ElevenLabs dependency
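The dependency is the ElevenLabs React SDK; the package name `@11labs/react` matches the one referenced at the end of this guide:

```shell
npm install @11labs/react
```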
Test the setup
Run the following command to start the development server and open the provided URL in your browser:
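The standard Next.js development command; by default the server is served at `http://localhost:3000`:

```shell
npm run dev
```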
Implement Conversational AI
Create the conversation component
Create a new file `app/components/conversation.tsx`:
Update the main page
Replace the contents of `app/page.tsx` with:
This authentication step is only required for private agents. If you're using a public agent, you can skip this section and directly use the `agentId` in the `startSession` call.
Signed URLs expire after a short period. In a production environment, you should implement proper error handling and URL refresh logic.
Next Steps
Now that you have a basic implementation, you can:
- Add visual feedback for voice activity
- Implement error handling and retry logic
- Add a chat history display
- Customize the UI to match your brand
For more advanced features and customization options, check out the `@11labs/react` package.