Quickstart
Build your first conversational AI voice agent in 5 minutes.
In this guide, you’ll learn how to create your first Conversational AI voice agent. This will serve as a foundation for building conversational workflows tailored to your business use cases.
Getting started
Conversational AI agents are managed through the ElevenLabs dashboard. This is used to:
- Create and manage AI assistants
- Configure voice settings and conversation parameters
- Equip the agent with tools and a knowledge base
- Review conversation analytics and transcripts
- Manage API keys and integration settings
The web dashboard uses our Web SDK under the hood to handle real-time conversations.
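For reference, here is a minimal sketch of what starting a session directly with the Web SDK looks like, assuming the `@elevenlabs/client` package and a placeholder agent ID:

```typescript
import { Conversation } from '@elevenlabs/client';

async function startSupportConversation() {
  // Ask for microphone access before opening the audio session.
  await navigator.mediaDevices.getUserMedia({ audio: true });

  // Start a real-time conversation with your agent (replace the placeholder ID).
  const conversation = await Conversation.startSession({
    agentId: 'YOUR_AGENT_ID',
    onConnect: () => console.log('Connected to agent'),
    onMessage: (message) => console.log('Message:', message),
    onDisconnect: () => console.log('Conversation ended'),
  });

  return conversation; // call conversation.endSession() to hang up
}
```

The widget you embed at the end of this guide wraps this same session flow for you.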
Build a support agent
Overview
In this guide, we’ll create a conversational support assistant capable of answering questions about your product, documentation, or service. This assistant can be embedded into your website or app to provide real-time support to your customers.
Prerequisites
- An ElevenLabs account
Assistant setup
Create a new assistant
In the ElevenLabs Dashboard, create a new assistant by entering a name and selecting the Blank template option.
Configure the assistant behavior
Go to the Agent tab to configure the assistant’s behavior. Set the first message and the system prompt, which together define how the assistant greets users and how it should respond.
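For example, a support assistant’s first message and system prompt might look like the following. These are illustrative values only (the company name “Acme” is a stand-in), written as TypeScript constants whose strings you would paste into the dashboard fields:

```typescript
// Illustrative Agent tab values for a support assistant. Not an official
// template; adapt the wording to your product, then paste the strings into
// the dashboard's first message and system prompt fields.
const firstMessage =
  'Hi, thanks for reaching out to Acme support. How can I help you today?';

const systemPrompt = [
  'You are a friendly, concise support agent for Acme.',
  'Answer only from the knowledge base provided to you.',
  'If you are not sure of an answer, say so and offer to connect the user with a human.',
].join('\n');
```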
Add a knowledge base
Go to the Knowledge Base section to provide your assistant with context about your business.
This is where you can upload relevant documents and add links to external resources (a programmatic alternative is sketched after this list):
- Include documentation, FAQs, and other resources to help the assistant respond to customer inquiries.
- Keep the knowledge base up-to-date to ensure the assistant provides accurate and current information.
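If you would rather manage the knowledge base programmatically, something like the sketch below could work. Note that the endpoint path and payload shape here are assumptions, so check the current API reference before relying on them:

```typescript
// Sketch: adding a documentation URL to the knowledge base via the
// ElevenLabs REST API. The endpoint path and payload shape are assumptions;
// verify them against the current API reference.
const API_KEY = process.env.ELEVENLABS_API_KEY!;

async function addUrlToKnowledgeBase(url: string) {
  const res = await fetch(
    'https://api.elevenlabs.io/v1/convai/knowledge-base/url', // assumed path
    {
      method: 'POST',
      headers: {
        'xi-api-key': API_KEY,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ url }),
    },
  );
  if (!res.ok) throw new Error(`Upload failed: ${res.status}`);
  return res.json(); // expected to contain the new document's ID
}
```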
Configure the voice
Select a voice
In the Voice tab, choose a voice that best matches your assistant from the voice library:
Analyze and collect conversation data
Configure evaluation criteria and data collection to analyze conversations and improve your assistant’s performance.
Configure evaluation criteria
Navigate to the Analysis tab in your assistant’s settings to define custom criteria for evaluating conversations.
Every conversation transcript is passed to the LLM to verify whether specific goals were met. Each criterion’s result will be `success`, `failure`, or `unknown`, along with a rationale explaining the chosen result.
Let’s add an evaluation criterion named `solved_user_inquiry`, which checks whether the assistant resolved the user’s question.
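In TypeScript terms, each criterion’s result can be thought of as having the following shape. The field names are illustrative, grounded in the description above; the API reference is authoritative:

```typescript
// One evaluation result per criterion, as described above.
type EvaluationResult = {
  criterionId: string;                          // e.g. 'solved_user_inquiry'
  result: 'success' | 'failure' | 'unknown';    // outcome chosen by the LLM
  rationale: string;                            // explanation for the chosen result
};
```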
Configure data collection
In the Data Collection section, configure details to be extracted from each conversation.
Click Add item and configure the following:
- Data type: Select “string”
- Identifier: Enter a unique identifier for this data point: `user_question`
- Description: Provide detailed instructions that tell the LLM how to extract this specific data point from the conversation transcript (for example, the exact question the user asked)
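After conversations have run, both the evaluation results and the collected data points can be retrieved per conversation. Here is a hedged sketch using the conversations endpoint; the response field names are assumptions to verify against the API reference:

```typescript
// Sketch: reading evaluation and data-collection results for a finished
// conversation. Response field names are assumptions; verify against the
// current API reference.
const API_KEY = process.env.ELEVENLABS_API_KEY!;

async function getConversationAnalysis(conversationId: string) {
  const res = await fetch(
    `https://api.elevenlabs.io/v1/convai/conversations/${conversationId}`,
    { headers: { 'xi-api-key': API_KEY } },
  );
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  const conversation = await res.json();
  // Assumed fields: analysis.evaluation_criteria_results and
  // analysis.data_collection_results, keyed by identifier (e.g. 'user_question').
  return conversation.analysis;
}
```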
Your assistant is now configured. Embed the widget on your website to start providing real-time support to your customers.
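The dashboard provides an HTML embed snippet for the widget. As a sketch, the same thing can be done from a TypeScript entry point; the loader URL below is an assumption, so copy the exact snippet from your dashboard:

```typescript
// Sketch: mounting the ConvAI widget from a TypeScript entry point instead of
// pasting the HTML snippet. The loader URL is an assumption; use the exact
// embed snippet shown in your dashboard.
const script = document.createElement('script');
script.src = 'https://elevenlabs.io/convai-widget/index.js'; // assumed loader URL
script.async = true;
document.body.appendChild(script);

const widget = document.createElement('elevenlabs-convai');
widget.setAttribute('agent-id', 'YOUR_AGENT_ID'); // replace with your agent's ID
document.body.appendChild(widget);
```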