Data Collection and Analysis with Conversational AI in Next.js
Collect and analyse data in post-call webhooks using Conversational AI and Next.js.
Introduction
In this tutorial you will learn how to build a voice agent that collects information from the user through conversation, then analyses and extracts the data in a structured way, and sends it to your application via a post-call webhook.
Prefer to jump straight to the code?
Find the example project on GitHub.
Requirements
- An ElevenLabs account with an API key.
- Node.js v18 or higher installed on your machine.
Setup
Create a new Next.js project
We recommend using our v0.dev Conversational AI template as the starting point for your application. This template is a production-ready Next.js application with the Conversational AI agent already integrated.
Alternatively, you can clone the fully integrated project from GitHub, or create a new blank Next.js project and follow the steps below to integrate the Conversational AI agent.
Set up conversational AI
Follow our Next.js guide for installation and configuration steps. Then come back here to build in the advanced features.
Agent configuration
Create a new agent
Navigate to Conversational AI > Agents and create a new agent from the blank template.
Set up the client tools
Set up the following client tool to navigate between the steps:
- Name: `set_ui_state`
- Description: Use this client-side tool to navigate between the different UI states.
- Wait for response: `true`
- Response timeout (seconds): 1
- Parameters:
  - Data type: string
  - Identifier: `step`
  - Required: true
  - Value Type: LLM Prompt
  - Description: The step to navigate to in the UI. Only use the steps that are defined in the system prompt!
Set your agent's voice
Navigate to the Voice tab and set the voice for your agent. You can find a list of recommended voices for Conversational AI in the Conversational Voice Design docs.
Set the evaluation criteria
Navigate to the Analysis tab and add a new evaluation criterion.
- Name: `all_data_provided`
- Prompt: Evaluate whether the user provided a description of the agent they are looking to generate as well as a description of the voice the agent should have.
Configure the data collection
You can use the post-call analysis to extract data from the conversation. In the Analysis tab, under Data Collection, add the following items:
- Identifier: `voice_description`
  - Data type: String
  - Description: Based on the description of the voice the user wants the agent to have, generate a concise description of the voice including the age, accent, tone, and character if available.
- Identifier: `agent_description`
  - Data type: String
  - Description: Based on the description about the agent the user is looking to design, generate a prompt that can be used to train a model to act as the agent.
Configure the post-call webhook
Post-call webhooks are used to notify you when a call ends and the analysis and data extraction steps have been completed.
In this example, the post-call webhook performs a couple of steps:
- Create a custom voice design based on the `voice_description`.
- Create a conversational AI agent for the user based on the `agent_description` they provided.
- Retrieve the knowledge base documents from the conversation state stored in Redis and attach the knowledge base to the agent.
- Send an email to the user to notify them that their custom conversational AI agent is ready to chat.
When running locally, you will need a tool like ngrok to expose your local server to the internet.
Navigate to the Conversational AI settings and under Post-Call Webhook, create a new webhook and paste in your ngrok URL: `https://<your-url>.ngrok-free.app/api/convai-webhook`.
After saving the webhook, you will receive a webhook secret. Make sure to store this secret securely, as you will need to set it in your `.env` file later.
Integrate the advanced features
Set up a Redis database for storing the conversation state
In this example we’re using Redis to store the conversation state. This allows us to retrieve the knowledge base documents from the conversation state after the call ends.
If you're deploying to Vercel, you can configure the Upstash for Redis integration, or alternatively you can sign up for a free Upstash account and create a new database.
Set up Resend for sending post-call emails
In this example we’re using Resend to send the post-call email to the user. To do so you will need to create a free Resend account and set up a new API key.
Set the environment variables
In the root of your project, create a `.env` file and add the following variables:
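As a sketch, the `.env` file might look like the following; the exact variable names are assumptions based on a typical setup with the services used in this tutorial, so match them to the names your code reads:

```bash
# ElevenLabs
ELEVENLABS_API_KEY=sk_...
ELEVENLABS_CONVAI_WEBHOOK_SECRET=wsec_...

# Upstash for Redis
KV_REST_API_URL=https://...
KV_REST_API_TOKEN=...

# Resend
RESEND_API_KEY=re_...
```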
Configure security and authentication
To secure your conversational AI agent, you need to enable authentication in the Security tab of the agent configuration.
Once authentication is enabled, you will need to create a signed URL in a secure server-side environment to initiate a conversation with the agent. In Next.js, you can do this by setting up a new API route.
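A minimal sketch of such an API route follows; the signed-URL endpoint path reflects the ElevenLabs API at the time of writing, and the `AGENT_ID` environment variable is an assumption, so verify both against the current API reference:

```typescript
// app/api/signed-url/route.ts — server-side route that exchanges your
// API key for a short-lived signed URL to start a conversation.
export async function GET(): Promise<Response> {
  const res = await fetch(
    `https://api.elevenlabs.io/v1/convai/conversation/get-signed-url?agent_id=${process.env.AGENT_ID}`,
    { headers: { "xi-api-key": process.env.ELEVENLABS_API_KEY! } }
  );
  if (!res.ok) {
    return Response.json({ error: "Failed to get signed URL" }, { status: 500 });
  }
  const { signed_url } = await res.json();
  return Response.json({ signedUrl: signed_url });
}
```

Keeping this exchange server-side means your API key never reaches the browser; the client only ever sees the short-lived signed URL.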
Start the conversation session
To start the conversation, first call your API route to get the signed URL, then use the `useConversation` hook to set up the conversation session.
Client tool and dynamic variables
In the agent configuration earlier, you registered the `set_ui_state` client tool to allow the agent to navigate between the different UI states. To put it all together, you need to pass the client tool implementation to the `conversation.startSession` options.
This is also where you can pass in the dynamic variables to the conversation.
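As a sketch, the options object might look like the following; the `UiStep` values, the `setCurrentStep` setter, and the dynamic variable names are assumptions from a typical implementation, not SDK names:

```typescript
// Sketch of the options passed to conversation.startSession.
// `setCurrentStep` is an assumed React state setter from your component.
type UiStep = "initial" | "training" | "voice" | "ready";

function buildSessionOptions(
  signedUrl: string,
  setCurrentStep: (step: UiStep) => void
) {
  return {
    signedUrl,
    clientTools: {
      // Invoked when the agent calls the set_ui_state client tool; the
      // returned string is sent back to the agent as the tool response.
      set_ui_state: ({ step }: { step: string }): string => {
        setCurrentStep(step as UiStep);
        return `Navigated to ${step}`;
      },
    },
    dynamicVariables: {
      user_name: "Thor", // example dynamic variable
    },
  };
}
```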
Uploading documents to the knowledge base
In the Training step, the agent will ask the user to upload documents or submit URLs to public websites with information that should be available to their agent. Here you can utilise the new `after` function of Next.js 15 to allow uploading of documents in the background.
Create a new `upload` server action to handle the knowledge base creation upon form submission. Once all knowledge base documents have been created, store the conversation ID and the knowledge base IDs in the Redis database.
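The background work that the server action schedules with `after()` can be sketched as below; `createKbDocument` and `store` are hypothetical stand-ins for the ElevenLabs knowledge-base API call and the Redis client:

```typescript
// Sketch of the work an `upload` server action can run in the
// background via Next.js 15's `after()` (import { after } from "next/server").
type KbInput = { name: string; url?: string; file?: Blob };

async function uploadKnowledgeBase(
  conversationId: string,
  inputs: KbInput[],
  createKbDocument: (input: KbInput) => Promise<{ id: string }>,
  store: (key: string, value: unknown) => Promise<void>
): Promise<string[]> {
  // Create every knowledge-base document, then persist the IDs so the
  // post-call webhook can later attach them to the generated agent.
  const docs = await Promise.all(inputs.map(createKbDocument));
  const ids = docs.map((d) => d.id);
  await store(`conversation:${conversationId}`, { knowledgeBaseIds: ids });
  return ids;
}
```

Injecting the API call and the store as parameters keeps the function easy to test; in the real action you would pass your ElevenLabs client and Redis client directly.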
Handling the post-call webhook
The post-call webhook is triggered when a call ends and the analysis and data extraction steps have been completed.
There are a few steps happening here:
- Verify the webhook secret and construct the webhook payload.
- Create a custom voice design based on the `voice_description`.
- Create a conversational AI agent for the user based on the `agent_description` they provided.
- Retrieve the knowledge base documents from the conversation state stored in Redis and attach the knowledge base to the agent.
- Send an email to the user to notify them that their custom conversational AI agent is ready to chat.
Let’s go through each step in detail.
Verify the webhook secret and construct the webhook payload
When the webhook request is received, we first verify the webhook secret and construct the webhook payload.
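A verification sketch is below; the header format (`t=<timestamp>,v0=<hex digest>` computed over `${timestamp}.${body}` with HMAC-SHA256) follows the ElevenLabs docs at the time of writing, so verify it against the current reference:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Verify the post-call webhook signature against the raw request body.
function verifyWebhook(
  rawBody: string,
  signatureHeader: string,
  secret: string
): boolean {
  // Parse "t=...,v0=..." into { t, v0 }.
  const parts = Object.fromEntries(
    signatureHeader.split(",").map((p) => p.split("=") as [string, string])
  );
  if (!parts.t || !parts.v0) return false;
  const expected = createHmac("sha256", secret)
    .update(`${parts.t}.${rawBody}`)
    .digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(parts.v0);
  // Constant-time comparison to avoid leaking the digest via timing.
  return a.length === b.length && timingSafeEqual(a, b);
}
```

Note that verification must run against the raw body string; parsing and re-serialising the JSON first can change the bytes and invalidate the signature.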
Create a custom voice design based on the voice_description
Using the `voice_description` from the webhook payload, we create a custom voice design.
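A sketch of this step using the ElevenLabs text-to-voice REST API follows; the endpoint paths, field names, and preview text are assumptions based on the API at the time of writing, so check the API reference before relying on them:

```typescript
// Turn the extracted voice_description into a reusable voice.
async function createVoiceFromDescription(voiceDescription: string) {
  const headers = {
    "xi-api-key": process.env.ELEVENLABS_API_KEY!,
    "Content-Type": "application/json",
  };
  // 1. Generate voice previews from the description.
  const previewRes = await fetch(
    "https://api.elevenlabs.io/v1/text-to-voice/create-previews",
    {
      method: "POST",
      headers,
      body: JSON.stringify({
        voice_description: voiceDescription,
        text: "Hi! I'm your newly designed conversational AI agent. How can I help you today?",
      }),
    }
  );
  const { previews } = await previewRes.json();
  // 2. Save the first preview as a voice in your library.
  const voiceRes = await fetch(
    "https://api.elevenlabs.io/v1/text-to-voice/create-voice-from-preview",
    {
      method: "POST",
      headers,
      body: JSON.stringify({
        voice_name: "Generated agent voice",
        voice_description: voiceDescription,
        generated_voice_id: previews[0].generated_voice_id,
      }),
    }
  );
  const { voice_id } = await voiceRes.json();
  return voice_id as string;
}
```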
Retrieve the knowledge base documents from the conversation state stored in Redis
The uploading of the documents might take longer than the webhook data analysis, so we’ll need to poll the conversation state in Redis until the documents have been uploaded.
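The polling loop can be sketched generically as below, with `load` standing in for a Redis GET on the conversation key; the retry count and delay are example values:

```typescript
// Retry reading the conversation state until the background document
// upload has finished and the state appears in Redis.
async function pollConversationState<T>(
  load: () => Promise<T | null>,
  { retries = 10, delayMs = 5000 }: { retries?: number; delayMs?: number } = {}
): Promise<T> {
  for (let i = 0; i < retries; i++) {
    const state = await load();
    if (state !== null) return state;
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error("Conversation state not found after polling");
}
```

Bounding the retries matters here: if the user never uploaded documents, the webhook handler should fail fast rather than poll forever.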
Create a conversational AI agent for the user based on the `agent_description` they provided
Create the conversational AI agent for the user based on the `agent_description` they provided, and attach the newly created voice design and knowledge base to the agent.
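A sketch of the agent-creation call is below; the endpoint path and the `conversation_config` shape are assumptions, so check the Conversational AI API reference for the exact schema:

```typescript
// Create the agent with the extracted prompt, the generated voice, and
// the knowledge-base documents retrieved from Redis.
async function createAgent(
  agentDescription: string,
  voiceId: string,
  knowledgeBaseIds: string[]
) {
  const res = await fetch("https://api.elevenlabs.io/v1/convai/agents/create", {
    method: "POST",
    headers: {
      "xi-api-key": process.env.ELEVENLABS_API_KEY!,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      conversation_config: {
        agent: {
          prompt: {
            prompt: agentDescription,
            knowledge_base: knowledgeBaseIds.map((id) => ({ type: "file", id })),
          },
        },
        tts: { voice_id: voiceId },
      },
    }),
  });
  const { agent_id } = await res.json();
  return agent_id as string;
}
```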
Send an email to the user to notify them that their custom conversational AI agent is ready to chat
Once the agent is created, you can send an email to the user to notify them that their custom conversational AI agent is ready to chat.
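A sketch using Resend's send-email REST endpoint follows; the sender domain, link URL, and email copy are placeholders, and the example project renders the body from a React email component instead of inline HTML:

```typescript
// Notify the user that their agent is ready to chat.
async function sendAgentReadyEmail(to: string, agentId: string) {
  const res = await fetch("https://api.resend.com/emails", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.RESEND_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      from: "Convai Demo <noreply@yourdomain.com>",
      to,
      subject: "Your custom conversational AI agent is ready!",
      html: `<p>Your agent is ready to chat: <a href="https://yourapp.com/agent/${agentId}">talk to it now</a>.</p>`,
    }),
  });
  if (!res.ok) throw new Error(`Resend error: ${res.status}`);
  return res.json();
}
```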
You can use new.email, a handy tool from the Resend team, to vibe design your email templates. Once you’re happy with the template, create a new component and add in the agent ID as a prop.
Run the app
To run the app locally end-to-end, you will need to first run the Next.js development server, and then in a separate terminal run the ngrok tunnel to expose the webhook handler to the internet.
- Terminal 1: Run `pnpm dev` to start the Next.js development server.
- Terminal 2: Run `ngrok http 3000` to expose the webhook handler to the internet.
Now open http://localhost:3000 and start designing your custom conversational AI agent, with your voice!
Conclusion
ElevenLabs Conversational AI is a powerful platform for building advanced voice agent use cases, complete with data collection and analysis.