Data Collection and Analysis with Conversational AI in Next.js

Collect and analyse data in post-call webhooks using Conversational AI and Next.js.

Introduction

In this tutorial you will learn how to build a voice agent that collects information from the user through conversation, analyses and extracts the data in a structured way, and sends it to your application via a post-call webhook.

Prefer to jump straight to the code?

Find the example project on GitHub.

Requirements

  • An ElevenLabs account with an API key.
  • Node.js v18 or higher installed on your machine.

Setup

Create a new Next.js project

We recommend using our v0.dev Conversational AI template as the starting point for your application. This template is a production-ready Next.js application with the Conversational AI agent already integrated.

Alternatively, you can clone the fully integrated project from GitHub, or create a new blank Next.js project and follow the steps below to integrate the Conversational AI agent.

Set up Conversational AI

Follow our Next.js guide for installation and configuration steps. Then come back here to build in the advanced features.

Agent configuration

Step 1: Sign in to ElevenLabs

Go to elevenlabs.io and sign in to your account.

Step 2: Create a new agent

Navigate to Conversational AI > Agents and create a new agent from the blank template.

Step 3: Set the first message

Set the first message and specify the dynamic variable for the platform.

Hi {{user_name}}, I'm Jess from the ElevenLabs team. I'm here to help you design your very own conversational AI agent! To kick things off, let me know what kind of agent you're looking to create. For example, do you want a support agent, to help your users answer questions, or a sales agent to sell your products, or just a friend to chat with?

Step 4: Set the system prompt

Set the system prompt. You can also include dynamic variables here.

You are Jess, a helpful agent helping {{user_name}} to design their very own conversational AI agent. The design process involves the following steps:
"initial": In the first step, collect the information about the kind of agent the user is looking to create. Summarize the user's needs back to them and ask if they are ready to continue to the next step. Only once they confirm proceed to the next step.
"training": Tell the user to create the agent's knowledge base by uploading documents, or submitting URLs to public websites with information that should be available to the agent. Wait patiently without talking to the user. Only when the user confirms that they've provided everything then proceed to the next step.
"voice": Tell the user to describe the voice they want their agent to have. For example: "A professional, strong spoken female voice with a slight British accent." Repeat the description of their voice back to them and ask if they are ready to continue to the next step. Only once they confirm proceed to the next step.
"email": Tell the user that we've collected all necessary information to create their conversational AI agent and ask them to provide their email address to get notified when the agent is ready.
Always call the `set_ui_state` tool when moving between steps!

Step 5: Set up the client tools

Set up the following client tool to navigate between the steps:

  • Name: set_ui_state
    • Description: Use this client-side tool to navigate between the different UI states.
    • Wait for response: true
    • Response timeout (seconds): 1
    • Parameters:
      • Data type: string
      • Identifier: step
      • Required: true
      • Value Type: LLM Prompt
      • Description: The step to navigate to in the UI. Only use the steps that are defined in the system prompt!
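
Since the step name arrives from the LLM as free text, it is worth validating it on the client before updating your UI state. Below is a small sketch of such a guard; the STEPS list mirrors the states used in this tutorial, and the helper names are our own, not part of the ElevenLabs SDK:

```typescript
// The UI states used in this tutorial: the steps from the system prompt plus a final "ready" state.
const STEPS = ['initial', 'training', 'voice', 'email', 'ready'] as const;
type Step = (typeof STEPS)[number];

// Type guard: narrow the free-text step name returned by the LLM.
function isValidStep(step: string): step is Step {
  return (STEPS as readonly string[]).includes(step);
}

// Client tool handler that ignores unknown steps instead of corrupting the UI state.
function handleSetUiState(step: string, setCurrentStep: (s: Step) => void): string {
  if (!isValidStep(step)) {
    return `Unknown step "${step}" - staying on the current screen.`;
  }
  setCurrentStep(step);
  return `Navigated to ${step}`;
}
```

Returning a message in both cases gives the agent feedback it can act on, since "Wait for response" is enabled for this tool.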

Step 6: Set your agent's voice

Navigate to the Voice tab and set the voice for your agent. You can find a list of recommended voices for Conversational AI in the Conversational Voice Design docs.

Step 7: Set the evaluation criteria

Navigate to the Analysis tab and add a new evaluation criteria.

  • Name: all_data_provided
    • Prompt: Evaluate whether the user provided a description of the agent they are looking to generate as well as a description of the voice the agent should have.

Step 8: Configure the data collection

You can use post-call analysis to extract data from the conversation. In the Analysis tab, under Data Collection, add the following items:

  • Identifier: voice_description
    • Data type: String
    • Description: Based on the description of the voice the user wants the agent to have, generate a concise description of the voice including the age, accent, tone, and character if available.
  • Identifier: agent_description
    • Data type: String
    • Description: Based on the description of the agent the user is looking to design, generate a prompt that can be used to train a model to act as the agent.
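
In the post-call webhook payload, these items arrive under analysis.data_collection_results, keyed by the identifiers above. The sketch below is a rough shape inferred only from the fields this tutorial reads; real payloads contain additional fields, and the interface and helper names are our own:

```typescript
// Rough shape of the analysis payload fields this tutorial reads.
// Inferred from usage; real payloads contain additional fields.
interface DataCollectionResult {
  value?: string;
}

interface Analysis {
  data_collection_results: Record<string, DataCollectionResult | undefined>;
  evaluation_criteria_results: Record<string, { result: string } | undefined>;
}

// Pull out the two values configured above, if present.
function extractDesign(analysis: Analysis): { voice?: string; agent?: string } {
  return {
    voice: analysis.data_collection_results.voice_description?.value,
    agent: analysis.data_collection_results.agent_description?.value,
  };
}
```
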

Step 9: Configure the post-call webhook

Post-call webhooks are used to notify you when a call ends and the analysis and data extraction steps have been completed.

In this example, the post-call webhook performs a few steps, namely:

  1. Create a custom voice design based on the voice_description.
  2. Retrieve the knowledge base documents from the conversation state stored in Redis.
  3. Create a conversational AI agent for the user based on the agent_description they provided, and attach the knowledge base to the agent.
  4. Send an email to the user to notify them that their custom conversational AI agent is ready to chat.

When running locally, you will need a tool like ngrok to expose your local server to the internet.

$ ngrok http 3000

Navigate to the Conversational AI settings and, under Post-Call Webhook, create a new webhook and paste in your ngrok URL: https://<your-url>.ngrok-free.app/api/convai-webhook.

After saving the webhook, you will receive a webhook secret. Make sure to store this secret securely, as you will need to set it in your .env file later.

Integrate the advanced features

Set up a Redis database for storing the conversation state

In this example we’re using Redis to store the conversation state. This allows us to retrieve the knowledge base documents from the conversation state after the call ends.

If you're deploying to Vercel, you can configure the Upstash for Redis integration; alternatively, you can sign up for a free Upstash account and create a new database.

Set up Resend for sending post-call emails

In this example we’re using Resend to send the post-call email to the user. To do so you will need to create a free Resend account and set up a new API key.

Set the environment variables

In the root of your project, create a .env file and add the following variables:

ELEVENLABS_CONVAI_WEBHOOK_SECRET=
ELEVENLABS_API_KEY=
ELEVENLABS_AGENT_ID=

# Resend
RESEND_API_KEY=
RESEND_FROM_EMAIL=

# Upstash Redis
KV_URL=
KV_REST_API_READ_ONLY_TOKEN=
REDIS_URL=
KV_REST_API_TOKEN=
KV_REST_API_URL=
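
To fail fast when one of these variables is missing, you can assert the keys you rely on at startup. The helper below is our own sketch, not part of the template:

```typescript
// Keys this tutorial's server-side code reads from process.env.
// (The KV_* / REDIS_* values are consumed by Redis.fromEnv() directly.)
const REQUIRED_ENV_KEYS = [
  'ELEVENLABS_CONVAI_WEBHOOK_SECRET',
  'ELEVENLABS_API_KEY',
  'ELEVENLABS_AGENT_ID',
  'RESEND_API_KEY',
  'RESEND_FROM_EMAIL',
] as const;

// Return the names of keys that are missing or empty.
function missingEnvKeys(env: Record<string, string | undefined>): string[] {
  return REQUIRED_ENV_KEYS.filter((key) => !env[key]);
}

// Throw a descriptive error if any required variable is unset.
function assertEnv(env: Record<string, string | undefined>): void {
  const missing = missingEnvKeys(env);
  if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(', ')}`);
  }
}
```

Calling assertEnv(process.env) once at server startup surfaces configuration mistakes immediately instead of as opaque runtime failures.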

Configure security and authentication

To secure your conversational AI agent, you need to enable authentication in the Security tab of the agent configuration.

Once authentication is enabled, you will need to create a signed URL in a secure server-side environment to initiate a conversation with the agent. In Next.js, you can do this by setting up a new API route.

./app/api/signed-url/route.ts
import { ElevenLabsClient } from 'elevenlabs';
import { NextResponse } from 'next/server';

export async function GET() {
  const agentId = process.env.ELEVENLABS_AGENT_ID;
  if (!agentId) {
    throw Error('ELEVENLABS_AGENT_ID is not set');
  }
  try {
    const client = new ElevenLabsClient();
    const response = await client.conversationalAi.getSignedUrl({
      agent_id: agentId,
    });
    return NextResponse.json({ signedUrl: response.signed_url });
  } catch (error) {
    console.error('Error:', error);
    return NextResponse.json({ error: 'Failed to get signed URL' }, { status: 500 });
  }
}

Start the conversation session

To start the conversation, first call your API route to get the signed URL, then use the useConversation hook to set up the conversation session.

./page.tsx
'use client';

import { useConversation } from '@11labs/react';
import { useCallback, useState } from 'react';

async function getSignedUrl(): Promise<string> {
  const response = await fetch('/api/signed-url');
  if (!response.ok) {
    throw Error('Failed to get signed url');
  }
  const data = await response.json();
  return data.signedUrl;
}

export default function Home() {
  // ...
  const [currentStep, setCurrentStep] = useState<
    'initial' | 'training' | 'voice' | 'email' | 'ready'
  >('initial');
  const [conversationId, setConversationId] = useState('');
  const [userName, setUserName] = useState('');

  const conversation = useConversation({
    onConnect: () => console.log('Connected'),
    onDisconnect: () => console.log('Disconnected'),
    onMessage: (message: string) => console.log('Message:', message),
    onError: (error: Error) => console.error('Error:', error),
  });

  const startConversation = useCallback(async () => {
    try {
      // Request microphone permission
      await navigator.mediaDevices.getUserMedia({ audio: true });
      // Start the conversation with your agent
      const signedUrl = await getSignedUrl();
      const convId = await conversation.startSession({
        signedUrl,
        dynamicVariables: {
          user_name: userName,
        },
        clientTools: {
          set_ui_state: ({ step }: { step: string }): string => {
            // Allow agent to navigate the UI.
            setCurrentStep(step as 'initial' | 'training' | 'voice' | 'email' | 'ready');
            return `Navigated to ${step}`;
          },
        },
      });
      setConversationId(convId);
      console.log('Conversation ID:', convId);
    } catch (error) {
      console.error('Failed to start conversation:', error);
    }
  }, [conversation, userName]);

  const stopConversation = useCallback(async () => {
    await conversation.endSession();
  }, [conversation]);
  // ...
}

Client tool and dynamic variables

In the agent configuration earlier, you registered the set_ui_state client tool to allow the agent to navigate between the different UI states. To put it all together, you need to pass the client tool implementation to the conversation.startSession options.

This is also where you can pass in the dynamic variables to the conversation.

./page.tsx
const convId = await conversation.startSession({
  signedUrl,
  dynamicVariables: {
    user_name: userName,
  },
  clientTools: {
    set_ui_state: ({ step }: { step: string }): string => {
      // Allow agent to navigate the UI.
      setCurrentStep(step as 'initial' | 'training' | 'voice' | 'email' | 'ready');
      return `Navigated to ${step}`;
    },
  },
});

Uploading documents to the knowledge base

In the Training step, the agent will ask the user to upload documents or submit URLs to public websites with information that should be available to their agent. Here you can utilise the new `after` function from Next.js 15 to upload the documents in the background.

Create a new upload server action to handle the knowledge base creation upon form submission. Once all knowledge base documents have been created, store the conversation ID and the knowledge base IDs in the Redis database.

./app/actions/upload.ts
'use server';

import { Redis } from '@upstash/redis';
import { ElevenLabsClient } from 'elevenlabs';
import { redirect } from 'next/navigation';
import { after } from 'next/server';

// Initialize Redis
const redis = Redis.fromEnv();

const elevenLabsClient = new ElevenLabsClient({
  apiKey: process.env.ELEVENLABS_API_KEY,
});

export async function uploadFormData(formData: FormData) {
  const knowledgeBase: Array<{
    id: string;
    type: 'file' | 'url';
    name: string;
  }> = [];
  const files = formData.getAll('file-upload') as File[];
  const email = formData.get('email-input');
  const urls = formData.getAll('url-input');
  const conversationId = formData.get('conversation-id');

  after(async () => {
    // Upload files as a background job:
    // loop through the files and create knowledge base entries.
    for (const file of files) {
      if (file.size > 0) {
        const response = await elevenLabsClient.conversationalAi.addToKnowledgeBase({ file });
        if (response.id) {
          knowledgeBase.push({
            id: response.id,
            type: 'file',
            name: file.name,
          });
        }
      }
    }
    // Append all URLs.
    for (const url of urls) {
      const response = await elevenLabsClient.conversationalAi.addToKnowledgeBase({
        url: url as string,
      });
      if (response.id) {
        knowledgeBase.push({
          id: response.id,
          type: 'url',
          name: `url for ${conversationId}`,
        });
      }
    }

    // Store knowledge base IDs and conversation ID in the database.
    const redisRes = await redis.set(
      conversationId as string,
      JSON.stringify({ email, knowledgeBase })
    );
    console.log({ redisRes });
  });

  redirect('/success');
}

Handling the post-call webhook

The post-call webhook is triggered when a call ends and the analysis and data extraction steps have been completed.

There are a few steps happening here, namely:

  1. Verify the webhook secret and construct the webhook payload.
  2. Create a custom voice design based on the voice_description.
  3. Retrieve the knowledge base documents from the conversation state stored in Redis.
  4. Create a conversational AI agent for the user based on the agent_description they provided, and attach the knowledge base to the agent.
  5. Send an email to the user to notify them that their custom conversational AI agent is ready to chat.
./app/api/convai-webhook/route.ts
import { Redis } from '@upstash/redis';
import crypto from 'crypto';
import { ElevenLabsClient } from 'elevenlabs';
import { NextResponse } from 'next/server';
import type { NextRequest } from 'next/server';
import { Resend } from 'resend';

import { EmailTemplate } from '@/components/email/post-call-webhook-email';

// Initialize Redis
const redis = Redis.fromEnv();
// Initialize Resend
const resend = new Resend(process.env.RESEND_API_KEY);

const elevenLabsClient = new ElevenLabsClient({
  apiKey: process.env.ELEVENLABS_API_KEY,
});

export async function GET() {
  return NextResponse.json({ status: 'webhook listening' }, { status: 200 });
}

export async function POST(req: NextRequest) {
  const secret = process.env.ELEVENLABS_CONVAI_WEBHOOK_SECRET; // Add this to your env variables
  const { event, error } = await constructWebhookEvent(req, secret);
  if (error) {
    return NextResponse.json({ error: error }, { status: 401 });
  }

  if (event.type === 'post_call_transcription') {
    const { conversation_id, analysis, agent_id } = event.data;

    if (
      agent_id === process.env.ELEVENLABS_AGENT_ID &&
      analysis.evaluation_criteria_results.all_data_provided?.result === 'success' &&
      analysis.data_collection_results.voice_description?.value
    ) {
      try {
        // Design the voice
        const voicePreview = await elevenLabsClient.textToVoice.createPreviews({
          voice_description: analysis.data_collection_results.voice_description.value,
          text: 'The night air carried whispers of betrayal, thick as London fog. I adjusted my cufflinks - after all, even spies must maintain appearances, especially when the game is afoot.',
        });
        const voice = await elevenLabsClient.textToVoice.createVoiceFromPreview({
          voice_name: `voice-${conversation_id}`,
          voice_description: `Voice for ${conversation_id}`,
          generated_voice_id: voicePreview.previews[0].generated_voice_id,
        });

        // Get the knowledge base from Redis
        const redisRes = await getRedisDataWithRetry(conversation_id);
        if (!redisRes) throw new Error('Conversation data not found!');
        // Handle agent creation
        const agent = await elevenLabsClient.conversationalAi.createAgent({
          name: `Agent for ${conversation_id}`,
          conversation_config: {
            tts: { voice_id: voice.voice_id },
            agent: {
              prompt: {
                prompt:
                  analysis.data_collection_results.agent_description?.value ??
                  'You are a helpful assistant.',
                knowledge_base: redisRes.knowledgeBase,
              },
              first_message: 'Hello, how can I help you today?',
            },
          },
        });
        console.log('Agent created', { agent: agent.agent_id });
        // Send email to user
        console.log('Sending email to', redisRes.email);
        await resend.emails.send({
          from: process.env.RESEND_FROM_EMAIL!,
          to: redisRes.email,
          subject: 'Your Conversational AI agent is ready to chat!',
          react: EmailTemplate({ agentId: agent.agent_id }),
        });
      } catch (error) {
        console.error(error);
        return NextResponse.json({ error }, { status: 500 });
      }
    }
  }

  return NextResponse.json({ received: true }, { status: 200 });
}

const constructWebhookEvent = async (req: NextRequest, secret?: string) => {
  const body = await req.text();
  const signature_header = req.headers.get('ElevenLabs-Signature');

  if (!signature_header) {
    return { event: null, error: 'Missing signature header' };
  }

  const headers = signature_header.split(',');
  const timestamp = headers.find((e) => e.startsWith('t='))?.substring(2);
  const signature = headers.find((e) => e.startsWith('v0='));

  if (!timestamp || !signature) {
    return { event: null, error: 'Invalid signature format' };
  }

  // Validate timestamp
  const reqTimestamp = Number(timestamp) * 1000;
  const tolerance = Date.now() - 30 * 60 * 1000;
  if (reqTimestamp < tolerance) {
    return { event: null, error: 'Request expired' };
  }

  // Validate hash
  const message = `${timestamp}.${body}`;

  if (!secret) {
    return { event: null, error: 'Webhook secret not configured' };
  }

  const digest = 'v0=' + crypto.createHmac('sha256', secret).update(message).digest('hex');

  if (signature !== digest) {
    return { event: null, error: 'Invalid signature' };
  }

  const event = JSON.parse(body);
  return { event, error: null };
};

async function getRedisDataWithRetry(
  conversationId: string,
  maxRetries = 5
): Promise<{
  email: string;
  knowledgeBase: Array<{
    id: string;
    type: 'file' | 'url';
    name: string;
  }>;
} | null> {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      const data = await redis.get(conversationId);
      // redis.get resolves null while the background upload is still running,
      // so keep polling until the conversation state has been written.
      if (data) return data as any;
    } catch (error) {
      if (attempt === maxRetries) throw error;
      console.log(`Redis get attempt ${attempt} failed, retrying...`);
    }
    if (attempt < maxRetries) {
      await new Promise((resolve) => setTimeout(resolve, 1000));
    }
  }
  return null;
}

Let’s go through each step in detail.

Verify the webhook secret and construct the webhook payload

When the webhook request is received, we first verify the webhook secret and construct the webhook payload.

./app/api/convai-webhook/route.ts
// ...

export async function POST(req: NextRequest) {
  const secret = process.env.ELEVENLABS_CONVAI_WEBHOOK_SECRET;
  const { event, error } = await constructWebhookEvent(req, secret);
  // ...
}

// ...
const constructWebhookEvent = async (req: NextRequest, secret?: string) => {
  const body = await req.text();
  const signature_header = req.headers.get('ElevenLabs-Signature');

  if (!signature_header) {
    return { event: null, error: 'Missing signature header' };
  }

  const headers = signature_header.split(',');
  const timestamp = headers.find((e) => e.startsWith('t='))?.substring(2);
  const signature = headers.find((e) => e.startsWith('v0='));

  if (!timestamp || !signature) {
    return { event: null, error: 'Invalid signature format' };
  }

  // Validate timestamp
  const reqTimestamp = Number(timestamp) * 1000;
  const tolerance = Date.now() - 30 * 60 * 1000;
  if (reqTimestamp < tolerance) {
    return { event: null, error: 'Request expired' };
  }

  // Validate hash
  const message = `${timestamp}.${body}`;

  if (!secret) {
    return { event: null, error: 'Webhook secret not configured' };
  }

  const digest = 'v0=' + crypto.createHmac('sha256', secret).update(message).digest('hex');

  if (signature !== digest) {
    return { event: null, error: 'Invalid signature' };
  }

  const event = JSON.parse(body);
  return { event, error: null };
};

Create a custom voice design based on the voice_description

Using the voice_description from the webhook payload, we create a custom voice design.

./app/api/convai-webhook/route.ts
// ...

// Design the voice
const voicePreview = await elevenLabsClient.textToVoice.createPreviews({
  voice_description: analysis.data_collection_results.voice_description.value,
  text: 'The night air carried whispers of betrayal, thick as London fog. I adjusted my cufflinks - after all, even spies must maintain appearances, especially when the game is afoot.',
});
const voice = await elevenLabsClient.textToVoice.createVoiceFromPreview({
  voice_name: `voice-${conversation_id}`,
  voice_description: `Voice for ${conversation_id}`,
  generated_voice_id: voicePreview.previews[0].generated_voice_id,
});

// ...

Retrieve the knowledge base documents from the conversation state stored in Redis

Uploading the documents might take longer than the post-call analysis, so we need to poll the conversation state in Redis until the documents have been uploaded.

./app/api/convai-webhook/route.ts
// ...

// Get the knowledge base from Redis
const redisRes = await getRedisDataWithRetry(conversation_id);
if (!redisRes) throw new Error('Conversation data not found!');
// ...

async function getRedisDataWithRetry(
  conversationId: string,
  maxRetries = 5
): Promise<{
  email: string;
  knowledgeBase: Array<{
    id: string;
    type: 'file' | 'url';
    name: string;
  }>;
} | null> {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      const data = await redis.get(conversationId);
      // redis.get resolves null while the background upload is still running,
      // so keep polling until the conversation state has been written.
      if (data) return data as any;
    } catch (error) {
      if (attempt === maxRetries) throw error;
      console.log(`Redis get attempt ${attempt} failed, retrying...`);
    }
    if (attempt < maxRetries) {
      await new Promise((resolve) => setTimeout(resolve, 1000));
    }
  }
  return null;
}

Create a conversational AI agent for the user based on the agent_description they provided

Create the conversational AI agent for the user based on the agent_description they provided and attach the newly created voice design and knowledge base to the agent.

./app/api/convai-webhook/route.ts
// ...

// Handle agent creation
const agent = await elevenLabsClient.conversationalAi.createAgent({
  name: `Agent for ${conversation_id}`,
  conversation_config: {
    tts: { voice_id: voice.voice_id },
    agent: {
      prompt: {
        prompt:
          analysis.data_collection_results.agent_description?.value ??
          'You are a helpful assistant.',
        knowledge_base: redisRes.knowledgeBase,
      },
      first_message: 'Hello, how can I help you today?',
    },
  },
});
console.log('Agent created', { agent: agent.agent_id });

// ...

Send an email to the user to notify them that their custom conversational AI agent is ready to chat

Once the agent is created, you can send an email to the user to notify them that their custom conversational AI agent is ready to chat.

./app/api/convai-webhook/route.ts
import { Resend } from 'resend';

import { EmailTemplate } from '@/components/email/post-call-webhook-email';

// ...

// Send email to user
console.log('Sending email to', redisRes.email);
await resend.emails.send({
  from: process.env.RESEND_FROM_EMAIL!,
  to: redisRes.email,
  subject: 'Your Conversational AI agent is ready to chat!',
  react: EmailTemplate({ agentId: agent.agent_id }),
});

// ...

You can use new.email, a handy tool from the Resend team, to vibe design your email templates. Once you’re happy with the template, create a new component and add in the agent ID as a prop.

./components/email/post-call-webhook-email.tsx
import {
  Body,
  Button,
  Container,
  Head,
  Html,
  Section,
  Text,
  Tailwind,
} from '@react-email/components';
import * as React from 'react';

const EmailTemplate = (props: { agentId: string }) => {
  const { agentId } = props;
  return (
    <Html>
      <Head />
      <Tailwind>
        <Body className="bg-[#151516] font-sans">
          <Container className="mx-auto my-[40px] max-w-[600px] rounded-[8px] bg-[#0a1929] p-[20px]">
            {/* Top Section */}
            <Section className="mb-[32px] mt-[32px] text-center">
              <Text className="m-0 text-[28px] font-bold text-[#9c27b0]">
                Your Conversational AI agent is ready to chat!
              </Text>
            </Section>

            {/* Content Area with Icon */}
            <Section className="mb-[32px] text-center">
              {/* Circle Icon with Checkmark */}
              <div className="mx-auto mb-[24px] flex h-[80px] w-[80px] items-center justify-center rounded-full bg-gradient-to-r from-[#9c27b0] to-[#3f51b5]">
                <div className="text-[40px] text-white">✓</div>
              </div>

              {/* Descriptive Text */}
              <Text className="mb-[24px] text-[18px] text-white">
                Your Conversational AI agent is ready to chat!
              </Text>
            </Section>

            {/* Call to Action Button */}
            <Section className="mb-[32px] text-center">
              <Button
                href={`https://elevenlabs.io/app/talk-to?agent_id=${agentId}`}
                className="box-border rounded-[8px] bg-[#9c27b0] px-[40px] py-[20px] text-[24px] font-bold text-white no-underline"
              >
                Chat now!
              </Button>
            </Section>

            {/* Footer */}
            <Section className="mt-[40px] border-t border-[#2d3748] pt-[20px] text-center">
              <Text className="m-0 text-[14px] text-white">
                Powered by{' '}
                <a
                  href="https://elevenlabs.io/conversational-ai"
                  target="_blank"
                  rel="noopener noreferrer"
                  className="underline transition-colors hover:text-gray-400"
                >
                  ElevenLabs Conversational AI
                </a>
              </Text>
            </Section>
          </Container>
        </Body>
      </Tailwind>
    </Html>
  );
};

export { EmailTemplate };

Run the app

To run the app locally end-to-end, you will need to first run the Next.js development server, and then in a separate terminal run the ngrok tunnel to expose the webhook handler to the internet.

  • Terminal 1: Run pnpm dev to start the Next.js development server.

$ pnpm dev

  • Terminal 2: Run ngrok http 3000 to expose the webhook handler to the internet.

$ ngrok http 3000

Now open http://localhost:3000 and start designing your custom conversational AI agent, with your voice!

Conclusion

ElevenLabs Conversational AI is a powerful platform for building advanced voice agent use cases, complete with data collection and analysis.