Post-call webhooks

Get notified when calls end and analysis is complete through webhooks.

Overview

Post-call webhooks allow you to receive detailed information about a call after analysis is complete. When enabled, ElevenLabs will send a POST request to your specified endpoint with comprehensive call data.

ElevenLabs supports two types of post-call webhooks:

  • Transcription webhooks (post_call_transcription): Contains full conversation data including transcripts, analysis results, and metadata
  • Audio webhooks (post_call_audio): Contains minimal data with base64-encoded audio of the full conversation

The webhook data structure differs from the Conversation API GET endpoint response. Webhooks use a specialized data model optimized for event delivery.

Enabling post-call webhooks

Post-call webhooks can be enabled for all agents in your workspace through the Conversational AI settings page.

[Image: Post-call webhook settings]

Post-call webhooks must return a 200 status code to be considered successful. Webhooks that repeatedly fail are automatically disabled when there are 10 or more consecutive failures and either the last successful delivery was more than 7 days ago or delivery has never succeeded.
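
Spelled out, the disable rule combines two conditions. A minimal Python sketch of the logic, for illustration only (these names are not part of any ElevenLabs API):

def should_auto_disable(consecutive_failures, last_success_unix, now_unix):
    SEVEN_DAYS = 7 * 24 * 60 * 60
    never_succeeded = last_success_unix is None
    stale = not never_succeeded and (now_unix - last_success_unix) > SEVEN_DAYS
    # Disabled only when both conditions hold
    return consecutive_failures >= 10 and (never_succeeded or stale)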

For HIPAA compliance, failed webhooks cannot be retried.

Authentication

Your endpoint should validate every incoming webhook. Webhooks currently support authentication via HMAC signatures. To set up HMAC authentication:

  • Securely storing the shared secret generated upon creation of the webhook
  • Verifying the ElevenLabs-Signature header in your endpoint using the shared secret

The ElevenLabs-Signature header takes the following format:

t=timestamp,v0=hash

The hash is the hex-encoded SHA-256 HMAC signature of timestamp.request_body. Both the timestamp and the hash should be validated, as in the following example.

Example Python webhook handler using FastAPI:

from fastapi import FastAPI, Request
import hmac
import os
import time
from hashlib import sha256

app = FastAPI()

# Shared secret generated when the webhook was created
secret = os.environ["ELEVENLABS_WEBHOOK_SECRET"]

# Example webhook handler
@app.post("/webhook")
async def receive_message(request: Request):
    payload = await request.body()
    signature_header = request.headers.get("elevenlabs-signature")
    if signature_header is None:
        return  # consider returning an explicit 401 in production

    # Header format: t=timestamp,v0=hash
    timestamp = signature_header.split(",")[0][2:]
    hmac_signature = signature_header.split(",")[1]

    # Validate timestamp: reject events older than 30 minutes
    tolerance = int(time.time()) - 30 * 60
    if int(timestamp) < tolerance:
        return

    # Validate signature over "timestamp.request_body"
    full_payload_to_sign = f"{timestamp}.{payload.decode('utf-8')}"
    mac = hmac.new(
        key=secret.encode("utf-8"),
        msg=full_payload_to_sign.encode("utf-8"),
        digestmod=sha256,
    )
    digest = "v0=" + mac.hexdigest()
    # Constant-time comparison avoids timing attacks
    if not hmac.compare_digest(hmac_signature, digest):
        return

    # Continue processing

    return {"status": "received"}

IP whitelisting

For additional security, you can whitelist the following static egress IPs from which all ElevenLabs webhook requests originate:

Region | IP Address
US (Default) | 34.67.146.145
US (Default) | 34.59.11.47
EU | 35.204.38.71
EU | 34.147.113.54
Asia | 35.185.187.110
Asia | 35.247.157.189

If your infrastructure requires strict IP-based access controls, adding these IPs to your firewall allowlist will ensure you only receive webhook requests from ElevenLabs’ systems.

These static IPs are used across all ElevenLabs webhook services and will remain consistent. Using IP whitelisting in combination with HMAC signature validation provides multiple layers of security.
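
If you do want to enforce the allowlist in application code, a minimal FastAPI middleware sketch is shown below. It assumes your app sees the real client IP directly (no reverse proxy or load balancer rewriting the source address); in most deployments this check belongs in the firewall or proxy layer instead.

from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse

# Static egress IPs from the table above
ELEVENLABS_EGRESS_IPS = {
    "34.67.146.145", "34.59.11.47",      # US (Default)
    "35.204.38.71", "34.147.113.54",     # EU
    "35.185.187.110", "35.247.157.189",  # Asia
}

app = FastAPI()

@app.middleware("http")
async def restrict_to_elevenlabs(request: Request, call_next):
    # request.client can be None (e.g., with some test clients); treat as untrusted
    client_ip = request.client.host if request.client else None
    if client_ip not in ELEVENLABS_EGRESS_IPS:
        return JSONResponse({"detail": "forbidden"}, status_code=403)
    return await call_next(request)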

Webhook response structure

ElevenLabs sends two distinct types of post-call webhooks, each with different data structures:

Transcription webhooks (post_call_transcription)

Contains comprehensive conversation data including full transcripts, analysis results, and metadata.

Top-level fields

Field | Type | Description
type | string | Type of event (always post_call_transcription)
data | object | Conversation data using the ConversationHistoryCommonModel structure
event_timestamp | number | When this event occurred in unix time UTC

Data object structure

The data object contains:

Field | Type | Description
agent_id | string | The ID of the agent that handled the call
conversation_id | string | Unique identifier for the conversation
status | string | Status of the conversation (e.g., “done”)
user_id | string | User identifier if available
transcript | array | Complete conversation transcript with turns
metadata | object | Call timing, costs, and phone details
analysis | object | Evaluation results and conversation summary
conversation_initiation_client_data | object | Configuration overrides and dynamic variables

Transcription webhooks do NOT include the has_audio, has_user_audio, or has_response_audio fields that are available in the GET Conversation response, but the other information is the same.
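
For typed access to this payload in Python, a lightweight sketch using TypedDict might look like the following. These classes are illustrative only and cover just the fields listed above, not the full ConversationHistoryCommonModel.

from typing import List, TypedDict

class TranscriptionData(TypedDict):
    agent_id: str
    conversation_id: str
    status: str
    user_id: str
    transcript: List[dict]
    metadata: dict
    analysis: dict
    conversation_initiation_client_data: dict

class TranscriptionWebhook(TypedDict):
    type: str              # always "post_call_transcription"
    data: TranscriptionData
    event_timestamp: int   # unix time UTC

# Usage, after HMAC validation and JSON parsing:
# payload: TranscriptionWebhook = json.loads(body)
# summary = payload["data"]["analysis"]["transcript_summary"]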

Audio webhooks (post_call_audio)

Contains minimal data with the full conversation audio as base64-encoded MP3.

Top-level fields

Field | Type | Description
type | string | Type of event (always post_call_audio)
data | object | Minimal audio data
event_timestamp | number | When this event occurred in unix time UTC

Data object structure

The data object contains only:

Field | Type | Description
agent_id | string | The ID of the agent that handled the call
conversation_id | string | Unique identifier for the conversation
full_audio | string | Base64-encoded string containing the complete conversation audio in MP3 format

Audio webhooks contain only the three fields listed above. They do NOT include transcript data, metadata, analysis results, or any other conversation details.
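
A matching sketch for the audio payload, again illustrative only, with the base64 decode step included:

import base64
from typing import TypedDict

class AudioData(TypedDict):
    agent_id: str
    conversation_id: str
    full_audio: str        # base64-encoded MP3

class AudioWebhook(TypedDict):
    type: str              # always "post_call_audio"
    data: AudioData
    event_timestamp: int   # unix time UTC

def decode_audio(payload: AudioWebhook) -> bytes:
    # Recover the raw MP3 bytes from the base64 string
    return base64.b64decode(payload["data"]["full_audio"])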

Example webhook payloads

Transcription webhook example

{
  "type": "post_call_transcription",
  "event_timestamp": 1739537297,
  "data": {
    "agent_id": "xyz",
    "conversation_id": "abc",
    "status": "done",
    "user_id": "user123",
    "transcript": [
      {
        "role": "agent",
        "message": "Hey there angelo. How are you?",
        "tool_calls": null,
        "tool_results": null,
        "feedback": null,
        "time_in_call_secs": 0,
        "conversation_turn_metrics": null
      },
      {
        "role": "user",
        "message": "Hey, can you tell me, like, a fun fact about 11 Labs?",
        "tool_calls": null,
        "tool_results": null,
        "feedback": null,
        "time_in_call_secs": 2,
        "conversation_turn_metrics": null
      },
      {
        "role": "agent",
        "message": "I do not have access to fun facts about Eleven Labs. However, I can share some general information about the company. Eleven Labs is an AI voice technology platform that specializes in voice cloning and text-to-speech...",
        "tool_calls": null,
        "tool_results": null,
        "feedback": null,
        "time_in_call_secs": 9,
        "conversation_turn_metrics": {
          "convai_llm_service_ttfb": {
            "elapsed_time": 0.3704247010173276
          },
          "convai_llm_service_ttf_sentence": {
            "elapsed_time": 0.5551181449554861
          }
        }
      }
    ],
    "metadata": {
      "start_time_unix_secs": 1739537297,
      "call_duration_secs": 22,
      "cost": 296,
      "deletion_settings": {
        "deletion_time_unix_secs": 1802609320,
        "deleted_logs_at_time_unix_secs": null,
        "deleted_audio_at_time_unix_secs": null,
        "deleted_transcript_at_time_unix_secs": null,
        "delete_transcript_and_pii": true,
        "delete_audio": true
      },
      "feedback": {
        "overall_score": null,
        "likes": 0,
        "dislikes": 0
      },
      "authorization_method": "authorization_header",
      "charging": {
        "dev_discount": true
      },
      "termination_reason": ""
    },
    "analysis": {
      "evaluation_criteria_results": {},
      "data_collection_results": {},
      "call_successful": "success",
      "transcript_summary": "The conversation begins with the agent asking how Angelo is, but Angelo redirects the conversation by requesting a fun fact about 11 Labs. The agent acknowledges they don't have specific fun facts about Eleven Labs but offers to provide general information about the company. They briefly describe Eleven Labs as an AI voice technology platform specializing in voice cloning and text-to-speech technology. The conversation is brief and informational, with the agent adapting to the user's request despite not having the exact information asked for."
    },
    "conversation_initiation_client_data": {
      "conversation_config_override": {
        "agent": {
          "prompt": null,
          "first_message": null,
          "language": "en"
        },
        "tts": {
          "voice_id": null
        }
      },
      "custom_llm_extra_body": {},
      "dynamic_variables": {
        "user_name": "angelo"
      }
    }
  }
}

Audio webhook example

{
  "type": "post_call_audio",
  "event_timestamp": 1739537319,
  "data": {
    "agent_id": "xyz",
    "conversation_id": "abc",
    "full_audio": "SUQzBAAAAAAA...base64_encoded_mp3_data...AAAAAAAAAA=="
  }
}

Audio webhook delivery

Audio webhooks are delivered separately from transcription webhooks and contain only the essential fields needed to identify the conversation along with the base64-encoded audio data.

Audio webhooks can be enabled or disabled using the “Send audio data” toggle in your webhook settings. This setting can be configured at both the workspace level (in the Conversational AI settings) and at the agent level (in individual agent webhook overrides).

Streaming delivery

Audio webhooks are delivered as streaming HTTP requests with the transfer-encoding: chunked header to handle large audio files efficiently.

Processing audio webhooks

Since audio webhooks are delivered via chunked transfer encoding, you’ll need to handle streaming data properly:

import base64
import json
from aiohttp import web

async def handle_webhook(request):

    # Check if this is a chunked/streaming request
    if request.headers.get("transfer-encoding", "").lower() == "chunked":
        # Read streaming data in chunks
        chunked_body = bytearray()
        while True:
            chunk = await request.content.read(8192)  # 8KB chunks
            if not chunk:
                break
            chunked_body.extend(chunk)

        # Parse the complete payload
        request_body = json.loads(chunked_body.decode("utf-8"))
    else:
        # Handle regular requests
        body_bytes = await request.read()
        request_body = json.loads(body_bytes.decode("utf-8"))

    # Process different webhook types
    if request_body["type"] == "post_call_transcription":
        # Handle transcription webhook with full conversation data
        handle_transcription_webhook(request_body["data"])
    elif request_body["type"] == "post_call_audio":
        # Handle audio webhook with minimal data
        handle_audio_webhook(request_body["data"])

    return web.json_response({"status": "ok"})

def handle_transcription_webhook(data):
    # Process transcript, analysis, and metadata as needed
    ...

def handle_audio_webhook(data):
    # Decode base64 audio data
    audio_bytes = base64.b64decode(data["full_audio"])

    # Save or process the audio file
    conversation_id = data["conversation_id"]
    with open(f"conversation_{conversation_id}.mp3", "wb") as f:
        f.write(audio_bytes)

Audio webhook payloads can be large, so ensure your webhook endpoint can handle streaming requests and has sufficient memory and storage capacity. The audio is delivered in MP3 format.
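
If holding a multi-megabyte payload in memory is a concern, one option is to spool the streamed body to disk before parsing it. A sketch along the lines of the aiohttp handler above:

import json
import tempfile

async def read_body_to_tempfile(request):
    # Spool the (possibly chunked) body to disk in 8 KB chunks
    # so memory use stays bounded regardless of audio size.
    tmp = tempfile.SpooledTemporaryFile(max_size=1024 * 1024)  # spill to disk past 1 MB
    while True:
        chunk = await request.content.read(8192)
        if not chunk:
            break
        tmp.write(chunk)
    tmp.seek(0)
    # json.load still materializes the parsed object in memory;
    # truly huge payloads would need an incremental JSON parser.
    return json.load(tmp)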

Use cases

Automated call follow-ups

Post-call webhooks enable you to build automated workflows that trigger immediately after a call ends. Here are some practical applications:

CRM integration

Update your customer relationship management system with conversation data as soon as a call completes:

// Example webhook handler
app.post('/webhook/elevenlabs', async (req, res) => {
  // HMAC validation code

  const { data } = req.body;

  // Extract key information
  const userId = data.user_id;
  const transcriptSummary = data.analysis.transcript_summary;
  const callSuccessful = data.analysis.call_successful;

  // Update CRM record
  await updateCustomerRecord(userId, {
    lastInteraction: new Date(),
    conversationSummary: transcriptSummary,
    callOutcome: callSuccessful,
    fullTranscript: data.transcript,
  });

  res.status(200).send('Webhook received');
});

Stateful conversations

Maintain conversation context across multiple interactions by storing and retrieving state:

  1. When a call starts, pass your user id in as a dynamic variable.
  2. When a call ends, have your webhook endpoint store the conversation data in your database, keyed by the user id extracted from the dynamic_variables.
  3. When the user calls again, retrieve this context and pass it into the new conversation as a {{previous_topics}} dynamic variable.
  4. This creates a seamless experience where the agent “remembers” previous interactions.

// Store conversation state when call ends
app.post('/webhook/elevenlabs', async (req, res) => {
  // HMAC validation code

  const { data } = req.body;
  // The user id was passed in as a dynamic variable when the call started
  const userId = data.conversation_initiation_client_data.dynamic_variables.user_id;

  // Store conversation state
  await db.userStates.upsert({
    userId,
    lastConversationId: data.conversation_id,
    lastInteractionTimestamp: data.metadata.start_time_unix_secs,
    conversationHistory: data.transcript,
    previousTopics: extractTopics(data.analysis.transcript_summary),
  });

  res.status(200).send('Webhook received');
});

// When initiating a new call, retrieve and use the state
async function initiateCall(userId) {
  // Get user's conversation state
  const userState = await db.userStates.findOne({ userId });

  // Start new conversation with context from previous calls
  return await elevenlabs.startConversation({
    agent_id: 'xyz',
    conversation_id: generateNewId(),
    dynamic_variables: {
      user_name: userState.name,
      previous_conversation_id: userState.lastConversationId,
      previous_topics: userState.previousTopics.join(', '),
    },
  });
}