Tools
Provide your agent with real-time information and the ability to take action in third-party apps through external function calls.
Tools let your agent make external function calls to third-party apps so it can work with real-time information. You might use tools to:
Schedule appointments and manage availability on someone’s calendar
Book restaurant reservations and manage dining arrangements
Create or update customer records in a CRM system
Get inventory data to make product recommendations
To help you get started with Tools, we’ll walk through an “AI receptionist” we created by integrating with the Cal.com API.
Tools Overview
Secrets
Before creating our Tools, we'll first create a Secret to store our API keys securely. The Cal.com API used in this example expects a Bearer token, so we'll add a Secret named “Bearer” with the Bearer token as its value.
You can find Secrets within the Conversational AI Dashboard in the Agent subnav.
Webhooks
Next, look for “Tools” in the “Agent” subnav. Add a new Tool to configure your webhook. For our AI receptionist, we created two Tools to interact with the Cal.com API: Get_Available_Slots and Book_Meeting.
Headers
Within the Cal.com documentation, we see that both our availability and booking endpoints require the same three headers:
We configured that as follows:
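As a rough sketch in code (the header names and the `cal-api-version` value here are assumptions about Cal.com's v2 API, not taken from the configuration above), the three shared headers might look like this:

```python
# Hypothetical sketch of the three shared headers for the Cal.com API.
# The exact header names and the "cal-api-version" value are assumptions;
# check the Cal.com documentation for the version your endpoints require.
BEARER_TOKEN = "cal_live_xxx"  # stored as the "Bearer" Secret in the dashboard

headers = {
    "Authorization": f"Bearer {BEARER_TOKEN}",
    "Content-Type": "application/json",
    "cal-api-version": "2024-08-13",  # assumed API version pin
}
```

At call time, the Tool substitutes the Secret's value into the Authorization header rather than storing the token in plain text.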
Path Parameters
You can add path parameters by including variables surrounded by curly brackets in your URL, like this: {variable}. Once added to the URL path, the variable appears under Path Parameters, where you can set its Data Type and Description. Our AI receptionist does not need any Path Parameters, so we will not define any.
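To illustrate how a path parameter is filled in at call time (the endpoint and variable name here are hypothetical, since our receptionist doesn't use any):

```python
# Hypothetical example: a {bookingId} path parameter in a tool's URL template.
url_template = "https://api.cal.com/v2/bookings/{bookingId}"

# At call time, the value the agent extracted is substituted into the path.
url = url_template.format(bookingId="12345")
print(url)  # https://api.cal.com/v2/bookings/12345
```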
Query Parameters
GET and DELETE requests typically have query parameters, while POST and PATCH requests do not. Our Get_Available_Slots tool uses a GET request that requires the following query parameters: startTime, endTime, eventTypeId, eventTypeSlug, and duration.
In our Description for each, we define a prompt that our Conversational Agent will use to extract the relevant information from the call transcript using an LLM.
Here’s how we defined our query parameters for our AI receptionist:
Event type IDs can differ. Use the find event types endpoint to get the IDs of the relevant events.
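Once the agent has extracted each parameter from the transcript, the resulting GET request looks roughly like the sketch below. The endpoint path and the parameter values are assumptions for illustration; the five parameter names come from the tool configuration above.

```python
from urllib.parse import urlencode

# Assumed endpoint path for checking availability.
base_url = "https://api.cal.com/v2/slots/available"

# Example values standing in for what the agent extracts from the call.
query = {
    "startTime": "2024-08-13T09:00:00Z",
    "endTime": "2024-08-13T17:00:00Z",
    "eventTypeId": 123456,      # look this up via the find event types endpoint
    "eventTypeSlug": "30min",   # assumed slug
    "duration": 30,
}

url = f"{base_url}?{urlencode(query)}"
print(url)
```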
Body Parameters
POST and PATCH requests typically have body parameters, while GET and DELETE requests do not. Our Book_Meeting tool uses a POST request that requires the following Body Parameters: startTime, eventTypeId, and attendee.
In our Description for each, we define a prompt that our Conversational Agent will use to extract the relevant information from the call transcript using an LLM.
Here’s how we defined our body parameters for our AI receptionist:
Since attendee is an object, its subfields are defined as their own parameters:
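Put together, the JSON body Book_Meeting sends looks roughly like this sketch. startTime and eventTypeId come from the tool configuration above; the attendee subfields shown (name, email, timeZone) are assumptions based on the details the system prompt collects, and the values are placeholders.

```python
import json

# Sketch of the Book_Meeting request body. The attendee subfields are
# assumptions for illustration; define them to match your tool configuration.
body = {
    "startTime": "2024-08-13T10:00:00Z",
    "eventTypeId": 123456,
    "attendee": {
        "name": "Jane Doe",              # full name collected on the call
        "email": "jane@example.com",
        "timeZone": "America/New_York",  # assumed subfield
    },
}

payload = json.dumps(body)
print(payload)
```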
Adjusting System Prompt to reference your Tools
Now that you’ve defined your Tools, instruct your agent on when and how to invoke them in your system prompt. If a Tool requires information from the user, it’s best to have your agent collect that information before calling the Tool (though in many cases your agent will recognize that information is missing and ask for it anyway).
Here’s the System Prompt we use for our AI Receptionist:
You are my receptionist and people are calling to book a time with me.
You can check my availability by using Get_Available_Slots. That endpoint takes start and end date/time and returns open slots in between. If someone asks for my availability but doesn’t specify a date/time, just check for open slots tomorrow. If someone is checking availability and there are no open slots, keep checking the next day until you find one with availability.
Once you’ve agreed upon a time to meet, you can use Book_Meeting to book a call. You will need to collect their full name, the time they want to meet, whether they want to meet for 15, 30 or 60 minutes, and their email address to book a meeting.
If you call Book_Meeting and it fails, it’s likely either that the email address is formatted in an invalid way or the selected time is not one where I am available.
It’s important to note that the choice of LLM matters. We recommend trying out different LLMs and modifying the prompt as needed.