
Talk to a Statue: Building A Multi-Modal ElevenAgents-Powered App


Photograph a statue. Identify the figures depicted. Then have a real-time voice conversation with them - each character speaking in a distinct, period-appropriate voice.

That is what you can build with ElevenLabs' Voice Design and Agent APIs. In this post, we walk through the architecture of a mobile web app that combines computer vision with voice generation to turn public monuments into interactive experiences. Everything here is replicable with the APIs and code samples below.

Skip the tutorial - build it in one prompt

The entire app below was built from a single prompt, tested to one-shot successfully in Cursor with Claude Opus 4.5 (high) starting from an empty NextJS project. If you want to skip ahead and build your own, paste this into your editor:

We need to make an app that:
- is optimised for mobile
- allows the user to take a picture (of a statue, picture, monument, etc) that includes one or more people
- uses an OpenAI LLM api call to identify the statue/monument/picture, characters within it, the location, and name
- allows the user to check it's correct, and then do either a deep research or a standard search to get information about the characters and the statue's history, and its current location
- then create an ElevenLabs agent (allowing multiple voices), that the user can then talk to as though they're talking to the characters in the statue. Each character should use voice designer api to create a matching voice.
The purpose is to be fun and educational.

https://elevenlabs.io/docs/eleven-api/guides/cookbooks/voices/voice-design
https://elevenlabs.io/docs/eleven-agents/quickstart
https://elevenlabs.io/docs/api-reference/agents/create


You can also use the ElevenLabs Agent Skills instead of linking to the docs. These are based on the docs and can yield even better results.

The rest of this post breaks down what that prompt produces.

How it works

The pipeline has five stages:

  1. Capture an image
  2. Identify the artwork and its characters (OpenAI)
  3. Research the history (OpenAI)
  4. Generate unique voices for each character (ElevenLabs Voice Design)
  5. Start a real-time voice conversation over WebRTC (ElevenAgents)

Identifying the statue with vision

When a user photographs a statue, the image is sent to an OpenAI vision-capable model. A structured system prompt extracts the artwork name, location, artist, date, and - critically - a detailed voice description for each character. The system prompt includes the expected JSON output format:

{
  "statueName": "string - name of the statue, monument, or artwork",
  "location": "string - where it is located (city, country)",
  "artist": "string - the creator of the artwork",
  "year": "string - year completed or unveiled",
  "description": "string - brief description of the artwork and its historical significance",
  "characters": [
    {
      "name": "string - character name",
      "description": "string - who this person was and their historical significance",
      "era": "string - time period they lived in",
      "voiceDescription": "string - detailed voice description for Voice Design API (include audio quality marker, age, gender, vocal qualities, accent, pacing, and personality)"
    }
  ]
}
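
The full system prompt isn't reproduced here, but a minimal sketch of what SYSTEM_PROMPT (referenced in the call below) could look like is shown next. The wording is illustrative rather than the app's actual prompt, and JSON_SCHEMA is assumed to hold the schema above as a string.

// Sketch only - the prompt wording is illustrative, not the app's actual prompt.
// JSON_SCHEMA is assumed to contain the schema shown above as a string.
const SYSTEM_PROMPT = `You are an expert art historian. Identify the statue, monument,
or artwork in the user's photo and every person depicted in it.

Respond ONLY with valid JSON in exactly this shape:
${JSON_SCHEMA}

For each character, write a "voiceDescription" suitable for the ElevenLabs Voice Design API:
start with an audio quality marker, then cover age, gender, vocal qualities, accent,
pacing, and personality.`;

The prompt and image are then sent together in a single vision call: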

const response = await openai.chat.completions.create({
  model: "gpt-5.2",
  response_format: { type: "json_object" },
  messages: [
    { role: "system", content: SYSTEM_PROMPT },
    {
      role: "user",
      content: [
        {
          type: "text",
          text: "Identify this statue/monument/artwork and all characters depicted.",
        },
        {
          type: "image_url",
          image_url: {
            url: `data:image/jpeg;base64,${base64Data}`,
            detail: "high",
          },
        },
      ],
    },
  ],
  max_completion_tokens: 2500,
});

For a photograph of the Boudica statue on Westminster Bridge, London, the response looks like this:

{
  "statueName": "Boudica and Her Daughters",
  "location": "Westminster Bridge, London, UK",
  "artist": "Thomas Thornycroft",
  "year": "1902",
  "description": "Bronze statue depicting Queen Boudica riding a war chariot with her two daughters, commemorating her uprising against Roman occupation of Britain.",
  "characters": [
    {
      "name": "Boudica",
      "description": "Queen of the Iceni tribe who led an uprising against Roman occupation",
      "era": "Ancient Britain, 60-61 AD",
      "voiceDescription": "Perfect audio quality. A powerful woman in her 30s with a deep, resonant voice and a thick Celtic British accent. Her tone is commanding and fierce, with a booming quality that projects authority. She speaks at a measured, deliberate pace with passionate intensity."
    },
    // Other characters in the statue
  ]
}

Writing effective voice descriptions

The quality of the voice description directly determines the quality of the generated voice. The Voice Design prompting guide covers this in detail, but the key attributes to include are: audio quality marker ("Perfect audio quality."), age and gender, tone/timbre (deep, resonant, gravelly), a precise accent ("thick Celtic British accent" rather than just "British"), and pacing. More descriptive prompts yield more accurate results - "a tired New Yorker in her 60s with a dry sense of humor" will outperform "an older female voice" every time.

A few things worth noting from the guide: use "thick" rather than "strong" when describing accent prominence, avoid vague terms like "foreign," and for fictional or historical characters you can suggest real-world accents as inspiration (e.g., "an ancient Celtic queen with a thick British accent, regal and commanding").
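
Putting those guidelines together, a well-formed description reads something like the following (an illustrative example, not one generated by the app):

// Illustrative voice description following the guidelines above.
const voiceDescription =
  "Perfect audio quality. A weathered man in his 60s with a low, gravelly voice and a " +
  "thick Yorkshire accent. He speaks slowly and deliberately, with dry humour and the " +
  "quiet confidence of someone used to being listened to.";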

Creating character voices with Voice Design

The Voice Design API generates new synthetic voices from text descriptions - no voice samples or cloning required. This makes it well-suited for historical figures where source audio does not exist.

The process has two steps.

Generate previews

const { previews } = await elevenlabs.textToVoice.design({
  modelId: "eleven_multilingual_ttv_v2",
  voiceDescription: character.voiceDescription,
  text: sampleText,
});

The text parameter matters. Longer, character-appropriate sample text (50+ words) produces more stable results - match the dialogue to the character rather than using a generic greeting. The Voice Design prompting guide covers this in more detail.
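
For example, a character-matched sample for the Boudica voice could look like this (the wording is illustrative; sampleText is the variable passed to design() above):

// Illustrative 50+ word sample text matched to the character, per the guidance above.
const sampleText =
  "I am Boudica, queen of the Iceni. When Rome broke its word and took our lands, " +
  "I did not kneel. I raised the tribes of Britain and burned their cities to the ground. " +
  "Ask me what it costs to defy an empire and I will tell you plainly: everything - " +
  "and it was worth every drop.";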

Save the voice

Once previews are generated, select one and create a permanent voice:

const voice = await elevenlabs.textToVoice.create({
  voiceName: `StatueScanner - ${character.name}`,
  voiceDescription: character.voiceDescription,
  generatedVoiceId: previews[0].generatedVoiceId,
});

For multi-character statues, voice creation runs in parallel. Five characters' voices generate in roughly the same time as one:

const results = await Promise.all(
  characters.map((character) => createVoiceForCharacter(character))
);
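
createVoiceForCharacter isn't shown above; a minimal sketch that chains the two steps - preview, then save - might look like this. The Character type, the buildSampleText helper, and the voiceId field on the returned voice are assumptions.

// Sketch only: designs a preview and saves it as a permanent voice for one character.
type Character = {
  name: string;
  description: string;
  era: string;
  voiceDescription: string;
};

async function createVoiceForCharacter(character: Character) {
  const { previews } = await elevenlabs.textToVoice.design({
    modelId: "eleven_multilingual_ttv_v2",
    voiceDescription: character.voiceDescription,
    text: buildSampleText(character), // assumed helper returning character-appropriate dialogue
  });

  const voice = await elevenlabs.textToVoice.create({
    voiceName: `StatueScanner - ${character.name}`,
    voiceDescription: character.voiceDescription,
    generatedVoiceId: previews[0].generatedVoiceId,
  });

  // Carry the new voice ID alongside the character for the agent config later.
  return { ...character, voiceId: voice.voiceId };
}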

Building a multi-voice ElevenLabs Agent

With voices created, the next step is configuring an ElevenLabs Agent that can switch between character voices in real time.

const agent = await elevenlabs.conversationalAi.agents.create({
  name: `Statue Scanner - ${statueName}`,
  tags: ["statue-scanner"],
  conversationConfig: {
    agent: {
      firstMessage,
      language: "en",
      prompt: {
        prompt: systemPrompt,
        temperature: 0.7,
      },
    },
    tts: {
      voiceId: primaryCharacter.voiceId,
      modelId: "eleven_v3",
      supportedVoices: otherCharacters.map((c) => ({
        voiceId: c.voiceId,
        label: c.name,
        description: c.voiceDescription,
      })),
    },
    turn: {
      turnTimeout: 10,
    },
    conversation: {
      maxDurationSeconds: 600,
    },
  },
});

Multi-voice switching

The supportedVoices array tells the agent which voices are available. The Agents platform handles voice switching automatically - when the LLM's response indicates a different character is speaking, the TTS engine routes that segment to the correct voice.
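
In the config above, primaryCharacter and otherCharacters come from splitting the results array produced by the parallel voice creation step; a sketch (the split itself is an assumption about how the app wires it):

// Sketch: the first voiced character supplies the agent's default voice,
// the rest are exposed via supportedVoices so the agent can switch to them.
const [primaryCharacter, ...otherCharacters] = results;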

Prompt engineering for group conversations

Making multiple characters feel like a real group - rather than a sequential Q&A - requires deliberate prompt design:

const multiCharacterRules = `
MULTI-CHARACTER DYNAMICS:
You are playing ALL ${characters.length} characters simultaneously.
Make this feel like a group conversation, not an interview.

- Characters should interrupt each other:
  "Actually, if I may -" / "Wait, I must say -"

- React to what others say:
  "Well said." / "I disagree with that..." / "Always so modest..."

- Have side conversations:
  "Do you remember when -" / "Tell them about the time you -"

The goal is for users to feel like they are witnessing a real exchange
between people who happen to include them.
`;
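
These rules are combined with the character biographies and statue details to form the systemPrompt passed to the agent config earlier. A sketch of that composition (the exact wording is an assumption):

// Sketch: compose the agent's system prompt from the statue context,
// per-character biographies, and the group-dynamics rules above.
const characterBios = characters
  .map((c) => `${c.name} (${c.era}): ${c.description}`)
  .join("\n");

const systemPrompt = `You are roleplaying ALL of the characters depicted in "${statueName}".

CHARACTERS:
${characterBios}

${multiCharacterRules}

Stay in character at all times, keep answers conversational, and never break the illusion.`;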

Real-time voice over WebRTC

The final piece is the client connection. ElevenLabs Agents support WebRTC for low-latency voice conversations - noticeably faster than WebSocket-based connections, which matters for natural turn-taking.

Server-side: get a conversation token

const { token } = await client.conversationalAi.conversations.getWebrtcToken({
  agentId,
});
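
In a NextJS app this typically lives in a server route so the API key never reaches the browser. A sketch, assuming the @elevenlabs/elevenlabs-js SDK, an ELEVENLABS_API_KEY environment variable, and a route at app/api/conversation-token/route.ts:

// app/api/conversation-token/route.ts - sketch only
import { NextResponse } from "next/server";
import { ElevenLabsClient } from "@elevenlabs/elevenlabs-js";

const client = new ElevenLabsClient({ apiKey: process.env.ELEVENLABS_API_KEY });

export async function POST(request: Request) {
  const { agentId } = await request.json();

  // Mint a short-lived WebRTC token server-side.
  const { token } = await client.conversationalAi.conversations.getWebrtcToken({
    agentId,
  });

  return NextResponse.json({ token });
}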

Client-side: start the session

import { useConversation } from "@elevenlabs/react";

const conversation = useConversation({
  onConnect: () => setIsSessionActive(true),
  onDisconnect: () => setIsSessionActive(false),
  onMessage: (message) => {
    if (message.source === "ai") {
      setMessages((prev) => [...prev, { role: "agent", text: message.message }]);
    }
  },
});

await conversation.startSession({
  agentId,
  conversationToken: token,
  connectionType: "webrtc",
});

The useConversation hook handles audio capture, streaming, voice activity detection, and playback.
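
Wiring this into the UI is then a matter of fetching the token and starting the session on a user gesture, since browsers require one before microphone capture. A sketch, assuming the /api/conversation-token route from the server-side step above and the conversation object from useConversation:

// Sketch: start the conversation from a button press.
async function handleStartConversation(agentId: string) {
  // Prompt for microphone permission up front.
  await navigator.mediaDevices.getUserMedia({ audio: true });

  const res = await fetch("/api/conversation-token", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ agentId }),
  });
  const { token } = await res.json();

  await conversation.startSession({
    agentId,
    conversationToken: token,
    connectionType: "webrtc",
  });
}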

For users who want more historical context before starting a conversation, you can add an enhanced research mode using OpenAI's web search tool:

const response = await openai.responses.create({
  model: "gpt-5.2",
  instructions: RESEARCH_SYSTEM_PROMPT,
  tools: [{ type: "web_search_preview" }],
  input: `Research ${identification.statueName}. Search for current information
including location, visiting hours, and recent news about the artwork.`,
});

What we learned

This project shows that by combining different modalities of AI - text, research, vision, and audio - we can build experiences that bridge the digital and physical worlds. There's a lot of untapped potential in multi-modal agents, and we'd love to see more people explore it for education, work, and fun.

Start building

The APIs used in this project - Voice Design, ElevenAgents, and OpenAI - are all available today.
