
Eleven v3 Audio Tags: Giving situational awareness to AI audio
Enhance AI speech with Eleven v3 Audio Tags. Control tone, emotion, and pacing for natural conversation. Add situational awareness to your text to speech.
ElevenLabs' audio tags control AI voice emotion, pacing, and sound effects.
With the release of Eleven v3, prompting audio has become an essential skill. Instead of simply typing or pasting the words you want the AI voice to say, you can now use a new capability, Audio Tags, to control everything from emotion to delivery.
Eleven v3 is an alpha-release research preview of the new model. It requires more prompt engineering than previous models, but the generations are impressive.
ElevenLabs Audio Tags are words wrapped in square brackets that the new Eleven v3 model can interpret and use to direct the audible action. They can be anything from [excited], [whispers], and [sighs] through to [gunshot], [clapping] and [explosion].
Audio Tags let you shape how AI voices sound, including nonverbal cues like tone, pauses, and pacing. Whether you're building immersive audiobooks, interactive characters, or dialogue-driven media, these simple script-level tools give you precise control over emotion and delivery.
You can place Audio Tags anywhere in your script to shape delivery in real time. You can also use combinations of tags within a script or even a sentence. Tags fall into core categories:
Emotion tags set the emotional tone of the voice, whether somber, intense, or upbeat. For example, you could use one or a combination of [sad], [angry], [happily], and [sorrowful].
Delivery tags are about tone and performance. Use them to adjust volume and energy for scenes that need restraint or force. Examples include [whispers], [shouts], and even [x accent].
Truly natural speech includes reactions. Non-verbal tags add realism by embedding natural, unscripted moments into speech, for example [laughs], [clears throat], and [sighs].
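A short script combining the three categories might look like the sketch below. Because Audio Tags are simply bracketed words in the text, a quick regex can list every tag in a script before you send it off (the script content here is illustrative):

```python
import re

# Example v3 script mixing emotion, delivery, and non-verbal tags.
script = (
    "[whispers] I have a secret to tell you. "
    "[excited] You won't believe what happened! "
    "[laughs] Okay, okay, I'll just say it."
)

# Audio Tags are bracketed words, so a regex can extract them for review.
tags = re.findall(r"\[([^\]]+)\]", script)
print(tags)  # ['whispers', 'excited', 'laughs']
```

Listing the tags this way is a cheap sanity check that brackets are balanced and no tag was mistyped before generation.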
Underpinning these features is the new architecture behind v3. The model understands text context at a deeper level, which means it can follow emotional cues, tone shifts, and speaker transitions more naturally. Combined with Audio Tags, this unlocks greater expressiveness than was previously possible in TTS.
You can now also create multi-speaker dialogues that feel spontaneous — handling interruptions, shifting moods, and conversational nuance with minimal prompting.
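One way to lay out such a dialogue is a simple "Speaker: line" convention, with tags steering each turn. The speaker names and the line format here are illustrative assumptions, not a v3 requirement:

```python
# Hypothetical two-speaker dialogue; tags shift mood turn by turn.
dialogue = [
    ("Anna", "[excited] Did you hear the news?"),
    ("Ben", "[sighs] Not again. [whispers] Tell me quietly."),
    ("Anna", "[laughs] Fine. [whispers] It shipped."),
]

# Flatten into a single script, one line per turn.
script = "\n".join(f"{speaker}: {line}" for speaker, line in dialogue)
print(script)
```

Keeping the dialogue as structured pairs and joining it at the end makes it easy to reorder turns or swap voices without hand-editing a wall of text.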
Professional Voice Clones (PVCs) are not yet fully optimized for Eleven v3, which can result in lower clone quality than with earlier models. During this research preview, it is best to use an Instant Voice Clone (IVC) or a designed voice for your project if you need v3 features. PVC optimization for v3 is coming soon.

Eleven v3 is 80% off until the end of June. The public API for Eleven v3 (alpha) is coming soon; for early access, please contact sales. Whether you're experimenting or deploying at scale, now's the time to explore what's possible.
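Since the public v3 API is still to come, exact parameters may change. As a sketch, a request body for the existing ElevenLabs text-to-speech endpoint could carry Audio Tags directly in the text field; note that the `eleven_v3` model ID is an assumed placeholder, not a confirmed value:

```python
import json

# Hypothetical request body for the ElevenLabs text-to-speech endpoint:
#   POST https://api.elevenlabs.io/v1/text-to-speech/{voice_id}
# "eleven_v3" is a placeholder model_id -- the v3 API is not yet public,
# so check the official docs for the real value once it ships.
payload = {
    "text": "[whispers] Welcome back. [excited] Let's get started!",
    "model_id": "eleven_v3",  # assumed name, for illustration only
}
print(json.dumps(payload, indent=2))
```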