Changelog


Model

  • Introducing Flash: Our fastest text-to-speech model yet, generating speech in just 75ms. Access it via the API with model IDs eleven_flash_v2 and eleven_flash_v2_5. Perfect for low-latency conversational AI applications. Try it now.
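
A minimal sketch of targeting Flash through the text-to-speech endpoint. The voice ID and API key are placeholders, and the helper only assembles the request (no network I/O):

```python
import json

API_BASE = "https://api.elevenlabs.io/v1"

def build_flash_tts_request(text, voice_id, api_key,
                            model_id="eleven_flash_v2_5"):
    """Assemble URL, headers, and JSON body for a low-latency
    Flash TTS call. Request construction only; sending it is
    left to your HTTP client of choice."""
    url = f"{API_BASE}/text-to-speech/{voice_id}"
    headers = {"xi-api-key": api_key,
               "Content-Type": "application/json"}
    body = json.dumps({"text": text, "model_id": model_id})
    return url, headers, body

url, headers, body = build_flash_tts_request(
    "Hello from Flash!", "VOICE_ID", "YOUR_API_KEY")
```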

Launches

  • TalkToSanta.io: Experience Conversational AI in action by talking to Santa this holiday season. For every conversation with Santa, we donate $2 to Bridging Voice (up to $11,000).

  • AI Engineer Pack: Get $50+ in credits from leading AI developer tools, including ElevenLabs.



API

  • Credit Usage Limits: Set credit limits on individual API keys to control costs, and grant or deny access (“Access” / “No Access”) to features like Dubbing, Audio Native, and more. Check it out
  • Workspace API Keys: Now support access permissions, such as “Read” or “Read and Write” for User, Workspace, and History resources.
  • Improved Key Management:
    • Redesigned interface moving from modals to dedicated pages
    • Added detailed descriptions and key information
    • Enhanced visibility of key details and settings

Product

  • GenFM: Launched in the ElevenReader app. Learn more

  • Conversational AI: Now generally available to all customers. Try it now

  • TTS Redesign: The website TTS redesign is now rolled out to all customers.

  • Auto-regenerate: Now available in Projects. Learn more

  • Reader Platform Improvements:

    • Improved content sharing with enhanced landing pages and social media previews.
    • Added podcast rating system and improved voice synchronization.
  • Projects revamp:

    • Restore past generations, lock content, assign speakers to sentence fragments, and QC at 2x speed. Learn more
    • Auto-regeneration identifies mispronunciations and regenerates audio at no extra cost. Learn more

API

  • u-law Audio Formats: Added u-law audio formats to the ConvAI API for integrations with Twilio.
  • TTS Websocket Improvements: Flushes and generation now behave more intuitively.
  • TTS Websocket Auto Mode: A streamlined mode for using websockets. This setting reduces latency by disabling chunk scheduling and buffers. Note: Using partial sentences will result in significantly reduced quality.
  • Latency Consistency: Improved latency consistency for all models.
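
One way to opt into the streamlined websocket mode is via a query parameter on the stream-input endpoint. The parameter name `auto_mode` and the URL shape below are assumptions for illustration:

```python
from urllib.parse import urlencode

WS_BASE = "wss://api.elevenlabs.io/v1/text-to-speech"

def build_stream_input_url(voice_id, model_id="eleven_flash_v2_5",
                           auto_mode=True):
    """Build a stream-input websocket URL. `auto_mode` is assumed
    to be the flag that disables chunk scheduling and buffers;
    remember that partial sentences will reduce quality."""
    params = {"model_id": model_id,
              "auto_mode": str(auto_mode).lower()}
    return f"{WS_BASE}/{voice_id}/stream-input?{urlencode(params)}"

ws_url = build_stream_input_url("VOICE_ID")
```

With auto mode on, send complete sentences per message so the server can generate immediately without buffering.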

Website

  • TTS Redesign: The website TTS redesign is now in alpha!

API

  • Normalize Text with the API: Added the option to normalize input text in the TTS API. The new parameter, apply_text_normalization, works on all non-turbo and non-flash models.
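
A sketch of a TTS request body that sets the new parameter. The accepted values shown here ("auto", "on", "off") and the model ID are assumptions:

```python
import json

def build_tts_body(text, model_id="eleven_multilingual_v2",
                   apply_text_normalization="auto"):
    """JSON body for a TTS call with explicit text normalization.
    The "auto" / "on" / "off" values are assumed; per the changelog,
    the parameter applies only to non-turbo, non-flash models."""
    return json.dumps({
        "text": text,
        "model_id": model_id,
        "apply_text_normalization": apply_text_normalization,
    })

norm_body = build_tts_body("Call me at 555-0100",
                           apply_text_normalization="on")
```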

Product

  • Voice Design: The Voice Design feature is now in beta!

Model

  • Stability Improvements: Significant audio stability improvements across all models, most noticeable on turbo_v2 and turbo_v2.5, when using:
    • Websockets
    • Projects
    • Reader app
    • TTS with request stitching
    • ConvAI
  • Latency Improvements: Reduced time-to-first-byte latency by approximately 20–30 ms for all models.

API

  • Remove Background Noise from Voice Samples: Added the ability to remove background noise from voice samples using our audio isolation model, improving quality for IVCs and PVCs at no additional cost.
  • Remove Background Noise from STS Input: Added the ability to remove background noise from STS audio input using our audio isolation model, improving quality at no additional cost.
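
A sketch of how the noise-removal option might be passed when uploading voice samples. The form-field name `remove_background_noise` and the field set below are assumptions; audio files would be attached separately as multipart parts:

```python
def build_add_voice_fields(name, remove_background_noise=True):
    """Form fields for a voice-clone upload. `remove_background_noise`
    is assumed to be a boolean form field that routes the samples
    through the audio isolation model before cloning."""
    return {
        "name": name,
        "remove_background_noise":
            "true" if remove_background_noise else "false",
    }

fields = build_add_voice_fields("My IVC")
```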

Feature

  • Conversational AI Beta: Conversational AI is now in beta.