Changelog


Conversational AI

  • Agent monitoring: Added a new dashboard for monitoring conversational AI agents’ activity. Check out yours here.
  • Proactive conversations: Enhanced capabilities with improved timeout retry logic. Learn more
  • Tool calls: Fixed timeout issues occurring during tool calls
  • Allowlist: Fixed implementation of allowlist functionality.
  • Content summarization: Added Gemini as a fallback model to ensure service reliability
  • Widget stability: Fixed issue with dynamic variables causing the Conversational AI widget to fail

Reader

  • Trending content: Added carousel showcasing popular articles and trending content
  • New publications: Introduced dedicated section for recent ElevenReader Publishing releases

Studio (formerly Projects)

  • Projects is now Studio and is generally available to everyone
  • Chapter content editing: Added support for editing chapter content through the public API, enabling programmatic updates to chapter text and metadata
  • GenFM public API: Added public API support for podcast creation through GenFM. Key features include:
    • Conversation mode with configurable host and guest voices
    • URL-based content sourcing
    • Customizable duration and highlights
    • Webhook callbacks for status updates
    • Project snapshot IDs for audio downloads
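As a rough sketch, the GenFM features above could map onto a request body for POST v1/projects/podcast/create like the helper below. The field names (mode, host_voice_id, guest_voice_id, source, duration_scale, callback_url) are assumptions inferred from the feature list, not the authoritative schema; check the API reference before relying on them.

```python
# Hypothetical GenFM podcast-creation body. Field names are assumptions
# inferred from the changelog bullets; verify against the API reference.

def build_podcast_request(host_voice_id, guest_voice_id, source_url,
                          duration_scale="default", callback_url=""):
    """Assemble an illustrative body for POST v1/projects/podcast/create."""
    body = {
        # Conversation mode with configurable host and guest voices
        "mode": {
            "conversation": {
                "host_voice_id": host_voice_id,
                "guest_voice_id": guest_voice_id,
            }
        },
        "source": {"url": source_url},     # URL-based content sourcing
        "duration_scale": duration_scale,  # customizable duration
    }
    if callback_url:
        body["callback_url"] = callback_url  # webhook status updates
    return body

payload = build_podcast_request(
    "HOST_VOICE_ID", "GUEST_VOICE_ID",
    "https://example.com/article",
    callback_url="https://example.com/genfm-webhook",
)
```

The returned project snapshot ID from the creation response is what you would later use for audio downloads.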

SDKs

  • Swift: fixed an issue where resources were not being released after the end of a session
  • Python: added uv support
  • Python: fixed an issue where calls were not ending correctly

API

  • Added POST v1/speech-to-text endpoints supporting speech-to-text transcription
  • Added POST v1/workspace/invites/add-bulk endpoint to enable inviting multiple users simultaneously
  • Added POST v1/projects/podcast/create endpoint for programmatic podcast generation through GenFM
  • Added v1/convai/knowledge-base/:documentation_id endpoints with CRUD operations for Conversational AI
  • Added v1/convai/tools tool management endpoints for extending Conversational AI agent capabilities
  • Added PATCH v1/projects/:project_id/chapters/:chapter_id endpoint for updating project chapter content and metadata
  • Added group_ids parameter to Workspace Invite endpoint for group-based access control
  • Added structured content property to Chapter response objects
  • Added retention_days and delete_transcript_and_pii data retention parameters to Agent creation
  • Added structured response to AudioNative content
  • Added convai_chars_per_minute usage metric to User endpoint
  • Added media_metadata field to Dubbing response objects
  • Added GDPR-compliant deletion_settings to Conversation responses
  • Deprecated Knowledge Base legacy endpoints:
    • POST /v1/convai/agents/{agent_id}/add-to-knowledge-base
    • GET /v1/convai/agents/{agent_id}/knowledge-base/{documentation_id}
  • Updated Agent endpoints with consolidated privacy control parameters
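As a minimal sketch of the new speech-to-text endpoint, the helper below only assembles the request pieces for an HTTP client such as requests. The xi-api-key header is the standard ElevenLabs auth header; the multipart field name "file" is an assumption to verify against the API reference.

```python
# Sketch of calling POST v1/speech-to-text. Only request assembly is shown;
# the multipart field name ("file") is an assumption, not confirmed schema.

API_URL = "https://api.elevenlabs.io/v1/speech-to-text"

def transcription_request_kwargs(api_key, audio_file):
    """Keyword arguments suitable for e.g. requests.post(**kwargs)."""
    return {
        "url": API_URL,
        "headers": {"xi-api-key": api_key},  # standard ElevenLabs auth header
        "files": {"file": audio_file},       # assumed multipart field name
    }

# With a real key and audio file:
#   import requests
#   with open("meeting.wav", "rb") as f:
#       resp = requests.post(**transcription_request_kwargs("YOUR_KEY", f))
#   print(resp.json())
```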

Docs

  • Shipped our new docs: we’re keen to hear your thoughts; you can reach out by opening an issue on GitHub or chatting with us on Discord

Conversational AI

  • Dynamic variables: Available in the dashboard and SDKs. Learn more
  • Interruption handling: Now possible to ignore user interruptions in Conversational AI. Learn more
  • Twilio integration: Shipped changes to increase audio quality when integrating with Twilio
  • Latency optimization: Published detailed blog post on latency optimizations. Read more
  • PCM 8000: Added support for PCM 8000 to Conversational AI agents
  • Websocket improvements: Fixed unexpected websocket closures
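For illustration, dynamic variables are typically supplied in the first message a client sends over the conversation websocket, filling prompt placeholders such as {{user_name}}. The exact message shape below is an assumption based on the conversation-initiation pattern; check the Conversational AI docs before relying on it.

```python
import json

def initiation_message(dynamic_variables):
    """Hypothetical first client message over the conversation websocket,
    carrying values for prompt placeholders such as {{user_name}}."""
    return json.dumps({
        "type": "conversation_initiation_client_data",  # assumed message type
        "dynamic_variables": dynamic_variables,
    })

msg = initiation_message({"user_name": "Ada", "account_tier": "pro"})
```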

Projects

  • Auto-regenerate: Auto-regeneration now available by default at no extra cost
  • Content management: Added updateContent method for dynamic content updates
  • Audio conversion: New auto-convert and auto-publish flags for seamless workflows

API

  • Added Update Project endpoint for project editing
  • Added Update Content endpoint for AudioNative content management
  • Deprecated quality_check_on parameter in project operations. It is now enabled for all users at no extra cost
  • Added apply_text_normalization parameter to project creation, with modes 'auto', 'on', 'apply_english' and 'off' for controlling text normalization
  • Added alpha feature auto_assign_voices in project creation to automatically assign voices to phrases
  • Added auto_convert flag to project creation to automatically convert projects to audio
  • Added support for creating Conversational AI agents with dynamic variables
  • Added voice_slots_used to the Subscription model in the User endpoint to track the number of custom voices used in a workspace
  • Added user_id field to User endpoint
  • Marked legacy AudioNative creation parameters (image, small, sessionization) as deprecated
  • Agents platform now supports call_limits, containing agent_concurrency_limit, daily_limit, or both, to control simultaneous and daily conversation limits for agents
  • Added support for language_presets in conversation_config to customize language-specific settings
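Several of the new project-creation parameters can be combined in one request body. The sketch below uses the parameter names from the changelog (apply_text_normalization, auto_convert, auto_assign_voices); any surrounding structure is an assumption to verify against the API reference.

```python
# Illustrative project-creation body combining the new parameters.
# Parameter names come from the changelog; the overall body shape is
# an assumption, not confirmed schema.

ALLOWED_NORMALIZATION = {"auto", "on", "apply_english", "off"}

def project_creation_payload(name, apply_text_normalization="auto",
                             auto_convert=False, auto_assign_voices=False):
    if apply_text_normalization not in ALLOWED_NORMALIZATION:
        raise ValueError(f"unknown mode: {apply_text_normalization!r}")
    return {
        "name": name,
        "apply_text_normalization": apply_text_normalization,
        "auto_convert": auto_convert,              # convert to audio on create
        "auto_assign_voices": auto_assign_voices,  # alpha feature
    }

payload = project_creation_payload("My audiobook", "apply_english",
                                   auto_convert=True)
```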

SDKs

  • Cross-Runtime Support: Now compatible with Bun 1.1.45+ and Deno 2.1.7+
  • Regenerated SDKs: We regenerated our SDKs to be up to date with the latest API spec. Check out the latest Python SDK release and JS SDK release
  • Dynamic Variables: Fixed an issue where dynamic variables were not being handled correctly; they are now handled correctly in all SDKs

Product

Conversational AI

  • Additional languages: Add a language dropdown to your widget so customers can launch conversations in their preferred language. Learn more here.
  • End call tool: Let the agent automatically end the call with our new “End Call” tool. Learn more here
  • Flash default: Flash, our lowest latency model, is now the default for new agents. In your agent dashboard under “voice”, you can toggle between Turbo and Flash. Learn more about Flash here.
  • Privacy: Set concurrent call and daily call limits, turn off audio recordings, add feedback collection, and define customer terms & conditions.
  • Increased tool limits: The number of tools available to your agent has increased from 5 to 15. Learn more here.


Model

  • Introducing Flash: Our fastest text-to-speech model yet, generating speech in just 75ms. Access it via the API with model IDs eleven_flash_v2 and eleven_flash_v2_5. Perfect for low-latency conversational AI applications. Try it now.

Launches

  • TalkToSanta.io: Experience Conversational AI in action by talking to Santa this holiday season. For every conversation with Santa, we donate $2 to Bridging Voice (up to $11,000).

  • AI Engineer Pack: Get $50+ in credits from leading AI developer tools, including ElevenLabs.



API

  • Credit Usage Limits: Set specific credit limits for API keys to control costs and manage usage across different use cases by setting “Access” or “No Access” to features like Dubbing, Audio Native, and more. Check it out
  • Workspace API Keys: Now support access permissions, such as “Read” or “Read and Write” for User, Workspace, and History resources.
  • Improved Key Management:
    • Redesigned interface moving from modals to dedicated pages
    • Added detailed descriptions and key information
    • Enhanced visibility of key details and settings

Product

  • GenFM: Launched in the ElevenReader app. Learn more

  • Conversational AI: Now generally available to all customers. Try it now

  • TTS Redesign: The website TTS redesign is now rolled out to all customers.

  • Auto-regenerate: Now available in Projects. Learn more

  • Reader Platform Improvements:

    • Improved content sharing with enhanced landing pages and social media previews.
    • Added podcast rating system and improved voice synchronization.
  • Projects revamp:

    • Restore past generations, lock content, assign speakers to sentence fragments, and QC at 2x speed. Learn more
    • Auto-regeneration identifies mispronunciations and regenerates audio at no extra cost. Learn more

API

  • u-law Audio Formats: Added u-law audio formats to the Convai API for integrations with Twilio.
  • TTS Websocket Improvements: Flushes and generation now work more intuitively.
  • TTS Websocket Auto Mode: A streamlined mode for using websockets. This setting reduces latency by disabling chunk scheduling and buffers. Note: Using partial sentences will result in significantly reduced quality.
  • Latency consistency: Improved latency consistency for all models.
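As a sketch of how a client might opt into the streamlined websocket mode, the helper below builds a stream-input URL. The auto_mode query-parameter name and the URL layout are assumptions to verify against the websocket API reference; the behavior described in the comments comes from the changelog.

```python
from urllib.parse import urlencode

def stream_input_url(voice_id, model_id="eleven_flash_v2_5", auto_mode=True):
    """Illustrative websocket URL for realtime TTS. auto_mode (an assumed
    query-parameter name) disables chunk scheduling and buffering to reduce
    latency; send complete sentences, since partial sentences will result
    in significantly reduced quality."""
    query = urlencode({"model_id": model_id,
                       "auto_mode": str(auto_mode).lower()})
    return ("wss://api.elevenlabs.io/v1/text-to-speech/"
            f"{voice_id}/stream-input?{query}")

url = stream_input_url("VOICE_ID")
```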

Website

  • TTS Redesign: The website TTS redesign is now in alpha!

API

  • Normalize Text with the API: Added the option to normalize input text in the TTS API. The new parameter is called apply_text_normalization and works on all non-turbo & non-flash models.
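A minimal sketch of a TTS request body using the new parameter: only text and apply_text_normalization come from the changelog, and the accepted mode values shown in the comment are an assumption to check against the API reference.

```python
# Illustrative TTS request body with the new normalization parameter.
# The set of accepted values is an assumption; verify in the API docs.

def tts_body(text, apply_text_normalization="auto"):
    return {
        "text": text,
        "apply_text_normalization": apply_text_normalization,  # e.g. auto/on/off
    }

body = tts_body("Call me at 555-0100 on Jan 3rd.", "on")
```

With normalization on, numerals and abbreviations in the text are expanded before synthesis (remember it only applies to non-turbo and non-flash models).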

Product

  • Voice Design: The Voice Design feature is now in beta!