How ElevenLabs is Preparing for Elections in 2024

Ensuring our systems are developed, deployed, and used safely is our priority

Throughout 2024, a range of elections is set to take place across the world. As they approach, we are focused on advancing the safe and fair use of AI voices.

Our technology was born out of our commitment to breaking down language barriers and fostering global understanding. It continues to find inspiring applications in education, entertainment, and accessibility, while also making content more engaging and inclusive.

However, we recognize the changing landscape of AI technology and its implications for the political process. Ensuring our systems are developed, deployed, and used safely is our priority, and we are taking additional measures to combat misuse and the spread of misinformation as the technology evolves.

Preventing misuse

In our continuing efforts to ensure a positive experience for all users of our platform, we’re taking specific steps to prevent AI voices from being used to spread misinformation. While our terms already prohibit using our platform to impersonate or harm others, we are taking the added measure of introducing a ‘no-go voices’ safeguard. This safeguard is designed to detect and prevent the creation of voices that mimic political candidates actively involved in presidential or prime ministerial elections, starting with those in the US and the UK. We are working to expand this safeguard to other languages and election cycles. We also aim to continually refine this measure through practical testing and feedback. We invite fellow AI companies and partners to collaborate with us on exploring ways to improve and extend these safeguards effectively.
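
As a rough illustration of how a safeguard like this could work, the sketch below compares the speaker embedding of a newly submitted voice against a blocklist of embeddings for protected figures and rejects close matches. The encoder, embedding size, names, and similarity threshold are assumptions for illustration only, not the safeguard's actual implementation.

```python
# Illustrative sketch only: one way a 'no-go voices' check could work is to
# compare the speaker embedding of a newly submitted voice against a blocklist
# of embeddings for protected political figures. The embeddings, names, and
# threshold below are hypothetical placeholders, not the production system.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two speaker-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_no_go_voice(candidate: np.ndarray,
                   blocklist: dict[str, np.ndarray],
                   threshold: float = 0.85) -> tuple[bool, str | None]:
    """Return (True, name) if the submitted voice is too similar to a blocked one."""
    for name, blocked in blocklist.items():
        if cosine_similarity(candidate, blocked) >= threshold:
            return True, name
    return False, None

# Example with random placeholder embeddings standing in for a real encoder.
rng = np.random.default_rng(0)
blocklist = {"candidate_a": rng.normal(size=192), "candidate_b": rng.normal(size=192)}
near_clone = blocklist["candidate_a"] + rng.normal(scale=0.05, size=192)
print(is_no_go_voice(near_clone, blocklist))  # (True, 'candidate_a')
```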

We are also testing new ways to counteract the creation of political content that could either affect participation in the democratic process or mislead voters. Our systems have always allowed us to trace content generated on our platform back to the originating account, and we are developing new moderation and internal review mechanisms to identify and address abuse cases more effectively. Misrepresenting electoral processes, voter eligibility, or the value of voting undermines democracy, and we firmly oppose the use of AI to create confusion or distrust in the democratic system. Using our technology for political campaigning activity that involves impersonating others, creating chatbots, or placing robocalls is a direct violation of our terms.
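
To make the idea of account-level traceability concrete, here is a minimal sketch that logs a fingerprint of each generated output against the originating account. The hash-based ledger and field names are assumptions for illustration, not a description of our production systems.

```python
# Illustrative sketch only: a minimal ledger that keys a fingerprint of each
# generated audio file to the account that created it. The hash-based approach
# and field names are assumptions for illustration.
import hashlib
from datetime import datetime, timezone

generation_ledger: dict[str, dict] = {}

def record_generation(account_id: str, audio_bytes: bytes) -> str:
    """Log a SHA-256 fingerprint of generated audio against the account."""
    fingerprint = hashlib.sha256(audio_bytes).hexdigest()
    generation_ledger[fingerprint] = {
        "account_id": account_id,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    return fingerprint

def trace_audio(audio_bytes: bytes) -> dict | None:
    """Look up which account produced a given audio file, if it is known."""
    return generation_ledger.get(hashlib.sha256(audio_bytes).hexdigest())

# Example usage with placeholder bytes standing in for real audio.
record_generation("acct_123", b"example-audio-bytes")
print(trace_audio(b"example-audio-bytes"))  # {'account_id': 'acct_123', ...}
```

Note that an exact hash only matches bit-identical files, so a production approach would more plausibly rely on robust audio fingerprinting or watermarking that survives re-encoding and editing.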

Transparency over AI-generated content

Enabling clear identification of AI-generated content is a key aspect of our responsible development efforts.

Last June we released the AI Speech Classifier, which lets anyone upload an audio sample and check whether it was generated with ElevenLabs. Our goal is to help prevent the spread of misinformation by making the source of audio content easier to assess. If you're interested in a partnership or an integration, please connect with us at legal@elevenlabs.io. If you come across something concerning on our platform, or that you think was created on our platform, please let us know here.
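
For partners considering an integration, the sketch below shows what submitting an audio file to a classifier over HTTP might look like. The endpoint URL, form field, and response shape are hypothetical placeholders rather than a documented API; actual integration details would be worked out through the partnership contact above.

```python
# Illustrative sketch only: what a partner integration *might* look like when
# submitting audio to an AI speech classifier over HTTP. The URL, form field,
# and response shape are hypothetical placeholders, not a documented API.
import requests

CLASSIFIER_URL = "https://example.com/ai-speech-classifier"  # placeholder endpoint

def classify_audio(path: str) -> dict:
    """Upload an audio file and return the (assumed) JSON verdict."""
    with open(path, "rb") as f:
        response = requests.post(CLASSIFIER_URL, files={"audio": f}, timeout=30)
    response.raise_for_status()
    return response.json()  # e.g. {"likely_ai_generated": True, "confidence": 0.97}

if __name__ == "__main__":
    print(classify_audio("sample.mp3"))
```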

Balancing innovation and responsibility 

We are inspired by the positive applications our community continues to find for our technology. Throughout 2024 and beyond, we remain committed to responsible innovation by maintaining a continuous dialogue with our community, partners, civil society and key stakeholders.
