At ElevenLabs, we’re committed to making content accessible and engaging across any language, voice, and sound. We believe deeply in the power of our technology to create a more connected and informed world, and in turn, to be a force for good for the democratic process. We’ve enabled lawmakers to reclaim their voices, helping them continue to fight for their constituents and the issues they hold dear. We’ve done the same for advocates, supporting their efforts to make change and improve lives. And our work to remove barriers to knowledge, participation, and community has only begun.
We also believe that it’s on all of us – industry, governments, civil society, and individuals – to not just promote the democratic process, but to protect it. Throughout 2024, a pivotal election year in countries across the globe, we’ve taken concrete steps designed to prevent the misuse of our tools. In advance of the November 5th U.S. election, here’s an update on our efforts.
Policies
We revised our Prohibited Use Policy to strengthen and clarify our rules regarding the use of our tools in the context of elections.
- Ban on campaigning and the impersonation of candidates. We strictly prohibit the use of our tools for political campaigning, including promoting or advocating for a particular candidate, issue, or position, or soliciting votes or financial contributions. We also prohibit the use of our tools to mimic the voices of political candidates and elected government officials.
- Ban on voter suppression and disruptions of the electoral process. We prohibit the use of our tools to incite, engage in, or facilitate voter suppression or other disruptions of electoral or civic processes. This includes creating, distributing, or facilitating the spread of misleading information.
Prevention
We’ve enhanced our safeguards to prevent the misuse of our tools in the context of elections.
- User screening. All users of our voice cloning tools must provide contact and payment information, which helps us block, at sign-up, accounts linked to fraudulent activity or high-risk geographies.
- “No-Go” Voices. Our No-Go Voice technology blocks the voices of hundreds of candidates, elected officials, other political figures, and celebrities from being generated. We’ve continuously expanded this safeguard to increase its effectiveness, and we monitor and take enforcement action against users who attempt to generate blocked voices. (A purely illustrative sketch of how this kind of safeguard can work appears below.)
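We haven’t published the internals of our No-Go Voice technology. As a purely illustrative sketch of how this kind of safeguard can be built, the snippet below compares a speaker embedding of a requested voice against embeddings of protected voices and refuses generation above a similarity threshold; the embedding size, threshold, and random stand-in data are all hypothetical.

```python
import numpy as np

# Purely illustrative: ElevenLabs has not published how No-Go Voices works.
# A common pattern for a voice denylist is to compare a speaker embedding of
# the requested voice against embeddings of protected voices.

SIMILARITY_THRESHOLD = 0.85  # hypothetical cutoff, tuned in practice


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def is_blocked(candidate: np.ndarray, protected: list[np.ndarray]) -> bool:
    """True if the candidate embedding is too close to any protected voice."""
    return any(cosine_similarity(candidate, p) >= SIMILARITY_THRESHOLD
               for p in protected)


# Demo with random vectors standing in for real speaker embeddings.
rng = np.random.default_rng(0)
protected_voices = [rng.standard_normal(256) for _ in range(3)]
request = protected_voices[0] + 0.05 * rng.standard_normal(256)  # near-match
print(is_blocked(request, protected_voices))  # True: generation refused
```

In production, the embeddings would come from a trained speaker-verification model, and the threshold would be tuned to balance coverage against false positives.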
Detection and Enforcement
To complement our preventative measures, we’ve ramped up our detection and enforcement capabilities.
- Monitoring for misuse. Our improved proprietary classifiers, including one designed specifically to identify political content, work alongside automated moderation and human reviewers to detect content that violates our terms. We also work with external threat intelligence teams that provide insights on potential misuse aimed at disrupting elections, and we have set up information-sharing channels with government and industry partners. (A simplified sketch of this kind of automated screening follows this list.)
- Strong enforcement. When we identify misuse, we take decisive action, including removing voices, placing users on probation, banning users from our platform, and if appropriate, reporting to authorities. If you come across problematic content that you think may have originated from ElevenLabs, please let us know.
- Support for government efforts. We believe that government action is critical to deterring misuse. In July, we were proud to announce our support for the Protect Elections from Deceptive AI Act, bipartisan legislation led by Senators Amy Klobuchar, Josh Hawley, Chris Coons, Susan Collins, Maria Cantwell, Todd Young, John Hickenlooper, and Marsha Blackburn, which would hold accountable bad actors who use AI in campaign ads to deceive voters. We’ll continue to work with policymakers in the U.S. and across the globe on AI safety initiatives, including those related to elections.
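Our production classifiers are proprietary, so the sketch below is only a simplified illustration of the kind of automated screening described above: it runs an off-the-shelf zero-shot text classifier (Hugging Face’s transformers pipeline) over a transcript and escalates likely political content to a human reviewer. The labels and threshold are hypothetical, not our actual moderation settings.

```python
from transformers import pipeline

# Illustrative only: a zero-shot text classifier flags transcripts that look
# political so a human reviewer can check them. Production systems differ.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

LABELS = ["political campaigning", "election information", "everyday conversation"]
REVIEW_THRESHOLD = 0.7  # hypothetical cutoff for escalating to human review


def needs_human_review(transcript: str) -> bool:
    """Escalate if the transcript's top label is political and confident."""
    result = classifier(transcript, candidate_labels=LABELS)
    top_label, top_score = result["labels"][0], result["scores"][0]
    return top_label != "everyday conversation" and top_score >= REVIEW_THRESHOLD


print(needs_human_review("Vote for our candidate on November 5th!"))  # likely True
```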
Provenance and Transparency
Enabling the clear identification of AI-generated content, including through broad collaboration, remains a key aspect of ElevenLabs’ responsible development efforts.
- AI Speech Classifier. Our AI Speech Classifier, available to the public since June 2023, lets anyone upload an audio sample and check whether it was created with ElevenLabs. By making this tool public, we aim to limit the spread of misinformation by making the source of audio easier to identify.
- Collaborating with industry. When it comes to AI content provenance and transparency, we can’t go it alone; an industry-wide effort is a necessity. This is why we are part of the Content Authenticity Initiative (CAI), which unites major media and tech companies to develop standards for provenance. Through the Coalition for Content Provenance and Authenticity (C2PA), we’re implementing these standards by embedding cryptographically signed metadata into the audio generated on our platform. This ensures that content distribution channels that adopt CAI’s tools, like leading social media platforms, can recognize our audio as AI-generated. (A simplified illustration of signed provenance metadata follows this list.)
- Supporting third-party detection tools. We work with third-party AI safety companies to improve their tools for identifying AI-generated content, including election-related deepfakes. In July, we partnered with Reality Defender – a cybersecurity company focused on deepfake detection – providing access to our models and data to strengthen their detection tools. Our partnership helps Reality Defender’s clients, including governments and large enterprises, detect AI-generated threats in real time, shielding people around the world from misinformation and sophisticated fraud. We’re also involved in other academic and commercial projects, including research initiatives at UC Berkeley’s School of Information, to advance AI content detection.
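Real C2PA manifests are COSE-signed structures backed by X.509 certificate chains, so the sketch below is deliberately simplified: it uses an Ed25519 key pair from the Python cryptography package to show the core sign-and-verify principle behind cryptographically signed provenance metadata. The manifest fields are hypothetical.

```python
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Simplified stand-in for C2PA: metadata about a piece of content is signed,
# so any tampering after the fact is detectable. Real manifests use COSE
# signatures and certificate chains rather than a bare Ed25519 key.
signing_key = Ed25519PrivateKey.generate()  # held by the content generator
public_key = signing_key.public_key()       # distributed for verification

manifest = json.dumps({                     # hypothetical field names/values
    "claim_generator": "example-tts-platform",
    "assertion": "audio is AI-generated",
    "created": "2024-10-01T12:00:00Z",
}).encode()

signature = signing_key.sign(manifest)      # embedded alongside the audio

# A distribution platform that receives the audio can verify the manifest:
try:
    public_key.verify(signature, manifest)
    print("Manifest intact: content can be labeled AI-generated")
except InvalidSignature:
    print("Manifest altered or not from the claimed source")
```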
In the days and weeks ahead, we’ll keep learning and refining these safeguards. We recognize that we can’t foresee every way that our tools may be used. But we’ll continue to take robust action to prevent abuse, while deploying our technology to create a more connected and informed world.