We are committed to ensuring the safe use of our leading audio AI technology.

AI audio helps overcome language and communication barriers, paving the way for a more connected, creative, and productive world. It can also attract bad actors. Our mission is to build and deploy the best audio AI products while continuously improving safeguards to prevent their misuse.

AI safety is inseparable from innovation at ElevenLabs. Ensuring our systems are developed, deployed, and used safely remains at the core of our strategy.

Mati Staniszewski

Co-founder at ElevenLabs

Our mission is to make content accessible in any language and in any voice.

We are a trusted AI audio provider for millions of users around the world, as well as for leading publishing and media companies including The Washington Post, CNN, and HarperCollins Publishers.

ElevenLabs safety in practice

We are guided by three principles to manage risk while ensuring AI audio benefits people worldwide: moderation, accountability, and provenance.

Moderation

We actively monitor content generated with our technology.


Automated moderation. Our automated systems scan generated content for violations of our policies, blocking violating content outright or flagging it for human review.

Human moderation. A growing team of moderators reviews flagged content and helps us ensure that our policies are applied consistently.

No-go voices. Our policies prohibit impersonation, and we also use an additional safety tool to detect and prevent the creation of content with voices deemed especially high-risk.

voiceCAPTCHA. We developed a proprietary voice verification technology that minimizes unauthorized use of voice cloning by ensuring that users of our high-fidelity voice cloning tool can clone only their own voice.

Accountability

We believe misuse must have consequences.


Traceability. When a bad actor misuses our tools, we want to know who they are. Our systems let us trace generated content back to the originating account, and our voice cloning tools are available only to users who have verified their accounts with billing details.

Bans. We want bad actors to know that they have no place on our platform. We permanently ban users who violate our policies.

Partnering with law enforcement. We cooperate with the authorities and, in appropriate cases, report or disclose information about illegal content or activity.

Provenance

We believe that you should know if audio is AI-generated.


AI Speech Classifier. Our detection tool lets anyone check whether an audio file could have been generated with our technology. On unmodified samples, it maintains 99% precision and 80% recall.
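To illustrate what those figures mean, here is a minimal sketch in Python, using made-up confusion-matrix counts rather than ElevenLabs evaluation data, of how precision and recall are computed for a binary "AI-generated or not" classifier: precision is the share of flagged clips that really are AI-generated, and recall is the share of AI-generated clips that get flagged.

```python
# Illustrative counts for a hypothetical test set;
# these numbers are placeholders, not ElevenLabs data.
true_positives = 800   # AI-generated clips correctly flagged
false_positives = 8    # human-recorded clips wrongly flagged
false_negatives = 200  # AI-generated clips the tool missed

# Precision: of everything flagged as AI-generated, how much really was?
precision = true_positives / (true_positives + false_positives)

# Recall: of all AI-generated clips, how many did the tool flag?
recall = true_positives / (true_positives + false_negatives)

print(f"precision = {precision:.1%}, recall = {recall:.1%}")
# precision = 99.0%, recall = 80.0%
```

In practice, high precision means a clip the classifier flags is very unlikely to be a false alarm, while the lower recall reflects that some AI-generated samples, especially modified ones, can slip through undetected.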

AI Detection Standards. We believe that downstream AI detection tools, such as metadata, watermarking, and fingerprinting solutions, are essential. We support the widespread adoption of industry standards for provenance through the Coalition for Content Provenance and Authenticity (C2PA).

Collaboration. We invite fellow AI companies, academia, and policymakers to work together on developing industry-wide methods for AI content detection. We are part of the Content Authenticity Initiative and partner with content distributors and civil society to establish AI content transparency. We also support governmental efforts on AI safety and are a member of the U.S. National Institute of Standards and Technology (NIST) AI Safety Institute Consortium.

The volume of AI-generated content will keep growing. We want to provide the transparency needed to verify the origins of digital content.

Piotr Dąbkowski

Co-founder at ElevenLabs

Special focus: elections in 2024

Half of the world will vote in 2024. To prepare for this year’s elections, we are focused on advancing the safe and fair use of AI voices.

To facilitate these efforts, we are an inaugural signatory to the Tech Accord to Combat Deceptive Use of AI in 2024 Elections, which brings us together with industry leaders such as Amazon, Google, Meta, Microsoft, and OpenAI in a concerted effort to safeguard global elections from AI misuse.

As AI becomes part of our daily lives, we are committed to building trustworthy products and collaborating with partners on developing safeguards against their misuse.

Aleksandra Pedraszewska

AI Safety at ElevenLabs


Compliance

If you come across content that violates our Prohibited Content and Uses Policy and you believe it was created on our platform, please report it here. EU users can notify us of content they believe may constitute illegal content (pursuant to Article 16 of the EU Digital Services Act (DSA)) here. We have also designated a single point of contact (pursuant to Article 12 of the DSA) that EU users can reach with other concerns here.

As part of our commitment to responsible AI, ElevenLabs has established policies concerning cooperation with governmental authorities, including law enforcement agencies. In appropriate cases, this may include reporting or disclosing information about prohibited content, as well as responding to lawful inquiries from law enforcement and other governmental entities. Law enforcement authorities can submit legal inquiries by contacting our legal team here.

Pursuant to Article 11 of the EU Digital Services Act, law enforcement authorities in the EU may direct non-emergency legal process requests to ElevenLabs Sp. z o.o., which has been designated as the single point of contact for direct communications with the European Commission, Member States' authorities, and the European Board for Digital Services, by submitting a DSA request via the form here. Authorities may communicate with ElevenLabs in English or Polish. Where required by applicable law, international legal process must be submitted through a Mutual Legal Assistance Treaty.

If you are an EU user, you have six months to appeal an action ElevenLabs has taken against your content or account. If your account or content has been restricted, you can submit your appeal by responding to the notification you received. If you would like to appeal the outcome of your DSA illegal content report, please use the form here.

EU users can also contact certified out-of-court settlement bodies to help resolve their disputes relating to content or account restrictions, as well as related appeals. Decisions by out-of-court dispute settlement bodies are not binding.