Safety

AI audio built to unlock possibilities and positive impact, guided by responsibility and safeguards that protect people from misuse.

Our Safety Mission

At ElevenLabs, we believe deeply in the immense benefits of AI audio. Our technology is used by millions of individuals and thousands of businesses to make content and information accessible to audiences for whom it was previously out of reach, to create engaging education tools, to power immersive entertainment experiences, to bring voices back for people who have lost the ability to speak due to accident or illness, and so much more. 

As with all transformational technologies, we also recognize that when technology is misused, it can cause harm. That’s why we are committed to protecting against the misuse of our models and products, especially efforts to deceive or exploit others. Our safety principles guide our everyday work and are reflected in concrete, multi-layered safeguards designed to prevent and address abuse.

“AI safety is inseparable from innovation at ElevenLabs. Ensuring our systems are developed, deployed, and used safely remains at the core of our strategy.”

Mati Staniszewski

Co-founder at ElevenLabs

“The volume of AI-generated content will keep growing. We want to provide the needed transparency, helping verify the origins of digital content.”

Piotr Dąbkowski

Co-founder at ElevenLabs

Our Safety Principles

Our safety program is guided by the following principles:

[Illustration: our safety principles]

Our Safeguards

We strive to maximize friction for bad actors attempting to misuse our tools, while maintaining a seamless experience for legitimate users. We recognize that no safety system is perfect: on occasion, safeguards may mistakenly block good actors or fail to catch malicious ones.

We deploy a comprehensive set of safeguards in a multi-layered defense system. If one layer is bypassed, the layers beyond it are in place to catch the misuse. Our safety mechanisms continuously evolve to keep pace with advancements in our models, products, and adversarial tactics.

Inform

We incorporate third-party standards such as C2PA and support external efforts to enhance deepfake-detection tools. We have publicly released an industry-leading AI Speech Classifier to help others determine whether a piece of content was generated using ElevenLabs.

Enforce

Customers who violate our Prohibited Usage Policy are subject to enforcement actions, including bans for persistent or serious violators. We refer criminal and other illegal activity to law enforcement.

Detect

We actively monitor our platform for violations of our Prohibited Usage Policy, leveraging AI classifiers, human reviewers, and internal investigations. We partner with external organizations to obtain insights about potential misuse and have established a mechanism through which the public can report abuse.

Prevent

We red-team our models prior to release and vet our customers at sign-up. We also embed product features to deter bad or irresponsible actors, including blocking the cloning of celebrity and other high-risk voices and requiring technological verification for access to our Professional Voice Cloning tool.

Safety Partnership Program

We support leading organizations in developing technical solutions to detect deepfakes in real time.

Report Content

If you find content that raises concerns and you believe it was created with our tools, please report it here.

Prohibited content & uses policy

Learn about the types of content and activities that are not allowed when using our tools.

ElevenLabs AI Speech Classifier

Our AI Speech Classifier lets you detect whether an audio clip was created using ElevenLabs.

Coalition for Content Provenance and Authenticity

An open technical standard providing the ability to trace the origin of media.

Content Authenticity Initiative

Promoting the adoption of an open industry standard for content authenticity and provenance.
