ElevenLabs partners with the UK Government on voice AI safety research

UK AI Security Institute researchers will explore the implications of AI voice technology

Logos for ElevenLabs and the UK AISI

ElevenLabs has agreed a new three-year partnership with the UK Government to study the societal and security implications of AI voice technology. We’ll collaborate on research into how effectively people can identify when they are speaking to an AI agent, and whether that knowledge shifts their behavior.

Conversational AI agents will deliver immense benefits – from personalized tutoring to 24/7 customer service – but it’s vital that we ensure the technology isn't used to deceive or exploit. To protect against misuse, we embed comprehensive safeguards directly into our products and work closely with governments to establish AI safety standards.

Under this new memorandum of understanding, we are giving the UK Government access to our frontier voice models for large-scale, controlled research. This work will investigate how well people can distinguish between a human and an AI agent during natural, back-and-forth conversations. Researchers will also examine how people’s behavior and willingness to disclose information change based on whether they believe they are talking to a human or an agent.

This work is a vote of confidence in the UK's AI leadership: a leading AI company choosing to base itself in the UK, and choosing to partner with our world-leading AI Security Institute on serious, substantive, and innovative research. Work and partnerships like this are how we harness AI as a technology people can put their trust in, and which benefits their lives.

— Kanishka Narayan, Minister for AI and Online Safety

Voice is the most natural way to access the full potential of AI, but people will only embrace this shift if they trust the technology. We’ve built safeguards into every part of the ElevenLabs platform, and this new research partnership with the UK Government is another important step to ensure that AI is deployed responsibly.

— Mati Staniszewski, CEO of ElevenLabs

To ensure our technology is used responsibly, we deploy a comprehensive set of safeguards. This includes blocking the cloning of political and celebrity voices, requiring verification for voice cloning, and offering an AI Audio Classifier to help people determine if an audio clip was generated by ElevenLabs.

This partnership follows the launch of ElevenLabs for Government last week, which helps public sector organizations deliver 24/7, multilingual services. Early adopters like the Government of Ukraine are already using this technology to help citizens access essential services, while other agencies are using AI agents to handle approximately 5,000 calls per day.
