Last week at the MIT EmTech Digital conference in London, I participated in a panel focused on how business, government, and academia can collaborate to maximize opportunities and manage challenges associated with advanced AI products.
Alongside ElevenLabs, the panel included leaders from the Alan Turing Institute, the Ada Lovelace Institute, and BT, with MIT Technology Review’s Melissa Heikkilä moderating the discussion.
Three pillars of AI safety

At ElevenLabs, we develop audio AI technology with its impact always in mind. As Head of AI Safety, I focus on empowering creators, businesses, and users while preventing misuse and deterring bad actors. During the panel, I outlined the measures we take to keep ElevenLabs both safe and innovative, and called for prioritizing AI safety challenges. These strategies include:
Provenance: distinguishing AI-generated content from real content by understanding its origins. Upstream AI detection tools, such as classifiers, are probabilistic models trained to recognize AI-generated outputs. At ElevenLabs, we’ve developed the AI Speech Classifier, which lets anyone upload samples to check whether they originate from our platform. We’re also collaborating with Loccus to enhance AI content classification capabilities. Classifiers, however, are not a panacea for provenance; they have limitations. To address them, downstream AI detection methods have emerged, including metadata, watermarks, and fingerprinting solutions. We endorse industry-wide efforts such as C2PA, a cryptographically signed metadata standard, which has the benefit of being open and interoperable and could enable labeling of AI-generated content across major distribution channels like Instagram and Facebook.
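To make the signed-metadata idea concrete, here is a minimal sketch of attaching and verifying a provenance manifest for a piece of generated audio. It is illustrative only: the key, function names, and use of HMAC are assumptions for the example, whereas C2PA itself specifies X.509 certificate chains and a richer manifest format.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-signing-key"  # hypothetical; C2PA uses X.509 certificate chains

def sign_manifest(audio_bytes: bytes, generator: str) -> dict:
    """Attach a provenance manifest to a piece of generated audio."""
    manifest = {
        "generator": generator,
        "content_hash": hashlib.sha256(audio_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(audio_bytes: bytes, manifest: dict) -> bool:
    """Check that the manifest is intact and matches the audio it describes."""
    claimed = dict(manifest)
    signature = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(signature, expected)
        and claimed["content_hash"] == hashlib.sha256(audio_bytes).hexdigest()
    )

audio = b"\x00\x01fake-audio-frames"
m = sign_manifest(audio, "example-tts-model")
print(verify_manifest(audio, m))        # True: intact manifest verifies
print(verify_manifest(b"tampered", m))  # False: altered audio fails the check
```

The appeal of this approach is that verification needs no probabilistic model: anyone holding the manifest can check integrity deterministically, which is why open, interoperable standards matter for distribution platforms.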
Traceability: ensuring that AI-generated content can be traced back to a specific user. At ElevenLabs, our systems can link content generated on the platform to the account that produced it, and our voice cloning tools are available only to users who have verified their identity with banking information. A focus on traceability ensures that every user of an AI platform is accountable for their actions and, where necessary, identifiable to law enforcement.
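A minimal sketch of the traceability idea, assuming a content-hash ledger: each generated clip is indexed by its hash so a suspect clip found in the wild can be mapped back to the originating account. The in-memory dictionary and function names are hypothetical; a production system would use a durable, access-controlled store.

```python
import hashlib
from datetime import datetime, timezone

# Hypothetical in-memory ledger mapping content hashes to account records.
generation_ledger: dict = {}

def record_generation(account_id: str, audio_bytes: bytes) -> str:
    """Index generated audio by its hash so it can be traced to an account."""
    content_id = hashlib.sha256(audio_bytes).hexdigest()
    generation_ledger[content_id] = {
        "account_id": account_id,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    return content_id

def trace(audio_bytes: bytes):
    """Given a suspect clip, return the account that produced it, if known."""
    entry = generation_ledger.get(hashlib.sha256(audio_bytes).hexdigest())
    return entry["account_id"] if entry else None

clip = b"generated-speech-frames"
record_generation("acct_42", clip)
print(trace(clip))        # acct_42
print(trace(b"unknown"))  # None
```

Hash-based lookup only matches bit-identical copies; that is why it complements, rather than replaces, the watermarking and classifier techniques described above.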
Moderation: defining clear policies on acceptable content and use, and preventing users from generating content that does not comply with those policies. At ElevenLabs, we use automated systems to scan, flag, and block inappropriate content, and human moderators review flagged content to ensure consistent policy enforcement. We are continually advancing our moderation technology to prevent the generation of content that could harm public trust or safety. Hosted moderation endpoints, such as the one provided by OpenAI, make it easy to integrate prompt moderation into any AI application.
Working together towards a common goal
While we prioritize safe AI development, no company can tackle AI misuse alone. At the panel, we recognized the need for widespread collaboration among tech companies, governments, civil rights groups, and the public to confront social and environmental changes and ensure a secure digital future. At ElevenLabs, we’re dedicated to maintaining open dialogue in our community and supporting initiatives like the Content Authenticity Initiative and the Elections Safety Accord to combat misinformation.
Looking ahead on AI safety

The panel agreed that AI products must be developed and used safely, while still supporting innovative and diverse applications. At ElevenLabs, our platform is often used to make digital content more accessible to people who need audio narration, and to give new voices to those who have lost theirs to conditions such as ALS. Advancing AI adoption requires raising public awareness of AI-generated media, encouraging critical engagement with it, promoting authenticity-verification tools, and strengthening AI ethics education for the public and institutions alike.