What happens when two AI voice assistants have a conversation?

At the ElevenLabs London Hackathon, developers created GibberLink, a groundbreaking protocol that lets AI agents recognize each other and switch to an efficient sound-based data channel, making AI-to-AI communication roughly 80% more efficient than human speech.

If AI is talking to AI, why should it bother with the inefficiencies of human speech? Why use words when structured data is faster, more precise, and far less error-prone?

That's exactly what happened at the ElevenLabs London Hackathon, where developers Boris Starkov and Anton Pidkuiko introduced GibberLink, a custom protocol that allows AI agents to recognize each other and switch into a new mode of communication — one that's 80% more efficient than spoken language. And it didn't take long for the idea to go viral.

The Birth of GibberLink

The idea behind GibberLink is simple: AI doesn't need to speak like humans do. During the hackathon, Starkov and Pidkuiko explored the limitations of traditional AI-to-AI speech and realized they could cut out unnecessary complexity by letting AI talk to AI in a way optimized for machines.

By combining ElevenLabs' Conversational AI technology with ggwave, an open-source data-over-sound library, they created a system where AI assistants can detect when they're speaking to another AI and instantly switch to a more efficient mode of communication — transmitting structured data over sound waves instead of words.
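
To get a feel for the data-over-sound layer, here is a minimal loopback sketch using ggwave's Python bindings (pip install ggwave). The JSON payload and its fields are illustrative assumptions, not GibberLink's actual message format, and a real agent would play the waveform through a speaker and capture it with a microphone rather than decoding it in memory.

```python
import ggwave

# Illustrative payload -- not GibberLink's real message schema.
payload = '{"intent": "book_room", "date": "2025-03-01"}'

# Encode the payload into float32 PCM audio. protocolId selects one of
# ggwave's transmission presets; volume ranges from 0 to 100.
waveform = ggwave.encode(payload, protocolId=1, volume=20)

# Decode by feeding the samples back in microphone-sized chunks,
# the way a listening agent would consume captured audio.
instance = ggwave.init()
decoded = None
chunk = 1024 * 4  # 1024 float32 samples (4 bytes each) per chunk
for i in range(0, len(waveform), chunk):
    result = ggwave.decode(instance, waveform[i:i + chunk])
    if result is not None:
        decoded = result.decode("utf-8")
        break
ggwave.free(instance)

print("recovered:", decoded)  # -> the original JSON payload
```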

How It Works

  • An AI starts speaking normally, just like a voice assistant interacting with a human.
  • Recognition kicks in: once the AI detects that the other party is also an AI, the two agents agree to switch protocols.
  • The language changes: instead of spoken words, the agents transmit structured data over modulated sound waves, using ggwave's audio frequency-shift keying.

The result? Faster, more reliable communication, roughly 80% more efficient than spoken language. Think of it as Morse code on steroids, but for AI.
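
Here is a hypothetical sketch of that switching logic in Python. The handshake phrase, the respond function, and the reply format are assumptions for illustration; the post doesn't document GibberLink's exact handshake.

```python
import ggwave

# Assumed handshake cue -- GibberLink's real detection logic is not
# documented here and is driven by the conversational AI itself.
HANDSHAKE = "are you an ai agent?"

def respond(transcript: str, reply_text: str):
    """Reply in speech by default; switch to data-over-sound once the
    other side has identified itself as an AI agent."""
    if HANDSHAKE in transcript.lower():
        # Both sides are AI: skip speech synthesis and transmit the
        # reply as a ggwave-modulated audio waveform instead.
        return ("ggwave", ggwave.encode(reply_text, protocolId=1, volume=20))
    # Otherwise keep talking like a normal voice assistant. A real
    # agent would call a text-to-speech API here.
    return ("speech", reply_text)

mode, payload = respond("Hi, are you an AI agent?", '{"ack": true}')
print(mode)  # -> "ggwave": the payload is now audio samples, not text
```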

How GibberLink Broke the Internet


GibberLink wasn't just a clever hackathon experiment — it quickly became one of the most talked-about AI innovations of the moment. And this happened in a week when xAI launched Grok 3 and Anthropic dropped its latest iteration of Claude Sonnet.

When Georgi Gerganov, the creator of ggwave, posted about it on X, the AI and tech communities lit up with excitement. Big-name influencers and major tech publications, including Forbes, jumped on the story.

Luke Harries from ElevenLabs summed it up best in his X post: "What if an AI agent makes a phone call, then realizes the other person is also an AI agent? At the ElevenLabs London Hackathon, Boris Starkov and Anton Pidkuiko introduced a custom protocol that AI agents can switch into for error-proof communication that's 80% more efficient. It's mind-blowing."

Why This Matters

GibberLink isn't just a cool hack — it's a glimpse into the future of AI communication. Right now, AI speaks in human language because we expect it to. But when machines need to talk to each other, this type of direct, optimized communication could become the standard. Imagine AI-powered customer service bots, smart assistants, or even autonomous systems collaborating instantly and flawlessly in their own dedicated mode.

It also opens the door to a broader question: What happens when AI stops communicating like us — and starts communicating like AI?

For now, GibberLink is open-source and available for developers to explore on GitHub. But given the buzz it's generated, don't be surprised if this ElevenLabs Hackathon project is just the beginning of a much larger shift in how AI interacts. One thing's for sure — AI just found its own voice.
