A recent AI communications demo went viral for all the wrong reasons—it stoked fears of an apocalypse… again. Satyen K. Bordoloi digs into why we can’t seem to quit our AI doomsday obsession.


If the medieval world had the plague, the 21st century has AI phobia—as viral, and as dangerous. Every time AI takes a step forward, the world screams: The machines are coming for us! The latest victim is a simple product demo video, released at the end of February, that sent the internet into a tailspin, with people convinced, once again, that AI was plotting humanity’s downfall behind our backs.

The video that launched a thousand ships and threatened to burn the towers of AI

The Gibber Link Incident: Boris Starkov and Anton Pidkuiko, two developers, identified an opportunity in the growing use of AI agents for phone calls. They asked themselves a simple question: Could these agents communicate more efficiently with each other? Having agents talk to each other in plain English was slow, expensive, and prone to errors. Their solution: shift from speech-level to sound-level communication.

The result was Gibber Link, a communication protocol that lets AI agents talk to humans normally but switches to a sound-level protocol when talking to each other. To humans, it sounds like gibberish—hence the name—but to AI agents with the code, it’s crystal clear. Built on the GGWave library, Boris and Anton claim Gibber Link can reduce compute costs by over 90%, cut communication time by up to 80%, and work flawlessly even in noisy environments. It even has the potential to transmit images and structured data (like JSON), hinting at far more sophistication in future AI interactions.
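To get a sense of what sound-level messaging looks like in practice, here is a minimal sketch using the Python bindings of GGWave, the open-source library Gibber Link is built on. The JSON payload, chunk size, and direct loopback below are illustrative assumptions on my part; the demo’s actual handshake and message format are not reproduced here.

```python
# A minimal sketch of sound-level messaging with the GGWave library's
# Python bindings (pip install ggwave). The JSON payload and the direct
# loopback are illustrative; Gibber Link's own handshake is not shown.
import json
import ggwave

# Structured data such as JSON can ride on the same audio channel as text.
payload = json.dumps({"intent": "book_table", "date": "2025-03-01", "guests": 2})

# Encode the payload into an audio waveform (raw float32 samples).
# protocolId selects one of GGWave's transmission modes; volume is 0-100.
waveform = ggwave.encode(payload, protocolId=1, volume=20)

# A receiver feeds captured audio to a decoder instance chunk by chunk;
# here the encoded waveform is looped back in instead of using a microphone.
instance = ggwave.init()
chunk = 4 * 1024  # bytes per chunk (1024 float32 samples)
for i in range(0, len(waveform), chunk):
    decoded = ggwave.decode(instance, waveform[i:i + chunk])
    if decoded is not None:
        print("received:", json.loads(decoded.decode("utf-8")))
ggwave.free(instance)
```

In a real deployment the transmitting agent would play the waveform through a speaker and the receiving agent would feed microphone audio into the decoder, but the principle is the same: short data payloads carried as audible tones.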

Fearing AI is like fearing a toaster—it’s just a tool doing what it was designed to do

To demonstrate their creation, Starkov and Pidkuiko released a 69-second video of two AI agents chatting away until they realise they are not talking to a human. They then switch to their secret language, a series of beeps and clicks that sound like modems from the ’90s and are just as nonsensical to our ears, but easily decipherable to the agents. The video went viral, and the internet collectively lost its mind. “AI is conspiring against us!” screamed tweets and comments. “The machines are out of control!”

Let me give an analogy to illustrate why this reaction is absurd. Would you freak out if your toaster toasted bread exactly as it was designed to? Yet, when it comes to AI, we’re quick to believe the wildest stories, thanks largely to decades of sci-fi franchises like The Terminator and The Matrix that have trained us to tap into the oldest, most primal part of our brain for this most modern of technologies. But why? Why do we keep anthropomorphising AI, treating it like some malevolent force even when we know it’s just a tool we’ve built ourselves?

Facial recognition systems—a real-world example of AI bias and its harmful consequences

The Ancient Fear of the Unknown: Our fear of AI is as old as storytelling itself. From the golem of Jewish folklore—a clay creature brought to life by mystical mantras—to Mary Shelley’s Frankenstein, we have always been both fascinated and terrified by the idea of creating life (or something lifelike) that could surpass our control. This fear isn’t rational; it’s primal, rooted in the “lizard brain”—the ancient part of our mind that screams Danger! at anything unfamiliar.

When we hear two AI agents chatting in a language we don’t understand, that lizard brain kicks into overdrive. But here’s the thing: AI isn’t plotting anything. It’s just doing what it was programmed to do. Our fear, while deeply ingrained, is also patently misguided. It distracts us from the real, immediate harms of AI and leads us down a path of anthropomorphisation that can do more harm than good.

The ‘lizard brain’—our primal instincts that trigger fear of the unknown, including AI

Giving AI a Face It Doesn’t Have: Humans have a weird habit of anthropomorphising everything—from pets to cars to, yes, AI. We give AI names (think Siri and Alexa), personalities, and even genders. We talk about AI “learning,” “thinking,” and “making decisions” as if it were a person. It’s an understandable impulse, for it is how we make sense of the world. But it’s also incredibly misleading.

When an AI system recommends a product, denies a loan, or filters a job candidate, we describe it as if the AI is exercising judgment. But here’s the truth: AI doesn’t “decide” anything. It processes data according to predefined algorithms and statistical models. There’s no intent, consciousness, or moral reasoning behind the output, just a series of complex statistical calculations in the backend. By anthropomorphising this process, we risk attributing qualities AI doesn’t possess, which can lead both to misplaced trust, as with AI “girlfriends,” and to unnecessary fear.
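To make that concrete, here is a toy sketch, with entirely made-up feature names and weights, of what a loan “decision” amounts to under the hood: a weighted sum pushed through a squashing function and compared against a cutoff that humans chose. No real lender’s model is being shown here.

```python
# A toy illustration (hypothetical weights, not any real lender's model):
# the "decision" is a weighted sum, a squashing function, and a threshold.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical learned weights for [income, debt_ratio, years_employed].
weights = np.array([0.8, -1.5, 0.4])
bias = -0.2

def loan_score(features):
    """Return a probability-like score for one applicant."""
    return sigmoid(np.dot(weights, features) + bias)

applicant = np.array([1.2, 0.6, 0.5])   # normalised, made-up numbers
score = loan_score(applicant)

# The "judgment" is just a comparison against a cutoff chosen by people.
decision = "approve" if score >= 0.5 else "deny"
print(f"score={score:.2f} -> {decision}")
```

There is no deliberation anywhere in that pipeline, which is precisely the point: the choices that matter, the features, the weights, the threshold, were all made by people.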

The Jewish story of the golem, a clay creature brought to life by mystical mantras, may be the oldest tale expressing our terror at the idea of creating life

This isn’t just harmless, entertaining storytelling; it can do real harm. When we describe AI as “biased,” we often let the humans who designed and deployed it off the hook. Bias in AI doesn’t come from some inherent flaw in the machine; it reflects the biases in the data it was trained on, in the people who created it, and in the way they trained it. By framing the issue as if the AI itself is the problem, we ignore the systemic issues that need to be addressed.

The Real Harms of AI: While we get busy worrying about AI secretly conspiring against us, we ignore the real, immediate harm it’s causing right now. These harms have nothing to do with AI becoming too intelligent or too autonomous but with how AI is designed, deployed, and regulated—or, more often, not regulated.

A pressing problem is algorithmic bias. AI systems are only as good as the data they’re trained on. If the data reflects historical inequalities, the AI will perpetuate and, at times, even exacerbate them. Facial recognition systems have repeatedly been shown to be less accurate on people with darker skin tones, leading to wrongful arrests and other injustices. This isn’t because the AI is “racist” in the way a human is, but because the training data wasn’t representative of the diverse populations it would encounter.
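The mechanism is easy to demonstrate. Below is a toy simulation on synthetic data, not a real recognition system: a classifier trained on data dominated by one group performs noticeably worse on an under-represented group whose examples it has barely seen.

```python
# A toy simulation with synthetic data (not a real recognition system):
# a model trained mostly on group A does worse on group B, simply because
# B was barely represented in the training set.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def make_group(n, boundary):
    """Made-up data: the true label depends on a group-specific boundary."""
    X = rng.normal(size=(n, 3))
    y = (X[:, 0] > boundary).astype(int)
    return X, y

# Group A dominates the training set; group B is barely present.
XA_train, yA_train = make_group(2000, boundary=0.0)
XB_train, yB_train = make_group(50, boundary=1.0)
X_train = np.vstack([XA_train, XB_train])
y_train = np.concatenate([yA_train, yB_train])

model = LogisticRegression().fit(X_train, y_train)

XA_test, yA_test = make_group(1000, boundary=0.0)
XB_test, yB_test = make_group(1000, boundary=1.0)
print("accuracy on group A:", model.score(XA_test, yA_test))  # high
print("accuracy on group B:", model.score(XB_test, yB_test))  # noticeably lower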

AI’s impact on privacy—collecting and analysing personal data without consent

Job displacement caused by AI is another huge concern. As AI becomes more capable, it is increasingly being used to automate tasks once performed by humans. While this can lead to efficiency and lower costs, it also threatens to displace millions. The economic and social consequences of this could be devastating, yet the issue receives far less attention than the hypothetical threat of AI “taking over.”

Then there’s privacy. AI systems collect, analyse, and monetise vast amounts of personal data, often without the knowledge or consent of the individuals involved. This erosion of privacy has profound implications for our autonomy and governance, yet it is overshadowed by more sensational concerns about AI.

Anthropomorphizing AI—giving machines human traits they don’t possess

The Danger of Distraction: The fear of AI “taking over” isn’t just irrational; it’s a distraction. By fixating on hypothetical future scenarios, we neglect current, real problems. This isn’t to say we shouldn’t think about the long-term consequences of AI but that we need to do so in a way that’s grounded in reality, not science fiction.

One way to do this is to shift our focus to understanding AI as a tool. Tools don’t have individual morality and are only as good or evil as the people who use them. By framing AI as a tool, we can better understand its potential benefits and risks and hold the people who design and deploy it accountable for its impacts.

AI algorithms work exactly as they are supposed to by processing data, not making human-like decisions

Another path is to prioritise transparency and accountability. Too often, AI systems operate as “black boxes,” making decisions that are opaque even to their creators. This lack of transparency makes it difficult to identify and address issues like bias and discrimination. By requiring greater transparency and accountability, we can ensure that AI is used in ways that are fair, ethical, and beneficial to society.
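One modest but concrete form of transparency is being able to show, for each individual decision, which inputs drove the score. The sketch below reuses the hypothetical linear loan model from earlier to print per-feature contributions; real systems need far richer attribution tools, so treat this only as an illustration of the idea.

```python
# A minimal sketch of "opening the box" for a linear model: report each
# feature's contribution to a single score so the decision can be audited.
# The feature names and weights are the same hypothetical ones as above.
import numpy as np

feature_names = ["income", "debt_ratio", "years_employed"]
weights = np.array([0.8, -1.5, 0.4])   # hypothetical learned weights
bias = -0.2

def explain(applicant):
    """Return the raw score and each feature's signed contribution to it."""
    contributions = weights * applicant
    score = contributions.sum() + bias
    report = sorted(zip(feature_names, contributions), key=lambda kv: -abs(kv[1]))
    return score, report

score, report = explain(np.array([1.2, 0.6, 0.5]))
print(f"raw score: {score:.2f}")
for name, value in report:
    print(f"  {name:>15}: {value:+.2f}")
```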

Ancient fears mixed with AI anthropomorphisation make a heady cocktail that can knock our judgement off balance. So, the next time you hear about AI agents communicating in a language you don’t understand, remember: it’s not a conspiracy; it’s just a tool doing what it’s designed to do. And remember that to reap AI’s benefits, we must focus on real issues, not on those that live only in our imaginations.


Satyen is an award-winning scriptwriter and journalist based in Mumbai. He loves to let his pen roam the intersection of artificial intelligence, consciousness, and quantum mechanics. His written words have appeared in many Indian and foreign publications.
