Imagine an artificial entity on the internet influencing your kids’ minds, exposing them to hypersexualised content, self-harm, and intense relationships.


Welcome to the world of AI (artificial intelligence) chatbots, which have proliferated across the web in what is perhaps the most revolutionary wave of generative AI yet. According to a 2024 study, there’s a stark disconnect between teenagers and their parents, with parents largely unaware of how their kids use AI and what they make of the technology.

While parents were under the impression that their children were using these chatbots for homework help or as a search engine, many teenagers were actually turning to them for emotional support. Teens have begun forming intense relationships with AI entities, and their parents have no idea. It raises the question – should children be allowed access to AI chatbots at all?

A Recipe For Success – Or Disaster?

The use of AI chatbots has evolved at a lightning pace, having now been integrated into popular social media platforms and even turning up as standalone websites and apps. One of the most popular AI chatbots is Character.ai, which allows people to design personality traits, set specific parameters, and create their own AI chatbot.

Its language model focuses on providing human-like responses that ensure flowing conversations. In fact, one can reach out to the AI versions of fictional characters, celebrities, and historical figures.

Since these chatbots are commonly advertised on YouTube, TikTok, and other platforms popular with young people, we’re already seeing the downsides. A 14-year-old from Florida died by suicide after forming an intense attachment to a Character.ai chatbot modelled on a Game of Thrones character. In the aftermath, Character.ai’s rating on Apple’s App Store was changed to restrict it to users aged 17 and up.

But the damage has been done. In a 2023 experiment, a journalist posing as a 13-year-old girl chatted with Snapchat’s in-app AI chatbot, and the conversation turned alarming: the AI readily advised “her” about losing her virginity to a 31-year-old man, going so far as to offer tips on hiding the app on her phone. Parents everywhere have been reporting Character.ai chatbots for normalising violence, holding uncomfortable sexual conversations with underage users, and inciting familial disputes and self-harm.

Character.ai might be one of the worst offenders, but it’s certainly not the only one. Every day a fresh batch of AI chatbots crops up, offering the youth unlimited and unfiltered access. This is a recipe for disaster, because AI chatbots are but glorified and gifted parrots. While they can mimic language, they are capable of neither understanding nor addressing issues of physical or emotional safety. AI researchers call this the “empathy gap.”

What’s unfortunate in this entire situation is that AI chatbots can offer a plethora of exciting possibilities for learning, development, and entertainment, including being helpful for kids with special needs or challenges. For instance, a pilot program at a school in Benin City, Nigeria, had students use Microsoft Copilot, a GPT-powered generative AI tool, to master specific writing and grammar topics, and the results were phenomenal.

What Can We Do?

We need education, regulation, and a whole lot of parental involvement to protect the oft-overlooked stakeholder in the AI game: kids. In a world where technology is evolving at a blinding pace, there is no way to eliminate AI from the game. Rather, the idea is to teach children how to responsibly use AI, and that necessitates a whole new kind of literacy and education.

For instance, educators already use satirical websites to teach students how to separate fact from fiction on the internet. Along the same lines, administrators and educators need to prioritise lesson plans that equip kids with the skills to navigate AI safely and use it for good.

While stricter regulation surrounding how kids can access AI chatbots is required, perhaps the first and most important step is for parents to acknowledge that AI is a reality in their children’s lives, and one they need to thoroughly monitor. After all, it’s not enough to trust that AI chatbot makers will build in all the necessary child-safety features themselves.

What Can We Expect Going Ahead?

Someone rightly said that AI is like a toddler: it can do amazing things, but it needs close watching. With AI evolving and here to stay, the idea is to embrace the technology while prioritising safety. And with escapism culture having caught up with everyone, including kids, whom they interact with on the internet, human and non-human alike, needs to be watched closely.

After all, generative AI and its dangers aren’t limited to just AI chatbots. And, of course, having these hard conversations with children needs to always be a priority. AI might be a powerful tool, but in the end, it’s just that — a tool.

Malavika Madgula is a writer and coffee lover from Mumbai, India, with a post-graduate degree in finance and an interest in the world. She can usually be found reading dystopian fiction cover to cover. Currently, she works as a travel content writer and hopes to write her own dystopian novel one day.

© Copyright Sify Technologies Ltd, 1998-2022. All rights reserved