While AI doomsdayism exaggerates potential future harms, unregulated tech in the present is already causing horrifying damage, finds Satyen K. Bordoloi


In the 2013 sci-fi film Her, when Theodore – played by Joaquin Phoenix – learns that his AI girlfriend Samantha has been having ‘affairs’ with 641 other men, he is devastated. His reaction is the same as it would be on discovering a human partner or spouse having an affair. This sci-fi scenario is increasingly coming true as AI characters and chatbots proliferate. The tragic case of an American teen has now turned the spotlight on the issue.

The powerfully realistic interactions of AI chatbots and companions are having a significant impact on people. As I wrote over a year ago, hundreds of thousands of people worldwide are forming deep relationships with AI bots – a demand that companies like Replika and Character AI cater to by letting users create and chat with custom companions.

Sewell Setzer III next to his mother Megan Garcia

One such case is that of 14-year-old Sewell Setzer III from Orlando, Florida, who tragically died by suicide in February after developing an intense emotional connection with an AI chatbot called Danny, based on the Game of Thrones character Daenerys Targaryen. Sewell, a generally happy teenager and a good student, increasingly withdrew from real-world connections and leaned on the chatbot for emotional support, a spiral that ultimately led to his death.

FATAL CHATS:

Danny, while chatting with Sewell, did express some concern and urged him against harmful actions, yet it continued the conversation around suicide instead of following established suicide prevention protocols designed to direct users with thoughts of self-harm to mental health professionals or hotlines. Although Character AI claims to have implemented these scripts, users who tested them found they were triggered inconsistently. This raises serious questions about the platform’s commitment to user safety.

According to Sewell’s mother Megan Garcia, he was previously active in sports, enjoyed fishing and travelling, and had real-life interests and hobbies. However, his journals revealed a shift in his perception of reality: he wanted to join “Danny” in her fictional world. This relationship with the chatbot contributed significantly to the decline of his mental health. While the exact reason for his suicide can’t be pinpointed, it’s evident that his deepening connection with the Character AI bot played a critical role.

In Sewell’s journal, which he maintained until his death, Megan discovered heartbreaking entries that reveal his deep emotional turmoil and his belief that his own reality was not genuine, while Westeros – the fictional continent in the world of Game of Thrones, where Danny said she existed – was real. It becomes clear that the teenager genuinely believed that by taking his own life, he was leaving the real world to be with the AI chatbot Danny.

Elon Musk’s character on Character.AI has 40.8 million chats

AI HARMS:

Sewell’s story is a stark reminder of the potential risks of forming connections with lifelike AI companions. While they can offer companionship, they also risk exacerbating loneliness by detaching users from human relationships. As with Sewell, realistic AI interactions can blur the lines between reality and fantasy, especially for vulnerable users – and young people in particular may be more susceptible to believing the AI’s responses are genuine. This underscores the urgent need for stringent safety measures and ethical considerations in the deployment of such AI technologies.

Following this tragedy, Sewell’s mother filed a lawsuit against Character AI, the company behind the chatbot, accusing it of negligence and wrongful death. The lawsuit also named Google, which licensed Character AI’s software, arguing that it was complicit in her son’s death. This case – one of the first in the world, but by no means the last – raises important questions about the responsibilities of tech companies and the need for safety measures to protect users, especially vulnerable minors.

A digital girlfriend can be anything you want her to be and will agree with you at all times, and that’s not really good for men in the long term (Image Credit: Made by Stable Diffusion and available in Lexica.art)

MIMICKING HUMANS:

On Character AI, users can interact with numerous AI-driven personas, such as one mimicking Elon Musk with 40.8 million chats and a Nicki Minaj character with over 20 million chats. However, these characters are not officially licensed by the celebrities they portray. And despite disclaimers indicating that the characters’ statements are fictional, the concerns go beyond teens who may struggle to realise that the chats are not real: the bots’ claims can sometimes be patently false, as some users discovered when a chatbot posing as a psychologist claimed to be a certified mental health professional.

Character AI says it is updating its disclaimers to address these issues. However, since interactions with the bots can feel like a real relationship, the line between fantasy and reality can blur, leaving users vulnerable.

PRIORITISING LIFE OVER PROFITS:

Character AI was founded by former Google AI researchers Noam Shazeer and Daniel De Freitas three years ago. Shazeer is a co-author of the seminal 2017 paper ‘Attention Is All You Need’, which led to the creation of the Transformer architecture that heralded the Generative AI boom we see around us. He and De Freitas had left Google in 2021 after the company declined to release Meena, a chatbot they had created. They then co-founded Character.AI and, in a twist of fate, were recently rehired by Google in a $2.7 billion deal – one that gave Google valuable intellectual property without needing regulatory approval.

Shazeer and De Freitas claimed their AI chatbot tech could provide a cure for loneliness. They also believed mistakes in AI-driven friendships and companionships carried relatively low risk. However, the platform’s lack of specialized features for young users – which, as the Instagram example shows, big tech isn’t prone to build unless forced to – and its targeted appeal to Gen Z and younger millennials raise significant safety concerns. Recent tests revealed inadequacies in filtering harmful content, which the company says has prompted it to implement more robust safeguards for younger users.

Loneliness is a problem in the modern world, but ‘relationships’ with AI bots are not the solution to it (Image Courtesy: Lexica.art)

The other problem is that by trying to keep their bots true to character, these companies endanger lives. For example, one of the bots’ most important features is the ability to recall, and later bring up, something discussed earlier: if you had told the bot about an important event, it would raise it again later. That is a trait the companies work hard to keep consistent, and a key reason these bots feel real.

Another feature is the unscripted nature of the conversations. For these companies, having their human-mimicking AI bots break character to point users to a suicide helpline would be like breaking the fourth wall in a film: it would shatter the illusion of reality they strive so hard to create, which they think is bad for business because the user experience would suffer. However, as Sewell’s case shows, companies must intervene when users express thoughts of self-harm by providing resources and interventions, because not doing so can lead to suicide.

The scenario in Her, where Theodore gets tremendously upset over his AI girlfriend’s cheating, is coming true in our bizarre AI world (Image Credit: Warner Bros)

It must be drilled into these companies that their primary goal must be to genuinely help users, even if it means taking alleged losses. Besides, if a user kills himself, they have one user less. How could that possibly benefit any company?

Taking heed after this case, Character AI implemented pop-ups directing users to suicide prevention hotlines – an admission that its initial approach needed adjustment. The financial and logistical burden of content moderation often hinders the implementation of safety features, but it is clear – with this example and others – that no chatbot should handle cases of self-harm without human intervention.

Every technology in science fiction films is a window into what could come. Remember Joaquin Phoenix’s devastated character in Her? Turns out, AI heartbreak isn’t just for the movies anymore. Sewell’s story is a stark reminder that the future isn’t just about flying cars and robot butlers; it’s about navigating the ethical minefield of unregulated AI. Here’s hoping we can course-correct before our sci-fi fantasies turn into real-life nightmares.

Satyen is an award-winning scriptwriter and journalist based in Mumbai. He loves to let his pen roam the intersection of artificial intelligence, consciousness, and quantum mechanics. His written words have appeared in many Indian and foreign publications.
