It would seem that the much-feared death by AI prophesied in dystopian cinema has already begun, but there is more here than meets the eye, finds Satyen K. Bordoloi
In the Terminator film series, the aim of the Artificial Intelligence machines is to exterminate humans. In the Matrix world, they use humans as batteries. In films like 2001: A Space Odyssey (1968) or Ex Machina, it is individuals who are at risk. As you can guess, harmful AI has been a favorite film trope since long before the real advent of AI, which began only in the 2010s.
AI watchers, though, are keenly aware that while those movies are exaggerations, the dystopian vision they foresaw is nonetheless coming true – slowly and stealthily.
In November 2017, 14-year-old Molly Russell ended her life. Seeking answers, her parents examined her social media accounts and were stunned to find she had been consuming self-harm content unchecked for at least six months. After years of struggling to hold companies accountable, the family’s efforts paid off last month when a coroner at an inquest in the North London Coroner’s Court concluded that Molly had been suffering from the “negative effects of online content.”
So, what has AI got to do with it? The answer is everything.
The rabbit hole of ‘preferred’ content
When you first start using anything online – be it social media like Twitter, Facebook, Instagram or TikTok, or services like Google News, YouTube and Netflix – your every move and click is recorded. Not by humans, but by the algorithms inside those systems. Every little action you take on these platforms – what you like, what you subscribe to, how long you watch or read something – is judged by Artificial Intelligence systems, and whatever you spend the most time with, similar content is pushed at you.
Search for or engage with suicide-related content a few times, and the algorithm will judge that you ‘prefer’ such content and push more of it at you. Web 1.0 began with the success of Google helping us search; Web 3.0 is about finding without searching, with apps pushing ‘preferred’ content at you before you even look for it.
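The feedback loop described above is easy to sketch in code. Below is a deliberately simplified, hypothetical recommender – none of these names or numbers correspond to any real platform’s systems – that weighs topics by watch time and fills the feed in proportion to that inferred ‘preference’, so a handful of long engagements is enough for one topic to crowd out everything else:

```python
from collections import Counter

class FeedRecommender:
    """Toy engagement-driven recommender (illustrative only): the more a
    user engages with a topic, the more of that topic the feed serves,
    creating a self-reinforcing loop."""

    def __init__(self, topics):
        # every topic starts with a tiny baseline weight so it can appear
        self.weights = Counter({topic: 1 for topic in topics})

    def record_engagement(self, topic, seconds_watched):
        # longer watch time -> stronger inferred 'preference'
        self.weights[topic] += seconds_watched

    def next_feed(self, size=10):
        # fill feed slots proportionally to each topic's share of weight
        total = sum(self.weights.values())
        feed = []
        for topic, weight in self.weights.most_common():
            slots = round(size * weight / total)
            feed.extend([topic] * slots)
        return feed[:size]

recommender = FeedRecommender(["sports", "music", "news", "conspiracy"])

# just three long engagements with one topic...
for _ in range(3):
    recommender.record_engagement("conspiracy", 120)

feed = recommender.next_feed(size=10)
# ...and that topic now fills nearly the entire feed
print(feed.count("conspiracy"), "of", len(feed), "items")
```

Real recommendation systems are vastly more complex, but the core dynamic is the same: the feed optimizes for predicted engagement, not for the user’s wellbeing, and the bubble tightens with every click.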
AI-guided obsession
Take one of my father’s friends, who lives in North Lakhimpur, in the North Eastern corner of India. When he got his first touchscreen phone five years ago, he happened to click on a few videos about aliens on YouTube. The algorithm began recommending similar videos. By the time he talked to me about this, a few months before the pandemic, he had become convinced that aliens live and work amidst us in large numbers.
Perennially interested in semiotics, I checked some of the videos he’d been watching and realized most of them contained some facts – up to 10% – while the rest was pure fiction. But those 10% provable facts lent credibility to the fictional bits. Worst of all, alien conspiracy videos were all his YouTube feed showed him, creating a bubble around him. He mistook his bubble for the world.
This uncle’s AI-guided obsession is relatively harmless; the most he will suffer is embarrassment. But imagine 14-year-old Molly Russell being led, like my uncle, down a rabbit hole of self-harm and suicide content, most of it romanticizing the act, until it was all she could see in her social media feeds. Before she acted on it in real life, suicide had become her digital reality.
Mass Hypnosis
Take Molly’s case to a national scale and you get mass hypnosis and hysteria. Consider the fake news that the election was stolen from him, which ex-US president Donald Trump promotes without an iota of proof. Videos, images, news reports and conspiracy theories built on his blatant lies run in the millions across all platforms. Click on a few and you’ll be sucked deep into these conspiracies until they become your only reality, because you’ll see nothing else.
In Trump’s case, the 10% truth comes from his being the ex-president. Though they know most politicians lie, his followers cannot fathom that a man who held such a high office could tell so big a lie. So they take his words as gospel, believe he has been wronged, and, with their sense of justice triggered, are ready to fight to the death for him.
American activists on the other side of the spectrum call them stupid, moronic, blind, illiterate and worse. Ironically, these liberals are trapped in their own bubble of woke extremism, built from social justice and outrage videos – again pushed by Artificial Intelligence – in which semantics, i.e. using the right words and terms, often matters more than actual action on the ground. They villainize and demonize the other side, wondering how it can possibly not get it.
These social justice warriors, even though they might be right to a great extent, do not realise that they are equally intolerant and half responsible for perpetuating the cycle of intolerance and violence. They forget that change comes from patient dialogue with someone you disagree with, not from ‘naming and shaming’ them for the slightest infringement.
The result: the US is looking at a civil-war-like situation because both sides are equally strong, and neither realizes it has been pushed to this state by Artificial Intelligence. No solution is in sight short of taking AI-driven recommendations out of the equation.
In nations unlike the US, where one side is stronger than the other, the result is outright genocide, as in Myanmar, or something developing into one, as in dozens of countries across the world. Fringe ideas and fake news are thrown to the top of the algorithm’s recommendations because, after users have seen and engaged with a few such pieces of content, it adjudicates that this is what people prefer.
The most extreme proven case of death by algorithm was recorded in Myanmar, where Facebook’s refusal to moderate content (ironically, in the name of free speech) led to an ocean of Islamophobic content that contributed to the killing of at least 25,000 Rohingya Muslims. In many other nations, including India, similarly pushed content on various platforms – often created by political parties keen to push their agenda – is leading to the villainization of particular religions or castes, and to violence against them that has caused at least dozens of recorded deaths yearly in the decade since AI came to the fore.
So, is this proof that AI has already begun killing people, as prophesied in the films? Yes and no. Yes, because AI is indirectly responsible. No, because AI cannot make a conscious choice. It cannot make decisions independently, unlike what a Terminator or the machines of the Matrix would have you believe. AI only follows its programming, its training data, and parameters set by humans.
Thus, to say that AI killed Molly or the 25,000 Rohingyas in Myanmar is like saying a knife killed a person, when in truth the person wielding it is responsible.
Plausible Deniability
The biggest problem is that many of these social media companies refuse to do much about it. Consider how they responded to the coroner, who concluded that Molly was drawn in by algorithms that showed her “images, video clips and text concerning or concerned with self-harm, suicide or that were otherwise negative or depressing in nature… some of which were selected and provided without Molly requesting them”. In the coroner’s words: “Molly Rose Russell died from an act of self-harm whilst suffering from depression and the negative effects of online content”.
The evidence presented was so horrific that a child psychiatrist who testified told The Guardian: “There were periods where I was not able to sleep well for a few weeks, so bearing in mind that the child saw this over a period of months I can only say that she was (affected) – especially bearing in mind that she was a depressed 14-year-old.”
However, what was most chilling was how the representatives of the two companies particularly singled out – Pinterest and Instagram – responded. The Pinterest representative, after initial denials, admitted that “10 depression pins you might like” is not something its algorithm should have recommended to Molly.
A Meta representative, however, refused to admit the material was unsafe for children. Even when shown that of the 16,300 posts Molly saved, shared or liked on Instagram in her last six months, 2,100 were related to depression, self-harm and suicide, the representative maintained: “Yes, it is safe.”
Facebook and Mark Zuckerberg are famous for brushing aside activists’ legitimate concerns – either with false promises of action that never materializes (getting the semantics right to silence activists while getting the actions wrong) or, as in Molly’s case, by refusing outright to admit anything is wrong.
It is not that these companies do not know the harm their algorithms cause. Amnesty International found that Meta knew, writing in a report: “Internal studies dating back to 2012 indicated that Meta knew its algorithms could result in serious real-world harms. In 2016, Meta’s own research clearly acknowledged that ‘our recommendation systems grow the problem’ of extremism.” Yet Facebook/Meta and its chief Mark Zuckerberg refused to act.
It is the Zuckerbergs of the world (his own employees call him ‘The Eye of Sauron’, after the epitome of evil in The Lord of the Rings) who cause the deaths attributed to AI – not AI itself.
Terminator and The Matrix were exaggerated warnings that are coming true. AI has already begun killing us. But unlike in those films, the machines are not acting on free will. They function on the borrowed will of the humans behind them. You can kill a T-800 terminator in the films. But what will you do about the Mark Zuckerbergs of the world, so stubbornly blind in their pursuit of profit that they turn away from all the deaths they are responsible for?
We must never lose sight of the simple fact that Artificial Intelligence is the most powerful tool humans have created so far. It has the potential to do tremendous good – as it already is doing – but in the wrong hands of stubborn men and women, it can destroy our planet.
We must quickly correct the mistakes we are making with AI for another reason: an even more powerful tool is under development – Quantum Computing – which could soon lead to the emergence of Quantum Intelligence. If we cannot even handle AI, what will we do when exponentially better and faster QI becomes a reality?