Like the witch-hunts that burnt intelligent humans at the stake in the past, a witch-hunt of artificial ‘intelligence’ is now afoot – shamefully led by some of our smartest people and driven by primal instincts, writes Satyen K. Bordoloi.
Before the Age of Enlightenment, a person with better intelligence than those around them faced one of two fates. If the pack found a way to use that intelligence, they celebrated them. If they feared it because they did not understand it, they branded them black magicians and witches, and lynched them. Intelligence has always inspired opposing primitive instincts: reverence or murderous rage.
In our age of hyper-enlightenment, intelligence – we assume – will always be celebrated. Yet thousands of our smartest people are so scared of ChatGPT-type ‘intelligence’ that they are asking for all AI development to be halted for six months. Their petition begins: “AI systems with human-competitive intelligence can pose profound risks to society and humanity.”
It makes some valid points, like the need to minimise AI harms. But the message the media is transmitting, and the public receiving, is of a rampaging AI that must be stopped to save humanity. Some, like Eliezer Yudkowsky, are literally asking for its lynching, writing in Time Magazine: “Everyone on Earth will die. Not as in ‘maybe possibly some remote chance,’ but as in ‘that is the obvious thing that would happen.'” His reasoning: “Visualize an entire alien civilization, thinking at millions of times human speeds, initially confined to computers – in a world of creatures that are, from its perspective, very stupid and very slow. A sufficiently intelligent AI won’t stay confined to computers for long.”
Humans have always feared what is different, what we do not understand. Xenophobia – the fear of the other – is partly that. Any group of people different from us – especially those smarter, richer, or more capable than us – we fear, and that fear has led to violence and genocide. That this fear of intelligence is now manifesting in a non-biological form – digital artificial intelligence – does not mask the primitive instincts triggering it.
The irony is that we understand what AI is better than we understand ourselves.
What is AI
The classical computing system invented in the 1940s and later built with transistors is a linear, vertical arrangement of seven basic logic gates: AND, OR, NOR, NOT, NAND, XOR, and XNOR. The Artificial Neural Networks (ANNs) that make up all AI are collections of nodes, or synapses, built on that same digital foundation – but instead of running in one direction, each node connects in different directions to hundreds, thousands, or more others, forming a mesh called a ‘neural network’. Each node makes decisions not just through a combination of those logic gates but also by taking the ‘weight’ of previous decisions into consideration.
Thus, if seven out of ten nodes in a layer find one set of logic useful for solving the question or problem asked of them, that is what gets passed on to the nodes in the next layer, and so on, until the final decision is not just logical but seemingly ‘intelligent’, because it has been through a rigorous test across multiple layers. The network’s output is then compared with the expected answer, and through ‘backpropagation’ the weights on the nodes are updated to minimise prediction errors in the future. This is the ‘learning’ in ‘machine learning’ and ‘deep learning’.
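To make that loop concrete, here is a minimal sketch in Python (not from the article; every function name, layer size and number in it is illustrative) of a tiny network of weighted nodes learning the classic XOR problem by passing decisions forward and pushing errors back:

# A toy 'neural network': 2 inputs -> 4 hidden nodes -> 1 output,
# learning XOR through a forward pass followed by a backpropagation-
# style weight update. Purely illustrative; real AI systems juggle
# millions or billions of such weights.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Training data: XOR outputs 1 only when the two inputs differ.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised 'weights' - the numbers the network will learn.
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))   # input  -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))   # hidden -> output

lr = 0.5  # learning rate: how big a nudge each weight gets per step
for step in range(20000):
    # Forward pass: every node weighs its inputs and 'decides'.
    hidden = sigmoid(X @ W1 + b1)
    out = sigmoid(hidden @ W2 + b2)

    # Backpropagation: push the prediction error backwards through
    # the network and nudge every weight to reduce future error.
    d_out = (out - y) * out * (1 - out)
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_hidden
    b1 -= lr * d_hidden.sum(axis=0, keepdims=True)

# After training, the outputs should sit close to [[0], [1], [1], [0]].
print(np.round(out, 2))

The feeding, weighing and nudging in this handful of weights is exactly the same process at work when the node count is scaled into the millions, as the next paragraph describes.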
Multiply this by hundreds, thousands, even millions of nodes that have been ‘taught’ – their weights refined on billions or trillions of inputs – and you have an artificial system of intelligence that gives the illusion of mimicking our own. That is what we call Artificial Intelligence – AI.
AI is thus the most effective way of computing we have found so far, and it is therefore replacing everything in computing: in the near future, all computing will be AI computing.
Yet what we forget is that these systems are merely that: systems. They do not have context and meaning of their own the way living organisms with consciousness do. A bacterium or an ant has more context – and more meaning derived from it – than any hyper-intelligent computer system. A calculator calculates faster than we can, but it has no context for 2+2=4, or for what it means. Hence, we do not fear a calculator. Why, then, do we fear this supercharged, supersmart calculator we call artificial intelligence?
What is there to fear in this extension of our previous systems of computing into a more efficient one? Indeed, there is so much to learn, grow and gain that our entire planet – the solar system at first, and the rest of the galaxy next – stands to be transformed by the infusion of such systems of intelligence. Already, from compressing decades of work into months – as with predicting the structure of nearly every known protein – to understanding animals, to more mundane things like speaking in our accents, AI has become indispensable to solving the problems of humanity. Its growth is crucial as we head deeper into a climate catastrophe and need to ‘science-the-shit’ out of the graves we are digging for ourselves.
Can those thousands of super-smart people who signed the petition not see these obvious points? Are they too comfortable, selfish or hypocritical to see them (Elon Musk, who heads some of the largest AI companies and wants to build a ChatGPT-like system of his own, is also one of the greatest AI doomsayers)? Or is it that, despite their smartness, they suffer from what every human does – confirmation bias: having started with or encountered the notion that AI is dangerous, they keep collecting points that confirm it while ignoring every counterpoint?
The truth is not so simple. What we fear when we fear AI has deeper roots.
What do we fear when we fear AI
We have always gloated over the power of our minds. The rest of the universe might be sterile, but there is intelligence aplenty on our planet – every living thing has its own. Yet none has evolved the capacity to affect so much of the universe around it as we have. We thought this was what made our position in the universe special. We also believed that this super-intelligence was somehow tied to biology.
But with AI we have proven all of these notions wrong, and mostly within a decade. We have proven that biology is not needed to hold intelligence, and that it may not even be the best vessel for it. We are also on the cusp of proving that our brain’s capacity to calculate so much with so little energy can be replicated outside carbon-based ‘life’ forms.
We have thus upended our own special place in the universe, and now we feel as if we are floating in the ether of nothingness, wondering if anything we thought true ever was. AI is a slap in our faces that tells us we are not gods – that we are not even particularly good biological creatures. All these movements to kill or pause AI are nothing but humanity’s existential angst, our collective mid-life crisis.
But if you go deep and read between the lines of each of these arguments, you will see that what they fear is not AI per se. The ones who fear AI the most are those who have anthropomorphised it, and it is this ‘humanness’ in AI that scares them. These anti-AI arguments, in essence, say: “Humans – despite our intelligence and morality – do not care about lesser intelligences like bugs, so how can we expect something exponentially more intelligent than us to do so? Will it not treat us as we treat everything weaker than us – with destructive indifference?”
The fear, then, is not so much of AI as of AC – Artificial Consciousness. We fear that AI will develop a consciousness of its own, one like human consciousness, and that the Skynet scenario from Terminator will play out for us: machines will rise and kill us.
This is where we come to the we-fear-what-we-do-not-understand scenario. If there is one thing we are nowhere close to solving despite our progress, it is consciousness. We do not know how it forms, how it becomes entwined with living things, or whether it is possible for digital things – for anything not made of carbon – to have it. With consciousness we are today where we were a million years ago: we know nothing.
And that is why we fear AI: it is showing signs of intelligence like ours; what if it also shows signs of consciousness like ours and ends up behaving as we do towards everything around us – with nonchalant abandon? Thus our primitive fear reactions are triggered, and we raise our torches, swords and sickles to demand AI’s lynching.
In ancient days, intelligence was no guarantee of longevity. People tiptoed around you in fear. They envied and hated your specialness. And they waited for the chance to unleash mob violence on you.
That our response to AI is the same proves one simple thing: AI might be ready for us, but we humans are not yet ready for superintelligence to help us reach our full potential.
1 Comment
I can’t help but think that if you’d spent any real time on LessWrong, you would’ve better appreciated the concerns of people like Yudkowsky. Your framing of it as a bunch of rich people who want to selfishly prevent you from receiving the benefits of AI was telling.
AI does not need to become “conscious” in order to kill everybody; it just needs to misunderstand our intentions and be sufficiently capable.
AI may well save us from a warming planet – but we need to make damn sure it doesn’t accomplish that by deciding to flash freeze the surface of the planet or by releasing gases that – oops – aren’t actually breathable, etc.
It doesn’t need to become filled with hate and malice. Imagine giving your small child the keys to your car. We are that child, and we’re taking daddy’s sports car for a spin, on a road nobody has ever seen, with every single human in the backseat.