Even the greatest minds make mistakes, offering us valuable insights and lessons. Satyen K. Bordoloi explores the errors of geniuses and their modern parallels in AI.
Geoffrey Hinton – 2024’s Nobel Prize winner in Physics – has the unique distinction of being in opposing camps simultaneously. He’s universally recognized as the godfather of AI, yet he is also one of its biggest doomsayers. Some of his warnings about AI are mild and accurate; others are dire, scary, and a tad shaky, like the claim that an AI superintelligence, driven by self-preservation, could wipe out humanity.
This creates a kind of dichotomy. Surely the genius who had a big hand in creating AI as we know it understands how bad AI can get, right? Yes and no. He has indeed got some fears right, but most are straight out of the handbook of AI doomsayers, whose arguments – ironically – existed even before the birth of AI. Even though his AI research is original, he’s merely parroting these old fears of technology, not too different from the Luddites, as we have written earlier. So, how does a layperson make sense of this?
First, you must realize that even geniuses have been wrong, sometimes dramatically. Here are a few examples, from Newton to Wozniak, of how they erred, sometimes on fairly simple things.
Sir Isaac Newton: Newton’s contribution to science is the stuff of legends. His unbound genius gave us the laws of motion and gravitation, calculus, and advancements in optics. It is fair to say that without him, you would not be reading this article on the digital equipment you’re using. Despite his monumental mind, he was not immune to making some glaring errors of judgment, even in the scientific realm. We’re not even talking about his assumption that gravity was an instantaneous force acting at a distance, which was corrected by Einstein’s theory of general relativity, or his belief in the existence of a luminiferous ether as a medium to propagate light.
His biggest error was the considerable amount of time he spent on the magical ideas of alchemy. He believed – as was common then – that base metals could be turned into gold, and he spent decades of effort trying to do so, without any positive results. The greatest mind of his age faltered in some very fundamental ways.
Albert Einstein: Einstein is considered such an unambiguous prodigy that his name has become an adjective, a synonym for genius. Hence, you’ll be surprised to learn that an entire book has been written detailing his mistakes. “Einstein’s Mistakes: The Human Failings of Genius” by Hans C. Ohanian has a blurb that reads: “Although Einstein was the greatest genius of the twentieth century, many of his ground-breaking discoveries were blighted by mistakes, ranging from serious misconceptions in physics to blatant errors in mathematics.”
But it is not just in the scientific domains where he was mistaken. There are other areas where he faltered greatly. For instance, he believed that the main cause of the Great Depression of the 1930s was automation, because it took away jobs and brought economic chaos into the lives of individuals. He wrote that the “great distress of current times is the result of man-made machines.”
Sound familiar? That’s because this is almost the same accusation hurled at Artificial Intelligence today. Just as industrial automation, cars, and buses displaced the horse carriages and stagecoaches that had employed exponentially more people, AI – by making everything from coding to customer management easier for corporations – is accused of taking human jobs and causing widespread suffering. The same charge has been levelled at every labour-saving technology upon its arrival. This clichéd ‘rage against the machine’ proves both the circular nature of anti-automation arguments and the fallibility of geniuses.
Steve Wozniak: Steve Wozniak not only co-founded Apple with Steve Jobs; it was his 1976 circuit board design for a home PC that inspired Jobs to start Apple as a personal computing company, creating, in a way, the digital world that all of us inhabit. Wozniak would go on to create one amazing product after another for Apple, pushing the frontiers of the digital world.
Yet, little known is how Wozniak thought Jobs was bonkers when he suggested that his PC circuit board idea would lead to a computer in every home. Worse still, he persisted with that idea for a long time. In a 1985 interview, he made some startling remarks: “The home computer may be going the way of video games, which are a dying fad,” and “I spent all my time using the computer, not learning the subject I was supposed to learn. I was just as efficient when I used a typewriter,” and “as a general device for everyone, computers have been oversold.”
If there is anything this millennium has taught us, it is that computing is the plinth upon which the modern age stands and that far from being oversold, every person has multiple computers that they use on any given day – direct ones like laptops, PCs, and mobiles, and indirect ones like IoT-enabled devices – smart cars, speakers, TVs, and watches included.
Wozniak was not making those comments in a vacuum. Like Hinton today, he was borrowing heavily from ideas circulating at the time. The Pessimists Archive notes that when Wozniak gave that interview, critiques of computers were common. Even The New York Times ran a story that same year, titled ‘Home Computer Is Out in the Cold’, noting that computers had failed to become as ubiquitous as the TV.
That brings us back to Geoffrey Hinton. Why does a genius like him, the godfather of AI, criticize his own creation? The answer can be found in psychology. Multiple studies have shown that people with high IQs are not just as susceptible to cognitive biases as anyone else, but at times more so. One key reason is overconfidence in their own genius, which creates blind spots. Hinton’s AI research may be original, but when it comes to his AI doomsdayism, he is as clichéd as the common man – even as his confidence leads him to see himself as a genius there too.
So what does that mean for Geoffrey Hinton’s legacy? Nothing. The future is forgetful and forgiving. Just as we remember Einstein and Newton for their successes, not their failures, we’ll extend the same courtesy to Hinton. As for his AI doomsdayism, it will go the way the Luddites did – into the ‘luminiferous ether’ for now, only to stage a comeback when the next world-changing technology is born.