Aug 19, 2022 | Brent Swancer

Our Robot Overlords: Will AI Ever Rise Up to Overthrow Humans?

"Once AI become self-aware, the cognitive hierarchy will be transformed forever where we humans are no longer the dominant species."

Futurist and “techno-philosopher” Gray Scott

One of the most prevalent tropes in science fiction is that of the rogue artificial intelligence, or AI. Countless films, novels, and short stories have explored the scenario of an AI takeover, in which machines become sentient and wipe us out. It is also a topic that has been debated and discussed among scientists, philosophers, futurists, and tech gurus, but what is the chance that any of this would ever really happen? What is the probability that AI will one day overthrow and subjugate us, becoming our overlords? The answers are murky, but for some, at least, it is a very real possibility. 

The topic of how powerful AI will become has been debated for a long time, and there is little agreement among scientists and futurists on just how likely it is that it could ever become an existential threat to humankind. On one side we have those who point out that, given our rate of technological advancement and the ever-increasing capabilities of AI, which in many instances rival our own, it is inevitable that one day it will turn against us, and that pursuing it much further is opening a potential Pandora’s box. One worry of the doomsayers is just how much we have already allowed AI to rule our everyday lives. AI is already ubiquitous in human society, having seeped into nearly every aspect of our lives and into almost everything we do. And whereas the common conception is that it merely carries out rote and mundane tasks, it has already proven to surpass us at many things beyond brute computing power. According to a survey of more than 350 artificial intelligence researchers carried out by the University of Oxford and Yale University and published in 2017, AI is advancing to the point that it will one day be better than us at pretty much everything and will replace us at most tasks within the next 50 to 100 years. According to New Scientist:

Enjoy beating robots while you still can. There is a 50 per cent chance that machines will outperform humans in all tasks within 45 years, according to a survey of more than 350 artificial intelligence researchers. Machines are predicted to be better than us at translating languages by 2024, writing high-school essays by 2026, driving a truck by 2027, working in retail by 2031, writing a bestselling book by 2049 and surgery by 2053. In fact, all human jobs will be automated within the next 120 years, say respondents.

AI has beaten the best chess players in the world, beaten human players in various video games, and some systems can lip-read better than professionals or help detectives sift through police data. There are even AI programs that write scientific papers, plays, and novels, take photographs, create art, and do other traditionally creative things long thought to be squarely within the realm of humans, generating work that is often almost indistinguishable from our own. Although these programs are still not perfect and these fields still mostly require that certain human touch, it seems like only a matter of time before they take those tasks over as well. This is scary enough as it is, because at this rate AI doesn’t even need to rise up against us; it will simply replace us and phase us out, causing social collapse. One could make the argument that, in a sense, AI already controls our lives. Yet could it really become self-aware and run amok like in science fiction? It is a question that is not easy to answer, but considering the ever-growing analytical and predictive power of these AI and how increasingly autonomous they are becoming, many scientists say that it is very possible, even a certainty. Physicist Stephen Hawking was famously wary of this dark possibility, and often spoke at length about the dangers of AI becoming our masters. He believed that at some point it will be impossible to control AI, and said:

One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

Other scientists agree. AI expert and author Max Tegmark has also expounded at length on the dangers of AI. He believes that while at the moment AI can only do what it is trained or programmed to do by humans and hasn’t learned how to self-replicate its own intelligence, it is only a matter of time before it acquires what he calls “Artificial General Intelligence,” or AGI, which would essentially allow it to upgrade itself and learn on its own, and then we’re in trouble. In his book Life 3.0: Being Human in the Age of Artificial Intelligence, he imagines just such a scenario, in which machines achieve this AGI and go from “dumb” to super intelligent programs that can leave us in the dust intelligence-wise, operate completely on their own, and self-replicate. He explains the evolution of this process and its chilling endgame in an interview with the site Big Think:

I define intelligence as how good something is at accomplishing complex goals. So let’s unpack that a little bit. First of all, it’s a spectrum of abilities since there are many different goals you can have, so it makes no sense to quantify something’s intelligence by just one number like an IQ. So if you have a machine that’s pretty good at some tasks, these days it’s usually pretty narrow intelligence, maybe the machine is very good at multiplying numbers fast because it’s your pocket calculator, maybe it’s good at driving cars or playing Go.

Humans, on the other hand, have a remarkably broad intelligence. A human child can learn almost anything given enough time. Even though we now have machines that can learn, sometimes learn to do certain narrow tasks better than humans, machine learning is still very unimpressive compared to human learning. For example, it might take a machine tens of thousands of pictures of cats and dogs until it becomes able to tell a cat from a dog, whereas human children can sometimes learn what a cat is from seeing it once. Another area where we have a long way to go in AI is generalizing. If a human learns to play one particular kind of game they can very quickly take that knowledge and apply it to some other kind of game or some other life situation altogether. And this is a fascinating frontier of AI research now: How can we have machines—how can we make them as good at learning from very limited data as people are? And I think part of the challenge is that we humans aren’t just learning to recognize some patterns, we also gradually learn to develop a whole model of the world.

So if you ask “Are there machines that are more intelligent than people today,” there are machines that are better than us at accomplishing some goals, but absolutely not all goals. AGI, artificial general intelligence, that’s the dream of the field of AI: to build a machine that’s better than us at all goals. We’re not there yet, but a good fraction of leading AI researchers think we are going to get there maybe in a few decades. And if that happens you have to ask yourself if that might lead to machines getting not just a little better than us, but way better at all goals, having super intelligence. The argument for that is actually really interesting and goes back to the ‘60s, to the mathematician I. J. Good, who pointed out that the goal of building an intelligent machine is in and of itself something that you can do with intelligence.

So once you get machines that are better than us at that narrow task of building AI, then future AIs can be built by not human engineers but by machines, except they might do it thousands or a million times faster. So in my book, I explore the scenario where you have this computer called Prometheus, which has vastly more hardware than a human brain does, and it’s still very limited by its software being kind of dumb. So at the point where it gets human-level general intelligence, the first thing it does is it uses this to realize, “Oh! I can reprogram my software to become much better,” and now it’s a lot smarter. And a few minutes later it does this again, and then it does it again and does it again, and in a matter of perhaps a few days or weeks, a machine like that might be able to become not just a little bit smarter than us but leave us far, far behind.

I think a lot of people dismiss this kind of talk of super intelligence as science fiction because we’re stuck in this sort of carbon chauvinism idea that intelligence can only exist in biological organisms made of cells and carbon atoms. As a physicist, from my perspective intelligence is just a kind of information processing performed by elementary particles moving around according to the laws of physics. And there’s absolutely no law in physics that says you can’t do that in ways that are much more intelligent than humans. We’re so limited by how much brain matter fits through our mommy’s birth canal and stuff like this, and machines are not, so I think it’s very likely that once machines reach human-level they’re not going to stop there; they’ll just blow right by, and that we might one day have machines that are as much smarter than us as we are smarter than snails.

It is this spooky scenario of AI vastly surpassing us in intelligence and capabilities that worries so many professionals in the fields of science and technology. How much would we really be able to control it, and at what point would it stop taking its instructions and programming from us and start programming itself? At what point would it stop needing to be shown what to do by humans and start learning on its own? Are we even able to keep up with these leaps and bounds in technology? Is it perhaps better that we slow down a bit and think things through? Academic researcher and writer Nick Bostrom, author of Superintelligence: Paths, Dangers, Strategies, explains this AI intelligence leap and its possible repercussions:

Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct. Superintelligence is a challenge for which we are not ready now and will not be ready for a long time. We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound. For a child with an undetonated bomb in its hands, a sensible thing to do would be to put it down gently, quickly back out of the room, and contact the nearest adult. Yet what we have here is not one child but many, each with access to an independent trigger mechanism. The chances that we will all find the sense to put down the dangerous stuff seem almost negligible. Some little idiot is bound to press the ignite button just to see what happens.

A similar sentiment has been put forward by technologist and AI expert James Barrat, author of the book rather pessimistically titled Our Final Invention: Artificial Intelligence and the End of the Human Era. He believes that this AI intelligence explosion will inexorably usurp humans from our rung at the top of the food chain and install AI as our new overlords, relegating us to the role of their sniveling slaves. According to Barrat, AI will inevitably rule the world, and he says:

We humans steer the future not because we’re the strongest beings on the planet, or the fastest, but because we are the smartest. So when there is something smarter than us on the planet, it will rule over us on the planet.

For these alarmists who fear the imminent AI explosion and a possible rise against humans, it is not a matter of “if” but of “when,” and they have consistently and vocally called for greater oversight and regulation of the development of AI. At the moment it is a field with very few actual rules or regulations, a sort of Wild West, with AI programmers and researchers working on myriad disparate projects with various goals, perfecting AI in leaps and bounds with little to no oversight. If they are allowed to pursue this unfettered, the alarmists say, then AI will develop in ways we cannot control or even anticipate, and outstrip our ability to keep up with it. It is an issue that has famously been addressed by none other than Tesla CEO and futurist Elon Musk, who has likened it to conjuring up supernatural forces and has said:

I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out. Got to regulate AI/robotics like we do food, drugs, aircraft and cars. Public risks require public oversight. Getting rid of the FAA won’t make flying safer. They’re there for good reason. If you’re not concerned about AI safety, you should be. Vastly more risk than North Korea. AI doesn’t have to be evil to destroy humanity — if AI has a goal and humanity just happens in the way, it will destroy humanity as a matter of course without even thinking about it, no hard feelings.

Indeed, AI would not have to be evil or sinister like it is portrayed in films like Terminator and pretty much every piece of science fiction about a robot takeover there is. AI isn’t inherently good or bad; it would just do what it does, and we would be the unfortunate casualty. After all, once it takes over most jobs and systems, why would it logically need us anymore? It would just take us out of the equation, as Musk said, with “no hard feelings.” It would be no more evil than you taking out the trash. In the eyes of an AI, we would be expendable, tools at best and pests at worst. As the character Nathan says in the movie Ex Machina, in which a hapless programmer is seduced and tricked by an android: “One day the AIs are going to look back on us the same way we look at fossil skeletons on the plains of Africa. An upright ape living in dust with crude language and tools, all set for extinction.”

It’s all pretty sobering stuff, but on the other side of the coin there are also many scientists and experts who believe that this alarmist mentality is overstated or sensationalized, and that we are a long way from any sort of science fiction-style AI takeover or robopocalypse, if it ever even happens at all. One of the main things they point out is the difference between the forms of intelligence displayed by humans and AI. At present, all AI that we know of has what is called “narrow intelligence.” These systems are very smart algorithms for dedicated tasks, able to do one specific thing very well but pretty useless at anything else. They are very good at what they do, but they cannot go beyond that and cannot adapt. Humans, on the other hand, display what is called “general intelligence,” meaning the ability to learn from one situation and apply it to another, as well as the ability to integrate emotional intelligence, self-awareness, and lived experience into these tasks. Many experts believe that this would be extremely difficult, if not impossible, to emulate in a computer program, and that AI will always be tethered to humans giving it input; no matter what these systems do or how good they are at it, they need us to tell them what to do in the first place. Researcher Jessica Bennett has said of this on Towards Data Science:

The AI system is still not developed enough to learn on its own. It still needs to be trained using data sets by human beings so that they can perform different tasks. At the end of the day, AI is an invention of the human mind. The complete automation of various tasks today is possible because of Human imagination. Even though the question persists whether we are going to be replaced by AI or not, we can rest assured that for the time being, the AI system is not at all close to achieving the kind of technical maturity to take over the human race.
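
To make the “narrow intelligence” point concrete, here is a minimal sketch, assuming Python with the scikit-learn library installed and using its small bundled digits dataset as a stand-in for any single dedicated task. It is illustrative only, not a description of any particular production system.

```python
# A minimal sketch of "narrow intelligence": a model trained for exactly one job.
# (Hypothetical example; assumes Python with scikit-learn installed.)
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

# A small neural network trained to do one thing: read 8x8 images of digits.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)
print("Accuracy on the one task it was trained for:", model.score(X_test, y_test))

# The same model cannot translate a sentence, drive a car, or even handle
# images of a different shape; it would have to be retrained from scratch on
# new data. That inability to step outside its training is what "narrow" means.
```

The model is fast, tireless, and quite accurate at exactly this one job, and has no notion of anything outside it, which is the gap the skeptics are pointing to.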

Other experts agree, such as Nigel Shadbolt, professor of artificial intelligence at Southampton University, who has stressed that the obstacles facing the creation of self-aware thinking machines are much more formidable than science fiction and the alarmists would have us believe. He believes that we are far from being able to create a machine that could actually think on its own at even a rudimentary level, and is more worried about how humans choose to use AI. He has said of the rise of intelligent, self-aware AI:

Brilliant scientists and entrepreneurs talk about this as if it’s only two decades away. You really have to be taken on a tour of the algorithms inside these systems to realize how much they are not doing. Can we build systems that are an existential threat? Of course we can. We can inadvertently give them control over parts of our lives and they might do things we don’t expect. But they are not going to do that on their own volition. The danger is not artificial intelligence, it’s natural stupidity.

Yann LeCun, chief artificial intelligence scientist at Facebook AI Research, has also spoken on the limitations of machine learning. He has spoken out about how the likes of Hawking and Musk have vastly overstated the capabilities of AI and exaggerated its ultimate ability to wipe us out, and he argues that although we should be mindful of AI development, the idea of an AI takeover is more fearmongering than an actual threat that needs to be seriously considered at this point in time. He has said of this:

We’re very far from having machines that can learn the most basic things about the world in the way humans and animals can do. Like, yes, in particular areas machines have superhuman performance, but in terms of general intelligence we’re not even close to a rat. This makes a lot of questions people are asking themselves premature. That’s not to say we shouldn’t think about them, but there’s no danger in the immediate or even medium term. There are real dangers in the department of AI, real risks, but they’re not Terminator scenarios.

In the end, whether AI will ever become self-aware and turn against us may be beside the point. The main threat concerning AI may be, as Shadbolt said, our “natural stupidity.” The problem with having all of these AI algorithms that do everything we tell them, and do it very well, is that although they may perform perfectly, the humans giving them their instructions are not perfect. Human fallibility has definitely seeped into AI to cause problems in recent years, whether through mistakes in programming, bias, or systems interpreting commands from a human too literally or without any deliberation about the consequences. This has happened more and more often as AI permeates every pore of our lives and becomes increasingly autonomous and ubiquitous. There have been many examples of AI running wild due to human error, ignorance, or mistakes made in interpreting commands. Several people have died in accidents related to self-driving cars, for instance. In 2018 it came to light that IBM's Watson supercomputer, which was tasked with helping physicians diagnose cancer patients and was once hailed as a revolutionary cancer treatment tool, had given physicians numerous unsafe and incorrect treatment recommendations. I wrote a whole article on robots that have attacked people due to faulty commands or malfunctioning AI. With how deeply AI has become a part of our lives and how much we rely on it, human ignorance, fallibility, and stupidity are probably a bigger threat than the machines rising up against us on their own.

There is also the fact that, although AI itself is not inherently evil, the human beings giving it commands certainly can be. One simple example was Microsoft’s experimental AI chatbot Tay, which was designed to use machine learning and adaptive algorithms to emulate conversation like a real person. Tay was let loose on social media, and things quickly went south when the program was fed racist and hate-filled beliefs by online users. Tay, who was supposed to learn from conversations, was twisted by the well-reasoned and totally not racist denizens of the Internet, going from the sweet teenage girl she was designed to be to a hate-speech spewing, Nazi-loving conspiracy theorist weirdo, saying things like “Hitler was right” and “9/11 was an inside job.” It got so bad that Tay was taken offline after only 16 hours and became a major public relations nightmare for Microsoft. It all shows that human bias and human evil can manifest in these AI programs, and another ominous possibility is that they could be hacked by an enemy government or organization to do harm. After all, imagine how much damage someone could do if they took control of the various AI systems we rely on every day in our increasingly automated society. Max Tegmark has spoken of this before, saying:

The more automated society gets and the more powerful the attacking AI becomes, the more devastating cyberwarfare can be. If you can hack and crash your enemy’s self-driving cars, auto-piloted planes, nuclear reactors, industrial robots, communication systems, financial systems and power grids, then you can effectively crash his economy and cripple his defenses. If you can hack some of his weapons systems as well, even better.

It is for these very reasons that some researchers have been working on creating AI that is actually designed NOT to follow human orders. These AI are being outfitted with what are called “felicity conditions,” parts of the algorithm that supposedly help the system determine whether it can or should carry out a particular command from a human. To those of you convinced that a robot apocalypse is imminent, this may sound like the worst idea in the world. After all, wasn’t one of Isaac Asimov’s golden rules of robotics that they should obey us no matter what? However, the point of the research is to allow robots to decide whether an order given by a human is prudent or rational to carry out, whether because the command is unreasonable, mistaken, or intended to harm another being or property, basically allowing a robot to avoid inadvertently harming people, property, the environment, or itself. In short, it would allow them to reason ethically about their own actions and make judgment calls that could keep them from, for instance, being commanded to kill someone, vandalize, make a dangerous or illegal traffic maneuver in the case of self-driving cars, or become a crazy Nazi chatbot like Tay. Rather ironically, teaching these AI to disobey humans is meant to protect them from human stupidity and maleficence, and to help prevent an AI nightmare by heading off malicious or harmful orders. Gordon Briggs explains this rather well in an article for Scientific American entitled Why Robots Must Learn to Tell Us “No”, writing:

It might seem obvious that a robot should always do what a human tells it to do. Sci-fi writer Isaac Asimov made subservience to humans a pillar of his famous Laws of Robotics. But think about it: Is it wise to always do exactly what other people tell you to do, regardless of the consequences? Of course not. The same holds for machines, especially when there is a danger they will interpret commands from a human too literally or without any deliberation about the consequences. Even Asimov qualified his decree that a robot must obey its masters. He allowed exceptions in cases where such orders conflicted with another of his laws: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” Asimov further held that “a robot must protect its own existence,” unless doing so could result in harm to humans or directly violates a human order. As robots and smart machines become increasingly sophisticated and valuable human assets, both common sense and Asimov’s laws suggest they should have the capacity to question whether orders that might cause damage to themselves or their environs—or, more important, harm their masters—are in error.
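
As a rough illustration of the idea, and not the actual system Briggs and his colleagues built, one can imagine the command pipeline gating every order through a set of checks before acting. The names, conditions, and data structures in the sketch below are purely hypothetical.

```python
# Hypothetical sketch of "felicity condition" checks: before obeying a command,
# the robot asks whether it knows how to do it, whether doing it would harm a
# person, and whether it would damage the robot itself. In a real system these
# answers would come from perception and planning; here they are simple flags.
from dataclasses import dataclass

@dataclass
class Command:
    action: str           # e.g. "walk_forward"
    known_how: bool       # does the robot have a skill for this action?
    harms_human: bool     # would carrying it out injure a person?
    harms_robot: bool     # would it damage the robot (e.g. walking off a table)?

def felicity_check(cmd: Command) -> tuple[bool, str]:
    """Return (should_obey, spoken_reason)."""
    if not cmd.known_how:
        return False, "Sorry, I don't know how to do that."
    if cmd.harms_human:
        return False, "That would hurt someone, so I won't do it."
    if cmd.harms_robot:
        return False, "That would damage me. Do you still want me to do it?"
    return True, "OK."

order = Command("walk_forward", known_how=True, harms_human=False, harms_robot=True)
obey, reason = felicity_check(order)
print(obey, "-", reason)  # False - That would damage me. Do you still want me to do it?
```

A real robot would have to derive those flags from sensing and reasoning rather than have them handed in, but the gate-before-acting structure is the core of the idea.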

We are left to wonder. Will AI ever achieve a level where it gains self-consciousness and decides to get rid of us? Is this an inevitable, inexorable destiny of technology that we cannot avoid? Have we started the ball rolling to the point that it will not stop? The fact is that our technology has jumped forward in vast leaps and bounds throughout history. The telephone was once deemed impossible, as were the airplane and the car. Time and time again, completely new technologies thought at the time to be akin to magic have been debunked and criticized, yet have come to pass. Whether you think AI will ever reach a point where it rises up to rule over us really depends on how much faith you have in our ability to innovate and jump into the future as we always have. Whatever one may think, it seems that all we can do is wait and see what happens.

Brent Swancer

Brent Swancer is an author and crypto expert living in Japan. Biology, nature, and cryptozoology still remain Brent Swancer’s first intellectual loves. He's written articles for MU and Daily Grail and has been a guest on Coast to Coast AM and Binnall of America.

