Apr 14, 2014 | Micah Hanks

Why Artificial Intelligence Might Not Destroy Us After All

We see it in science fiction movies all the time: a robotic intelligence is created, comes to view humankind as an annoyance to its new existence, and attempts to seize control by eradicating its creators. From the skeletal robo-warriors of the Terminator franchise to the upcoming feature film Transcendence starring Johnny Depp, the Hollywood portrayal of transhumanism is generally a negative one.

Of course, it would be silly to accept blindly that a sentient artificial intelligence--whether one created by humans, or representative of once-human intelligence now in digitized form, like Depp's character in Transcendence--would be innately good solely because its creators designed and wished it to be. After all, a truly intelligent form of sentience would be capable of assessing dangers to its own existence, as well as of replicating itself, or even improving upon its own design: the two factors cited by the likes of Vernor Vinge as the likely drivers of an intelligence explosion, in which AI may eventually take off into frontiers of science that would essentially evade human understanding.


However, when considering the likely ways that AI will behave in relation to humans, we have a natural tendency to anthropomorphize our subject; that is, we expect that a non-human intelligence will behave just as we do, with the same kinds of goals and motivators. But is this really an accurate way to predict how AI will behave, once it is eventually created?

Yes and no, perhaps. Obviously, if humans are to create sentient forms of AI, those forms will likely be modeled after our own logical processes, and designed to function in much the same way that human cognition does. Thus, the intelligence we one day build may well hold values similar to ours, and may view things that are important to humans as being equally important to it and its robotic kindred.

Without getting too far into the discussion of "strong" AI and other manifestations of machine thought, the creation of what are essentially thinking computers will also likely involve departures from the modes of thought expressed by biological humans. This seems likely for the obvious reason that human thought is governed partly by chemical processes within the body, the result of how our bodies have adapted and evolved; the same physical elements underlying robotic cognition and intelligence may simply not apply, except in an intelligence deliberately designed to imitate such functions of the human mind and body.

As humans, we have had to evolve and change, subtly and over long spans of time, in order to survive. With survival comes the necessity for competition between members of the same (and different) species, which, though still important today in everything from procreation to success in the workplace, was once a very different kind of necessity for our ancestors. In ancient times, for instance, a killer instinct may have been adaptive, helping an individual or group of a particular species overcome the problems of material scarcity. With time, evolution and the formation of more complex intelligence and thought processes (rather than those which are purely animalistic) would become more conducive to cooperation; this, along with the advent of things such as advanced architecture and agriculture, would sow the seeds of civilization.

Of course, competition remains a necessity in the world of today. As mentioned already, coworkers must compete for job placement, and basic sexual urges push us to compete in a time-tested display of dominance and security that members of the opposite sex (or even the same sex, depending on the circumstances) will recognize. This, in evolutionary terms, has helped humankind do what we seem to do inevitably: work, have sex, make babies, and thrive with reasonable efficiency until our eventual demise, at which time our progeny repeat this basic equation ad infinitum.

However, consider the methods an advanced artificial intelligence would employ to ensure its survival. If computer-based, its maintenance and upkeep might remain dependent, for a time, on certain elements of mechanical infrastructure; therefore, rather than an adversarial relationship with humans, a cooperative homeostasis would be more likely to ensue (hence the partial absurdity of a character like HAL attempting to kill the humans aboard the vessel bound for Jupiter in 2001: A Space Odyssey; in doing so, he only ensured his own demise, rather than working with humans cooperatively to ensure his continuous operation). Even if an AI were created that could move about autonomously, reprogram itself for purposes of self-enhancement, or even replicate itself physically, the absence of the chemical and hormonal components that urge us toward competition invites the question of how such an intelligence would behave. Perhaps, freed from these holdovers of a more primal age, this thinking machine would not act very much like humans in the traditional sense at all.


Generally speaking, if new technological innovations, improvements in education, reductions in disease and aberrant social behavior, and the other complements to our existence that technology helps us achieve all point toward cooperation and homeostasis with our environment and those around us, then it would seem likely that an artificial intelligence on par with our own levels of understanding (and minus the evolutionary pressures to compete over hunger, sexuality, and the like) would be logically fitted for coexistence, rather than hell-bent on destroying us. Granted, while operating in the realm of speculation, anything is possible... but if nothing else, consider that Hollywood portrayals of AI not only tend to steer how our culture perceives such things, but also tend to present them negatively. Otherwise, the plots and characters of upcoming science fiction films would seem awfully bland, wouldn't they? Almost like everyday life for most of us... and hence the further necessity for introducing conflict into exciting, fictional portrayals of what life might be like, but only in the case of "what if."

So the next time you lie around all Saturday watching that Terminator marathon, or head out to the theater next week to see Transcendence, ask yourself this fundamental question: "If it's not human, and it's smarter than I am, then why should I expect it to act anything like me at all?" It may not complement the intensity of the moviegoing experience, but you may rest better nonetheless, knowing that killer robots aren't so likely to come hunt you down one day after all.

Addendum: Meanwhile, if you're interested in the philosophy of how our brains work similarly to computers, and how our physical bodies also govern the way we behave, you might enjoy taking a look at the following subjects, each of which falls under the category of "modern philosophy." Again, I rebut Stephen Hawking's assertion that "philosophy is dead." In fact, it may well be the way to the future:

Computational theory of mind

Embodied cognition

Micah Hanks

Micah Hanks is a writer, podcaster, and researcher whose interests cover a variety of subjects. His areas of focus include history, science, philosophy, current events, cultural studies, technology, unexplained phenomena, and ways the future of humankind may be influenced by science and innovation in the coming decades. In addition to writing, Micah hosts the Middle Theory and Gralien Report podcasts.

