Dec 10, 2014 | Micah Hanks

Is Stephen Hawking a Fan of the Terminator Franchise?

English theoretical physicist and cosmologist Stephen Hawking is arguably one of the most prominent scientists of the modern era, and one of academia's most brilliant minds.

Hawking was, after all, among the first to develop a general cosmology of the universe that combined Einstein's theory of relativity with the still-mysterious realm of quantum mechanics. Despite his decades-long battle with ALS, he has served as Director of Research at the Centre for Theoretical Cosmology at the University of Cambridge, in addition to authoring bestselling books and making numerous television appearances over the years. He is also the recipient of nearly fifteen awards, including the Albert Einstein Award and the Presidential Medal of Freedom.

You may be wondering, then, what my reasoning is behind the odd title of this article, so follow my logic for a minute. Despite my profound respect for Hawking (a respect I've juxtaposed with adamant disagreement over the years, based on a few rather biased statements he has made), it does appear that whenever promotion begins for a new sequel in the Terminator film franchise (in this case, 2015's Terminator Genisys), science and tech headlines become bogged down with "doom porn" about whether the advent of artificial intelligence may bring with it the end of humanity as we know it.

As you have probably heard, Hawking is only the latest to ask this question about our eventual AI overlords, having expressed nearly identical sentiments about alien life in the past. Hence, I can't help but wonder whether Hawking is a fan of the Terminator movies, which remain among the best-known "doomsday" film representations of AI in our culture.

Specifically, these are the thoughts he expressed recently to the BBC:

"The development of artificial intelligence could spell the end of the human race. It would take off on its own, and re-design itself at an ever increasing rate," he said. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."

Of course, Hawking's concerns here aren't without merit; the kind of "superseding" to which he refers remains one among many stark possibilities that await us, as future science and, perhaps, a technological singularity become less the stuff of mere science fiction with time.

Rebuttals, or at the very least further commentary on Hawking's statements, were offered by the BBC's technology correspondent, Rory Cellan-Jones (author of the BBC's original article featuring Hawking's warning about AI). Computer scientists appear to remain divided on whether intelligent machines will pose a realistic threat to humans; however, earlier this year I wrote an article on why I feel many of the key evolutionary factors driving human competition would be missing in intelligent machines. Whether this would make them less deadly to humans is still a worthy question, although I would wager that their unpredictability is something to think more deeply about, given how AI would differ from us in the absence of chemical governance over emotion. Sex drive and its influence on human interaction, gender roles, and a variety of other factors will also come into play, depending on how future AI is designed, and on whether that design limits the kinds of evolutionary influences that have made humans as dangerous as we predict future synthetic intelligence to be.

One final observation: Hawking's views on AI almost perfectly mirror his fears about alien contact, which he similarly bases on past interactions between humans (and arguably, humans during less civilized times). As a civilization, we still have a long way to go in terms of conscious evolution. And yet, to compare ourselves (or rather our ancestors, as Hawking does in a roundabout way) to an advanced race of alien beings that might hypothetically visit Earth one day is to risk ignoring the environmental factors that would govern how that intelligence might behave in contrast with us.

Whether the discussion concerns alien beings or synthetic life forms we create ourselves, the fact that they will not be human is the key element that bridges the two concepts, and the one we must weigh in estimating how they might be expected to behave. Unlike alien life, however, the AI game is not one of random chance; it is still, to some degree, in our hands.

Which raises the question: what will intelligent machines think when they become self-aware and go back to watch films like Terminator? Will our sci-fi entertainment today inadvertently end up being the impetus (or inspiration, perhaps) behind the eventual "robopocalypse" of tomorrow? It would be just our luck if Hawking were right all along, and our fascination with the "doomsday" AI scenario ended up being a self-fulfilling prophecy of sorts.

Which is why, here and now, I choose to remain extremely optimistic about the advent of AI: we're all gonna get along fine... right? Come on guys, let's not give 'em any more bad good ideas, like this one:

https://www.youtube.com/watch?v=62E4FJTwSuc

Micah Hanks

Micah Hanks is a writer, podcaster, and researcher whose interests cover a variety of subjects. His areas of focus include history, science, philosophy, current events, cultural studies, technology, unexplained phenomena, and ways the future of humankind may be influenced by science and innovation in the coming decades. In addition to writing, Micah hosts the Middle Theory and Gralien Report podcasts.
