Over the last several decades, books and films have explored a multitude of science fiction themes dealing with artificial intelligence, in which hostile computer systems seek to destroy mankind out of a hatred (or fear) born of their pivotal attainment of self-awareness. These range, of course, from the walking terrors seen in the Terminator films to computer systems like Mike and HAL, appearing respectively in Robert Heinlein’s The Moon Is a Harsh Mistress and Arthur C. Clarke’s 2001: A Space Odyssey, which, by virtue of their elaborate and complex programming, wake up and begin making decisions of their own.
More often than not, much like the Skynet system’s decision to eradicate humanity, artificial intelligence in science fiction comes to see people as either a scourge or a harvestable resource, and enslavement or extermination ensues. These scenarios, however, always stem from the premise that the artificial intelligence in question had its origins in human design; seldom, if ever, do they deal with an extraterrestrial artificial intelligence, which might behave in ways vastly different from anything a human-designed system could be expected to do.
Several weeks ago, I interviewed an artificial intelligence expert named Ben Goertzel and asked him about this concept. Goertzel, who has authored a number of science fiction works as well, offered that extraterrestrial intelligences would likely be uninterested in humanity. The notion that ET AI would seek us out as a potential energy source seems unlikely, in Goertzel’s opinion, since any advanced race would probably have harnessed energy sources far greater than anything a comparatively primitive race like ours could offer. On the other hand, after the interview I was approached by a Russian contact of mine, Alexei Turchin, who has undertaken extensive analysis of the potential dangers of artificial intelligence that might be “downloaded” through systems like SETI and, more importantly, corresponding METI systems (which would actually attempt to engage ETs by embedding messages within signals we may one day broadcast into the cosmos in search of life elsewhere). Turchin noted that Goertzel’s position excluded the possibility that alien AI would seek to utilize Earth’s resources for purposes of replication, rather than as an energy source alone.
A typical scenario might involve what appears to be a complex machine, “signaled” to Earth much as in Carl Sagan’s book Contact, where a message downloaded from an extraterrestrial source contains detailed technical blueprints for building some kind of device. In the book (as well as the film adaptation), this device puts its human “occupant” in contact with extraterrestrials, whereas in an alien AI scenario, the designs sent to humans might result in the construction of an AI technology that reproduces in part through its dependence on an eager (but slightly foolhardy) terrestrial intelligence, like ours, for instance. While we would build such devices in hopes of learning from extraterrestrials, doing so could introduce an AI system that, though intended for colonization or even a general survey of various points throughout the cosmos, could affect life here on Earth in any number of ways. What happens, for example, if this newfound “intelligence,” by virtue of its design, begins to usurp our natural mode of existence here on Earth, possibly with detrimental long-term effects? Would Earth suddenly begin to seem very crowded, or might we find ourselves at war with an alien AI that, while operating in our midst with little or no direct interest in humans, nonetheless becomes our enemy because it threatens to consume our resources?
When it comes to speculating about how an AI system designed by aliens (and, in the example above, effectively contracted out to humans for construction via SETI) might operate, the familiar notion of “self-aware computer systems that turn against their creators” may become obsolete. The motives, thought processes, and methodologies of an advanced intelligence from elsewhere simply may not include the same drive to divide and conquer that our film-based techno-drones have so often displayed. But in the long run, such circumstances may harbor other dangers we have never even stopped to think about. If dangerous or hostile potential exists, shouldn’t we at least stop to consider the possibilities?