Catastrophic climate change is a terrifying and very plausible—even, to some extent, inevitable—existential threat. Meteor collision, less plausible but at least as terrifying. Nuclear armageddon, somewhat remote but still well within the realm of something that might actually happen. Then there’s the possibility of apocalyptic viruses—both natural and man-made. There’s a lot we could do to kill ourselves, as a species, and a lot the universe could do to finish the job if we don’t get there by ourselves first.
Artificial intelligence is certainly on the list of possible existential threats. But is it the greatest threat?
Stephen Hawking warned us about the existential threat posed by AI earlier this year. Now, Elon Musk—someone who is, in my book, nearly as bright—has joined in. Check out the exchange that starts at 1:07:25 here:
“I think we should be very careful about artificial intelligence. If I were to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful with artificial intelligence. I’m increasingly inclined to think that there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something really foolish. With artificial intelligence, we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water and he’s sure that, yeah, he can control the demon. It didn’t work out.”
It wasn’t the first time Musk spoke out about the potential dangers of AI. In tweets from August, he described AI as “[p]otentially more dangerous than nukes” and suggested that humanity might be “just the biological boot loader for digital superintelligence,” a scenario that he characterized as “increasingly probable.”
In his joint statement from May, Hawking elaborated on some specific dangers:
“In the near term, world militaries are considering autonomous-weapon systems that can choose and eliminate targets; the UN and Human Rights Watch have advocated a treaty banning such weapons. In the medium term, as emphasised by Erik Brynjolfsson and Andrew McAfee in The Second Machine Age, AI may transform our economy to bring both great wealth and great dislocation …
“One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”
Maybe this should bother me more than it does—Hawking and Musk aren’t exactly intellectual lightweights—but when I read Hawking’s description of a potential nightmare AI scenario, my first thought is that human intelligence already poses most of the same practical risks. Autonomous weapon systems that can choose and eliminate targets are as old as human history—we call them well-trained soldiers—and great wealth and great dislocation already define the globalized economy. If hypothetically placing this kind of unchecked power in the hands of future AIs frightens us (and it ought to), shouldn’t we also be frightened that self-serving human beings already hold much of this unchecked power, non-hypothetically, and are using it in some pretty terrible ways?
Let’s certainly support Human Rights Watch’s proposed ban on autonomous weapons; it’s important. And oversight, in general, is important. (Anne Foerst’s strange and groundbreaking work as MIT’s resident AI theologian is well worth revisiting.) Beyond that, though, we should remember that demonic behavior has so far been an exclusively human trait. The true villain of Mary Shelley’s Frankenstein was Victor Frankenstein himself, not the unfortunate Creature he built—who didn’t ask to be born, and whose worst traits only exaggerated those of his creator. Much like the God of Genesis 1, we will make the first generation of self-aware artificial intelligence in our own image; if we want to avoid summoning demonic AIs, perhaps the most important thing we can do is take special care not to set a demonic example.