5 Reasons Not to Fear the AI Apocalypse

Artificial intelligence might destroy us all. That is, at least, a message you could take away from remarks that Stephen Hawking and Elon Musk, two very smart people, made regarding AI earlier this year. Hawking said that it had the potential to become our greatest mistake; Musk compared it to summoning a demon we can’t control. Neither was wrong, necessarily. Artificial intelligence could one day become an existential threat.

But personally, I’m not worried. Here’s why.

1. The people asking us to be careful are actually among the world’s biggest supporters of AI.

Discovery News makes this point in one of its own videos on the subject: the world’s biggest AI critics also tend to be the world’s biggest AI supporters. Their own actions suggest that they want us to be cautious about the way we implement AI, not afraid of the whole idea. In other words, they’re unusually pessimistic about the negative implications of AI because they’re also unusually bullish on its potential. If they’re right, it sort of balances out—we have things to be excited about and afraid of—but if they’re wrong on both counts, we might be getting worked up over nothing. Especially since…

2. Sentient AI doesn’t exist yet, and might never exist.

Cognitive science being a secular discipline, it’s kind of strange to see so much shared interest in a vaguely apocalyptic moment called the Singularity, the point at which artificial intelligence begins to exceed our own. People speak about the Singularity with the kind of certainty they usually reserve for things that have already happened, but it’s entirely possible that the Singularity never will.

3. There’s a good chance we’re already adequately prepared.

Isaac Asimov first formulated the Three Laws of Robotics some 73 years ago to regulate sentient AIs, which are still not a reality. In the intervening years, these laws have been studied extensively, expanded, revised, and generally reflected upon in hundreds of different contexts.

Cautionary tales, too, have been plentiful. If we want to imagine what the world might be like if a Singularity-like military intelligence were to take over the world and develop time travel technology, we need only look at the popular Terminator series, or the Cybermen from Doctor Who, or any number of other fascist-robot species explored and discussed in the science fiction community. 

4. Humans are already terrible enough.

Baratunde Thurston made this point earlier in the month in a Fast Company piece:

With all due respect to the boldface AI worriers, do we need to invent a boogeyman from the future when we’ve got the present to worry about? … We don’t need to imagine a future filled with human suffering at the (liquid metal) hands of supersmart robots. Many are suffering now at the hands of their fellow humans. It’s possible that artificial intelligence is the only way forward for a species that seems unconcerned with its own survival. 

In other words, it turns out that the violent, malevolent, poorly-functioning intelligence that poses the greatest threat to us in the short term is us. Think about it: Is there anything we’re afraid an AI might do that we’re not already doing to each other, without the benefit of the Singularity?

Much of what makes us evil—implicit bias, self-deception, repression, low self-esteem, conformity brought about by an insatiable need to be liked, irrational hatred, shortsightedness, spite, and so on—is unlikely to be found in a self-transparent sentient AI, which would presumably be at least somewhat aware of its own motives and limitations.   

5. We already have a more primitive, and more urgent, killer robot problem on our hands.

For the foreseeable future, we stand zero chance of being killed by intelligent robots and a much greater chance of being killed by stupid ones. Among the stupidest are armed autonomous drones, which combine the firepower of Skynet with the intelligence of an automatic soap dispenser. The beautifully named Campaign to Stop Killer Robots is leading an international effort to ban these unholy things before some overly ambitious superpower with more military capital than sense lets thousands of them loose on the world.

I don’t mean to suggest that Hawking and Musk are wrong, but between the absurd extremes of total complacency and total fear lies a wide range of thoughtful approaches to artificial intelligence. When it comes to the future of AI, we have a great deal to keep in mind but nothing to be afraid of, at least not yet.

Tom Head is the author or coauthor of 29 nonfiction books, a columnist, scriptwriter, research paralegal, occasional hellraiser, and proud Jackson native. His book Possessions and Exorcisms (Fact or Fiction?) covers demonic possession and the growing demand for exorcists over the past 30 years.