May 06, 2014 | Tom Head

Will Artificial Intelligence Be Humanity’s Worst Mistake?

Unlike certain other celebrity scientist elder statesmen, Stephen Hawking isn't prone to saying unbelievably weird stuff just to troll us—so when he wrote last week that developing artificial intelligence would be "the biggest event in human history" but "might also be the last, unless we learn to avoid the risks," the international press listened.

In the video below, Stuart Armstrong (of Oxford's Future of Humanity Institute) gives a brief overview of existential risk in general, and of the existential risk posed by AI in particular:

In the short term, the biggest danger posed by AI is autonomous drones, and the best time to prevent their development is to ban their use before it becomes commonplace. (Such a ban has already been proposed in Canada, and the issue would appear to be an obvious future topic for a U.N. covenant.) Most of the longer-range fears concerning AI may seem far-fetched now, which makes this an ideal time to have these conversations, before powerful military and financial interests find themselves in the unenviable and dangerous position of planning around technology that the general public has not yet discussed. The last time that happened with a new technology that posed a potential existential risk, the result was an international nuclear arms race.

You can find out more about the risks of AI by visiting Oxford's FHI and Cambridge's Centre for the Study of Existential Risk (CSER).

Tom Head

Tom Head is the author or coauthor of 29 nonfiction books, as well as a columnist, scriptwriter, research paralegal, occasional hellraiser, and proud Jackson native. His book Possessions and Exorcisms (Fact or Fiction?) covers demonic possession and the rising demand for exorcists over the past 30 years.

