Unlike certain other celebrity scientist elder statesmen, Stephen Hawking isn’t prone to saying unbelievably weird stuff just to troll us—so when he wrote last week that developing artificial intelligence would be “the biggest event in human history” but “might also be the last, unless we learn to avoid the risks,” the international press listened.
Stuart Armstrong, of Oxford’s Future of Humanity Institute, briefly goes over existential risk in general, and the existential risk posed by AI in particular, here:
In the short term, the biggest danger posed by AI is autonomous drones, and the best time to prevent their spread is to ban their use before it becomes commonplace (such a ban has already been proposed in Canada, and would appear to be an obvious future topic for a U.N. covenant). Most of the longer-range fears concerning AI may seem far-fetched now, which makes this an ideal time to have these conversations—before powerful military and financial interests find themselves in the unenviable and dangerous position of planning the use of a technology that the general public has not yet discussed. The last time this happened with a new technology capable of posing an existential risk, the result was an international nuclear arms race.