“We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”
That dire warning was sent in a letter to the United Nations by Tesla’s Elon Musk, Alphabet’s Mustafa Suleyman and the rest of a group of 116 founders of AI and robotics companies calling for a ban on all autonomous weapons before they trigger a “third revolution in warfare” after gunpowder and nuclear arms. Shouldn’t a warning of this magnitude from AI experts be delivered in a form other than a letter? The letter continues:
“Once developed, lethal autonomous weapons will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.”
The letter was sent to the U.N. Convention on Certain Conventional Weapons by Musk, Suleyman and tech leaders from China, Israel, Russia, Britain, South Korea and other major military powers. The group was meeting in Melbourne at the International Joint Conference on Artificial Intelligence (IJCAI), and the signatories’ main concern is that these killer robots, once thought to be decades away from implementation, may be just years away; it’s even possible that rudimentary forms already exist in some secret base or lab. Ryan Gariepy, the founder of Clearpath Robotics, explains the consequences:
“Unlike other potential manifestations of AI which still remain in the realm of science fiction, autonomous weapons systems are on the cusp of development right now and have a very real potential to cause significant harm to innocent people along with global instability.”
In fact, it may seem somewhat hypocritical that countries such as the U.S. and Russia would entertain a ban on autonomous weapons while already building autonomous tanks and, in the case of the U.S., an autonomous warship. It is likely too late for a ban covering all AI weapons, and a partial ban will be difficult to negotiate when countries believe their enemies, or even their allies, are already developing or testing such systems. Perhaps that’s why Musk sees this as the gravest threat facing us today, as he reiterated in a recent tweet:
“If you’re not concerned about AI safety, you should be. Vastly more risk than North Korea.”
If you’re not a fan of Elon Musk, perhaps Stuart Russell, founder and vice-president of Bayesian Logic and another signer of the letter, can convince you.
“Unless people want to see new weapons of mass destruction – in the form of vast swarms of lethal microdrones – spreading around the world, it’s imperative to step up and support the United Nations’ efforts to create a treaty banning lethal autonomous weapons. This is vital for national and international security.”
Is one letter signed by tech luminaries enough to stop the spread of autonomous weapons development? Probably not, but it’s a start.
Unless it’s already too late.