Google Proposes Kill Switch for Rogue A.I.

In the battle of man versus machine, Google DeepMind wants to hold the “big red button” in case artificial intelligence (A.I.) runs amok. A.I. has the potential to cure disease, end poverty and find solutions to the world’s woes, but it can also be dangerous, so safeguards must be in place.

The main concern is that, in the future, a machine that surpasses humans in capability may replace humans as the dominant force on Earth and may be impossible to stop. In 2014, Elon Musk tweeted,

We need to be super careful with A.I. Potentially more dangerous than nukes.

 


Nick Bostrom, a Swedish philosopher at the University of Oxford, co-founder of the Institute for Ethics and Emerging Technologies and author of the book “Superintelligence,” warns that once A.I. teaches itself how to learn without supervision, it will be able to learn so fast that it could overrule humanity and take over. This concept isn’t new. In 1958, mathematician Stanislaw Ulam, recounting a conversation with John von Neumann, used the term “singularity” to describe it.

Google DeepMind, in conjunction with The Future of Humanity Institute, recently released a paper, “Safely Interruptible Agents,” outlining a framework for stopping out-of-control A.I. The study states,

Safe interruptibility can be useful to take control of a robot that is misbehaving and may lead to irreversible consequences. If such an agent is operating in real-time under human supervision, now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions – harmful either for the agent or for the environment – and lead the agent into a safer situation.

Artificial intelligence is the biggest aid and threat to humanity.

The scientists who build the A.I. algorithms would need to install an “interruption policy” that could “forcibly change the behavior of the agent itself.” This signal would convince the machine, which at that point would be ignoring human commands, that it should stop. It would have to be a proprietary signal that only the A.I.’s owners could send. Basically, the owner would have to trick the machine.
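To make the idea concrete, here is a minimal toy sketch of an interruptible learning agent, not the paper’s actual method: a reinforcement-learning agent whose action is forcibly overridden while the “big red button” is held, and whose learning update skips interrupted steps so it never learns to resist (or seek out) the interruption. All names here (`SafelyInterruptibleAgent`, `interrupted`, `safe_action`) are illustrative assumptions, not identifiers from the DeepMind paper.

```python
import random

class SafelyInterruptibleAgent:
    """Toy Q-learning agent with a forced-interruption override (illustrative sketch,
    not the algorithm from "Safely Interruptible Agents")."""

    def __init__(self, actions, alpha=0.5, gamma=0.9, epsilon=0.1):
        self.actions = actions          # available actions
        self.alpha = alpha              # learning rate
        self.gamma = gamma              # discount factor
        self.epsilon = epsilon          # exploration rate
        self.q = {}                     # (state, action) -> estimated value

    def q_value(self, state, action):
        return self.q.get((state, action), 0.0)

    def choose(self, state, interrupted, safe_action):
        # Interruption policy: while the button is pressed, the operator's
        # safe action forcibly replaces whatever the agent would have done.
        if interrupted:
            return safe_action
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q_value(state, a))

    def learn(self, state, action, reward, next_state, interrupted):
        # Key trick: no learning update on interrupted steps, so the agent's
        # value estimates never register the interruption as a consequence
        # of its behavior -- it has no incentive to avoid the button.
        if interrupted:
            return
        best_next = max(self.q_value(next_state, a) for a in self.actions)
        td_error = reward + self.gamma * best_next - self.q_value(state, action)
        self.q[(state, action)] = self.q_value(state, action) + self.alpha * td_error
```

In this sketch the “trick” the article describes is simply that interrupted transitions are invisible to the learner, so from the agent’s point of view being stopped is indistinguishable from nothing happening at all.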

What was once the realm of science fiction is edging closer to reality. Protections must be put in place to ensure the future of humankind.