
Robot Overlords are Flying Drones and Getting Aggressive

Our robot overlords are at it again, and it’s getting ugly. A man in England was shocked at what happened after he responded to a request from his robot to fly his drone, while Google’s DeepMind AI is starting to show signs of aggressive behavior.

What you see in the video was all real.

Freelance software developer Scott Blais recorded his Aldebaran Robotics Nao V5 humanoid robot asking if it could fly his drone. He claims the robot was not preprogrammed to make the request. After giving it the controls and some simple commands, he turned the Nao V5 loose and hoped for the best.

Blais admits that the crashes directed by the robot caused minor damage to the drone (no word on what he’ll make the bot do to pay – washing the car sounds reasonable) and he’s not sure it was such a good idea.

Letting a robot control another robot probably isn’t the greatest idea in the world, because after all we are supposed to be the ones in command of robots.

Bad news, Scott. Google’s DeepMind artificial intelligence is tired of beating humans at the game of Go and is learning to be aggressive as it moves on to games against other AI agents, where the players encounter conflicting goals and must figure out whether to cooperate. The famous Prisoner’s Dilemma is an example: the lengths of two prisoners’ sentences vary depending on whether each chooses to stay silent or betray the other.
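The Prisoner’s Dilemma boils down to a small payoff table. As a rough sketch (the exact sentence lengths below are illustrative, not from any particular DeepMind experiment), it might look like this:

```python
# Illustrative payoff matrix for the classic Prisoner's Dilemma.
# Each entry maps the two prisoners' choices to (years for A, years for B).
PAYOFFS = {
    ("stay_silent", "stay_silent"): (1, 1),  # both cooperate: light sentences
    ("stay_silent", "betray"):      (3, 0),  # A is betrayed: A serves 3, B walks
    ("betray", "stay_silent"):      (0, 3),  # B is betrayed: B serves 3, A walks
    ("betray", "betray"):           (2, 2),  # mutual betrayal: both serve 2
}

def sentences(choice_a, choice_b):
    """Return the prison sentences, in years, for prisoners A and B."""
    return PAYOFFS[(choice_a, choice_b)]
```

The dilemma is visible in the numbers: betraying always shortens your own sentence no matter what the other prisoner does, yet mutual betrayal leaves both worse off than mutual silence.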

According to their report, the DeepMind researchers developed two games to test the AI. In the first game, called Gathering, each apple-picking agent could fire a laser at its opponent, temporarily removing it from the game. The AI quickly learned that this aggressive tactic worked well as the fruit grew scarce.

In the second game, called Wolfpack, two wolf agents chase the same prey and must decide whether to act as lone wolves and risk losing the prey to scavengers, or work together for a smaller but guaranteed prize. In this game, the AI leaned toward cooperation.

These simple games show it won’t be easy to define behavioral patterns for an AI to follow when it makes decisions in the real world.

What would two Nao V5 humanoids do in a drone race? Would they always play fair or would they learn from watching the Olympics that cheaters can win gold medals too?


Paul Seaburn Paul Seaburn is one of the most prolific writers at Mysterious Universe. He’s written for TV shows such as “The Tonight Show”, “Politically Incorrect” and an award-winning children’s program. He’s been published in “The New York Times” and “Huffington Post” and has co-authored numerous collections of trivia, puzzles and humor. Paul likes to add a bit of humor to each MU post he crafts. After all, the mysterious doesn’t always have to be serious.
