- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Fans of Isaac Asimov will recognize his Three Laws of Robotics, first presented in his 1942 short story "Runaround" and popularized in the 1950 collection "I, Robot." While they're generally read as rules governing robots' interactions with humans, the third could apply to robot-to-robot encounters as well. And although "protect" suggests physical aggression or contact, it could just as easily cover 'intelligent' contact. But what about one robot attacking another emotionally, as humans so frequently do to each other? We may soon find out, as a new robot developed at Columbia University has learned to predict another robot's future actions in a way that some are calling "empathy." Would today's version of "I, Robot" need to be retitled "I Feel Your Pain, Robot"?
“Our initial results are very exciting. Our findings begin to demonstrate how robots can see the world from another robot’s perspective.”
Or walk in another robot's steel shoes? That description of robot empathy comes from Boyuan Chen in a Columbia University press release describing his new study, published in the journal Scientific Reports. He and co-authors Carl Vondrick and Hod Lipson built a small robot and programmed it to seek out and move toward any green circle it could see in the cage or simulated room it was placed in. Sometimes the robot had a clear view; other times the circle was hidden behind a red box, forcing the robot to maneuver around in search of it or another circle. A second robot was positioned to observe the first and predict its moves. After two hours of watching, the observer needed only a few frames of video to predict which green circle its partner would pick and the path it would take.
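To make the setup concrete, here is a minimal toy sketch of the idea, not the authors' actual method (their observer was trained on raw video frames; the scene layout, step counts, and geometric prediction rule below are all invented for illustration). An "actor" heads toward the nearest green circle it can see, with one circle hidden behind a red box, and an "observer" guesses the actor's target from just its first few positions:

```python
import math

def occluded(a, c, box):
    """Crude line-of-sight test: sample points along the segment a -> c
    and report True if any falls inside the axis-aligned box."""
    xmin, ymin, xmax, ymax = box
    for i in range(1, 20):
        t = i / 20
        x = a[0] + t * (c[0] - a[0])
        y = a[1] + t * (c[1] - a[1])
        if xmin <= x <= xmax and ymin <= y <= ymax:
            return True
    return False

def simulate_actor(start, circles, box, steps=3):
    """The actor picks the nearest circle it can actually see and
    takes unit steps toward it, recording its trajectory."""
    visible = [i for i, c in enumerate(circles) if not occluded(start, c, box)]
    target = min(visible, key=lambda i: math.dist(start, circles[i]))
    traj, pos = [start], start
    for _ in range(steps):
        dx, dy = circles[target][0] - pos[0], circles[target][1] - pos[1]
        norm = math.hypot(dx, dy)
        pos = (pos[0] + dx / norm, pos[1] + dy / norm)
        traj.append(pos)
    return target, traj

def predict_target(traj, circles):
    """Observer: guess the circle whose direction best matches the
    actor's observed displacement (cosine similarity)."""
    sx, sy = traj[0]
    mdx, mdy = traj[-1][0] - sx, traj[-1][1] - sy
    mnorm = math.hypot(mdx, mdy)
    best, best_sim = None, -2.0
    for i, (cx, cy) in enumerate(circles):
        dx, dy = cx - sx, cy - sy
        sim = (dx * mdx + dy * mdy) / (math.hypot(dx, dy) * mnorm)
        if sim > best_sim:
            best, best_sim = i, sim
    return best

# Hypothetical scene: circle 0 lies behind the red box; circle 1 is visible.
circles = [(10.0, 0.0), (0.0, 10.0)]
box = (4.0, -1.0, 6.0, 1.0)
actual, frames = simulate_actor((0.0, 0.0), circles, box)
guess = predict_target(frames[:3], circles)  # observer sees only 3 frames
print(actual, guess)  # both 1: the observer infers the occlusion-constrained choice
```

The point of the toy version is the same one the study makes: to predict correctly, the observer must implicitly account for what its partner can and cannot see from its own vantage point.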
“The ability of the observer to put itself in its partner’s shoes, so to speak, and understand, without being guided, whether its partner could or could not see the green circle from its vantage point, is perhaps a primitive form of empathy.”
If you’re worried about robots becoming empathetic toward other robots, it gets scarier. The authors suggest this is an early step down the path toward robots acquiring a “Theory of Mind”: like human toddlers, they would first understand the needs and perspectives of other robots, then develop social interactions as playful and cooperative as hide-and-seek, or as sinister (and human-like) as lying and deception. Ultimately, Hod Lipson predicts, robots could develop a “mind’s eye” allowing them to think visually the way humans do. He doesn’t necessarily see this as a good thing.
“We recognize that robots aren’t going to remain passive instruction-following machines for long. Like other forms of advanced AI, we hope that policymakers can help keep this kind of technology in check, so that we can all benefit.”
Unfortunately, we can’t ask Isaac Asimov for Three More Laws of Robotics for Robots. Then again, if robots are capable of becoming emotional, will three be enough?