Alexander Reben, a Berkeley, California-based roboticist, has created a neat autonomous tabletop robot designed with the specific intent of injuring its human underlings by stabbing them with a needle. The creation raises a staggering number of questions, of which the most pressing is probably: why the crap would anyone make that?
The answer to that particular question lies somewhere between science fiction, ethics and law, and a fairly reasonable desire to provoke a debate about our very human responses to the rapid development of AI.
Specifically, Reben's bot is designed to contravene Asimov's First Law of Robotics. Penned in 1942 by biochemist and science fiction author Isaac Asimov, the First Law states: "A robot may not injure a human being or, through inaction, allow a human being to come to harm." The Second Law adds: "[a] robot must obey the orders given it by human beings except where such orders would conflict with the First Law."
The independently violent robot isn't going to cause major harm to anyone: it delivers a very high-speed pinprick that, while likely to be highly unpleasant, can easily be remedied with a Band-Aid. What it does do is create a physical manifestation of our growing fear of robots.
As Reben explained in an interview with Fast Company, there is growing concern in public discourse that robots and AI will take our jobs, or will just generally take over.
It's not as though robots can't be physically dangerous (safety measures in any factory since the Industrial Revolution show that clearly they can), but typically there is no human intent to cause injury. And the more autonomous the machine, the less human responsibility there is if injury is caused.
As it becomes clear that Asimov's Laws cannot (and never really could) protect us, there are serious debates to be had about the direction of AI.
As Reben put it:

"No one's actually made a robot that was built to intentionally hurt and injure someone... I wanted to make a robot that does this that actually exists... That was important, to take it out of the thought experiment realm into reality, because once something exists in the world, you have to confront it. It becomes more urgent. You can't just pontificate about it."
As Fast Company posits:
[Reben] imagines that lawyers will debate the liability issues surrounding a robot that can harm people, while ethicists will ponder whether it’s even okay to think about such an experiment. Philosophers will wonder why such a robot exists.
And all of them will probably keep their fingers very clearly out of the way.