Dec 03, 2015 | Paul Seaburn

Robots That Say No May Override Humans in Driverless Cars

When the subject of driverless cars comes up, the fun but sometimes chilling philosophical game to play is to pretend you’re a programmer and you have to decide if the car will kill its one passenger to avoid running into a crowd of people. Should the passenger be allowed to take over the car and make the decision? Here’s a new twist. What if the car is a robot that has been taught to say no?

Let’s start with the passenger-takeover scenario. A new study conducted at Stanford University looked at ways to keep passengers attentive (more than just awake) in case their self-driving car got into trouble and needed an intervention. The researchers found that reading or watching a movie helped keep passengers alert enough to take over.

Does this passenger look alert enough to take over?

But how alert do they need to be? Other studies have determined that a passenger needs at least five seconds to snap to attention, evaluate the situation and take over. That’s a long time to react – count out five Mississippis. Some manufacturers are considering having the car do things to keep the passenger alert – noises, flashing lights, vibrating seats, etc.
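For the technically inclined, that kind of handover can be pictured as a timer plus escalating alerts. Here is a minimal sketch in Python – the five-second window comes from the studies above, but the function names, alert cues and intervals are purely illustrative assumptions, not anything from the Stanford work:

import time

TAKEOVER_WARNING_SECONDS = 5  # minimum reaction time cited above
ALERTS = ["chime", "flashing lights", "vibrating seat"]  # hypothetical escalating cues

def request_takeover(passenger_is_alert, timeout=30):
    """Escalate alerts and only hand over control once at least five
    seconds have passed and the passenger appears alert."""
    start = time.time()
    level = 0
    while time.time() - start < timeout:
        elapsed = time.time() - start
        if elapsed >= TAKEOVER_WARNING_SECONDS and passenger_is_alert():
            return True  # passenger has had time to snap to attention; hand over
        if level < len(ALERTS) and elapsed > 2 * level:
            print("Alert:", ALERTS[level])  # escalate roughly every two seconds (assumed)
            level += 1
        time.sleep(0.5)
    return False  # passenger never became alert; the car keeps control

An optimistic test – request_takeover(lambda: True) – would cycle through the cues and hand over control after about five seconds; a passenger who never looks up gets a False and the car stays in charge.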

What if the car could decide the passenger isn’t ready to take over and say ‘no’? What if the passenger tries to take over and the car refuses? That’s the kind of car that engineers Gordon Briggs and Dr. Matthias Scheutz of Tufts University in Massachusetts might build with their robots that say ‘no’. In a recent paper presented to the Association for the Advancement of Artificial Intelligence, they describe two robots they programmed to refuse commands that would put them in danger. For example, the robots would not walk off the edge of a table when ordered to.
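Briggs and Scheutz frame the refusal as spoken dialogue, with the robot reasoning about whether a command is safe and feasible before acting. A crude way to picture the idea is a precondition check that runs before any order is executed. The Python sketch below is purely illustrative – the function names, the "distance to edge" check and the 20 cm safety margin are assumptions for illustration, not the authors' code:

def safe_to_execute(command, world):
    """Hypothetical precondition: refuse any motion that would carry
    the robot past a detected edge, such as the end of a table."""
    if command == "walk forward":
        return world.get("distance_to_edge_cm", 0) > 20  # assumed safety margin
    return True

def handle_command(command, world, trusted_speaker=True):
    # Refuse orders that are unsafe or come from an untrusted speaker,
    # mirroring the 'robot that says no' behavior described above.
    if not trusted_speaker:
        return "Sorry, I can't take that order from you."
    if not safe_to_execute(command, world):
        return "Sorry, I cannot do that: it is not safe."
    return "Executing: " + command

# The robot is 5 cm from the table edge and is told to walk forward.
print(handle_command("walk forward", {"distance_to_edge_cm": 5}))
# -> Sorry, I cannot do that: it is not safe.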

A robot saying no

Whoa! This breaks Isaac Asimov’s Second Law of Robotics, which states that a robot must obey the orders given to it by humans. Would a car driven by a robot programmed by Briggs and Scheutz be designed for disaster?

What about Asimov's “don’t harm humans” law? Try this scenario: the driving robot refuses to hand control of the car over to a human who does not appear alert. The human fights with the robot. How much pain should the robot inflict (yes, they can inflict pain now too) to prevent the human from hurting himself? And if the robot instead knowingly hands the car over to a seemingly suicidal driver who then crashes, did the robot cause the harm?

Do we need to keep robots that say 'no' out of driverless cars?

Not just no but a "Hell no!"

 

Paul Seaburn

Paul Seaburn is the editor at Mysterious Universe and its most prolific writer. He’s written for TV shows such as “The Tonight Show”, “Politically Incorrect” and an award-winning children’s program. He’s been published in “The New York Times” and “Huffington Post” and has co-authored numerous collections of trivia, puzzles and humor. His “What in the World!” podcast is a fun look at the latest weird and paranormal news, strange sports stories and odd trivia. Paul likes to add a bit of humor to each MU post he crafts. After all, the mysterious doesn't always have to be serious.

