When the subject of driverless cars comes up, a fun but sometimes chilling philosophical game is to pretend you’re a programmer who has to decide whether the car will kill its one passenger to avoid running into a crowd of people. Should the passenger be allowed to take over the car and make the decision? Here’s a new twist. What if the car is a robot that has been taught to say no?
Let’s start with the passenger-takeover scenario. A new Stanford University study looked at ways to keep passengers attentive (more than just awake) in case their self-driving car got into trouble and needed an intervention. It found that reading or watching a movie helped keep passengers alert enough to take over.
But how alert do they need to be? Other studies have determined that a passenger needs at least five seconds to snap to attention, evaluate the situation and take over. That’s a long time (five Mississippis) to react. Some manufacturers are considering having the car do things to keep the passenger alert – noises, flashing lights, vibrating seats, etc.
What if the car could decide the passenger isn’t ready to take over and says ‘no’? What if the passenger tries to take over and the car refuses? That’s the kind of car that engineers Gordon Briggs and Dr Matthias Scheutz of Tufts University in Massachusetts might build with their robots that say ‘no’. In their recent paper presented to the Association for the Advancement of Artificial Intelligence, they describe two robots they programmed to refuse commands that would put them in danger. For example, the robots would not walk off the edge of a table when ordered to.
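To make the idea concrete, here is a minimal sketch of that kind of check: before obeying an order, the robot tests whether carrying it out would violate a safety condition and refuses, with a reason, if it would. This is an illustration only, not the Tufts researchers’ actual code; the Robot class, the sensor value and the command names are invented for the example.

```python
from dataclasses import dataclass


@dataclass
class Command:
    action: str      # e.g. "walk_forward"
    distance: float  # metres to move


class Robot:
    def __init__(self, distance_to_table_edge: float):
        # Hypothetical sensor reading: how far ahead the table edge is.
        self.distance_to_table_edge = distance_to_table_edge

    def is_safe(self, cmd: Command) -> bool:
        # The command is unsafe if it would carry the robot past the edge.
        if cmd.action == "walk_forward":
            return cmd.distance < self.distance_to_table_edge
        return True

    def obey(self, cmd: Command) -> str:
        if not self.is_safe(cmd):
            # Refuse and explain why, rather than blindly executing the order.
            return f"Refusing '{cmd.action}': it would take me off the edge."
        return f"Executing '{cmd.action}' for {cmd.distance} m."


if __name__ == "__main__":
    robot = Robot(distance_to_table_edge=0.3)
    print(robot.obey(Command("walk_forward", 0.1)))  # safe: obeys
    print(robot.obey(Command("walk_forward", 1.0)))  # unsafe: says 'no'
```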
Whoa! This breaks Isaac Asimov’s law that states a robot must obey the orders given to it by humans. Would a car driven by a robot programmed by Briggs and Scheutz be designed for disaster?
What about Asimov’s “don’t harm humans” law? Try this scenario: the driving robot refuses a command to hand over control of the car – an order given by a human who does not appear alert. The human fights with the robot. How much pain should the robot inflict (yes, they can inflict pain now too) to prevent the human from hurting himself? If the robot knowingly hands the car over to a seemingly suicidal driver who then crashes, did the robot cause the harm?
Do we need to keep robots that say 'no' out of driverless cars?