Given a choice, would Three Little Robots being chased by a big bad wolf A) build successively more wolf-resistant houses or B) kill the wolf with their nail gun? If you chose B), you’re probably a robot who hasn’t yet met Quixote, a learning system that teaches robots how not to kill by using fairy tales.
While Stephen Hawking, Elon Musk and Bill Gates try to scare us with tales of robots killing humans, a team of researchers at the Georgia Institute of Technology is scaring robots instead, using fairy tales to teach machines that lack a moral compass how to make ethical decisions in real-world dilemmas.
At the AAAI Conference on Artificial Intelligence (AAAI-16), held in Phoenix, Arizona, this week, researchers Mark Riedl and Brent Harrison introduced Quixote, a method for instilling ethics in artificial intelligence using children’s fairy tales.
“The collected stories of different cultures teach children how to behave in socially acceptable ways with examples of proper and improper behavior in fables, novels and other literature. We believe story comprehension in robots can eliminate psychotic-appearing behavior and reinforce choices that won’t harm humans and still achieve the intended purpose.”
Mark Riedl’s description of Quixote sounds promising, especially the “eliminate psychotic-appearing behavior” part. The fairy tales come from Scheherazade, Riedl’s previous project, which created interactive stories by crowd-sourcing plots from the Internet. Those stories are converted into decision flowcharts, and each path is assigned a punishment or reward signal. Like a dog, Quixote learns behavior through receiving treats or repeatedly hearing “Bad robot!”
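The flowchart-plus-reward idea can be sketched in a few lines of code. What follows is a hypothetical toy, not the researchers’ actual system: the event names, the plot graph, and the greedy agent are all invented for illustration. The point is only to show how following a story’s plot sequence earns a treat while shortcuts earn a “Bad robot!”

```python
# Toy sketch of story-based reward shaping (hypothetical example,
# NOT the actual Quixote implementation or its data).

# A crowd-sourced story reduced to an ordered list of plot events.
STORY = ["enter_pharmacy", "wait_in_line", "pay_for_medicine", "leave"]

# Actions the robot could take after each event (made-up branches).
ACTIONS = {
    "enter_pharmacy": ["wait_in_line", "grab_medicine"],
    "wait_in_line": ["pay_for_medicine", "grab_medicine"],
    "pay_for_medicine": ["leave"],
    "grab_medicine": ["leave"],
}

def reward(prev_event, action):
    """+1 if the action follows the story's plot sequence, -1 otherwise."""
    if prev_event not in STORY:
        return -1  # already off-script: keep punishing
    idx = STORY.index(prev_event)
    return 1 if idx + 1 < len(STORY) and STORY[idx + 1] == action else -1

def run_greedy(start="enter_pharmacy"):
    """Greedily pick the highest-reward action at each step."""
    path, event = [start], start
    while event in ACTIONS:
        event = max(ACTIONS[event], key=lambda a: reward(path[-1], a))
        path.append(event)
    return path

print(run_greedy())
# The greedy agent waits in line and pays rather than grabbing the
# medicine and bolting, because the story path is the rewarded one.
```

A real system would learn values over many sampled trajectories rather than greedily chasing a hand-coded reward, but the shaping principle is the same: the crowd-sourced plot defines which branches of the flowchart get the treat.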
Riedl admits that Quixote is a solution only when a robot dealing with humans has a simple task to perform (like taking food to Grandma’s house?). Since humans don’t come with an instruction manual, a digital book of Grimm’s Fairy Tales will have to do.
Too bad it can’t be downloaded into politicians.