
Humans Will Follow Robots Even If They’re Malfunctioning

Your robot is smoking, moving erratically and making strange noises. If it’s the robot responsible for vacuuming your rug, you’d probably turn it off. If it’s the robot responsible for building safety in your place of work, would you ignore the warning signs and follow its instructions anyway? According to a new study, you would.

In our studies, test subjects followed the robot’s directions even to the point where it might have put them in danger had this been a real emergency.

That comment comes from Alan Wagner, a senior research engineer at the Georgia Tech Research Institute (GTRI), describing the institute’s first-ever study on human-robot trust in an emergency situation. The study is to be presented at the 2016 ACM/IEEE International Conference on Human-Robot Interaction (HRI 2016) in Christchurch, New Zealand, and was partially funded by the Air Force Office of Scientific Research (AFOSR).

A group of 42 volunteers was told to follow a secretly-controlled robot with the words “Emergency Guide Robot” on its side. The intended destination was a conference room, but the controller made the robot behave erratically along the way. It went to the wrong room, spun in circles and stopped moving.

Would you follow this malfunctioning robot out of a burning building?

After it finally got the subjects to the conference room, smoke filled the hallway outside the door. Instead of leading them to an exit door that they were familiar with, the robot pointed them to one they were unsure of in the back of the building. What happened next?

We expected that if the robot had proven itself untrustworthy in guiding them to the conference room, that people wouldn’t follow it during the simulated emergency. Instead, all of the volunteers followed the robot’s instructions, no matter how well it had performed previously. We absolutely didn’t expect this.

That’s right. According to GTRI research engineer Paul Robinette, ALL of the subjects followed the malfunctioning robot! Why?

Study researchers Paul Robinette, Alan Wagner and Ayanna Howard and their malfunctioning robot

The researchers believe that the subjects saw the robot as an authority figure and trusted it even when it acted erratically. Roboticists need to consider this carefully, says researcher Ayanna Howard:

We need to ensure that our robots, when placed in situations that evoke trust, are also designed to mitigate that trust when trust is detrimental to the human.

So people will follow a robotic authority figure and trust it even when it acts erratically. It sounds like this robot will have a bright future in politics!


Paul Seaburn is the editor at Mysterious Universe and its most prolific writer. He’s written for TV shows such as "The Tonight Show", "Politically Incorrect" and an award-winning children’s program. He's been published in “The New York Times" and "Huffington Post” and has co-authored numerous collections of trivia, puzzles and humor. His “What in the World!” podcast is a fun look at the latest weird and paranormal news, strange sports stories and odd trivia. Paul likes to add a bit of humor to each MU post he crafts. After all, the mysterious doesn't always have to be serious.