Those of us watching the U.S. presidential election (and who isn’t?) don’t need any help having a steady stream of nightmares. Apparently that message didn’t make it to Massachusetts, where MIT researchers used artificial intelligence algorithms to create horrific, nightmare-inducing images. Wouldn’t these geniuses’ time be better spent looking for new prime numbers?


There are many to blame for these AI bad dreams. The dark soul of the Nightmare Machine is the MIT Media Lab's Scalable Cooperation group, whose stated goal is to help us “scale up our ability to coordinate, cooperate, exchange information, and make decisions.” Apparently that goal wasn’t challenging enough for researchers Pinar Yanardag, Manuel Cebrian and Iyad Rahwan. They created a unique deep learning algorithm to teach artificial intelligence what kinds of images scare humans, then turned it over to Google’s DeepDream image generator to create nightmarish images of its own.
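For the curious, the core trick behind DeepDream-style generators is surprisingly simple: run gradient ascent on the image itself, nudging the pixels to make some feature detector fire harder. The real system climbs the activations of a deep pretrained network; the toy sketch below is only an illustration of that idea, using a single hand-made convolution filter (a hypothetical stand-in for a network layer) instead of an actual neural net.

```python
import numpy as np

def convolve2d(img, kernel):
    """Valid-mode 2D cross-correlation (no padding)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def dream_step(img, kernel, lr=0.1):
    """One gradient-ascent step on the mean filter activation.

    For a single linear convolution, the gradient of the mean activation
    with respect to the image is just the kernel 'stamped' at every
    position, so we accumulate that and step uphill.
    """
    h, w = img.shape
    kh, kw = kernel.shape
    grad = np.zeros_like(img)
    n = (h - kh + 1) * (w - kw + 1)
    for i in range(h - kh + 1):
        for j in range(w - kw + 1):
            grad[i:i + kh, j:j + kw] += kernel / n
    return img + lr * grad

rng = np.random.default_rng(0)
img = rng.random((16, 16))                      # start from noise
edge = np.array([[1., -1.], [1., -1.]])         # crude vertical-edge detector
before = convolve2d(img, edge).mean()
for _ in range(50):                             # "dream" for 50 steps
    img = dream_step(img, edge, lr=0.5)
after = convolve2d(img, edge).mean()
print(f"mean activation: {before:.3f} -> {after:.3f}")
```

After the loop, the filter's mean activation has grown, i.e. the image has been warped toward whatever the filter "likes" to see. Swap the toy filter for a layer of a trained vision network and "what the filter likes" becomes monsters, flames and melting faces.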


The results are definitely horrifying. The Nightmare Machine spit out an image of the beautiful Louvre Museum in the flames of hell. It created a monster that attacked the Golden Gate Bridge. It turned good-looking Brad Pitt into an ugly beast (with no help from Angelina Jolie). And, even though we need no help here, it nightmarified an image of the candidates in the recent presidential debate.


Why are they doing this to us? Because … science!

Creating a visceral emotion such as fear remains one of the cornerstones of human creativity. This challenge is especially important in a time where we wonder what the limits of artificial intelligence are: can machines learn to scare us?

If you want to help the Nightmare Machine create more nightmares (you really are a sicko), the researchers have set up a website where humans can tell the AI algorithm which images are scary and which are just something from that section of the art museum no one understands.


If all of this isn’t frightening enough, here’s another real nightmare. Netflix, Google and Facebook are all investing heavily in these deep learning algorithms that teach computers how to recognize patterns and generate new images.

Does AI really need our help to be scary? Have you ever imagined riding in a self-driving vehicle programmed by the same people who blame you for the way the country is?



Paul Seaburn

Paul Seaburn is the editor at Mysterious Universe and its most prolific writer. He’s written for TV shows such as “The Tonight Show”, “Politically Incorrect” and an award-winning children’s program. He’s been published in “The New York Times” and “Huffington Post” and has co-authored numerous collections of trivia, puzzles and humor. His “What in the World!” podcast is a fun look at the latest weird and paranormal news, strange sports stories and odd trivia. Paul likes to add a bit of humor to each MU post he crafts. After all, the mysterious doesn't always have to be serious.
