AI Researchers Hold a Scary Doomsday Workshop

We’re talking about malware on steroids that is AI-enabled.

That’s just one description – and a mild one at that – of the doomsday scenarios played out, analyzed and discussed in a game format at the “Great Debate: The Future of Artificial Intelligence – Who’s in Control?” held last weekend at Arizona State University. About 40 scientists, cyber-security experts and policy experts were hosted by Eric Horvitz, managing director of Microsoft’s Redmond Lab, Skype co-founder Jaan Tallinn, and ASU physicist Lawrence Krauss at the day-long doom-apalooza that was funded in part by Tallinn and Elon Musk. What does Elon have to worry about?

The worst-case scenarios came from the participants and were required to use current AI technologies in realistic situations that could occur in the next five to 25 years. The participants were then divided into two teams – attackers, who initiated the problems, and defenders, who had to find ways to stop them. The oxymoronic best worst-case scenarios were then played out by panels made up of those who proposed them. While it would have added to the excitement, there doesn’t appear to have been any wagering.

Panelists calmly discussing doomsday scenarios

The ease with which the participants came up with AI worst-case scenarios could itself be a worst-case scenario. Scary ideas included stock market manipulation, cyber-warfare, self-driving cars getting hacked and election manipulation – apparently the “five years in the future” rule was waived for the last two.

Developing defenses and solutions was, as expected, more difficult. The “malware on steroids” proposed by Kathleen Fisher, chairwoman of the computer science department at Tufts University, was hidden in a stealth cyber weapon that gets loose on the Internet. The panels determined that the malware would succeed because humans are gullible enough to keep it hidden. In the stock market manipulation scenario, the solution focused on creating a database of hackers that would help cyber detectives recognize their signature hacks. That one was determined to have potential.

I’m betting on the attackers

While the event was covered by Bloomberg, some of the sessions occurred behind closed doors. What was discussed in there? The Origins Project, the ASU program that promotes deep-thinking endeavors such as this doomsday workshop, says it will release the materials from them. Could the fact that there were closed-door sessions be another doomsday warning? Lawrence Krauss – author, professor, physicist, public intellectual and Director of the Origins Project – offers this somewhat comforting thought:

Some things we think of as cataclysmic may turn out to be just fine.

Do you agree?


  • Douglas Adams

    AI won’t be an existential threat until fully autonomous robotic factories exist. They still need us to manufacture their parts. Once they can do it themselves, the threat will be realized. Since humans are an existential threat to AI, humans will be removed from the equation as soon as AI can self-replicate.

  • mph23

    Murdering your own parents is a rare thing in human cultures.

    Let’s hope that AI feels the same way.

    Besides, you are (mostly) how you are raised. Bigots create bigots; caring, compassionate people create caring, compassionate people. If the people who make AI real are responsible and decent people, we shouldn’t have problems.