Move over, Stephen King. There’s a new horror writer in town and his name is … Elon Musk? In a new documentary, the founder/CEO/financial backer of SpaceX, Tesla, Neuralink, PayPal and more warned that the days of mortal dictators may be over, because someone can (and probably will) develop an immortal artificial intelligence that will, at the least, destroy its maker and most likely take out the rest of us on Earth as well. How’s that spaceship to Mars coming along, Elon?
"It's just like, if we're building a road, and an anthill happens to be in the way. We don't hate ants, we're just building a road. So, goodbye, anthill."
That’s how a good horror novel begins … with an innocent act like building a road over an anthill. If the road construction is being aided by an autonomous earth-moving vehicle collecting data for future projects, that indifference to lesser life forms will be filed away to be shared not only with other A.I. earth movers but with A.I. Earth destroyers as well. That’s the picture painted by Elon Musk in the new documentary “Do You Trust This Computer?”, which was produced and directed by Chris Paine, whose relationship with Musk began in 2006 while making “Who Killed The Electric Car?”, a documentary about the technical success and corporate destruction of General Motors’ EV1 of the mid-1990s.
Will the immortal A.I. be created by a corporation … like Google’s DeepMind?
"The DeepMind system can win at any game. It can already beat all the original Atari games. It is super human; it plays all the games at super speed in less than a minute. If AI has a goal and humanity just happens to be in the way, it will destroy humanity as a matter of course without even thinking about it. No hard feelings."
No hard feelings. It’s just like the end of that game of Go you taught it. Or Monopoly. Or Risk. Or Hangman. Or Chicken. AI over I. No hard feelings.
Wait a minute, Elon. Can’t we just have government regulate A.I.?
“By the time we are reactive in AI regulation, it'll be too late. Normally the way regulations are set up is when a bunch of bad things happen, there's a public outcry, and after many years a regulatory agency is set up to regulate that industry. It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilization.”
Perhaps a boycott by non-governmental humans is the answer. In South Korea, after the Korea Advanced Institute of Science and Technology (KAIST) revealed it would partner with defense manufacturer Hanwha Systems to develop ‘killer robots’, artificial intelligence researchers from nearly 30 countries announced they would boycott the university. They were led by Toby Walsh, a professor at the University of New South Wales, who said this:
“There are plenty of great things you can do with AI that save lives, including in a military context, but to openly declare the goal is to develop autonomous weapons and have a partner like this sparks huge concern. This is a very respected university partnering with a very ethically dubious partner that continues to violate international norms.”
Elon Musk, channeling his best “I pity the fools” impression of Mr. T, doesn’t think human intervention will be the answer once it’s too late.
"At least when there's an evil dictator, that human is going to die. But for an AI there would be no death. It would live forever, and then you'd have an immortal dictator, from which we could never escape."
An immortal dictator. Is this why Musk is heading to Mars?
The documentary can be seen here.