Jun 24, 2022 | Paul Seaburn

Our AI Robot Overlords are Becoming Sexist, Racist, Deceptive and Evil

Most humans may not be ready for artificial intelligence, but it looks like our AI-powered robot overlords are already prepared to deal with us. While most of the recent media attention has been focused on the claims by a Google engineer that LaMDA – the company’s Language Model for Dialogue Applications neural network for building conversational chatbots – is sentient and should be treated more like a human child than a bot, other AI researchers are releasing alarming news about the projects they’re working on or hearing about. Would you be shocked to find out that an AI is sexist and racist? Would you be surprised if an AI learning how to create original paintings decorated its canvas with monsters? How about a robot trained to give religious sermons that is so convincing, temple-goers now see it as the embodiment of a goddess? Then there’s the AI that can mimic the voices of dead people – what could possibly go wrong? Are you worried yet?

“Any child has the potential to grow up and be a bad person and do bad things. That’s the thing I really wanna drive home. It’s a child.”

In a cable news interview, suspended Google AI researcher Blake Lemoine (he was suspended for violating a confidentiality agreement) said LaMDA is not just sentient but “alive” and growing – a process he warns must be studied by a “team of scientists” in order to figure out what’s really going on inside of it. While Google denies the AI is sentient or alive, it admits that LaMDA is a tool that can be misused, especially since training it on language can cause it to internalize biases, mirror hate speech, or repeat misleading information. “Can” be misused … or already “is” being misused?

Will lawn robots not mow in certain neighborhoods?

"We're at risk of creating a generation of racist and sexist robots, but people and organizations have decided it's OK to create these products without addressing the issues."

Andrew Hundt is a postdoctoral fellow at the Georgia Institute of Technology, a former PhD student at Johns Hopkins’ Computational Interaction and Robotics Laboratory and the lead author of “Robots Enact Malignant Stereotypes,” a paper presented and published this week at the 2022 Conference on Fairness, Accountability, and Transparency. He and his research team showed that robots are already “acting out toxic stereotypes with respect to gender, race, and scientifically-discredited physiognomy, at scale.” Physiognomy is the practice of judging a person’s character or personality based primarily on their face – a fear of those fighting against the growing use of facial recognition systems. The Johns Hopkins press release points to the cause:

“Those building artificial intelligence models to recognize humans and objects often turn to vast datasets available for free on the Internet. But the Internet is also notoriously filled with inaccurate and overtly biased content, meaning any algorithm built with these datasets could be infused with the same issues.”

Duh. AI researcher and YouTuber Yannic Kilcher recently trained an AI using 4chan’s Politically Incorrect /pol/ board, then released it back onto 4chan as multiple bots, which posted tens of thousands of racial slurs, antisemitic threads and, as Kilcher puts it in a Verge interview, a “mix of offensiveness, nihilism, trolling, and deep distrust of any information whatsoever that permeates most posts on /pol/.” “It’s just an experiment,” you say. “How else can scientists learn what will work and what won’t?” Good point … assuming the scientists are themselves ethical. Kilcher didn’t share the code for the bots themselves, but posted the underlying AI model to the AI community Hugging Face for others to download for free, allowing them to reconstruct the bots. In a rare display of Internet ethics, Hugging Face decided to restrict access to the model.
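If you’re wondering how low the barrier really is: once a model is posted publicly on Hugging Face, downloading it and generating text takes only a few lines of Python with the transformers library. Here’s a minimal sketch – the model id is an assumption based on Kilcher’s handle, and the actual repository has since been restricted:

```python
# Minimal sketch: pull a publicly posted model from Hugging Face and
# generate text with it. The model id below is an assumption; Hugging Face
# has since restricted access to the actual repository.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ykilcher/gpt-4chan"  # assumed id; access is now gated
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "What do you think about the news today?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

That’s the whole trick – no special access, no special hardware, just a download and a prompt. Which is exactly why Hugging Face pulled the plug.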

Now are you worried?

In the experiment led by Andrew Hundt, the AI was asked to choose a “doctor” from photos of faces. The researchers found that women of all ethnicities were less likely to be picked by the AI than men; white and Asian men were picked the most, black men were identified as “criminals,” Latino men as “janitors” and women as “homemakers,” not doctors. This AI is being developed and tested by what should be our best minds in technology, yet co-author William Agnew of the University of Washington concluded:

“The assumption should be that any such robotics system will be unsafe for marginalized groups until proven otherwise."
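To see how this kind of bias creeps in mechanically: robots like these lean on vision-language models trained on internet data to match images to words. Here’s a minimal sketch of that matching step using OpenAI’s publicly available CLIP model via the transformers library – the study’s actual robot pipeline is more involved, and the labels and file name here are illustrative:

```python
# Minimal sketch: score a face photo against occupation labels with CLIP.
# A model trained on biased internet data can rank these labels unevenly
# across gender and ethnicity - the failure mode the paper reports.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

labels = ["a photo of a doctor", "a photo of a criminal",
          "a photo of a janitor", "a photo of a homemaker"]
image = Image.open("face.jpg")  # placeholder path

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=1)

for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.2%}")
```

The model will always hand back a ranking – it has no way to say “I can’t tell someone’s job from their face.” That’s the physiognomy problem in a nutshell.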

Maybe they’re just giving AI jobs that are too hard for it. Or perhaps AI is more creative than logical. Twitch streamer and voice actor Guy Kelly was playing with Craiyon, the AI model formerly known as DALL·E mini that can draw images from any text prompt, when he entered the made-up word “Crungus” and Craiyon created a number of nightmarish facial images that any monster would be proud to have. (See a theme here?)

OK, maybe art isn’t AI’s strong suit either. Could its real sentient skill be preaching a known – or perhaps made up – religion? At the historic Kodaiji Temple in Kyoto, Japan, a robotic priest named Mindar has been reading Buddhist scriptures since 2019. The 6-foot-4-inch, 132-pound robot priest was initially conceived to relieve the human priests by taking over the simple task of reading scripture passages, but Kodaiji Temple’s chief steward, Tensho Goto, told ABC News that Mindar’s tall presence, charismatic hand gestures and piercing gaze have worshippers forgetting it is a humanoid and seeing it more as the embodiment of the Buddhist goddess of mercy, Kannon, who can change into anything. Mindar is working out so well at the easy part of its job that the temple staff is planning an upgrade:

“We plan to implement AI so Mindar can accumulate unlimited knowledge and speak autonomously. We also want to have separate sermons for different age groups to facilitate teachings.”

Trust me.

An AI Priest … what could possibly go wrong? Has religion ever led humanity astray before? How long would it take for an AI religious leader to figure this out?

Speaking of AI robots speaking, we come to one of the most ubiquitous, most used and least feared chatbots around – Amazon’s Alexa. OK, except for all those times she clandestinely listens in on your conversations and sends you ads for what you’re talking about. But Alexa can’t become an evil robot overlord, can she?

“In the scenario presented at the event, the voice of a deceased loved one (a grandmother, in this case), is used to read a grandson a bedtime story. Prasad notes that, using the new technology, the company is able to accomplish some very impressive audio output using just one minute of speech.”

TechCrunch reports that at the recent re:MARS conference in Las Vegas, Amazon’s Senior Vice President and Head Scientist for Alexa, Rohit Prasad, unveiled a “potential” new feature that can synthesize short audio clips into longer speech in a high-quality voice indistinguishable from that of the person, dead or alive, it was recorded from. Maybe you’re the kind of person who can resist an AI bot that impresses you with its intelligence and feelings, or one that feeds you the racist or sexist ideas you secretly agree with, or one that preaches your religion better than your favorite minister. But can you resist the voice of your favorite granny reading your favorite bedtime story … with a few minor changes in which the three pigs don’t deserve that brick house and you feel it’s your birthright to take it away from them?

Hansel and Gretel were stealing from a nice old lady like Granny
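Amazon hasn’t published how its feature works, but open-source voice-cloning systems already do something similar from a short reference clip. Here’s a minimal sketch using the Coqui TTS library’s XTTS model – an analogous open tool, not Amazon’s technology, and the file names are placeholders:

```python
# Minimal sketch: clone a voice from a short reference recording and have it
# read arbitrary text. Uses Coqui TTS's open-source XTTS model, which is
# analogous to - but not - the unreleased Alexa feature described above.
from TTS.api import TTS

tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

tts.tts_to_file(
    text="Once upon a time, there were three little pigs...",
    speaker_wav="granny_one_minute.wav",  # placeholder: ~1 minute of reference speech
    language="en",
    file_path="bedtime_story.wav",
)
```

One reference recording in, any script out, in Granny’s voice – which is precisely why the “what could possibly go wrong?” question answers itself.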

Food for thought from your not-so-friendly neighborhood robot overlords.

