Even when we don’t realize it, artificial intelligence affects virtually every aspect of our modern lives. Yet for something so influential, there’s an odd assumption that artificial intelligence agents and machine learning, which enable computers to make decisions like humans and for humans, are neutral. They’re not.
Alberto Ibargüen, president and CEO of the Knight Foundation, describes the challenge facing a group of tech industry billionaires and philanthropists who announced last week that they are pooling some of their profits to teach morals, ethics and perhaps even a little religion to artificial intelligence. Is this technical philanthropy tinged with a taste of guilt?
For example, one of the most critical challenges is how to make sure that the machines we 'train' don't perpetuate and amplify the same human biases that plague society.
That’s a big challenge, one that eBay founder Pierre Omidyar, LinkedIn founder Reid Hoffman, the Knight Foundation (a non-profit charitable foundation) and others hope to help answer through Omidyar's charitable foundation, Omidyar Network. In a press release last week, Omidyar announced that the group has given an initial $27 million to researchers who will work at MIT's Media Lab and Harvard's Berkman Klein Center for Internet & Society to develop what Steve Wilson, VP and principal analyst at Constellation Research (a Silicon Valley technology research and advisory firm), calls “deep, cross-disciplinary systems thinking being applied to the problem of mechanizing ethics.”
The foundation is expected to address complex ethical issues in artificial intelligence, such as how to put controls in place that minimize its potential dangers to society while still maximizing its benefits. A related task will be attracting engineers and investors to projects that are innovative and profitable while not ignoring the public interest.
Can AI learn ethics? Steve Wilson is glad a group with brains from MIT and Harvard and funding from the Omidyar Network is addressing this challenging question.
The thing about ethics that takes technologists by surprise is that some problems will never be completely solved. Machine learning might evolve to deal with surprises, but there will always be 'meta surprises' that no pre-programmed machine can handle on its own.
Perhaps an easy place to start would be the ethics of teaching AI machines to play poker, especially since gambling is addictive and the machines are already excelling at taking money from unsuspecting humans.
The recent passing of actor Dick Gautier, who played Hymie the Robot in the classic sitcom Get Smart, reminds us that dealing with artificial intelligence was so much easier when humanoid robots stored their artificial intelligence data on reel-to-reel tape drives.