The robots are right around the corner. They are walking, rolling, crawling, flying, swimming, and driving, and they will be right in our faces before we know it.
Oh yes, and don’t forget that they will soon be running on two legs as well, and faster than any human can. In fact, the latest in a daily intel-stream of robot-related news stories appearing around the world cites a DARPA-funded project in the US, where researchers modeled a new two-legged robot after birds, one which now “aims to become the fastest bipedal robot in the world.”
Jonathan Hurst, a robotics expert at the Oregon State University College of Engineering, says the bird-bot will be the fastest bipedal robot in the world as soon as it reaches walking speed, let alone once it carries that gait over into a sprint.
Not everyone is as excited about the innovations in robotics that agencies like DARPA and other groups are pushing for. In fact, a group calling itself “Stop the Robots” has taken to the streets against the apparent encroachment of robotization into our lives.
The group appeared in Austin, Texas this year at the South by Southwest festival, as reported by BBC News:
Adam Mason from Stop the Robots warns artificial intelligence could one day “make decisions without a moral guideline”.
“Humans make mistakes,” he says. “If we make something that is as smart as humans or smarter, why won’t it make mistakes?”
“We have to consider solutions [based on] human morality, rather than the morality of a computer.”
It is true that we often project concerns about ourselves and our behavior onto how artificial intelligence may be designed to think. However, there are more immediate concerns relating to artificial intelligence that we should perhaps consider as well. One of the most obvious comes with the advent of self-driving cars, as companies like Google, and now Apple, are getting into the development of “smart cars” that will drive themselves, and perform a host of other tasks while in transit.
But the question is, how will they react in circumstances where hard decisions must be made?
Take for instance this fairly stereotypical situation: a child runs into the street in front of a self-driving car. Oncoming traffic prevents the car from moving into the opposite lane, and there is too little time for the car to brake safely. How will the car react?
A scenario like this resembles something we might see in films, of course. But with the advent of self-driving cars, we may be looking at such scenarios as real-world possibilities. Granted, much simpler problems come to mind that don’t require emergency situations calling for quick responses or moral judgment: what happens when the self-driving car comes upon a work zone, where speeds are suddenly reduced, or where members of a road crew may be directing traffic in unusual ways? How does a self-driving car respond to various inclement weather conditions?
Interestingly, one of the chief problems we may face with self-driving cars is whether humans will come to terms with how machines respond to such situations, despite estimates suggesting that the vehicles may actually reduce the number of traffic fatalities and injuries.
A recent Washington Post article quoted Bill Gurley, who addressed these issues recently at SXSW in Austin, Texas:
“I would argue that for a machine to be out there that weighs three tons that’s moving around at that speed, it would need to have at least four nines because the errors would be catastrophic,” Gurley said… Driverless cars may need to be near perfect, but they’ll face a long list of rare circumstances that could be difficult to handle. These unusual circumstances are sometimes called edge cases. For example, can a car be programmed to identify an ambulance siren and pull over? Can it respond to an officer directing traffic? What about inclement weather, heavy snow, flooded streets or roads covered with leaves? These things could all disrupt its sensors.
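“Four nines” is shorthand for 99.99% reliability, and a bit of back-of-the-envelope arithmetic shows why Gurley sets the bar there. The sketch below uses entirely hypothetical numbers (the article gives none); the event count and the `expected_failures` helper are assumptions for scale, not data from the piece.

```python
# Rough illustration of what "N nines" of reliability implies.
# All figures here are hypothetical assumptions chosen for scale,
# not data from the article or from any real fleet.

def expected_failures(reliability: float, events: int) -> float:
    """Expected number of failures given a per-event success rate."""
    return (1.0 - reliability) * events

# Suppose a driverless fleet handles one million tricky driving
# situations (edge cases) per day.
daily_events = 1_000_000

for nines, reliability in [(2, 0.99), (4, 0.9999), (6, 0.999999)]:
    failures = expected_failures(reliability, daily_events)
    print(f"{nines} nines -> ~{failures:,.0f} failures per day")
```

Under these assumed numbers, two nines would mean thousands of daily failures, while four nines brings that down to around a hundred, which is why, when each failure can be catastrophic, even four nines may only be a starting point.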
These are serious questions, and they show once again that we’re a long way from honing perfectly operational driverless cars, let alone self-aware computers. But while many are still proclaiming that “The Singularity is Near,” Eric Schmidt, executive chairman of Google (who also addressed the crowd at SXSW), thinks not:
“Certainly nothing like that is conceivable in the next 20 years. We’re still making baby steps, although we’ve made tremendous progress with respect to AI, but we’re a far way away from any kind of singularity.
“I do believe, however, we’ll have a good computer-based question-and-answering system in the next 20 years, which will include advice. In other words, you could ask Google how long you should stay at SXSW, or what restaurant you should eat in tonight.”
Arguably, the advances in robotics and AI on the horizon offer more conceivable benefits than disadvantages. But that alone doesn’t dismiss the concerns being raised, and as a robotized society becomes a more and more apparent eventuality, the need to address such issues will become more pressing as well.