“Soul: spiritual or moral force”
There’s a car called a Soul, but can a motor vehicle, especially one without a human driver or even a human passenger, have a spiritual or moral force? We’re about to find out. Researchers at the Institute of Cognitive Science at the University of Osnabrück, Germany, claim to have developed an algorithm that can mimic a human’s moral decision-making process. Should we be impressed or afraid?
“Human behavior in dilemma situations can be modeled by a rather simple value-of-life-based model that is attributed by the participant to every human, animal, or inanimate object.”
Moral judgment is simple? Study leader Leon Sütfeld seems to think so. In a paper published in Frontiers in Behavioral Neuroscience, he describes how his research team placed real humans in simulated road traffic scenarios using immersive virtual reality. As the subjects were exposed to potentially deadly accident situations involving humans, animals, or objects, their decisions were recorded and fed into statistical models that determined the rules they used to decide who or what to hit or avoid.
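To make that “rather simple” model concrete, here is a minimal sketch of what a value-of-life-based decision rule could look like. Everything in it, the entity classes, the weights, the two-lane setup, is an illustrative assumption rather than the study’s actual code; in the paper, the value-of-life parameters were fitted to participants’ recorded choices, not hand-picked.

```python
# Hypothetical sketch of a "value-of-life-based" choice model.
# The entity classes and weights below are illustrative assumptions,
# NOT values reported by the Sütfeld et al. study.

# Each obstacle class gets a single "value of life" weight. In the study,
# weights like these would be estimated from recorded human decisions.
VALUE_OF_LIFE = {
    "adult": 1.0,      # reference value
    "child": 1.4,      # assumed: children weighted higher
    "dog": 0.4,
    "inanimate": 0.05,
}

def lane_cost(obstacles):
    """Total value of life at risk if the car stays in this lane."""
    return sum(VALUE_OF_LIFE[o] for o in obstacles)

def choose_lane(left, right):
    """Steer toward whichever lane puts less value of life at risk."""
    return "left" if lane_cost(left) < lane_cost(right) else "right"

# Example dilemma: an adult in the left lane, a dog and a trash can in the right.
print(choose_lane(["adult"], ["dog", "inanimate"]))  # -> "right"
```

The unsettling part isn’t the arithmetic, which any programmer could write; it’s that a table of weights like this one is, on the study’s account, a decent model of what we actually do.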
Collecting data and creating rules is the easy part. Now what? Download a soul into a Soul? Professor Peter König, a senior author of the study, advised caution.
“Firstly, we have to decide whether moral values should be included in guidelines for machine behavior. Secondly, if they are, should machines act just like humans?”
Should machines act just like humans? If so, which ones? Should they act like the best of the best or like the average of us all? When they come to a moral or ethical fork in the road, whose values should they use to determine which way to turn? Should they follow a society that places greater value on the elderly or on a child? On men or on women? On one ethnicity over another?
Did Leon Sütfeld really say that reducing moral decisions to an algorithm is easy?
Now that this algorithm is developed, should it be standard on all driverless cars or an option? How much should it cost? When your driverless car swerves to avoid a little old lady and heads toward a wall instead, will you be glad you opted for the super stereo system?
The discussion only starts on the road. As the authors point out, moral algorithms will also be needed for robots working in hospitals, in military operations, and anywhere else autonomous machines will be used.
Humans don’t do such a good job at making moral decisions themselves. Are we really capable of developing algorithms to do the same?
Should we even try?