A study has analyzed the different moral stances people take toward self-driving cars by presenting them with unlikely but possible scenarios in which the vehicle is involved in an accident and must choose between saving a pedestrian's life or the driver's.

A team of researchers conducted six online surveys within the United States, asking people how they believe autonomous vehicles should behave in different scenarios, as reported by the New York Times.


The team found that people generally thought the cars should be programmed to act for the greater good, for example saving six pedestrians rather than one driver, according to the study published Thursday in Science Magazine.

However, when asked whether they would buy a self-driving car programmed to save other people instead of themselves, people answered negatively. The subjects said they would rather purchase a car that would protect them at all costs.

The participants were also asked whether they approved or disapproved of enforcing utilitarian regulations for self-driving vehicles. Most said they disapproved, and that such regulation would diminish their willingness to buy a regulated car.

In conclusion, the psychologists and computer scientists involved in the study said that mandating utilitarian algorithms could actually increase car-related casualties, because people would postpone adopting a safer technology.

It was previously estimated that autonomous vehicles, or similar technology, could prevent up to 90 percent of accidents by eliminating human error, according to Iyad Rahwan, a professor at MIT’s Media Lab, as reported by the Washington Post.

The other 10 percent of accidents, according to Rahwan, stem from less controllable factors, such as severe weather conditions or mechanical failures that not even the most sophisticated computer can avoid.

Initiating a debate

The moral questions raised by these scenarios need to be discussed in a way that takes into account what most people think, although that is not always clear, according to some experts commenting on the results of the study.

“What is interesting about this paper is that it not only measures an aspect of public opinion but really highlights a deep inconsistency in ordinary people’s thinking about it,” said Harvard psychologist Joshua Greene in an interview with the Washington Post. “To me what is valuable here is drawing out that inconsistency, and saying, ‘Hey folks, we have to figure out, what are our values here, what trade-offs are we willing or unwilling to make.’”

According to Rahwan, whether or not a programmer explicitly tells a car what to do, the car will do something, and that choice will be implicit in its algorithm. “If we do not have a discussion on this, then that assumption will be completely arbitrary,” he added.

Source: Science Magazine