Is there a moral dilemma for self-driving cars? As the time nears when autonomous cars may fully enter the marketplace, ethical questions about their programming could affect both public safety and the public's willingness to adopt them. In 2015, 4.5 million people were seriously injured and almost 40,000 were killed in traffic accidents, and a large share of the accidents that occur every year are due to human error. The promise of autonomous cars is that removing the potential for human error will drastically cut injury and fatality rates by preventing accidents. A recent study, however, highlights a moral dilemma that arises when an autonomous car must choose between protecting the safety of its occupants and protecting that of pedestrians.
A question of the public good versus self-sacrifice: The study
Researchers in the U.S. and France set out to explore an ethical dilemma that could arise in programming autonomous cars: specifically, situations in which a car could act either to preserve the lives of its passengers or to preserve the lives of pedestrians. The researchers surveyed 2,000 participants, a majority of whom agreed that programming cars to save the greatest number of people, even at the expense of the passengers, was a good idea. However, when presented with the idea of actually purchasing a vehicle programmed with such a utilitarian goal, a majority said they would instead want a car programmed to protect them and their families, regardless of how many other lives might be lost.