Self-driving cars may reduce accidents, but should they make moral decisions?
Human drivers can be poor decision makers: nearly 1.3 million roadway deaths occur worldwide each year, and 93 percent of U.S. accidents are caused by human error, according to a Center for Internet and Society report.
Soon, autonomous vehicles will be commonplace on America’s highways. These self-driving cars will have to make split-second decisions to avoid collisions and injuries, yet whether they are safer than human-driven vehicles is still up for debate. One report estimates that driverless vehicles could reduce road deaths by as much as 90 percent. But how will autonomous vehicles go about resolving moral dilemmas, and should they even be expected to?
The Moral Machine
To answer this question, a group of MIT researchers created a website called Moral Machine, inviting online visitors to click through a variety of scenarios and decide what the car should do. Spare the young over the old? Choose humans over animals? Preserve the lives of many rather than just a few? The team’s findings were published in October 2018 in the journal Nature.
The researchers grouped the results into three clusters according to location and found the following (a simplified tallying sketch appears after the list):
- Respondents from Latin America, France, Hungary, and the Czech Republic (the Southern cluster) showed a stronger preference for sparing the young over the old than did respondents from Asian and Middle Eastern nations.
- The preference for sparing humans over pets was stronger in the U.S., Canada, Kenya, and most of Europe than it was in the Southern cluster.
- Preferences for sparing pedestrians over passengers and for sparing the lawful over the unlawful were strong across all the geographic clusters.
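To make the idea of cluster-level preferences concrete, here is a minimal, hypothetical sketch of how survey responses of this kind might be tallied into per-cluster preference rates. The cluster labels, scenario fields, and sample data are invented for illustration; the published study relied on a more elaborate statistical analysis than a simple tally.

```python
# Hypothetical sketch: tallying Moral Machine-style responses into
# per-cluster preference rates. Cluster names, scenario fields, and the
# sample data below are invented for illustration only.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Response:
    cluster: str        # e.g., "Western", "Eastern", "Southern" (illustrative labels)
    dimension: str      # which trade-off the scenario tested, e.g., "young_vs_old"
    spared_first: bool  # True if the respondent spared the first-listed group

def preference_rates(responses):
    """Return, per (cluster, dimension), the share of respondents who spared the first-listed group."""
    counts = defaultdict(lambda: [0, 0])  # (cluster, dimension) -> [spared, total]
    for r in responses:
        key = (r.cluster, r.dimension)
        counts[key][0] += r.spared_first
        counts[key][1] += 1
    return {key: spared / total for key, (spared, total) in counts.items()}

# Toy usage with made-up responses:
sample = [
    Response("Southern", "young_vs_old", True),
    Response("Southern", "young_vs_old", True),
    Response("Eastern", "young_vs_old", False),
    Response("Western", "humans_vs_pets", True),
]
print(preference_rates(sample))
```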
But should cars be making these kinds of decisions anyway? The paper’s authors wrote:
“Before we allow our cars to make ethical decisions, we need to have a global conversation to express our preferences to the companies that will design moral algorithms, and to the policymakers that will regulate them.”
Safety First, Ethical Decisions Later?
In 2017, Germany’s Ethics Commission on Automated Driving drafted initial guidelines for self-driving vehicles, including a prohibition on moral decision-making by a car’s operating system. According to the report, in an unavoidable accident, any preference for certain individuals based on age, gender, or physical or mental characteristics should be completely off-limits. Although general programming to reduce the number of injuries could be justified, the parties involved in generating mobility risks must not be allowed to sacrifice uninvolved parties, the commission concluded.
According to Daniel Sperling, author of a 2018 book on autonomous and shared vehicles, moral dilemmas are the last thing the public should worry about when it comes to self-driving cars. “The most important problem is just making them safe,” he recently told National Public Radio. “They’re going to be much safer than human drivers: They don’t drink, they don’t smoke, they don’t sleep, they aren’t distracted.” For Sperling, the most pressing question about autonomous vehicles is, “How safe do they need to be before we let them on our roads?”