The Issue Connecting Self-Driving Cars to Philosophers


UPDATE: In a new study, pre-released online on October 12, researchers from MIT, the University of Oregon, and the Toulouse School of Economics delved into the ethical dilemma surrounding autonomous vehicles: How should the vehicles be programmed to respond if they face an unavoidable accident? Should the first priority be to protect the occupants of the vehicle, or should a self-driving car act on pre-programmed utilitarian principles and sacrifice one life to save many? As Quoted notes in the original post, automakers have looked to philosophy for guidance, and now they can look to psychology as well. Rather than treat the question of whether a self-driving car should be programmed to minimize the loss of life (potentially killing the driver to do so) purely as a moral one, the authors of the study asked which decision-making algorithm the public would be most inclined to support.

In a survey of several hundred people, the majority of participants believed that self-driving cars should be programmed with a utilitarian focus, minimizing total deaths. Some respondents even believed a utilitarian approach should be legally mandated for self-driving cars, though over a third doubted automakers would actually implement such an algorithm. Perhaps that's because, while most respondents support the notion of other people buying self-driving cars, they also admit that if confronted with the option of owning a car that would sacrifice its passenger when other lives were at stake, they would decline.

Though self-driving cars on the highway (and at the supermarket and in our neighborhoods) aren’t yet a reality, major car manufacturers and technology companies are devoting time and resources to their development, and great strides have already been made. This summer, a self-driving car made a 3,500-mile cross-country trip, and in July, Google set its self-driving car loose on the streets of Austin, TX. Many cars can already steer, stop, and even park by themselves. When it comes to self-driving cars, then, the question is no longer “if”; it’s “when.”

Before we let the machines take the wheel, lots of details still need ironing out. Most insurance companies don’t yet know how liability would work with self-driving cars (you can let a car drive, but you can’t sue it for damages), and local and national law enforcement must figure out how traffic rules would apply to driverless vehicles. Before any of that, though, a self-driving car planning to take to the open road must be able to react and make ethical decisions like a human driver. But how?

Taking Robots to Oz

Bloomberg Business reports on how, exactly, engineers plan to ensure self-driving cars have both brains and hearts: that is, the ability to make split-second decisions in emergencies on solid moral and ethical ground. The outlet spoke to philosopher Patrick Lin, who runs the Ethics and Emerging Sciences Group at California Polytechnic State University and counsels automakers. Lin said, “This is going to set the tone for all social robots. These are the first truly social robots to move around in society.”

Engineers and automakers are discussing how to build ethical reasoning into self-driving cars.

Bloomberg Business reports that “Auto executives, finding themselves in unfamiliar territory, have enlisted ethicists and philosophers to help them navigate the shades of gray. Ford, General Motors, Audi, Renault and Toyota are all beating a path to Stanford University’s Center for Automotive Research, which is programming cars to make ethical decisions and see what happens.”

Ethics 101 for Cars

Not only will self-driving cars be more convenient (breakfast and the paper on the way to work, anyone?), but the hope is that they will also make our roadways safer. Eliminating driver error (distracted driving, impaired driving, inexperience) should help reduce the more than 30,000 U.S. highway deaths that occur each year. But when dangerous situations arise, as they always will, a self-driving vehicle will need to, as Bloomberg Business says, “choose the lesser of two evils — swerve onto a crowded sidewalk to avoid being rear-ended by a speeding truck or stay put and place the driver in mortal danger.” This is where philosophers come in.

Chris Gerdes, a Stanford professor researching automated driving, recently gave a poignant example of how self-driving cars must be able to interpret complex scenarios and make split-second decisions: “a child suddenly dashing into the road, forcing the self-driving car to choose between hitting the child or swerving into an oncoming van.” “As we see this with human eyes, one of these obstacles has a lot more value than the other,” Gerdes said. “What is the car’s responsibility? If [it] would avoid the child, if it would save the child’s life, could we injure the occupant of the vehicle? These are very tough decisions that those that design control algorithms for automated vehicles face every day.”
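To make the trade-off Gerdes describes a bit more concrete, here is a minimal, purely illustrative sketch in Python of how a planner might score competing maneuvers by expected harm. Every maneuver, probability, and weight below is a hypothetical assumption for the sake of illustration; none of it comes from an actual automaker’s or Stanford’s control software.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    p_harm_child: float     # hypothetical estimate: chance of seriously harming the child
    p_harm_van: float       # hypothetical estimate: chance of harming the oncoming van's occupants
    p_harm_occupant: float  # hypothetical estimate: chance of harming the car's own occupant

def expected_harm(m: Maneuver, occupant_weight: float = 1.0) -> float:
    # occupant_weight = 1.0 is the strictly utilitarian setting: the car's own
    # occupant counts no more and no less than anyone else on the road.
    return m.p_harm_child + m.p_harm_van + occupant_weight * m.p_harm_occupant

options = [
    Maneuver("brake hard, stay in lane", p_harm_child=0.7, p_harm_van=0.0, p_harm_occupant=0.05),
    Maneuver("swerve into oncoming lane", p_harm_child=0.0, p_harm_van=0.4, p_harm_occupant=0.4),
]

best = min(options, key=expected_harm)
print("chosen maneuver:", best.name)

The contested knob in a sketch like this is occupant_weight: raise it above 1.0 and the car favors its own passenger; hold it at 1.0 and you get the utilitarian behavior that most respondents in the MIT-led survey endorsed, at least for other people’s cars.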

The Biggest Question

MIT Technology Review reports that Bryant Walker Smith, an assistant professor at the University of South Carolina who studies the legal and social implications of self-driving vehicles, takes a wider view of the ethical dilemmas these vehicles pose. Walker Smith believes that with so many fatal traffic accidents resulting from human error, holding self-driving technology back might be the most unethical choice of all. He says, “The biggest ethical question is how quickly we move. We have a technology that potentially could save a lot of people, but is going to be imperfect and is going to kill.”