Can Humans be Trusted with Driverless Cars?


A transition period is built into the concept of driverless vehicles: we aren’t going to go from the roads of today to the roads of the future overnight. Traditional vehicles will likely share the road with vehicles of all levels of automation, at least for a while, which means we’ll see varying degrees of driver engagement, too. The question is: will drivers maintain control of, and responsibility for, their vehicles? Or will humans in driverless cars be so eager to relinquish all control that we’ll encounter an era of machine madness on our roads?

Ford Fusion LiDAR testing
Ford researching autonomous vehicle technology at night. (Image: The Auto Channel)

That driverless technology will arrive is almost a foregone conclusion: “In and around Silicon Valley, at least 19 commercial self-driving efforts are underway, ranging from big carmakers like Nissan and Ford and technology giants like Google, Baidu and Apple, to shoestring operations like […],” reports The New York Times. Many companies developing driverless technology, Tesla most notably, are pursuing incremental automation: cars will perform some tasks but not others, and as the technology matures, humans will perform fewer and fewer driving tasks. But in these scenarios, a driver will have to pay attention, ready to take over, until cars are completely autonomous.

If automakers count on a transition period in which they must rely on human drivers to stay alert and engaged even though vehicles are technically capable of driving themselves in most conditions, will it work? Can we be trusted with vehicles that have driverless technology but aren’t ready to take 100 percent of the responsibility 100 percent of the time?

How Much Responsibility Should be on Humans?

Google got a jump on autonomous features, which it began testing in 2010, and decided to experiment with the vigilant-human approach in 2013. Google had employees ride in driverless cars during their commutes, with engineers monitoring all travel remotely. The employees were meant to remain vigilant, but the engineers discovered that humans were not very good at monitoring the machines. In fact, many drivers slept, read the newspaper, ate, and otherwise checked out of the driving process.

After realizing even its own employees weren’t able to remain vigilant in vehicles with autonomous features, Google radically changed its driverless vehicle plans.

Google employees slept, ate, and read the newspaper while trying out driverless cars.

It isn’t that Google’s employees were lazy or irresponsible; rather, engineers realized human drivers might not be able to go from checked-out to engaged fast enough. The ability to snap back to attention, which experts call “situational awareness,” is required for people to act in a split-second crisis, explains The New York Times. That might be too much to expect of driverless car “drivers,” Google reasoned. The self-driving cars Google is now developing won’t require human involvement at all, so no built-in transition period is required. However, the cars will be limited: their maximum speed will be 25 mph, meaning they won’t ever completely replace human-driven cars, but could function as city taxis or serve other roles.

Richard Bush, from U.K.-based Car Keys, told us that the introduction of driverless technology isn’t the first time our attention span behind the wheel has been compromised. “Mobile phones, DVD-compatible touchscreen systems, eating, smoking: all these things have been flagged as potential dangers while driving, yet people continue to do them anyway,” explains Bush. “So, can humans be trusted to always play by the rules? No. But that hasn’t stopped us trudging on with other technologies, and the carelessness of a handful of people should not hinder the future development of what could, in theory, be the answer to safer roads.”

How Much Responsibility Should be on Cars?

So far only Tesla has what can truly be called autonomous features, though other automakers have introduced some remarkable active and passive safety features to the market. Tesla, with its celebrity-level CEO Elon Musk, has never been a timid company. It makes big promises, and often delivers, causing genuine moments of surprise in an industry not known for overnight change.

Though Tesla has seen behavior similar to what Google observed during its transition experiments, the company so far remains undeterred and committed to making fully driverless vehicles a reality. In October 2015, Tesla updated its Model S vehicles with Autopilot features, cautioning that the technology was still experimental and that drivers needed to remain engaged, hands on the wheel, at all times. But, as we’ve seen in the intervening months, drivers aren’t willing to wait, even when they understand the consequences of relinquishing control to technology that isn’t fully formed: “Even though I might have a slight attention lapse at the exact wrong moment,” Steve Wozniak, co-founder of Apple, told The New York Times, “it’s easier to drive this way and not feel as tired.”


Can Cars Make Judgments Like People?

One of the biggest debates currently raging about driverless vehicles is whether they can be expected to make the moral and ethical judgments that driving so often demands. It may be true that human error accounts for 94% of all traffic crashes, but each time we get behind the wheel, we also draw on a lifetime of experience to make snap judgments that can mean life or death for the driver, passengers, or others. Will a machine ever be able to truly think like a person?

Moral Machine, a platform created by MIT’s Media Lab, gathers human perspectives on the moral decisions machines must someday make if driverless technology is ever to be a reality. (You can take the test yourself on the Moral Machine site and compare your answers with those of other users.)

What Causes Humans in Driverless Cars to Fail?

The accidents we’ve seen in Teslas have likely been due to drivers checking out, ceding a dangerous amount of control to the vehicle. Take the driver in Florida who died in May when his Tesla, in Autopilot mode, failed to detect the broad side of a semi truck and hit it at full force. Reports said he was most likely watching a DVD instead of remaining an active part of the driving process. What appears to be happening is that people might not understand the limitations of self-driving features and therefore rely on them too much. Automotive engineers call this “overtrust.”

People relying on autonomous features too much is called 'overtrust.'

Dr. Steven J. Hausman worked as a researcher and senior executive at the National Institutes of Health for 31 years and now consults on emerging technologies, including autonomous vehicles. He explains that the question of whether drivers can (or should) be trusted during a transition period, in which vehicle technology performs many driving tasks but a human must still remain engaged, has already been studied in another transportation sector: aviation.

“With the advent of more and more in-vehicle information systems, automobiles tend to become more like airplanes in terms of the human interface characteristics,” Dr. Hausman explains. He says the biggest concern with partially automated systems is mode confusion: drivers can become confused about whether an event is handled automatically or requires their attention. “Engineers creating such systems must take into account a variety of factors that include driver age and experience, education level (which affects decision-making speed), cognitive ability, the workload of the driver, driver distraction, driver status and how the driver may interpret different messages. It may very well be the case that automobile automation will have to be customized to each driver based on these and other factors.”

If a driver is out of the loop for any amount of time, the risk of losing situational awareness increases, and over time, people might lose driving skills altogether, says Dr. Hausman. In highly automated vehicles, it will be easy for drivers to engage in other tasks, which leads to delayed reaction times. Driving systems, says Dr. Hausman, must therefore be exceptionally reliable “since circumstances may arise on very short notice and without any prior warning. This also means that the vehicle automation system needs to warn the driver if it is reaching its design limits, perhaps on snow-covered roads, so that the driver can once again take control.”
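The safeguard Dr. Hausman describes can be pictured as a simple monitor: the automation tracks how confident it is in current conditions, and when that confidence drops below its design limits, it requests a human takeover with enough lead time for the driver to regain situational awareness. The sketch below is purely illustrative; the class name, confidence threshold, and countdown value are hypothetical and do not reflect any automaker's actual system.

```python
# Hypothetical sketch of a "design limits" monitor for a partially
# automated vehicle. All names and numbers are illustrative.

class AutomationMonitor:
    CONFIDENCE_FLOOR = 0.7   # below this, conditions exceed design limits
    TAKEOVER_LEAD_S = 10.0   # warn early: drivers need time to re-engage

    def __init__(self):
        self.takeover_requested = False

    def update(self, sensor_confidence: float, driver_hands_on: bool) -> str:
        """Return the action the vehicle should take this control cycle."""
        if sensor_confidence < self.CONFIDENCE_FLOOR:
            # Conditions (e.g. a snow-covered road) exceed design limits.
            self.takeover_requested = True
            if driver_hands_on:
                return "hand control to driver"
            # Driver is out of the loop: alert, then fail safe if ignored.
            return f"alert driver; begin {self.TAKEOVER_LEAD_S:.0f}s countdown to safe stop"
        self.takeover_requested = False
        return "continue automated driving"

monitor = AutomationMonitor()
print(monitor.update(sensor_confidence=0.95, driver_hands_on=False))
print(monitor.update(sensor_confidence=0.4, driver_hands_on=True))
```

The point of the sketch is the asymmetry Dr. Hausman highlights: the system cannot simply drop control when it hits its limits; it has to verify the driver is actually re-engaged, and fall back to a safe behavior when they are not.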

In fact, he adds, the Federal Motor Carrier Safety Administration has called for research to build such a system, but it doesn’t yet exist and may take several years to become a reality (even though similar systems are already being tested abroad). “Until these become standard equipment I would therefore be hesitant to agree that drivers are ready for a transition phase.”