Why Autonomous Vehicle Fears Are Science Fiction, Not Science Fact: Part 1

David Rodriguez | April 26, 2017

by David Rodriguez, CMO of GreenRoad

Imagine yourself in this situation: You’re the only passenger in your autonomous vehicle on a dark, rainy night. Suddenly, your vehicle skids. Your car can either take a sharp turn and fall off the cliff with you in it, or it can save you by veering into another car, potentially pushing it off the cliff. Should your car do whatever is needed to save you, or the other unaware driver? What if the other vehicle has more passengers? Should your car choose to protect your life at the cost of others?

If you had an intense gut reaction to this moral predicament, you’ll be relieved to learn those were trick questions. Many people worry about this kind of situation when they think about the autonomous cars of the future, but that worry rests on incomplete information. As engineers and automakers are quick to point out, autonomous vehicles should find themselves in these extreme life-or-death situations far less often than human drivers do. Still, 75 percent of American drivers say they are scared to ride in a self-driving car, even though 63 percent believe roads would be safer if autonomous vehicles were widely adopted.

So where does this deeply rooted fear of self-driving vehicles come from?

Asimov’s Written (and Implied) Laws

Most people’s views of robots have been shaped by science fiction author Isaac Asimov’s “Three Laws of Robotics.” Asimov invoked these laws in a number of stories, perhaps most famously in his “I, Robot” collection, which Hollywood later adapted into a blockbuster movie starring Will Smith.

The laws are as follows:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These seem like good enough general guidelines to keep humans safe, but what they don’t cover is the gray area that would exist in situations like the collision course scenario I mentioned earlier. Both possible outcomes of that scenario lead to a human getting harmed, forcing people to wonder whether there might be an unwritten fourth law. In the event that human injury is inevitable, would a robot have to choose the option that would inflict the least amount of damage possible? Would your car purposefully drive you off a cliff rather than endanger another vehicle containing more humans? It’s a scary question, but it’s also misleading.

To imagine the fourth-law scenario happening, we would have to ignore nearly everything else about the way these vehicles will actually operate. Autonomous vehicles will be highly tuned, highly connected, safety-centric machines programmed to avoid risk at every single moment, continuously estimating and predicting future outcomes.

Autonomous vehicles will have the in-depth knowledge and “training” of top-performing professional drivers, with the added benefit of highly sensitive, even superhuman, sensing and calculation abilities. Just as Alan Turing’s “Turing Test” treated a machine as intelligent if it could pass for human, autonomous vehicles will be programmed to follow a technologically enhanced but fundamentally human-like decision-making process designed to keep life-or-death scenarios from arising in the first place. Functionally speaking, every autonomous vehicle will embody what we might call a “perfect human driving profile,” built from the driving characteristics of many types of professional drivers.
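
To make that last idea a bit more concrete, here is a minimal sketch in Python of how such a composite profile might be blended from the measured habits of several professional drivers. The field names and numbers are invented for illustration; nothing here represents GreenRoad’s actual models.

    from dataclasses import dataclass
    from statistics import mean

    @dataclass
    class DriverProfile:
        max_braking_g: float        # hardest braking the driver uses, in g
        min_following_gap_s: float  # shortest time gap kept to the car ahead
        max_cornering_g: float      # hardest lateral force used while turning

    def blend_profiles(profiles: list[DriverProfile]) -> DriverProfile:
        """Fold several professional drivers into one conservative composite:
        average the force limits, but keep the longest of the minimum gaps."""
        return DriverProfile(
            max_braking_g=mean(p.max_braking_g for p in profiles),
            min_following_gap_s=max(p.min_following_gap_s for p in profiles),
            max_cornering_g=mean(p.max_cornering_g for p in profiles),
        )

    composite = blend_profiles([
        DriverProfile(0.35, 2.0, 0.30),  # long-haul professional
        DriverProfile(0.40, 1.8, 0.35),  # urban delivery driver
        DriverProfile(0.30, 2.4, 0.25),  # coach driver
    ])
    print(composite)

The arithmetic is beside the point; what matters is that the “perfect” profile is an engineered composite of observed human behavior, not a set of abstract moral rules.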

How to Model “Human” Autonomous Drivers

The first step in creating the perfect “human” driving profile is gathering data on how the best human drivers make decisions. Companies like GreenRoad have been tracking, measuring, and learning from the driving behaviors of thousands of professional drivers over the course of billions of miles. Through that research, we’ve identified not only the driving maneuvers that result in safe driving, but also how external factors like weather, traffic, road conditions, and time of day affect driving decisions.
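
As a simple illustration of what one such record might look like (the field names and categories here are made up for the example, not GreenRoad’s actual schema), each maneuver can be stored alongside the conditions under which it happened:

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class DrivingEvent:
        timestamp: datetime
        maneuver: str         # e.g. "braking", "lane_change", "cornering"
        severity: float       # 0.0 (smooth) to 1.0 (harsh)
        speed_kph: float
        weather: str          # e.g. "clear", "rain", "fog"
        traffic: str          # e.g. "free_flow", "congested"
        road_condition: str   # e.g. "dry", "wet", "icy"

    def is_safe(event: DrivingEvent, threshold: float = 0.6) -> bool:
        """Judge a maneuver against a severity threshold, tightening the bar
        when the road surface is compromised."""
        if event.road_condition in ("wet", "icy"):
            threshold *= 0.75
        return event.severity < threshold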

Training a car to drive safely also depends on Advanced Driver Assistance Systems (ADAS), which help prevent accidents by keeping cars at safe distances from one another, protecting the passengers of autonomous vehicles from bad human drivers, and performing other safety-related operations. These systems rely on input from a variety of sensors, including radar, cameras, satellite communications, radio communications, and positioning systems. By leveraging these precise and powerful measurements, autonomous vehicles can become safer than even the best human drivers.
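
To give a feel for the kind of check an ADAS performs, here is a deliberately simplified following-distance test. The reaction time, braking limit, and two-sensor fusion are assumptions chosen for the example, not any vendor’s implementation.

    def stopping_distance_m(speed_mps: float, reaction_time_s: float = 0.1,
                            max_decel_mps2: float = 6.0) -> float:
        """Distance needed to stop: reaction distance plus braking distance."""
        return speed_mps * reaction_time_s + speed_mps ** 2 / (2 * max_decel_mps2)

    def gap_is_safe(radar_range_m: float, camera_range_m: float,
                    ego_speed_mps: float) -> bool:
        """Fuse two range estimates conservatively (trust the shorter one)
        and compare against the stopping distance at the current speed."""
        fused_range = min(radar_range_m, camera_range_m)
        return fused_range > stopping_distance_m(ego_speed_mps)

    # At 25 m/s (about 90 km/h) with roughly 55 m measured to the car ahead:
    print(gap_is_safe(radar_range_m=57.0, camera_range_m=55.0, ego_speed_mps=25.0))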

The challenge is combining these two types of data, historical driver behavior and real-time sensor readings, into a seamless driving profile, using a machine-learning approach that keeps improving over time. The data and technology already exist; it’s up to manufacturers and engineers to access them and normalize them for their local driving laws.
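
One way to picture that combination, as a toy sketch rather than a production learning pipeline, is a profile that starts from historical driver data and is nudged by each new batch of sensor observations:

    class AdaptiveProfile:
        """A stored driving parameter seeded from historical driver data that
        drifts toward what live sensor data says current conditions demand."""

        def __init__(self, baseline_gap_s: float, learning_rate: float = 0.05):
            self.gap_s = baseline_gap_s         # from professional-driver history
            self.learning_rate = learning_rate  # how quickly live data reshapes it

        def update(self, observed_safe_gap_s: float) -> None:
            """Move the profile a small step toward the newly observed safe gap."""
            self.gap_s += self.learning_rate * (observed_safe_gap_s - self.gap_s)

    profile = AdaptiveProfile(baseline_gap_s=2.0)
    for live_gap in (2.4, 2.6, 2.5):   # rainy-day observations call for more room
        profile.update(live_gap)
    print(round(profile.gap_s, 2))     # the stored gap has adapted upward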

In part two of this blog series, we’ll look at a common risk scenario to see how autonomous vehicles might react to avoid collisions entirely, or at least soften their impact, much as professional drivers do.