The 5 Levels of Autonomy

Jan 8, 2018 9:05:00 AM

Many years ago, science fiction writer Isaac Asimov, in his seminal I, Robot, proposed the Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Although part of a work of fiction—and one that predated practical, environment-aware robots by many decades—the Three Laws have had surprising staying power over the years, and inform much of robotics research and development to this day.

The Three Laws obviously do not apply to all “robots” as we know them today. Industrial robots, for example, which are programmed to go through a specific set of repetitive motions, are not aware of their surroundings, and such a robot wouldn’t know the difference if it picked up a car chassis, a beach ball, or a factory supervisor. So the Three Laws really apply only to robots that have a high level of autonomy, or the ability to make decisions on their own, without human input of any kind.

Classifying Autonomy

More recently, SAE International (formerly the Society of Automotive Engineers) proposed a classification scale for autonomous vehicles, recognizing that not all “self-driving” cars are alike: there are different levels of autonomy. To help automotive engineers, governments, and insurers get a better handle on this new technology, SAE defined the levels of automobile autonomy.

  • Level 0: Not autonomous at all; driver has sole control of the vehicle.
  • Level 1: A single function is automated, but does not necessarily use information about the driving environment. A car operating under simple cruise control would qualify as Level 1.
  • Level 2: Acceleration, deceleration and steering are automated, and use sensory input from the environment to make decisions. Modern cars that have cruise control and automated lane maintenance or collision-avoidance braking would fall into this category. The driver is still ultimately responsible for the safe operation of the car.
  • Level 3: At this level, all safety-critical driving functions are automated, but the driver must still take over in an emergency the car can’t handle. (Tesla’s “autopilot” feature is often cited as approaching this level, although Tesla and SAE officially classify it as Level 2 driver assistance.) This is the most controversial level because it requires the human driver to remain alert and focused on the task of driving even though the car is doing most of the work. Humans naturally find this situation more tedious than simply driving the car, and many in the autonomous-vehicle community worry that a driver’s attention will wander from the task at hand, with catastrophic results. Some carmakers are choosing to skip Level 3 entirely and go directly to Level 4.
  • Levels 4 and 5: These are the fully autonomous levels, where the car handles all driving decisions with no input from a human at all. The difference is that Level 4 cars are limited to a specific set of driving scenarios, such as city, suburban and highway driving, whereas Level 5 cars can handle any driving scenario, including off-road operation.

(Yes, that’s actually six levels, not five, but Level 0 doesn’t really count because it involves no autonomy at all.)
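As a rough illustration of how such a scale might be represented in software (a hypothetical sketch, not an official SAE artifact; the names `SAELevel` and `driver_must_supervise` are my own), the six levels can be encoded as an ordered enumeration, which makes questions like “does a human still have to watch the road?” a simple comparison:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE-style driving-automation levels (0-5), as summarized above."""
    NO_AUTOMATION = 0           # driver has sole control of the vehicle
    DRIVER_ASSISTANCE = 1       # one automated function, e.g. simple cruise control
    PARTIAL_AUTOMATION = 2      # automated speed and steering; driver still responsible
    CONDITIONAL_AUTOMATION = 3  # car drives itself, but the human must take over on demand
    HIGH_AUTOMATION = 4         # fully autonomous within limited driving scenarios
    FULL_AUTOMATION = 5         # fully autonomous in any scenario, including off-road

def driver_must_supervise(level: SAELevel) -> bool:
    """Levels 0-2 keep the human responsible for the drive, and Level 3
    still requires an alert, fallback-ready driver; only 4 and 5 do not."""
    return level <= SAELevel.CONDITIONAL_AUTOMATION

print(driver_must_supervise(SAELevel.PARTIAL_AUTOMATION))  # True
print(driver_must_supervise(SAELevel.HIGH_AUTOMATION))     # False
```

Because `IntEnum` values are ordered integers, the contentious boundary the article describes, between Level 3 and Level 4, falls out of a single `<=` comparison.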

Why Just Cars?

It doesn’t take much imagination to extend this concept beyond self-driving cars. Essentially any robot or robot-like system—especially those whose decisions can impact human safety—could benefit from a similar classification system. For example:

  • Other autonomous vehicles - Whether they carry passengers or not, autonomous locomotives, ships and aircraft could be usefully classified by their level of autonomy.
  • Surgical robots - Doctors, hospital personnel, governments, insurers and especially patients have an interest in knowing what level of autonomy a surgical robot has.
  • Military robots - Classifying military robots could help frame the current debate over whether they should be allowed at all. Some thinkers fear that fully autonomous military robots will lead to increased wartime carnage, while others believe that sufficiently advanced autonomous robots could actually decrease casualties because they could make better decisions than human soldiers under highly stressful, emotionally charged battle scenarios.

Asimov’s Three Laws of Robotics are still relevant, but given what we know about robotics today, they should be augmented by a classification system such as the SAE’s levels of automobile autonomy to determine whether and how the laws are applied. By implementing this kind of classification system for all robotic systems, we would have a common language for all stakeholders to use when discussing this technology.

Written by Abdul Dremali

Innovation Lab & Growth Lead: Abdul is currently leading research & development in Artificial Intelligence, Machine Learning and Augmented Reality at the AndPlus Innovation Lab.
