A Tesla sedan is shown after it struck a parked Laguna Beach Police Department vehicle in Laguna Beach, California, in this May 29, 2018, handout photo. Credit: Laguna Beach Police Department/Handout via REUTERS

This article is republished from The Conversation.

It’s hard to miss the flashing lights of fire engines, ambulances and police cars ahead of you as you’re driving down the road. But in at least 11 cases in the past three and a half years, Tesla’s Autopilot advanced driver-assistance system did just that. This led to 11 accidents in which Teslas crashed into emergency vehicles or other vehicles at those scenes, resulting in 17 injuries and one death.

The National Highway Traffic Safety Administration (NHTSA) has launched an investigation into Tesla’s Autopilot system in response to the crashes. The incidents took place between January 2018 and July 2021 in Arizona, California, Connecticut, Florida, Indiana, Massachusetts, Michigan, North Carolina and Texas. The probe covers 765,000 Tesla cars – virtually every car the company has made in the last seven years. It’s also not the first time the federal government has investigated Tesla’s Autopilot.

As a researcher who studies autonomous vehicles, I believe the investigation will put pressure on Tesla to reevaluate the technologies the company uses in Autopilot and could influence the future of driver-assistance systems and autonomous vehicles.

How Tesla’s Autopilot works

Tesla’s Autopilot uses cameras, radar and ultrasonic sensors to support two major features: Traffic-Aware Cruise Control and Autosteer.

Traffic-Aware Cruise Control, also known as adaptive cruise control, maintains a safe distance between the car and other vehicles that are driving ahead of it. This technology primarily uses cameras in conjunction with artificial intelligence algorithms to detect surrounding objects such as vehicles, pedestrians and cyclists, and estimate their distances. Autosteer uses cameras to detect clearly marked lines on the road to keep the vehicle within its lane.
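To make that concrete, here is a minimal sketch of the gap-keeping logic behind adaptive cruise control. Every name, constant and formula below is an illustrative assumption, not Tesla’s actual implementation:

    # Illustrative gap-keeping controller for adaptive cruise control.
    # All names and constants are assumptions for clarity, not Tesla's code.

    def target_speed(own_speed_mps, lead_distance_m, lead_speed_mps,
                     set_speed_mps, time_gap_s=2.0):
        """Speed the controller should aim for, given the vehicle ahead."""
        desired_gap_m = own_speed_mps * time_gap_s  # keep a fixed time headway
        if lead_distance_m >= desired_gap_m:
            return set_speed_mps  # road ahead is clear: cruise at the set speed
        # Too close: match the lead vehicle, minus a correction proportional
        # to how far inside the desired gap we are.
        gap_error_m = desired_gap_m - lead_distance_m
        correction_mps = gap_error_m / time_gap_s
        return max(0.0, min(set_speed_mps, lead_speed_mps - correction_mps))

    # Closing fast on a stopped vehicle: 25 m/s own speed, car 20 m ahead at 0 m/s
    print(target_speed(25.0, 20.0, 0.0, 30.0))  # 0.0 -- brake to a stop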

In addition to its Autopilot capabilities, Tesla has been offering what it calls “full self-driving” features that include autopark and auto lane change. Since it first offered Autopilot and other self-driving features, Tesla has consistently warned users that these technologies require active driver supervision and do not make the vehicle autonomous.

Tesla is beefing up the AI technology that underpins Autopilot. The company announced on Aug. 19 that it is building a supercomputer using custom chips. The supercomputer will help train Tesla’s AI system to recognize objects seen in video feeds collected by cameras in the company’s cars.

Autopilot does not equal autonomous

Advanced driver-assistance systems have been available on a wide range of vehicles for many decades. The Society of Automotive Engineers divides the degree of a vehicle’s automation into six levels, from Level 0, with no automated driving features, to Level 5, which represents fully autonomous driving with no need for human intervention.

Within these six levels of autonomy, there is a clear divide between Level 2 and Level 3. In principle, at Levels 0, 1 and 2, a human driver is the primary controller of the vehicle, with some help from driver-assistance systems. At Levels 3, 4 and 5, the vehicle’s AI components and related driver-assistance technologies are the primary controller. For example, Waymo’s self-driving taxis, which operate in the Phoenix area, are Level 4: they operate without human drivers, but only under certain weather and traffic conditions.
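One way to summarize the divide is as a lookup from level to primary controller. The descriptions below paraphrase the SAE taxonomy; the code itself is only an illustrative sketch:

    # SAE J3016 driving-automation levels, paraphrased.
    # Levels 0-2: a human drives and must supervise. Levels 3-5: the system drives.
    SAE_LEVELS = {
        0: ("No automation", "human"),
        1: ("Driver assistance, e.g. adaptive cruise control alone", "human"),
        2: ("Partial automation: steering plus speed, supervised", "human"),
        3: ("Conditional automation", "system"),
        4: ("High automation within limited conditions", "system"),
        5: ("Full automation under all conditions", "system"),
    }

    def primary_controller(level):
        return SAE_LEVELS[level][1]

    assert primary_controller(2) == "human"   # where Tesla Autopilot sits
    assert primary_controller(4) == "system"  # where Waymo's Phoenix taxis sit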

Tesla Autopilot is considered a Level 2 system, and hence the primary controller of the vehicle should be a human driver. This provides a partial explanation for the incidents cited by the federal investigation. Though Tesla says it expects drivers to be alert at all times when using the Autopilot features, some drivers treat the Autopilot as having autonomous driving capability with little or no need for human monitoring or intervention. This discrepancy between Tesla’s instructions and driver behavior seems to be a factor in the incidents under investigation.

Another possible factor is how Tesla ensures that drivers are paying attention. Earlier versions of Autopilot did little to monitor driver attention and engagement while the system was on. The company instead relied on requiring drivers to periodically nudge the steering wheel, which can be done without watching the road. Tesla recently announced that it has begun using internal cameras to monitor drivers’ attention and alert drivers when they are inattentive.
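A short sketch shows why torque sensing is a weak proxy for attention: the system knows only that the wheel was nudged recently, not where the driver is looking. The class and timeout below are assumptions for illustration, not Tesla’s actual parameters:

    import time

    # Torque-based attention check, sketched: the system records only that the
    # wheel was nudged recently, not whether the driver is watching the road.
    # NUDGE_TIMEOUT_S is a made-up value, not Tesla's real threshold.
    NUDGE_TIMEOUT_S = 30.0

    class TorqueMonitor:
        def __init__(self):
            self.last_nudge = time.monotonic()

        def register_wheel_torque(self):
            # A distracted driver can still satisfy this check by
            # periodically tugging the wheel without looking up.
            self.last_nudge = time.monotonic()

        def driver_presumed_attentive(self):
            return time.monotonic() - self.last_nudge < NUDGE_TIMEOUT_S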

An equally important factor contributing to Tesla’s vehicle crashes is the company’s choice of sensor technologies. Tesla has consistently avoided the use of lidar. In simple terms, lidar is like radar but with lasers instead of radio waves: it can precisely detect objects and estimate their distances. Virtually all major companies working on autonomous vehicles, including Waymo, Cruise, Volvo, Mercedes, Ford and GM, treat lidar as an essential technology for enabling automated vehicles to perceive their environments.
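The distance estimate at the heart of lidar is simple time-of-flight arithmetic: a laser pulse travels to the object and back, so the distance is half the round trip at the speed of light. A worked example, with an illustrative pulse timing:

    # Time-of-flight distance: a laser pulse travels out and back, so
    # distance = speed of light * round-trip time / 2.
    SPEED_OF_LIGHT_M_PER_S = 299_792_458

    def lidar_distance_m(round_trip_time_s):
        return SPEED_OF_LIGHT_M_PER_S * round_trip_time_s / 2

    print(lidar_distance_m(200e-9))  # a ~200 ns echo means an object ~30 m away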

By relying on cameras, Tesla’s Autopilot is prone to failures in challenging lighting conditions, such as glare and darkness. In announcing the Tesla investigation, the NHTSA reported that most of the incidents occurred after dark, at scenes with flashing emergency vehicle lights, flares or other lights. Lidar, in contrast, works under any lighting conditions and can “see” in the dark.

Fallout from the investigation

The preliminary evaluation will determine whether the NHTSA should proceed with an engineering analysis, which could lead to a recall. The investigation could eventually lead to changes in future versions of Tesla’s Autopilot and its other self-driving systems. The investigation might also indirectly have a broader impact on the deployment of future autonomous vehicles. In particular, the investigation may reinforce the need for lidar.

Although reports in May 2021 indicated that Tesla was testing lidar sensors, it’s not clear whether the company was quietly considering the technology or merely using it to validate its existing sensor systems. Tesla CEO Elon Musk called lidar “a fool’s errand” in 2019, saying it’s expensive and unnecessary.

However, just as Tesla is revisiting systems that monitor driver attention, the NHTSA investigation could push the company to consider adding lidar or similar technologies to future vehicles.

Hayder Radha is a professor of electrical and computer engineering at Michigan State University.


3 Comments

  1. Just one death from auto-piloted vehicles? There are many more than that every year in the City of Saint Paul alone.

    It’s funny how we humans are. Every one of us knows someone who has no concerns about a 10-mile drive to the airport, but needs a couple of drinks to soothe the nerves before a flight. If you know anything at all about risk, you know which of the two, a car trip or an airplane ride, is much riskier.

    But yeah, let’s keep killing 1.3 MILLION people every year by letting humans drive.

    1. The one death was due to collisions with emergency vehicles only. The key stat is deaths per vehicle mile. Considering the minuscule number of cars, how does this compare? But you knew that, you just want to bloviate.

    2. There have been many more deaths, 4 US deaths in the month of July alone; unfortunately only one of those was the Tesla driver. The one death the article refers to is one involving emergency vehicles: vehicles stopped in the road with emergency lights flashing. I’ve spent the last thirty years working with programmers, and it seems to me that absolutely the easiest thing to program for, in a “self driving” car, is NOT to run into things. Presumably there is input from some type of “radar” or whatever; the program would be: “if my speed > speed of object in front of me, hit the F’ing brakes.” A very simple script to write. If they can’t even get that done correctly, how are they going to navigate city streets with pedestrians, bikes, delivery vehicles and the rest? Keep these things off the road.
