Commentary

The Critical Role of On-Road Testing for Automated Vehicle Development

Automated vehicle experts explain the importance of public road testing.

The National Highway Traffic Safety Administration (NHTSA) recently announced the launch of its Automated Vehicles Transparency and Engagement for Safe Testing (AV TEST) Initiative, a voluntary program to collect and publish information about AV testing, including where the tests are occurring and what types of vehicles are involved. The announcement was accompanied by three online panel discussions on various aspects of AV TEST.

Unlike some peer countries, the U.S. lacks national automated vehicle testing regulations. State motor vehicle codes are generally silent on who—or what—is driving the vehicles subject to state laws and regulations. As a result, automated vehicle testing on public roads was legal in nearly all U.S. states prior to the development of any specific state automated vehicle policies.

NHTSA held the final panel on June 18, which focused on how and why developers are proceeding with on-road automated vehicle testing. Together, the panelists made a strong case for continuing the U.S.'s largely permissive on-road testing environment, which in recent years has faced criticism from some activists seeking comprehensive new regulations before on-road testing of automated driving systems is permitted.

Chris Urmson, the cofounder and CEO of Aurora and former director of the Google Self-Driving Car Project, kicked off the discussion by noting that on-road testing is needed to validate prototype automated driving systems, as well as to assure the public that these systems are safe. The general research and development approach is to gather real-world traffic data from sensors on human-driven test vehicles, run those data over and over through simulators to amplify tricky situations and troubling behavior, and then send the software back out on public roads to validate the automated driving system. This feedback loop, said Urmson, makes ongoing development of prototype automated systems dramatically more efficient when compared to relying only on simulator and closed-course testing.
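
For readers who want to picture that loop concretely, the sketch below shows one way the collect-simulate-validate cycle might be structured. It is purely illustrative: every name and threshold in it is a placeholder assumed for the example, not any developer's actual pipeline.

```python
# Illustrative sketch of the feedback loop described above; all names are
# hypothetical placeholders, not real AV developer tooling.
import random
from dataclasses import dataclass


@dataclass
class Scenario:
    description: str
    difficulty: float  # 0.0 (routine) .. 1.0 (edge case)


def collect_on_road_data(n: int) -> list:
    """Stand-in for sensor logs gathered from human-driven test vehicles."""
    return [Scenario(f"log-{i}", random.random()) for i in range(n)]


def amplify_in_simulation(scenario: Scenario, variants: int = 100) -> list:
    """Replay a tricky real-world scenario many times with perturbations."""
    return [Scenario(f"{scenario.description}-v{j}", scenario.difficulty)
            for j in range(variants)]


def development_cycle() -> None:
    logs = collect_on_road_data(1000)
    # Mine the recorded data for rare or difficult situations.
    edge_cases = [s for s in logs if s.difficulty > 0.9]
    for case in edge_cases:
        for variant in amplify_in_simulation(case):
            pass  # retrain or tune the driving policy against each variant
    # The updated software then returns to public roads, where real-world
    # performance validates (or refutes) what the simulator predicted.


development_cycle()
```
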

On the question of whether on-road testing of prototype automated vehicles is unprecedented, George Nichols, chair of SAE International’s On-Road Automated Driving Committee, explained that it is not accurate to say manufacturers have little experience with on-road testing. “There’s a quite extensive history of manufacturers testing new vehicles on public roads,” he said, because “there is nothing like the real world itself.” It is not uncommon, he added, to see camouflaged future model year test vehicles driving on the public roads of southeast Michigan near automaker research and production facilities.

Nichols did offer a caveat: on-road testing of automated driving systems differs from traditional on-road testing, in which core operating functions like steering, throttle, and braking were not themselves under test but were controlled by trained human test drivers.

This caveat from Nichols was put into context by Sandeep Neema, program manager for Assured Autonomy at the Defense Advanced Research Projects Agency (DARPA), who explained that the machine learning technologies underlying automated driving systems pose unique safety and performance assurance challenges. Solutions must be found to achieve the desired level of confidence in these systems, and Neema echoed Urmson’s point that resource constraints make testing and validating without a real-world driving component extremely difficult.

Chris Gerdes, director of the Center for Automotive Research at Stanford University, provided background on his automated driving research initiative, which seeks to push the safe performance limits of automated driving systems by learning from racecar drivers on closed tracks. As he put it, “The best human drivers are pretty amazing.”

His team hopes to move to testing these algorithms on public roads in the future, agreeing with other panelists that real-world testing is critical to validating simulator models. Invoking statistician George Box’s famous quip that “all models are wrong, but some are useful,” Gerdes said that without validation through real-world public road testing, they won’t know how wrong their models are.

The unanimity from these automated vehicle technical experts on the importance of public road testing stands in stark contrast to some of the claims being made by professional auto safety activists, such as those employed by the Center for Auto Safety founded by Ralph Nader. These activists argue that public road testing of automated vehicle prototypes should be prohibited in the absence of detailed national safety and performance regulations. The problem with this precautionary approach is that it ignores how those safety and performance regulations are created in the first place.

As a matter of enduring agency practice, formally codified by the National Technology Transfer and Advancement Act of 1995 and subsequent OMB Circular A-119 implementation guidance, it is general federal policy for regulatory agencies to incorporate voluntary consensus standards from private standards bodies in lieu of writing government-unique standards. The needed private technical standards and standardized test procedures for automated driving systems are by and large still under development, and many will likely involve validation of certain models for safety assurance purposes. Only once these voluntary consensus standards are published will they be ripe for regulatory incorporation.

Ironically, this call by some activists for the federal government to impose detailed testing regulations before allowing developers to continue on-road automated vehicle testing not only contradicts longstanding federal policy and practice but would also short-circuit the efficient development of these needed technical standards and delay the realization of potential safety gains. Policymakers would be wise to reject these misguided and overly precautionary appeals to safety and allow developers to continue efficiently validating their prototype automated driving systems.