
Commentary

Preliminary evidence on the safety of automated vehicles is overwhelmingly positive

As robotaxis expand, early results indicate that Waymo’s automated vehicles are almost certainly safer than typical human drivers.

The California Public Utilities Commission (CPUC) voted Aug. 10 to approve an expansion of the robotaxi services offered by automated vehicle developers Cruise and Waymo in San Francisco. The companies have not only acted on CPUC’s approval; they’ve announced new robotaxi markets throughout the United States, and it appears increasingly likely that large metro areas in the Sun Belt will enjoy automated vehicle ride-hailing services in the next few years. 

Some skeptics have raised questions about automated vehicle (AV) safety, and recent incidents involving Cruise in San Francisco—including a collision involving a firetruck that resulted in minor injuries to a Cruise passenger—led California regulators to scale back the company’s deployment pending an investigation. But should the growing number of Americans with access to robotaxi service be worried about their safety?

In addition to San Francisco and Phoenix, where it first deployed its driverless ride-hailing service in 2020, Waymo has announced plans to expand to Los Angeles in October and Austin in early 2024.

Cruise, which operates commercial ride-hailing in Phoenix and Austin in addition to San Francisco, has recently announced or begun hiring for commercial and testing positions in Atlanta, Charlotte, Dallas, Houston, Nashville, Raleigh, San Diego, Seattle, and Washington, D.C. Cruise also aims to operate its purpose-built Origin robotaxi in snowy winter climates within two years, a major milestone in the pursuit of a “turn-key” automated ride-hail platform that can be deployed quickly in any metro area.

After a few years of pessimism that followed several years of overly optimistic expectations, it appears the dawn of publicly available automated vehicles has finally arrived. Anti-car urbanists such as writer David Zipper, who last year questioned the feasibility of AVs given the supposed “limitations of their machine learning,” are now sounding the alarm that “self-driving cars could make car trips easier than ever—which is exactly the problem.”

As University of Sydney Professor of Transport David Levinson wryly noted in his “Transportist” newsletter, “I am very pleased the discourse has moved from Autonomous Vehicles are Impossible and Vaporware to Autonomous Vehicles are now a Problem.”

The urbanists most opposed to automated vehicles fear them because they believe AVs will ultimately work too well. According to progressive advocacy organization Transportation for America, for example, AVs threaten to “torpedo any hopes of a more sustainable transportation system” by drawing customers away from their preferred (and slower and less comfortable) urban transportation modes such as walking, bicycling, and fixed-route mass transit.

In San Francisco, which, like a number of other major cities, has embraced this style of slow-city urbanism, officials have alleged various safety problems stemming from the small number of AVs operating on city streets.

The safety of people both inside and outside automated vehicles is critically important. Safety is also a core motivation for developing AV technology in the first place: AVs can’t drive drunk, drowsy, or distracted, and they are programmed to obey the law, including posted speed limits. Commercial success also depends on automated vehicles being demonstrably safer than human drivers, in order to satisfy both regulators and customers. It is important to remember that decades of research from the National Highway Traffic Safety Administration (NHTSA) has consistently identified human driver error and misbehavior as critical reasons for more than 90% of crashes. But how well-founded are these AV safety concerns, especially when compared to the status quo of human drivers?

Many traditional safety evaluation methods, such as statistical analysis of police-reported crash data, are inadequate due to the novelty of automated vehicles and the relative rarity of the most severe incidents (i.e., fatal crashes). A 2020 RAND Corporation study suggested “leading measures” are needed to fill the gap in conventional “lagging measures” to make the AV safety case in the interim. The Automated Vehicle Safety Consortium, an initiative of the public-private SAE Industry Technologies Consortia that includes most major AV companies, is developing best practices to further this standardized AV safety measurement goal.

To this end, Cruise, Waymo, and other developers have been building their safety cases using a variety of techniques and data sources. In its first million driverless miles, equivalent to approximately 80 years of typical human driving, Waymo claims it observed 20 contact events with other vehicles and roadway objects, most of which occurred while the Waymo vehicle was stationary or parking. Only two collisions were severe enough to be police-reportable, and there were no reported injuries.
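For readers who want to check the arithmetic behind that comparison, a minimal sketch is below. The annual-mileage figure is an assumption implied by the “approximately 80 years” equivalence (roughly in line with U.S. averages), not a number taken from Waymo’s report.

```python
# Back-of-the-envelope check of the mileage equivalence and contact-event rate.
# TYPICAL_MILES_PER_YEAR is an assumed figure implied by the ~80-year comparison,
# not a value from Waymo's safety report.

DRIVERLESS_MILES = 1_000_000      # Waymo's reported driverless mileage
CONTACT_EVENTS = 20               # reported contact events in those miles
POLICE_REPORTABLE = 2             # collisions severe enough to be police-reportable
TYPICAL_MILES_PER_YEAR = 12_500   # assumed annual mileage for a typical driver

years_equivalent = DRIVERLESS_MILES / TYPICAL_MILES_PER_YEAR
contacts_per_million_miles = CONTACT_EVENTS / (DRIVERLESS_MILES / 1_000_000)

print(f"~{years_equivalent:.0f} years of typical human driving")
print(f"{contacts_per_million_miles:.0f} contact events per million driverless miles")
print(f"{POLICE_REPORTABLE} police-reportable collisions per million driverless miles")
```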

In its first million driverless miles, Cruise claims a 54% reduction in collisions “benchmarked against human drivers in a comparable driving environment” and subsequently released details about its human driving benchmark methodology.

Waymo has been particularly transparent in how it is developing and demonstrating its safety case, with its safety research team led by prominent auto safety scholar Trent Victor publishing regularly in the peer-reviewed literature and making pre-print studies available for public review on its website. 

Despite this progress, formal industry-wide standardization—let alone regulatory incorporation of these future standards—remains years away. Building public trust requires automated vehicle developers and policymakers to use the limited tools at their disposal. The good news is that preliminary evidence on the relative safety of AVs is overwhelmingly positive.

In his “Understanding AI” newsletter, journalist and computer scientist Timothy B. Lee looked at the often-hyperbolic safety concerns expressed by San Francisco activists and officials and found them lacking. After reviewing collision reports submitted to the California Department of Motor Vehicles by Cruise and Waymo, as well as the companies’ claims that their AVs crash less than half as often as a typical human driver, Lee concluded:

The big question for policymakers is whether to allow Waymo and Cruise to continue and even expand their services. This should be an easy call with respect to Waymo, which seems to be safer than a human driver already. The faster Waymo scales up, the more crashes can be prevented. I think Cruise’s tech is probably safer than a human driver too, but it’s a closer call. I could imagine changing my mind as more data comes in in the coming months.

Bolstering this case is new research conducted by the Swiss Reinsurance Company (SwissRe) in partnership with Waymo. Using real-world claims data, with a human-driver baseline drawn from insured drivers residing in the same ZIP codes in which Waymo operates, SwissRe and Waymo found that Waymo AVs in testing operations (i.e., accompanied by human supervisors behind the wheel ready to intervene) saw a 92% reduction in bodily injury claim frequency and a 95% reduction in property damage claim frequency when compared to human drivers. For rider-only operations (i.e., without a human supervisor behind the wheel), bodily injury claim frequency was reduced by 100% and property damage claim frequency by 76% compared to human drivers. Both testing and rider-only operations had significantly lower claim frequencies than when human drivers were manually operating the same Waymo vehicles.

Even with wide confidence intervals due to the comparatively small sample size obtained from Waymo’s AV operations, the results indicate that Waymo’s automated vehicles are almost certainly safer than typical human drivers. And combining the testing operations and rider-only datasets substantially narrows the confidence intervals, with results indicating a 93% reduction in bodily injury and property damage claim frequencies relative to the human driver baseline.
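To illustrate why a small sample yields wide confidence intervals and why pooling the testing and rider-only data narrows them, here is a minimal Python sketch using an exact Poisson interval for a claim rate. Every count, mileage, and baseline rate in it is a hypothetical placeholder chosen for illustration, not a figure from the SwissRe and Waymo study.

```python
# Illustrative only: how claim-frequency reductions and their confidence
# intervals behave as exposure grows. All counts, mileages, and the baseline
# rate are hypothetical placeholders, not SwissRe or Waymo figures.
from scipy.stats import chi2

def poisson_rate_ci(events: int, exposure_miles: float, alpha: float = 0.05):
    """Exact (Garwood) confidence interval for a Poisson rate per mile."""
    lower = 0.0 if events == 0 else chi2.ppf(alpha / 2, 2 * events) / 2 / exposure_miles
    upper = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2 / exposure_miles
    return lower, upper

human_baseline = 5.0e-6  # assumed human-driver claims per mile

samples = {
    "small AV sample":  poisson_rate_ci(events=2, exposure_miles=3_000_000),
    "pooled AV sample": poisson_rate_ci(events=5, exposure_miles=10_000_000),
}

for label, (lo, hi) in samples.items():
    # A lower claim rate means a larger reduction relative to the baseline.
    print(f"{label}: estimated reduction between "
          f"{(1 - hi / human_baseline) * 100:.0f}% and {(1 - lo / human_baseline) * 100:.0f}%")
```

In this toy example the pooled sample’s interval is noticeably tighter, which is the same effect the combined testing and rider-only datasets have in the SwissRe analysis.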

Analysis of this sort from the insurance industry is extremely valuable. Insurer claims data are more reliable than police-report data because of greater reporting frequency, consistency, and scope (especially with respect to crash-related injuries that may not be reported at the time of the crash). The NHTSA estimated in 2015 that approximately 24% of injury-causing crashes and 60% of property damage-only crashes go unreported to police in the U.S., making apples-to-apples comparisons difficult between AVs that report the most minor fender-benders and human drivers who routinely fail to report crashes even when they’ve been physically injured.
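A rough sketch of what that under-reporting implies, assuming the NHTSA shares apply uniformly: police-report counts would need to be scaled up by roughly 1.3x for injury crashes and 2.5x for property-damage-only crashes before being compared with AV logs that capture every contact. The crash counts below are hypothetical.

```python
# Illustrative adjustment of police-reported human-driver crash counts using
# the NHTSA under-reporting estimates cited above. Reported counts below are
# hypothetical placeholders.

UNREPORTED_INJURY_SHARE = 0.24  # ~24% of injury-causing crashes go unreported
UNREPORTED_PDO_SHARE = 0.60     # ~60% of property-damage-only crashes go unreported

def estimated_true_count(reported: float, unreported_share: float) -> float:
    """Scale a police-reported count up to an estimated true count."""
    return reported / (1 - unreported_share)

reported_injury = 1_000  # hypothetical police-reported injury crashes
reported_pdo = 5_000     # hypothetical police-reported property-damage-only crashes

print(f"Estimated true injury crashes: {estimated_true_count(reported_injury, UNREPORTED_INJURY_SHARE):,.0f}")
print(f"Estimated true PDO crashes:    {estimated_true_count(reported_pdo, UNREPORTED_PDO_SHARE):,.0f}")
```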

As commercial AV operations expand in the U.S., their traffic safety implications will be under a microscope. Fortunately, consumers wishing to take a ride in a robotaxi can be confident in knowing that the growing body of safety evidence strongly suggests the robots are already better drivers than us mere humans.

Policymakers should avoid the undue precautionary restrictions on automated vehicle testing and operations on public roads sought by a small group of activists, restrictions that would short-circuit the real-world learning that is delivering road safety improvements. Instead, policymakers should collaborate with these companies to establish information-sharing practices, develop first-responder interaction plans and training, and integrate robotaxis into existing ride-hailing regulatory frameworks.