Montana Senate Bill 67 would stall automated vehicle progress

Testimony

Montana needs a workable regulatory framework for vehicles equipped with automated driving systems. However, Senate Bill 67 has several problems. 

A version of the following testimony was submitted to the Montana State House of Representatives Committee on Transportation on March 24, 2025.

Montana needs a workable regulatory framework for vehicles equipped with automated driving systems. However, Senate Bill 67 is deficient in several key respects. 

Dual rulemaking authorities 

Sections 5 and 6 create dual rulemaking authorities for the Department of Transportation and the Department of Justice. This is somewhat reminiscent of California’s approach to autonomous vehicle regulation, in which the state gradually adopted its current bifurcated regulatory framework beginning in 2012. While there was an internal logic to California’s policy choices, those decisions delayed the operation of driverless vehicles equipped with automated driving systems on California public roadways by several years. This result was unforeseen by both the legislative sponsors and many supporters in the industry at the time.  

In addition, California regulators have still not incorporated heavy-duty autonomous vehicles (AVs) into the state’s regulatory framework after many years of work. In contrast to California’s complex, restrictive approach, more than 20 states have adopted AV regulatory frameworks based on successful models developed in Arizona, Florida, and Texas that efficiently authorize the safe operation of driverless vehicles. 

Dangerous road conditions 

In Section 5, it is unclear what is meant by “demonstrated to be capable of operating safely during dangerous road conditions.” Importantly, this raises the question of how an autonomous commercial vehicle would be authorized to operate in such conditions. No standards body or government anywhere in the world has produced consensus technical standards or standardized test procedures on this topic. 

Current autonomous vehicle industry practice with respect to operating in dangerous road conditions is simply not to operate. In technical terms, severe weather is outside the automated driving systems’ operational design domains. With respect to roadway-specific hazards, sensors such as lidar, radar, and cameras provide superior object detection when compared to human eyes. Response times by onboard computers are also much more rapid than human decision-making and actuation. The enhanced safety offered by automated driving systems has already been demonstrated, such as in research conducted by developer Waymo in partnership with global reinsurance giant Swiss Re. Their 2024 study analyzed 25.3 million fully autonomous miles driven by Waymo alongside 500,000 insurance claims and over 200 billion miles of driving exposure. Waymo/Swiss Re found that, when compared to human drivers, Waymo’s automated driving system produced an 88% reduction in property damage claims and a 92% reduction in bodily injury claims.  

First responder interactions 

While Section 6 authorizes rulemaking by the Department of Justice, which includes both motor vehicle registration and highway patrol, it is silent on the specifics. One expected element that is omitted relates to first responder interactions with autonomous vehicles on roadways. Standard language on autonomous vehicle law enforcement interaction plans has already been adopted in numerous state codes. This language was developed in partnership with law enforcement agencies in Arizona, California, and elsewhere. Consensus standardization of both interaction protocols and personnel training is critically important to ensure that first responder interactions with disabled autonomous vehicles on roadways are safe. 

Conformity with consensus technical standards 

Section 3’s definitions of “automated driving systems” deviate substantially from SAE International’s Recommended Practice J3016. J3016 is the global consensus standard on the definitions of driving automation levels. It has been widely adopted by governments, industry, and technical bodies around the world, including the U.S. federal government and most states.  

For instance, Section 3 refers to technology that performs at Levels 1-5 of automation collectively as “automated driving systems.” This is incorrect. In J3016, Levels 1-5 are collectively known as “driving automation systems.” Only the subset of driving automation systems at Levels 3-5 are considered “automated driving systems.”  

Section 3 draws its critical distinction between Levels 1-3 and Levels 4-5. The J3016 consensus definitions instead draw the line between Levels 1-2 and Levels 3-5: driver assistance features at Level 1 and “partial automation” at Level 2 are both incapable of performing the entire dynamic driving task, while Level 3, according to J3016, is an automated driving system capable of performing the entire dynamic driving task.  

The crucial difference between Level 3 and Level 4-5 automated driving systems lies in what happens when the system experiences a failure. At Level 3, a fallback-ready user (i.e., a driver seated in the driver seat of the vehicle) must be prepared, after a request to intervene, to take manual control of the vehicle and either assume responsibility for the dynamic driving task or achieve a “minimal risk condition” (e.g., exiting traffic, pulling to a stop on the side of the road, and engaging flashing hazard lamps). In contrast, per J3016, Level 4-5 systems must be capable of automatically achieving a minimal risk condition without the direction of a fallback-ready user. 

Risk-based regulatory scrutiny 

Building on the definitions contained in Section 3, Section 4 improperly groups Level 3 automated driving systems with Level 1-2 driver assistance and partially automated driving automation systems. In contrast, J3016 groups Level 3 with Levels 4 and 5 due to their distinct performance capabilities.  

Recall that Level 3 systems are autonomous systems capable of performing the entire dynamic driving task within their operational design domains; the distinction between Level 3 and higher levels of automation is the mechanism, human or automatic, by which a minimal risk condition is achieved. It is important to understand that Level 1 systems are increasingly standard in late-model vehicles available for purchase by consumers today. For example, Level 1 systems include adaptive cruise control, a driver assistance feature first introduced more than 25 years ago (Mercedes Distronic in 1999).  

It is widely known in the engineering community that Level 3 systems pose unique and heightened risks compared to all other levels of driving automation by requiring a fallback-ready user to either assume responsibility for the dynamic driving task or achieve a minimal risk condition. Simulator and naturalistic driving studies have found that it can take up to 40 seconds for a fallback-ready user to regain situational awareness and stabilize steering. This duration is far too long to avoid imminent roadway hazards in many situations.  

Due to serious public safety concerns about the readiness of fallback-ready operators, most driving automation developers have declined to introduce Level 3 automated driving systems on public roadways. The sole example is Mercedes’ Drive Pilot, which operates only within a very narrow operational design domain: preapproved freeway lanes with no active work zones, congested traffic at speeds under 40 mph, daylight hours with clear weather, and an in-cabin camera that must detect an operator seated in the driver seat. Yet Section 4 would subject Level 3 systems to less scrutiny than the much safer Level 4 systems in successful commercial operation today throughout the United States. This reflects the absence of a risk-based approach in the proposed regulatory framework.