
Commentary

California’s AI law works by staying narrow

The law takes a narrow, transparency-first approach to regulating advanced “frontier” AI models, creating room for experimentation, while requiring timely disclosures that give the state the data it needs to address risks as they emerge.

California Gov. Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act into law in late September. The law takes a narrow, transparency-first approach to regulating advanced “frontier” artificial intelligence (AI) models, creating room for experimentation and innovation, while requiring timely disclosures that give the state the data it needs to address risks as they emerge. 

This new law is already a better first step than last year’s heavy-handed—and ultimately vetoed—proposal, Senate Bill 1047. The value of the new law, Senate Bill 53, however, will depend on its execution and whether California continues to update its definition of “frontier” to reflect the growing capabilities of firms entering the market. 

Senate Bill 53 defines “frontier foundation models” as models trained with more than 10^26 floating-point operations (FLOPs), an enormous amount of computing power, and imposes heavier obligations on larger firms with more than $500 million in annual revenue. 
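
To make that scope concrete, here is a minimal sketch of the two-part test, assuming a straightforward reading of the thresholds described above; the function name, labels, and structure are illustrative, not language from the statute.

```python
# Minimal sketch of SB 53's two-part scope test as described above.
# The thresholds come from the bill as summarized here; the function
# and its labels are illustrative, not statutory language.

FRONTIER_COMPUTE_FLOPS = 1e26        # training-compute threshold
LARGE_DEVELOPER_REVENUE_USD = 500e6  # annual-revenue threshold

def sb53_scope(training_flops: float, annual_revenue_usd: float) -> str:
    """Classify a model/developer pair under the simplified scope test."""
    if training_flops <= FRONTIER_COMPUTE_FLOPS:
        return "out of scope"
    if annual_revenue_usd > LARGE_DEVELOPER_REVENUE_USD:
        return "frontier model, large developer (heavier obligations)"
    return "frontier model, smaller developer (baseline obligations)"

print(sb53_scope(3e26, 2e9))    # frontier model, large developer
print(sb53_scope(3e26, 50e6))   # frontier model, smaller developer
print(sb53_scope(5e25, 2e9))    # out of scope
```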

Among its major provisions, the law requires large AI developers to publish a framework explaining their safety standards and risk-assessment procedures. Before deployment, a developer must also post a public transparency report, and large developers must additionally disclose their risk assessments and the extent to which third-party evaluators were involved in assessing those risks. Developers are required to report critical safety incidents to the state’s Office of Emergency Services (OES), and starting in 2027, the OES will release anonymized summaries of those reports. 

By choosing disclosure and incident reporting rather than rigid technical requirements or pre-deployment approvals, SB 53 leaves space for experimentation—building rules around demonstrated risks instead of hypothetical harms. California’s law also aligns with existing national and international safety standards, rather than creating its own arbitrary standards, which helps maintain consistency across jurisdictions. Because the AI field still lacks agreed-upon standards on dangerous behavior, the law’s framework and reporting provisions are intended to produce the information policymakers need to refine their laws and craft more responsive regulations in the future. 

Concerns with SB 53

Despite the law’s strengths, the definition of a “frontier” model still leaves room for improvement. For now, the 10^26-FLOP compute threshold and the $500 million revenue cutoff for large developers create a clear and narrow scope. Former Google CEO Eric Schmidt is among those who recommended the 10^26-FLOP threshold. But over time, a static threshold can drift away from the capability it was meant to capture.

History has shown that algorithmic efficiency often doubles roughly every 16 months, which means the law’s threshold would need updating again and again. If the threshold stays the same, it will miss new models that are just as powerful but trained with less compute, while still flagging older, less efficient ones. Whether the California Department of Technology (CDT), which SB 53 charges with recommending changes to that threshold annually, can persuade the legislature to act on those recommendations remains to be seen.
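
A back-of-the-envelope sketch, taking that 16-month doubling figure as an assumption for illustration, shows how quickly a fixed compute line can fall out of step with capability.

```python
# Back-of-the-envelope illustration of threshold drift, assuming the
# 16-month efficiency-doubling rate cited above (an empirical estimate,
# not a figure from the statute).

THRESHOLD_FLOPS = 1e26   # SB 53's fixed compute threshold
DOUBLING_MONTHS = 16     # assumed algorithmic-efficiency doubling period

def compute_for_same_capability(months_from_now: float) -> float:
    """Compute needed in the future to match today's 1e26-FLOP capability."""
    return THRESHOLD_FLOPS / 2 ** (months_from_now / DOUBLING_MONTHS)

for years in (1, 2, 4):
    flops = compute_for_same_capability(12 * years)
    print(f"in {years} year(s): ~{flops:.2e} FLOPs for threshold-level capability")
# After about four years (three doublings), roughly 1.25e25 FLOPs could
# deliver what takes 1e26 today -- well below the statutory line.
```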

Another concern with SB 53 is that the reporting obligations, though well-intentioned, may become a mere administrative formality, with companies producing data that checks the box without improving anyone’s understanding of real issues. The law requires large developers to file quarterly summaries of their internal catastrophic-risk assessments, even when nothing has changed. Unless the OES analyzes and shares the information it collects in ways that genuinely improve regulators’ understanding of risk, the filings could turn into bureaucratic sludge that buries insight into the true risks. 

Looking beyond California: State-based AI best practices in lieu of a federal standard

A flexible scope would also help keep state rules consistent until there is a federal law. Right now, however, the states point in different directions: New York’s Responsible AI Safety and Education (RAISE) Act (A 6953), for example, also covers models trained with more than 10^26 FLOPs, but goes further to include models with very high training costs (about $100 million) and even smaller models if building them costs at least $5 million. Michigan’s House Bill 4668 skips the compute threshold altogether and simply covers any entity that spent at least $100 million in the past year and at least $5 million on any single model. 

Looking ahead, if five or 10 more states adopt their own definitions, this emerging state patchwork will only grow more complicated and costly to comply with. The practical solution could be to keep the definition of “frontier” aligned across states by following the same national and international standards, which would spare developers from navigating a dozen different playbooks.

California Senate Bill 53, even with all its flaws, may serve as that model. But the real test of SB 53 will be the value of the information it produces from transparency reports and assessments. If those reports reveal meaningful patterns in model behavior and help the state more effectively respond to risks, California could set an example for others to follow. But if those reporting requirements turn into routine filings and formal checklists, the California experiment could show the limits of transparency laws, potentially pushing legislators toward heavier tools.