
Commentary

California’s Senate Bill 1047 is a troubling development for AI governance

The bill could potentially criminalize the development and use of open-source AI models, which commonly involve adapting and enhancing existing models to create new applications.

As state legislators across the United States move to create regulatory frameworks for artificial intelligence (AI), California is pushing a particularly aggressive bill that could subject AI developers to a wide range of civil penalties. Although it is unlikely to prevent harmful use of AI, the regulatory burdens and compliance costs the bill introduces could discourage small companies and individual developers from pursuing groundbreaking AI projects, the kind of work that drives advances in healthcare, education, and environmental protection.

Senate Bill 1047, also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, seeks to regulate the development and deployment of advanced AI models in California. The bill mandates that developers of significant AI models adhere to strict safety protocols, including maintaining the capability to shut down a model if necessary, and certify their compliance annually. Noncompliance can result in severe penalties, including mandatory deletion of the AI model and substantial fines.

The bill, introduced by State Sen. Scott Wiener (D-San Francisco), establishes penalties that escalate from 10 percent of the cost of training an AI model for a first violation to 30 percent for each subsequent breach of the bill’s provisions. For a startup that spent, say, $5 million training a model, that would mean a $500,000 fine for a first violation and $1.5 million for every violation after it. Fines of this scale could devastate startups and small companies, which often operate with limited budgets and resources.

The bill also grants state regulators the authority to mandate the deletion of AI models, erasing years of research and development, substantial financial investments, and potentially valuable technological advancements. For small businesses, the threat of model deletion could mean the end of the company; more broadly, it could discourage developers from exploring innovative AI applications and push the field toward a far more cautious, less creative development environment.

The bill’s safety certification and compliance mechanism could also lead to criminal perjury charges if officials believe developers misled them about a model’s safety. In practice, this leaves authorities to decide, largely at their own discretion, whether an organization’s mistakes were honest, with charges carrying up to four years of jail time. The threat of criminal liability may deter developers from being bold and taking risks when building models, for fear that honest mistakes or unforeseen outcomes could result in severe personal consequences.

The bill aims to prevent the harmful use of AI, such as creating autonomous weapons or launching cyberattacks on critical infrastructure that could result in significant damage. However, the problem with introducing such high penalties is that it is nearly impossible to predict and mitigate every potential misuse of an AI model. Typically, developers create general-purpose tools without foresight into all possible future applications. The responsibility for harmful actions should lie with the individuals who intentionally misuse the AI, not the developers who created the tool. 

Moreover, holding AI developers responsible for harmful uses of their technology overlooks factors beyond their control. For example, an AI designed for autonomous drone navigation could be maliciously repurposed by a terrorist group to deploy weaponized drones, leading to severe casualties and destruction. Similarly, a hacker might exploit an AI system developed for network optimization to find and attack vulnerabilities in critical infrastructure, causing widespread disruptions and data breaches. These scenarios show how technology built by developers in good faith can be exploited by bad actors, and they underscore the need for a nuanced approach to liability that weighs the intent and actions of the user rather than placing the entire burden on the developers.

Senate Bill 1047 is meant to apply only to extremely powerful AI models, but our analysis concludes that both startups and large corporations would be subject to regulation under the bill. While the bill’s text covers models at or above a computing-power threshold currently accessible only to major corporations with significant resources, it also, rather vaguely, extends to models with similar “capabilities.” That language opens the door to covering almost all future AI models, because the pace of technological advancement guarantees that tomorrow’s computers will routinely deliver today’s state-of-the-art computing power more cheaply and efficiently. The resulting uncertainty about whether a given model meets the benchmark and threshold criteria creates a legal grey area, potentially holding back innovation by making R&D investment riskier and the path for startups less attractive.

The bill could potentially criminalize the development and use of open-source AI models, which commonly involve adapting and enhancing existing models to create new applications. For example, developers build on openly available models such as Meta’s Llama to create advanced chatbots, virtual assistants, and translation tools. These applications can automate customer service, assist in language learning, and provide real-time translation services. While it is common for users and creators of flawed tools to bear legal responsibility for any resulting harm, the proposed law extends this liability to developers who modify open-source AI models, making them legally responsible for any harm caused by their AI systems even when their modifications are built on someone else’s original model. This could uniquely impact the open-source AI community, where a culture of shared innovation and collaboration drives progress: the threat of legal repercussions may deter developers from participating in open-source projects, hindering the collaborative efforts crucial for advancing AI technology.

An alternative to internal certification is a nine-step process with the Frontier Model Division, a new regulatory body established under the bill. Among these steps is a requirement to establish a mechanism to quickly shut down the model, along with all of its copies and derivatives. Once a model’s weights have been distributed and copied, however, the original developer has no technical means of disabling those copies, so this requirement is workable only for local models or tightly controlled deployments, making it a significant hurdle for developers working with distributed and open-source models.

Another demanding step requires adhering, before training of the model even begins, to all existing standards and regulations set by the National Institute of Standards and Technology, the State of California, academia, nonprofit-sector experts, and standard-setting organizations. While this might be a reasonable measure for a product already on the market, it makes little sense for a model that has not yet been trained. The high cost and complexity of compliance could discourage smaller entities from AI innovation, further consolidating power among a few large corporations.

While Senate Bill 1047’s intent of ensuring the safe use of AI is commendable, its current form poses significant challenges to innovation and the open-source AI community. A more balanced approach is essential, one that protects society from potential harm while fostering an environment conducive to technological advancement. Policymakers must work closely with AI developers and experts to create regulations that are both effective and supportive of innovation.