A newly signed Colorado law regulating artificial intelligence imposes a series of onerous rules on so-called high-risk AI systems. It is one of the most sweeping AI regulatory measures passed by any state legislature. Unfortunately for Coloradans, it will likely deter AI investment in the state, reduce AI-related employment, and slow the deployment of AI systems in Colorado, all without addressing or solving any documented problem with AI applications.
On May 17, Colorado Gov. Jared Polis signed Senate Bill 24-205, the Consumer Protections for Artificial Intelligence Act, into law. The law’s main provision imposes a duty on those who create or substantially modify artificial intelligence systems (whom the law calls “developers”) and on the businesses and governments that use those systems (“deployers”) to identify and correct any statistical discrimination produced by their high-risk AI systems.
Statistical discrimination is a pattern of differing outcomes across groups, absent any specific discriminatory conduct. It is not unique to AI; it has a long history in employment and housing. For example, if an AI hiring tool rejected women more frequently than men because of résumé gaps tied to family obligations, that pattern alone could qualify as discrimination under the law, yet it is not clear whether the deployer would be liable. Notably, the bill’s sponsors have not documented a single case of AI-based statistical discrimination in Colorado.
The law defines an AI system as “high risk” if it “has a material legal or similarly significant effect on the provision or denial to any consumer of or the cost or terms of” a job, an educational opportunity, or another service such as housing or health care. This definition is far too vague: the law’s text offers no guidance on how large a role AI must play in a consequential decision for the law to apply. For example, a system that uses AI merely to filter out incomplete job applications might trigger liability. Uncertainty about which AI systems are covered adds legal risk and potential financial cost for every business that uses AI in its day-to-day operations.
Another major concern: Senate Bill 24-205’s definition of algorithmic discrimination departs substantially from current anti-discrimination law. In most Colorado discrimination cases, plaintiffs must prove intentional discriminatory conduct. SB 24-205 instead uses a “disparate impact” standard, under which a statistical pattern of inequity, without any specific conduct, is enough to qualify as legal discrimination. This change of standard adds another layer of legal risk for anyone developing or deploying AI in Colorado.
Polis singled out this provision for reform in his signing statement:
“Laws that seek to prevent discrimination generally focus on prohibiting intentional discriminatory conduct. Notably, this bill deviates from that practice by regulating the results of AI system use, regardless of intent, and I encourage the legislature to reexamine this concept as the law is finalized before it takes effect in 2026.”
The state legislature should revise the law, returning to a definition of discrimination that bans specific, intentional conduct rather than mere statistical correlations. As it stands, the legislation is a solution in search of a problem: there have been no documented cases of AI-based discrimination in Colorado.
Vague and novel non-discrimination rules aren’t the law’s only problem. It also imposes significant reporting requirements on developers and deployers, though it exempts deployers with fewer than 50 employees from some of them. Developers must give deployers and the public documentation detailing a system’s risks, impacts, data summaries, and mitigation plans, and must list any other AI systems they have created or substantially modified. Deployers, in turn, must conduct impact assessments and create a risk management plan in case discrimination occurs. These plans must follow the National Institute of Standards and Technology’s Artificial Intelligence Risk Management Framework or a similar standard.
These impact assessments contain a wealth of information. Deployers must disclose the purpose and intended benefits of any high-risk AI system, along with any known risks of discrimination and the plans to mitigate them. They must also disclose the categories of data the system uses and the metrics used to evaluate its output. Nor is this a one-time requirement: each time a developer or deployer “substantially modifies” the system, the assessment must be updated within 90 days.
In the final reporting step, deployers must notify consumers any time a high-risk AI system makes a consequential decision affecting them. The notification must include a wealth of information about the tool, such as its purpose, the decision it made, an opt-out provision, and links to further information about the tool. If the decision adversely affects the consumer, the consumer has the right to an explanation from the deployer of how and why the decision was made, along with an opportunity to correct any inaccurate data the tool relied on.
Consumers also have the right to appeal the decision for human review and to receive information about that review. While this may sound good for consumers in theory, in practice it will slow decision-making and discourage the deployment of AI systems.
Early studies of AI show promising gains in the efficiency, effectiveness, and productivity of employees who use systems like ChatGPT in their work. One study found that a 1% increase in AI deployment was associated with a 14% increase in worker productivity. Another found that customer service professionals using AI successfully resolved 14% more customer issues than those without it.
Slowing the deployment of AI would thus harm the very consumers the legislation is meant to protect, reducing worker productivity and slowing decisions that determine access to housing, employment, and government services.
Enforcement of the law’s vague and onerous compliance requirements is another key worry, and once again, SB 24-205 provides little in the way of specifics. The law gives the attorney general broad enforcement and rulemaking power, making it risky for any Colorado business to deploy AI. In an enforcement action, the law places the burden of proof on the developer or deployer to show that they complied with the reporting requirements. This turns normal civil litigation on its head: ordinarily, the state must prove that a violation of the law occurred and seek a remedy based on those facts. It is far more difficult to prove that a violation did not occur.
Colorado legislators should rethink their approach to regulating AI. Instead of imposing stringent rules that unnecessarily discourage growth in AI, the legislature should pass legislation that prevents real harm and protects consumers while ensuring innovation and entrepreneurship in AI can continue in Colorado.