Proposed artificial intelligence legislation would drive innovation out of Texas

Commentary

The Texas Responsible Artificial Intelligence Governance Act introduces sweeping obligations for developers and businesses that depend on AI.

The Texas Responsible Artificial Intelligence Governance Act introduces sweeping obligations for developers, deployers, and businesses that rely on artificial intelligence (AI). If passed, the bill would likely discourage innovation and investment in the state. Texas, historically considered a pro-business and pro-innovation state, risks adopting a regulatory framework that could hinder its competitiveness in the AI sector.

The Texas Responsible Artificial Intelligence Governance Act (TRAIGA) was introduced late last year by state Rep. Giovanni Capriglione (R-98) as House Bill 1709. Despite being a Republican-led initiative, the bill may also be gaining traction among Texas Democrats due to its similarities to California’s Senate Bill 1047 and alignment with the European Union AI Act—both regulatory frameworks broadly supported by Democratic lawmakers.

Additionally, TRAIGA has secured backing from the Future of Privacy Forum’s Multistakeholder AI Policymaker Working Group, which includes both Democratic and Republican lawmakers and advocates for stronger AI regulation. The bill is currently under House committee review and, if passed, would take effect in September 2025.

TRAIGA’s broad scope is one of its most concerning aspects. In today’s work environment, AI systems often play a supporting role in consequential decisions related to employment, finance, healthcare, housing, and insurance. However, under the bill, any AI tool used in these sectors—regardless of its actual influence on decision-making—could be subjected to costly compliance requirements.

For example, an AI-powered resume screening tool that assists human resources departments by ranking job applicants based on relevant skills and experience would fall under TRAIGA’s high-risk category. Employers using such a tool would be required to conduct impact assessments, ensure compliance with algorithmic fairness standards, and provide detailed reports on how AI is used in the hiring process. These obligations impose significant financial and administrative burdens on businesses that rely on AI for these tasks. Many companies, particularly startups and mid-sized firms, may determine that the compliance costs outweigh the benefits of AI adoption. Consequently, TRAIGA risks discouraging AI implementation altogether, reducing efficiency and innovation in hiring or pushing businesses to relocate to states with more favorable regulatory environments.

It would also tip the AI market in favor of large incumbents, a problem Vice President J.D. Vance recently warned against at the Artificial Intelligence Action Summit, held in early February in France.

“To restrict its development now will not only unfairly benefit incumbents in the space, but it would also mean paralyzing one of the most promising technologies we have seen in generations,” Vance said.  

His speech was in line with the idea that instead of imposing rigid rules, policymakers should focus on fostering an environment where innovators can responsibly develop AI applications that benefit society.

TRAIGA also exemplifies the kind of regulation Texas should avoid: it introduces several vague and overbroad prohibitions on specific uses of AI deemed “unacceptable risks.” The bill would require systems that perform emotion recognition, capture biometric data, or categorize consumers based on sensitive attributes to obtain explicit consent. This restriction could severely limit AI’s potential in areas like fraud prevention, personalized health care, and adaptive learning technologies, eliminating valuable applications that enhance security, efficiency, and user experience. For example, biometric authentication technologies, which many financial institutions and tech companies rely on to prevent fraud and safeguard user accounts and transactions, could fall under TRAIGA’s consent requirements. A blanket consent mandate would not only undermine security but also create an uneven playing field in which Texas-based businesses are restricted in ways their competitors in other states or countries are not.

A key question is whether such restrictions should apply universally or be tailored to different use cases. In some contexts, requiring explicit consent makes sense, such as in healthcare data collection, where personal privacy concerns are paramount. However, a blanket restriction on biometric authentication that ignores its role in security applications could have unintended consequences. Mandating explicit consent in every instance could introduce unnecessary friction, making these systems less effective and inadvertently increasing security risks. Additionally, AI-driven authentication tools are increasingly used in public safety applications, such as airport security and border control, where obtaining prior consent from every individual is impractical. There are complex tradeoffs around biometrics and consent that this Texas bill simply brushes aside.

Another problematic provision of HB 1709 is the introduction of a limited private right of action. While the Texas attorney general would be the primary enforcer of the law, private litigants would be able to sue over alleged violations involving banned AI systems. This opens the door to opportunistic lawsuits, further increasing compliance costs and legal risks for AI developers and deployers. Given the complexity of AI decision-making, many companies may face litigation even when they have taken reasonable steps to mitigate bias and ensure compliance.

The experience of Illinois’ Biometric Information Privacy Act (BIPA) serves as a cautionary tale. Under BIPA, businesses have faced a wave of lawsuits, often over technical or procedural violations rather than actual harm to consumers. TRAIGA’s private right of action could create a similar legal minefield in Texas, where companies are targeted for minor or unintended infractions. The fear of litigation will further deter AI innovation, making Texas a less attractive destination for tech companies.

Furthermore, TRAIGA’s approach to algorithmic discrimination is redundant given existing federal and state anti-discrimination laws. The bill defines algorithmic discrimination as any condition in which a deployed AI system unlawfully discriminates against a protected classification in violation of state or federal law. But AI systems used in hiring, lending, and other consequential decisions are already subject to anti-bias rules under laws such as the Equal Credit Opportunity Act and Title VII of the Civil Rights Act. Instead of adding another layer of regulation, policymakers should focus on enforcing these laws and encouraging industry best practices for fairness and transparency.

Texas House Bill 1709’s exemptions for small businesses and experimental AI sandboxes are well-intentioned but ultimately insufficient. While companies that meet the Small Business Administration’s definition of a small business would be exempt from TRAIGA’s obligations, this carveout does little to protect mid-sized firms and startups that aspire to scale. Similarly, the experimental sandbox program offers only temporary relief, meaning that companies developing cutting-edge AI would eventually face the same regulatory constraints.

TRAIGA’s regulatory framework also poses a threat to Texas’ long-term economic prospects. The state has positioned itself as a leader in AI and emerging technologies, attracting major investments from tech companies and projects like the $500 billion Stargate Project, which aims to build a network of advanced data centers across the United States to power and develop cutting-edge AI. Restrictive regulations could drive these investments elsewhere.

TRAIGA represents a significant departure from the state’s claimed pro-innovation stance. If enacted, House Bill 1709 would create barriers that deter AI development, increase compliance costs, and open the floodgates for litigation. TRAIGA, as currently written, leans too heavily toward regulation at the expense of technological advancement. If Texas wants to remain a leader in AI, policymakers must reconsider the bill’s provisions and ensure that regulation does not become a roadblock to progress.