Commentary

Nevada’s ban on AI therapists highlights regulation based on fear rather than analysis

This legislative approach could stifle innovation, prevent change and improvement in products and services, and harm the residents of Nevada.

Nevada’s Assembly Bill 406 demonstrates why state-based artificial intelligence (AI) regulations often restrict new AI applications without considering all the consequences. The bill, signed into law by Gov. Joe Lombardo last June, restricts the use of AI for mental healthcare and could prematurely deny residents access to a new form of safe and effective treatment.

Although Nevada’s law was initially framed as a narrow ban on AI counseling in public schools, AB 406 actually contains sweeping restrictions on AI behavioral health technologies. The law amended three chapters of the Nevada Revised Statutes (NRS 391, NRS 433, and NRS 629) to prohibit AI from performing any behavioral health functions reserved for licensed professionals, such as diagnosing patients or providing therapy. Violations can trigger civil penalties of up to $15,000 or professional discipline.

Researchers and mental health experts continue to debate the value of AI therapy, but AI-driven mental health tools are advancing rapidly, and scientific journals and public officials alike are exploring how AI can be leveraged to expand access to treatment.

For example, in a recent randomized clinical trial of an AI chatbot conducted by Dartmouth researchers, participants reported significant reductions in symptoms and rated their relational closeness with the chatbot as comparable to what patients report with human therapists. The study was published in NEJM AI, a journal from the New England Journal of Medicine group.

In April, the State University of New York’s Downstate Health Sciences University announced plans to use a taxpayer-funded grant to explore the use of AI to prevent and diagnose mental health issues.

However, the public testimony in the lead-up to AB 406’s passage and signing didn’t reflect this diverse debate; the hearings were relatively one-sided. For example, at the May 7 hearing, one of four public hearings on the bill, multiple representatives of the Association of Social Workers testified about the importance of licensed mental health professionals. They worried about AI apps making unfounded claims about their ability to treat mental health disorders, but notable technology trade associations were absent, and no one called in remotely to oppose the bill.

My analysis of the public hearings shows that the bill passed without any participation from the innovators or scientists working on novel forms of automated mental healthcare.

The most generous reading of the bill’s process may be that AI researchers and companies, despite their big budgets and plenty of lobbyists and experts, simply failed to offer a counterargument because it is so challenging to track and engage with all of the AI-related legislation across the country. The public’s skyrocketing use of AI has driven a dramatic increase in such legislation: hundreds of AI-focused bills, and possibly more, were introduced in 2025 alone.

As similar laws are introduced in other states, researchers and other groups will need to do what they did not do in Nevada: show lawmakers how these AI services can improve individual and public health, and how guardrails can be implemented without completely stifling research and innovation.

One of the few voices of skepticism on the Nevada bill before it passed was state Sen. Angela D. Taylor (D-15), chairwoman of the Senate Committee on Education. Responding to the Association of Social Workers representatives, she noted that AI is advancing quickly and could offer valuable mental health capabilities within six months, while legislators might not take up the issue again for two years (hearing timestamp around 1:59:52 p.m.).

During the same hearing, Tom Clark, representing the Nevada Association of School Boards, noted that a federal regulator could certify an AI therapist as safe. He told the committee that he could talk to the bill’s sponsor about an amendment allowing Nevada residents to use federally recognized behavioral health technology. Indeed, the Dartmouth team mentioned above is continuing clinical trials of its AI chatbot, and the positive preliminary results could one day lead to an FDA-approved therapy.

In response, the committee relayed that the bill’s sponsor considered Clark’s proposal “not friendly.” Without much discussion or explanation, the committee deferred to the sponsor rather than weigh whether something like an FDA-approved therapy bot should be allowed in Nevada, and the proposed exception for FDA-approved products died at the final public hearing.

With this law, Nevada has banned nearly all uses of an innovative approach to behavioral health that could soon greatly expand access to mental health services for those who need them. Lawmakers focused on possible harms, many of which could be addressed by improving AI systems, are also foreclosing every potential benefit for Nevadans. That is a legislative approach that stifles innovation, prevents products and services from changing and improving, and ultimately harms the residents of Nevada.