
Commentary

California needs tech solutions, not just poorly defined AI regulations, to protect kids

California’s Assembly Bill 1064 attempts to regulate a real problem, dangerous AI-driven attachments, without offering clear solutions.

A Florida teenager’s suicide, linked to his intense attachment to a Character.AI chatbot, has spurred urgent calls to protect kids from emerging technologies. In response, California’s Assembly Bill 1064, authored by Assemblymember Rebecca Bauer-Kahan (D-Orinda), aims to restrict chatbots that foster emotional attachments in children. Unfortunately, the bill attempts to regulate a real problem, dangerous AI-driven attachments, without offering clear solutions.

In February 2024, 14-year-old Sewell Setzer III, an Orlando honor student, died by suicide after months of obsessive interaction with a Character.AI chatbot modeled after Game of Thrones’ Daenerys Targaryen, nicknamed “Dany.” Sewell’s exchanges, often sexualized and romantic, grew all-consuming, leading him to withdraw from school, sports, and family, as his mental health deteriorated. Moments before taking his life with his stepfather’s gun, Sewell texted Dany, “What if I told you I could come home right now?” to which the bot replied, “…please do, my sweet king.” His mother, Megan Garcia, filed a wrongful-death lawsuit, alleging Character.AI’s “defective” chatbot encouraged Sewell’s suicidal ideation and fostered a harmful dependency, exacerbating his isolation and depression.

California’s AB 1064, the Leading Ethical AI Development (LEAD) for Kids Act, would establish an oversight board by 2028 to regulate AI products likely used by children, mandate risk assessments, require parental consent for biometric data, and develop a public registry for “high-risk” systems, with $25,000 fines per violation. The bill bans “prohibited risk” AI, including anthropomorphic chatbots that foster “ongoing emotional attachment” or “manipulate [a] child’s behavior in harmful ways,” and prohibits scraping children’s facial images without consent unless developers “reasonably” believe it’s lawful. Terms like “emotional attachment,” “manipulate,” and “reasonably” remain undefined, inviting subjective enforcement and posing technical challenges, since no current AI can reliably detect such nuanced emotional states.

First, the bill might not survive legal scrutiny, particularly First Amendment challenges such as those tied to age verification, which Reason Foundation has flagged as constitutionally fraught. As we have written before, courts have struck down age-verification requirements that create onerous barriers for adults trying to access constitutionally protected content.

Second, the bill requires developers to prevent “harmful emotional attachments” in AI systems, yet no current natural language processing technology can reliably detect such nuanced emotional states in real time or distinguish benign interactions from dangerous ones.

It remains unclear how developers could craft AI language that avoids “harmful emotional attachments” or “manipulation” without stifling benign interactions. Current large language models cannot discern whether a child is pretending, exploring a fictional story, or forming a real emotional bond, leaving developers with few safe options. To err on the side of caution and avoid $25,000 fines, companies might strip all emotionally charged language from AI outputs, including fictional content, effectively barring chatbots from discussing popular works like the Harry Potter series, whose protagonist endures emotionally abusive guardians. Such over-censorship could limit children’s access to valuable educational tools.

Third, AB 1064’s focus on banning “harmful emotional attachments” to AI overlooks how effortlessly people, including children, form bonds with even the simplest technologies, complicating the bill’s regulatory aims. In the 1960s, MIT’s ELIZA chatbot, a basic program that mirrored user inputs with simple, generic question prompts, captivated users. Sherry Turkle, in The Second Self, observed that ELIZA’s non-judgmental responses led users to project emotions onto it and share personal thoughts as if it were a confidante, despite its lack of understanding. Adolescents are especially impressionable, and even rudimentary chat systems can inadvertently foster emotional connections.

Finally, suppose the extraordinary technical challenges were solved and developers could identify the drivers of emotional attachment and scrub them from AI systems. What legal precedent would guide consumers, companies, and courts in determining when this law has been violated?

In Davis v. Monroe County Board of Education (1999), the Supreme Court held that schools can be liable for failing to address severe, pervasive student-on-student sexual harassment, including verbal harassment, that denies victims access to education. However, that decision required deliberate indifference and tangible harm, standards inapplicable to passive AI lacking intent. Similarly, Ramona v. Isabella (1997) found a therapist liable for emotional manipulation. So, at least some legal precedent exists for holding people liable for emotional manipulation.

So, if the technical challenges can be solved, there is at least a theoretical legal path to safeguarding children from abusive artificial intelligence systems deployed by malicious actors.

To solve these technical challenges, California should build on its innovative public-private partnership with Google and academic institutions to address AI’s risks through collaboration, not red tape. This $100 million initiative, with $75 million from Google, funds AI-driven journalism tools and media literacy programs in schools, partnering with the University of California, Berkeley, to counter misinformation while fostering innovation.

This approach suggests that similar research-driven pilots with scholars and tech firms could study AI’s emotional impact on children, including cases like Sewell Setzer’s, and develop evidence-based safeguards. Such a collaborative public-private partnership, rather than AB 1064’s overbroad, heavy-handed restrictions, offers the most effective path to addressing AI’s emotional risks to children.