A version of the following public comment was submitted to members of the California Senate Standing Committee on Privacy, Digital Technologies, and Consumer Protection on March 26, 2026.
While deepfakes created with artificial intelligence (AI) digital replicas are an emergent issue, the penalties Senate Bill 1142 (SB 1142) would place on online platforms are severe enough to produce several unintended consequences for expression.
SB 1142 would require large online platforms to create a mechanism for users to report and remove unauthorized digital replicas of themselves. As part of this mechanism, platforms would be required to respond within 48 hours, either by taking down the content or by providing a written explanation for why the content remains online. To keep up with these obligations, the bill would require platforms to build and staff rapid notice and review systems, provenance-aware reporting tools, and legal processes capable of distinguishing harmful uses from protected humor.
Platforms that systemically fail to comply would risk civil penalties of up to $50,000 per day, and these penalties are compounded by the fact that the attorney general and every city attorney in California could bring an enforcement action.
Though the goal of this bill is likely to provide a meaningful mechanism for users to protect their likeness, the 48-hour response window, combined with hefty penalties for noncompliance, would likely lead online platforms to simply remove most reported content. Despite SB 1142’s exception for satire and parody, such protected expressive works would likely be swept up in takedowns as well, because the bill creates strong incentives for overly cautious content moderation. These obligations would also impose significant compliance costs that smaller platforms and open-source-oriented services may struggle to bear.
SB 1142’s proposed 48-hour takedown regime would effectively pressure platforms to act as the primary enforcers against user speech involving digital replicas. This likely puts the bill in conflict with Section 230 of the Communications Decency Act, meaning it risks being struck down in court. Recent California efforts to regulate AI-generated content in the electoral context illustrate this risk: courts struck down laws such as Assembly Bill 2655 on Section 230 grounds after finding that the state could not impose liability on platforms for failing to block or remove user-generated deepfake videos, even when those laws were framed as targeting harmful or deceptive content during campaigns. While SB 1142 states that this section “does not impose liability on a social media platform if that liability is prohibited by Section 230,” that clause sits uneasily alongside the threat of significant civil penalties for failing to take certain content down fast enough.
Rather than holding platforms liable for content that users generate and post, we recommend that the committee focus on holding individual bad actors accountable for posting AI-generated deepfakes with the intent to defame another person.