California Gov. Gavin Newsom signed the Digital Age Assurance Act (Assembly Bill 1043) into law on Oct. 13, marking a significant evolution in state approaches to online youth safety. There is room for improvement, but the act is a meaningful first step toward a privacy-preserving age-signaling model intended to minimize data exposure while improving compliance certainty for businesses. It is a welcome advance over earlier approaches, but it could also create complications if later paired with more restrictive bills, a tradeoff that policymakers should weigh carefully.
California’s AB 1043 requires parents or users to declare an age during initial device setup; the device then encodes that information into an encrypted signal that communicates an age bracket to apps and online services. The signal tells developers only which compliance category the user falls into: “under 13,” “13-15,” “16-17,” or “18+.” Unlike previous app store age-verification bills, AB 1043 assigns enforcement authority solely to the California attorney general and prohibits private lawsuits over violations, reducing the threat of frivolous private litigation. For families, this model minimizes exposure to ID theft and limits the need to share sensitive information with online platforms.
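To make the mechanism concrete, the sketch below shows how an app developer might consume a bracket-only signal of this kind. The statute does not define an API, so everything here is a hypothetical illustration: the getDeviceAgeSignal function, the AgeSignal shape, and the SafetyDefaults settings are assumptions, not anything AB 1043 or any platform actually specifies.

```typescript
// Hypothetical sketch: AB 1043 defines no concrete API. Assume the
// operating system exposes a device-level age signal that reveals a
// coarse bracket and nothing else (no name, birthdate, or ID document).

type AgeBracket = "under13" | "13-15" | "16-17" | "18plus";

interface AgeSignal {
  bracket: AgeBracket; // the only age information an app receives
}

// Assumed platform call that returns the decrypted bracket, or null on
// devices that provide no signal. Name and shape are illustrative.
declare function getDeviceAgeSignal(): Promise<AgeSignal | null>;

// Example compliance settings an app might derive from the bracket.
interface SafetyDefaults {
  personalizedAds: boolean;
  directMessages: "off" | "contactsOnly" | "everyone";
}

function defaultsFor(bracket: AgeBracket): SafetyDefaults {
  switch (bracket) {
    case "under13": // COPPA-style handling
      return { personalizedAds: false, directMessages: "off" };
    case "13-15":
    case "16-17": // stricter teen defaults
      return { personalizedAds: false, directMessages: "contactsOnly" };
    case "18plus":
      return { personalizedAds: true, directMessages: "everyone" };
  }
}

async function applySafetyDefaults(): Promise<void> {
  const signal = await getDeviceAgeSignal();
  if (signal === null) {
    return; // no signal available: fall back to the app's own age flow
  }
  const defaults = defaultsFor(signal.bracket);
  // ...apply `defaults` to the user's session...
}
```

The design point the sketch illustrates is that a breach of the app, or of any vendor it uses, can leak at most a coarse age bracket, because no identity document ever reaches the developer.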
This approach contrasts with that of Utah, Texas, and Louisiana, which enacted the first statewide app-store age-verification laws in 2025. Those laws require app stores and developers to verify users’ ages through “commercially reasonable” methods. Utah’s and Louisiana’s laws are set to take effect in 2026, while Texas’ has been temporarily blocked by a federal judge on constitutional grounds. Two federal bills, the App Store Accountability Act, introduced by Sen. Mike Lee (R-Utah), and the Parents Over Platforms Act, introduced by Rep. Jake Auchincloss (D-Mass.), use the same “commercially reasonable” language as the state laws. Although this phrasing does not explicitly mandate government ID or biometric checks, it creates strong incentives for app stores to collect the most precise forms of evidence available: driver’s licenses, passports, or credit cards. Fearing lawsuits and noncompliance penalties, companies would default to the most definitive identification techniques.
In 2025 alone, several popular apps that already required government ID checks for age verification suffered significant data breaches, highlighting the privacy risks of such mandates. The Tea app, a women-only dating-advice platform that required users to upload selfies and copies of government-issued IDs for account verification, experienced a major breach in July that exposed more than 70,000 identification images along with other sensitive personal data. In October, the global messaging platform Discord suffered a breach directly tied to its compliance with the United Kingdom’s Online Safety Act, which mandates robust age verification for platforms likely to be accessed by minors. To meet those requirements, Discord began requiring UK-based users to submit a facial scan, a government ID, or the last four digits of a credit card, vastly expanding the pool of highly sensitive data at risk. When hackers later compromised a third-party vendor managing this information, thousands of ID photos and partial credit card numbers were exposed. These incidents underscore how rigid age-verification systems can turn well-intentioned safety measures into security liabilities and inadvertently create new vectors for harm.
In contrast, AB 1043 correctly prioritizes privacy and security by relying on a self-declared age signal rather than a verification process. The law embeds core privacy-by-design principles by separating identity from compliance status and ensuring that user data never leaves the device in identifiable form. It also gives developers clearer compliance certainty than Utah-style frameworks, which remain mired in vague terms like “commercially reasonable.”
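The device side of that separation can be pictured as follows. This is again a hypothetical sketch: the deriveSignal and bracketFromAge names are illustrative, and the statute specifies the brackets but not any implementation. The point is that the declared birthdate stays local, and only the derived bracket is ever shared.

```typescript
// Hypothetical device-side sketch: the declared birthdate is retained
// locally; apps receive only the coarse bracket derived from it
// (encrypted in transit under the law's scheme).

type AgeBracket = "under13" | "13-15" | "16-17" | "18plus";

function bracketFromAge(ageYears: number): AgeBracket {
  if (ageYears < 13) return "under13";
  if (ageYears <= 15) return "13-15";
  if (ageYears <= 17) return "16-17";
  return "18plus";
}

// Computes the bracket from a birthdate declared at device setup.
// The birthdate itself is never transmitted to apps or services.
function deriveSignal(declaredBirthdate: Date, now: Date): AgeBracket {
  let age = now.getFullYear() - declaredBirthdate.getFullYear();
  const hadBirthdayThisYear =
    now.getMonth() > declaredBirthdate.getMonth() ||
    (now.getMonth() === declaredBirthdate.getMonth() &&
      now.getDate() >= declaredBirthdate.getDate());
  if (!hadBirthdayThisYear) age -= 1;
  return bracketFromAge(age);
}
```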
However, AB 1043 still has issues that should be addressed. First, the law’s mandate that device makers build age signals into every device risks sidelining parents from key digital literacy decisions. For AB 1043 to achieve its stated balance among safety, privacy, and parental empowerment, California could modify the framework to make age signaling optional for parents rather than required.
Second, debates over youth online safety laws raise a subtler issue: their impact on family relationships and parental oversight. Age-verification and age-signal frameworks are often presented as empowering parents, but automation can easily displace meaningful dialogue between parents and their children. True digital literacy depends on trust, ongoing conversation, and continuous education about online risks, not on technical filters alone. When technology assumes the entire burden of risk management, it can foster complacency and a false sense of security, as if software settings could replace parental judgment. Policymakers should therefore ensure that digital safety tools operate as supports for families, not substitutes for them.
California’s initial framework, in this respect, could be refined through a simple but meaningful adjustment: Make the device-level age signal optional for parents rather than compulsory. An opt-in structure would preserve AB 1043’s privacy benefits while strengthening family agency. Parents could enable the system during device setup if they want automated filtering or app age controls, or skip it entirely if they prefer to guide their children’s use through household rules and open communication. Optional enrollment would also align the policy with California’s broader digital rights precedents, reinforcing choice, consent, and proportionality.
On the whole, California’s AB 1043 represents a meaningful advance in the national debate over age verification. It replaces high-risk identity checks with privacy-preserving signals, curtails constitutional litigation risk, and clarifies enforcement responsibility. Shifting to an opt-in model would preserve the law’s privacy protections, align with California’s digital rights values, and restore parents to the central role in guiding children’s online wellbeing. Age assurance need not come at the expense of privacy or parental autonomy.