A version of the following public comment was submitted to the Georgia Senate Study Committee on the Impact of Social Media and Artificial Intelligence on Children and Platform Privacy Protection on September 16, 2025.
Thank you for the opportunity to provide Reason Foundation’s view on the impact of social media and AI on children and platform privacy protection. At a time when parents are concerned about their children’s online safety, advocates for bills such as the proposed App Store Accountability Act at the federal level, and the recently enacted state versions in Utah, Texas, and Louisiana, have called for age verification at the device level to keep minors from accessing harmful content. The state laws do not take full effect until 2026.
In practice, these mandates would require users, at the point of account creation, to provide verifiable age information that can reliably establish their age, such as government-issued IDs and/or biometric data (facial scans, for example). The system would then sort users into predefined age brackets (children under 13, teenagers 13-17, and adults 18 and over) to tailor content restrictions and access rights accordingly. For minors, the laws would mandate linking their accounts to verified parental accounts, with explicit parental consent required before allowing downloads, purchases, or access to certain application features.
While these checks aim to reduce minors’ exposure to harmful material, the approach raises privacy concerns and risks eroding online anonymity. Requiring websites to collect and store government IDs and biometrics greatly increases the harm a breach can cause, especially on sites that are required to verify age but lack sufficient data security measures. A clear example is the breach of the dating app Tea, which exposed thousands of users’ personal information for bad actors to potentially exploit.
Furthermore, age verification substantially undermines online anonymity: users must provide evidence of age that can be linked to their identity, even when platforms claim to employ privacy-preserving technologies. Throughout the United States’ history, anonymous speech has been recognized as protected by the First Amendment, including speech online. Age verification that links government IDs and biometrics to specific users erodes the ability to participate pseudonymously or anonymously online, an ability crucial for whistleblowers, activists, and vulnerable groups engaging with sensitive issues. The persistent digital footprints these laws require raise risks of profiling, tracking, and surveillance, especially as verification systems integrate with government digital identity schemes.
Reason Foundation urges the committee to instead consider policies that would empower parents—the primary decision-makers for their children’s online access.
Rather than mandating invasive age verification systems that collect sensitive personal data, it would be better to promote existing parental control features at the device and platform levels, such as screen time limits, content filters, and family account management. These tools can be adapted to individual family preferences without exposing minors to privacy risks or chilling anonymous speech.
Similarly, promoting age-appropriate educational programs within schools is critical to equipping youth with the skills and knowledge to navigate online environments safely and ethically. Digital citizenship curricula, such as those offered by Common Sense Education or Google’s Be Internet Awesome, guide students in understanding privacy, communication etiquette, digital footprints, and cyberbullying awareness. Such education fosters informed, responsible technology use from an early age, complementing parental controls rather than replacing them.
A balanced approach that maximizes family autonomy, minimizes data exposure, and supports education over coercion creates a safer online environment while respecting constitutional freedoms and technical feasibility.
Although age verification requirements are meant to protect minors from harmful content and regulate their online engagement, current proposals involve complex technical challenges that put both children’s and adults’ online privacy and security at risk. An approach that balances safety, parental empowerment, and constitutional rights would foster a safer, more privacy-respecting digital environment for all.