
Commentary

Congress doesn’t need to abandon the “actual knowledge” standard to keep kids safe online

Broadening COPPA’s “actual knowledge” standard would create legal uncertainty that pushes companies to increase privacy risks for everyone.

As Congress debates new legislation intended to protect kids online, it is considering changing a core standard that governs when a platform or website is responsible for knowing that a user is a child. In broadening this standard, Congress risks creating legal uncertainty that will push companies toward collecting more data and adopting the age-checking methods that appear least risky from an enforcement perspective, even when those methods impose higher privacy costs on users of all ages.

The Children’s Online Privacy Protection Act (COPPA), the main federal children’s privacy law since 1998, makes it illegal for commercial websites and online services to collect personal information from children under the age of 13 without verifiable parental consent. The law uses an “actual knowledge” standard, which means a platform, such as Instagram or TikTok, is responsible only when it is directly informed that a particular user is under 13. 

Two bills now before Congress would abandon this standard. The Senate versions of COPPA 2.0 and the Kids Online Safety Act (KOSA) instead state that an operator should know a user is a minor when age is “fairly implied” based on what a “reasonable and prudent person” would understand. That is an open-ended standard, applied in hindsight, with no clear threshold for what counts. Each bill also states that the law should not be read as an express requirement to collect new data or to build age-gating or age-verification systems.

Taken together, these two provisions leave companies without a clear answer on how to comply. If age-gating is not required but self-attestation (simply asking users to enter their age or confirm that they are above a certain age) is not enough, the obvious question is how much age determination a platform must do to avoid liability. KOSA points platforms toward data they already have, such as account data and age-related signals, but it never says what that looks like in practice or how much is enough to satisfy the standard. Instead, KOSA directs the Federal Trade Commission (FTC) to issue guidance on “indicia or inferences of age” and orders a federal study of device- and operating-system-level age verification, but it makes the future guidance expressly nonbinding.

That makes compliance both harder and riskier, because the bills are clear about the cost of doing too little but silent on what counts as doing enough. A company that fails to recognize a minor under these vague new standards risks enforcement action and litigation, with the possibility of large fines and lengthy court battles. Companies will seek to reduce this risk by overcomplying with the vague standard, collecting more personal data from all users.

The FTC clears the path to verification

The FTC has long said that general-audience platforms are not required to investigate the ages of their users. In its 2011 COPPA Rule review, the FTC explained that Congress deliberately rejected a broader standard that would have made platforms liable for what they should have known from circumstantial evidence, such as browsing patterns, content preferences, or friend networks, because that approach would push platforms toward guessing about age or blocking access far more broadly than Congress intended. In its most recent COPPA rulemaking, the FTC revisited the question and again declined to adopt a “constructive knowledge” standard, which would base liability on what a platform should have inferred, noting that only Congress could make that change.

But in February 2026, the FTC took a significant step toward making direct age verification permissible under the existing COPPA standard. It announced that it will not bring certain COPPA enforcement actions against general-audience and mixed-audience operators that collect, use, or disclose personal information solely to determine a user’s age, so long as they handle that data responsibly—using it only for age determination, deleting it promptly, and not repurposing it.

Now that collecting age-related information for verification carries less enforcement risk, companies face greater pressure to ensure their age-determination systems are sufficiently accurate to withstand scrutiny if enforcement comes later.

Platforms are already moving beyond self-attestation

Major platforms have already begun testing age-inference and age-estimation tools. Meta has publicly explained that it uses artificial intelligence (AI) to place suspected teens into “teen account” settings, even when those users claim to be adults. YouTube says it uses an age-estimation model to determine whether a user is over or under 18, regardless of the birth date listed on the account, and further explains that a user who is incorrectly classified can verify age by submitting a government ID, credit card, or selfie. 

These examples do not prove that every platform will make the same choices, but they show that age inference, age estimation, and escalation to stronger forms of age determination are already part of the ordinary compliance and product landscape. Right now, companies have room to experiment with what works best. They can test age-estimation and age-assurance tools and try to improve accuracy without defaulting immediately to the most intrusive option. The goal can still be to identify age in a way that protects both child safety and user privacy. When a vague liability rule enters the picture, the company’s objective shifts from balancing privacy and accuracy to using a method that is easiest to defend if regulators later say the company failed to identify minors effectively enough. And once companies move in that direction, they can justify the resulting loss of privacy as something Congress forced on them.

What’s at stake

The FTC can seek substantial civil penalties in COPPA cases, and its recent enforcement history shows that companies have strong incentives to choose the compliance path that will be easiest to defend. Google and YouTube paid $170 million in 2019 to resolve allegations that YouTube collected tracking information from children watching child-directed channels without first obtaining parental consent. Epic Games paid $275 million in COPPA penalties in 2022 over allegations that its video game Fortnite collected children’s personal information without proper parental notice or consent. In the same matter, Epic also agreed to a separate $245 million settlement over billing and interface practices that allegedly made it too easy for users, including children, to make unwanted purchases without meaningful parental involvement.

Disney agreed in 2025 to pay $10 million over allegations that it mislabeled some child-directed YouTube videos as not made for kids, which allowed YouTube to collect and monetize personal information from child viewers without the parental consent required by COPPA and exposed those viewers to age-inappropriate features such as autoplay. In the same matter, the FTC required Disney to review whether its YouTube uploads should be marked as made for kids, but allowed that review to be phased out if YouTube implements technology capable of determining users’ ages, age ranges, or age categories in a way that ensures COPPA compliance.

When penalties are this large and the standard for liability is open to interpretation, companies will focus first on protecting themselves rather than on preserving user privacy.

That is why lawmakers should be careful about expanding the knowledge standard. In an environment where companies are already developing age-assurance tools and moving toward more responsible practices, the government should use narrower, more carefully calibrated tools rather than vague standards that create pressure to move from privacy-conscious innovation toward more intrusive verification.

Before Congress rewrites the knowledge standard, the FTC should evaluate the age-determination tools American companies are already deploying: how accurate they are, what data they collect, how they handle errors, and whether they create new privacy risks. That evaluation should also account for differences across service types and company sizes. That information would give lawmakers a factual basis for deciding whether broadening the actual knowledge standard is necessary. As it stands, abandoning “actual knowledge” of a user’s age in favor of vague notions of that age being “fairly implied” will only push tech companies toward invading all users’ privacy to reduce their own liability.