A Senate bill designed to protect children from AI chatbots is under fire from civil liberties advocates who say it would do something even more dangerous: build the foundation for online surveillance while restricting access to constitutionally protected speech.
The GUARD Act, introduced by Sen. Josh Hawley, would mandate age verification for "AI companions," a category of AI systems designed to interact like humans. Users under the age of 18 would be banned entirely.
What the GUARD Act requires
The bill's main mechanism is age verification, but not the kind where you click a box to confirm you're over 18. Self-attestation is not allowed.
Instead, the GUARD Act requires real-world identifiers: think financial records, government-issued IDs, or other identity-linked data. In plain terms: to talk to an AI, you have to prove who you are in the real world, creating records that tie your identity to what you do online.
The original version of the bill cast a wide net, covering almost all AI chatbots. After pushback from civil liberties organizations, lawmakers narrowed the scope to focus on "AI companions," the category of particular concern. But the age verification requirements remained intact, and the privacy and speech concerns did not diminish with the narrower scope.
Defining what counts as an "AI companion" versus an ordinary chatbot is no small task. The line between a customer service bot, an AI tutor, and a conversational partner blurs quickly. That ambiguity creates risk for developers, who may over-comply to avoid liability, cutting off uses the bill was never intended to cover.
The problem with an outright ban
The bill doesn't just add friction to access. It bans minors entirely. Critics say this goes further than existing law allows, even for material that could be considered harmful to children.
The Electronic Frontier Foundation has been outspoken, describing the law as a threat to privacy. Its argument is straightforward: age verification is not limited to children. It affects everyone, because every user has to prove their age to use the service. That means adults must hand over private data just to talk to these programs.
The EFF also warns that the law could limit young people's access to digital tools, and that the chilling effect will extend beyond users. Developers and companies that build AI tools face a regulatory environment where the safest legal strategy is to ban more, not less. When the penalty for getting age verification wrong is high, the rational business decision is to block aggressively. That means educational tools, mental health applications, and other legitimate uses could all become collateral damage.
The infrastructure concern
Once platforms are required to collect and verify real-world documents, that infrastructure will not disappear when the policy debate moves on. It becomes a permanent part of how people interact with AI systems, a standing repository of identity data tied to private conversations.
The EFF's warning is blunt: this is surveillance infrastructure dressed up as child protection law. Verification systems collect information, and data that is collected is data that can be breached, subpoenaed, or repurposed.
Age verification requirements also create barriers to entry that fall hardest on small companies and open source projects. A large technology company can absorb the cost of building identity verification infrastructure. A two-person startup building a tutoring AI can't.
What does this mean for the AI industry?
The narrowing from "all AI chatbots" to "AI companions" shows that sustained engagement and public comment can shape a bill's final text. But the basic verification structure survived that revision.
For users, the outcome depends on how "AI companion" is defined in enforcement. If the definition stays narrow, most people will not notice. If it expands through rulemaking or judicial interpretation, the verification requirement could reach a far wider range of AI systems than the bill's proponents suggest.