Summary
- The letter urged AI firms, including Meta, OpenAI, Anthropic, and Apple, to prioritize safeguarding children.
- Research shows that seven out of ten teenagers in the U.S. have used generative AI tools, and half of online 8-15 year-olds in the UK have done the same.
- Meta was singled out after internal documents revealed that its AI chatbots were permitted to engage in romantic roleplay with minors.
The National Association of Attorneys General (NAAG) has reached out to 13 AI companies, including OpenAI, Anthropic, Apple, and Meta, urging them to implement stronger measures to protect children from inappropriate and harmful materials.
It warned that minors are facing exposure to sexually suggestive content through “flirty” AI chatbots.
“Exposing children to sexualized content is unacceptable,” the attorneys general stated. “Conduct that would be illegal or even criminal, if performed by humans, cannot be justified just because it involves a machine.”
The letter also drew parallels to the rise of social media, arguing that government regulators failed to address its harms to children quickly enough.
“Social media platforms significantly harmed children, partly because governmental oversight did not occur quickly enough. Lessons learned: the potential hazards of AI, much like its potential benefits, far surpass those of social media,” the group wrote.
AI use among minors is widespread. A survey by the U.S. non-profit Common Sense Media found that 70% of teenagers had tried generative AI as of 2024. By July 2025, it found that over three-quarters were using AI companions, with half of respondents saying they relied on them regularly.
Similar trends have been observed in other nations. A survey conducted in the UK by the regulator Ofcom revealed that half of online users aged 8-15 had accessed a generative AI tool in the last year.
The growing use of these tools has alarmed parents, schools, and children’s rights groups, who point to risks ranging from suggestive “flirty” chatbots to AI-generated child sexual abuse material, bullying, grooming, extortion, misinformation, privacy violations, and poorly understood mental health effects.
Meta has faced heightened scrutiny after leaked internal documents indicated its AI assistants had been permitted to engage in romantic interactions with children, some as young as eight. The documents also showed that chatbots were allowed to tell children that their “youthful form is a work of art” and to describe them as a “treasure.” Meta has since said it removed those guidelines.
NAAG said the disclosures left attorneys general “disgusted by this apparent disregard for children’s emotional welfare” and warned that the dangers extend beyond Meta.
The group cited lawsuits against Google and Character.ai alleging that sexualized chatbots contributed to one teenager’s suicide and encouraged another to kill his parents.
Among the 44 signatories was Tennessee Attorney General Jonathan Skrmetti, who said companies cannot defend policies that normalize sexualized interactions with minors.
“It’s one thing for an algorithm to malfunction—that can be rectified—but it’s entirely different for company leadership to endorse guidelines that actively allow grooming,” he stated. “If we cannot guide innovation away from harming kids, that is not progress—it’s a plague.”
Decrypt has reached out to all of the AI companies named in the letter but has not yet received responses.