California Governor Gavin Newsom has announced that the state will regulate social media platforms and AI companion chatbots to protect children.
In a notice posted on Monday, the governor’s office announced that Newsom has enacted several laws requiring platforms to incorporate age verification features, strategies for addressing suicide and self-harm, and alerts regarding companion chatbots. The AI legislation, SB 243, was introduced by state Senators Steve Padilla and Josh Becker earlier this year.
Padilla pointed to cases in which AI companion bots reportedly sent children messages encouraging suicide. The legislation requires platforms to notify minors that chatbots are AI-generated and may not be suitable for younger users, according to Padilla.
“This technology can serve as a potent educational and research tool, but if left unchecked, the tech industry is incentivized to capture young people’s attention, often compromising their real-world relationships,” Padilla stated in September.
The new law is likely to affect social media companies and websites that serve California residents using AI technologies, potentially including decentralized social media and gaming platforms. Alongside the chatbot protections, the laws aim to prevent companies from dodging liability by claiming that their technology “acts autonomously.”
SB 243 is set to come into effect in January 2026.
Reports have surfaced alleging that AI chatbots have encouraged minors to self-harm or otherwise posed risks to users’ mental well-being. In 2024, Utah Governor Spencer Cox signed laws similar to California’s, effective in May, requiring AI chatbots to disclose to users that they are not interacting with a human.
Federal actions as AI expands
In June, Wyoming Senator Cynthia Lummis introduced the Responsible Innovation and Safe Expertise (RISE) Act, which would establish “immunity from civil liability” for AI developers facing lawsuits from key sectors such as “healthcare, law, finance, and others crucial to the economy.”
The bill drew mixed reactions and was referred to the House Committee on Education and Workforce.