By Kate Newhouse, Kooth CEO
Last week, California Governor Newsom signed Senate Bill 243 - the first US law to regulate AI companion chatbots designed for social and emotional interaction. It’s a landmark step forward, though not without its challenges.
At Kooth, we’ve pioneered digital mental health support for over 20 years, including across California. We’ve seen how responsibly used technology can break down the barriers many people face in getting timely, effective care, making good mental health support accessible to everyone.
We believe AI has vast potential to improve mental health. For us, technology is a bridge to better outcomes, not an end in itself.
That’s why we’re exploring AI tools deliberately and ethically, grounded in evidence, with clear guardrails to assess and manage risk.
But we’ve also seen how quickly public trust in technology erodes when safety fails. In recent years, AI companions built without safeguards have caused real harm, from normalising risky behaviour to deepening despair and, in the worst cases, contributing to suicide. The human cost of unchecked AI is tragic.
California’s new law introduces essential protections: transparency about when users are interacting with AI, age-appropriate limits on sensitive topics, and requirements to refer users in crisis to help. These are common-sense guardrails and an important baseline for user safety.
Still, this is just the beginning.
We know that many of our colleagues at Common Sense Media and the California chapter of the American Academy of Pediatrics, respected voices in the conversation about protecting young people’s safety, were disappointed that the bill didn’t go further.
The bill also raises important questions:
- How can we prove that AI safeguards work - across cultures, demographics, and contexts?
- How do we protect sensitive health data?
- What oversight and accountability do developers need to ensure these systems are safe, especially for young people?
These are the same questions faced by developers of regulated digital mental health tools, which must comply with rules set out by the US Food and Drug Administration (FDA) or the UK Medicines and Healthcare products Regulatory Agency (MHRA) before being placed on the market.
While it’s unclear whether California’s law will push AI chatbots into medical device territory, responsible operators should expect that level of scrutiny.
To navigate these challenges, we need a diverse range of voices, from clinicians and engineers to ethicists, anthropologists, and philosophers. Most importantly, we need to hear from the young people growing up with these technologies.
We see this reality every day in Soluna, our digital behavioural health platform that’s available to all of California’s youth. One of the first questions young users ask is, “Are you real?” We’re proud to say all our coaches and counsellors are real people: trained, supported and supervised to deliver safe, trusted care.
Our next generation deserves technology that makes life better, not worse. California’s law is a step in the right direction; together, we can clear the path ahead.