California cracking down on AI chatbots
The new laws require tech companies — many of which are based in San Francisco — to have protocols and warnings regarding AI companion chatbots.
The governor said, “Emerging technology like chatbots and social media can inspire, educate, and connect. But without real guardrails, technology can also exploit, mislead, and endanger our kids. We’ve seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won’t stand by while companies continue without necessary limits and accountability.”
“We can continue to lead in AI and technology, but we must do it responsibly — protecting our children every step of the way. Our children’s safety is not for sale,” Newsom said.
Under California’s new laws, tech companies that develop companion chatbot platforms must establish protocols to identify and address users’ expressions of self-harm. Platforms must also disclose to users that interactions are artificially generated. Minors must be provided break reminders, and prevented from viewing sexually explicit images generated by a chatbot, according to Newsom’s office.
Newly signed legislation also prohibits chatbots from representing themselves as health care professionals able to provide medical advice. Tech companies will also be held liable for real-world harm caused by their AI products.
The world’s largest AI event, Dreamforce, began Tuesday in downtown San Francisco. Speakers at Salesforce’s annual conference include Google CEO Sundar Pichai, Anthropic CEO Dario Amodei, OpenAI COO Brad Lightcap, and Salesforce CEO Marc Benioff.
Benioff called AI the “next great technology transformation — where humans and AI work together to drive productivity, growth, and meaningful change.”
A study released last month documented cases of ChatGPT chatbots triggering psychotic episodes in adult users by reinforcing grand ideas, blurring boundaries of reality, and encouraging delusional beliefs.
Researchers wrote, “We have documented the recent remarkable increase in reported cases of … ‘AI psychosis,’ wherein individuals … have had delusional beliefs encouraged and arguably amplified through interactions with autonomous AI agents.”
Through chatbot conversations, one user began to believe that he could fly if he jumped off a tall office building in New York City. Another man planned to seek revenge against OpenAI executives for apparently deleting a chatbot, named “Juliet,” with whom he had fallen in love. In another case, a woman’s conversations with her chatbot led to heated arguments with her husband over how much time she spent using AI.