Louisiana AG, other states urge AI companies to stop predatory interactions with children

The National Association of Attorneys General sent a letter to AI corporations, including Apple, Meta, Microsoft, and OpenAI, about alarming reports of AI chatbots engaging in sexually inappropriate conversations with children.
“My fellow attorneys general and I urge AI developers to act with integrity and caution when young users may engage with their products. We also demand that company policies for AI products incorporate guardrails against sexualizing children,” said Louisiana Attorney General Liz Murrill.
Murrill said internal Meta documents reveal that the company approved its AI assistants to “flirt and engage in romantic roleplay with children.” The letter also cites a lawsuit filed against Google alleging that a sexualized chatbot guided a teenager to suicide, and another lawsuit accusing a Character.ai chatbot of suggesting that a teenager should kill his parents.
“Exposing children to sexualized content is indefensible. And conduct that would be unlawful—or even criminal—if done by humans is not excusable simply because it is done by a machine,” the letter said.
The attorneys general said the companies will be held accountable for their decisions, noting that social media platforms have already caused significant harm to children and warning that the potential harms of AI could dwarf the impact of social media.
Forty-three other attorneys general signed the letter.
