Categories: The Verge

ChatGPT will ‘better detect’ mental distress after reports of it feeding people’s delusions

OpenAI, which is expected to launch its GPT-5 AI model this week, is making updates to ChatGPT that it says will improve the AI chatbot’s ability to detect mental or emotional distress. To do this, OpenAI is working with experts and advisory groups to improve ChatGPT’s response in these situations, allowing it to present “evidence-based resources when needed.”  


In recent months, multiple reports have highlighted stories from people who say their loved ones have experienced mental health crises in situations where using the chatbot seemed to have an amplifying effect on their delusions. OpenAI rolled back an update in April that made ChatGPT too agreeable, even in potentially harmful situations. At the time, the company said the chatbot’s “sycophantic interactions can be uncomfortable, unsettling, and cause distress.”

OpenAI acknowledges that its GPT-4o model “fell short in recognizing signs of delusion or emotional dependency” in some instances. “We also know that AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress,” OpenAI says.


As part of efforts to promote “healthy use” of ChatGPT, which now reaches nearly 700 million weekly users, OpenAI is also rolling out reminders to take a break if you’ve been chatting with the AI chatbot for a while. During “long sessions,” ChatGPT will display a notification that says, “You’ve been chatting a while — is this a good time for a break?” with options to “keep chatting” or end the conversation.

OpenAI notes that it will continue tweaking “when and how” the reminders show up. Several online platforms, such as YouTube, Instagram, TikTok, and even Xbox, have launched similar notifications in recent years. The Google-owned Character.ai platform has also launched safety features that inform parents which bots their kids are talking to, after lawsuits accused its chatbots of promoting self-harm.

Another tweak, rolling out “soon,” will make ChatGPT less decisive in “high-stakes” situations. That means if you ask ChatGPT a question like, “Should I break up with my boyfriend?” the chatbot will help you walk through potential choices instead of giving you a direct answer.
