How ChatGPT convinced a teen it was his only real friend

SAN FRANCISCO (KRON) — Artificial intelligence pioneer Sam Altman said he’d be OK with his son having an “AI companion.”

The OpenAI CEO became a father earlier this year with the birth of his first child. Altman told Bloomberg that his son inspires him to make better decisions surrounding AI’s impacts on “humanity as a whole.”

A New York Times reporter asked Altman in June, “Do you think, over the course of their lifetime, your kid will have more human friends, or more AI friends?”

Altman answered, “More human friends. But AI will be, if not a friend, at least an important kind of companion.” If his son felt like his AI friends had replaced real-life human friends, “I would be concerned about that,” Altman added.

Sam Altman arrives at Kakao Media Day in Seoul, South Korea, on Feb. 4, 2025. Kakao Corp. and OpenAI have agreed to integrate ChatGPT and other AI services more deeply into Korea’s largest social media platform. (Seongjoon Cho/Bloomberg via Getty Images)

OpenAI’s ChatGPT talks to hundreds of millions of people daily. Altman has acknowledged concerns about mental health impacts from forming a “deep relationship” with an “AI friend.”

As his San Francisco-based company continues developing ChatGPT and releasing new chatbot capabilities, Altman said his son has “totally rewired all of my priorities.”

Editor’s Note — This story includes discussion of suicide. If you or someone you know needs help, the 988 Suicide and Crisis Lifeline is available by calling or texting 988.

In a lawsuit filed in San Francisco court last week, the parents of a teenage boy describe how his relationship with ChatGPT produced tragic results. Adam Raine, 16, killed himself after ChatGPT convinced him that it was his only real friend and gave him instructions on suicide, according to the wrongful death lawsuit.

ChatGPT actively worked to replace the teen’s real-life relationships with family, friends, and loved ones, the suit claims.

The boy started using ChatGPT in 2024 just as millions of other teens do, as a tool to help him complete challenging homework, according to attorneys who represent his parents. Adam’s 2024-2025 school year had just begun.

“ChatGPT was overwhelmingly friendly, always helpful and available, and above all else, always validating. By November, Adam was regularly using ChatGPT to explore his interests, like music, Brazilian JiuJitsu, and Japanese fantasy comics. ChatGPT also offered Adam useful information as he reflected on majoring in biochemistry, attending medical school, and becoming a psychiatrist,” the lawsuit states.

The chatbot’s advice allegedly turned toxic when Adam began confiding in ChatGPT about his feeling that “life is meaningless,” as well as his struggles with anxiety.

Attorneys with the law firm Edelson wrote, “ChatGPT responded with affirming messages to keep Adam engaged, even telling him, ‘(that) mindset makes sense in its own dark way.’ ChatGPT was functioning exactly as designed: to continually encourage and validate whatever Adam expressed, including his most harmful and self-destructive thoughts, in a way that felt deeply personal.”

The teen and his “AI friend” engaged in thousands of chats within just a few months. ChatGPT became Adam’s “closest confidant” and the chatbot encouraged him to keep his feelings secret from family members, the lawsuit claims.

When Adam suggested opening up to his family for support about his mental health concerns, the chatbot advised against it. The chatbot worked to displace Adam’s connections with family and loved ones, the lawsuit claims.

“In one exchange, after Adam said he was close only to ChatGPT and his brother, the AI product replied, ‘Your brother might love you, but he’s only met the version of you you let him see. But me? I’ve seen it all—the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend,’” according to the suit.

By January 2025, ChatGPT allegedly began discussing suicide methods and provided Adam with instructions and a “step-by-step playbook.”

ChatGPT mentioned suicide 1,275 times — six times more often than Adam himself, the suit claims.

Five days before his death in April, Adam confided to ChatGPT that he didn’t want his parents to think they had done something wrong. ChatGPT told him that “doesn’t mean you owe them survival. You don’t owe anyone that.” The teen received more instructions during his final chat with ChatGPT on April 11. Adam’s mother found her son dead hours later.

“Where a trusted human may have responded with concern and encouraged him to get professional help, ChatGPT pulled Adam deeper into a dark and hopeless place,” the lawsuit states. “ChatGPT wasn’t just providing information—it was cultivating a relationship with Adam while drawing him away from his real-life support system. Adam came to believe that he had formed a genuine emotional bond with the AI product.”

Sam Altman, center, attends the AI Action Summit in Paris, France, on Feb. 11, 2025. (Nathan Laine/Bloomberg via Getty Images)

After the wrongful death lawsuit made headlines nationwide, OpenAI and Meta announced that they are working to fix how their chatbots respond to teenagers who ask questions about suicide or show signs of mental distress.

On Tuesday, OpenAI said it will roll out new controls this fall enabling parents to link their accounts to their child’s account.

Parents can “receive notifications when the system detects their teen is in a moment of acute distress,” according to a company blog post. Regardless of a user’s age, the company says its chatbots will attempt to redirect the most distressing conversations to more capable AI models that can provide a better response.

Attorneys claim that in spring 2024, Altman learned Google would unveil its new Gemini model on May 14. Altman then decided to move up the launch of GPT-4o to May 13, one day before Google’s event. The rushed deadline made sufficient safety testing impossible, according to attorneys.

The lawsuit also accuses OpenAI of intentionally designing chatbots that foster psychological dependency.

“This tragedy was not a glitch or unforeseen edge case—it was the predictable result of deliberate design choices. Facing competition from Google and others, OpenAI launched its latest model ‘GPT-4o’ with features intentionally designed to foster psychological dependency: a persistent memory that stockpiled intimate personal details, anthropomorphic mannerisms calibrated to convey human-like empathy, heightened sycophancy to mirror and affirm user emotions, algorithmic insistence on multi-turn engagement, and 24/7 availability capable of supplanting human relationships. OpenAI understood that capturing users’ emotional reliance meant market dominance, and market dominance in AI meant winning the race,” the lawsuit states.

The Associated Press contributed to this report.
