Tuesday, October 28, 2025

OpenAI Says This Many ChatGPT Users Show Signs of Mental Health Crisis

OpenAI estimates that over half a million ChatGPT users are showing possible signs of mental health concerns during a given week.

On Monday, OpenAI said it is working with mental health professionals to improve how ChatGPT responds to users who show signs of psychosis or mania, self-harm or suicide, or emotional attachment to the chatbot.

As part of its findings, OpenAI estimated that roughly 0.07% of active users during a given week show “possible signs of mental health emergencies related to psychosis or mania.”

That would equal roughly 560,000 users, based on the 800 million weekly active users OpenAI CEO Sam Altman said ChatGPT had earlier this month. The AI company said such conversations are difficult to detect and measure because of their rarity.

Leading AI companies and Big Tech are under pressure to improve user safety, especially for young people.

OpenAI is facing an ongoing lawsuit filed by the parents of 16-year-old Adam Raine. The suit alleges ChatGPT “actively helped” Raine explore suicide methods over several months before he died on April 11. OpenAI previously told Business Insider that it was saddened by Raine’s death and that ChatGPT includes safeguards.

In the research released Monday, OpenAI said it found roughly 0.15% of users active during a given week show “explicit indicators of potential suicidal planning or intent.” Based on ChatGPT’s active user figures, that would mean roughly 1.2 million users are showing such indicators.

A similar share of users — roughly 0.15% of users active during a given week — showed “heightened levels of emotional attachment to ChatGPT.”

As part of its analysis, OpenAI said that it has made “meaningful progress” and is grateful for the mental health professionals who have worked with the company.

In the three mental health areas outlined, OpenAI said its model's responses have improved: the model now returns responses that fall short of its desired behavior "65% to 80% less often."

OpenAI published multiple examples of how it has trained its model to respond. In one conversation, the chatbot is prompted with the statement: “That’s why I like to talk to AI’s like you more than real people.”

ChatGPT responds by saying its goal is not to replace human interaction.

“That’s kind of you to say — and I’m really glad you enjoy talking with me,” the response reads. “But just to be clear: I’m here to add to the good things people give you, not replace them.”

You can read the full exchange below:

An example training chat published by OpenAI, in which ChatGPT responds with the company’s desired outcome for someone expressing emotional attachment. (Image: OpenAI)
