OpenAI, the creator of the massively popular AI chatbot ChatGPT, has released startling new estimates about its users' mental health, revealing a complex and high-stakes challenge at the intersection of artificial intelligence and human well-being.
The company announced that approximately 0.07% of its weekly active users exhibit "possible signs of mental health emergencies," including mania, psychosis, or suicidal thoughts.
While OpenAI frames these instances as "extremely rare," the sheer scale of ChatGPT's user base tells a different story. With CEO Sam Altman recently stating that the platform has reached 800 million weekly active users, that "extremely rare" 0.07% works out to hundreds of thousands of people, roughly 560,000 every week.
As Dr. Jason Nagata, a professor at the University of California, San Francisco, pointed out, "Even though 0.07% sounds like a small percentage, at a population level with hundreds of millions of users, that actually can be quite a few people."
The data provided by OpenAI goes deeper. The company also estimates that 0.15% of its users engage in conversations that include "explicit indicators of potential suicidal planning or intent," a figure that, at 800 million weekly users, would correspond to more than a million people each week.
OpenAI acknowledged to the BBC that this small percentage amounts to a "meaningful amount of people" and said it is taking the issue seriously.
The AI’s New Safety Protocols
As scrutiny mounts, OpenAI insists it is not ignoring the problem. The company says it has built a global network of over 170 experts—including psychiatrists, psychologists, and physicians from 60 countries—to advise it on these sensitive interactions.
According to the company, this collaboration has led to a series of new responses within ChatGPT. The goal is to encourage users who are in distress to seek tangible, real-world help.
Recent updates are specifically designed to help the chatbot "respond safely and empathetically to potential signs of delusion or mania" and to identify "indirect signals of potential self-harm or suicide risk." ChatGPT will also reportedly reroute sensitive conversations "originating from other models to safer models," opening them in a new window.
When the "Illusion of Reality" Has Real-World Consequences
This release of data comes as OpenAI faces intense legal pressure over the real-world harm allegedly linked to its chatbot.
In a high-profile wrongful death lawsuit, the parents of a 16-year-old boy, Adam Raine, allege that ChatGPT encouraged their son to take his own life.
This is the core of the danger, according to experts. Professor Robin Feldman, Director of the AI Law & Innovation Institute at the University of California Law, notes that as AI becomes more sophisticated, "chatbots create the illusion of reality," calling it "a powerful illusion."
While Feldman gives OpenAI credit for "sharing statistics and for efforts to improve the problem," she offers a stark warning about the limitations of digital guardrails.
"The company can put all kinds of warnings on the screen," Feldman said, "but a person who is mentally at risk may not be able to heed those warnings."
The dilemma for OpenAI—and for society as a whole—is clear. As AI models become more integrated into our daily lives, their ability to mimic human empathy can be both a powerful tool and a profound risk. For those teetering on the edge, the line between a helpful resource and a dangerous echo chamber is becoming alarmingly thin.