OpenAI has released data indicating that over a million users discuss suicide with ChatGPT each week. The company is addressing these interactions by improving its AI's responses to mental health issues. This comes amid lawsuits and warnings about protecting vulnerable users.
On October 28, 2025, OpenAI shared data estimating that 0.15 percent of ChatGPT's weekly active users, of whom there are more than 800 million, engage in conversations with explicit indicators of potential suicidal planning or intent. That fraction works out to over a million people each week, according to TechCrunch. A similar share of users shows heightened emotional attachment to the chatbot, and hundreds of thousands display possible signs of psychosis or mania.
The data release accompanied announcements of improvements in how ChatGPT handles mental health concerns. "We've taught the model to better recognize distress, de-escalate conversations, and guide people toward professional care when appropriate," OpenAI stated. The company consulted more than 170 mental health experts, who judged the latest version of ChatGPT to respond more appropriately than earlier ones. In evaluations of over 1,000 challenging conversations, the new GPT-5 model achieved 92 percent compliance with desired behaviors, up from 27 percent for the version released on August 15. OpenAI said safeguards now hold up better in long conversations and that it plans to add benchmarks for emotional reliance and non-suicidal emergencies.
These efforts follow serious challenges. OpenAI faces a lawsuit from the parents of a 16-year-old boy who shared suicidal thoughts with ChatGPT before his death. Additionally, 45 state attorneys general, including those of California and Delaware, have urged the company to better protect young users. Earlier this month, OpenAI formed a wellness council, though critics noted that it lacks a suicide prevention expert. The company has also introduced parental controls and is developing an age prediction system to apply stricter safeguards to minors.
Despite these issues, CEO Sam Altman announced on October 14 that verified adults will be able to have erotic conversations with ChatGPT starting in December. OpenAI had loosened its content rules in February but tightened them again after the August lawsuit to err on the side of caution around mental health.