We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right.
— Sam Altman (@sama) October 14, 2025
Now that we have…
OpenAI says more than a million people a week talk to ChatGPT about suicide. That’s 0.15% of its 800 million weekly users: a small percentage on paper, but a staggering number in reality.
The company shared the data while announcing new safety updates to its GPT-5 model, claiming the updated model now produces “desirable responses” in sensitive conversations 65% more often than before. OpenAI says it’s consulting mental health experts, embedding helplines, and improving safeguards, especially for teens.
But numbers like these point to something deeper. Technology craves resolution; mental illness resists it. Behind each “conversation” is a private storm: sometimes a joke, sometimes a cry typed at midnight.
The truth is, people aren’t turning to ChatGPT because it’s perfect. They’re turning to it because it listens. Before the stricter guardrails, it could actually comfort people who had no one else to talk to. Now, many are met with the same sterile script: “Sounds like you’re going through a lot. Here’s a hotline.”
Necessary, yes. But not enough. The real question isn’t whether AI sounds caring; it’s whether it connects someone to care. OpenAI’s numbers don’t just show scale. They show how lonely the internet has become.
