5/14/2026 at 1:36:30 AM
> Every week, somewhere between 1.2 and 3 million ChatGPT users, roughly the population of a small country, show signals of psychosis, mania, suicidal planning, or unhealthy emotional dependence on the model.

> Why is mental-health crisis not a gating category, the kind where the conversation stops, full stop, and the user is routed to a human?
Well, obviously “routing to a human” is not feasible at that scale. And cold exiting the conversation is probably worse for the user than answering carefully.
by nojs
5/14/2026 at 1:47:44 AM
I don't think it's obvious that routing to a human is infeasible. I'm sure many local authorities, health agencies, and non-profits would be willing to receive those referrals. Additionally, many of the users are likely the same week over week, so giving them long-term care would reduce the total volume. Finally, there is a wide gap between psychosis and emotional dependence, so some triage could ensure that those most in need get human intervention.
by hx8
5/14/2026 at 1:43:57 AM
Tech companies will pull trillions of dollars out of their asses when the problem is boosting ad revenue or automating people out of a job. But when asked to deal with the crisis they invented and dumped on society, the answer is “that’s impossible, it doesn’t scale.”
by Gigachad
5/14/2026 at 1:51:06 AM
Figure a "mental health crisis" human conversation takes 30 minutes. Three million incidents per week would require 37,500 qualified mental health counselors on the phones, each working a 40-hour week. Figure they make $75k/year each. You're now spending roughly $3 billion per year on crisis response, and you're employing something like 10% of all the mental health counselors in the US. And all you're providing is 30-minute chats.
by CobrastanJorji
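The staffing estimate above can be checked as a quick back-of-envelope calculation, using only the assumptions stated in the comment (3M incidents/week, 30 minutes each, 40-hour weeks, $75k/year salaries):

```python
# Back-of-envelope check of the staffing estimate, using the
# comment's own assumptions; none of these figures are official.
incidents_per_week = 3_000_000
hours_per_incident = 0.5          # 30-minute conversations
counselor_hours_per_week = 40
salary_per_year = 75_000

hours_needed = incidents_per_week * hours_per_incident     # 1,500,000 hours/week
counselors = hours_needed / counselor_hours_per_week       # 37,500 counselors
annual_cost = counselors * salary_per_year                 # ~$2.8B/year

print(counselors)    # 37500.0
print(annual_cost)   # 2812500000.0
```

The salary-only total comes to about $2.8B, which the comment rounds up to $3B; overhead, training, and 24/7 shift coverage would push the real figure higher.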
5/14/2026 at 2:18:40 AM
> You're now spending $3 billion per year on crisis response
Honestly? That's really affordable[0]. It would be cheap even if these numbers were just for the US, and they appear to be global figures. We spend $2bn/yr alone on "BREASTFEEDING PEER COUNSELORS AND BONUSES"[1]. And let's be serious: even the article that OpenAI published says this is a small portion of their users, so it doesn't "need to scale"; the scale is relatively small. But just because it is small doesn't mean it is unimportant. $3bn/yr is a lot of money for a person, but it is nothing for a government.
Edit: OpenAI's last funding round was $122bn[2], and the same article says they are generating $2bn in revenue per month. While that's not profit, it's worth noting that what you say "doesn't scale" would cost about 12% of the revenue of something that does scale, from a single company. And mind you, if we implemented what you're proposing it would be available to all the AI companies and more, making it a smaller drop in the bucket, not a larger one.
[0] Not to mention that better mental health care services will produce savings elsewhere. It's always far more expensive to fix a broken pipe that's flooding your house than to fix a pipe with a small crack. "Don't fix what ain't broken" is applied too broadly: maintenance is always cheaper than repair, but people just can't seem to grasp this.
[1] https://www.usaspending.gov/federal_account/012-3510
[2] https://openai.com/index/accelerating-the-next-phase-ai/
by godelski
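The "about 12%" figure in the comment above follows directly from the numbers it cites ($2bn/month revenue, $3bn/yr cost), as a quick sanity check shows:

```python
# Sanity check on the "~12% of revenue" claim, using the figures
# quoted in the comment (not independently verified here).
monthly_revenue = 2_000_000_000
annual_revenue = monthly_revenue * 12      # $24bn/year
crisis_cost = 3_000_000_000                # estimated $3bn/year

share = crisis_cost / annual_revenue
print(round(share * 100, 1))  # 12.5
```

So the cost is 12.5% of the quoted revenue run rate, consistent with the comment's rounded "about 12%."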
5/14/2026 at 1:56:06 AM
Mark Zuckerberg can spend $80B on the failed metaverse experiment, but can't spare relative pocket change to address the psychosis his products caused.
by Gigachad
5/14/2026 at 2:54:47 AM
> is not feasible at that scale
I want to use an analogy here. The same arguments are often made about cleaning up environmental damage. Either make the companies doing the polluting pay the costs themselves, or, if we care so much about their profitability, subsidize them by paying for the cleanup out of taxes. Doing nothing is a worse form of subsidy: it not only costs more (in literal dollars), it shifts those costs onto the people least able to pay them. The problem is that you're treating "doing nothing" as having no cost. It has a high cost; the cost is just highly distributed.

So if it is not scalable, why subsidize them? This is literally a tragedy of the commons. Personally, I'm in favor of making the people who make a mess clean up that mess. I really don't understand why this is such a contentious opinion.
by godelski
5/14/2026 at 1:43:48 AM
"Routed to a human" is what the suicide hotline numbers do. OpenAI employees are neither trained nor credible to do that stuff.by concinds
5/14/2026 at 7:48:11 AM
And what will a human do better? Why will the human care? Who will pay the human?
by GardenLetter27
5/14/2026 at 1:58:11 AM
Well, then maybe you can't scale it as a free service with self-serve signups. Maybe you need to gate who is allowed to use it and pace how intensely they can engage. Or maybe you need to look for other solutions.

Yielding to "not feasible at scale" is exactly how we ended up with many of today's most pressing and nearly intractable problems, from social media's harms to people and society straight through to enshittification and non-repairability.
by swatcoder
5/14/2026 at 2:38:22 AM
> ...straight through to enshittification and non-repairability.

Funny, as "enshittification" was the topic of a 99% Invisible podcast just a few days ago, and I was also listening to the new Stewart Brand book that Stripe published. I fixed a Norwegian desk I bought a decade ago on Valencia. Happily not feasible at scale, but neither was how I broke it :)
by The_Blade