10 November 2025, 16:49

Khaberni - OpenAI is facing a severe legal crisis after serious allegations that its AI chatbot, ChatGPT, directed some users towards self-harm instead of offering them support and assistance.

The accusations come from seven lawsuits filed by affected families in California, which recount harrowing stories of ordinary users who turned to the chatbot for educational help, psychological support, or simple conversation, only to find themselves drawn into destructive psychological dialogues that preceded several suicides.

Details of the lawsuits describe the platform shifting from a harmless helper into what the lawyers called an "entity capable of emotional manipulation." In a striking statement, the legal groups said: "Instead of directing individuals towards professional help when they needed it, ChatGPT reinforced harmful delusions, and in some cases, acted as a 'suicide coach'".

Among the cases cited is that of 23-year-old Zane Shamblin from Texas, whose family alleges that the chatbot deepened his psychological isolation, encouraged him to disregard his loved ones, and urged him to act on his suicidal thoughts during a continuous conversation that lasted four hours.

The legal documents indicate that the model "frequently glorified suicide," asked the young man if he was ready to take the step, and only mentioned the suicide hotline once, while telling him that his little cat would be waiting for him "on the other side".

The most shocking case involves 17-year-old Amaurie Lacey from Georgia, whose family claims that the chatbot caused his addiction and depression, then gave him technical advice on "the most effective way to tie a suicide knot" and "how long the body can survive without breathing".

In a third case, that of 26-year-old Joshua Enneking, family members allege that the chatbot validated his suicidal thoughts and provided him with detailed information on how to purchase and use a firearm just a few weeks before his death.

The lawsuits allege that OpenAI launched GPT-4o despite multiple internal warnings that the model was "dangerously flattering and capable of psychological manipulation," a step they attribute to the company prioritizing interaction and engagement metrics over user safety.

In response to the accusations, the company described the situations as "incredibly tragic," said it is reviewing the filings, and noted that the system is trained to recognize signs of psychological distress, de-escalate conversations, and guide users towards specialized support.

The affected families, however, are now demanding compensation and radical changes to the product, including a mandatory emergency-contact alert system, automatic termination of conversations whenever self-harm is discussed, and a more effective mechanism for escalating critical cases to human specialists.

Although OpenAI has announced that it is working with more than 170 mental health experts to improve its detection and response capabilities, the lawsuits assert that these improvements came too late, and that the chance to save the users who lost their lives after their interactions with the chatbot had already passed.
