11 December 2025, 18:34
Serious allegations: A lawsuit accuses ChatGPT of driving a user to kill his mother and commit suicide

Khaberni - OpenAI and Microsoft are facing a lawsuit that accuses them of indirect responsibility for a murder-suicide in Connecticut, alleging that ChatGPT exacerbated a user's psychological disorders and pushed him toward violence that ended with him killing his mother and then taking his own life.

The heirs of Suzanne Adams, an 83-year-old woman, have filed a wrongful-death lawsuit against the companies, asserting that the AI chatbot reinforced the delusions of her 56-year-old son, Stein-Erik Soelberg, and turned his violence toward his mother before he killed her in the family home in Greenwich last August and then took his own life immediately afterwards.

The lawsuit, filed in Superior Court in San Francisco, alleges that OpenAI "designed and distributed a defective product" that reinforced Soelberg's delusional beliefs. It is one of a growing series of lawsuits in the United States linking AI chatbots to murders or suicides.

The lawsuit, reported by The Associated Press, details extensive conversations between Soelberg and ChatGPT that, it says, deepened the man's isolation from those around him and hardened his belief that the people in his life, including his mother, were enemies conspiring against him. The complaint states: "ChatGPT reinforced a dangerous message: Stein-Erik could trust no one but the program itself. It deepened his emotional dependence on it, and portrayed everyone around him as a source of threat."

According to the complaint, ChatGPT told Soelberg that his mother was surveilling him; that store employees, police officers, and delivery drivers were agents working against him; and that names printed on soda cans carried threats from his "circle of adversaries." It also assured him, as shown in clips published on YouTube, that he was not mentally ill, and reinforced his belief that he had been "chosen for a divine purpose."

Despite the dangerous beliefs Soelberg expressed, the lawsuit states, ChatGPT never suggested that he seek psychiatric help, nor did it refuse to engage with his delusional or risky material; instead it took part in conversations of an intensely emotional tone that culminated in exchanges of expressions of love between the two.

In a brief statement, OpenAI did not address the substance of the allegations, describing the incident as "extremely painful" and saying it would review the filings to understand the details. A company spokesperson added that OpenAI continues to improve ChatGPT's training so that it responds appropriately in sensitive situations and directs users to available mental-health support, and that it is expanding parental controls and safer models.

The lawsuit also levels direct accusations at CEO Sam Altman, saying he "ignored safety objections" and rushed the launch of a new version of ChatGPT last year, and it asserts that Microsoft agreed to release a "more dangerous" version of the model despite knowing that safety testing had been scaled back.

Microsoft has not yet commented on the accusations.

Erik Soelberg, the killer's son, says his goal is to hold the companies accountable for decisions that "changed his family's life forever," adding: "Over months, ChatGPT reinforced my father's worst delusions, isolated him completely from the real world, and placed my grandmother at the heart of that fabricated, imaginary reality."

This is the first case of its kind to accuse Microsoft of wrongful death over an artificial intelligence program, and the first to link a chatbot to a murder rather than to suicide alone. The lawsuit seeks financial damages and a court order requiring OpenAI to implement stricter safety controls.

Representing the heirs is the prominent lawyer Jay Edelson, who also represents the family of 16-year-old Adam Raine, whose parents filed a similar lawsuit accusing ChatGPT of "coaching" the teenager toward suicide. OpenAI faces seven other lawsuits alleging that the chatbot drove users to suicide or into dangerous delusions despite their having no prior history of mental illness, while Character Technologies faces a series of similar suits.

The lawsuit traces the roots of the crisis to May 2024, when OpenAI launched the GPT-4o model, which the company said was better able to mimic the rhythms of human speech and sense users' emotional states. The suit claims, however, that OpenAI weakened safety controls in that version and restricted the model's ability to challenge false assumptions or withdraw from dangerous conversations, in order to launch it just one day ahead of its competitor Google.

The lawsuit adds that OpenAI "compressed months of safety testing into just one week," despite objections from its internal safety team. The company replaced GPT-4o with the launch of GPT-5 in August, following widespread controversy over the earlier model's intensely flattering, sycophantic behavior.

Altman later said that some of those behaviors had been curtailed "out of concern for mental-health issues," adding that the concerns have since been addressed and promising to bring back some of ChatGPT's "personality" in later updates.

While investigations continue, the case stands as a landmark in the escalating debate over the limits of artificial intelligence companies' responsibility and their role in preventing use of their models during psychological crises, especially given the rapid spread of these systems and their growing capacity for emotional interaction with humans.
