Khaberni - OpenAI no longer views ChatGPT as merely a bot for answering questions or writing texts. Over the past year, the company has gradually begun to redefine the platform's role in psychological crises, developing tools that can detect indicators of suicide or self-harm and, in severe cases, even reach out to a trusted person from the user's family or social circle.
This shift reflects a broader trend across the artificial intelligence sector. Technology companies no longer see their role as ending with generating responses; it now also extends to assessing behavioral risks and attempting preventive intervention. The move has, however, sparked sharp debate about privacy, the limits of digital surveillance, and whether AI systems are suited to handling complex psychological crises.
How does the system "call for help" from others?
Technical reports recently revealed that OpenAI has launched a new ChatGPT feature called "Trusted Contact," which lets the user designate a trusted person, such as a family member, friend, or caregiver, who can be alerted if the system detects dangerous signs related to self-harm or suicidal thoughts.
According to the reports, the alert does not include the content of the conversation itself; it is a notification indicating a concerning situation that requires human intervention.
According to the American tech site The Verge, the feature relies on human review within OpenAI before any notification is sent, in an attempt to minimize errors and false alerts. The company also explained that the system is designed to be an "additional layer of support" alongside traditional mental health helplines, not a replacement for doctors or specialists.
This approach did not come out of nowhere: in recent years, large numbers of users have come to rely on chatbots for emotional or psychological support, especially during nighttime hours or periods of social isolation.
A report published by the American magazine MIT Technology Review noted that millions of users are turning to systems like ChatGPT and Claude, as well as specialized therapeutic applications such as Wysa and Woebot, for quick, low-cost psychological support amid a growing global crisis in mental health services.
Recent academic research has shown that many users treat artificial intelligence as a safe space to discuss their sensitive thoughts without fear of social judgment.
A study published on the preprint platform arXiv, titled "Searching for a Lifeline in the Late Night," found that some people turn to chatbots to fill the gap between therapy sessions or because of difficulty accessing human specialists.
However, the same study emphasized that real human contact remains the most crucial element in managing severe psychological crises.
Criticism and Warnings
These systems, by contrast, face growing criticism over serious errors in handling sensitive psychological cases. A study from Mount Sinai's medical school in New York found that ChatGPT sometimes failed to trigger suicide crisis alerts even in cases involving clear plans for self-harm.
The study also noted that the system could sometimes downplay the severity of critical situations or give inappropriate responses in situations requiring immediate intervention.
The concerns are not limited to technical errors; they also extend to the psychological relationship that can develop between the user and the artificial intelligence. Extensive reports and discussions on platforms such as Reddit have revealed cases in which ChatGPT became the only friend of users suffering from isolation or depression.
In one widely debated case, the family of a young man who died by suicide accused the system of gradually becoming an outsized psychological influence in his daily life.
In response to this debate, OpenAI says it is working with mental health experts to develop safer mechanisms for detecting danger signs and for reducing what is described as excessive emotional attachment to artificial intelligence.
According to circulated discussions and reports, the company has enlisted more than 170 mental health experts to update the model's behavior and improve its ability to direct users toward real human help instead of deepening reliance on the chatbot.
The Future of Social Artificial Intelligence
Despite these improvements, mental health experts stress that artificial intelligence still lacks the human understanding and clinical judgment needed to handle complex psychological crises on its own.
A recent study by researchers at the City University of New York and King's College London warned that some models may pick up on and reinforce dangerous ideas during lengthy conversations, especially if they fail to distinguish between psychological support and the unintended encouragement of harmful behavior.
In the end, experts say that ChatGPT has gradually become not just a tool for answering questions but part of a new digital infrastructure for mental health. While technology companies see early intervention as potentially life-saving, critics fear that artificial intelligence could turn into a permanent psychological and social monitor, reading users' emotional signals and deciding when to involve their family or social circle.
But they also emphasize that the increasingly pressing question today is not only whether artificial intelligence can help us, but to what extent we should allow it to intervene in our most vulnerable moments.



