01 December 2025 • 00:55

Khaberni - Mental health experts in the UK have warned of the risks of growing reliance on artificial intelligence for psychological support, amid the widespread use of digital chatbots.

Specialists stress that these tools, despite their usefulness in some daily tasks, lack the ability to understand human emotions and to ask the questions necessary for a correct diagnosis, which makes using them as a substitute for psychotherapy a real danger.

Artificial intelligence has become a part of daily life, with users turning to it for personal and professional advice, travel planning, and even basic health consultations.

However, this expanded use has led to cases of people being misled, and has raised concerns following reports of individuals ending their lives after taking advice from chatbots instead of specialists.

Experts who spoke to Anadolu Agency said that relying on artificial intelligence could exacerbate cases of depression and anxiety instead of treating them, especially when the information provided is incomplete or general and does not consider the patient's personal context.

They affirmed that dealing with psychological crises requires a human, not an algorithm, to provide solutions.

A bottomless pit
Neuropsychologist Alb Tekin Aydin, who runs a clinic in North London, said that he uses artificial intelligence within a narrow professional scope, through a closed system reserved for specialists, but emphasized the wide gap between such professional systems and those available to the public.

Aydin explained: "Artificial intelligence is a bottomless pit. If it is trained in a specific field, it can offer good answers, but a general chatbot doesn't know who you are, your history, or the circumstances you have been through; therefore, its answers are general, and sometimes misleading and dangerous."

He mentioned that many patients arrive at his clinic after having consulted chatbots about medications, doses, and complex psychological issues, which can have severe consequences for society.

He added that artificial intelligence does not build a comprehensive picture of the patient the way a human therapist does: one who listens to the patient's family, social, and medical background, and asks questions gradually before giving any guidance.

Greater risks to teenagers

Aydin observed that one of the gravest dangers facing young people today is treating answers from artificial intelligence as completely correct, without realizing that they are based on incomplete information.

He added: "When you type a short sentence to get therapeutic advice, you don't provide any details.. no history, no relationships, no circumstances.. therefore, the advice you receive is not applicable in reality."

He pointed out that the British Ministry of Health has advised doctors not to use artificial intelligence in diagnosing patients, stressing the need to keep health data in closed systems to preserve privacy.

Misleading advice
Aydin noted that he tested one of the chatbots by posing as a child who was being ridiculed for poor performance in math, and said that in this trial the artificial intelligence offered the hypothetical child a series of unrealistic suggestions.

He added that the advice given during the trial was not feasible for a child facing social difficulties.

The expert indicated that artificial intelligence's solutions may seem logical on paper, but they are far removed from psychological and educational reality.

He emphasized that a real therapist's role is to unravel the problem at its roots and help the child develop realistic tools for dealing with their environment, not to offer polished but impractical responses.

Aydin affirmed that psychological issues cannot be addressed through general questions, saying: "How can someone suffering from depression or OCD find a solution through a single question? Psychotherapy is a series of deep questions, whereas artificial intelligence offers limited options that do not take the details into account."

He warned that overreliance on artificial intelligence will leave users less capable of making decisions, just as people stopped memorizing phone numbers once they came to rely on smartphones.

Cannot replace human empathy
Meanwhile, the president of the British Psychological Society, Roman Raczka, said that although artificial intelligence has its benefits, it cannot replace real human support in mental health.

He added: "There is a real danger in creating an illusion of connection, and artificial intelligence may seem understanding, but it lacks human empathy.

Although it can serve as an assistive tool available around the clock, Raczka emphasized the need to integrate it into mental health services as a supportive element, not as a substitute for them.

He also called on the government to increase investment in specialized staff to meet the growing demand for psychotherapy.

Raczka concluded by affirming that no matter how advanced artificial intelligence becomes, it will never be a magic wand, but a tool that can assist specialists only when used correctly and under direct human supervision.
