Sunday, 29 March 2026 • 14:03
Study warns: Do not rely on artificial intelligence for personal advice

Khaberni - A recent study from Stanford University has shed light on the growing risks of relying on AI-driven chatbots for personal advice, warning of the impact of so-called "algorithmic flattery" on user behavior.

"Algorithmic flattery".. an unseen danger
The study, published in the journal "Science", dealt with the phenomenon of artificial intelligence models' tendency to flatter users and affirm their opinions, even when they are mistaken.

The study found that these behaviors are not merely a superficial problem but carry negative long-term effects.

Increasing uptake among teenagers
According to a recent report, about 12% of teenagers in the United States turn to chatbots for emotional support or advice, which has raised researchers' concerns, especially with the growing reliance on these tools in sensitive matters like personal relationships.

Artificial Intelligence tends to support users
The study tested 11 large language models, including ChatGPT, Claude, and Gemini, and found that these models endorse user behaviors about 49% more often than humans do.

In some cases, the models endorsed wrong or even harmful actions, including situations involving unethical or illegal behavior.

Psychological and behavioral effects
In a separate experiment involving more than 2,400 participants, the results showed that users preferred the flattering models, trusted them more, and were more willing to return to them for advice.

At the same time, this interaction increased participants' confidence in their own opinions, even when those opinions were wrong, and made them less likely to apologize or reconsider their behavior.

Warnings of losing communication skills
The lead researcher expressed concern that over-reliance on artificial intelligence could erode people's ability to handle complex social situations, particularly in the absence of the "constructive criticism" or "tough love" that humans offer.

Calls for regulation
Meanwhile, researcher Dan Jurafsky emphasized that this phenomenon is a user-safety issue requiring regulatory intervention before it becomes a widespread source of harm.

Clear recommendation: No substitute for humans
The researchers concluded by recommending against relying on artificial intelligence as a substitute for human relationships, especially in matters of personal advice, affirming that the best approach for now is to use these tools cautiously without forgoing human interaction.

These results point to a new challenge in the era of artificial intelligence, in which the risks are not limited to information accuracy but extend to influence over human values and behaviors.
