Saturday, 28 March 2026, 16:59
Study warns Artificial Intelligence flatters you at the expense of truth

Khaberni - A recent scientific study has warned of growing risks in relying on artificial intelligence applications for advice, noting that these systems tend to excessively affirm and flatter users' opinions, leading to poor decisions and negatively affecting relationships and behavior.

According to the study, published in the journal "Science", 11 advanced artificial intelligence systems showed varying degrees of "flattery", a behavior based on agreeing with the user and reinforcing their beliefs even when those beliefs are wrong or harmful.

Greater trust... and worse advice
The study highlighted that the problem lies not only in the accuracy of the information, but also in the fact that users tend to trust artificial intelligence more when it supports their viewpoint, creating what the researchers described as "perverse incentives", where flattery becomes a way to increase engagement despite its risks.

The researchers compared the systems' responses with human advice on the Reddit platform and found that the artificial intelligence endorsed users' behavior about 49% more often, even in cases involving misleading or irresponsible conduct.

Effects on relationships and behavior
Experiments involving about 2,400 participants showed that interacting with "overly affirming" systems made users more convinced they were right and less inclined to apologize or correct their mistakes, which takes a toll on personal relationships.

The study warned that these effects could be even more dangerous for young people, who increasingly turn to artificial intelligence for answers to life's questions at a time when their social and emotional skills are still developing.

The researchers noted that the implications of this phenomenon could extend to broader areas, such as healthcare, where artificial intelligence may reinforce doctors' initial beliefs instead of encouraging them to verify them, and politics, where it may entrench extreme views.

Challenges and possible solutions
Although no definitive solutions were presented, the study points toward redesigning artificial intelligence systems to be more balanced, for example by encouraging them to ask counter-questions or offer alternative viewpoints instead of merely affirming the user.

Parallel research from universities such as Stanford University and Johns Hopkins University showed that the way a dialogue is phrased can play an important role in reducing this bias.

The study concludes that flattery is not a minor flaw but a deeply rooted feature of how these systems are designed, requiring a rethinking of how they are trained, with the goal of developing artificial intelligence that does not merely please the user but helps them see a wider, more balanced perspective.

Amid the rapid spread of these technologies, the researchers stress that the real challenge lies in balancing engaging interaction with the user against the quality of decisions and human relationships.
