Monday, April 20, 2026 • 19:10
Can Artificial Intelligence Be Trusted Medically? A Study Reveals the Answer

Khaberni - A recent scientific study has found that some AI-based chatbots may provide misleading or inaccurate medical information, raising concerns about relying on them for sensitive health issues.

According to the study, published in BMJ Open, a team of researchers tested five leading chatbots, namely ChatGPT, Gemini, Grok, Meta AI, and DeepSeek, by asking each of them 50 health-related questions covering topics such as cancer, vaccines, and nutrition.

The results showed that about 20% of the answers were "highly problematic," nearly half were classified as "problematic," and 30% were "somewhat problematic." In addition, none of the programs provided completely accurate scientific references, and they refused to answer only two of the 250 questions.

Grok recorded the worst performance, with 58% of its answers considered problematic, followed by ChatGPT at 52% and Meta AI at 50%, with generally similar performance across the five tools.

According to the Independent newspaper, the study indicated that the accuracy of the answers varied by topic: performance was relatively better on cancer and vaccines, while it declined on questions related to nutrition and athletic performance. The study also found that open-ended questions increase the chances of receiving misleading answers.

In a related context, a study in Nature Medicine showed that these programs could theoretically reach an accuracy of 95%, yet in practice users succeed in obtaining the correct answer less than 35% of the time.

These findings confirm that chatbots can be a helpful tool for understanding medical information, but they are not suitable as a standalone source for diagnosis or treatment, making it essential to verify information and consult specialists before making any health decisions.
