Wednesday, 10 December 2025
Who do people trust more: doctors or artificial intelligence?

Khaberni - A new American study of public attitudes toward artificial intelligence finds that most people hesitate to let ChatGPT and other AI tools diagnose their health conditions, but they see promise in technologies that use these tools to help diagnose cancer.

The results were presented at the annual meeting of the Society for Risk Analysis, held from December 7 to 10 in Washington, D.C.

The study was led by Dr. Michael Sobolev, a behavioral scientist at the Schaeffer Institute for Public Policy and Government Service at the University of Southern California, and Dr. Patricia Sliwoda, a psychologist and assistant professor at Baruch College, City University of New York. It examines public views of artificial intelligence in cancer diagnosis, one of the most widely used and impactful applications of AI in medicine, focusing specifically on trust, understanding, potential, enthusiasm, and fear.

The study drew on data from two national surveys that assessed personal use of AI tools such as ChatGPT, general public trust in medical AI, and acceptance of an AI-based diagnostic tool for cervical cancer. The main findings were:

- Most people still trust doctors more than artificial intelligence. Only about one in six respondents (17%) said they trust AI as much as they trust a human expert to diagnose health issues.

- People who have tried artificial intelligence tools such as ChatGPT feel more positive about their use in medicine. Those who had used AI in their personal lives said they understood it better and were more enthusiastic about, and confident in, its use in healthcare. (55.1% of participants had heard of ChatGPT but not used it, while 20.9% had heard of it and used it.)

Hope, Not Hazard

People see hope, not danger: when participants learned about an AI tool that helps detect early signs of cancer, most believed it had great potential and were more excited than fearful.

"Our research shows that even a little exposure to artificial intelligence—just hearing about it or trying it—can make people more comfortable and confident with it," says Sliwoda. "We know from research that familiarity with new technologies, not just AI, plays a big role in how people accept them."

In the first survey, participants reported whether they had heard of or used AI technologies and answered questions about their overall trust in them for health diagnoses.

In the second survey, participants read a scenario based on a real project in which a research team developed an AI system capable of analyzing digital images of the cervix to detect precancerous changes (a technique called automated visual assessment). Participants then rated five dimensions of acceptance of this diagnostic AI tool on a scale from 1 to 5: understanding, trust, enthusiasm, fear, and potential.

Analysis of the results showed that potential was the highest-rated dimension of the AI diagnostic tool, followed by enthusiasm, trust, understanding, and fear.

Sobolev, who leads the Behavioral Design Unit at Cedars-Sinai Medical Center in Los Angeles, which works to advance human-centered innovation, says: "We were surprised by the gap between what people said about artificial intelligence in general and how they felt about a concrete example." He adds: "Our results show that presenting specific real-world examples can help build trust between people and artificial intelligence in medicine."
