[Image: AI-generated picture of ChatGPT performing a doctor’s tasks]

A recent cross-sectional study reveals that an AI chatbot, ChatGPT, provides higher-quality and more empathetic responses to patient questions than physicians do on Reddit’s r/AskDocs forum. The study involved 195 randomly selected patient questions, and evaluators preferred the chatbot’s responses in 78.6% of cases. Chatbot responses were rated higher in both quality and empathy: the proportion rated good or very good in quality was 3.6 times higher than for physician responses, and the proportion rated empathetic or very empathetic was 9.8 times higher. These results suggest that AI assistants may play a role in drafting responses to patients, potentially reducing clinician burnout and improving patient outcomes. Further exploration and randomized trials are warranted to determine the impact of AI chatbots in clinical settings.

Study Methodology and Findings

The study, published in JAMA Internal Medicine [1], evaluated the ability of ChatGPT, an AI chatbot assistant released in November 2022, to provide high-quality, empathetic responses to patient questions. The researchers used a public, nonidentifiable database of questions from the r/AskDocs subreddit, randomly drawing 195 exchanges from October 2022 to which a verified physician had responded. Chatbot responses were generated by entering each original question into a fresh ChatGPT session on December 22 and 23, 2022 [1].
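
The responses themselves were produced by hand in the ChatGPT web interface. Purely as an illustration of that step, a scripted analogue might look like the sketch below; the OpenAI Python client, the gpt-3.5-turbo model name, and the bare single-message prompt are assumptions, not details from the study.

```python
# A rough analogue only: the study's authors pasted each question into a fresh
# ChatGPT web session by hand; this sketch uses the OpenAI Python client instead.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def draft_chatbot_reply(patient_question: str) -> str:
    """Send one patient question in an otherwise empty ("fresh") conversation."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed stand-in for the ChatGPT model of late 2022
        messages=[{"role": "user", "content": patient_question}],
    )
    return response.choices[0].message.content


# Hypothetical usage with a made-up question:
# print(draft_chatbot_reply("I've had a mild fever for three days. Should I see a doctor?"))
```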

A team of licensed healthcare professionals evaluated the anonymized and randomly ordered physician and chatbot responses in triplicate. The evaluators judged the responses on the quality of the information provided and on the empathy or bedside manner demonstrated. The results showed that chatbot responses were significantly longer than physician responses (on average 211 words vs. 52 words) and were rated higher in both quality and empathy [1].
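
To make the headline ratios concrete, the short sketch below works through the arithmetic with hypothetical rating shares; the article reports only the resulting 3.6x and 9.8x prevalence ratios, not the underlying percentages, so these numbers are placeholders.

```python
# Illustrative arithmetic only: these shares are HYPOTHETICAL placeholders,
# not the study's reported percentages.

# Hypothetical share of responses rated "good" or "very good" in quality
chatbot_quality_share = 0.79
physician_quality_share = 0.22

# Hypothetical share of responses rated "empathetic" or "very empathetic"
chatbot_empathy_share = 0.45
physician_empathy_share = 0.046

# Prevalence ratio = chatbot share divided by physician share
print(f"Quality prevalence ratio: {chatbot_quality_share / physician_quality_share:.1f}x")
print(f"Empathy prevalence ratio: {chatbot_empathy_share / physician_empathy_share:.1f}x")
# With these placeholder shares, the ratios land near the reported 3.6x and 9.8x.
```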

Experts Weigh In on the Study and AI Chatbots in Healthcare

Several experts reacted to the study and its implications for AI chatbots in healthcare [2]. Prof Martyn Thomas of Gresham College, London, warned against assuming the results apply to different questions and situations due to the small sample size and limited context. He criticized ChatGPT for lacking medical quality control and urged patients to seek authoritative advice [2].

Prof Maria Liakata of Queen Mary, University of London, noted that the evaluation criteria were vague and that language proficiency could influence perceived fluency and empathy. She also pointed to limitations such as questions taken out of context and the absence of patient evaluations, and said these issues would need to be addressed before AI assistants could be adopted in clinical practice [2].

Benefits and Limitations of AI Chatbots in Patient Communication

Prof Nello Cristianini of the University of Bath described the study as impressive but limited, arguing that text-only interaction is an unnatural setting for human doctors. He envisions AI tools assisting doctors rather than replacing human-to-human interaction [2]. Dr. Mhairi Aitken of the Alan Turing Institute emphasized the importance of considering actual care settings and diverse patient perspectives, noting that human doctors can adjust their approach based on social cues, a capability chatbots currently lack [2].

Prof Anthony Cohn of the University of Leeds said he was unsurprised by ChatGPT’s empathetic responses but warned against relying on chatbots’ factual claims without verification by a medical professional [2]. Dr. Heba Sailem of King’s College London found the results encouraging but advocated for training specialized large language models on medical knowledge to improve communication channels between patients and healthcare professionals [2].

Potential Impact on Healthcare and Physician Burnout

The authors of an accompanying editorial in JAMA Internal Medicine suggest that AI systems could take over some of the laborious tasks of modern medicine, allowing physicians to focus on treating patients [3]. However, two of the authors disclosed financial ties to biopharmaceutical and technology companies, including Lifelink, a healthcare chatbot company [3].

While the study demonstrates the potential for AI chatbot assistants like ChatGPT to generate high-quality, empathetic responses to patient questions, it is essential to consider its limitations and how such tools would perform in actual care settings with diverse patients. Further exploration and randomized trials are necessary to fully understand AI chatbots’ potential benefits and drawbacks in clinical practice.

Sources:

[1] JAMA Internal Medicine
[2] Science Media Center
[3] Physician’s Weekly