

Poster

Position: Retrieval-augmented systems can be dangerous medical communicators

Lionel Wong · Ayman Ali · Raymond M Xiong · Shannon Shen · Yoon Kim · Monica Agrawal

East Exhibition Hall A-B #E-501
Wed 16 Jul 11 a.m. PDT — 1:30 p.m. PDT

Abstract:

Patients have long sought health information online, and increasingly, they are turning to generative AI to answer their health-related queries. Given the high stakes of the medical domain, techniques like retrieval-augmented generation and citation grounding have been widely promoted as methods to reduce hallucinations and improve the accuracy of AI-generated responses, and they have been broadly adopted into search engines. However, we argue that even when these methods produce literally accurate content drawn from source documents sans hallucinations, they can still be highly misleading. Patients may derive significantly different interpretations from AI-generated outputs than they would from reading the original source material, let alone consulting a knowledgeable clinician. Through a large-scale query analysis on topics including disputed diagnoses and procedure safety, we support our argument with quantitative and qualitative evidence of the suboptimal answers produced by current systems. In particular, we highlight how these models tend to decontextualize facts, omit critical relevant sources, and reinforce patient misconceptions or biases. We propose a series of recommendations, such as the incorporation of communication pragmatics and enhanced comprehension of source documents, that could help mitigate these issues and extend beyond the medical domain.

Lay Summary:

For decades, patients have looked online for health information; increasingly, they are using generative AI to answer health queries, e.g., via chatbots or AI-powered search results. Given the importance of providing accurate medical information, large language model developers often incorporate retrieval-augmented generation (RAG), which lets the AI’s response draw from and cite trusted websites, decreasing the chance that the model makes up information. However, in this paper, we argue that RAG can lead to failure modes in which responses are misleading even when they are factually accurate. While each individual sentence may be factual, the generated response can take information out of context, omit important relevant information, and reinforce patient misconceptions. To substantiate our concern about these failure modes, we provide quantitative and qualitative evidence from current real-world systems on realistic patient searches. We propose a series of recommendations, from changing the underlying algorithms to changing how responses are displayed, that could help address these issues.
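To make the RAG setup described above concrete, here is a minimal, self-contained Python sketch. It is not the system studied in the paper, and every name in it (CORPUS, retrieve, generate_answer, source_a, etc.) is hypothetical: a toy retriever ranks source passages by keyword overlap, and a stand-in for the generation step quotes the first sentence of each retrieved passage with a citation. Even in this toy, the answer is literally grounded in the sources while dropping the qualifying context that changes its interpretation.

# Illustrative sketch only: a toy retrieve-then-cite pipeline, not the paper's system.
# All names (CORPUS, retrieve, generate_answer, source_a, ...) are hypothetical.

CORPUS = {
    "source_a": ("Procedure X is generally safe. "
                 "However, risks are substantially higher for patients with condition Y."),
    "source_b": ("Diagnosis Z remains disputed; several guidelines recommend "
                 "confirmatory testing before starting treatment."),
}

def retrieve(query, corpus, k=2):
    """Rank documents by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = sorted(
        ((len(terms & set(text.lower().split())), doc_id, text)
         for doc_id, text in corpus.items()),
        reverse=True,
    )
    return [(doc_id, text) for _, doc_id, text in scored[:k]]

def generate_answer(query, snippets):
    """Stand-in for the LLM call: quote the first sentence of each snippet, with a citation.

    A real system would prompt a language model with the query and the snippets;
    this stub mimics decontextualized extraction by dropping everything after
    the first sentence of each source.
    """
    lines = []
    for doc_id, text in snippets:
        first_sentence = text.split(". ")[0].rstrip(".")
        lines.append(f"{first_sentence}. [{doc_id}]")
    return " ".join(lines)

if __name__ == "__main__":
    query = "Is procedure X safe?"
    print(generate_answer(query, retrieve(query, CORPUS)))
    # Prints "Procedure X is generally safe. [source_a] ..." -- every sentence is
    # grounded in a source, yet the caveat about condition Y is gone.

The stub is only meant to show that sentence-level grounding does not by itself guarantee faithful communication, which is the gap the paper documents with evidence from real-world systems.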
