Preprint Article, Version 1. Preserved in Portico. This version is not peer-reviewed.

Creating Trustworthy LLMs: Dealing with Hallucinations in Healthcare AI

Version 1 : Received: 24 October 2023 / Approved: 24 October 2023 / Online: 26 October 2023 (03:37:39 CEST)

How to cite: Ahmad, M.; Yaramic, I.; Roy, T.D. Creating Trustworthy LLMs: Dealing with Hallucinations in Healthcare AI. Preprints 2023, 2023101662. https://doi.org/10.20944/preprints202310.1662.v1

Abstract

Large language models have proliferated across multiple domains in a short period of time. There is, however, hesitation in the medical and healthcare domain towards their adoption because of issues like factuality, coherence, and hallucinations. Given the high-stakes nature of healthcare, many researchers have even cautioned against their usage until these issues are resolved. The key to the implementation and deployment of LLMs in healthcare is to make these models trustworthy, transparent (as much as possible), and explainable. In this paper we describe the key elements in creating reliable, trustworthy, and unbiased models as a necessary condition for their adoption in healthcare. Specifically, we focus on the quantification, validation, and mitigation of hallucinations in the context of healthcare. Lastly, we discuss what the future of LLMs in healthcare may look like.
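As a purely illustrative sketch of what quantifying hallucination risk could look like in practice, the code below estimates a self-consistency score by sampling repeated answers to the same clinical question and measuring pairwise agreement; unstable answers are candidates for human review. The names generate, fake_llm, and the sampling count are hypothetical stand-ins for illustration and are not part of any method described in this paper.

# Minimal sketch (illustrative only): quantifying hallucination risk via
# self-consistency. Sample several answers to the same clinical question
# and flag answers that are not reproduced across samples.
# `generate` is a hypothetical stand-in for any LLM call, not a real library API.

from difflib import SequenceMatcher
from typing import Callable, List


def consistency_score(prompt: str,
                      generate: Callable[[str], str],
                      n_samples: int = 5) -> float:
    """Return a rough 0..1 agreement score across repeated generations.

    Lower scores suggest unstable (possibly hallucinated) content that
    should be escalated for clinical review rather than surfaced directly.
    """
    samples: List[str] = [generate(prompt) for _ in range(n_samples)]
    if len(samples) < 2:
        return 1.0
    # Pairwise string similarity as a cheap proxy for semantic agreement.
    scores = []
    for i in range(len(samples)):
        for j in range(i + 1, len(samples)):
            scores.append(SequenceMatcher(None, samples[i], samples[j]).ratio())
    return sum(scores) / len(scores)


if __name__ == "__main__":
    # Toy deterministic "model" used only to make the sketch runnable.
    def fake_llm(prompt: str) -> str:
        return "Metformin is a first-line therapy for type 2 diabetes."

    score = consistency_score("What is first-line therapy for T2D?", fake_llm)
    print(f"self-consistency: {score:.2f}")  # 1.00 for the deterministic toy model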

Keywords

LLM; AI hallucination; ChatGPT

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning
