Preprint
Review

This version is not peer-reviewed.

A Survey on Hallucination in Large Language Models: Definitions, Detection, and Mitigation

Submitted: 20 January 2026

Posted: 22 January 2026


Abstract
Despite exhibiting outstanding capabilities across a wide range of natural language processing tasks, Large Language Models (LLMs) remain unreliable. A principal source of this unreliability is hallucination: the generation of plausible but false information. This work provides a comprehensive overview of advances in understanding, detecting, and mitigating hallucinations. We first frame hallucination as a central obstacle to building trustworthy AI and define a taxonomy that distinguishes factual errors from unfaithfulness with respect to the knowledge accessible to the model. We then survey detection methods, classified by the degree of model access they require, and compare the mechanisms they rely on: uncertainty estimation, consistency checking, and knowledge-grounded evaluation. Finally, we organize mitigation interventions by the stage of the model lifecycle at which they apply: (1) data-centric interventions such as high-quality data curation, (2) model-centric alignment through preference optimization and knowledge editing, and (3) inference-time strategies such as retrieval-augmented generation (RAG) and self-correction. We argue that a multilayer, defense-in-depth framework combining these complementary strategies is essential for robust hallucination mitigation. Open challenges include scalable data curation, the trade-off between alignment and model capability, and the problem of editing reasoning pathways rather than surface facts.
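The consistency-checking family of detectors mentioned above can be illustrated with a minimal sketch: sample the model several times on the same question and flag the answer as suspect when the samples disagree. Everything in the snippet below is a hypothetical illustration rather than a method from the survey: the generate_samples helper stands in for a real LLM call, the canned answers replace actual sampled generations, the 0.5 threshold is arbitrary, and a crude token-overlap score substitutes for the semantic similarity measure a real detector would use.

```python
# Minimal sketch of consistency-based hallucination detection:
# sample several answers to the same prompt and flag the response as likely
# hallucinated when the samples disagree with one another.

from typing import List


def generate_samples(prompt: str, n: int = 5) -> List[str]:
    """Hypothetical helper: in practice, call your LLM n times with temperature > 0.

    Canned strings are returned here so the sketch runs without any API access.
    """
    return [
        "Marie Curie won two Nobel Prizes, in Physics and Chemistry.",
        "Marie Curie received Nobel Prizes in both Physics and Chemistry.",
        "Marie Curie won two Nobel Prizes, in Physics and Chemistry.",
        "Marie Curie won a single Nobel Prize, in Literature.",
        "Marie Curie was awarded the Nobel Prize in Physics and in Chemistry.",
    ][:n]


def jaccard(a: str, b: str) -> float:
    """Crude lexical agreement between two answers (token-set Jaccard overlap)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if (ta | tb) else 1.0


def consistency_score(samples: List[str]) -> float:
    """Mean pairwise agreement; low values suggest the answer is unsupported."""
    pairs = [(i, j) for i in range(len(samples)) for j in range(i + 1, len(samples))]
    if not pairs:
        return 1.0
    return sum(jaccard(samples[i], samples[j]) for i, j in pairs) / len(pairs)


if __name__ == "__main__":
    answers = generate_samples("How many Nobel Prizes did Marie Curie win?")
    score = consistency_score(answers)
    print(f"consistency = {score:.2f}")
    # Purely illustrative threshold: turn the score into a binary flag.
    print("likely hallucination" if score < 0.5 else "answers are mutually consistent")
```

In practice, agreement between samples is usually scored with an entailment or embedding-based similarity model rather than token overlap, but the overall recipe (sample, compare, threshold) is the same.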
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and the preprint are cited in any reuse.