Preprint
Article

This version is not peer-reviewed.

Language Without Propositions: Why Large Language Models Hallucinate

Submitted: 16 January 2026

Posted: 20 January 2026


Abstract
This paper defends the thesis that LLM hallucinations are best explained as a truth representation problem: Current models lack an internal representation of propositions as truth-bearers, so truth and falsity cannot constrain generation in the way factual discourse requires. It begins by surveying leading explanations—computational limits on self-verification, deficiencies in training data as truth sources, and architectural factors—and argues that they converge on the same underlying representational deficit. Next, it reconstructs the philosophical background of current LLM design, showing how optimization for fluent continuation aligns with coherence-style evaluation and with a broadly structuralist, relational semantics, before turning to David Chalmers’s recent attempt to secure propositional interpretability by drawing on Davidson/Lewis-style radical interpretation and by locating propositional content in “middle-layer” structures; it argues that this approach downplays the ubiquity of hallucination and inherits instability from post-training edits. Finally, the paper offers a positive proposal: Atomic propositions should be represented in the basic vector layer, reviving a logical-atomist program as a principled route to reducing hallucination.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.