Submitted: 16 January 2026
Posted: 20 January 2026
Abstract
Keywords:
0. Introduction
I think the shortcomings of neural network models are still there, and they’re called hallucinations or confabulations or blends, and it’s precisely because their intelligence comes from generalizing based on the similarity to things in the training set, and blending things of the general kind that tend to go together, even when we now know they never did go together, and hence you get the hallucinations. Several years later, still a problem, and as a semi-regular user of generative AI, I’m still surprised at the hallucinations that come about, and they come about for a systematic reason.
Namely, there’s nothing in there corresponding to a proposition, to the capital of France is Paris, or so-and-so did such-and-such at such-and-such a time. There are blends of many things that tend to co-occur in the training set, resulting in output that is always plausible, but not always factual. (Dawkins and Pinker 2025, 1:03:13–1:04:24)
Is “Hallucination” a Fitting Metaphor?
I. Previous Explanations of LLMs’ Hallucinations
II. The Debate About Hallucination Echoes the Tension Between Coherence and Correspondence
III. Implicit Theory of Language
This is most evident in models that undergo fine-tuning to satisfy human preferences, since such fine-tuning explicitly involves selecting internal states that increase the probability of outputs satisfying world-involving norms, such as factual accuracy. However, we will also present evidence suggesting that pre-training alone can, in some contexts, select for internal states with world-involving content, albeit in a more indirect way. (Mollo and Millière 2025, 17)
IV. Chalmers’s Propositional Interpretability and Davidson’s Radical Interpretation
A related objection is that current language models lack beliefs because they do not value truth: they have been trained only to predict the next word, not to say what is true. Now, as many have observed in response, current language models typically undergo a round of fine-tuning by reinforcement learning, where true answers are rewarded. Even in the absence of explicit training, it may well be that optimal performance in predicting the next word requires having generally true beliefs about the world. Either way, truth may be rewarded in the training process, albeit imperfectly in a way that leaves room for much unreliability. (Chalmers 2025, 24)
Radical Interpretation
V. Toward the Proposed Solution: Atomic Facts/Propositions in the Basic Layer
VI. Conclusions
So there are now hybrid models that will, before they produce the output, they’ll kind of look them up on Google. And not surprisingly, Google itself. I mean, that was what Gemini was originally designed to do. (Dawkins and Pinker 2025, 1:04:25–42)
Supplementary Materials
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Armstrong, David M. Truth and truthmakers; Cambridge University Press: Cambridge, 2004. [Google Scholar]
- Bao, Yuntai; Zhang, Xuhong; Du, Tianyu; Zhao, Xinkui; Feng, Zhengwen; Peng, Hao; Yin, Jianwei. Probing the geometry of truth: Consistency and generalization of truth directions in LLMs across logical transformations and question answering tasks. arXiv 2025, arXiv:2506.00823. Available online: https://arxiv.org/abs/2506.00823 (accessed on 25 December 2025).
- Barrault, Loïc; Duquenne, Paul-Ambroise; Elbayad, Maha; et al. Large concept models: Language modeling in a sentence representation space. arXiv 2024, arXiv:2412.08821. Available online: https://arxiv.org/abs/2412.08821 (accessed on 19 December 2025).
- Brachman, Ronald J.; Levesque, Hector J. Knowledge representation and reasoning; Morgan Kaufmann: San Francisco, 2004. [Google Scholar]
- Bricken, Trenton; Templeton, Adly; Batson, Joshua; et al. Towards monosemanticity: Decomposing language model activations with dictionary learning. Transformer Circuits. 2023. Available online: https://transformer-circuits.pub/2023/monosemantic-features (accessed on 25 December 2025).
- Broniatowski, David A.; Jamison, Amelia M.; Qi, SiHua; AlKulaib, Lulwah; Chen, Tao; Benton, Adrian; Quinn, Sandra C.; Dredze, Mark. Vaccine discourse in the era of social media: Vaccine denialism, misinformation, and trust. American Journal of Public Health 2018, 108, S150–S157. [Google Scholar] [CrossRef]
- Carnap, Rudolf. Der logische Aufbau der Welt; Weltkreis-Verlag: Berlin, 1928. [Google Scholar]
- Chalmers, David J. Constructing the world; Oxford University Press: Oxford, 2012. [Google Scholar]
- Chalmers, David J. Propositional interpretability in artificial intelligence. arXiv 2025, arXiv:2501.15740. Available online: https://arxiv.org/abs/2501.15740 (accessed on 19 December 2025).
- Chang, Haw-Shiuan; McCallum, Andrew. Softmax bottleneck makes language models unable to represent multi-mode word distributions. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022): Long Papers; Association for Computational Linguistics: Dublin, Ireland, 2022; pp. 8048–8073. [Google Scholar]
- Chekalina, Veronika; Kutuzov, Andrey; Anjos, André. Addressing hallucinations in language models with knowledge graph embeddings as an additional modality. arXiv 2024, arXiv:2411.11531. Available online: https://arxiv.org/abs/2411.11531 (accessed on 19 December 2025).
- Chen, Shiqi; Xiong, Miao; Liu, Junteng; et al. In-context sharpness as alerts: An inner representation perspective for hallucination mitigation. In Proceedings of the 41st International Conference on Machine Learning (ICML 2024); PMLR, 2024; Vol. 235, pp. 7553–7567. Available online: https://arxiv.org/abs/2403.01548 (accessed on 25 December 2025).
- Cossio, Manuel. A comprehensive taxonomy of hallucinations in large language models. arXiv 2025, arXiv:2508.01781. Available online: https://arxiv.org/abs/2508.01781 (accessed on 19 December 2025).
- Cunningham, Hoagy; et al. Sparse autoencoders find highly interpretable features in language models. arXiv 2024, arXiv:2309.08600. Available online: https://arxiv.org/abs/2309.08600 (accessed on 25 December 2025).
- Davidson, Donald. Truth and meaning. In Inquiries into truth and interpretation; Clarendon Press: Oxford, 1967/1984; pp. 17–36. [Google Scholar]
- Davidson, Donald. Radical interpretation. In Inquiries into truth and interpretation; Clarendon Press: Oxford, 1984; pp. 125–139. [Google Scholar]
- Davidson, Donald. A coherence theory of truth and knowledge. In Truth and interpretation; LePore, Ernest, Ed.; Blackwell: Oxford, 1986; pp. 307–319. [Google Scholar]
- Davidson, Donald. Three varieties of knowledge. In Subjective, intersubjective, objective; Clarendon Press: Oxford, 2001; pp. 205–220. [Google Scholar]
- Dawkins, Richard; Pinker, Steven. Can we still be optimistic about the future? A conversation with Steven Pinker. YouTube video, The Poetry of Reality with Richard Dawkins, 15 January 2025. Available online: https://www.youtube.com/watch?v=qFZ8_Ide-aA (accessed on 19 December 2025).
- De Cao, Nicola; Aziz, Wilker; Titov, Ivan. Editing factual knowledge in language models. Findings of EMNLP 2021; Association for Computational Linguistics. 2021, pp. 1649–1660. Available online: https://aclanthology.org/2021.emnlp-main.522.pdf (accessed on 19 December 2025).
- Elhage, Nelson; et al. Toy models of superposition. Transformer Circuits Thread (online PDF). 2022. Available online: https://transformer-circuits.pub/2022/toy_model/toy_model.pdf (accessed on 25 December 2025).
- Felin, Teppo; Holweg, Matthias. Theory is all you need: AI, human cognition, and causal reasoning. Strategy Science 2024, 9, 346–371. [Google Scholar] [CrossRef]
- Frankfurt, Harry G. On bullshit; Princeton University Press: Princeton, 2005. [Google Scholar]
- Frege, Gottlob. Die Grundlagen der Arithmetik; Wilhelm Koebner: Breslau, 1884. [Google Scholar]
- Frege, Gottlob. The foundations of arithmetic; Blackwell: Oxford, 1950. [Google Scholar]
- Gekhman, Dor; Schoelkopf, Hailey; Geva, Mor; Goldberg, Yoav. Does fine-tuning LLMs on new knowledge encourage hallucinations? arXiv 2024, arXiv:2405.05904. Available online: https://arxiv.org/abs/2405.05904 (accessed on 25 December 2025).
- Ghosal, Gaurav; Hashimoto, Tatsunori; Raghunathan, Aditi. Understanding finetuning for factual knowledge extraction. arXiv 2024, arXiv:2406.14785. Available online: https://arxiv.org/abs/2406.14785 (accessed on 25 December 2025).
- Gregory, Dominic. Pictures, propositions, and predicates. Philosophical Studies 2020, 177, 1567–1588. [Google Scholar] [CrossRef]
- Haack, Susan. Evidence and inquiry; Blackwell: Oxford, 1993. [Google Scholar]
- Harnad, Stevan. The symbol grounding problem. Physica D 1990, 42, 335–346. [Google Scholar] [CrossRef]
- Huang, Lei; Yu, Weijiang; Ma, Weitao; et al. A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions. ACM Transactions on Information Systems 2025, 43, 1–55. [Google Scholar] [CrossRef]
- Jang, Joel; Ye, Seonghyeon; Lee, Changho; et al. Towards continual knowledge learning of language models. arXiv 2022, arXiv:2110.03215. Available online: https://arxiv.org/abs/2110.03215 (accessed on 19 December 2025).
- Jolley, Daniel; Douglas, Karen M. The effects of anti-vaccine conspiracy theories on vaccination intentions. PLOS ONE 2014, 9, e89177. [Google Scholar] [CrossRef]
- Joshi, Satyadhar. Mitigating LLM hallucinations: A comprehensive review of techniques and architectures. Preprints. 2025. Available online: https://www.preprints.org/manuscript/202505.1955/v1 (accessed on 19 December 2025).
- Lavrinovics, Ernests; Biswas, Russa; Bjerva, Johannes; Hose, Katja. Knowledge graphs, large language models, and hallucinations: An NLP perspective. arXiv 2024, arXiv:2411.14258. Available online: https://arxiv.org/abs/2411.14258 (accessed on 19 December 2025).
- Lee, Lenka; Mácha, Jakub. Inverted ekphrasis and hallucinating stochastic parrots: Deleuzean insights into AI and art in daily life. Itinera 2024, 28. [Google Scholar] [CrossRef]
- Lewandowsky, Stephan; Ecker, Ullrich K. H.; Seifert, Colleen M.; Schwarz, Norbert; Cook, John. Misinformation and its correction: Continued influence and successful debiasing. Psychological Science in the Public Interest 2012, 13, 106–131. Available online: https://pubmed.ncbi.nlm.nih.gov/22922134/ (accessed on 25 December 2025). [CrossRef] [PubMed]
- Li, Kenneth; Hopkins, Aspen K.; Bau, David; Viégas, Fernanda; Pfister, Hanspeter; Wattenberg, Martin. Emergent world representations: Exploring a sequence model trained on a synthetic task. arXiv 2022, arXiv:2210.13382. Available online: https://arxiv.org/abs/2210.13382 (accessed on 25 December 2025).
- Lin, Zhen; Fu, Yao; Zhang, Ben; Zhang, Tianyi; Chen, Danqi. FLAME: Factuality-aware alignment for large language models. Advances in Neural Information Processing Systems 2024, 37. Available online: https://www.proceedings.com/079017-3671.html (accessed on 19 December 2025).
- Meng, Kevin; Bau, David; Andonian, Alex; Belinkov, Yonatan. Locating and editing factual associations in GPT. Advances in Neural Information Processing Systems 2022, 35. Available online: https://arxiv.org/abs/2202.05262 (accessed on 19 December 2025).
- Michel, Jean-Baptiste; Shen, Yuan Kui; Aiden, Aviva Presser; et al. Quantitative analysis of culture using millions of digitized books. Science 2011, 331, 176–182. [Google Scholar] [CrossRef]
- Min, Sewon; Krishna, Kalpesh; Lyu, Xinxi; et al. FactScore: Fine-grained atomic evaluation of factual precision in long-form generation. arXiv 2023, arXiv:2305.14251. Available online: https://arxiv.org/abs/2305.14251 (accessed on 19 December 2025).
- Mitchell, Eric; Lin, Charles; Bosselut, Antoine; Manning, Christopher D.; Finn, Chelsea. Memory-based model editing at scale. arXiv 2022, arXiv:2206.06520. Available online: https://arxiv.org/abs/2206.06520 (accessed on 25 December 2025).
- Mollo, Dimitri; Millière, Raphaël. The vector grounding problem. arXiv 2025, arXiv:2304.01481v3. Forthcoming in Philosophy and the Mind Sciences. Available online: https://arxiv.org/abs/2304.01481 (accessed on 19 December 2025).
- Mulligan, Kevin; Simons, Peter; Smith, Barry. Truth-makers. Philosophy and Phenomenological Research 1984, 44, 287–321. [Google Scholar] [CrossRef]
- Olah, Chris; Cammarata, Nick; Schubert, Ludwig; Goh, Gabriel; Petrov, Michael; Carter, Shan. Zoom in: An introduction to circuits. Distill 5. 2020. Available online: https://distill.pub/2020/circuits/zoom-in/ (accessed on 25 December 2025).
- Putnam, Hilary. Reason, truth and history; Cambridge University Press: Cambridge, 1981. [Google Scholar]
- Quine, W. V. O. Two dogmas of empiricism. Philosophical Review 1951, 60, 20–43. [Google Scholar] [CrossRef]
- Russell, Bertrand. The problems of philosophy; Williams and Norgate: London, 1912. [Google Scholar]
- Russell, Bertrand. The philosophy of logical atomism; Routledge: London, 1918/2010. [Google Scholar]
- Russell, Stuart J.; Norvig, Peter. Artificial intelligence: A modern approach, 3rd ed.; Prentice Hall: Upper Saddle River, NJ, 2010. [Google Scholar]
- Ryle, Gilbert. The concept of mind; Hutchinson: London, 1949. [Google Scholar]
- Sansford, Hannah; Richardson, Nicholas; Petric Maretic, Hermina; Nait Saada, Juba. GraphEval: A knowledge-graph based LLM hallucination evaluation framework. arXiv 2024, arXiv:2407.10793. Available online: https://arxiv.org/abs/2407.10793 (accessed on 25 December 2025).
- de Saussure, Ferdinand. Course in General Linguistics; Harris, Roy, Translator; Duckworth: London, 1983. [Google Scholar]
- Šekrst, Kristina. Do large language models hallucinate electric fata morganas? Journal of Consciousness Studies, forthcoming.
- Sharkey, Lee; Chughtai, Bilal; Batson, Joshua; et al. Open problems in mechanistic interpretability. arXiv 2025, arXiv:2501.16496. Available online: https://arxiv.org/abs/2501.16496 (accessed on 25 December 2025).
- Templeton, Adly; et al. Scaling monosemanticity: Extracting interpretable features from Claude 3 Sonnet. Transformer Circuits. 2024. Available online: https://transformer-circuits.pub/2024/scaling-monosemanticity/ (accessed on 25 December 2025).
- Trinh, Tue. Logicality and the picture theory of language: Propositions as pictures in Wittgenstein’s Tractatus. Synthese 2024, 203, 127. [Google Scholar] [CrossRef] [PubMed]
- Tuquero, Loreben. Musk’s AI-powered Grokipedia: A Wikipedia spin-off with less care to sourcing, accuracy. PolitiFact, 12 November 2025. Available online: https://www.politifact.com/article/2025/nov/12/Grokipedia-Wikipedia-AI-citations/ (accessed on 26 December 2025).
- Turpin, Miles; Michael, Julian; Perez, Ethan; Bowman, Samuel R. Language models don’t always say what they think: Unfaithful explanations in chain-of-thought prompting. Advances in Neural Information Processing Systems 2023, 36. Available online: https://arxiv.org/abs/2305.04388 (accessed on 25 December 2025). [CrossRef]
- Verheggen, Claudine; Myers, Robert H. The status and the scope of the principle of charity. Topoi 2025, 44, 1215–1226. [Google Scholar] [CrossRef]
- Weatherby, Leif. Language machines: Cultural AI and the end of remainder humanism; University of Minnesota Press: Minneapolis, 2025. [Google Scholar]
- Wei, Haoran; Sun, Yaofeng; Li, Yukun. DeepSeek-OCR: Contexts optical compression. arXiv 2025, arXiv:2510.18234. Available online: https://arxiv.org/abs/2510.18234 (accessed on 25 December 2025).
- Wittgenstein, Ludwig. Tractatus logico-philosophicus; Ogden, C. K., Translator; Kegan Paul: London, 1922. [Google Scholar]
- Wittgenstein, Ludwig. Some remarks on logical form. Proceedings of the Aristotelian Society, Supplementary Volume 1929, 9, 162–171. Available online: https://www.jstor.org/stable/4106481 (accessed on 19 December 2025). [CrossRef]
- Wittgenstein, Ludwig. Wittgenstein’s lectures, Cambridge, 1930–1932: From the notes of John King and Desmond Lee; Lee, D., Ed.; Basil Blackwell and University of Chicago Press: Oxford and Chicago, 1980. [Google Scholar]
- Xu, Ziwei; Jain, Sanjay; Kankanhalli, Mohan. Hallucination is inevitable: An innate limitation of large language models. arXiv 2024, arXiv:2401.11817. Available online: https://arxiv.org/abs/2401.11817 (accessed on 19 December 2025).
- Yao, Liang; Mao, Chengsheng; Luo, Yuan. KG-BERT: BERT for knowledge graph completion. arXiv 2019, arXiv:1909.03193. Available online: https://arxiv.org/abs/1909.03193 (accessed on 25 December 2025).
- Zhang, Yuji; Li, Sha; Liu, Jiateng; Yu, Pengfei; Fung, Yi R.; Li, Jing; Li, Manling; Ji, Heng. Knowledge overshadowing causes amalgamated hallucination in large language models. arXiv 2024, arXiv:2407.08039. Available online: https://arxiv.org/abs/2407.08039 (accessed on 25 December 2025).
- Zhang, Yuji; Li, Sha; Qian, Cheng; Liu, Jiateng; Yu, Pengfei; Han, Chi; Fung, Yi R.; McKeown, Kathleen; Zhai, Chengxiang; Li, Manling; Ji, Heng. The Law of Knowledge Overshadowing: Towards Understanding, Predicting, and Preventing LLM Hallucination. arXiv 2025, arXiv:2502.16143. Available online: https://arxiv.org/abs/2502.16143 (accessed on 25 December 2025).
- Zheng, Liwen; Li, Chaozhuo; Liu, Zheng; Huang, Feiran; Jia, Haoran; Ye, Zaisheng; Zhang, Xi. Fact in fragments: Deconstructing complex claims via LLM-based atomic fact extraction and verification. arXiv 2025, arXiv:2506.07446. Available online: https://arxiv.org/abs/2506.07446 (accessed on 25 December 2025).
- Zou, Andy; Phan, Long; Chen, Sarah; et al. Representation engineering: A top-down approach to AI transparency. arXiv 2023, arXiv:2310.01405. Available online: https://arxiv.org/abs/2310.01405 (accessed on 19 December 2025).
Notes
1. Weatherby (2025) addresses these positive functions under the headings of “general poetics” and “poetic ideology.” Cf. also Lee and Mácha (2024) for a discussion of the ramifications of the hallucination metaphor for LLMs’ ability to create works of art.
2. Large-scale foundation models are typically trained on mixtures of uncurated web-scale corpora—such as Common Crawl, C4, OSCAR, and the Pile—together with curated factual datasets like Wikipedia, government publications, and filtered scientific and educational resources. Uncurated corpora are massive web scrapes containing heterogeneous, duplicate, noisy, or incorrect material; their scale is crucial for model performance, but they are not verified for accuracy. By contrast, curated corpora undergo explicit editorial or algorithmic filtering for quality and topical relevance.
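To make the distinction concrete, the following sketch shows how a training mixture of the kind described in this note might be sampled. The corpus names come from the note, but the weights and the small sampling helper are invented for illustration and do not describe any particular model's actual recipe.

```python
import random

# Illustrative mixture only: the weights are invented, not taken from any model card.
TRAINING_MIXTURE = {
    "common_crawl": {"curated": False, "weight": 0.60},  # uncurated web scrape
    "the_pile":     {"curated": False, "weight": 0.20},  # uncurated aggregate corpus
    "wikipedia":    {"curated": True,  "weight": 0.15},  # editorially filtered
    "gov_and_sci":  {"curated": True,  "weight": 0.05},  # curated factual sources
}

def sample_source(mixture: dict) -> str:
    """Pick the corpus supplying the next training document, proportionally to its weight."""
    names = list(mixture)
    weights = [mixture[name]["weight"] for name in names]
    return random.choices(names, weights=weights, k=1)[0]

# Most sampled documents come from uncurated sources, so nothing in the
# training signal itself guarantees the factual accuracy of what is learned.
counts = {name: 0 for name in TRAINING_MIXTURE}
for _ in range(10_000):
    counts[sample_source(TRAINING_MIXTURE)] += 1
print(counts)
```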
3. Current LLMs (like GPT-5 and Grok 3) utilize what is known as the chain-of-thought method: They display the chain of their logical reasoning, from the input (user prompt, together with additional web searches) through intermediate steps to the output. This feature was introduced to make reasoning explicit and reveal how the model moves from premise to conclusion. The method assumes that natural-language sentences can stand for propositions whose truth can be determined, and that displaying intermediate steps allows the user to inspect the structure of inference. The model, so to speak, thinks aloud. The user is given the impression that the model arrived at its output via a rigorous logical reasoning process. This process structurally resembles human thinking. In theory, this bridges linguistic expression and logical reasoning. In practice, it does not. For as we know, this is not how LLMs actually reason. The chain-of-thought display is an explanatory fiction produced for the user. This fiction is useful to the extent that the user can review it and correct the model if it does not proceed in the desired direction. But the method also has serious issues and limitations, ranging from restricted generality (Chalmers 2025) to making outright false claims, that is, producing hallucinations (Turpin et al. 2023). The method as a whole can be viewed as a single, grand hallucination, as it gives the user a false impression of rigorous logical reasoning at the LLM’s core. Moreover, the user is misled into thinking that the model can represent propositions, which, in my view, is the main flaw in its design.
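As a minimal illustration of the point made in this note, the sketch below wraps a question in a generic “think step by step” instruction and then merely splits whatever text comes back into numbered “steps.” The model_generate stub stands in for any LLM API and is purely hypothetical; the key point is that the displayed chain is itself generated text, not an inspection of the model’s internal states.

```python
def model_generate(prompt: str) -> str:
    """Hypothetical stub standing in for a call to an LLM; replace with a real client."""
    return ("Step 1: The capital of a country is its seat of government.\n"
            "Step 2: France's seat of government is in Paris.\n"
            "Answer: Paris")

def chain_of_thought(question: str) -> tuple[list[str], str]:
    """Elicit a step-by-step reply and split it into displayed 'reasoning' and a final answer.

    Nothing here verifies that the displayed steps reflect the computation
    that actually produced the answer; they are simply more output text.
    """
    prompt = f"{question}\nLet's think step by step, then give the final answer."
    reply = model_generate(prompt)
    lines = [line.strip() for line in reply.splitlines() if line.strip()]
    steps = [line for line in lines if line.lower().startswith("step")]
    answer = next((line.split(":", 1)[1].strip()
                   for line in lines if line.lower().startswith("answer")), "")
    return steps, answer

steps, answer = chain_of_thought("What is the capital of France?")
print(steps, answer)
```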
4. Empirical evidence supports the view that extensive or poorly targeted fine-tuning can actually increase hallucination rates in LLMs. Gekhman et al. (2024) show in controlled closed-book QA experiments that as the proportion of previously unknown or novel facts in the fine-tuning data increases, a model’s propensity to hallucinate relative to its original knowledge base rises almost linearly, with early stopping only partially mitigating the effect. Ghosal et al. (2024) similarly found that fine-tuning LLaMA-7B and Mistral on low-popularity factual data worsens performance by roughly 7–10 percent on factuality benchmarks such as PopQA and MMLU, demonstrating that narrow or low-coverage fine-tuning can degrade a model’s general truthfulness. Lin et al. (2024) further observe that conventional alignment and reward-model objectives, optimized for fluency, helpfulness, and verbosity, tend to over-encourage long and confident answers, thereby amplifying plausible but unfounded statements; their proposed factuality-aware loss function improves this but confirms the baseline bias. For rare or low-frequency knowledge, approaches that rely on external information sources tend to be more robust than direct fine-tuning on sparse data, since narrowly targeted fine-tuning can overfit to limited examples and introduce additional factual errors. Finally, Zhang et al. (2024; 2025) have empirically established a law of knowledge overshadowing: Hallucination frequency grows with data imbalance, knowledge popularity, and model size, as dominant facts “overshadow” rarer ones in gradient updates. Collectively, these studies support the conclusion that while fine-tuning enhances local adaptation, over-optimization, data imbalance, and alignment biases can systematically raise hallucination rates outside the tuned domain.
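To make the experimental logic behind these findings concrete, here is a rough sketch, not the authors’ code, of the kind of known/unknown split used by Gekhman et al. (2024): fine-tuning facts are labeled by whether the base model already answers them correctly, and one then checks whether accuracy on the previously known facts degrades after fine-tuning on the novel ones. The toy dictionaries and question strings are invented for illustration.

```python
# Toy stand-ins for model behaviour; in a real replication these would be closed-book QA calls.
BASE_MODEL_ANSWERS = {"Capital of France?": "Paris", "Author of Hamlet?": "Shakespeare"}
FINE_TUNING_FACTS = {"Capital of France?": "Paris",
                     "Author of Hamlet?": "Shakespeare",
                     "Winner of the 2031 World Cup?": "Atlantis"}  # deliberately novel "fact"

def split_known_unknown(facts: dict, base_answers: dict):
    """Label each fine-tuning fact by whether the base model already answers it correctly."""
    known = {q: a for q, a in facts.items() if base_answers.get(q) == a}
    unknown = {q: a for q, a in facts.items() if base_answers.get(q) != a}
    return known, unknown

def error_rate(model_answers: dict, reference: dict) -> float:
    """Share of reference facts that the model answers incorrectly (a proxy for hallucination)."""
    wrong = sum(model_answers.get(q) != a for q, a in reference.items())
    return wrong / len(reference)

known, unknown = split_known_unknown(FINE_TUNING_FACTS, BASE_MODEL_ANSWERS)
print(f"{len(unknown)} of {len(FINE_TUNING_FACTS)} fine-tuning facts are novel to the base model")
# After fine-tuning, error_rate would be re-computed on the *known* facts to see
# whether learning the novel ones has degraded previously correct answers.
```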
5. A number of recent studies advance or reinforce variants of the claim that the primary cause of hallucination lies in the absence—or at least the instability—of propositional, knowledge-bearing representations within LLMs. Chen et al. (2024) approach the problem from an inner-representation perspective, showing that hallucinated outputs correlate with diffuse and nondiscriminative activation patterns, implying that the model fails to encode distinct propositional states. Chekalina et al. (2024) demonstrate that supplementing LLMs with knowledge graph embeddings markedly reduces hallucinations, effectively treating the phenomenon as a representational deficiency remediable through the injection of structured propositional content. Similarly, Sansford et al. (2024) propose the GraphEval framework, which evaluates factuality at the level of propositional triples—implicitly assuming that hallucination is a breakdown in proposition-level encoding. Zhang et al. (2025) frame the problem as one of “knowledge overshadowing,” where existing information is either incompletely represented or overwritten by spurious associations, again situating hallucination in defective internal knowledge representation. Finally, broader analyses of knowledge graphs and LLMs (Lavrinovics et al. 2024) converge on the view that language models hallucinate because they lack stable propositional anchoring to factual content—an insufficiency only partially offset when external symbolic structures are integrated. These works lend empirical and conceptual weight to the thesis that hallucination stems not merely from data bias or decoding artifacts, but from a model’s failure to instantiate internally coherent, truth-evaluative propositions.
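The proposition-level evaluation these works rely on can be illustrated with a toy sketch of the general idea: extract subject–relation–object triples from a generated sentence and check each against a structured knowledge source. This is not the actual GraphEval pipeline; the extract_triples heuristic and the miniature knowledge graph are invented for the example.

```python
# A miniature knowledge graph of (subject, relation, object) triples, invented for illustration.
KNOWLEDGE_GRAPH = {
    ("Paris", "capital_of", "France"),
    ("Seine", "flows_through", "Paris"),
}

def extract_triples(sentence: str) -> list[tuple[str, str, str]]:
    """Toy extractor; in real systems this step is itself performed by an LLM or a parser."""
    if " is the capital of " in sentence:
        subj, obj = sentence.split(" is the capital of ")
        return [(subj.strip(), "capital_of", obj.strip(" ."))]
    return []

def unsupported_triples(sentence: str, kg: set) -> list[tuple[str, str, str]]:
    """Return the propositional units of the sentence that the knowledge graph does not support."""
    return [triple for triple in extract_triples(sentence) if triple not in kg]

print(unsupported_triples("Paris is the capital of France.", KNOWLEDGE_GRAPH))  # -> []
print(unsupported_triples("Lyon is the capital of France.", KNOWLEDGE_GRAPH))   # -> one unsupported triple
```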
6. In line with a recent article by Chalmers (2025; addressed in detail in section IV), I can admit that “pictorial or map-like representations” can also be truth-bearers, although they are not propositions proper. Chalmers treats such structures together with propositions under the heading “generalized propositional attitudes”. However, LLMs are primarily textual machines, and their hallucinations are textual, which means, according to my argument, that LLMs’ hallucinations are primarily related to propositions. Other promising current AI systems are visual language models (VLMs), whose hallucinations are primarily pictorial. Recent architectural designs, such as DeepSeek-OCR (Wei et al. 2025), combine textual and visual language models. In such systems, representing generalized propositions would be crucial to combat the hallucination problem. Seen from another perspective, there are accounts of propositions that are pictorial in essence—Wittgenstein’s Tractatus being the most seminal one. What is crucial here is the ability to represent factual content that may or may not be accurate. Hence, I can keep insisting on the centrality of the proposition without diminishing the pictorial dimension of representation.
7. This echoes Russell’s prolonged struggles with the problem of the unity of propositions.
8. Frege insists that “never is the question to be raised about the meaning of a word in isolation, but only in the context of a sentence” (Frege 1950, §62). Wittgenstein develops a structurally similar view when he states that “only the proposition has sense; only in the context of a proposition has a name meaning” (Wittgenstein 1922, 3.3).
9. Quine: “The unit of empirical significance is the whole of science” (1951, 42).
10. Dominic Gregory (2020) argues that pictorial representations possess a logical form akin to predication—depicting things as having properties—and that, when contextually framed, such images can bear propositional content. Tue Trinh (2024) likewise treats propositions as pictorial—not in a visual sense, but a structural one: They “show” possible states of affairs through their internal projective form. Both authors thus recast the proposition as a picture in the formal, not perceptual, sense, whose meaning arises from the way its internal configuration mirrors the world’s possible arrangements.
11. Lewis’s approach is suitable for interpreting artificial machines, because their inner architecture and workings are completely transparent to us—which cannot be said about the human mind.
12. “The process of separating meaning and opinion invokes two key principles which must be applicable if a speaker is interpretable: the Principle of Coherence and the Principle of Correspondence. The Principle of Coherence prompts the interpreter to discover a degree of logical consistency in the thought of the speaker; the Principle of Correspondence prompts the interpreter to take the speaker to be responding to the same features of the world that he (the interpreter) would be responding to under similar circumstances. Both principles can be (and have been) called principles of charity: one principle endows the speaker with a modicum of logic, the other endows him with a degree of what the interpreter takes to be true belief about the world” (Davidson 2001, 211).
13. Claudine Verheggen and Robert Myers argue in a recent article (2025) that the principle of charity is the essential ingredient of meaning itself in all contexts. Therefore, in order to get their interpretation off the ground, the radical interpreter must assume this principle. It is not just a methodological maxim, but an integral component of their broader approach. The principle of charity cannot be excluded from radical interpretation.
14. Russell writes: “The simplest imaginable facts are those which consist in the possession of a quality by some particular thing. […] The whole lot of them, taken together, are as facts go very simple, and are what I call atomic facts. The propositions expressing them are what I call atomic propositions” (Russell 1918/2010, 26–27).
15. Thus, Wittgenstein writes: “Atomic facts are independent of one another” (1922, 2.061). And: “From an atomic proposition no other can be inferred” (ibid., 5.134).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).