Submitted:
13 September 2025
Posted:
15 September 2025
Abstract
In the philosophy of language, Frege’s (1892) distinction between sense and reference provided a foundational framework for identity statements, while Putnam’s (1975) Twin Earth thought experiment pushed externalism to its limits, challenging the internalist model of meaning and setting the agenda for decades of debate on the determinacy of reference. Yet the debates these works sparked—over the Ship of Theseus or identical particles, for example—appear to have reached an impasse. This paper argues that the stalemate stems not from the depth of the problem itself but from a shared, unexamined assumption underlying these otherwise compelling theories: that there exists a single, decisive level (whether microphysical structure or historical causation) capable of conclusively settling the identity question. Rather than continuing to seek a superior singular answer under this assumption, we propose critically examining the assumption itself. To this end, we develop a hierarchical relativity framework (Steins Theory). This framework reveals that the seemingly opposing theories can be understood as special cases at different levels, and that their difficulties arise inevitably when they make assertions across levels. The framework thus does not aim to negate prior work but to clarify its valid scope of application, offering a new path to resolving a series of philosophical puzzles born of category mistakes.
Keywords:
1. Introduction
2. Analysis
2.1. Axiomatic Foundations
2.1.1. Axiom 1 (Self-Identity)
2.1.2. Axiom 2 (Distinctness)
2.2. Category Mistake and Level Confusion: The Ship of Theseus as a Case Study
- Original Question: Determine whether the entity “ship” at times t1 and t2 is identical.
- Correct Content() Domain (based on the original question): should include only properties relevant to the ship’s abstract information structure, i.e., Content_Struct(Ship) = (material, form, function).
- Category Mistake in Historical/Four-Dimensional/Stage Theories:
  - Wiggins effectively uses Content_Hist(Ship) = (material, form, function, historical-causal path).
  - Four-dimensionalism uses Content_4D(Ship) = (material, form, function, spatiotemporal coordinates).
  - Stage theory uses Content_Stage(Ship) = (material, form, function, counterpart relation).
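The level-relativity of the verdict can be made concrete in a short sketch. This is an illustrative encoding, not part of the paper’s formalism: the property names and values are hypothetical, and `identical` simply checks agreement on every property the chosen Content() domain includes.

```python
def identical(a: dict, b: dict, domain: tuple) -> bool:
    """Identity relative to a Content() domain: agreement on its properties."""
    return all(a.get(p) == b.get(p) for p in domain)

# Hypothetical descriptions of the ship at t1 and t2 (every plank replaced).
ship_t1 = {"material": "oak", "form": "trireme", "function": "sailing",
           "history": "original planks"}
ship_t2 = {"material": "oak", "form": "trireme", "function": "sailing",
           "history": "all planks gradually replaced"}

CONTENT_STRUCT = ("material", "form", "function")   # abstract structure only
CONTENT_HIST = CONTENT_STRUCT + ("history",)        # Wiggins-style expansion

print(identical(ship_t1, ship_t2, CONTENT_STRUCT))  # True: same structural entity
print(identical(ship_t1, ship_t2, CONTENT_HIST))    # False: distinct historical entities
```

Each theory’s verdict is recovered by its own domain; the apparent paradox arises only when one domain’s verdict is asserted at the other level.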
2.3. Information Conservation
2.3.1. Quantum Identical Particles and the Challenge to PII
2.3.2. Steins Theory’s Solution: A Hierarchical Relativity Framework
2.3.3. Formal Derivation of Information Conservation
2.4. Symmetry
3. Examples (Derived Mathematically, Assuming a Scientific Perspective by Default)
3.1. The Duplication Paradox
- Issue: Are two documents with identical content stored on different devices two distinct pieces of information?
- Solution:
  - If the target is pure informational content identity → Content(n) = textual semantics, then n ≡ m (a single entity).
  - If the target is document-location entity identity → Content(An) = (textual semantics, location), then (n, Loc_A) ≢ (m, Loc_B).
- Conclusion: Duplicates are the same information entity bound to different spatiotemporal coordinates, resulting in observable distinctness.
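A minimal sketch of the two readings, with hypothetical document records (the field names are illustrative, not from the paper):

```python
# Hypothetical records: identical text stored on two different devices.
doc_n = {"text": "identical content", "location": "device_A"}
doc_m = {"text": "identical content", "location": "device_B"}

def content_semantic(d: dict) -> tuple:
    """Content(n) = textual semantics only."""
    return (d["text"],)

def content_located(d: dict) -> tuple:
    """Content(An) = (textual semantics, location)."""
    return (d["text"], d["location"])

print(content_semantic(doc_n) == content_semantic(doc_m))  # True: n ≡ m
print(content_located(doc_n) == content_located(doc_m))    # False: (n, Loc_A) ≢ (m, Loc_B)
```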
3.2. Gibbs’ Paradox
- Category Mistake:
  - The target should be particle-type identity → Content(g) = (mass, spin, …).
  - Classical statistics illicitly expands Content(Ag) = (intrinsic properties, fictitious label).
- Correction: ∀g_i, g_j: Content(g_i) = S_int = Content(g_j) ⇒ g_i ≡ g_j (type identity).
- Entropy errors arise from erroneously selecting the Content() domain (introducing labels).
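The label-induced entropy error can be exhibited with a toy count. This sketch assumes the standard combinatorial picture of the Gibbs paradox (two equal samples of the same gas, small particle numbers chosen for illustration):

```python
from math import comb, log

# Mixing two equal samples of the *same* gas. With fictitious labels, the
# microstate count is inflated by the number of label assignments, which
# shows up as a spurious entropy of mixing (in units of k_B).
n_left, n_right = 4, 4
label_ways = comb(n_left + n_right, n_left)  # ways to distribute the labels
spurious_mixing_entropy = log(label_ways)

print(label_ways)  # 70 inflated "microstates" from the labels alone
print(round(spurious_mixing_entropy, 3))

# With Content(g) restricted to intrinsic properties, identical particles
# admit exactly one assignment (label_ways = 1) and the mixing entropy is 0.
```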
3.3. Black Hole Information Paradox
- Category Mistake: Illegitimately binding the information entity n’s Content() to spatiotemporal coordinates: Content(An) = (information structure, black hole coordinates).
- Correct Solution:
  - Define the target: information-structure identity → Content(n) = quantum properties.
  - A black hole decouples the pair (quantum properties, coordinates), leaving unpaired content unobservable, but Content(quantum properties/coordinates) persists as an abstract entity.
  - If a new spacetime entity satisfies Content(m) = Content(n), then m ≡ n.
3.4. Chinese Room Thought Experiment
3.5. The Ship of Theseus
3.6. Twin Earth Paradox
- Traditional Contradiction: Earth’s “water” (H2O) differs chemically from Twin Earth’s “water” (XYZ); are the concepts of “water” identical for residents of both planets?
- Theoretical Solution:
  - If Content(water concept) = macroscopic properties (colorless, chemically reactive, potable liquid, etc.) → the concepts are identical (n = n).
  - Introducing microscopic structure (H2O/XYZ) expands the Content() domain to the molecular level, committing a category mistake.
- Conclusion: Semantic identity is determined solely by cognitive function, independent of underlying physical structure.
3.7. Grandfather Paradox
- Contradiction: If one travels back and kills one’s grandfather ⇒ one should not exist ⇒ one cannot perform the killing.
- Theoretical Dissolution:
  - Define the target entity: worldline identity, Content(worldline) = logical structure of events.
  - The killing event results in:
    - Original worldline W0: (grandfather lives → you exist → you kill).
    - New worldline W1: (grandfather dies → you do not exist).
  - Since Content(W0) ≠ Content(W1), W0 and W1 are distinct information entities (not “the same worldline modified”).
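A minimal encoding of the dissolution, treating worldlines as ordered event tuples (the event strings are illustrative placeholders):

```python
# Worldlines as ordered event tuples; Content(W) = the event structure itself.
W0 = ("grandfather lives", "you exist", "you kill")  # original worldline
W1 = ("grandfather dies", "you do not exist")        # post-killing worldline

# The contents differ, so W0 and W1 are distinct information entities: no
# single worldline must contain both the killing and the killer's absence.
print(W0 != W1)  # True: not "the same worldline modified"
```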
3.8. Brain in a Vat
- Problem: How can one prove one is not a brain in a vat? Perception cannot distinguish reality from simulation.
- Theoretical Formula:
  - Define Content(cognitive entity) = perceptual information flow.
  - Real brain B_real: Content = {light signals, touch, …}.
  - Vat brain B_vat: Content = {electrical stimulation signals}.
  - If Content(B_real) ≡ Content(B_vat) ⇒ by the axiom, B_real ≡ B_vat (the same cognitive entity).
- Key Point: The “reality” dispute stems from expanding Content() to the external carrier (skull/vat), while the cognitive entity is determined solely by the information flow.
3.9. Mary’s Room
- Scenario: Mary knows everything about color neuroscience but has never seen red → does she gain new knowledge upon seeing red?
- Theoretical Solution:
  - Define knowledge-entity levels:
    - Propositional knowledge: Content(K_prop) = data on red-light wavelengths.
    - Qualia knowledge: Content(K_qualia) = subjective red experience.
  - Since Content(K_prop) ≠ Content(K_qualia), they are distinct information entities.
  - Mary gains a new entity K_qualia, not a supplement to K_prop ⇒ the paradox arises from conflating knowledge types.
3.10. Newcomb’s Paradox
- Core Paradox: A near-perfect predictor vs. the participant’s free will. Choose one box (known to contain money) or two boxes (potentially more money)?
- Theoretical Deconstruction:
  - Category Mistake: Conflating the Content() domain of the decision entity:
    - Level 1 (pure decision logic): Content(decision) = (action, payoff function) ⇒ dominant strategy: choose two boxes (regardless of prediction accuracy).
    - Level 2 (causal-history binding): Content(decision) = (action, payoff function, prediction history) ⇒ if the prediction is accurate, one box yields the higher payoff.
  - Uniqueness Theorem Ruling:
    - If the target is rational decision without historical constraints → use Content(Level 1) ⇒ choose two boxes.
    - If the target is decision with predictive causation → use Content(Level 2) ⇒ choose one box.
  - Paradox Dissolved: The two are distinct decision entities, Content(Level 1) ≠ Content(Level 2); the contradiction arises from domain swapping.
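Both verdicts can be computed side by side. This sketch assumes the conventional payoff numbers ($1,000 transparent box, $1,000,000 possibly in the opaque box) and an illustrative 99% predictor accuracy; neither figure comes from the paper:

```python
def payoff(action: str, opaque_full: bool) -> int:
    """Total payout for taking one box or two, given the opaque box's state."""
    opaque = 1_000_000 if opaque_full else 0
    return opaque + (1_000 if action == "two" else 0)

# Level 1: contents fixed independently of the choice -> two-boxing dominates.
for full in (True, False):
    assert payoff("two", full) > payoff("one", full)

# Level 2: prediction history inside Content() -> contents track the choice.
accuracy = 0.99
ev_one = accuracy * payoff("one", True) + (1 - accuracy) * payoff("one", False)
ev_two = accuracy * payoff("two", False) + (1 - accuracy) * payoff("two", True)
print(ev_one > ev_two)  # True: one-boxing wins in expectation at Level 2
```

The two computations never disagree about the same quantity; they answer different questions, which is the paper’s point.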
3.11. Raven Paradox
- Core Paradox: “All ravens are black” ≡ “All non-black things are non-ravens.” Why does observing a red apple (non-black and non-raven) confirm the proposition?
- Theoretical Deconstruction:
  - Category Mistake: Expanding the Content() domain of “confirmation behavior” from the logical structure of the proposition to the type of empirical sample.
  - Correct Definition:
    - Proposition identity: Content(P) = logical form (∀x: R(x) → B(x)).
    - Confirmation identity: Content(confirmation) = verification of ¬∃x: (R(x) ∧ ¬B(x)).
  - Conclusion:
    - A red apple confirms the logically equivalent contrapositive (non-black ⇒ non-raven), with Content(confirmation) identical to that of observing a raven (since Content(P) ≡ Content(P)).
    - Claiming that “red apples and ravens differ in confirmatory strength” commits a category mistake, expanding Content() to physical sample categories (birds/fruits) and violating the initial logical target.
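The claimed identity of Content(P) across the two formulations can be checked by brute force. A sketch over all worlds of three objects (the world size is an arbitrary choice for the demonstration):

```python
from itertools import product

# Verify that "all ravens are black" and its contrapositive have the same
# truth value in every possible small world, i.e. share one Content(P).

def forall_implies(world, antecedent, consequent):
    """Truth of (for all x: antecedent(x) -> consequent(x)) in a world."""
    return all(consequent(x) for x in world if antecedent(x))

kinds = list(product([True, False], repeat=2))  # x = (is_raven, is_black)
for world in product(kinds, repeat=3):
    direct = forall_implies(world, lambda x: x[0], lambda x: x[1])
    contrapositive = forall_implies(world, lambda x: not x[1], lambda x: not x[0])
    assert direct == contrapositive

print("equivalent across all", len(kinds) ** 3, "worlds")  # 64 worlds
```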
3.12. Sorites Paradox (Baldness Paradox)
- Core Paradox: Removing one grain of sand does not turn a heap into a non-heap ⇒ removing all the sand still yields a “heap,” a contradiction.
- Theoretical Deconstruction:
  - Category Mistake: Conflating the Content() definition of “heap”:
    - Level 1 (topological structure): Content(heap) = macroscopic form of the sand collection ⇒ removing one grain does not alter form identity (n = n).
    - Level 2 (atomic quantity): Content(heap) = number of grains N ⇒ when N = 0, Content(heap) = ∅ and the entity ceases.
- Solution:
  - If the heap is defined as a form entity, Content(Level 1), removing one grain retains the same heap.
  - If it is defined as a quantity entity, Content(Level 2), each grain removed creates a new entity.
  - Paradox Root: Swapping Content() domains (from form to quantity) mid-argument.
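The two heap levels can be written as two different Content() functions. The coarse “form” predicate below is an illustrative stand-in for macroscopic topology, not the paper’s definition:

```python
def content_level1(n_grains: int) -> str:
    """Level 1: macroscopic form (coarse-grained stand-in)."""
    return "heap-form" if n_grains > 0 else "no-form"

def content_level2(n_grains: int) -> int:
    """Level 2: atomic quantity N."""
    return n_grains

before, after = 1000, 999  # remove one grain
print(content_level1(before) == content_level1(after))  # True: same form entity
print(content_level2(before) == content_level2(after))  # False: a new quantity entity
```

The paradoxical slide occurs when an argument runs its inductive steps at Level 1 and then states its conclusion about N = 0 at Level 2.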
3.13. Sleeping Beauty Problem
- Core Paradox: What probability should Sleeping Beauty assign to heads upon awakening (1/2 or 1/3)?
- Theoretical Deconstruction:
  - Category Mistake: Conflating the Content() domain of the “probability entity”:
    - Level 1 (prior probability): Content(probability) = coin’s physical state ⇒ P(heads) = 1/2.
    - Level 2 (information update): Content(probability) = (coin state, awakening frequency) ⇒ P(heads | awakening) = 1/3.
  - Uniqueness Ruling:
    - If asking for “the probability of the coin’s actual state” → Content(Level 1) ⇒ 1/2.
    - If asking for “the probability given the current awakening” → Content(Level 2) ⇒ 1/3.
  - Contradiction Root: Treating two distinct probability levels, Content(Level 1) ≠ Content(Level 2), as the same question.
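Both numbers fall out of one exact computation under the standard protocol (heads → one awakening, tails → two awakenings), which makes the level split explicit:

```python
from fractions import Fraction

# Level 1: the coin's physical state.
p_heads = Fraction(1, 2)

# Level 2: condition on "this is an awakening" by weighting each branch
# by its number of awakenings.
awakenings = {"heads": 1, "tails": 2}
total = sum(Fraction(1, 2) * n for n in awakenings.values())
p_heads_given_awake = (Fraction(1, 2) * awakenings["heads"]) / total

print(p_heads)              # 1/2
print(p_heads_given_awake)  # 1/3
```

The halfer and thirder answers are both correct, each for its own Content() domain.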
3.14. Pascal’s Wager in Modern Contradiction
- Problem: If multiple religions’ gods each claim “I alone am true,” how does a rational person bet?
- Theoretical Deconstruction:
  - Category Mistake: Conflating Content(god) domains:
    - Level 1: Content(god X) = the divine description in a specific religion’s doctrine.
    - Level 2: Content(omnipotent entity) = an abstract supreme being beyond specific doctrines.
  - Ruling:
    - If comparing the truth of specific religions’ gods → each Content(god X) differs ⇒ distinct levels.
    - If asking “does a supreme entity exist” → define Content(omnipotent entity) independently of specific religions.
3.15. Unexpected Execution Paradox
- Problem: A judge announces, “You will be executed unexpectedly next week.” The prisoner deduces that this cannot happen, yet the execution occurs.
- Theoretical Deconstruction:
  - Category Mistake: Swapping Content(unexpected) from the “prisoner’s cognitive state” to an “objective time point.”
  - Correct Definition: Content(unexpected) = the prisoner’s inability to be certain of the execution the day before.
  - Conclusion: The execution day inevitably exists (objective time flow), while Content(unexpected) depends solely on the prisoner’s cognitive state; the two belong to different levels.
4. Applications
4.1. The Dilemma of Personal Identity and Existing Theories
- Let P be a person-stage (a personal time slice).
- Define it as an ordered pair of its core information structure and its carrier.
- Content(P): the core information structure of the person at that time slice (e.g., a specific pattern of perceptions, memories, personality traits, and cognitive functions).
- The carrier: whatever instantiates this information structure (e.g., a specific brain).
4.2. “Spacetime Leap” as a Logical Necessity of Coordinate Decoupling and Rebinding
4.3. Ethical Dilemmas and Existing Theories
4.3.1. An Information-Based Content Analysis Framework: Advancing Traditional Goals
4.3.2. Core Derivation: A Dissolving Analysis of Traditional Dilemmas
4.3.3. Implications of the New Framework: Pursuit of Harmony and Stability
5. Conclusion
References
- Leibniz, G. W. (1714). Principle of the Identity of Indiscernibles (in Monadology).
- Frege, G. (1892). Über Sinn und Bedeutung (On Sense and Reference).
- Floridi, L. (2011). The Philosophy of Information. Oxford University Press.
- Russell, B. (1905). On Denoting. Mind.
- Quine, W. V. O. (1950). Identity, Ostension, and Hypostasis. The Journal of Philosophy.
- Shannon, C. E. (1948). A Mathematical Theory of Communication. The Bell System Technical Journal.
- Chalmers, D. J. (2018). The Meta-Problem of Consciousness.
- Putnam, H. (1967). The Nature of Mental States.
- Everett, H. III (1957). “Relative State” Formulation of Quantum Mechanics. Reviews of Modern Physics, 29(3), 454–462.
- Linde, A. (1986). Eternally Existing Self-Reproducing Chaotic Inflationary Universe. Physics Letters B, 175(4), 395–400.
- Tononi, G. (2004). An Information Integration Theory of Consciousness. BMC Neuroscience, 5(1), 42.
- Kandel, E. R., Schwartz, J. H., & Jessell, T. M. (2000). Principles of Neural Science. McGraw-Hill.
- Tegmark, M. (1998). The Interpretation of Quantum Mechanics: Many Worlds or Many Words? Fortschritte der Physik, 46(6–8), 855–862.
- Poincaré, H. (1890). Sur le problème des trois corps et les équations de la dynamique. Acta Mathematica, 13, 1–270.
- Zurek, W. H. (2009). Quantum Darwinism. Nature Physics, 5(3), 181–188.
- Zurek, W. H. (2003). Decoherence, Einselection, and the Quantum Origins of the Classical. Reviews of Modern Physics, 75(3), 715–775.
- Black, M. (1952). The Identity of Indiscernibles. Mind, 61(242), 153–164.
- Chandler, H. S. (1975). Rigid Designation. The Journal of Philosophy, 72(13), 363–369.
- French, S., & Redhead, M. (1988). Quantum Physics and the Identity of Indiscernibles. The British Journal for the Philosophy of Science, 39(2), 233–246.
- Kripke, S. A. (1980). Naming and Necessity. Harvard University Press.
- Leibniz, G. W. (1714/1898). The Monadology and Other Philosophical Writings. Trans. Robert Latta. Oxford University Press.
- Putnam, H. (1975). The Meaning of Meaning. Minnesota Studies in the Philosophy of Science, 7, 131–193.
- Wiggins, D. (1980). Sameness and Substance. Harvard University Press.
- Heller, M. (1984). Temporal Parts of Four-Dimensional Objects. Philosophical Studies.
- Sider, T. (2001). Four-Dimensionalism: An Ontology of Persistence and Time. Oxford University Press.
- Sider, T. (1996). All the World’s a Stage. Australasian Journal of Philosophy.
- Hawley, K. (2001). How Things Persist. Oxford University Press.
- Muller, F. A., & Saunders, S. (2008). Discerning Fermions. The British Journal for the Philosophy of Science.
- Krause, D. (2011). Logical Aspects of Quantum Non-Individuality.
- Parfit, D. (1984). Reasons and Persons. Oxford University Press.
- Williams, B. (1970). The Self and the Future. The Philosophical Review.
- Trivers, R. L. (1971). The Evolution of Reciprocal Altruism. The Quarterly Review of Biology, 46(1), 35–57.
- Axelrod, R., & Hamilton, W. D. (1981). The Evolution of Cooperation. Science, 211(4487), 1390–1396.
- Hamilton, W. D. (1964). The Genetical Evolution of Social Behaviour. I & II. Journal of Theoretical Biology, 7(1), 1–52.
- Nowak, M. A. (2006). Five Rules for the Evolution of Cooperation. Science, 314(5805), 1560–1563.
- Street, S. (2006). A Darwinian Dilemma for Realist Theories of Value. Philosophical Studies, 127(1), 109–166.
- Fehr, E., & Fischbacher, U. (2003). The Nature of Human Altruism. Nature, 425(6960), 785–791.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).