Preprint
Article

This version is not peer-reviewed.

The Physical Church-Turing Thesis: Computation as a Fundamental Physical Process

Submitted:

31 August 2025

Posted:

01 September 2025

You are already at the latest version

Abstract
We propose a fundamental revision of the Church-Turing thesis that recognizes computation as an inherently physical process constrained by the laws of thermodynamics, quantum mechanics, and relativity. The classical Church-Turing thesis states that any effectively calculable function can be computed by a Turing machine, but this formulation ignores the physical substrate required for computation. We establish the Physical Church-Turing Thesis: any effectively calculable function that can be physically computed must respect the fundamental constraints imposed by physical law. We develop a rigorous framework based on erasure complexity and reversible computation that leads to provable energy lower bounds for computational problems. Our analysis shows that physical computability can form a proper subset of Turing computability under explicit resource constraints, as formalized by our erasure-based framework. We provide concrete theorems connecting time-space-coherence trade-offs to unavoidable bit erasures, yielding quantitative energy bounds via Landauer's principle. The framework unifies computation theory with fundamental physics and provides practical guidance for energy-efficient algorithm design.
Keywords: 
;  ;  ;  ;  ;  ;  ;  

1. Introduction

The Church-Turing thesis, formulated independently by Alonzo Church and Alan Turing in the 1930s, stands as one of the foundational principles of computer science. In its classical form, the thesis states that any function that can be effectively calculated can be computed by a Turing machine. This principle has guided the development of computation theory for nearly a century and underlies our understanding of what is computable.
However, the classical formulation contains a crucial gap: it treats computation as an abstract mathematical process while ignoring the physical substrate required for any actual computation. Real computers are physical systems that must obey the laws of thermodynamics, quantum mechanics, and relativity. These physical constraints impose fundamental limits on what can actually be computed in our universe, regardless of mathematical computability.

1.1. The Gap Between Mathematical and Physical Computation

The distinction between mathematical and physical computation has become increasingly important as we approach fundamental physical limits in computing technology. Consider the following physical constraints:
Thermodynamic Constraints: Landauer’s principle establishes that erasing one bit of information requires at least k B T ln 2 energy, where k B is Boltzmann’s constant and T is temperature. This provides a fundamental lower bound on the energy cost of irreversible computation.
Quantum Speed Limits: The Margolus-Levitin theorem proves that a quantum system with energy E can perform at most 2 E / ( π ) orthogonalizing operations per second, where is the reduced Planck constant. This provides an absolute speed limit for any physical computation.
Information Density Bounds: The Bekenstein bound limits the amount of information that can be stored in a finite region of space with finite energy. For a spherical region of radius R containing energy E, the maximum information content is 2 π E R / ( c ln 2 ) .
Relativistic Constraints: Information cannot travel faster than light, imposing fundamental limits on communication in distributed computational systems.
These physical constraints suggest that not all Turing-computable functions can be physically realized with finite resources.

1.2. Toward a Physical Church-Turing Thesis

We propose a fundamental revision of the Church-Turing thesis that explicitly acknowledges the physical nature of computation:
Physical Church-Turing Thesis: Any effectively calculable function that can be physically computed must respect the fundamental constraints imposed by physical law, including thermodynamics, quantum mechanics, and relativity.
This leads to the recognition that physical computability, when properly defined with asymptotic resource scaling, can form a proper subset of Turing computability under explicit resource constraints we formalize.

1.3. Our Contributions

This paper develops the theoretical foundations of the Physical Church-Turing Thesis through:
  • Rigorous Framework: We develop a mathematically precise framework based on erasure complexity that connects physical constraints to computational limits.
  • Provable Energy Bounds: We provide concrete theorems showing how time-space-coherence trade-offs force unavoidable bit erasures, yielding quantitative energy lower bounds.
  • Reversible Computation Integration: We properly account for Bennett’s reversible computation results to distinguish between operations and erasures.
  • Physical Complexity Classes: We define new complexity classes that account for multiple physical resources simultaneously.
  • Practical Applications: We derive implications for quantum computing, artificial intelligence, and energy-efficient algorithm design.

2. Mathematical Framework for Physical Computation

2.1. Asymptotic Physical Computability

We begin by addressing the fundamental issue of defining physical computability in a way that is neither trivial nor vacuous.
Definition 1 
(Asymptotic Physical Computability). Fix a device model M with resource counters R = ( E , T , S , B , C ) representing energy, time, space, bandwidth, and coherence respectively. A total function f : { 0 , 1 } * { 0 , 1 } * is M -physically computable under resource budget R * ( n ) if there exists a family of devices { M n M } and an algorithm A such that on all inputs x with | x | = n , A M n ( x ) halts with output f ( x ) and uses resources R * ( n ) .
We say f isphysically scalableif R * ( n ) is polynomial in n component-wise.
Model Assumptions: Throughout this work, we assume: (i) ambient temperature T is fixed and finite; (ii) logical erasure refers to irreversible discarding of computationally relevant information; (iii) energy E represents free energy available to computation; (iv) coherence C represents the maximum number of quantum degrees of freedom that can be maintained in superposition simultaneously; (v) all bounds apply in the weak gravity regime with well-defined system boundaries.

Model Assumptions

Ambient temperature T is fixed; Q counts dissipated heat (free energy lost to the bath). A logical erasure is any many-to-one map on the computational degrees of freedom. Energy E denotes average available energy for QSL bounds; space S is peak working memory; bandwidth B counts communicated bits; coherence C bounds simultaneously maintained live state. All asymptotics are in input size n.
This definition avoids the triviality of requiring finite resources for all inputs while maintaining meaningful constraints on resource scaling.

2.2. Erasure Complexity and Landauer’s Principle

The key insight for connecting physical constraints to computational complexity is to focus on erasure complexity rather than operation count.
Definition 2 
(Erasure Complexity). For a computational problem f and input size n, define EC S , T ( f , n ) as the minimum number oflogically irreversible bit erasuresneeded by any algorithm that computes f on inputs of length n within time T ( n ) and workspace S ( n ) .
Lemma 1 
(Landauer Erasure Cost). Let a computation at ambient temperature T perform E erase logical bit erasures. Then the dissipated heat satisfies
Q k B T ln 2 · E erase .
Proof: This follows directly from Landauer’s principle, which has been experimentally verified by Bérut et al. (2012) and others.

2.3. Reversible Computation and Bennett’s Results

A crucial component of our framework is properly accounting for reversible computation, which can dramatically reduce erasure requirements.
Theorem 1 
(Bennett Reversible Simulation). For any ε > 0 , a multitape Turing machine running in time T and space S can be simulated by a logically reversible machine in time T 1 + ε and space O ( S log T ) .
Implication: The minimum energy depends on the number of erasures, not total operations. Via reversible computation, erasure can be made sublinear in step count (up to output and cleanup costs).
However, when both time and space are simultaneously constrained, reversible strategies may be forced to perform erasures:
Definition 3 
(Computation DAG and Reversible Pebbling Price). Let A be an algorithm class (e.g., circuits of fan-in 2 and bounded depth, or tree-like resolution refutations). For input length n, any A A induces a DAG G ( A , n ) whose sinks are the output/accept nodes. Let Peb S , T rev ( G ) be the minimum number of irreversible discards (erasures) required by any reversible pebbling schedule that uses at most S pebbles and completes within deadline T.
Theorem 2 
(Time-Space-Erasure Tradeoff, Precise). Fix an algorithm class A and budgets ( S , T ) . For every A A on inputs of length n, any implementation that computes all sinks of G ( A , n ) within workspace S and deadline T must perform at least
E erase Peb S , T rev ( G ( A , n ) )
logically irreversible bit erasures. Consequently Q k B T ln 2 · Peb S , T rev ( G ( A , n ) ) .
Proof Sketch: The proof uses reversible pebble game lower bounds. When both space (pebbles) and time are constrained below the reversible pebbling threshold, any pebbling strategy must perform "forced unpebblings" without available prerequisites, corresponding to irreversible information discards and thus erasures.

3. Quantum Speed Limits and Universal Computational Bounds

3.1. Margolus-Levitin Bound

The quantum speed limit provides a fundamental constraint on the rate of computation.
Theorem 3 
(Universe Operations Bound). Any device of average energy E operating for time T performs at most
N 2 E T π
elementary orthogonalizing operations (Margolus-Levitin bound).
For the observable universe with E 10 70 J and age T 4.3 × 10 17 s, this gives N = Θ ( 10 120 ) total possible operations.

3.2. Cosmic-Uniform Unrealizability

We can now formulate a precise notion of physical unrealizability:
Lemma 2 
(Cosmic-Uniform Unrealizability). Suppose the total free energy available to computation in our universe is upper bounded by E max . If computing f on inputs of length n requires, in any implementation within model M , at least e ( n ) logically necessarybit erasures with k B T ln 2 · e ( n ) > E max for all sufficiently large n, then f is not uniformly realizable within a single such universe.
Remark: The burden is to lower-bound necessary erasures e ( n ) , not step count. This requires problem-specific analysis using reversible pebbling techniques.

4. Physical Complexity Classes

4.1. Definitions and Basic Properties

We define complexity classes that account for multiple physical resources simultaneously:
Definition 4 
(Physical Complexity Classes). Fix resource budgets R * ( n ) = ( E * ( n ) , T * ( n ) , S * ( n ) , B * ( n ) , C * ( n ) ) .
  • PHYS - P consists of decision problems solvable by devices respecting R * ( n ) = poly ( n ) component-wise.
  • PHYS - NP consists of problems verifiable under the same resource constraints.
  • PHYS - PSPACE consists of problems solvable in polynomial space with polynomial energy.
Proposition 1 
(Physical Complexity Hierarchy). PHYS - P PHYS - NP PHYS - PSPACE .
Note: The inclusion PHYS - NP PHYS - PSPACE assumes that the verifier’s energy consumption is also polynomially bounded, not just the prover’s certificate length.
Open Problem: Does PHYS - P PHYS - NP ? This question depends on whether polynomial-time verification can be achieved with significantly fewer erasures than polynomial-time solution.

4.2. Energy-Based Separations

Rather than claiming unconditional collapses, we can prove conditional separations based on erasure complexity:
Theorem 4 
(Conditional Energy Separation). If there exist problem families in NP requiring ω ( poly ( n ) ) erasures for any polynomial-time, polynomial-space algorithm, then PHYS - P PHYS - NP under polynomial energy constraints.

5. Concrete Applications and Lower Bounds

5.1. SAT and Reversible Pebbling

We can derive concrete energy lower bounds for specific problem families using reversible pebbling techniques:
Theorem 5 
(Energy Lower Bound for Pebbling CNFs under Tree-like Resolution). Let ( F n ) be CNF families obtained from pebbling graphs ( G n ) . If Peb rev ( G n ) = Ω ( ϕ ( n ) ) , then any tree-like resolution refutation of F n that uses workspace S = poly ( n ) and completes within T = poly ( n ) must perform E erase Ω ( ϕ ( n ) ) erasures. Hence Q k B T ln 2 · Ω ( ϕ ( n ) ) .
Proof Sketch: The proof uses the correspondence between tree-like resolution space and reversible pebbling numbers. When space is constrained below the reversible pebbling threshold, the resolution algorithm must discard clauses irreversibly to make progress within the time bound.

5.2. Quantum Computing and Physical Constraints

For quantum algorithms, we can derive rigorous bounds based on the quantum speed limit:
Theorem 6 
(Per-gate Quantum Speed Limit). Let H ( t ) = H 0 + H c ( t ) generate gate U on the computational subspace. Then the time τ to implement U satisfies
0 τ H c ( t ) d t 2 L ( U , 1 ) ,
where · is any submultiplicative operator norm and L is a unitary distance (e.g., Fubini-Study-induced or geodesic on S U ( d ) ).
Proposition 2 
(Power-limited QEC Throughput). For surface code with distance d, let E cyc ( d ) be the control+measurement energy per cycle and f cyc ( d ) the cycle rate needed for target logical error. If P cold is the available cooling power at the relevant stages, then the steady-state logical-gate rate satisfies
Throughput P cold E cyc ( d ) · 1 g ( d ) ,
where g ( d ) counts the number of cycles per logical gate (including magic-state distillation).

6. Information-Theoretic Bounds on Intelligence

6.1. Task-Based Capability Bounds

Rather than defining a vague "intelligence capacity," we provide rigorous bounds on specific computational tasks:
Lemma 3 
(Landauer-Fano Bound). For any M-ary decision task with error probability at most δ, any device at temperature T must dissipate
Q k B T ln 2 · ( 1 δ ) log 2 M 1 .
Proof: Fano’s inequality gives P e 1 I + 1 log M , so I ( 1 δ ) log M 1 . Each bit of irreversible information gain costs k B T ln 2 by Landauer’s principle.

6.2. Decision Rate Bounds

Combining quantum speed limits with information-theoretic requirements:
Theorem 7 
(Physical Decision Bounds). For an M-ary decision with error probability at most δ:
(i)
Energy-limited: Over energy budget Q, the number of decisions satisfies
# decisions Q k B T ln 2 · ( ( 1 δ ) log 2 M 1 ) .
(ii)
Speed-limited: If each decision requires κ ops orthogonalizing steps, then
Rate 2 E π · 1 κ ops .
Proof: Part (i) follows from Lemma 6.1 by dividing the total energy budget by the per-decision energy requirement. Part (ii) follows from the Margolus-Levitin bound by dividing the total operation rate by the operations required per decision.

7. Information Storage and the Bekenstein Bound

7.1. Storage Density Limits

Proposition 3 
(Bekenstein Storage Bound). For a weak-gravity, well-isolated system of energy E and radius R, the total storable information (bits) is bounded by
I 2 π E R c ln 2 .
Scope and Limitations: This bound applies in the weak gravity regime with well-defined system boundaries. The bound assumes the system can be treated as isolated and that quantum field theory corrections are negligible. Edge cases involving black hole formation, strong gravitational fields, or systems with poorly defined boundaries may require modified analysis.

8. Implications and Future Directions

8.1. Algorithm Design Principles

The Physical Church-Turing Thesis suggests several principles for optimal algorithm design:
  • Minimize Erasures: Design algorithms to minimize irreversible operations through reversible computation techniques.
  • Time-Space-Energy Trade-offs: Optimize across multiple resource dimensions simultaneously rather than focusing on single metrics.
  • Coherence Management: For quantum algorithms, balance coherence requirements against decoherence timescales.
  • Energy-Aware Complexity: Consider energy complexity alongside traditional time and space complexity.

8.2. Open Problems

Several important questions remain open:
  • Erasure Complexity Characterization: Develop general techniques for lower-bounding necessary erasures for broad problem classes.
  • Physical vs Classical Separations: Determine which classical complexity separations persist under physical constraints.
  • Quantum Coherence Complexity: Develop a rigorous theory of coherence as a computational resource.
  • Biological Computation: Extend the framework to understand how biological systems achieve computational efficiency.

9. Conclusions

We have established a rigorous mathematical framework for the Physical Church-Turing Thesis that recognizes computation as an inherently physical process. Our main contributions include:
  • Rigorous Definitions: We provided mathematically precise definitions of physical computability that avoid triviality while maintaining meaningful constraints.
  • Erasure-Based Energy Bounds: We developed a framework based on erasure complexity that yields provable energy lower bounds via Landauer’s principle.
  • Reversible Computation Integration: We properly accounted for Bennett’s reversible computation results to distinguish between operations and erasures.
  • Concrete Lower Bounds: We provided specific theorems connecting time-space-coherence trade-offs to unavoidable erasures for explicit problem families.
  • Quantum Applications: We derived rigorous bounds for quantum computing based on quantum speed limits and error correction requirements.
  • Information-Theoretic Foundations: We established precise bounds on decision-making capabilities using Fano’s inequality and Landauer’s principle.
The Physical Church-Turing Thesis represents a paradigm shift from abstract mathematical computation to physically grounded computational theory. By recognizing that computation is fundamentally constrained by physical law, we gain both theoretical insights into the nature of computation and practical guidance for designing optimal computational systems.
Future work should focus on developing general techniques for characterizing erasure complexity, exploring the implications for specific computational problems, and investigating how biological and artificial systems can be designed to operate efficiently within fundamental physical constraints.
The framework developed here provides a foundation for understanding the ultimate capabilities and limitations of any computational system that can exist in our physical universe, bridging computer science and fundamental physics in a mathematically rigorous way.

AI Assistance Statement

Language and editorial suggestions were supported by AI tools; the author takes full responsibility for the content.

Data Availability Statement

No new data were generated or analyzed in this study.

Acknowledgments

The author thanks the Octonion Group research team for valuable discussions and computational resources. Special recognition goes to the broader computational complexity and quantum computing communities whose foundational work made this synthesis possible. The author particularly acknowledges the detailed technical feedback that helped strengthen the mathematical rigor of this work.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Church, A. An unsolvable problem of elementary number theory. American Journal of Mathematics 1936, 58(2), 345–363. [Google Scholar] [CrossRef]
  2. Turing, A. M. On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society 1936, 42(2), 230–265. [Google Scholar]
  3. Landauer, R. Irreversibility and heat generation in the computing process. IBM Journal of Research and Development 1961, 5(3), 183–191. [Google Scholar] [CrossRef]
  4. Bérut, A.; Arakelyan, A.; Petrosyan, A.; Ciliberto, S.; Dillenschneider, R.; Lutz, E. Experimental verification of Landauer’s principle linking information and thermodynamics. Nature 2012, 483(7388), 187–189. [Google Scholar] [CrossRef] [PubMed]
  5. Bennett, C. H. Logical reversibility of computation. IBM Journal of Research and Development 1973, 17(6), 525–532. [Google Scholar] [CrossRef]
  6. Bennett, C. H. Time/space trade-offs for reversible computation. SIAM Journal on Computing 1989, 18(4), 766–776. [Google Scholar] [CrossRef]
  7. Margolus, N.; Levitin, L. B. The maximum speed of dynamical evolution. Physica D: Nonlinear Phenomena 1998, 120(1-2), 188–195. [Google Scholar] [CrossRef]
  8. Bekenstein, J. D. Universal upper bound on the entropy-to-energy ratio for bounded systems. Physical Review D 1981, 23(2), 287–298. [Google Scholar] [CrossRef]
  9. Lloyd, S. Ultimate physical limits to computation. Nature 2000, 406(6799), 1047–1054. [Google Scholar] [CrossRef] [PubMed]
  10. Vitányi, P. M. B. Locality, communication, and interconnect length in multicomputers. SIAM Journal on Computing 1990, 19(4), 683–707. [Google Scholar] [CrossRef]
  11. Lange, K.-J.; McKenzie, P.; Tapp, A. Reversible space equals deterministic space. Journal of Computer and System Sciences 2000, 60(2), 354–367. [Google Scholar] [CrossRef]
  12. Chan, S. M. Just a pebble game. In Proceedings of the 2013 IEEE Conference on Computational Complexity; 2013; pp. 133–143. [Google Scholar]
  13. Torán, J.; Wörz, F. Reversible pebble games and the relation between tree-like and general resolution space. Computational Complexity 2021, 30(1), 1–32. [Google Scholar] [CrossRef]
  14. Knill, E. An analysis of Bennett’s pebble game. arXiv 1995, arXiv:math/9508218. [Google Scholar]
  15. Cover, T. M.; Thomas, J. A. Elements of Information Theory, 2nd ed.; Wiley, 2006. [Google Scholar]
  16. Fano, R. M. Transmission of Information: A Statistical Theory of Communications; MIT Press, 1961. [Google Scholar]
  17. Shor, P. W. Algorithms for quantum computation: discrete logarithms and factoring. Proceedings 35th Annual Symposium on Foundations of Computer Science; 1994; pp. 124–134. [Google Scholar]
  18. Preskill, J. Quantum computing in the NISQ era and beyond. Quantum 2018, 2, 79. [Google Scholar] [CrossRef]
  19. Fowler, A. G.; Mariantoni, M.; Martinis, J. M.; Cleland, A. N. Surface codes: Towards practical large-scale quantum computation. Physical Review A 2012, 86(3), 032324. [Google Scholar] [CrossRef]
  20. Deutsch, D. Quantum theory, the Church-Turing principle and the universal quantum computer. Proceedings of the Royal Society of London A 1985, 400(1818), 97–117,. [Google Scholar]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permit the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.
Prerpints.org logo

Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.

Subscribe

Disclaimer

Terms of Use

Privacy Policy

Privacy Settings

© 2025 MDPI (Basel, Switzerland) unless otherwise stated