1. Introduction
The fundamental question of computational complexity—what resources are required to solve a given problem—has traditionally been studied through abstract models that ignore the physical substrate of computation. While this abstraction has enabled remarkable theoretical progress, it leaves open crucial questions about the ultimate limits of what is computationally possible in our physical universe.
Recent advances in quantum computing, neuromorphic architectures, and energy-efficient computing have highlighted the need for a more comprehensive understanding of computational limits that accounts for the physical laws governing information processing. This paper develops such a framework by treating computation as a fundamentally physical process subject to the constraints of thermodynamics, quantum mechanics, and relativity.
1.1. The Multi-Resource Challenge
Classical complexity theory typically focuses on single resources—time or space—in isolation. However, real computational systems must simultaneously manage multiple constrained resources:
Energy: Thermodynamic costs of information processing and erasure
Time: Quantum mechanical speed limits on state evolution
Space: Physical limits on information density and storage
Bandwidth: Communication constraints in distributed systems
Coherence: Quantum decoherence and error accumulation
The central challenge is understanding how these resources interact and constrain each other. Can we trade energy for time? How does spatial distribution affect communication requirements? What are the fundamental limits imposed by quantum mechanics and thermodynamics?
1.2. Our Approach: Geometric Capacity Regions
We address these questions by developing a geometric framework that treats computational resources as coordinates in a multidimensional space. The key insight is that physical laws impose constraints that define a capacity region—the set of all physically realizable resource allocations for a given computational problem.
Our main contributions are:
Unified Physical Framework: We derive fundamental bounds from first principles of physics, connecting Landauer’s principle, quantum speed limits, the Bekenstein bound, and statistical mechanics.
Capacity Region Theorem: We prove that any algorithm achieving error $\epsilon$ on problem class $\mathcal{P}$ must satisfy a joint resource bound of the form $g(S, T, H, E, C) \ge I_\epsilon(\mathcal{P})$, up to corrections governed by $\kappa$, where $g$ is a resource metric, $I_\epsilon(\mathcal{P})$ is the information complexity, and $\kappa$ characterizes the geometric curvature of the computational landscape.
Constructive Achievability: We provide explicit algorithms that achieve these bounds within constant factors for fundamental problems including matrix operations, statistical estimation, and machine learning.
Geometric Optimization: We show how to design algorithms that follow geodesic paths in resource space, achieving optimal trade-offs between different physical constraints.
1.3. Physical Foundations
Our framework is grounded in four fundamental physical principles:
Landauer’s Principle: Any logically irreversible operation must dissipate at least $k_B T \ln 2$ of energy per bit erased, where $k_B$ is Boltzmann’s constant and $T$ is temperature.
Quantum Speed Limits: The Margolus-Levitin theorem bounds the rate of quantum evolution: a system with energy $E$ can perform at most $2E/(\pi\hbar)$ operations per second.
Bekenstein Bound: The maximum information that can be stored in a region of space with energy $E$ and radius $R$ is bounded by $2\pi E R / (\hbar c \ln 2)$ bits.
Statistical Limits: The accuracy of any statistical estimator is fundamentally limited by the Fisher information and Cramér-Rao bounds.
These principles are not merely theoretical—they have been experimentally verified and represent absolute limits that cannot be circumvented by technological advances.
1.4. Applications and Impact
The capacity region framework has immediate applications to:
Algorithm Design: Guiding the development of algorithms that optimally balance multiple resource constraints
Hardware Optimization: Informing the design of energy-efficient and quantum-coherent computing systems
Fundamental Limits: Establishing absolute bounds on artificial intelligence and computational capabilities
New Paradigms: Suggesting novel computational approaches based on thermodynamic and quantum principles
The framework also provides a bridge between computer science and fundamental physics, opening new research directions at the intersection of computation, information theory, and physical law.
2. Physical Foundations of Computational Limits
We begin by establishing the fundamental physical constraints that govern all computation. These constraints arise from well-established laws of physics and represent absolute limits that cannot be overcome by technological advances.
2.1. Thermodynamic Constraints: Landauer’s Principle
The most fundamental constraint on computation comes from thermodynamics. Landauer’s principle, first proposed in 1961 and experimentally verified in 2012, states that any logically irreversible operation must dissipate energy.
Theorem 1 (Landauer’s Principle). Any computation that erases $n$ bits of information must dissipate at least $n\, k_B T \ln 2$ of energy, where $k_B \approx 1.381 \times 10^{-23}$ J/K is Boltzmann’s constant and $T$ is the temperature.
This bound is achieved by reversible computation followed by isothermal erasure. For most algorithms, the number of bits erased is proportional to the total number of operations, leading to the energy bound

$$E \;\ge\; \alpha\, I\, k_B T \ln 2,$$

where $I$ is the information complexity of the problem and $\alpha \in (0, 1]$ is a problem-dependent constant representing the fraction of operations that require irreversible bit erasure.
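To make the bound concrete, the sketch below evaluates the Landauer floor numerically; the helper name and the room-temperature default are our illustrative choices, not part of the framework.

```python
from math import log

K_B = 1.380649e-23  # Boltzmann's constant, J/K (exact SI value)

def landauer_energy(bits_erased: float, temperature: float = 300.0,
                    alpha: float = 1.0) -> float:
    """Minimum dissipated energy (J) to erase `bits_erased` bits at
    `temperature` K, scaled by the irreversibility fraction `alpha`."""
    return alpha * bits_erased * K_B * temperature * log(2)

# Erasing 10^12 bits at room temperature costs at least ~2.9 nJ.
print(f"{landauer_energy(1e12):.3e} J")
```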
2.2. Quantum Mechanical Constraints: Speed Limits
Quantum mechanics imposes fundamental limits on the speed of computation through the time-energy uncertainty relation. The most general form is given by the Margolus-Levitin theorem.
Theorem 2 (Margolus-Levitin Quantum Speed Limit). A quantum system with average energy $E$ above its ground state can evolve to an orthogonal state in time no less than $t_{\min} = \pi\hbar/(2E)$, where $\hbar \approx 1.055 \times 10^{-34}$ J·s is the reduced Planck constant.
This implies that a system with energy $E$ can perform at most $2E/(\pi\hbar)$ distinguishable operations per second. For computational problems requiring $I$ operations, the execution time satisfies

$$T \;\ge\; \frac{\pi \hbar\, I}{2E}.$$

This bound is tight for optimal quantum algorithms and represents the fundamental speed limit of any physical computation.
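A minimal calculator for this speed limit is sketched below; the function names are ours, and the bound is the idealized Margolus-Levitin ceiling rather than a statement about any particular hardware.

```python
from math import pi

HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def max_ops_per_second(energy: float) -> float:
    """Margolus-Levitin ceiling on distinguishable operations per second
    for a system with average energy `energy` (J) above its ground state."""
    return 2.0 * energy / (pi * HBAR)

def min_time(num_ops: float, energy: float) -> float:
    """Lower bound on execution time (s) for `num_ops` operations."""
    return pi * HBAR * num_ops / (2.0 * energy)

# One joule of energy supports at most ~6e33 operations per second.
print(f"{max_ops_per_second(1.0):.3e} ops/s")
```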
2.3. Relativistic Constraints: Information Density
General relativity, through the holographic principle and black hole thermodynamics, imposes limits on information density. The Bekenstein bound provides the strongest such constraint.
Theorem 3 (Bekenstein Bound). The maximum information $I$ that can be stored in a spherical region of radius $R$ containing energy $E$ is bounded by

$$I \;\le\; \frac{2\pi E R}{\hbar c \ln 2} \ \text{bits},$$

where $c \approx 2.998 \times 10^8$ m/s is the speed of light.
For computational problems requiring storage of $I$ bits with energy budget $E$, the spatial extent must satisfy

$$R \;\ge\; \frac{\hbar c \ln 2}{2\pi E}\, I.$$

This bound becomes relevant for high-density information storage and quantum computing systems.
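The sketch below evaluates the bound for a given energy budget and radius; the example figures (one kilogram of mass-energy in a 10 cm sphere) are purely illustrative.

```python
from math import log, pi

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s

def bekenstein_bits(energy: float, radius: float) -> float:
    """Bekenstein bound: maximum number of bits storable in a sphere of
    `radius` meters containing `energy` joules."""
    return 2.0 * pi * energy * radius / (HBAR * C * log(2))

# One kilogram of mass-energy (E = m c^2) in a 10 cm sphere could in
# principle hold on the order of 10^42 bits.
print(f"{bekenstein_bits(1.0 * C**2, 0.1):.3e} bits")
```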
2.4. Statistical Constraints: Accuracy Limits
The accuracy of any statistical computation is fundamentally limited by the amount of information available and the inherent noise in the system. The Cramér-Rao bound provides the fundamental limit.
Theorem 4 (Cramér-Rao Bound). For any unbiased estimator $\hat{\theta}$ of parameter $\theta$ based on $n$ samples, the variance satisfies

$$\mathrm{Var}(\hat{\theta}) \;\ge\; \frac{1}{n\, \mathcal{I}(\theta)},$$

where $\mathcal{I}(\theta)$ is the Fisher information of a single sample.
For computational problems requiring accuracy $\epsilon$, this typically leads to sample complexity scaling as $n = \Theta(1/\epsilon^2)$, which translates to information complexity

$$I_\epsilon \;=\; \Theta\!\left(\frac{\beta}{\epsilon^{2}}\right),$$

where the constant $\beta$ depends on the specific statistical model.
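As a quick empirical illustration, the following Monte Carlo sketch compares the Cramér-Rao floor with the observed variance of the sample mean for a Gaussian location model, where the sample mean is known to be efficient; the parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, n, trials = 2.0, 100, 20000

# Fisher information per sample for the mean of N(theta, sigma^2).
fisher_per_sample = 1.0 / sigma**2
crb = 1.0 / (n * fisher_per_sample)  # Cramér-Rao floor on the variance

# Empirical variance of the sample-mean estimator across many trials.
estimates = rng.normal(0.0, sigma, size=(trials, n)).mean(axis=1)
print(f"CRB: {crb:.4f}, empirical: {estimates.var():.4f}")  # both ~0.04
```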
2.5. Communication Constraints: Bandwidth Limits
In distributed computational systems, communication between processors is constrained by the finite speed of information transmission and channel capacity.
For a system with $p$ processors separated by distance $L$, the communication time for exchanging $H$ bits is bounded by

$$T_{\mathrm{comm}} \;\ge\; \frac{L}{c} + \frac{H}{B},$$

where $B$ is the channel bandwidth. This leads to trade-offs between spatial distribution and communication overhead.
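A minimal sketch of this bound, assuming a single point-to-point channel; the example figures are illustrative.

```python
C = 2.99792458e8  # speed of light, m/s

def min_comm_time(bits: float, distance_m: float, bandwidth_bps: float) -> float:
    """Lower bound on the time (s) to move `bits` over `distance_m` through
    a channel of `bandwidth_bps`: light-travel latency plus serialization."""
    return distance_m / C + bits / bandwidth_bps

# 1 GB across a 100 m machine room at 100 Gb/s: the light-travel latency
# is negligible and serialization dominates (~0.08 s).
print(f"{min_comm_time(8e9, 100.0, 100e9):.6f} s")
```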
3. The Capacity Region Framework
We now develop the mathematical framework for characterizing the joint constraints imposed by physical laws on computational resources. The key insight is to treat computation as a trajectory through a multidimensional resource space, where physical laws define the boundaries of feasible regions.
3.1. Resource Space and Metrics
We define the computational resource space as $\mathcal{R} = \mathbb{R}_{>0}^{5}$ with coordinates:
S: Space (bits of memory)
T: Time (seconds)
H: Bandwidth (bits communicated)
E: Energy (joules)
C: Coherence (quantum coherence time)
To compare resource allocations, we introduce a family of resource metrics. The canonical metric is a weighted combination of the five coordinates, with weights $w_S, w_T, w_H, w_E, w_C \ge 0$ reflecting the relative cost of each resource. For problems with specific structure, we use specialized metrics:

I/O-dominant: weights concentrated on bandwidth $H$
Energy-constrained: weights concentrated on energy $E$
Quantum-coherent: weights concentrated on coherence $C$
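The precise functional form of such metrics depends on the weighting scheme; the sketch below uses a plain weighted linear combination as one concrete possibility. The `Resources` container, the field names, and the weight values are our assumptions, not fixed by the framework.

```python
from dataclasses import dataclass

@dataclass
class Resources:
    space_bits: float      # S
    time_s: float          # T
    bandwidth_bits: float  # H
    energy_j: float        # E
    coherence_s: float     # C

def resource_metric(r: Resources, weights: dict) -> float:
    """Illustrative weighted resource metric; the linear form is an
    assumption made for this sketch, not the paper's fixed definition."""
    return (weights.get("S", 0) * r.space_bits
            + weights.get("T", 0) * r.time_s
            + weights.get("H", 0) * r.bandwidth_bits
            + weights.get("E", 0) * r.energy_j
            + weights.get("C", 0) * r.coherence_s)

# Energy-constrained metric: weight energy heavily, time lightly.
alloc = Resources(1e9, 2.0, 1e6, 0.5, 0.0)
print(resource_metric(alloc, {"E": 100.0, "T": 1.0}))  # 52.0
```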
3.2. Physical Constraint Manifolds
Each physical law defines a constraint manifold in resource space. The feasible region is the intersection of all constraint manifolds.
Landauer Manifold: From thermodynamic constraints, $E \ge \alpha\, I\, k_B T \ln 2$
Quantum Speed Manifold: From quantum mechanical constraints, $E\, T \ge \pi \hbar\, I / 2$
Bekenstein Manifold: From relativistic constraints, $S \le 2\pi E R / (\hbar c \ln 2)$
Statistical Manifold: From accuracy constraints, $I_\epsilon \ge \beta / \epsilon^2$
3.3. The Capacity Region
The capacity region for problem class $\mathcal{P}$ with error tolerance $\epsilon$ is defined as follows.

Definition 1 (Capacity Region). The capacity region $\mathcal{C}_\epsilon(\mathcal{P})$ is the set of all resource allocations $(S, T, H, E, C)$ for which there exists an algorithm that solves any instance of $\mathcal{P}$ with error at most $\epsilon$ within those resources.
The boundary of this region defines the fundamental trade-offs between different resources.
3.4. Geometric Curvature and Information Complexity
The shape of the capacity region is determined by the information complexity $I_\epsilon(\mathcal{P})$ and the geometric curvature $\kappa$ of the computational landscape.

Definition 2 (Information Complexity). The information complexity $I_\epsilon(\mathcal{P})$ is the minimum amount of information that must be processed to solve problem class $\mathcal{P}$ with error at most $\epsilon$.
For many fundamental problems:
Matrix multiplication: $I = \Theta(n^3)$ for $n \times n$ matrices
Covariance estimation: $I_\epsilon = \Theta(d^2 / \epsilon^2)$ for $d$-dimensional data
Gaussian process inference: $I = \Theta(n^3)$ for $n$ data points
The curvature $\kappa$ measures how the computational landscape deviates from flat Euclidean space. High curvature indicates strong coupling between resources, while low curvature allows independent optimization.
3.5. Main Capacity Region Theorem
Our main theoretical result characterizes the capacity region in terms of physical constants and problem structure.
Theorem 5 (Capacity Region Characterization).
For any problem class $\mathcal{P}$ and error tolerance $\epsilon$, the capacity region is cut out by the physical constraint manifolds of Section 3.2: a resource allocation $(S, T, H, E, C)$ is feasible only if it simultaneously satisfies the Landauer, quantum speed, Bekenstein, and statistical bounds instantiated with $I_\epsilon(\mathcal{P})$, with curvature corrections of order $O(\kappa)$.
This theorem provides both necessary conditions (any algorithm must satisfy these bounds) and sufficient conditions (algorithms achieving these bounds exist).
4. Constructive Achievability: Optimal Algorithms
We now demonstrate that the capacity region bounds are achievable by constructing explicit algorithms that operate near the physical limits. These constructions prove the sufficiency direction of our main theorem.
4.1. Thermodynamically Optimal Computation
To approach the Landauer bound, we design reversible algorithms that minimize irreversible bit erasure.
Algorithm 1: Reversible Matrix Multiplication

Input: Matrices $A, B \in \mathbb{R}^{n \times n}$. Output: Product $C = AB$.

1. Reversible encoding: Store $A$ and $B$ using $O(n^2)$ bits
2. Reversible arithmetic: Compute $C$ using Toffoli gates
3. Selective erasure: Erase only intermediate results
4. Output: Return $C$ with minimal energy dissipation

Resources: Energy $O(n^2\, k_B T \ln 2)$; Time $O(n^3)$ gate operations; Space $O(n^2)$ bits

This algorithm achieves the Landauer bound up to lower-order terms by ensuring that only $O(n^2)$ bits are irreversibly erased during the computation.
4.2. Quantum Speed-Optimal Algorithms
For problems admitting quantum speedup, we construct algorithms that achieve the Margolus-Levitin bound.
Algorithm 2: Quantum-Optimal Linear System Solving

Input: Matrix $A \in \mathbb{R}^{n \times n}$, vector $b \in \mathbb{R}^n$. Output: Solution $|x\rangle \propto |A^{-1} b\rangle$ with error $\epsilon$.

1. Quantum encoding: Prepare $|b\rangle$ and an oracle for $A$
2. Adiabatic evolution: Evolve toward the relevant eigenstates of $A$
3. Phase estimation: Extract eigenvalues with precision $\epsilon$
4. Amplitude amplification: Boost success probability

Resources: Time $O(\log(n)\, \kappa_A^2 / \epsilon)$, where $\kappa_A$ is the condition number of $A$; Energy at the Margolus-Levitin limit; Space $O(\log n)$ qubits
This algorithm achieves exponential speedup over classical methods while operating at the fundamental quantum speed limit.
4.3. Information-Optimal Statistical Estimation
For statistical problems, we construct algorithms that achieve the Cramér-Rao bound.
Algorithm 3: Optimal Covariance Estimation

Input: Data stream $x_1, x_2, \dots \sim \mathcal{N}(0, \Sigma)$. Output: Estimate $\hat{\Sigma}$ with $\|\hat{\Sigma} - \Sigma\| \le \epsilon$.

1. Streaming accumulation: Maintain running sums $\sum_t x_t x_t^\top$
2. Adaptive stopping: Stop when the accumulated Fisher information exceeds the required threshold
3. Bias correction: Apply finite-sample corrections
4. Output: Return the maximum likelihood estimate

Resources: Samples $n = O(d/\epsilon^2)$ (achieving the Cramér-Rao bound); Space $O(d^2)$ for covariance accumulation; Time $O(d^2)$ per streaming update; Energy $O(n d^2\, k_B T \ln 2)$ for irreversible operations
This algorithm achieves the fundamental statistical limit while maintaining optimal resource usage across all dimensions.
4.4. Communication-Optimal Distributed Algorithms
For distributed problems, we design algorithms that minimize communication while respecting relativistic constraints.
Algorithm 4: Distributed Matrix Multiplication

Input: Matrices $A, B \in \mathbb{R}^{n \times n}$ distributed across $p$ processors. Output: Product $C = AB$ with minimal communication.

1. Block decomposition: Partition the matrices into $\sqrt{p} \times \sqrt{p}$ blocks
2. Cannon’s algorithm: Rotate blocks to minimize communication (a simulation is sketched below)
3. Local computation: Compute block products locally
4. Result aggregation: Collect the final result with minimal bandwidth

Resources: Communication $O(n^2 / \sqrt{p})$ bits per processor; Time $O(n^3 / p)$ computation plus a communication delay of at least $L/c$ per rotation step; Energy $O(n^3\, k_B T \ln 2)$ total across all processors
This algorithm achieves optimal communication complexity while respecting the finite speed of light for inter-processor communication.
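A single-process NumPy simulation of Cannon’s block rotation makes the communication pattern explicit; it models the $\sqrt{p} \times \sqrt{p}$ grid in memory rather than over a real interconnect, so the block shifts stand in for actual messages.

```python
import numpy as np

def cannon_matmul(A: np.ndarray, B: np.ndarray, q: int) -> np.ndarray:
    """Simulate Cannon's algorithm on a q x q virtual processor grid;
    each of the q rotation steps moves one block per processor."""
    n = A.shape[0]
    assert n % q == 0, "matrix size must divide the grid size"
    s = n // q
    blk = lambda M, i, j: M[i*s:(i+1)*s, j*s:(j+1)*s]
    # Initial alignment: skew row i of A left by i, column j of B up by j.
    Ab = [[blk(A, i, (j + i) % q) for j in range(q)] for i in range(q)]
    Bb = [[blk(B, (i + j) % q, j) for j in range(q)] for i in range(q)]
    Cb = [[np.zeros((s, s)) for _ in range(q)] for _ in range(q)]
    for _ in range(q):
        for i in range(q):
            for j in range(q):
                Cb[i][j] += Ab[i][j] @ Bb[i][j]  # local block product
        # Rotate: A-blocks shift left, B-blocks shift up (the communication).
        Ab = [[Ab[i][(j + 1) % q] for j in range(q)] for i in range(q)]
        Bb = [[Bb[(i + 1) % q][j] for j in range(q)] for i in range(q)]
    return np.block(Cb)

rng = np.random.default_rng(1)
A, B = rng.standard_normal((8, 8)), rng.standard_normal((8, 8))
assert np.allclose(cannon_matmul(A, B, q=4), A @ B)
```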
4.5. Geometric Optimization: Geodesic Algorithms
We develop a general framework for designing algorithms that follow geodesic paths in resource space, achieving optimal trade-offs between different physical constraints.
Theorem 6 (Geodesic Optimality). Algorithms that follow geodesic paths in the resource manifold achieve optimal trade-offs between physical constraints. The geodesic equations are

$$\frac{d^2 x^\mu}{d\tau^2} + \Gamma^{\mu}_{\nu\rho}\, \frac{dx^\nu}{d\tau}\, \frac{dx^\rho}{d\tau} \;=\; 0,$$

where $\Gamma^{\mu}_{\nu\rho}$ are the Christoffel symbols of the resource metric.
This provides a principled method for algorithm design that automatically balances multiple resource constraints.
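Once the Christoffel symbols of a chosen metric are available, the geodesic equations can be integrated numerically. The sketch below is a generic RK4 integrator; the flat-metric sanity check at the end is ours and stands in for a concrete resource metric.

```python
import numpy as np

def integrate_geodesic(christoffel, x0, v0, tau_max: float, steps: int = 1000):
    """RK4 integration of the geodesic equation
       x''^mu = -Gamma^mu_{nu rho}(x) x'^nu x'^rho.
    `christoffel(x)` must return an array of shape (d, d, d)."""
    def accel(x, v):
        return -np.einsum("mnr,n,r->m", christoffel(x), v, v)
    h = tau_max / steps
    x, v = np.asarray(x0, float).copy(), np.asarray(v0, float).copy()
    path = [x.copy()]
    for _ in range(steps):
        k1x, k1v = v.copy(), accel(x, v)
        k2x, k2v = v + 0.5 * h * k1v, accel(x + 0.5 * h * k1x, v + 0.5 * h * k1v)
        k3x, k3v = v + 0.5 * h * k2v, accel(x + 0.5 * h * k2x, v + 0.5 * h * k2v)
        k4x, k4v = v + h * k3v, accel(x + h * k3x, v + h * k3v)
        x += (h / 6) * (k1x + 2 * k2x + 2 * k3x + k4x)
        v += (h / 6) * (k1v + 2 * k2v + 2 * k3v + k4v)
        path.append(x.copy())
    return np.array(path)

# Sanity check: with vanishing Christoffel symbols (flat metric),
# geodesics are straight lines in resource space.
flat = lambda x: np.zeros((2, 2, 2))
path = integrate_geodesic(flat, [0.0, 0.0], [1.0, 2.0], tau_max=1.0)
assert np.allclose(path[-1], [1.0, 2.0])
```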
5. Case Studies: Fundamental Computational Problems
We apply our framework to analyze several fundamental computational problems, deriving explicit capacity regions and optimal algorithms for each.
5.1. Matrix Multiplication
Consider multiplying two $n \times n$ matrices $A$ and $B$ to produce $C = AB$.

Information Complexity: $I = \Theta(n^3)$ (each of the $n^2$ output elements requires $n$ multiplications)

Capacity Region: The feasible region is characterized by the Section 2 bounds instantiated with this $I$: energy $E = \Omega(n^3\, k_B T \ln 2)$ for fully irreversible implementations, time-energy product $E\,T = \Omega(\hbar n^3)$, and space $S = \Omega(n^2)$.

Optimal Algorithm: Strassen’s algorithm with reversible arithmetic achieves: Time $O(n^{2.807})$; Space $O(n^2)$; Energy $O(n^2\, k_B T \ln 2)$
This demonstrates a trade-off between time complexity and energy consumption.
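For reference, a standard recursive implementation of Strassen’s seven-product recursion (without the reversible-arithmetic layer, which depends on the gate model) looks as follows; the cutoff for falling back to the BLAS product is a practical choice of ours.

```python
import numpy as np

def strassen(A: np.ndarray, B: np.ndarray, cutoff: int = 64) -> np.ndarray:
    """Strassen's O(n^2.807) matrix multiplication for n a power of two;
    uses seven recursive block products instead of eight."""
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    return np.block([[M1 + M4 - M5 + M7, M3 + M5],
                     [M2 + M4, M1 - M2 + M3 + M6]])

rng = np.random.default_rng(2)
A, B = rng.standard_normal((128, 128)), rng.standard_normal((128, 128))
assert np.allclose(strassen(A, B), A @ B)
```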
5.2. Covariance Matrix Estimation
Estimate the covariance matrix $\Sigma$ of a $d$-dimensional Gaussian distribution from $n$ samples with error at most $\epsilon$.

Information Complexity: From Cramér-Rao bounds, $I_\epsilon = \Theta(d^2 / \epsilon^2)$

Sample Complexity: $n = \Theta(d / \epsilon^2)$ samples required
Optimal Algorithm: Streaming covariance estimation achieves these bounds:
Streaming Covariance Estimation

1. Initialize: $S_0 = 0$, $n = 0$
2. For each sample $x_t$: update $S_t = S_{t-1} + x_t x_t^\top$ and increment $n \leftarrow n + 1$
3. Stop when $n \ge C\, d / \epsilon^2$ for a constant $C$
4. Return: $\hat{\Sigma} = S_n / n$

Guarantees: $\|\hat{\Sigma} - \Sigma\| \le \epsilon$ with high probability, using $O(d^2)$ space and $O(d^2)$ time per update.
5.3. Gaussian Process Inference
Perform Bayesian inference with a Gaussian process on $n$ data points with accuracy $\epsilon$.

Information Complexity: up to $I = \Theta(n^3)$ due to the implicit kernel-matrix inversion

Optimal Algorithm: Conjugate gradient with preconditioning achieves: Time $O(n^2 \sqrt{\kappa_A}\, \log(1/\epsilon))$, where $\kappa_A$ is the condition number of the kernel matrix; Space $O(n^2)$; Energy proportional to the number of irreversible arithmetic operations
This demonstrates how problem structure (condition number) affects the capacity region.
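The core computational step is a conjugate gradient solve of the kernel system $K\alpha = y$. A plain (unpreconditioned) sketch, with an illustrative RBF kernel and noise jitter of our choosing:

```python
import numpy as np

def conjugate_gradient(K: np.ndarray, y: np.ndarray, tol: float = 1e-8,
                       max_iter=None) -> np.ndarray:
    """Conjugate gradient for the SPD system K alpha = y, the core of GP
    inference: O(n^2) work per iteration, ~sqrt(cond(K)) iterations."""
    n = len(y)
    max_iter = max_iter or 10 * n
    alpha = np.zeros(n)
    r = y - K @ alpha
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Kp = K @ p
        step = rs / (p @ Kp)
        alpha += step * p
        r -= step * Kp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return alpha

# GP posterior-mean weights with an RBF kernel plus noise jitter.
rng = np.random.default_rng(4)
X = rng.uniform(-3, 3, size=40)
K = np.exp(-0.5 * (X[:, None] - X[None, :])**2) + 1e-2 * np.eye(40)
y = np.sin(X)
alpha = conjugate_gradient(K, y)
assert np.allclose(K @ alpha, y, atol=1e-6)
```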
5.4. Fast Fourier Transform
Compute the discrete Fourier transform of a length-n sequence.
Information Complexity: $I = \Theta(n \log n)$

Optimal Algorithm: Cooley-Tukey FFT with reversible arithmetic: Time $O(n \log n)$; Space $O(n)$; Energy $O(n\, k_B T \ln 2)$ for the irreversibly erased output registers
The FFT achieves optimal scaling in all resource dimensions simultaneously.
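A textbook radix-2 Cooley-Tukey implementation (again without the reversible-arithmetic layer) illustrates the $O(n \log n)$ recursion.

```python
import numpy as np

def fft(x: np.ndarray) -> np.ndarray:
    """Radix-2 Cooley-Tukey FFT for len(x) a power of two: O(n log n)
    operations versus O(n^2) for the naive DFT."""
    n = len(x)
    if n == 1:
        return x.astype(complex)
    even, odd = fft(x[0::2]), fft(x[1::2])
    twiddle = np.exp(-2j * np.pi * np.arange(n // 2) / n)
    return np.concatenate([even + twiddle * odd, even - twiddle * odd])

x = np.random.default_rng(5).standard_normal(1024)
assert np.allclose(fft(x), np.fft.fft(x))
```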
6. Experimental Validation and Physical Realizability
Our theoretical framework makes specific predictions about the fundamental limits of computation. We discuss experimental evidence supporting these predictions and outline approaches for experimental validation.
6.1. Landauer’s Principle: Experimental Confirmation
The thermodynamic bound has been experimentally verified in several systems:
Bérut et al. (2012): Demonstrated Landauer’s principle using a colloidal particle in an optical trap. They showed that erasing one bit of information requires at least $k_B T \ln 2$ of energy dissipation, confirming our thermodynamic bound.
Jun et al. (2014): Extended the verification to electronic systems using a single-electron box, showing that the Landauer bound applies to realistic computational devices.
Implications: These experiments confirm that our energy bounds are not merely theoretical but represent physical reality that must be respected by any computational system.
6.2. Quantum Speed Limits: Experimental Evidence
The Margolus-Levitin bound has been tested in various quantum systems:
Caneva et al. (2009): Demonstrated optimal control protocols that achieve the quantum speed limit in spin systems, confirming that our time bounds are achievable.
Deffner and Lutz (2013): Showed that adiabatic quantum computers operating near the speed limit can achieve exponential speedups while respecting our time-energy trade-offs.
Implications: These results validate our quantum mechanical constraints and show that algorithms can indeed operate at the fundamental speed limits.
6.3. Information Density: Holographic Storage
The Bekenstein bound has implications for high-density information storage:
Holographic Data Storage: Current holographic storage systems achieve high volumetric information densities that remain many orders of magnitude below the Bekenstein limit, but they demonstrate the practical relevance of our spatial constraints.
DNA Storage: Biological information storage in DNA reaches densities far beyond conventional electronic media, moving toward fundamental physical limits and illustrating our framework’s predictions.
6.4. Proposed Experimental Tests
We propose several experiments to further validate our framework:
Energy-Time Trade-off Measurement: Design a computational task (e.g., matrix multiplication) and measure energy consumption vs. execution time for different algorithms. Our framework predicts a hyperbolic trade-off of the form $E \cdot T \ge \pi \hbar\, I / 2$ (a feasibility sketch follows this list).
Quantum Coherence Limits: Implement quantum algorithms with varying coherence times and measure the trade-off between coherence requirements and computational accuracy. This would validate our coherence dimension.
Distributed Communication Bounds: Test distributed algorithms with processors at different physical separations, measuring the communication-time trade-offs predicted by relativistic constraints.
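Concretely, the predicted trade-off of the first experiment reduces to a feasibility calculation: for a given operation count and deadline, the minimum admissible energy is the larger of the Margolus-Levitin and Landauer floors. A sketch with illustrative numbers of our choosing:

```python
from math import log, pi

HBAR, K_B = 1.054571817e-34, 1.380649e-23

def min_energy_for_deadline(num_ops: float, deadline_s: float,
                            temp_k: float = 300.0, alpha: float = 1.0) -> float:
    """Minimum energy (J) consistent with both bounds for `num_ops`
    operations in `deadline_s` seconds: the quantum-speed-limit hyperbola
    E >= pi*hbar*I/(2T) and the Landauer floor E >= alpha*I*k_B*T_env*ln 2."""
    speed_limit = pi * HBAR * num_ops / (2.0 * deadline_s)
    landauer_floor = alpha * num_ops * K_B * temp_k * log(2)
    return max(speed_limit, landauer_floor)

# For ordinary workloads the thermodynamic floor dominates; the quantum
# speed limit only binds at extremely short deadlines.
print(min_energy_for_deadline(1e15, 1.0))    # Landauer-dominated (~2.9 uJ)
print(min_energy_for_deadline(1e15, 1e-18))  # speed-limit-dominated
```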
6.5. Technological Implications
Our framework has immediate implications for emerging technologies:
Neuromorphic Computing: Brain-inspired architectures must respect the same physical limits. Our framework suggests optimal energy-time trade-offs for spike-based computation.
Quantum Computing: Current quantum computers operate far from the fundamental limits. Our framework provides targets for optimization and suggests new algorithmic approaches.
Optical Computing: Photonic systems can potentially achieve better energy-time trade-offs than electronic systems while respecting our relativistic constraints.
7. Implications and Future Directions
The capacity region framework opens several new research directions and has profound implications for both theoretical computer science and practical computing systems.
7.1. Theoretical Implications
Unification of Complexity Classes: Our framework suggests a natural way to unify classical and quantum complexity classes by considering them as different regions in the same physical resource space.
New Complexity Measures: Traditional complexity theory focuses on worst-case analysis. Our framework suggests new average-case measures based on physical resource consumption.
Geometric Algorithm Design: The geodesic optimization principle provides a new paradigm for algorithm design based on differential geometry rather than discrete optimization.
7.2. Practical Applications
Energy-Efficient Computing: Our framework provides principled guidelines for designing algorithms that minimize energy consumption while maintaining performance.
Quantum Algorithm Design: The capacity region framework suggests new approaches for designing quantum algorithms that optimally balance coherence time, gate count, and accuracy.
Distributed System Optimization: For large-scale distributed systems, our framework provides tools for optimizing the trade-off between computation, communication, and energy consumption.
7.3. Open Problems and Future Work
Several important questions remain open:
Curvature Characterization: While we have identified the importance of the geometric curvature $\kappa$, a complete characterization of how problem structure determines curvature remains an open problem.
Quantum Coherence Dimension: Our framework currently treats coherence heuristically. A rigorous treatment requires developing new tools from quantum error correction and decoherence theory.
Biological Computation: Extending our framework to biological systems (neural networks, DNA computation) requires incorporating additional physical constraints from biochemistry and thermodynamics.
Machine Learning Applications: Many machine learning algorithms involve trade-offs between accuracy, computational resources, and energy consumption. Our framework could provide new insights into optimal learning algorithms.
7.4. Philosophical Implications
Our work raises fundamental questions about the nature of computation and its relationship to physical reality:
Computational Limits of Intelligence: If artificial intelligence systems are subject to the same physical constraints, our framework provides absolute bounds on the capabilities of any intelligent system.
Information and Physics: The deep connection between information processing and physical law suggests that computation is not merely an abstract mathematical concept but a fundamental aspect of physical reality.
Emergence and Complexity: The geometric structure of the capacity region may provide insights into how complex behavior emerges from simple physical laws.
8. Conclusion
We have developed a unified framework for understanding the fundamental physical limits of computation by treating computational resources as coordinates in a geometric space constrained by the laws of physics. Our main contributions are:
Theoretical Framework: We established the capacity region theorem, which characterizes the fundamental trade-offs between space, time, energy, bandwidth, and coherence in any computational system.
Physical Foundations: We grounded our framework in well-established physical principles: Landauer’s principle, quantum speed limits, the Bekenstein bound, and statistical mechanics.
Constructive Algorithms: We provided explicit algorithms that achieve the capacity region bounds for fundamental problems including matrix multiplication, covariance estimation, and Gaussian process inference.
Experimental Validation: We discussed experimental evidence supporting our theoretical predictions and proposed new experiments to further validate the framework.
Practical Applications: We demonstrated how the framework can guide the design of energy-efficient algorithms, quantum computing systems, and distributed computational architectures.
The capacity region framework represents a paradigm shift from abstract complexity theory to physics-based computational limits. By recognizing that computation is fundamentally a physical process, we can derive absolute bounds that cannot be circumvented by technological advances.
This work opens numerous avenues for future research, from developing new quantum algorithms that respect coherence constraints to designing biological computing systems that operate near thermodynamic limits. The geometric perspective also suggests new mathematical tools for algorithm analysis based on differential geometry and information theory.
Perhaps most importantly, our framework provides a bridge between computer science and fundamental physics, suggesting that the ultimate limits of computation are determined not by mathematical cleverness but by the fundamental structure of physical reality itself. As we continue to push the boundaries of computational capability, understanding these physical limits becomes increasingly crucial for both theoretical understanding and practical system design.
The capacity region of computation thus represents not just a mathematical abstraction, but a fundamental aspect of how information can be processed in our physical universe. This perspective will become increasingly important as we develop new computational paradigms and approach the fundamental limits of what is physically possible.
Data Availability Statement
No new data were generated or analyzed in this study.
AI Assistance Statement
Language and editorial suggestions were supported by AI tools; the author takes full responsibility for the content.
Acknowledgments
The author thanks the Octonion Group research team for valuable discussions and computational resources. Special recognition goes to the broader computational complexity and quantum computing communities whose foundational work made this synthesis possible.
Conflicts of Interest
The author declares no conflicts of interest.
References
- Landauer, R. Irreversibility and heat generation in the computing process. IBM Journal of Research and Development 1961, 5, 183–191.
- Bérut, A.; Arakelyan, A.; Petrosyan, A.; Ciliberto, S.; Dillenschneider, R.; Lutz, E. Experimental verification of Landauer’s principle linking information and thermodynamics. Nature 2012, 483, 187–189.
- Margolus, N.; Levitin, L.B. The maximum speed of dynamical evolution. Physica D: Nonlinear Phenomena 1998, 120, 188–195.
- Bekenstein, J.D. Universal upper bound on the entropy-to-energy ratio for bounded systems. Physical Review D 1981, 23, 287–298.
- Bennett, C.H. Logical reversibility of computation. IBM Journal of Research and Development 1973, 17, 525–532.
- Feynman, R.P. Simulating physics with computers. International Journal of Theoretical Physics 1982, 21, 467–488.
- Lloyd, S. Ultimate physical limits to computation. Nature 2000, 406, 1047–1054.
- Nielsen, M.A.; Chuang, I.L. Quantum Computation and Quantum Information: 10th Anniversary Edition; Cambridge University Press, 2010.
- Cover, T.M.; Thomas, J.A. Elements of Information Theory; John Wiley & Sons, 2006.
- Ballard, G.; Demmel, J.; Holtz, O.; Schwartz, O. Minimizing communication in numerical linear algebra. SIAM Journal on Matrix Analysis and Applications 2011, 32, 866–901.
- Strassen, V. Gaussian elimination is not optimal. Numerische Mathematik 1969, 13, 354–356.
- Coppersmith, D.; Winograd, S. Matrix multiplication via arithmetic progressions. Journal of Symbolic Computation 1990, 9, 251–280.
- Harrow, A.W.; Hassidim, A.; Lloyd, S. Quantum algorithm for linear systems of equations. Physical Review Letters 2009, 103, 150502.
- Cramér, H. Mathematical Methods of Statistics; Princeton University Press, 1946.
- Rao, C.R. Information and the accuracy attainable in the estimation of statistical parameters. Bulletin of the Calcutta Mathematical Society 1945, 37, 81–91.
- Fisher, R.A. On the mathematical foundations of theoretical statistics. Philosophical Transactions of the Royal Society A 1922, 222, 309–368.
- Shannon, C.E. A mathematical theory of communication. Bell System Technical Journal 1948, 27, 379–423.
- Holevo, A.S. Bounds for the quantity of information transmitted by a quantum communication channel. Problemy Peredachi Informatsii 1973, 9, 3–11.
- Toffoli, T. Reversible computing. In Automata, Languages and Programming; Springer, 1980; pp. 632–644.
- Fredkin, E.; Toffoli, T. Conservative logic. International Journal of Theoretical Physics 1982, 21, 219–253.
- Caneva, T.; Murphy, M.; Calarco, T.; Fazio, R.; Montangero, S.; Giovannetti, V.; Santoro, G.E. Optimal control at the quantum speed limit. Physical Review Letters 2009, 103, 240501.
- Deffner, S.; Lutz, E. Quantum speed limit for non-Markovian dynamics. Physical Review Letters 2013, 111, 010402.
- Jun, Y.; Gavrilov, M.; Bechhoefer, J. High-precision test of Landauer’s principle in a feedback trap. Physical Review Letters 2014, 113, 190601.
- Cooley, J.W.; Tukey, J.W. An algorithm for the machine calculation of complex Fourier series. Mathematics of Computation 1965, 19, 297–301.
- Rasmussen, C.E.; Williams, C.K.I. Gaussian Processes for Machine Learning; MIT Press, 2006.
- Golub, G.H.; Van Loan, C.F. Matrix Computations; Johns Hopkins University Press, 2013.
- Trefethen, L.N.; Bau, D., III. Numerical Linear Algebra; SIAM, 1997.
- Watrous, J. The Theory of Quantum Information; Cambridge University Press, 2018.
- Preskill, J. Quantum computing: pro and con. Proceedings of the Royal Society A 1998, 454, 469–486.
- Shor, P.W. Algorithms for quantum computation: discrete logarithms and factoring. In Proceedings of the 35th Annual Symposium on Foundations of Computer Science, 1994; pp. 124–134.
- Grover, L.K. A fast quantum mechanical algorithm for database search. In Proceedings of the 28th Annual ACM Symposium on Theory of Computing, 1996; pp. 212–219.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).