1. Introduction
This note is not intended to question the remarkable achievements of quantum information theory or its impressive practical realizations, such as quantum computers. Its aim is to temper the growing over-expectations surrounding quantum computing projects.
I work primarily on applications of quantum information theory outside physics — in cognition, artificial intelligence, decision making, economics and finance, social sciences, and biology (Khrennikov, 2010). As a reviewer, I am overwhelmed by a flood of papers predicting revolutionary applications of quantum computers in AI, banking, and virtually every domain imaginable. Like any qualified specialist, however, I have serious doubts that the quantum computing project will be realized so smoothly — or realized at all in a practically usable form. It seems that many researchers outside the “quantum computing business” have been misled by the relentless advertising of an imminent quantum revolution in human society.
This situation is not the result of aggressive quantum computing propaganda but rather of miscommunication between experts in quantum computing and the rest of the scientific community and society at large. At the same time, some elements of hype can be found even in articles written by experts in the field of quantum computing.
We can, for example, mention the articles by Basak et al. (2022), Pilato and Vella (2023), and Yang et al. (2023). All three papers recognize the significant promise of quantum computing: the potential to solve intractable problems, achieve computational speedups, and enable novel applications. At the same time, they emphasize substantial practical limitations, including hardware scaling, error correction, data encoding, domain-specific challenges, and the fact that, in many cases, we remain far from large-scale deployment. Consequently, while the literature does contain hype-adjacent language — such as claims of significant acceleration or a computational paradigm shift — it is by no means purely evangelistic; each paper offers a sober assessment of the barriers involved. Even in the most comprehensive survey literature, authors describe dramatic future potential, but they invariably accompany these claims with major caveats. This suggests that while the hype is moderated within academic discourse, it remains present, particularly when amplified by external actors in industry, finance, and popular media, who often emphasize the promise while downplaying the challenges.
2. What is the Main Problem of Quantum Computing?
Quantum computing is a probabilistic form of computation. A quantum computer generates candidates for solutions to a particular class of problems — those for which it is extremely difficult to find a solution but relatively easy to verify whether a given candidate is correct (Yang et al., 2023). The main theoretical advantage of quantum computing lies in exploiting the laws of quantum probability theory, above all constructive probability interference (Khrennikov, 2021).
Each cycle of candidate generation and verification takes time. If the proportion of incorrect candidates is high, the supposed quantum advantage quickly disappears, overwhelmed by the noise inherent in the process (Huynh et al., 2023). High noise and high error rates drastically reduce the practical value of a quantum computer.
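To make the cost of noise concrete, the following toy sketch (with purely illustrative success probabilities, not figures for any real device) models each generate-and-verify cycle as a Bernoulli trial with success probability p. The expected number of cycles grows as 1/p, so noise that drives p down erases any per-cycle speedup.

```python
import numpy as np

# A toy model (illustrative parameters only): each generate-and-verify cycle
# is a Bernoulli trial that yields a correct candidate with probability p.
# The number of cycles until the first success is geometrically distributed,
# so the expected cost per solution is 1/p.

def expected_cycles(p_success: float) -> float:
    """Expected generate-and-verify cycles until a correct candidate."""
    return 1.0 / p_success

def simulated_cycles(p_success: float, runs: int = 10_000) -> float:
    """Monte Carlo estimate of the same quantity."""
    rng = np.random.default_rng(0)
    return rng.geometric(p_success, size=runs).mean()

for p in (0.5, 0.1, 0.01, 0.001):  # rising noise pushes p down, cost up as 1/p
    print(f"p = {p:6.3f} -> expected cycles: {expected_cycles(p):8.1f} "
          f"(simulated: {simulated_cycles(p):8.1f})")
```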
Creating one logical qubit with low noise requires a substantial number of physical qubits due to the overhead of error-correction protocols — some estimates suggest on the order of 1,000 physical qubits per logical qubit (cf. Beverland et al., 2022). To build a quantum computer capable of solving practically relevant problems, one would therefore need roughly 1,000 logical qubits, implying a total of about one million physical qubits (the back-of-the-envelope arithmetic is sketched at the end of this section). A few research groups are reportedly pursuing such large-scale architectures, but the project remains extremely complex and high-risk (Yang et al., 2023; White et al., 2024). We conclude this section with a citation from Beverland et al. (2022): “…we assess three scaled quantum applications and find that hundreds of thousands to millions of physical qubits are needed to achieve practical quantum advantage.”
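The resource arithmetic referred to above is elementary; the snippet below spells it out using the order-of-magnitude figures quoted in this section (illustrative estimates, not a hardware specification).

```python
# Back-of-the-envelope resource arithmetic using the order-of-magnitude
# figures quoted above; these are illustrative estimates, not a hardware
# specification (cf. Beverland et al., 2022).

PHYSICAL_PER_LOGICAL = 1_000    # error-correction overhead per logical qubit
LOGICAL_QUBITS_NEEDED = 1_000   # scale assumed for practically relevant problems

total_physical = PHYSICAL_PER_LOGICAL * LOGICAL_QUBITS_NEEDED
print(f"Physical qubits required: ~{total_physical:,}")  # ~1,000,000
```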
3. Against Speculative Hype
This note is directed against parasitic speculations and pseudo-research proclaiming the imminent transformation of human society on the basis of quantum computers that do not yet exist and may never become practical.
4. A Parallel Case: The Nuclear Fusion Project
The situation with quantum computing has a striking historical parallel — the nuclear fusion project. From the theoretical standpoint, controlled nuclear fusion was one of the best-justified and most promising ideas of the twentieth century. The equations were clear, the physical mechanisms were well understood, and the potential benefits were — and still are — enormous: virtually unlimited, clean energy.
However, the practical realization of fusion energy has faced a single, seemingly “small” but in fact fundamental obstacle — the instability of plasma confinement. The theoretical model assumes that hot plasma can be kept stable long enough for sustained fusion reactions to occur. In practice, however, even tiny perturbations lead to turbulence, energy losses, and eventual collapse of confinement. Despite decades of brilliant engineering and the construction of devices such as tokamaks and stellarators, the problem of plasma instability remains unsolved in any economically viable way.
Quantum computing faces a formally analogous situation. Here the “plasma instability” is replaced by the instability of quantum coherence. In theory, a quantum computer performs operations on perfectly isolated qubits that maintain their superpositions and entanglements indefinitely. In reality, every interaction with the environment — thermal noise, imperfect gates, or material defects — leads to decoherence and errors that destroy the delicate quantum state.
Just as plasma instability in fusion undermines the confinement necessary for energy release, decoherence in quantum systems undermines the stability necessary for reliable computation. Both projects therefore confront the same meta-problem: the discrepancy between theoretical idealization and physical feasibility.
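The fragility of coherence can be quantified with a standard first-order estimate (a sketch under the simplifying assumption of independent gate failures, not a model of any particular device): with per-gate error rate eps, a circuit of G gates succeeds with probability roughly (1 - eps)^G, which sets an effective depth budget analogous to the confinement-time budget in fusion.

```python
import numpy as np

# First-order estimate under the simplifying assumption of independent gate
# failures (a sketch, not a model of any particular device): a circuit of G
# gates, each failing with probability eps, succeeds with probability
# roughly (1 - eps)**G.

def circuit_success(eps: float, gates: int) -> float:
    """Probability that all gates succeed, assuming independent failures."""
    return (1.0 - eps) ** gates

for eps in (1e-2, 1e-3, 1e-4):
    # Depth at which the success probability falls to 1/2: G ~ ln(2) / eps.
    depth_budget = np.log(2) / eps
    print(f"eps = {eps:.0e}: P(1,000-gate circuit succeeds) = "
          f"{circuit_success(eps, 1_000):.3f}; ~{depth_budget:,.0f} gates to 50%")
```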
Fusion research continues because it is fundamentally valuable — even partial successes yield insights into plasma physics, astrophysics, and materials science. The same applies to quantum computing: even if the dream of universal quantum computation proves unattainable, the research will continue to enrich quantum information theory, solid-state physics, and related fields. So, we note once again that the limitations of quantum computing resemble those of nuclear fusion projects (Chen, 2023; Ryzhkov, 2023; White et al., 2024).
Finally, one must be clear: strong theoretical justification does not guarantee practical realization. Both the quantum computer and the fusion reactor illustrate how the frontier between the possible and the feasible is defined not only by equations but by the stubborn realities of nature.
5. A More Realistic Direction
For physics, it may be more productive to shift focus from universal quantum computers to quantum simulators. Here, the analogy to classical analog computers is useful: one uses the physical system to directly simulate another system. Even in this case, one must examine whether quantum simulators are truly more powerful than classical analog computers (Maley, 2020; Köppel et al., 2021).
Additionally, quantum-inspired classical computations — algorithms borrowing ideas from quantum theory but running on conventional hardware — already show that claimed quantum superiority may not always be practical (Huynh et al., 2023).
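In its most literal form, quantum-inspired computation simply runs the quantum formalism (state vectors and amplitude arithmetic) on conventional hardware for small instances. The sketch below, with illustrative sizes and an arbitrarily chosen marked item, emulates Grover-style amplitude amplification in a few lines of NumPy; the quantum-inspired algorithms surveyed by Huynh et al. (2023) are, of course, far more sophisticated.

```python
import numpy as np

# A minimal "quantum-inspired" sketch: the quantum formalism executed on
# classical hardware for a tiny instance of Grover-style amplitude
# amplification. Sizes and the marked item are illustrative choices.

n = 3                      # 3 "qubits" -> N = 8 basis states
N = 2 ** n
marked = 5                 # index of the searched-for item (arbitrary)

state = np.full(N, 1 / np.sqrt(N))   # uniform superposition

oracle = np.eye(N)
oracle[marked, marked] = -1          # flip the sign of the marked amplitude

diffusion = 2 * np.full((N, N), 1 / N) - np.eye(N)  # inversion about the mean

for _ in range(2):                   # ~ (pi/4) * sqrt(N) iterations
    state = diffusion @ (oracle @ state)

probs = np.abs(state) ** 2
print(f"P(marked item) after amplification: {probs[marked]:.3f}")  # ~0.945
```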
6. Classical Analog Computers and Universal Digital Computers: A Historical and Conceptual Comparison
Analog computers, developed between the 1930s and 1960s, solved continuous equations by exploiting physical quantities like voltages or currents. Their main advantages were real-time processing and inherent parallelism (Köppel et al., 2021; MacLennan, 2006). Digital computers, by contrast, manipulate discrete symbols and are universal, programmable, and scalable.
Analog machines excelled in solving differential equations and simulating continuous systems but suffered from noise, drift, and limited precision (Maley, 2020; Killat et al., 2023). Digital computers gained dominance due to their robustness, error correction, and flexibility (MacLennan, 2006).
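The noise-and-drift problem is easy to illustrate. The toy integration below (with hypothetical noise figures) compares a noiseless digital Euler solution of dx/dt = -x with the same integration perturbed by a small random drift per step, mimicking how analog components accumulate error over long runs.

```python
import numpy as np

# Toy illustration (hypothetical noise figures): integrating dx/dt = -x with
# x(0) = 1. A noiseless digital Euler integration is compared with the same
# integration plus a small random per-step drift, mimicking the way analog
# components accumulate error over long runs.

rng = np.random.default_rng(0)
dt, steps = 0.01, 1_000
drift_scale = 1e-3                 # per-step drift (illustrative)

x_clean, x_drift = 1.0, 1.0
for _ in range(steps):
    x_clean += -x_clean * dt
    x_drift += -x_drift * dt + rng.normal(0.0, drift_scale)

exact = np.exp(-dt * steps)        # x(10) = e^{-10}
print(f"exact: {exact:.5f}, digital: {x_clean:.5f}, with drift: {x_drift:.5f}")
```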
This mirrors the quantum scenario: universal quantum computers manipulate continuous probability amplitudes that are fragile to noise and decoherence, while digital computers (and even classical quantum-inspired algorithms) offer stability and scalability (Huynh et al., 2023; Basak et al., 2022).
Modern Revival: Hybrid and Domain-Specific Computing
Hybrid analog-digital architectures are re-emerging in ML, neuromorphic computing, and simulation tasks (Killat et al., 2023; Maley, 2020). Analog components act as co-processors, while digital parts maintain precision and control. Similarly, quantum simulators may be domain-specific devices rather than universal machines.
Lessons and Perspective
The history of analog versus digital computing teaches that continuous systems are powerful but fragile, while discrete symbolic systems are robust and scalable. Universal quantum computation, like classical analog computing, requires delicate control of continuous states. Until decoherence and noise are practically mitigated, quantum devices are likely to remain specialized, simulator-like tools, echoing analog computing’s historical trajectory.
7. Constructive Probability Interference as the Root of Quantum Supremacy
In a short article, Khrennikov (2021) undertakes a foundational critique of the reasons often given for the anticipated superiority of quantum computers (sometimes called “quantum supremacy”). The article begins by referencing Google's 2019 claim that a quantum processor had achieved a task beyond classical computers (Arute et al., 2019; Svozil, 2019), and then formulates the following question: what exactly are the physical or computational roots of that expected advantage?
The paper reviews three frequently cited principles in quantum computing discussions: superposition, entanglement, and the principle of complementarity (following Niels Bohr). Khrennikov’s (2021) central argument is that superposition and entanglement — while mathematically and physically remarkable — do not in themselves provide a sufficient conceptual foundation for quantum computing’s advantage over classical computation. For example, it is pointed out that classical wave-systems can realize superpositions (optical systems being a case in point), so superposition alone cannot guarantee computational superiority.
On entanglement, Khrennikov (2021) warns that while it has become almost a buzzword in the quantum computing community, its foundational role is ambiguous: entanglement is defined mathematically (non-separable states), but its physical meaning — especially when mixed with notions of “non-locality” — remains controversial. It is argued that simply pointing at entanglement does not clarify why quantum algorithms should outperform classical ones in general.
Khrennikov (2021) turns attention to the principle of complementarity — Bohr’s notion that certain pairs of observables cannot be simultaneously determined, and that quantum phenomena always involve the interplay of measurement context. Some Bell-type experiments are reinterpreted as tests of complementarity rather than as purely demonstrations of non-locality. Within this interpretive framework, it is suggested that the real computational advantage may lie in the fact that quantum probability theory (QPT) allows one to proceed without computing full joint probability distributions (jpd) for large systems of random variables. Classical probabilistic algorithms must often handle exponentially large joint distributions; quantum processes — via interference and contextuality — may avoid that overhead.
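The combinatorial point can be made concrete: a full joint probability distribution over n binary random variables has 2^n entries, so merely storing it becomes infeasible well before n reaches sizes of practical interest. The arithmetic below is purely illustrative (assuming 8 bytes per stored probability).

```python
# Illustrative arithmetic only (assuming 8 bytes per stored float64
# probability): a full joint probability distribution over n binary random
# variables has 2**n entries.

BYTES_PER_ENTRY = 8

for n in (20, 30, 40, 50):
    entries = 2 ** n
    gib = entries * BYTES_PER_ENTRY / 2**30
    print(f"n = {n:2d} binary variables -> {entries:.3e} jpd entries (~{gib:,.2f} GiB)")
```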
Thus, Khrennikov (2021) proposes that the “root” of quantum computing advantage might be more subtle than simply superposition + entanglement; instead, the combinatorial saving enabled by complementarity and quantum interference may be the deeper source. He also notes the physical underpinning of this saving: the discrete nature of quantum measurement outcomes (as opposed to continuous wave-values) plays a role in constructing the interference effect.
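As a numerical illustration of this probabilistic saving, the sketch below evaluates the quantum formula of total probability with an interference term, P(b) = P(a1)P(b|a1) + P(a2)P(b|a2) + 2 cos(theta) sqrt(P(a1)P(b|a1)P(a2)P(b|a2)) (cf. Khrennikov, 2021), for a few illustrative phases theta. At theta = pi/2 the interference term vanishes and the classical formula is recovered; other phases shift the probability constructively or destructively.

```python
import numpy as np

# Numerical sketch of the quantum formula of total probability with an
# interference term (cf. Khrennikov, 2021). All probabilities and phases
# are illustrative values chosen for the demonstration.

p_a = np.array([0.5, 0.5])          # P(A = a1), P(A = a2): context probabilities
p_b_given_a = np.array([0.4, 0.6])  # P(B = b | A = a1), P(B = b | A = a2)

classical = np.sum(p_a * p_b_given_a)          # classical total probability
interf = 2 * np.sqrt(np.prod(p_a * p_b_given_a))  # interference magnitude

for theta in (0.0, np.pi / 3, np.pi / 2, np.pi):
    quantum = classical + np.cos(theta) * interf
    print(f"theta = {theta:4.2f}: P_quantum(b) = {quantum:.3f} "
          f"(classical: {classical:.3f})")
```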
This article offers a concise and conceptually clear alternative lens for viewing quantum computational advantage — shifting the focus from “hardware spectacle” (many qubits, entanglement) to a probabilistic-structural insight (joint distributions, complementarity).
It bridges quantum computing discourse with quantum foundations (Bohr, contextuality, quantum probability) in a way that is relatively rare in engineering-dominated quantum computing literature.
It provides critical caution: researchers claiming “quantum superiority” should not rely purely on familiar slogans like “entanglement is the key” but rather examine the why and how of potential quantum speedups.
Limitations and Caveats
As the author acknowledges, the treatment is conceptual and high-level; the article does not present new algorithmic-complexity proofs or full numerical resource estimates. It is more of a philosophical/interpretive argument than an engineering roadmap.
One might argue that while complementarity and quantum probability provide insight, the path to real-world quantum advantage still hinges on engineering issues (noise, decoherence, error correction) which are only briefly acknowledged here.
The article leaves open many questions: how exactly to formalize the “avoiding joint distributions” insight into concrete algorithmic speedups; what are the realistic bounds; and how hardware realities might support or limit these conceptual claims.
Implications for Research
For those working in quantum information outside physics (cognition, decision theory, social modelling), this article is particularly useful. It suggests that one should pay attention not only to “how many qubits” or “how much entanglement” but also how the probabilistic structure of the problem matches the quantum-probabilistic model. In other words, quantum-inspired modelling (in cognition, economics, finance, sociology) may benefit from the insight that avoiding joint distributions via interference/contextuality can be computationally fruitful — even if full-blown quantum hardware remains distant.
In the broader discussion of quantum computing hype vs realism, this article provides a foundation for a skeptical-but-constructive perspective: the theoretical promise is non-trivial, but its roots must be carefully interrogated.
8. Concluding Remarks
The contemporary narrative of quantum computing oscillates between extraordinary optimism and sober technical skepticism. As we have argued, this tension arises not from any inherent flaw in quantum information theory but from a widening gap between theoretical ideals and engineering realities. The analogy with controlled nuclear fusion underscores the central message: elegant theories do not guarantee feasible technologies. Quantum coherence, like plasma confinement, is exquisitely fragile. In both domains, decades of research have led to deep scientific insights, yet the path to scalable, economically viable implementation remains unclear.
The growing body of literature on quantum error correction, scalable architectures, and fault-tolerant qubits demonstrates remarkable progress, but it simultaneously reveals the extraordinary resource demands required for practical quantum advantage. Claims of impending breakthroughs must therefore be treated cautiously, especially when amplified beyond the scientific community into finance, industry, and the media. Such overextension risks fostering unrealistic expectations about quantum technologies and distorting research priorities.
A more measured and productive trajectory lies in domain-specific quantum simulators, hybrid quantum–classical methods, and quantum-inspired algorithms—approaches that do not require fully universal, fault-tolerant quantum computers. These paths parallel the historical evolution from analog to digital computing and may ultimately provide the most scientifically and practically meaningful results. Moreover, foundational work on quantum probability, contextuality, and constructive interference continues to offer conceptual clarity, deepening our understanding of where true quantum advantage may arise.
In conclusion, the future of quantum computing should be envisioned not as an inevitable revolution but as a challenging scientific frontier whose eventual impact depends on fundamental physical constraints. By separating genuine progress from premature speculation, we can better appreciate both the promise and the limits of quantum technologies, ensuring that research moves forward with integrity, realism, and intellectual clarity.
References
- Arute, F.; Arya, K.; Babbush, R.; et al. Quantum supremacy using a programmable superconducting processor. Nature 2019, 574, 505–510.
- Basak, U.; Hossain, M. J.; Nazim, M.; Masum, M.; Shahriar, H.; Uddin, G.; Ahamed, S. I. Evolution of Quantum Computing: A Systematic Survey on the Use of Quantum Computing Tools. arXiv 2022, arXiv:2204.01856.
- Beverland, M. E.; Murali, P.; Troyer, M.; Svore, K. M.; Hoefler, T.; Kliuchnikov, V.; Low, G. H.; Soeken, M.; Sundaram, A.; Vaschillo, A. Assessing requirements to scale to practical quantum advantage. arXiv 2022.
- Chen, M. Key Issues and Application Prospects in High-Temperature Plasma Physics. Academic Journal of Science and Technology 2023.
- Huynh, L.; Hong, J.; Mian, A.; Suzuki, H.; Wu, Y.; Camtepe, S. Quantum-Inspired Machine Learning: A Survey. arXiv 2023, arXiv:2308.11269.
- Khrennikov, A. Y. Ubiquitous Quantum Structure: From Psychology to Finance. Springer, 2010.
- Khrennikov, A. Roots of quantum computing supremacy: superposition, entanglement, or complementarity? The European Physical Journal Special Topics 2021, 230(4), 1053–1057.
- Killat, D.; Köppel, M.; Ulmann, B. Solving partial differential equations with Monte Carlo methods on hybrid analog-digital computers. arXiv 2023, arXiv:2309.05598.
- Köppel, M.; Ulmann, B.; Heimann, S.; Killat, D. Using analog computers in today’s largest computational challenges. Advances in Radio Science 2021, 19, 105–115.
- MacLennan, B. J. A Review of Analog Computing. Technical Report UT-CS-06-551; University of Tennessee, 2006.
- Pilato, G.; Vella, F. A Survey on Quantum Computing for Recommendation Systems. Information 2023, 14(1), 20.
- Ryzhkov, S. V. Magneto-Inertial Fusion and Powerful Plasma Installations (A Review). Applied Sciences 2023, 13(11), 6658.
- Svozil, K. Comment on “Quantum supremacy using a programmable superconducting processor”. arXiv 2019, arXiv:1911.00577. Available online: https://arxiv.org/abs/1911.00577.
- White, A. E.; Baglietto, E.; Bucci, M.; Howard, N. T.; Rodriguez-Fernandez, P. Fusion plasma turbulence research beyond the burning plasma era. Frontiers in Nuclear Engineering 2024, 3, 1380108.
- Yang, Z.; Zolanvari, M.; Jain, R. A Survey of Important Issues in Quantum Computing and Communications. IEEE Communications Surveys & Tutorials 2023.