Preprint (not peer-reviewed)

Quantum Computing: Foundational and Theoretical Models

Submitted: 22 July 2025. Posted: 24 July 2025.


Abstract
Quantum computing is a multidisciplinary field at the intersection of quantum physics, computer science, and mathematics, aiming to harness quantum mechanical phenomena for computational advantage. This article provides a comprehensive review of both the foundational principles and advanced theoretical models of quantum computing. We begin with the basic framework of quantum computation, introducing the concept of qubits as two-level quantum systems, the linear algebra of Hilbert spaces, Dirac's bra–ket notation, and the postulates of quantum mechanics (state superposition, unitary evolution, and projective measurement). We then discuss quantum logic gates and circuits, highlighting how classical reversible computation principles extend to universal quantum gate sets. Next, we survey key quantum algorithms (from Shor's factoring to Grover's searching and beyond) and the complexity-theoretic implications of quantum computers, contrasting the class BQP with classical complexity classes. We cover the theoretical foundations of quantum error correction and fault tolerance, explaining how quantum information can be protected from decoherence using redundancy and how fault-tolerant protocols can, in principle, allow scalable quantum computation despite noisy hardware. We then explore advanced models and paradigms of quantum computation, including measurement-based quantum computing (one-way quantum computing), topological quantum computation with anyons, adiabatic quantum computing and quantum annealing, continuous-variable quantum computing, and related approaches. We also provide an overview of quantum information theory as it relates to computation, discussing entanglement, quantum entropy, and information-theoretic limits like the Holevo bound. Finally, we review the leading physical implementations (quantum hardware models) of quantum computers (trapped ions, superconducting circuits, photonic systems, solid-state spin qubits, and others), outlining the challenges and achievements of each approach.

1. Introduction

Classical computing, as formulated by Alan Turing and others in the 1930s, relies on bits that take values 0 or 1 and on logical operations implemented by electronic circuits [76]. Turing’s 1936 paper on computable numbers established the theoretical foundation of digital computation, and subsequent work by Shannon and others linked information to physical hardware via Boolean logic circuits [6]. In the 1960s and 1970s, researchers became aware that computation is a physical process subject to physical laws. Rolf Landauer famously argued that erasing a bit of information incurs an irreducible energy cost (“Landauer’s principle”) [7], highlighting that information and thermodynamics are intertwined. This led to the idea of reversible computation: Charles Bennett showed that computation could in principle be done without energy dissipation by using logically reversible gates [9]. The Fredkin-Toffoli model of conservative logic similarly demonstrated reversible universal gates in the early 1980s [10]. These insights set the stage for quantum computing, where quantum physics—particularly the unitary and reversible nature of closed-system dynamics—provides the natural substrate for computation.
Independently, quantum mechanics was revolutionizing our understanding of physical law. By the 1920s, the theory of quantum mechanics had been established, introducing counterintuitive principles such as superposition (a quantum system can exist in a linear combination of basis states) and entanglement (multi-particle states can exhibit correlations beyond those classically allowed). In 1935, Einstein, Podolsky, and Rosen pointed out the apparent paradox of entangled particles and questioned the completeness of quantum mechanics [4]. Schrödinger responded by emphasizing that entanglement is the characteristic trait of quantum theory, coining the term “entanglement” in the same year [5]. John Bell’s theorem in 1964 further solidified the non-classical nature of entanglement, showing that no local hidden-variable theory can reproduce all quantum predictions [8]. These foundational developments in quantum theory laid the groundwork for thinking about information in quantum terms.
The notion of a quantum computer was first explored in the 1980s. In 1980, Paul Benioff described a quantum mechanical model of a Turing machine, suggesting that a computer could in principle operate under quantum laws [11]. Yuri Manin also speculated in 1980 that classical computers might not efficiently simulate quantum processes [69]. Richard Feynman, in 1982, famously pointed out that simulating quantum physics appears to be exponentially hard on classical computers, and proposed that “nature isn’t classical… and if you want to make a simulation of nature, you’d better make it quantum mechanical.” He introduced the idea of a universal quantum simulator, a device that could mimic any other quantum system efficiently [12,13]. These ideas were further expanded by David Deutsch, who in 1985 formulated the concept of a universal quantum Turing machine and argued that quantum computers could perform tasks impossible for classical ones [14]. By 1993, Bernstein and Vazirani had defined a formal complexity-theoretic framework for quantum computation, introducing the class BQP and showing that quantum Turing machines and quantum circuits are polynomially equivalent models [16,17].
The excitement around quantum computing grew dramatically when Peter Shor discovered in 1994 that a quantum algorithm could efficiently factor large integers [31]. Shor’s algorithm demonstrated an exponential speedup over the best known classical factoring algorithms [32], thereby threatening the security of RSA encryption and proving that quantum computers, if realized, would have capabilities far beyond classical ones. Shortly thereafter, in 1996 Lov Grover found a quantum search algorithm that gives a quadratic speedup for the unstructured search problem [33]. These breakthroughs cemented the notion of “quantum supremacy” — the potential for quantum machines to solve certain problems faster than any classical computer.
In parallel with algorithmic advances, the physical feasibility of quantum computing was being explored. Early experiments in the mid-1990s used nuclear magnetic resonance (NMR) to manipulate nuclear spin qubits in molecules, demonstrating basic quantum logic gates and even a 7-qubit realization of Shor’s algorithm in 2001. Ion trap systems and superconducting circuits emerged as leading hardware platforms by the 2000s, each achieving progressively larger numbers of qubits and higher gate fidelities. By the 2020s, devices with over 50 qubits became available; for instance, Google’s 53-qubit superconducting processor achieved a milestone in 2019 by performing a sampling task in seconds that was argued to be infeasible on any contemporary supercomputer [78]. This milestone, termed quantum supremacy, is not a practical application per se but serves as a proof-of-principle that quantum devices can outperform classical ones on specific tasks. While the interpretation of such experiments is nuanced and some claims are debated, there is broad consensus that we are in the early stages of a new computing paradigm.
Despite rapid progress, quantum computing faces significant challenges. Quantum states are extremely sensitive to interaction with their environment, a phenomenon known as decoherence. Maintaining coherence of qubits long enough to perform lengthy computations is difficult, and any unwanted disturbance can introduce errors in quantum algorithms. Unlike classical bits, quantum information cannot be copied arbitrarily (due to the no-cloning theorem) [65], so simple repetition of data for error backup is not possible. Quantum error rates in current devices are several orders of magnitude too high to perform large-scale algorithms without correction. Overcoming these obstacles is a central focus of research, spurring developments in quantum error-correcting codes and fault-tolerant architectures.
This article provides a detailed overview of quantum computing from its theoretical underpinnings to advanced models. In Section 2, we introduce the basic formalism: qubits, Hilbert spaces, and quantum postulates, establishing the notation and tools (such as Dirac notation) used throughout. Section 3 describes quantum logic gates and circuits, explaining how quantum gates operate and how they can be composed into circuits for computation; we also discuss universal gate sets and the complexity of circuit synthesis. Section 4 covers quantum algorithms and complexity theory, highlighting major algorithms and what they imply for the relationship between quantum and classical complexity classes. In Section 5, we delve into quantum error correction—demonstrating how quantum information can be protected from errors—and Section 6 outlines the principles of fault-tolerant quantum computation that enable reliable computation on unreliable hardware. Section 7 surveys advanced paradigms beyond the standard circuit model, including measurement-based and topological computing, as well as adiabatic and continuous-variable models. Section 8 touches on key concepts of quantum information theory that are relevant to computation, such as entanglement, entropy, and information capacity. Section 9 provides an overview of the leading physical implementations (quantum hardware) and the state of experimental quantum computing. We conclude in Section 10 with a discussion of the outlook for the field, including both the enormous potential and the remaining hurdles on the way to practical quantum computers.
Throughout this article, we aim to be mathematically rigorous, providing derivations or references for fundamental results. The bibliography includes over 130 references, from seminal papers to comprehensive textbooks, reflecting the rich and rapidly evolving literature of this interdisciplinary field.

2. Qubits, States, and Quantum Mechanics Fundamentals

2.1. Qubits and Hilbert Space

The basic unit of quantum information is the qubit (quantum bit). A qubit is a two-level quantum system that can exist in a superposition of its two basis states, commonly denoted $|0\rangle$ and $|1\rangle$ by analogy to the binary states of a classical bit. The state of a single qubit can be written as
$$|\psi\rangle = \alpha|0\rangle + \beta|1\rangle,$$
where $\alpha, \beta \in \mathbb{C}$ are complex probability amplitudes subject to the normalization condition $|\alpha|^2 + |\beta|^2 = 1$. This linear combination means the qubit simultaneously "occupies" the $|0\rangle$ and $|1\rangle$ states, with a relative weighting. If a measurement is made in the $\{|0\rangle, |1\rangle\}$ basis (the computational basis), the probability of observing outcome 0 is $|\alpha|^2$ and outcome 1 is $|\beta|^2$, in accordance with the Born rule of quantum mechanics. Upon measurement, the qubit state collapses to the observed basis state (e.g., $|0\rangle$ if outcome 0 is observed).
Mathematically, the state of a qubit resides in a two-dimensional complex Hilbert space $\mathcal{H}_2 \cong \mathbb{C}^2$. The basis $\{|0\rangle, |1\rangle\}$ is an orthonormal basis for this space. We will often use the Dirac bra–ket notation, introduced by Paul Dirac [2], which provides an elegant way to represent quantum states and linear operations. In this notation, $|\psi\rangle$ (a "ket") denotes a column vector (state vector) in the Hilbert space, and its dual $\langle\psi|$ (a "bra") denotes the conjugate transpose (row vector). Inner products are written as $\langle\phi|\psi\rangle$, and outer products as $|\psi\rangle\langle\phi|$, which yield linear operators. For example, the basis states can be represented as
$$|0\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad |1\rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix},$$
and an arbitrary qubit $|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$ corresponds to the vector $\begin{pmatrix} \alpha \\ \beta \end{pmatrix}$.
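As a concrete numerical illustration (not part of the formal development), the following minimal NumPy sketch represents a qubit as a length-2 complex vector and checks the Born rule by sampling; the amplitudes are arbitrary example values:

```python
import numpy as np

# A qubit |psi> = alpha|0> + beta|1> as a length-2 complex vector.
alpha, beta = 1 / np.sqrt(3), np.sqrt(2 / 3) * np.exp(1j * np.pi / 4)
psi = np.array([alpha, beta])
assert np.isclose(np.linalg.norm(psi), 1.0)  # normalization |alpha|^2 + |beta|^2 = 1

# Born rule: outcome probabilities in the computational basis.
probs = np.abs(psi) ** 2
print(probs)  # [0.333..., 0.666...]

# Simulate repeated measurements; empirical frequencies approach the Born probabilities.
rng = np.random.default_rng(0)
outcomes = rng.choice([0, 1], size=10_000, p=probs)
print(outcomes.mean())  # fraction of outcome 1, close to |beta|^2
```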
A key distinction of a qubit is that, unlike a classical bit, it can exist in a superposition. This property has no classical analog: whereas a classical bit is definitively either 0 or 1, a qubit can be "both 0 and 1" until measured. It is important to note, however, that although we use intuition like "both at once," a qubit in superposition does not yield both outcomes on one measurement; rather it yields one outcome or the other with probabilities given by the squared magnitudes of the amplitudes. Superposition allows interference effects: if the relative phase between the amplitudes $\alpha$ and $\beta$ changes, measurement outcomes in bases other than the computational basis can change due to constructive or destructive interference between probability amplitudes.
Physically, many systems can serve as qubits: the two spin states of an electron ("spin up" $|\uparrow\rangle$ vs. "spin down" $|\downarrow\rangle$ in a magnetic field), the ground and excited states of an atom, polarization states of a single photon (horizontal vs. vertical), or distinct charge states of a superconducting circuit element, to name a few. Each of these realizes a two-level quantum system that can be coherently manipulated.
The space of states of even a single qubit is continuous (the complex projective line $\mathbb{CP}^1$, topologically a sphere). A helpful visualization is the Bloch sphere representation: any single-qubit pure state can be written (up to a global phase) as
$$|\psi\rangle = \cos\frac{\theta}{2}\,|0\rangle + e^{i\phi}\sin\frac{\theta}{2}\,|1\rangle,$$
for some real angles $0 \le \theta \le \pi$ and $0 \le \phi < 2\pi$. One can map $|\psi\rangle$ to a point on the unit sphere in $\mathbb{R}^3$ with spherical coordinates $(\theta, \phi)$. The poles of the sphere represent the basis states $|0\rangle$ (north pole, $\theta = 0$) and $|1\rangle$ (south pole, $\theta = \pi$), while superpositions lie elsewhere on the surface. Antipodal points on the Bloch sphere correspond to orthogonal states. This geometric picture is useful for visualizing single-qubit state evolution under various operations.
When dealing with multiple qubits, the state space is the tensor product of the individual qubit spaces. For example, a two-qubit system lives in $\mathcal{H}_2 \otimes \mathcal{H}_2 \cong \mathbb{C}^4$, spanned by $\{|00\rangle, |01\rangle, |10\rangle, |11\rangle\}$. A general state of two qubits is
$$|\Psi\rangle = \sum_{i,j \in \{0,1\}} \gamma_{ij}\,|ij\rangle,$$
with complex coefficients $\gamma_{ij}$ normalized such that $\sum_{i,j} |\gamma_{ij}|^2 = 1$. States that can be factored as $|\psi\rangle \otimes |\phi\rangle$ are called product states (or separable states); otherwise the two qubits are in an entangled state. A famous example of entanglement is the two-qubit Bell state
$$|\Phi^+\rangle = \frac{1}{\sqrt{2}}\left(|00\rangle + |11\rangle\right),$$
which is a maximally entangled state of two qubits. In $|\Phi^+\rangle$, neither qubit has a definite value on its own (each marginal state is maximally mixed), yet they are perfectly correlated: both are observed to be 0 or both 1 with equal probability. Bell states violate classical intuitions about separability and underlie quantum phenomena like the EPR paradox and quantum teleportation.
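The maximally mixed marginals of $|\Phi^+\rangle$ can be verified directly; the following minimal NumPy sketch builds the Bell state and traces out the second qubit:

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# |Phi+> = (|00> + |11>) / sqrt(2)
phi_plus = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)

rho = np.outer(phi_plus, phi_plus.conj())                     # |Phi+><Phi+|
rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)   # partial trace over qubit B

print(np.round(rho_A, 3))   # 0.5 * I: each qubit alone is maximally mixed
```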
Dirac notation and the formalism of Hilbert spaces provide a succinct language for reasoning about qubits and larger quantum systems. We emphasize that the state of a quantum system encapsulates everything one can predict probabilistically about the outcomes of measurements on that system. Unlike a probability distribution over classical states, the quantum state vector contains phase information that affects future evolution and interference, and these phases have observable consequences (e.g., in the output of quantum algorithms).

2.2. Operators and Postulates of Quantum Mechanics

Quantum systems evolve and are observed according to a set of postulates. We summarize the key postulates as they apply to quantum computing:
State Postulate: The state of an isolated quantum system is described by a unit vector $|\psi\rangle$ in a Hilbert space $\mathcal{H}$. If the system is not in a pure state (due to entanglement with an environment or lack of knowledge), it can be described more generally by a density operator $\rho$ (a positive semi-definite, unit-trace operator on $\mathcal{H}$). In this article we primarily consider pure states for simplicity, except when discussing decoherence and noise.
Evolution Postulate: The evolution of a closed quantum system is unitary. Specifically, over time $t$ the state changes as $|\psi(t)\rangle = U(t)|\psi(0)\rangle$, where $U(t)$ is a unitary operator ($U^\dagger U = I$) on $\mathcal{H}$. Equivalently, the time-dependent Schrödinger equation governs the dynamics:
$$i\hbar \frac{d}{dt}|\psi(t)\rangle = H|\psi(t)\rangle,$$
where $H$ is the Hamiltonian (energy observable) of the system, a Hermitian operator on $\mathcal{H}$. The formal solution (for time-independent $H$) is $|\psi(t)\rangle = e^{-iHt/\hbar}|\psi(0)\rangle$, and the time-evolution operator $U(t) = e^{-iHt/\hbar}$ is unitary. In a quantum computer, we have control over $H(t)$ (within some hardware-dependent family), allowing us to enact desired unitary transformations as logic gates by turning on interactions (like laser pulses, microwave drives, etc.) for precise durations. This is the continuous model; in the circuit model (digital quantum computing) one considers a sequence of discrete unitary gates $U_1, U_2, \ldots, U_m$ such that the overall transformation is $U_m \cdots U_2 U_1$.
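As a sketch of this postulate, the following NumPy snippet exponentiates a hypothetical single-qubit Hamiltonian (a drive along $X$, with $\hbar = 1$ and an assumed pulse strength) and checks that the resulting evolution is unitary:

```python
import numpy as np

# Hypothetical single-qubit Hamiltonian: a drive along X (units with hbar = 1).
X = np.array([[0, 1], [1, 0]], dtype=complex)
omega = np.pi                      # assumed drive strength
H = 0.5 * omega * X

# U(t) = exp(-iHt), computed via the eigendecomposition of the Hermitian H.
t = 1.0                            # pulse area omega * t = pi: a "pi pulse"
evals, evecs = np.linalg.eigh(H)
U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T

assert np.allclose(U.conj().T @ U, np.eye(2))   # unitarity: U^dag U = I
psi0 = np.array([1, 0], dtype=complex)
print(np.abs(U @ psi0) ** 2)                    # [0, 1]: full transfer |0> -> |1>
```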
Measurement Postulate: A quantum measurement is described by a set of measurement operators $\{M_k\}$ acting on $\mathcal{H}$, with outcome labels $k$. If the system is in state $|\psi\rangle$, the probability of obtaining outcome $k$ is $p(k) = \langle\psi|M_k^\dagger M_k|\psi\rangle$, and the post-measurement (collapsed) state, given outcome $k$ occurred, is $M_k|\psi\rangle/\sqrt{p(k)}$. For a projective (von Neumann) measurement, the $M_k$ are orthogonal projectors onto an eigenbasis of an observable (Hermitian operator) $O = \sum_k \lambda_k P_k$, with $P_k$ the projector onto the eigenspace of eigenvalue $\lambda_k$. In that case, $M_k = P_k$ and $p(k) = \langle\psi|P_k|\psi\rangle$, and the state collapses to $P_k|\psi\rangle/\sqrt{p(k)}$ (an eigenstate of $O$ with eigenvalue $\lambda_k$). In quantum computing, the most common measurement is in the computational basis $\{|0\rangle, |1\rangle\}$ for each qubit. Measurement is inherently probabilistic and generally irreversible; it is usually the final step of a quantum algorithm to extract an output that can be read by a classical computer.
Two additional principles are worth noting: (i) Composite Systems: The state space of a joint system is the tensor product of the state spaces of the components. If we have subsystems A and B with states $|\phi\rangle_A \in \mathcal{H}_A$ and $|\chi\rangle_B \in \mathcal{H}_B$, the joint state is $|\phi\rangle \otimes |\chi\rangle \in \mathcal{H}_A \otimes \mathcal{H}_B$. More generally, any entangled state lives in the tensor product space but cannot be factorized into separate states of each subsystem. (ii) No-Cloning Theorem: There is no physical operation that can take an arbitrary unknown quantum state $|\psi\rangle$ and produce two copies $|\psi\rangle \otimes |\psi\rangle$. Formally, no unitary $U$ on two identical Hilbert spaces can satisfy $U(|\psi\rangle \otimes |e\rangle) = |\psi\rangle \otimes |\psi\rangle$ for all $|\psi\rangle$ (with $|e\rangle$ a fixed "blank" state). This result, discovered by Wootters and Zurek in 1982 [64], has profound implications: it forbids simple error correction strategies like making backup copies of qubits and underlies the security of quantum cryptographic protocols.
In the context of quantum computing, we often restrict our attention to a discrete set of operations (quantum gates) and measurements in a fixed basis (computational basis). It is important to remember that between these operations, the system’s state can be an arbitrary superposition or entangled state that cannot be directly interpreted in classical terms. The power of quantum computation stems from this ability to evolve complex superpositions that encode many computational paths at once, and to interfere them to amplify correct outcomes and cancel incorrect ones.

2.3. Quantum Entanglement and Correlation

Entanglement is a uniquely quantum form of correlation with no classical counterpart. Two or more qubits are entangled if their joint state is not expressible as a product state of each qubit individually. For example, the Bell state $|\Phi^+\rangle = (|00\rangle + |11\rangle)/\sqrt{2}$ introduced earlier is entangled. If we write down its density matrix $\rho_{\Phi^+} = |\Phi^+\rangle\langle\Phi^+|$, the reduced state of either single qubit (obtained by tracing out the other qubit) is $\frac{1}{2}I$, a maximally mixed state. Thus, each qubit individually has no definite value; nevertheless, if measured in the same basis, their outcomes are perfectly correlated (both outcomes match). This kind of correlation cannot be explained by any shared classical randomness, as evidenced by the violation of Bell's inequalities [8].
Entanglement plays a critical role in many quantum algorithms and protocols. For instance, Shor's algorithm creates entanglement between the argument register and the function-value register; measuring the function register collapses the argument register into a periodic superposition whose period the quantum Fourier transform then reveals. Similarly, entanglement enables tasks like quantum teleportation and superdense coding, which have no analog in classical information theory.
A quantitative measure of entanglement for pure states of bipartite systems is the entropy of entanglement, defined as the von Neumann entropy of the reduced state: $E(|\Psi\rangle_{AB}) = S(\rho_A) = -\mathrm{tr}(\rho_A \log_2 \rho_A)$, where $\rho_A = \mathrm{tr}_B(|\Psi\rangle_{AB}\langle\Psi|_{AB})$. If $E = 0$, the state is separable; if $E = 1$ (for two qubits), it is maximally entangled.
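A minimal numerical sketch of this measure (partial trace followed by the von Neumann entropy), using the Bell state and a product state as test cases:

```python
import numpy as np

def entanglement_entropy(psi, dA=2, dB=2):
    """Entropy of entanglement S(rho_A) for a bipartite pure state psi."""
    rho_A = np.trace(np.outer(psi, psi.conj()).reshape(dA, dB, dA, dB),
                     axis1=1, axis2=3)
    evals = np.linalg.eigvalsh(rho_A)
    evals = evals[evals > 1e-12]               # convention: 0 log 0 = 0
    return float(-np.sum(evals * np.log2(evals)))

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)                  # |Phi+>
prod = np.kron([1, 0], [1 / np.sqrt(2), 1 / np.sqrt(2)])    # |0>|+>

print(entanglement_entropy(bell))   # 1.0: maximally entangled
print(entanglement_entropy(prod))   # 0.0: separable
```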
One fundamental property of quantum states is that an overall global phase is physically irrelevant. The states $|\psi\rangle$ and $e^{i\gamma}|\psi\rangle$ (for any real $\gamma$) are indistinguishable by any measurement, since all measurable predictions involve expressions like $|\psi\rangle\langle\psi|$ in which the phase cancels out. Only relative phases between components of a superposition have physical effects.

3. Quantum Gates and Quantum Circuits

3.1. Single-Qubit Gates

Quantum gates are unitary operations on one or more qubits that serve as the building blocks of quantum circuits. Key single-qubit gates include:
- The Pauli $X$ gate (quantum NOT gate): $X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$. $X$ flips the state: $X|0\rangle = |1\rangle$, $X|1\rangle = |0\rangle$.
- The Pauli $Z$ gate (phase-flip): $Z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$. $Z$ leaves $|0\rangle$ unchanged and flips the phase of $|1\rangle$: $Z|1\rangle = -|1\rangle$.
- The Pauli $Y$ gate (a combination of $X$ and $Z$): $Y = iXZ = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}$.
- The Hadamard gate: $H = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$. $H$ maps $|0\rangle$ to $(|0\rangle + |1\rangle)/\sqrt{2} =: |+\rangle$ and $|1\rangle$ to $(|0\rangle - |1\rangle)/\sqrt{2} =: |-\rangle$, creating superposition from basis states. $H$ is its own inverse: $H^2 = I$.
These gates can be physically implemented by, for example, applying a resonant pulse to a qubit for a certain duration (in NMR or superconducting circuits, this corresponds to a Rabi rotation). Many single-qubit gates are equivalent up to a global phase; e.g., the operator $XZ$ has the same effect on computational basis states as $Y$ (they differ only by a global phase $i$, since $Y = iXZ$). Similarly, rotations about the Bloch sphere axes $X$, $Y$, $Z$ by various angles form a continuous family of gates, with $X$, $Y$, $Z$ themselves corresponding to 180-degree rotations. It can be shown that any single-qubit unitary can be implemented (up to a global phase) using a sequence of rotations about two axes, say $X$ and $Z$.
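The following short NumPy sketch encodes these matrices and verifies the stated identities:

```python
import numpy as np

# Standard single-qubit gates as 2x2 unitary matrices.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
Y = 1j * X @ Z                     # Y = iXZ = [[0, -i], [i, 0]]
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

assert np.allclose(X @ ket0, ket1)      # X flips |0> to |1>
assert np.allclose(Z @ ket1, -ket1)     # Z flips the phase of |1>
assert np.allclose(H @ H, np.eye(2))    # H is its own inverse
print(H @ ket0)                         # |+> = (|0> + |1>)/sqrt(2)
```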

3.2. Multi-Qubit Gates and Universal Gate Sets

Multi-qubit gates act on two or more qubits. The prototypical two-qubit gate is the controlled-NOT (CNOT) gate, which flips the state of a target qubit if a control qubit is in state $|1\rangle$ (and acts as identity if the control is $|0\rangle$). In the $\{|00\rangle, |01\rangle, |10\rangle, |11\rangle\}$ basis,
$$\mathrm{CNOT} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix}.$$
Two-qubit controlled-phase gates (e.g., controlled-$Z$) are related to CNOT by single-qubit conjugations.
Three-qubit gates like the Toffoli (CCNOT) gate are also useful. The Toffoli has two control qubits and one target: it flips the target if both controls are | 1 , and does nothing otherwise. The Toffoli gate is an example of a classically reversible operation (a reversible AND) that has a convenient quantum implementation. It can implement any Boolean circuit in principle, since NAND (universal for classical logic) can be composed of Toffoli and X gates.
The quantum circuit model is a framework in which a computation is a sequence of quantum gates applied to an initial state (usually $|00\cdots 0\rangle$). A quantum circuit is often depicted by a diagram with wires (for qubits) and symbols for gates. A simple example circuit might consist of preparing $n$ qubits in $|0\rangle^{\otimes n}$, applying a Hadamard gate to each (creating an equal superposition over all $2^n$ computational basis states), and then measuring; this implements a trivial uniform sampling of bit-strings.
One of the crowning achievements in quantum computer science was the proof of the universality of quantum gate sets. In particular, it was shown that the CNOT gate together with arbitrary single-qubit gates forms a universal set, meaning any $n$-qubit unitary operation can be approximated to arbitrary accuracy by a circuit using only those gates. Equivalently, there is a finite set of one- and two-qubit gates such that any quantum algorithm can be compiled into a circuit over those gates. A common universal set is CNOT plus the single-qubit gates $\{H, S, T\}$ (Hadamard, the phase gate $S = \begin{pmatrix} 1 & 0 \\ 0 & i \end{pmatrix}$, and $T = \begin{pmatrix} 1 & 0 \\ 0 & e^{i\pi/4} \end{pmatrix}$). The $T$ gate (also called the $\pi/8$ gate) is non-Clifford (it does not map Pauli operators to Pauli operators under conjugation), and supplying this one gate in addition to the Clifford set $\{H, S, \mathrm{CNOT}\}$ yields universality.
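As a small numerical check, the sketch below verifies that $T^2 = S$ and contrasts Clifford and non-Clifford behavior by conjugating the Pauli $X$:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
S = np.array([[1, 0], [0, 1j]], dtype=complex)
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]], dtype=complex)

assert np.allclose(T @ T, S)   # T^2 = S

# Clifford gates map Paulis to Paulis under conjugation; S does, T does not.
print(np.round(S @ X @ S.conj().T, 3))   # = Y, a Pauli: S is Clifford
print(np.round(T @ X @ T.conj().T, 3))   # = (X + Y)/sqrt(2): not a Pauli
```

Since $TXT^\dagger = (X+Y)/\sqrt{2}$ is not a Pauli operator, $T$ lies outside the Clifford group, which is precisely why adding it to $\{H, S, \mathrm{CNOT}\}$ yields universality.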
Equivalence of the circuit model with other models was an important validation of the robustness of the notion of quantum computation. Yao (1993) showed that quantum Turing machines and uniform quantum circuits have equal computational power, up to polynomial simulation overhead. This extended the Church–Turing principle into the quantum domain, indicating that any reasonable model of quantum computation can simulate any other with at most polynomial overhead (thus the choice of model is usually a matter of convenience).
The circuit model is how most quantum algorithms are expressed. For instance, Shor’s factoring algorithm can be described by a circuit implementing modular exponentiation (via repeated controlled multiplication gates and addition circuits), followed by a quantum Fourier transform sub-circuit. Grover’s search algorithm similarly can be seen as repeated applications of an “inversion about the mean” operation, which is a specific multi-qubit reflection gate.
Finally, we should mention one more fundamental circuit element: measurement. In circuit diagrams, measurement is often depicted as a gate at the end of a wire, outputting a classical bit (usually drawn as a double line). If a measurement is performed partway through a circuit, the measured qubit's state collapses and subsequent operations act on the collapsed (classical) outcome; however, any such early measurement can usually be deferred to the end by instead propagating a coherent controlled operation, thanks to the principle of deferred measurement. Therefore, for the purpose of analyzing computational power, we may assume all measurements happen at the end without loss of generality.

4. Quantum Algorithms and Complexity Theory

4.1. Quantum Algorithms

Quantum algorithms are procedures that run on a quantum computer to solve computational problems faster (in terms of asymptotic complexity) than known classical algorithms. We highlight several landmark algorithms and general algorithmic techniques.

Deutsch-Jozsa Algorithm (1992):

This was one of the first examples to illustrate quantum advantage [15]. The Deutsch-Jozsa problem is a black-box (oracle) problem: given an oracle function $f : \{0,1\}^n \to \{0,1\}$ promised to be either constant (same output for all inputs) or balanced (output 0 for exactly half of the inputs and 1 for the other half), determine which case it is. Classically, in the worst case $2^{n-1} + 1$ queries are required to be certain of the answer. Deutsch and Jozsa showed that a quantum algorithm can solve it with just one query to a quantum oracle, by preparing a superposition of all inputs, querying in parallel, and then applying a Hadamard transform and measuring. The algorithm always returns the correct answer with one query, showcasing how quantum parallelism and interference can outperform classical sampling. Although this problem is somewhat artificial (it is a promise problem designed to exhibit a wide separation), it set the stage for more practical algorithms.
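A complete statevector simulation of Deutsch-Jozsa fits in a few lines; this sketch uses the phase-oracle form $|x\rangle \mapsto (-1)^{f(x)}|x\rangle$ (equivalent to the standard oracle acting on an ancilla prepared in $|-\rangle$):

```python
import numpy as np

def deutsch_jozsa(f, n):
    """Statevector simulation: one phase-oracle query decides constant vs. balanced."""
    N = 2 ** n
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    Hn = H
    for _ in range(n - 1):
        Hn = np.kron(Hn, H)                                   # H tensored n times
    psi = Hn @ np.eye(N)[0]                                   # uniform superposition
    psi = np.array([(-1) ** f(x) for x in range(N)]) * psi    # single oracle query
    psi = Hn @ psi
    # All amplitude returns to |0...0> iff f is constant.
    return "constant" if np.isclose(abs(psi[0]) ** 2, 1.0) else "balanced"

print(deutsch_jozsa(lambda x: 0, 3))                        # constant
print(deutsch_jozsa(lambda x: bin(x).count("1") % 2, 3))    # balanced (parity)
```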

Simon’s Algorithm (1994):

Simon's problem provided an exponential speedup in the query-complexity model, and importantly, it inspired Shor's algorithm. The problem: given a black-box function $f : \{0,1\}^n \to \{0,1\}^n$ that is guaranteed to be 2-to-1 (each output value corresponds to exactly two distinct inputs) and to satisfy $f(x) = f(y)$ if and only if $y = x \oplus s$ for some secret string $s$ (with $\oplus$ denoting bitwise XOR), find $s$. Classically, this requires exponentially many queries (it is essentially a collision-finding problem over an exponentially large domain). Simon's quantum algorithm uses $O(n)$ queries to identify $s$ with high probability [30]. It does so by querying $f$ on a superposition and then performing a measurement that yields a random bit-string orthogonal to $s$, repeating to gather $n-1$ linearly independent equations and solving for $s$. Simon's algorithm was the first to show an exponential separation (in query complexity) between quantum and deterministic classical computation for a specific problem. Shor later adapted Simon's Fourier-measurement technique to the structure of integer factoring.

Shor’s Factoring Algorithm (1994):

Peter Shor's algorithm for integer factorization (1994) [31,32] is one of the most celebrated results in quantum computing. The task is to find a nontrivial prime factor of a composite integer $N$. Shor's approach reduces factoring to the problem of order-finding: given a randomly chosen integer $a < N$ coprime to $N$, find the period $r$ of the function $f(x) = a^x \bmod N$ (i.e., the smallest $r$ such that $a^r \equiv 1 \pmod{N}$). Classically, finding $r$ seems to require up to $O(N)$ steps in the worst case, which is exponential in the input size (the number of digits of $N$). The quantum algorithm finds $r$ in polynomial time by creating a superposition of states encoding values of $f(x)$, performing a quantum Fourier transform (QFT) to reveal the period, and then using continued fractions to extract $r$ from the measured result. Once $r$ is known, one can compute $\gcd(a^{r/2} \pm 1, N)$ to obtain a factor of $N$ (provided $r$ is even and $a^{r/2} \not\equiv -1 \pmod{N}$, which happens with good probability by standard number-theoretic arguments). Shor's algorithm runs in $\tilde{O}(n^3)$ time (where $n = \log N$ is the number of bits of $N$) using $\tilde{O}(n^2)$ quantum gates, a dramatic improvement over the best known classical algorithms (such as the general number field sieve), which run in sub-exponential but super-polynomial time in $n$. Shor's algorithm can also be adapted to solve the discrete logarithm problem, thereby breaking several other cryptographic schemes. The impact of Shor's work was profound: it showed that widely used public-key cryptography could be vulnerable if a large-scale quantum computer were built, and it galvanized research into post-quantum cryptography as well as increased interest (and funding) in quantum technologies.
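The classical scaffolding of Shor's reduction can be illustrated directly; in the sketch below, the brute-force loop that finds the order $r$ stands in for the quantum period-finding step (the only part that requires a quantum computer):

```python
import math

def find_factor(N, a):
    """Shor's classical reduction: order-finding -> a nontrivial factor of N."""
    assert math.gcd(a, N) == 1
    # Brute-force the order r of a mod N. This is the exponentially hard step
    # that the quantum algorithm replaces with QFT-based period finding.
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    if r % 2 == 1 or pow(a, r // 2, N) == N - 1:
        return None               # unlucky choice of a; retry with another random a
    return math.gcd(pow(a, r // 2) - 1, N)

print(find_factor(15, 7))   # order of 7 mod 15 is 4; gcd(7^2 - 1, 15) = 3
```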

Grover’s Search Algorithm (1996):

Lov Grover discovered that quantum computers can perform unstructured search with a quadratic speedup. The problem: given an unstructured database of $N$ items (or an oracle function $f : \{1, \ldots, N\} \to \{0,1\}$ that outputs 1 for the marked item and 0 otherwise), find the index $i_0$ such that $f(i_0) = 1$. Classically, this requires $O(N)$ queries in the worst case (linear search). Grover's algorithm finds the marked item with high probability in $O(\sqrt{N})$ queries [33]. It works by initializing an equal superposition of all $N$ states, then repeatedly applying an oracle phase flip (which multiplies the amplitude of the marked state by $-1$) and a diffusion operation (which inverts all amplitudes about their average). This pair of operations constitutes a Grover "iteration," which increases the amplitude of the marked state while decreasing the others. After about $\frac{\pi}{4}\sqrt{N}$ iterations, the marked state's amplitude is close to 1, and measuring yields the solution with high probability. Though the speedup is only quadratic, Grover's algorithm is widely applicable (it finds a target in any unstructured search problem) and essentially optimal for black-box search problems (Bennett et al. proved that a quantum computer cannot do unstructured search in fewer than $\Omega(\sqrt{N})$ steps) [35]. Grover's technique of amplitude amplification is a versatile subroutine that has been incorporated into many other quantum algorithms to achieve quadratic improvements for various problems (like optimization, formula satisfiability, or generic enumeration tasks) [36].
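Grover's iteration is simple enough to simulate exactly; this sketch tracks the full amplitude vector, with the marked index chosen arbitrarily:

```python
import numpy as np

def grover(n, marked):
    """Exact statevector simulation of Grover search on N = 2^n items."""
    N = 2 ** n
    psi = np.full(N, 1 / np.sqrt(N))                 # uniform superposition
    iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
    for _ in range(iterations):
        psi[marked] *= -1                            # oracle: phase-flip the marked item
        psi = 2 * psi.mean() - psi                   # diffusion: inversion about the mean
    return np.abs(psi[marked]) ** 2

print(grover(10, marked=437))   # success probability close to 1 after ~25 iterations
```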
Beyond these famous algorithms, there is a growing toolbox of quantum algorithms:
- Quantum Fourier Transform (QFT): The QFT is a quantum analogue of the discrete Fourier transform applied to the amplitude indices. It can be implemented in $O(n^2)$ gates on $n$ qubits. The QFT is a central component of Shor's algorithm and also finds use in algorithms for solving hidden subgroup problems, phase estimation, and computing the eigenvalues of unitary operators (a numerical sketch appears after this list).
- Phase Estimation: This algorithm estimates the phase $\phi$ in the eigenvalue $e^{2\pi i\phi}$ of a given unitary $U$ for an eigenstate $|\psi\rangle$ such that $U|\psi\rangle = e^{2\pi i\phi}|\psi\rangle$. Phase estimation uses the QFT and repeated applications of $U$ to extract the binary expansion of $\phi$. It runs in time polynomial in the desired precision and has applications in algorithms for chemistry and linear algebra.
- Quantum Simulation: As Feynman envisioned, quantum computers can simulate other quantum systems more efficiently than classical computers. Lloyd (1996) showed that time evolution under a local Hamiltonian (a sum of local terms) for time $t$ can be simulated by a quantum circuit of size poly($t$) [37].
- Quantum Algorithms for Linear Algebra: The HHL algorithm (Harrow, Hassidim, Lloyd, 2009) solves systems of linear equations $Ax = b$ in time polylog($N$) for an $N \times N$ sparse matrix $A$, producing a quantum state proportional to the solution vector $x$ [38]. While HHL has stringent conditions (one needs a well-behaved condition number, and the solution is returned as a quantum state, so classical values must be extracted by measurement), it initiated an area of quantum machine learning and linear-algebra algorithms that attempt to exploit speedups in handling high-dimensional data encoded in quantum states.
- Quantum Walk Algorithms: Quantum walks are quantum analogues of random walks and have been used to develop algorithms, for example, for element distinctness or formula evaluation, sometimes achieving polynomial speedups beyond what amplitude amplification alone provides. Childs et al. (2003) constructed a quantum walk that traverses a certain graph exponentially faster than any classical algorithm, providing another oracle-based speedup.
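As promised above, a numerical sketch of the QFT: the matrix $F_{jk} = e^{2\pi i jk/N}/\sqrt{N}$ is built directly, checked for unitarity, and applied to a period-4 state to show the characteristic peaks at multiples of $N/r$:

```python
import numpy as np

def qft_matrix(n):
    """The n-qubit QFT as a 2^n x 2^n unitary: F[j, k] = omega^{jk} / sqrt(2^n)."""
    N = 2 ** n
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

F = qft_matrix(3)
assert np.allclose(F.conj().T @ F, np.eye(8))      # unitarity

# Applying the QFT to a periodic state concentrates amplitude on multiples of N/r.
psi = np.zeros(8); psi[[0, 4]] = 1 / np.sqrt(2)    # period r = 4 in an N = 8 register
print(np.round(np.abs(F @ psi) ** 2, 3))           # peaks at k = 0, 2, 4, 6
```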
Despite this growing list, not every problem is sped up by quantum computing. In fact, for many NP-hard problems, quantum algorithms are not known to give exponential speedup (Grover's quadratic speedup is often the best known, and that is provably optimal for black-box search). A major open question is whether quantum computers can solve NP-complete problems in polynomial time; it is widely conjectured that they cannot, i.e., that NP is not contained in BQP, but this remains unproven. Notably, BQP is believed to be incomparable with NP: there are problems in BQP, like integer factoring, that are believed not to be NP-complete (the decision version of factoring lies in NP ∩ co-NP, with the factors themselves serving as a certificate, which makes NP-completeness unlikely); conversely, NP-complete problems are believed not to be in BQP. A common misconception is that quantum computers solve NP-complete problems efficiently; on the contrary, evidence such as oracle constructions suggests that NP is unlikely to be contained in BQP [79]. Bennett et al. (1997) argued that quantum algorithms searching for a single satisfying assignment cannot do better than Grover's $O(2^{n/2})$ in the black-box model, providing more evidence of that limitation.

4.2. Quantum Complexity Theory

Quantum complexity theory studies the computational complexity of problems in the quantum computation model and classifies them into complexity classes paralleling classical ones. The primary class of interest is BQP (Bounded-Error Quantum Polynomial time), defined as the class of decision problems solvable by a uniform family of polynomial-size quantum circuits with error probability at most 1/3 on every input. The choice of 1/3 is arbitrary; any constant error below $1/2$ can be amplified down to, say, $2^{-p(n)}$ by repeating the algorithm $\mathrm{poly}(p(n))$ times and taking a majority vote, using error-reduction techniques similar to classical ones. BQP can be informally thought of as the quantum analogue of the classical class BPP (bounded-error probabilistic polynomial time).
It is known that $\mathrm{P} \subseteq \mathrm{BPP} \subseteq \mathrm{BQP} \subseteq \mathrm{PSPACE}$ (the last inclusion follows because a quantum computation on $n$ qubits running for $T$ time steps can be simulated classically in space $O(nT)$ by summing over computational paths, which for polynomial $T$ is polynomial space). It is conjectured (but not proven) that all of these inclusions are strict.
Another major class is QMA (Quantum Merlin-Arthur), the quantum analogue of NP, where a "yes" instance has a quantum proof (quantum state) that the verifier can check in polynomial time with a quantum circuit. QMA is the class of problems for which a yes-instance can be verified by a BQP machine given a quantum witness state (which may be entangled across many qubits). An example QMA-complete problem is the Local Hamiltonian Problem: given a Hermitian matrix (Hamiltonian) acting on $n$ qubits, $H = \sum_{i=1}^m H_i$ (where each $H_i$ acts on a constant number of qubits, e.g., 2 qubits), determine whether the smallest eigenvalue of $H$ is below $a$ or above $b$, for promised bounds $a < b$ separated by at least an inverse polynomial in $n$. This is the quantum analogue of MAX-SAT (ground-state energy versus an energy threshold), and it was shown to be QMA-complete by Kitaev, with subsequent refinements by other authors.
Other notable complexity results: Jain, Ji, Upadhyay, and Watrous (2010) showed that QIP (quantum interactive proofs, where verifier and prover exchange quantum messages) equals PSPACE (whereas the classical result IP = PSPACE was a celebrated theorem of the early 1990s). This demonstrated that single-prover quantum interaction gives no more decision power than classical interaction, even though quantum proofs may seem far more powerful (indeed, multi-prover quantum interactive proofs are vastly stronger: the result MIP* = RE shows that entangled provers can certify even undecidable problems). Another result, by Raz and Tal (2019), gave an oracle separation between BQP and PH (the polynomial hierarchy), meaning that relative to some oracle there is a problem solvable by a quantum computer that is not in the entire PH. This indicates that quantum computers may solve problems outside the entire PH (which includes NP, NP ∩ co-NP, etc.), something that cannot happen for BPP, which is known to lie within PH.
Moreover, BQP is low for PP (meaning $\mathrm{PP}^{\mathrm{BQP}} = \mathrm{PP}$), which is evidence that quantum polynomial time is not as powerful as PP (a class that, informally, corresponds to unbounded-error or postselected quantum computation). There are many other relationships and conjectures, but in summary, quantum complexity fits into the modern complexity-theory landscape as a central piece that challenges some classical intuitions while still largely conforming to the hierarchies and relativized barriers that structure classical complexity classes.
These relativized separations, in particular the Raz-Tal oracle result [79], suggest that quantum advantage could extend to problems that elude even the entire polynomial hierarchy.

5. Quantum Error Correction

5.1. Decoherence and Error Models

As powerful as quantum computation is, it is extraordinarily sensitive to errors. The same properties that give quantum computers their edge—superposition and entanglement—also make them fragile. Interaction with the environment (or noise from imperfect controls) can cause a qubit’s state to decohere or suffer a bit-flip or phase error. Unlike classical bits, one cannot simply copy a qubit state to guard against loss (the no-cloning theorem forbids it). Moreover, any measurement to check the state will collapse it, typically destroying the information. Thus, error correction is nontrivial: one must encode a single qubit of information into a larger entangled state of multiple physical qubits such that some redundancy is present, and errors can be detected and corrected without measuring the encoded information directly.
To model noise, we often use simple error models like the independent Pauli error model: each qubit at each time step suffers a bit-flip ($X$) error with probability $p_X$, a phase-flip ($Z$) error with probability $p_Z$, or a combined bit-and-phase-flip ($Y$) error with probability $p_Y$, and these occur independently. Another is the depolarizing model: with probability $p$ the qubit's state is replaced by the maximally mixed state (equivalently, an $X$, $Y$, or $Z$ error occurs with probability $p/3$ each). For many analyses, it suffices to consider an independent depolarizing model with a single parameter $p$.
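The depolarizing channel acts simply at the density-matrix level; this minimal sketch applies it to the state $|+\rangle\langle+|$ and shows the decay of the off-diagonal (coherence) terms:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def depolarize(rho, p):
    """Depolarizing channel: with probability p/3 each, apply X, Y, or Z."""
    return (1 - p) * rho + (p / 3) * (X @ rho @ X + Y @ rho @ Y + Z @ rho @ Z)

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(plus, plus.conj())
print(np.round(depolarize(rho, 0.1), 3))
# Off-diagonal (coherence) entries shrink toward the maximally mixed state I/2.
```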
Given such a noise model, one defines the notion of a fault-tolerant quantum computing scheme: a way to encode and manipulate qubits so that, as long as the physical error rates are below some threshold $p_{\mathrm{th}}$, an arbitrarily long quantum computation can be performed reliably (with error probability approaching zero, e.g., decreasing exponentially with overhead). This $p_{\mathrm{th}}$ is called the fault-tolerance threshold. The idea is analogous to classical error-correcting codes in noisy-channel communication, except complicated by the fact that quantum errors are continuous (not discrete) and that any attempt to measure errors must avoid collapsing the state.

5.2. Quantum Error-Correcting Codes

Quantum error correction (QEC) is the theory and practice of protecting quantum information from errors by encoding it into a larger Hilbert space in such a way that errors can be detected and corrected without disturbing the encoded information. The first quantum error-correcting code was discovered by Peter Shor (1995) [42]. Shor's code encodes 1 logical qubit into 9 physical qubits. Conceptually, it first uses a three-qubit repetition code to protect against bit-flip errors, and then encodes each of those qubits into a three-qubit repetition code in the Hadamard-transformed basis (to protect against phase-flip errors). The resulting 9-qubit code can correct an arbitrary single-qubit error. Shortly thereafter, Andrew Steane found a more efficient $[[7,1,3]]$ code (using the notation $[[n,k,d]]$ for $n$ physical qubits, $k$ logical qubits, and distance $d$) [43]. Steane's 7-qubit code can likewise correct a single error and is more convenient for certain implementations (in the stabilizer formalism it is constructed from the classical [7,4,3] Hamming code and its dual).
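One layer of Shor's construction, the 3-qubit bit-flip repetition code, can be simulated directly. The sketch below (with arbitrary example amplitudes $a, b$) shows the key point: the stabilizer measurements $Z_0Z_1$ and $Z_1Z_2$ locate the error without revealing the encoded amplitudes:

```python
import numpy as np

I2 = np.eye(2); X = np.array([[0., 1.], [1., 0.]]); Z = np.diag([1., -1.])

def op(single, pos):
    """Single-qubit operator acting at position pos in a 3-qubit register."""
    mats = [single if i == pos else I2 for i in range(3)]
    return np.kron(np.kron(mats[0], mats[1]), mats[2])

# Encode a|0> + b|1>  ->  a|000> + b|111> (the bit-flip repetition code).
a, b = 0.6, 0.8
encoded = np.zeros(8)
encoded[0b000], encoded[0b111] = a, b

corrupted = op(X, 1) @ encoded            # bit-flip error on qubit 1

# Syndrome: expectation values of Z0Z1 and Z1Z2 identify the flipped qubit
# without disturbing the superposition of the codewords.
s1 = corrupted @ (op(Z, 0) @ op(Z, 1)) @ corrupted
s2 = corrupted @ (op(Z, 1) @ op(Z, 2)) @ corrupted
print(s1, s2)                             # (-1, -1) pinpoints an X error on qubit 1

recovered = op(X, 1) @ corrupted          # apply the correction
assert np.allclose(recovered, encoded)
```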
A major development in understanding QEC was the stabilizer formalism, introduced by Daniel Gottesman in 1996-97 [46]. In this framework, many codes can be described using the language of Pauli operators and their invariant subspaces. For example, Shor's 9-qubit code can be described by 8 stabilizer generators (commuting Pauli operators whose common $+1$ eigenspace is the codespace). The theory of stabilizer codes unified and generalized many constructions, allowing systematic exploration of possible codes.
The theory of quantum error correction also yielded general conditions, known as the Knill-Laflamme conditions, that characterize when a subspace of states can serve as a quantum code for a given set of errors. In essence, a code allows error correction if and only if distinct errors map distinct codewords to mutually orthogonal states. Mathematically, for a code space $\mathcal{C}$ and error operators $\{E_a\}$, there must exist a matrix of constants $\alpha_{ab}$ such that $\langle\psi_i| E_a^\dagger E_b |\psi_j\rangle = \alpha_{ab}\,\delta_{ij}$ for all $E_a, E_b$ in the error set and all codeword states $|\psi_i\rangle, |\psi_j\rangle$ (forming an orthonormal basis for $\mathcal{C}$). These conditions [47] ensure that an error $E_a$ applied to one codeword produces a state orthogonal to what any error $E_b$ would produce on a different codeword, so that error syndromes can be measured to identify and correct the errors without collapsing the encoded information.
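These conditions can be checked numerically for a concrete code; the sketch below verifies them for the 3-qubit repetition code of the previous example against the error set $\{I, X_0, X_1, X_2\}$:

```python
import numpy as np

I2 = np.eye(2); X = np.array([[0., 1.], [1., 0.]])

def op(single, pos, n=3):
    mats = [single if i == pos else I2 for i in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

# Codewords of the 3-qubit bit-flip code.
psi0 = np.zeros(8); psi0[0b000] = 1.0   # logical |0>
psi1 = np.zeros(8); psi1[0b111] = 1.0   # logical |1>
codewords = [psi0, psi1]

# Error set: identity plus single-qubit bit flips.
errors = [np.eye(8)] + [op(X, i) for i in range(3)]

# Knill-Laflamme: <psi_i| E_a^dag E_b |psi_j> = alpha_ab * delta_ij.
for Ea in errors:
    for Eb in errors:
        M = np.array([[ci @ Ea.conj().T @ Eb @ cj for cj in codewords]
                      for ci in codewords])
        assert np.isclose(M[0, 1], 0) and np.isclose(M[1, 0], 0)  # off-diagonals vanish
        assert np.isclose(M[0, 0], M[1, 1])       # alpha_ab independent of the codeword
print("Knill-Laflamme conditions hold for single bit-flip errors")
```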
Many QEC codes have been developed: the 5-qubit code (the smallest nontrivial code, with distance 3), CSS codes (built from pairs of classical codes and named for Calderbank, Shor, and Steane), concatenated codes (where a code's qubits are themselves encoded in another code, recursively), and others. Perhaps the most important family is the class of topological codes, particularly surface codes (and the related planar and toric codes). These codes live on a 2D grid of qubits with local stabilizer generators. They have the appealing feature of high tolerable error rates (around 1% per gate in some analyses, which is high for a quantum code) and require only local interactions in a plane. The surface code, which grew out of Kitaev's toric code and was developed as a planar code and analyzed by Bravyi, Kitaev, Dennis, and collaborators around 1998-2002, has become a leading choice for implementations due to its high threshold and geometric locality.
Quantum error correction has been demonstrated experimentally in small systems: for example, using 3 or 5 qubits in NMR and ion traps to correct single-bit flip or phase errors, or in superconducting qubits to detect and suppress certain error syndromes. As of 2023, quantum error correction remains at the “logical qubit” demonstration stage (showing a logical qubit that is longer-lived or more reliable than the components, albeit with heavy overhead), and reaching the break-even point (where an encoded qubit outperforms the best physical qubits) is an active research goal.

5.3. Fault Tolerance and the Threshold Theorem

Error correction by itself is not enough; it must be implemented in a fault-tolerant way. Fault tolerance means that the error-correction procedure itself does not introduce more errors than it corrects, and that a few errors occurring during the process do not cascade into a larger failure. To achieve fault tolerance, one uses techniques like applying gates transversally (where each physical qubit in one code block interacts with at most one qubit in another block, so an error cannot spread within a block), and adding enough redundancy and checks that most errors are detected before they proliferate.
A pivotal result in the theory is the Threshold Theorem, proved in various forms by Shor, by Aharonov and Ben-Or, by Knill, Laflamme, and Zurek, and others in the late 1990s [39]. It states that if the physical error rate per gate/qubit is below some constant threshold $p_{\mathrm{th}}$, then arbitrarily long quantum computations can be performed efficiently using recursive error correction (concatenating codes or using a large enough code and distillation), with overhead that is polylogarithmic in the algorithm's length. Early estimates gave very low thresholds like $10^{-6}$ or $10^{-9}$, but with better codes and error models, the threshold has been estimated to be in the range of $10^{-3}$ to $10^{-2}$ (0.1% to 1%) for topological codes like the surface code. For example, the surface code (a 2D array code) is known to have a threshold around $p_{\mathrm{th}} \approx 0.75\%$ for certain circuit noise models. Concatenated codes (e.g., concatenating the Steane code with itself) had thresholds on the order of $10^{-4}$ or so in some analyses. The surface code's high threshold and locality (nearest-neighbor interactions on a 2D lattice) have made it a leading candidate for many architectures.
Implementing a fault-tolerant universal gate set is nontrivial. Some gates can be done transversally (meaning independent operations on each qubit of a code block), which by construction does not propagate single errors into multi-qubit errors within the same block. For instance, in the Steane code, the $X$, $Z$, and $H$ gates are transversal (apply the gate to each qubit in the block). However, other gates like $T$ (the $\pi/8$ phase gate) are not transversal for CSS stabilizer codes like the surface or Steane code. Indeed, by the Eastin-Knill theorem, no quantum code can have a transversal implementation of a full universal gate set, so something else is needed. One approach is magic state distillation: prepare noisy "magic" states (e.g., $|A\rangle = (|0\rangle + e^{i\pi/4}|1\rangle)/\sqrt{2}$, the resource state associated with the $T$ gate) and purify them using error-correcting codes, then consume these purified states via teleportation to enact the non-Clifford gate on the data. This is resource-intensive but avoids implementing $T$ directly on the data with many-qubit interactions. Other techniques include using different codes that admit certain gates transversally (e.g., CSS$(p,q)$ codes allow transversal $\pi/p$ phase gates, so one can switch to a different code for a particular gate, a technique known as code switching or gauge fixing).
The field of fault-tolerant quantum computing continues to evolve. Techniques like fault-tolerant state injection, flag qubits, adaptive error correction, and autonomous error correction are all being explored to reduce overhead. But the threshold theorem assures us that, in principle, we do not need error rates to be zero—just below a finite threshold. This is fundamentally different from analog classical computing, which cannot operate arbitrarily close to chaos because it lacks such an error threshold. The digital nature of quantum error correction gives a path (albeit a challenging one) to scaling up quantum systems.

6. Advanced Models of Quantum Computation

While the circuit model with qubits and logic gates is the most commonly discussed framework, several alternative models of quantum computation exist, each offering different insights, advantages, or implementation pathways. We discuss a few prominent ones: measurement-based quantum computing, topological quantum computing, adiabatic quantum computing, and continuous-variable quantum computing.

6.1. Measurement-Based Quantum Computing (One-Way QC)

Measurement-based quantum computing (MBQC) is a model where the computation is driven by measurements on an entangled resource state. The most famous MBQC scheme is the one-way quantum computer proposed by Raussendorf and Briegel (2001) [50]. In this model, one first prepares a highly entangled state of many qubits, called a cluster state (typically a 2D lattice of qubits entangled by controlled-phase gates between neighbors). This cluster state serves as a universal resource for computation. The algorithm is then carried out by single-qubit measurements on the cluster state qubits, one by one (or layer by layer). The choice of measurement basis on each qubit, and the adaptive adjustment of later measurement bases depending on earlier outcomes (to correct the effect of the probabilistic outcomes), steers the computation [51]. By the end of the sequence of measurements, the remaining qubits (or a pattern of outcomes) contain the result of the computation.
Remarkably, the cluster state is initially generic and does not depend on the specific computation; all the algorithm’s information is in the sequence of measurements. The cluster state for an n-qubit algorithm might involve significantly more qubits than n (like n times the depth of the circuit, in a sense). Each measured qubit yields classical outcomes (0 or 1) that may determine how future qubits are measured (this is sometimes called feed-forward of classical information).
MBQC is theoretically equivalent in power to the circuit model (it can simulate any quantum circuit with polynomial overhead). Any quantum circuit can be translated into a sequence of one-way measurements on a suitable cluster. Conversely, an MBQC pattern can be converted to a circuit.
One motivation for MBQC is that it separates the entanglement and computation phases. The resource state (cluster) can be created offline (and in some proposals, created by cooling or other methods). Then fast measurements (which are often easier to do with high fidelity than multi-qubit gates in some systems, especially photonics) drive the computation. MBQC is naturally suited to optical quantum computing: Knill, Laflamme, and Milburn (2001) originally proposed a scheme of nondeterministic gates for photonic qubits [1], and later work (Nielsen, 2004; Browne and Rudolph, 2005) realized that building large photonic cluster states and then measuring them is a more feasible route, since we bypass the need for deterministic two-photon gates by preparing entangled resources.
Another advantage is error localization: each measurement yields a classical outcome that can signal when something has gone wrong, and one could in principle reroute or adapt (though error correction in MBQC typically requires overhead similar to the circuit model, just translated into the cluster-state language).
The cluster state itself has a high degree of entanglement: it is a stabilizer state, the unique common $+1$ eigenstate of the generators $K_a = X_a \prod_{b \in \mathcal{N}(a)} Z_b$ (each qubit's $X$ times the $Z$ of all its neighbors). A 2D cluster state on a square lattice is a universal resource; interestingly, a 1D cluster (a simple chain of qubits entangled in a line) is not universal for MBQC, as measurements on it generate only a restricted class of computations that can be simulated efficiently classically.
In summary, MBQC shows that quantum computation can be done by measuring only (no explicit unitary gates during the computation), provided we have a sufficiently entangled initial state. This reinforces the notion that entanglement is a central resource for quantum computing.

6.2. Topological Quantum Computation

Topological quantum computation (TQC) is a model that employs exotic quasiparticles and their topological properties to store and manipulate quantum information in a way that is intrinsically protected from local noise. The basic idea is to use anyons — particles (or quasiparticles in 2D systems) that have non-trivial exchange statistics (neither bosonic nor fermionic, but something more general). In certain systems, exchanging (braiding) two anyons performs a unitary operation on the quantum state of the system that depends only on the topological class of the braiding path, not on the precise path taken. This means that if the anyons are kept far apart and moved around each other, small perturbations or noise that do not cause them to come together or scatter are irrelevant to the stored information.
Kitaev's 2003 paper laid the foundation, showing how a certain model (the toric code model) exhibits anyonic excitations that can serve as qubits and how braiding them can perform gates [52]. Freedman, Kitaev, Larsen, and Wang (2003) formalized the notion of a topological quantum computer using the so-called braid group representations that arise in anyonic systems [53]. In a topological quantum computer, the logical qubits are encoded in groups of anyons (e.g., the presence or absence of a certain total topological charge), and logic gates are carried out by physically exchanging (braiding) anyons around each other. Because these operations depend only on the topology of the braids, local noise (which would cause, say, slight deviations in trajectories) does not easily lead to logical errors: one would have to cause a large deformation equivalent to a different braid, or create/annihilate anyons spontaneously, which in a gapped system is suppressed.
One leading platform for TQC is the use of non-Abelian anyons, particularly those theoretically supported in fractional quantum Hall states (like the Moore-Read state at filling 5/2) or in certain superconducting heterostructures that might host Majorana zero modes (which effectively behave as Ising anyons). Majorana zero modes are half-electron-like quasiparticles that can emerge in topological superconductors; exchanging two Majoranas is predicted to implement a $\pi/2$ phase gate in the degenerate subspace of states (which can encode a qubit) [54]. More complex anyon systems (like Fibonacci anyons) can be universal for quantum computation by braiding alone [54]. The Fibonacci anyon, for example, has braiding rules that can approximate any unitary operation, forming a universal gate set. For Ising anyons (Majoranas), braiding alone yields only the Clifford group, which is not universal, so one must supplement it with additional operations (like measuring certain collective properties, or bringing anyons together for fusion measurement) to achieve universality.
A key aspect is topological protection: The system’s ground state manifold encodes the qubits and is separated by an energy gap from excited states. As long as operations are done slowly (adiabatically) and anyons are well-separated, the system stays in the ground-state manifold (so-called topological sector) and errors that cause leakage out of this manifold are exponentially suppressed by the gap and distance between anyons. This gives an inherent error correction. For example, the toric code can be seen as a topological code where anyons are error syndromes, and braiding them corresponds to logical operations.
In practice, the search for suitable anyonic systems is ongoing. The fractional quantum Hall state at filling 5/2 is believed to support non-Abelian anyons (possibly Ising anyons). Nanowire experiments have reported signatures consistent with Majorana zero modes, though a prominent 2018 claim (Zhang et al.) was later retracted, and conclusive braiding tests remain outstanding. Microsoft and partner university labs (e.g., UW, TU Delft) have worked to engineer Majorana zero modes in superconductor-semiconductor devices, but as of 2025, universal topological quantum computing has not been realized. It remains a promising approach because of its potential for extremely low logical error rates, which would drastically reduce error-correction overhead.
Even if fully topological qubits are not yet realized, ideas from topological quantum error correction are very influential, most notably the surface code, a topological stabilizer code (the toric code on a planar lattice, with boundaries). In that code, “anyons” are really stabilizer syndrome defects (an isolated $-1$ eigenvalue of a plaquette stabilizer acts like an $e$ anyon, and so on), and moving them by applying gates is akin to braiding. One can implement logical gates by creating pairs of defects and moving them around one another (sometimes called “code deformation”, or lattice surgery in surface codes). Thus, even in conventional quantum computing architectures, we exploit topology in our error correction schemes.
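As a minimal illustration (a toy check on one plaquette, not a surface-code simulation), the following sketch shows how a single bit-flip error anticommutes with a four-qubit plaquette stabilizer and flips its eigenvalue, creating the localized $-1$ syndrome that plays the role of an anyon:

```python
import numpy as np
from functools import reduce

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def op(paulis):
    """Tensor product of single-qubit operators."""
    return reduce(np.kron, paulis)

plaquette = op([Z, Z, Z, Z])        # B_p = ZZZZ on the four edge qubits
error = op([X, I2, I2, I2])         # bit flip on one edge

psi = np.zeros(16, dtype=complex)
psi[0] = 1.0                        # |0000>, a +1 eigenstate of B_p
assert np.allclose(plaquette @ psi, psi)           # syndrome +1: no defect

psi_err = error @ psi               # apply the error
assert np.allclose(plaquette @ psi_err, -psi_err)  # syndrome -1: a "defect"
```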
Topological QC stands out because, if achieved, much of the error suppression is built into the physics. Some error correction would still be needed for degrees of freedom that are not topologically protected, and to handle accidental anyon creation, but the approach could offer a more scalable path.

6.3. Adiabatic Quantum Computing and Quantum Annealing

Adiabatic quantum computing (AQC) is a model where we encode the solution of a problem in the ground state of a Hamiltonian and then use the adiabatic theorem to reach that ground state from an initial easy-to-prepare state. One slowly changes the Hamiltonian from a simple initial form (whose ground state is known) to a final form (whose ground state encodes the answer). If the evolution is slow compared to the inverse of the minimum energy gap between the ground state and first excited state, the adiabatic theorem guarantees the system will stay in the ground state, thus arriving at the solution.
Under ideal conditions, adiabatic quantum computing is as powerful as gate-model computing [56]. AQC is particularly well-suited to optimization problems: the final Hamiltonian usually takes the form of a problem Hamiltonian whose ground state encodes the solution to, say, an NP-hard optimization problem (e.g., an Ising spin glass whose minimum-energy configuration corresponds to the optimal solution). Quantum annealing is the practical implementation of this idea: one starts in a superposition of all candidate solutions (the ground state of a transverse-field Hamiltonian) and slowly turns on the problem’s cost function (Ising couplings, etc.) while turning off the transverse field.
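The sketch below illustrates the idea numerically under assumptions chosen purely for illustration: a three-spin antiferromagnetic Ising chain as the problem Hamiltonian, a linear anneal schedule, and an arbitrarily picked anneal time. It integrates the Schrödinger equation along the interpolation $H(s) = (1-s)H_0 + sH_1$ and reports the final population in the ground space of $H_1$:

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def on(site, pauli, n):
    """`pauli` acting on qubit `site` of an n-qubit register."""
    return reduce(np.kron, [pauli if i == site else I2 for i in range(n)])

n = 3
H0 = -sum(on(i, X, n) for i in range(n))         # transverse-field driver
H1 = sum(on(i, Z, n) @ on(i + 1, Z, n)           # antiferromagnetic chain;
         for i in range(n - 1))                  # ground space: |010>, |101>

T, steps = 50.0, 2000                            # anneal time, discretization
dt = T / steps
psi = np.ones(2**n, dtype=complex) / np.sqrt(2**n)   # ground state of H0

for k in range(steps):
    s = (k + 0.5) / steps                        # linear schedule in [0, 1]
    psi = expm(-1j * ((1 - s) * H0 + s * H1) * dt) @ psi

vals, vecs = np.linalg.eigh(H1)
ground = vecs[:, np.isclose(vals, vals.min())]   # ground space of H1
print("final ground-space population:",
      float(np.sum(np.abs(ground.conj().T @ psi) ** 2)))
```

Slowing the sweep (larger T) pushes the ground-space population toward one, while a fast sweep leaves population stranded in excited states; that dependence is the adiabatic theorem in action.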
A challenge in AQC is that the minimum gap can be extremely small, making the required runtime (the adiabatic evolution time) extremely long, possibly exponential. For unstructured search, the best adiabatic algorithms match Grover’s $\sqrt{N}$ scaling, which is still exponential in the input size $n = \log_2 N$. For some structured problems AQC may do better, since quantum tunneling can sometimes escape local minima faster than classical simulated annealing, but rigorous speedup results are scarce.
Quantum annealing machines, such as those made by D-Wave Systems, implement a limited form of AQC (with some open-system dynamics as well) aimed at solving optimization problems. These machines have demonstrated the ability to find optimal or near-optimal solutions for certain instances faster than classical heuristics, though it’s not clear if they provide a general asymptotic speedup. Notably, many classical algorithms can mimic quantum annealing via simulated annealing or other heuristics, and definitively proving a quantum speedup in annealing is challenging. Nevertheless, quantum annealers have scaled to thousands of qubits (albeit noisy, analog qubits) and are used as specialized accelerators for certain optimization tasks.
One key theoretical result is that AQC can be efficiently simulated by standard quantum circuits: any adiabatic algorithm can be converted (with polynomial overhead) into a gate-based algorithm. Conversely, any circuit algorithm can be mapped to an adiabatic procedure. Thus, AQC and circuit models are polynomially equivalent in computational power (this was proven by Aharonov et al., 2008 [56]). In terms of real-world implementation, AQC is more analog in nature and doesn’t require coherent error-corrected operations at the gate level—though one could also perform AQC in a fault-tolerant manner by digitizing it.
Quantum annealing, being analog, can sometimes exploit physics beyond the adiabatic theorem, like thermal relaxation or diabatic transitions, to find good solutions—this makes it hard to rigorously compare to classical algorithms, but also means it’s not a pure gate-model quantum computer. It remains an active area of research to determine for which tasks quantum annealing provides a significant advantage.

6.4. Continuous-Variable Quantum Computing

All the models discussed so far use two-level systems (qubits) or discrete collections of anyons, etc. Continuous-variable (CV) quantum computing uses quantum systems with continuous spectra, such as modes of the electromagnetic field (quantum harmonic oscillators). Instead of qubits, one works with quantum modes that can be described by variables like position and momentum (or, in optics, the quadrature amplitudes x and p of a field). These are infinite-dimensional systems, though in practice often truncated or approximate.
A typical basis for a CV mode is the Fock basis $\{|0\rangle, |1\rangle, |2\rangle, \ldots\}$ (photon number states), which is infinite-dimensional. Quantum gates in CV can be realized as operations like phase-space displacements, squeezers (which reshape the uncertainty distribution, e.g., reducing the variance in $x$ while increasing it in $p$), and beamsplitters or non-linear interactions in optical systems. Some operations, like the Fourier transform, are naturally realized as a quarter-period free evolution (mapping $x$ to $p$ and vice versa). In theory, a set of Gaussian operations (linear optics and squeezing) plus one non-Gaussian element (e.g., a Kerr interaction or cubic phase gate) is universal for CV quantum computing.
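The Gaussian part of this toolbox is easy to sketch numerically, because Gaussian operations act on the covariance matrix of the quadratures by symplectic congruence, $V \to S V S^{T}$. The toy example below (conventions chosen here: $\hbar = 1$, vacuum covariance $\tfrac{1}{2}I$) applies a squeezer and the quarter-period Fourier rotation and checks the expected behavior:

```python
import numpy as np

hbar = 1.0
V_vac = 0.5 * hbar * np.eye(2)      # vacuum covariance of quadratures (x, p)

def squeezer(r):
    """Single-mode squeezing: x -> exp(-r) x, p -> exp(r) p."""
    return np.diag([np.exp(-r), np.exp(r)])

def rotation(theta):
    """Phase-space rotation; theta = pi/2 is the Fourier transform x <-> p."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

S = squeezer(1.0)
V_sq = S @ V_vac @ S.T              # squeezed vacuum
print("variances (x, p):", np.diag(V_sq).round(4))   # x shrinks, p grows

# Symplectic form is preserved, so the map is a valid Gaussian operation:
Omega = np.array([[0.0, 1.0], [-1.0, 0.0]])
assert np.allclose(S @ Omega @ S.T, Omega)
# Minimum uncertainty is maintained: det V = (hbar/2)^2
assert np.isclose(np.linalg.det(V_sq), (hbar / 2) ** 2)
# The Fourier rotation swaps the roles of x and p:
F = rotation(np.pi / 2)
assert np.allclose(np.diag(F @ V_sq @ F.T), np.diag(V_sq)[::-1])
```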
CV approaches are especially relevant for quantum communication (QKD can be done with coherent states and homodyne detection) and for certain implementation platforms (such as optical or microwave photons, or vibrational modes). They also allow deterministic generation of large entangled states (e.g., using optical parametric oscillators to create cluster states of thousands of modes). However, CV quantum computing often operates with approximate continuous variables, because a perfect continuum and infinite energy are not physically realizable; one truncates to a large Fock space or uses finite squeezing.
The CV model also connects to error correction: one example is the Gottesman-Kitaev-Preskill (GKP) code [61], which encodes a qubit into an oscillator’s continuous phase space using superpositions of periodically spaced position eigenstates (a grid in phase space whose points represent the discrete logical states). This yields a discrete subspace within a CV system that is resilient to small shifts in $x$ or $p$.
In summary, continuous-variable quantum computing extends the palette of quantum information to systems beyond qubits, offering potentially different tools (like optical frequency combs, etc.) for quantum processing. Whether CV systems will be the main way we build quantum computers is unclear, but they provide a useful link between quantum computing and quantum optics/communication, and may facilitate hybrid architectures (e.g., superconducting qubits coupled to CV microwave resonators, or using CV cluster states as interconnects).

7. Quantum Information Theory Aspects

Quantum information theory underpins much of quantum computing by providing a framework for understanding information in quantum terms. We will touch on a few key concepts: entropy and information capacity, no-cloning and quantum cryptography, and entanglement in communication tasks.
In classical information theory, the Shannon entropy $H(X) = -\sum_x p(x) \log_2 p(x)$ quantifies the uncertainty or information content of a random variable $X$. In the quantum setting, the analogous quantity is the von Neumann entropy $S(\rho) = -\mathrm{tr}(\rho \log_2 \rho)$ of a density matrix $\rho$. For a pure state $\rho = |\psi\rangle\langle\psi|$, $S(\rho) = 0$ (complete knowledge, no uncertainty); mixed states have $S > 0$. Entropy can be used to define other measures, such as quantum mutual information, in analogy with the classical definitions.
A fundamental result by Holevo (1973) is the Holevo bound, which limits how much classical information can be encoded into quantum states and later decoded by measurements [62]. Even though a quantum state (such as a superposition over many basis states) might seem to hold a great deal of information, the Holevo bound implies that $n$ qubits can carry at most $n$ classical bits to a receiver. More precisely, for a quantum ensemble $\{p(x), \rho_x\}$, the accessible information is bounded by the Holevo quantity $\chi = S(\bar\rho) - \sum_x p(x) S(\rho_x)$, where $\bar\rho = \sum_x p(x)\rho_x$; for states of $n$ qubits, $\chi \le n$. Thus quantum encoding does not provide unlimited classical capacity: absent extra resources such as pre-shared entanglement, a qubit can carry no more than one classical bit.
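Both quantities are easy to evaluate numerically. The sketch below (a toy example; the BB84-style ensemble of four equiprobable pure states is chosen only for illustration) computes the von Neumann entropy from eigenvalues and the Holevo quantity $\chi$, confirming that one qubit can convey at most one classical bit:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -tr(rho log2 rho), computed from the eigenvalues."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]                 # convention: 0 log 0 = 0
    return float(-np.sum(p * np.log2(p)))

def holevo_chi(probs, states):
    """chi = S(sum_x p_x rho_x) - sum_x p_x S(rho_x)."""
    rho_avg = sum(p * r for p, r in zip(probs, states))
    return von_neumann_entropy(rho_avg) - sum(
        p * von_neumann_entropy(r) for p, r in zip(probs, states))

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
ketp = (ket0 + ket1) / np.sqrt(2)
ketm = (ket0 - ket1) / np.sqrt(2)
states = [np.outer(k, k.conj()) for k in (ket0, ket1, ketp, ketm)]
probs = [0.25] * 4

print("Holevo chi =", holevo_chi(probs, states))  # 1.0: at most 1 bit/qubit
```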
Another key concept is entanglement’s role in communication. Entanglement by itself cannot send messages (no signaling), but combined with classical communication, it enables protocols that are otherwise impossible. Two prime examples are quantum teleportation and superdense coding.
Teleportation and superdense coding are two hallmark protocols that illustrate these ideas:
- Quantum teleportation (Bennett et al., 1993) allows the transfer of an unknown qubit state from Alice to Bob using a pre-shared maximally entangled qubit pair and two classical bits of communication [68]. Alice performs a joint measurement (a Bell measurement) on the qubit to be sent and her half of the entangled pair, which yields two classical bits. She sends those bits to Bob, who then performs one of four possible unitary operations on his half of the entangled pair, leaving it in the state of the original qubit. The original state disappears from Alice’s side (destroyed by the measurement) and appears on Bob’s side; hence, “teleportation.” Teleportation thus trades entanglement plus classical communication for a quantum channel: entanglement plus two classical bits suffices to send a qubit.
- Superdense coding (Bennett & Wiesner, 1992) is the reverse: using an entangled pair, Alice can send two classical bits to Bob by sending only one qubit [67]. If Alice and Bob share a Bell pair and Alice applies one of four unitary operations to her qubit (mapping the pair to one of four orthogonal Bell states), then sends her qubit to Bob, Bob can perform a Bell measurement on the two qubits and recover two classical bits. Thus, entanglement plus one qubit transmits two bits, doubling the classical capacity of a qubit channel.
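Superdense coding is compact enough to verify directly. The sketch below (illustrative names; qubit A is the first tensor factor) applies Alice’s four Pauli encodings to her half of $|\Phi^+\rangle$ and checks that the resulting two-qubit states are mutually orthogonal, which is exactly what allows Bob’s Bell measurement to recover the two bits with certainty:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Shared Bell pair |Phi+> = (|00> + |11>)/sqrt(2); Alice holds qubit A.
phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

# Alice encodes two classical bits with I, X, Z, or XZ on her qubit.
encodings = {(0, 0): I2, (0, 1): X, (1, 0): Z, (1, 1): X @ Z}
sent = {bits: np.kron(U, I2) @ phi_plus for bits, U in encodings.items()}

# The four results are the four mutually orthogonal Bell states.
msgs = list(sent)
for i, a in enumerate(msgs):
    for b in msgs[i + 1:]:
        assert abs(np.vdot(sent[a], sent[b])) < 1e-12
print("four orthogonal Bell states -> 2 bits per transmitted qubit")
```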
Quantum cryptography is another information-theoretic domain: the BB84 protocol (Bennett & Brassard, 1984) showed that two parties can establish a secret key with security guaranteed by quantum mechanics [66]. In BB84, Alice sends qubits prepared in one of two bases (X or Z, randomly chosen) and Bob measures in randomly chosen bases. After the transmission, they publicly compare which bases they used and keep only the bits where the bases agreed. Because of the no-cloning theorem and measurement disturbance, any eavesdropper’s attempt to intercept or measure the qubits introduces errors that Alice and Bob can detect by sacrificing some bits for testing. If the error rate is low enough, they can distill a secure key. This was the first quantum key distribution (QKD) protocol, and it has since been demonstrated in many settings (fiber, free-space). QKD is unique in that its security rests not on computational assumptions but on physical principles: it withstands any attack allowed by quantum mechanics.
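The error signature that an intercept-resend eavesdropper leaves in BB84 can be reproduced with a short classical simulation (a toy model under the usual idealizations: perfect source, channel, and detectors, so only the attacker introduces errors; the 25% figure in the final comment is the textbook expectation):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000                            # raw qubits sent

alice_bits  = rng.integers(0, 2, n)
alice_bases = rng.integers(0, 2, n)    # 0 = Z basis, 1 = X basis
bob_bases   = rng.integers(0, 2, n)

# Intercept-resend attack: Eve measures each qubit in a random basis
# and resends it. A wrong-basis measurement randomizes the bit.
eve_bases = rng.integers(0, 2, n)
eve_bits = np.where(eve_bases == alice_bases,
                    alice_bits, rng.integers(0, 2, n))

# Bob obtains Eve's bit when his basis matches hers, else a random outcome.
bob_bits = np.where(bob_bases == eve_bases,
                    eve_bits, rng.integers(0, 2, n))

# Sifting: keep only rounds where Alice's and Bob's bases agree.
sift = alice_bases == bob_bases
qber = float(np.mean(alice_bits[sift] != bob_bits[sift]))
print(f"sifted bits: {int(sift.sum())}, QBER: {qber:.3f}")
# ~0.25 with the attacker present; ~0 without -- the errors expose Eve.
```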
Other tasks in quantum information include entanglement distillation (extracting nearly pure entanglement from noisy correlations using local operations and classical communication), quantum data compression (asymptotically compressing quantum states if one has many copies of a source, analogous to classical compression—this is governed by the von Neumann entropy), and quantum channel capacities (how many qubits or classical bits can be sent through a noisy quantum channel—topics like coherent information and the quantum channel capacity theorems).
Overall, the theoretical underpinnings provided by quantum information theory inform what is possible or impossible with quantum computers and quantum communication. They set limits like the Holevo bound, offer new resources like entanglement that have no classical counterpart, and guide error correction and cryptographic protocol design.

8. Physical Implementations of Quantum Computers

The theoretical models of quantum computing must ultimately be realized in hardware to perform actual computations. In this section, we provide an overview of the leading physical platforms for quantum computation, each of which has distinct strengths and challenges. We cover superconducting circuits, trapped ions, photonic systems, solid-state spin qubits, and briefly mention others like defects in solids and neutral atoms.

8.1. Superconducting Circuits

Superconducting circuits are one of the most advanced platforms for quantum computing as of 2025 [72]. In these systems, information is stored in quantized energy levels of a superconducting element (such as a Josephson junction circuit) that behaves like an artificial atom (a qubit). Superconducting qubits are manipulated with microwave pulses and measured via coupled resonators. They benefit from leveraging modern integrated circuit fabrication techniques, allowing for rapid progress in scaling.
Progress in this field has been rapid: coherence times have improved from nanoseconds two decades ago to hundreds of microseconds in the best devices today, thanks to materials improvements and better designs [72]. Single-qubit gate fidelities above 99.9% and two-qubit fidelities around 99% have been achieved in certain systems. Perhaps most famously, a 53-qubit superconducting processor (Google’s “Sycamore”) was used in 2019 to perform a random-circuit sampling task claimed to be beyond the reach of classical supercomputers, the first claimed demonstration of quantum supremacy [78].
Scaling superconducting qubits further encounters challenges: each qubit requires control lines (microwave drive, flux bias lines, etc.), and readout resonators, which for tens of qubits is manageable but for millions becomes a complex wiring problem. Cross-talk and signal delivery at millikelvin temperatures are engineering issues. Moreover, maintaining coherence as devices grow is harder (crosstalk, interference, and inhomogeneities can introduce noise).
Nevertheless, companies like IBM, Google, Intel, and startups have roadmaps for superconducting qubit systems scaling to hundreds or thousands of qubits, with error correction. These qubits operate at ∼10-20 mK (requiring dilution refrigerators). They have the advantage of very fast gate times (nanosecond-scale pulses) and easy integration with classical electronics (semi-monolithic processes), but disadvantages include the cryogenics and relatively short coherence (microseconds vs. seconds in other systems).
Superconducting circuits also allow implementing tunable couplers and bosonic modes (as qubits encoded in oscillators) which have been used in bosonic codes. The maturity of fabrication and control in this platform currently makes it a front-runner for the first error-corrected quantum computer.

8.2. Trapped Ions

Trapped ion quantum computers use individual ions (charged atoms) confined in electromagnetic traps (typically Paul traps with RF fields) as qubits. The qubit is usually an internal electronic state of the ion (two hyperfine levels separated by a microwave frequency, or a ground and metastable state separated by an optical transition). Ions are held in place by the trap and can be manipulated with laser beams (or with magnetic-resonance techniques for hyperfine qubits). Entangling gates use the collective motional modes of the ions as a bus: laser pulses couple one ion’s internal state to the motion and then to another ion’s state (the Mølmer-Sørensen or Cirac-Zoller gate).
Trapped ions have achieved record gate fidelities and are among the best qubits in terms of coherence: $T_2$ coherence times of hours have been observed for hyperfine qubits (limited mainly by magnetic-field stability). Single-qubit gate fidelities can exceed 99.99%, and two-qubit gates have reached 99.9% in some experiments. Readout is high-fidelity via state-dependent fluorescence (bright/dark detection typically yields readout fidelities above 99%).
A big strength is all-to-all connectivity in a single trap: each ion can in principle interact with any other via shared motion. This simplifies implementing algorithms and error-correcting codes that benefit from connectivity. For example, small error-correcting codes (Shor’s 9-qubit code or Steane’s 7-qubit code) have been demonstrated in ion traps, as well as multi-qubit GHZ states up to 14 ions entangled.
The main limitation is speed: two-qubit gate times range from tens to hundreds of microseconds, and with many ions in one trap, gates become slower and the motional-mode spectrum becomes congested. Scaling beyond roughly 100 ions in a single trap is also difficult, because keeping all modes under control and addressing ions individually becomes hard. To scale, modular ion-trap architectures are pursued: multiple traps (or trap zones) hold subsets of ions, and ions are shuttled between zones to interact, or photonic interfaces entangle ions in different traps (via interference and detection of photons emitted by each ion).
Companies like IonQ and Quantinuum (formerly Honeywell) are building ion-trap quantum processors. IonQ has reported devices with 11 algorithmic qubits (e.g., 23 physical ions, some used for error mitigation) and has a roadmap for 32 and beyond. Quantinuum has demonstrated a 20-qubit fully connected device. Ion shuttling has been demonstrated in QCCD (quantum charge-coupled device) architectures (by Honeywell/Quantinuum, moving ions through X-junctions). These offer a path to scaling by adding more trapping zones and moving qubits around rather than wiring every interaction.
Trapped ions are also used to simulate physics problems, thanks to tunable long-range spin-spin interactions induced by laser drives (a form of analog quantum simulation). They stand as a leading platform, especially for near-term quantum computers where qubit quality matters more than quantity. The trade-off relative to superconductors: ions have superb qubit quality but slower operations that are harder to parallelize (multiple gate lasers can interfere, and only one or two motional modes can be used at a time), whereas superconductors have fast parallel operations but more errors and shorter coherence.

8.3. Photonic Quantum Computing

Photons (particles of light) are natural information carriers for quantum communication and can also be used for computing. Photonic quantum computing typically uses the polarization or path of a photon as a qubit. Photons have the advantage of minimal decoherence (they barely interact with the environment in low-loss fibers or optical circuits), but the disadvantage that photons do not naturally interact with each other, which is needed for two-qubit gates.
One approach is linear optical quantum computing (LOQC), which uses beam splitters and phase shifters (linear optics) plus single-photon sources, detectors, and off-line prepared entangled states to induce an effective non-linearity. The famous KLM scheme (Knill, Laflamme, Milburn 2001) showed that, given single photons, beam splitters, and photon detectors, one can probabilistically implement a two-qubit gate, and with enough attempts and ancillary photons this can be made near-deterministic, achieving universal QC. The required overhead is large, though.
Another approach is one-way, measurement-based photonic computing, as mentioned earlier: generate a large cluster state of photons (for example, using parametric down-conversion sources that create entangled pairs, fused together probabilistically into a 2D lattice), then perform sequential measurements to enact the computation. Reliably creating large entangled states from probabilistic gates is challenging, but much progress has been made; a notable experiment created a 6x6 2D cluster of 18 photons with roughly 85% fidelity.
Photonic systems are ideal for quantum communication (QKD is deployed worldwide) and distributed computing (connecting distant quantum modules). For computing, a big plus is room-temperature operation and easy transport; the minuses are the need for ultra-efficient sources and detectors to scale. Integrated photonics (building photonic circuits on chips) is an active area, aiming for stable on-chip interferometers rather than bulk optics, which drift.
In terms of milestones: small-scale logic (e.g., four photons running a simple algorithm or demonstrating an error-detection code) has been shown. Boson sampling experiments with 3-9 photons have been performed (small scale, still classically simulable). In 2020, a Gaussian boson sampling experiment in China with 50-100 photons claimed a quantum computational advantage over classical simulation, analogous to Google’s random-circuit advantage but in photonics. However, boson sampling is a restricted task, not general computation.
Despite challenges, photonics might play a role integrated with matter qubits or as a communication bus. Some proposals use quantum memory elements along with photons to have deterministic interactions (e.g., using quantum dots as photon emitters that also interact, or Rydberg atomic ensembles interacting via photons).

8.4. Spin Qubits in Semiconductors

Spin qubits use the spin of electrons or nuclei as qubits. A prominent direction is quantum dots in silicon or GaAs: electrons confined in nanostructured semiconductor potentials, where the spin-up/spin-down of an electron (or a specific singlet/triplet state of two electrons) forms a qubit. These are manipulated by microwave EPR pulses or electric gates via spin-orbit coupling, and entanglement is typically done by exchange interaction (bringing two dots close or pulsing an exchange gate so that the spins swap or become correlated). It’s essentially like a miniaturized version of an NMR quantum computer but on single electrons.
Si spin qubits have seen great progress: devices with 2-3 qubits have shown 99% fidelity on single-qubit gates and ∼90-95% on two-qubit gates (in isotopically purified silicon, a nearly nuclear-spin-free environment that reduces decoherence). Coherence times can be long with isotopic purification and dynamical decoupling ($T_2$ up to seconds in donors, or milliseconds in quantum-dot spins). The main challenge is connecting many spins with high-fidelity gating: cross-talk and dot-to-dot variability are issues, and operation requires very low temperatures (∼20 mK). The advantage is the potential to leverage CMOS fabrication to scale to many qubits on a chip; some foresee a path to millions of spin qubits integrated with classical control on chip.
Donor spins (such as a phosphorus nuclear spin in silicon) have also been used as very coherent qubits (long memories, with two-qubit gates via exchange if two donors are close). In 2019, a two-qubit gate between two P donors in Si was demonstrated (∼90% fidelity). This platform is still at an early stage compared to superconductors and ions.

8.5. Other Platforms

There are numerous other quantum hardware approaches:
- Nitrogen-vacancy (NV) centers in diamond: an NV center is a point defect in diamond (a substitutional nitrogen next to a vacancy) whose electron spin qubit can be manipulated and measured optically at room temperature. NV centers have coherence times up to milliseconds (with decoupling) at room temperature, and their spin can be entangled with nearby ¹³C nuclear spins or with another NV’s spin via photonic links. NVs and similar defects (such as the silicon vacancy or divacancy in SiC) are excellent testbeds for small quantum networks and sensors, but scaling a large quantum computer out of NV centers is uncertain: they are randomly located, and connecting them in 2D is hard, though one can imagine fabricating arrays of them or using ion implantation for placement.
- Neutral atoms in optical traps: arrays of neutral atoms (such as Rydberg atom arrays) have emerged recently. Atoms (such as rubidium or cesium) are cooled and trapped in a 2D (or 3D) array of optical tweezers, then excited to Rydberg states to enact entangling gates via strong dipole-dipole interactions. Companies like Pasqal and QuEra have built 100-300 atom systems, mainly for analog quantum simulation but also running digital gates. Rydberg gates can be reasonably fast (a few µs) with best reported fidelities around 97-98% (still behind ions or superconductors), but having hundreds of atoms in flexible geometries is a plus. The challenge is crosstalk and precisely controlling many laser beams.
- Quantum dots with photons (quantum optics with quantum dot emitters): for example, using a quantum dot in a cavity to mediate photon-photon interactions or to generate photonic cluster states on the fly. There has been success in generating strings of entangled photons (multi-photon cluster states from a single quantum dot).
Each platform has its trade-offs between speed, coherence, connectivity, and ease of scaling. It’s not yet clear which will win out; it might also be that hybrid approaches (e.g., superconducting qubits for processing and optical links for long-distance communication) combine strengths.

9. Conclusions

Quantum computing has evolved from a theoretical curiosity to an experimental reality over the past few decades. We now have a solid theoretical framework that spans multiple models—circuits, measurement-based, topological, adiabatic—each enriching our understanding of quantum computation’s capabilities and limitations. On the theoretical side, we have identified quantum algorithms that offer dramatic speedups for specific problems (factoring, unstructured search, quantum simulation), and we have mapped out a landscape of complexity classes that sharpen the question of what can or cannot be efficiently computed quantumly. The discovery of quantum error correction and fault tolerance has provided a blueprint for scaling up, ensuring that quantum computers can in principle perform arbitrarily long computations reliably, despite physical noise. This is a strong reassurance that no known physics principle forbids large-scale quantum computation; the remaining obstacles are primarily engineering challenges.
From the perspective of physical realization, there has been tremendous progress. Different hardware platforms—superconducting circuits, trapped ions, photonics, spin qubits in solids, among others—have achieved impressive milestones (high-fidelity gates, tens of qubits entangled, etc.). Quantum supremacy experiments have shown that quantum devices can surpass classical supercomputers on contrived tasks. However, we are still far from a fault-tolerant universal quantum computer. Each platform has limitations to overcome: superconductors need error rates lowered further or error correction added, ions need scaling and faster operation, photonics needs efficient sources and deterministic gates, etc.
If and when large-scale quantum computers become reality, the impact will be multifaceted. Certain cryptographic protocols will need to be upgraded (post-quantum cryptography to replace RSA and ECC). We will have new tools to simulate complex quantum systems, potentially designing new materials, catalysts, or drugs with atomic precision. Optimization problems and machine learning might see new heuristics inspired by quantum algorithms. Moreover, the very process of building a quantum computer will deepen our understanding of quantum mechanics and perhaps lead to new discoveries in condensed matter (as we learn to control decoherence) and other fields. In a sense, the effort to build quantum computers is also an exploration of the quantum world at an unprecedented level of control.
Finally, beyond utilitarian benefits, quantum computing represents a profound shift in how we compute, driven by the fundamental rules of reality rather than our familiar classical approximation. It has already taught us about the nature of information, entanglement, and complexity. As this technology progresses, it stands to not only revolutionize certain industries but also sharpen our philosophical understanding of computation and information in the universe.

References

  1. Knill, E., Laflamme, R., & Milburn, G. J. (2001). "A scheme for efficient quantum computation with linear optics." Nature, 409(6816), 46-52. [CrossRef]
  2. Dirac, P. A. M. (1930). The Principles of Quantum Mechanics. Oxford University Press.
  3. von Neumann, John (1932). Mathematische Grundlagen der Quantenmechanik [Mathematical Foundations of Quantum Mechanics]. Berlin: Springer. (English translation by R. T. Beyer, Princeton Univ. Press, 1955).
  4. Einstein, Albert; Podolsky, Boris; Rosen, Nathan (1935). “Can quantum-mechanical description of physical reality be considered complete?” Physical Review, 47(10): 777–780.
  5. Schrödinger, Erwin (1935). “Discussion of probability relations between separated systems.” Proceedings of the Cambridge Philosophical Society, 31(4): 555–563. [CrossRef]
  6. Shannon, Claude E. (1948). “A mathematical theory of communication.” Bell System Technical Journal, 27(3): 379–423; 27(4): 623–656.
  7. Landauer, Rolf (1961). “Irreversibility and heat generation in the computing process.” IBM Journal of Research and Development, 5(3): 183–191. [CrossRef]
  8. Bell, J. S. (1964). “On the Einstein Podolsky Rosen paradox.” Physics Physique Fizika, 1(3): 195–200.
  9. Bennett, Charles H. (1973). “Logical reversibility of computation.” IBM Journal of Research and Development, 17(6): 525–532.
  10. Fredkin, Edward; Toffoli, Tommaso (1982). “Conservative logic.” International Journal of Theoretical Physics, 21(3–4): 219–253.
  11. Benioff, Paul (1980). “The computer as a physical system: A microscopic quantum mechanical Hamiltonian model of computers as represented by Turing machines.” Journal of Statistical Physics, 22(5): 563–591. [CrossRef]
  12. Feynman, Richard P. (1982). “Simulating physics with computers.” International Journal of Theoretical Physics, 21(6–7): 467–488.
  13. Feynman, Richard P. (1986). “Quantum mechanical computers.” Foundations of Physics, 16(6): 507–531.
  14. Deutsch, David (1985). “Quantum theory, the Church–Turing principle and the universal quantum computer.” Proceedings of the Royal Society of London A, 400(1818): 97–117.
  15. Deutsch, David; Jozsa, Richard (1992). “Rapid solution of problems by quantum computation.” Proceedings of the Royal Society of London A, 439(1907): 553–558. [CrossRef]
  16. Bernstein, Ethan; Vazirani, Umesh (1993). “Quantum complexity theory.” In Proceedings of the 25th Annual ACM Symposium on Theory of Computing (STOC ’93), pp. 11–20. ACM Press.
  17. Yao, A. Chi-Chih (1993). “Quantum circuit complexity.” In Proceedings of the 34th Annual Symposium on Foundations of Computer Science (FOCS ’93), pp. 352–361. IEEE.
  18. A. Nasser. A Comprehensive Mathematical Review: Linear Algebra, Schrödinger Equation, and Quantum Field Theory. ResearchGate Preprint (2025). [CrossRef]
  19. A. Nasser. The Role of Quantum Tunneling in Drug-Receptor Interactions. ResearchGate Preprint (2025). [CrossRef]
  20. A. Nasser. Quantum-Resolved Molecular Spectroscopy: Bridging Wave Mechanics, Spectral Signatures, and Predictive Biomolecular Analysis. ResearchGate Preprint (2025). [CrossRef]
  21. A. Nasser. The Merging of Quantum Physics: An Interdisciplinary Scientific Review. ResearchGate Preprint (2025). [CrossRef]
  22. A. Nasser. Black Holes: A Comprehensive Review. ResearchGate Preprint (2025). [CrossRef]
  23. A. Nasser. Quantum Experiments That Changed Interpretation: A Historical and Theoretical Review. Preprints (May 2025). [CrossRef]
  24. A. Nasser. Quantum Causality and Retrocausation: Revisiting Temporal Order in Quantum Physics. ResearchGate Preprint (May 2025). [CrossRef]
  25. A. Nasser. The Quantum Measurement Problem: Foundations, Interpretations, and Recent Developments. Preprints (April 2025). [CrossRef]
  26. A. Nasser. Quantum Wavefront Dynamics (QWD): A Deterministic Framework for the Wave-to-Particle Transition. ResearchGate Preprint (April 2025). [CrossRef]
  27. A. Nasser. Quantum Gravity: Detailed Frameworks, Mathematical Derivations, and Comparative Perspectives. Preprint (July 2025). https://doi.org/10.13140/RG.2.2.32978.47042. [CrossRef]
  28. Nasser, Adam D. (2025). “A Decade of Advances in Lung Cancer: Progress, Artificial Intelligence, and Global Collaboration.” ResearchGate Preprint. [CrossRef]
  29. Nasser, Adam D. (2025). “Black Hole Experiments: Historical, Mathematical, and Theoretical Frameworks Review.” ResearchGate Preprint. [CrossRef]
  30. Simon, Daniel R. (1994). “On the power of quantum computation.” In Proceedings of the 35th Annual Symposium on Foundations of Computer Science (FOCS ’94), pp. 116–123. IEEE.
  31. Shor, Peter W. (1994). “Algorithms for quantum computation: discrete logarithms and factoring.” In Proceedings of the 35th Annual Symposium on Foundations of Computer Science (FOCS ’94), pp. 124–134. IEEE.
  32. Shor, Peter W. (1997). “Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer.” SIAM Journal on Computing, 26(5): 1484–1509. [CrossRef]
  33. Grover, Lov K. (1996). “A fast quantum mechanical algorithm for database search.” In Proceedings of the 28th Annual ACM Symposium on Theory of Computing (STOC ’96), pp. 212–219. ACM.
  34. Grover, Lov K. (1997). “Quantum mechanics helps in searching for a needle in a haystack.” Physical Review Letters, 79(2): 325–328. [CrossRef]
  35. Bennett, Charles H.; Bernstein, Ethan; Brassard, Gilles; Vazirani, Umesh (1997). “Strengths and weaknesses of quantum computing.” SIAM Journal on Computing, 26(5): 1510–1523. [CrossRef]
  36. Brassard, Gilles; Høyer, Peter; Mosca, Michele; Tapp, Alain (2002). “Quantum amplitude amplification and estimation.” In Quantum Computation and Information, AMS Contemporary Mathematics Series, vol. 305, pp. 53–74.
  37. Lloyd, Seth (1996). “Universal quantum simulators.” Science, 273(5278): 1073–1078.
  38. Harrow, Aram W.; Hassidim, Avinatan; Lloyd, Seth (2009). “Quantum algorithm for linear systems of equations.” Physical Review Letters, 103(15): 150502. [CrossRef]
  39. Aharonov, Dorit; Ben-Or, Michael (1997). “Fault-tolerant quantum computation with constant error rate.” In Proceedings of the 29th Annual ACM Symposium on Theory of Computing (STOC ’97), pp. 176–188. ACM.
  40. Knill, Emanuel; Laflamme, Raymond; Zurek, Wojciech H. (1998). “Resilient quantum computation.” Science, 279(5349): 342–345.
  41. Preskill, John (1998). “Reliable quantum computers.” Proceedings of the Royal Society A, 454(1969): 385–410.
  42. Shor, Peter W. (1995). “Scheme for reducing decoherence in quantum computer memory.” Physical Review A, 52(4): R2493–R2496. [CrossRef]
  43. Steane, Andrew M. (1996). “Error correcting codes in quantum theory.” Physical Review Letters, 77(5): 793–797. [CrossRef]
  44. Calderbank, A. R.; Shor, Peter W. (1996). “Good quantum error-correcting codes exist.” Physical Review A, 54(2): 1098–1105. [CrossRef]
  45. Laflamme, Raymond; Miquel, Caterina; Paz, Juan P.; Zurek, Wojciech H. (1996). “Perfect quantum error correcting code.” Physical Review Letters, 77(1): 198–201. [CrossRef]
  46. Gottesman, Daniel (1997). Stabilizer codes and quantum error correction. Ph.D. thesis, Caltech (arXiv:quant-ph/9705052).
  47. Knill, Emanuel; Laflamme, Raymond (1997). “Theory of quantum error-correcting codes.” Physical Review A, 55(2): 900–911. [CrossRef]
  48. Bennett, Charles H.; DiVincenzo, David P.; Smolin, John A.; Wootters, William K. (1996). “Mixed-state entanglement and quantum error correction.” Physical Review A, 54(5): 3824–3851. [CrossRef]
  49. Fowler, Austin G.; Mariantoni, Matteo; Martinis, John M.; Cleland, A. N. (2012). “Surface codes: Towards practical large-scale quantum computation.” Physical Review A, 86(3): 032324. [CrossRef]
  50. Raussendorf, Robert; Briegel, Hans J. (2001). “A one-way quantum computer.” Physical Review Letters, 86(22): 5188–5191.
  51. Raussendorf, Robert; Browne, Dan E.; Briegel, Hans J. (2003). “Measurement-based quantum computation on cluster states.” Physical Review A, 68(2): 022312. [CrossRef]
  52. Kitaev, A. Yu. (2003). “Fault-tolerant quantum computation by anyons.” Annals of Physics, 303(1): 2–30. [CrossRef]
  53. Freedman, Michael H.; Kitaev, Alexei; Larsen, Michael J.; Wang, Zhenghan (2003). “Topological quantum computation.” Bulletin of the American Mathematical Society, 40(1): 31–38.
  54. Nayak, Chetan; Simon, Steven H.; Stern, Ady; Freedman, Michael; Das Sarma, Sankar (2008). “Non-Abelian anyons and topological quantum computation.” Reviews of Modern Physics, 80(3): 1083–1159. [CrossRef]
  55. Farhi, Edward; Goldstone, Jeffrey; Gutmann, Sam; Lapan, Joshua; Lundgren, Andrew; Preda, Daniel (2001). “A quantum adiabatic evolution algorithm applied to random instances of an NP-complete problem.” Science, 292(5516): 472–476.
  56. Aharonov, Dorit; van Dam, Wim; Kempe, Julia; Landau, Zeph; Lloyd, Seth; Regev, Oded (2008). “Adiabatic quantum computation is equivalent to standard quantum computation.” SIAM Review, 50(4): 755–787.
  57. Kadowaki, Tadashi; Nishimori, Hidetoshi (1998). “Quantum annealing in the transverse Ising model.” Physical Review E, 58(5): 5355–5363. [CrossRef]
  58. Brooke, J.; Bitko, D.; Rosenbaum, T. F.; Aeppli, G. (1999). “Quantum annealing of a disordered magnet.” Science, 284(5415): 779–781. [CrossRef]
  59. Lloyd, Seth; Braunstein, Samuel L. (1999). “Quantum computation over continuous variables.” Physical Review Letters, 82(8): 1784–1787.
  60. Braunstein, Samuel L.; van Loock, Peter (2005). “Quantum information with continuous variables.” Reviews of Modern Physics, 77(2): 513–577.
  61. Gottesman, Daniel; Kitaev, Alexei; Preskill, John (2001). “Encoding a qubit in an oscillator.” Physical Review A, 64(1): 012310. [CrossRef]
  62. Holevo, A. S. (1973). “Bounds for the quantity of information transmitted by a quantum communication channel.” Problemy Peredachi Informatsii, 9(3): 3–11 [English: Problems of Information Transmission, 9(3): 177–183].
  63. Schumacher, Benjamin (1995). “Quantum coding.” Physical Review A, 51(4): 2738–2747.
  64. Wootters, William K.; Zurek, Wojciech H. (1982). “A single quantum cannot be cloned.” Nature, 299(5886): 802–803. [CrossRef]
  65. Dieks, Dennis (1982). “Communication by EPR devices.” Physics Letters A, 92(6): 271–272.
  66. Bennett, Charles H.; Brassard, Gilles (1984). “Quantum cryptography: Public key distribution and coin tossing.” In Proceedings of IEEE International Conference on Computers, Systems & Signal Processing, Bangalore, India, pp. 175–179. [CrossRef]
  67. Bennett, Charles H.; Wiesner, Stephen J. (1992). “Communication via one- and two-particle operators on Einstein-Podolsky-Rosen states.” Physical Review Letters, 69(20): 2881–2884. [CrossRef]
  68. Bennett, Charles H.; Brassard, Gilles; Crépeau, Claude; Jozsa, Richard; Peres, Asher; Wootters, William K. (1993). “Teleporting an unknown quantum state via dual classical and Einstein-Podolsky-Rosen channels.” Physical Review Letters, 70(13): 1895–1899. [CrossRef]
  69. Manin, Yuri I. (1980). Vychislimoe i nevychislimoe [Computable and Noncomputable] (in Russian). Moscow: Sovetskoe Radio.
  70. Devoret, Michel H.; Schoelkopf, Robert J. (2013). “Superconducting circuits for quantum information: an outlook.” Science, 339(6124): 1169–1174. [CrossRef]
  71. Barends, R.; Kelly, J.; Megrant, A.; Veitia, A.; Sank, D.; Jeffrey, E.; White, T. C.; Mutus, J.; Fowler, Austin G.; Campbell, B.; et al. (2014). “Superconducting quantum circuits at the surface code threshold for fault tolerance.” Nature, 508(7497): 500–503. [CrossRef]
  72. Krantz, Philip; Kjaergaard, Morten; Yan, Fei; Orlando, Terry P.; Gustavsson, Simon; Oliver, William D. (2019). “A quantum engineer’s guide to superconducting qubits.” Applied Physics Reviews, 6(2): 021318.
  73. Jelezko, Fedor; Wrachtrup, Jörg (2006). “Single defect centres in diamond: A review.” Physica Status Solidi (a), 203(13): 3207–3225. [CrossRef]
  74. Neumann, Philipp; Mizuochi, Norikazu; Rempp, Frank; Hemmer, P. R.; Watanabe, Hideyuki; Yamasaki, Shintaro; Jacques, Vincent; Gaebel, Tobias; Jelezko, Fedor; Wrachtrup, Jörg (2010). “Quantum register based on coupled electron spins in a room-temperature solid.” Nature Physics, 6(4): 249–253.
  75. DiVincenzo, David P. (2000). “The physical implementation of quantum computation.” Fortschritte der Physik, 48(9–11): 771–783.
  76. Ladd, Thaddeus D.; Jelezko, Fedor; Laflamme, Raymond; Nakamura, Yasunobu; Monroe, Christopher; O’Brien, Jeremy L. (2010). “Quantum computers.” Nature, 464(7285): 45–53.
  77. Montanaro, Ashley (2016). “Quantum algorithms: an overview.” npj Quantum Information, 2: 15023.
  78. Arute, Frank; Arya, Kunal; Babbush, Ryan; Bacon, Dave; Bardin, Joseph C.; Barends, Rami; et al. (2019). “Quantum supremacy using a programmable superconducting processor.” Nature, 574(7779): 505–510. [CrossRef]
  79. Raz, Ran; Tal, Avishay (2019). “Oracle separation of BQP and PH.” In Proceedings of the 51st Annual ACM Symposium on Theory of Computing (STOC 2019), pp. 13–23. ACM.